As far as I understand sequential logic, any memory device outputs its new value starting from the next clock cycle. So the program, in terms of clock cycles, works as follows:
1) The clock starts
2) The A register outputs the address of KBD starting from the cycle after the one in which the instruction was executed
3) D outputs the value of M at KBD starting from the cycle after the one in which the instruction was executed
4) The A register outputs the address of SOMETHING starting from the cycle after the one in which the instruction was executed
Is this correct? If so, what happens if the value of KBD changes after D=M? Is it simply ignored till the program loops back to check the value of KBD again?
During the first clock cycle (CLK #1), the A register is being loaded with the value of KBD (24576). This value appears at the output of the A register at the beginning of the next clock cycle (CLK #2). During that same cycle (CLK #2), the D register is being loaded with the value found at address KBD, which appears at the output of the D register at the beginning of CLK #3. Also during CLK #3 the A register is being loaded with the value associated with SOMETHING, which appears at the output of the A register at the beginning of CLK #4.
If the value of KBD changes during other times, that change is not seen. The program is only aware of the value located there when it actually accesses it.
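The sampling behavior described above can be sketched with a toy clocked-register model in Python (a hypothetical model for illustration, not the actual simulator): registers only update their outputs at the rising edge, so once D has loaded the value at KBD, a later change to KBD goes unseen until the program reads that address again.

```python
KBD = 24576  # memory-mapped keyboard address in the Hack platform

class Register:
    """A clocked register: the output changes only at the rising edge."""
    def __init__(self):
        self.out = 0       # value visible to the rest of the circuit
        self._next = 0     # value waiting to be latched
        self.load = False

    def set_input(self, value, load=True):
        self._next = value
        self.load = load

    def tick(self):        # rising clock edge
        if self.load:
            self.out = self._next
        self.load = False

ram = {KBD: 75}            # pretend a key with scan code 75 is pressed
A, D = Register(), Register()

# Cycle 1: @KBD -> A loads the constant 24576
A.set_input(KBD); A.tick()
# Cycle 2: D=M -> D loads RAM[A]; A's output is now 24576
D.set_input(ram[A.out]); D.tick()

ram[KBD] = 0               # key released AFTER D=M has executed...
assert D.out == 75         # ...but D still holds the old sample
```

The change to `ram[KBD]` is simply invisible to D until the program loops back and executes another `D=M` with A pointing at KBD.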
Sorry for having to reopen this topic after concluding it, but I had another doubt:
According to Chapter 3, the next instruction is decided by the output of the PC. So, my program works like this(?):
Clk 1 - The input of the A register is being set to the value of KBD
The PC is being incremented by 1
Clk 2 - The A register outputs the value of KBD
The PC has successfully been incremented by 1
The CPU reads the value of the PC, fetches the instruction at that address, decodes it as D=M, and sets the input of D to the value at KBD
So my doubt is: does part 3 of Clk 2 take place in a short enough interval of time to satisfy the setup time of D in the simulated clock (if the simulated clock has one), and does it satisfy it on a real computer's clock?
The maximum clock speed is established by the critical path: the longest time it can take, from when the outputs of the registers change at the beginning of the clock cycle, for the signals to become stable at the inputs of the registers (or memory elements in general) and satisfy their setup-time requirements.
If you run the clock faster than that, you are not guaranteed that things will work properly.
There are all kinds of games that are used to speed things up. One of the classics is to pipeline the architecture so that the entire path is broken into a few pieces (usually four, but nothing sacred about that) that therefore take (in this example) four clock cycles to completely execute an instruction. But the clock can run nearly four times as fast, so the actual time one instruction takes is just a little longer than in the original case. However, you get to start executing a new instruction on each clock, so at any given time you have four instructions being executed and can execute the code at nearly four times the speed.
There are some gotchas, however, mostly having to do with jumps, since you don't know which instruction is actually going to need to be executed next. There are several ways of dealing with that, but not surprisingly they all add quite a bit of complexity.
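The pipelining arithmetic above can be made concrete with some back-of-the-envelope numbers (purely illustrative values, not from any real CPU): splitting a 40 ns instruction into four stages, with a small hypothetical per-stage latch overhead, lengthens each instruction's total latency slightly but nearly quadruples throughput.

```python
# Illustrative numbers only: not measured from any real hardware.
t_single = 40e-9             # 40 ns to execute one instruction unpipelined
stages = 4
overhead = 2e-9              # hypothetical per-stage latch/register overhead

t_stage = t_single / stages + overhead   # 12 ns -> new clock period

latency = stages * t_stage   # one instruction now takes 48 ns end to end,
                             # "just a little slower than the original case"
throughput_gain = t_single / t_stage     # ~3.33x: "nearly four times"
```

The gap between the ideal 4x and the actual ~3.33x is exactly the per-stage overhead; that is why deeper pipelines eventually stop paying off.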
The thing to remember here is that, between rising clock edges, the circuit is just a combinatorial logic circuit. So you can think of it this way.
Before the rising clock edge all of the logic has settled down and is in a stable state, including the signals at the inputs to all of the memory elements as well as their respective load signals. This includes the register inside the program counter.
At the rising clock edge all of the memory elements that have their load signals asserted (including the program counter) write the values present at their inputs into their memory cells. The outputs of these memory elements reflect these changes a short time later (the propagation delay).
At that point all of the rest of the circuit begins to react to these changes in the outputs. The output of the PC changed, which means that the address to the instruction ROM changed, which results in the output of the instruction ROM changing, which results in the output of the instruction decoder changing, which results in the control signals to the ALU and other devices changing. At the same time, the outputs of the D and A registers might be changing (since they may have just loaded new values), which might result in the address being sent to the data RAM changing, which might result in the output of the data RAM changing. All of these can result in the X and Y inputs to the ALU changing which, along with the changes in the control signals, can result in changes in the ALU outputs, which could change the inputs to some of the memory elements, including the PC.
This is a sequence of events that happens as fast as it can -- as fast as the signals propagate themselves through all of the gates involved. But eventually it all settles down and all of the signals achieve a steady state. Until the clock edge rises again, at which point the whole process repeats itself.
The maximum amount of time required for all of these things to reach that steady state, under all possible conditions, determines the minimum amount of time that must exist between successive clock edges, which establishes the maximum clock speed for the system.
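The timing budget described above can be written out as simple arithmetic (with hypothetical delay values chosen only for illustration): the minimum clock period is the register's clock-to-output delay, plus the worst-case combinatorial propagation delay, plus the setup time of the receiving memory element.

```python
# Hypothetical timing budget; the numbers are illustrative, not from
# any datasheet.
t_clk_to_q = 1e-9   # register output changes this long after the edge
t_comb     = 7e-9   # longest (critical) path through the combinatorial logic
t_setup    = 2e-9   # inputs must be stable this long before the next edge

t_min_period = t_clk_to_q + t_comb + t_setup   # 10 ns minimum period
f_max = 1 / t_min_period                       # 100 MHz maximum clock
```

Running the clock with a period shorter than `t_min_period` violates the setup-time requirement on some path, which is exactly the "not guaranteed to work properly" case mentioned earlier.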
So, a sequential chip propagates its output after a delay; the output stabilizes, and therefore the input of the next sequential/combinational chip stabilizes, and so on; and the clock speed is tailored such that the entire process, through all of the chips, takes place between one rising edge and the next.
Yes. But to be clear, the "and so on" only applies to the combinatorial chips.
Here's perhaps a slightly better way I could have phrased it.
At each rising clock edge the first thing that happens is that the outputs of all of the sequential chips are updated based on their inputs just prior to the clock edge. Once their outputs finish changing, they remain static for the remainder of that clock cycle, which is then devoted to letting those changes propagate through the combinatorial logic, all of which must settle before the next rising clock edge.
So sequential chips update their state when the clock rises and remain dormant for the rest of the cycle while the combinational chips work on the output of the sequential chips throughout the clock cycle.
Yes, although "work on" might be overstating things a bit (depending on what someone envisions when they think of what that entails). The signals are just propagating from the inputs to the outputs of all the combinatorial chips.
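This two-phase view, sequential chips latching at the edge and combinatorial chips then propagating to a steady state, can be sketched as a tiny Python simulator (a hypothetical model, not the Nand2Tetris simulator). The example wires a flip-flop's output back through an inverter, giving the classic divide-by-two toggle.

```python
class DFF:
    """Sequential chip: output updates only at the rising edge."""
    def __init__(self):
        self.out = 0
        self.inp = 0
    def tick(self):
        self.out = self.inp

class Not:
    """Combinatorial chip: continuously drives dst's input from src's output."""
    def __init__(self, src, dst):
        self.src, self.dst = src, dst
    def evaluate(self):
        new = 1 - self.src.out
        changed = (self.dst.inp != new)
        self.dst.inp = new
        return changed          # report whether anything moved

def clock_cycle(sequentials, combinationals):
    for chip in sequentials:    # phase 1: latch state at the rising edge
        chip.tick()
    changed = True
    while changed:              # phase 2: propagate until everything settles
        changed = any(chip.evaluate() for chip in combinationals)

ff = DFF()
inv = Not(ff, ff)               # feed the inverted output back to the input

outputs = []
for _ in range(4):
    clock_cycle([ff], [inv])
    outputs.append(ff.out)
# outputs == [0, 1, 0, 1] -- the flip-flop toggles once per clock
```

The `while changed` loop is the "signals just propagating" part: the combinatorial logic is re-evaluated until it reaches a fixed point, and only the next call to `tick` (the next rising edge) lets the sequential state change again.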