Assembly language and chips that run at different speeds


rayman22201
I asked this question on the Coursera forum, but I didn't get any response there. This place seems more active, so maybe you guys can help answer my questions?

The Hack computer is very nice because all the chips are synchronized with a single global clock.

But that is not true for most modern computers. Each piece of the memory hierarchy often runs at different speeds, and modern RAM can do crazy things like using both the rising and falling edge of the clock to perform reads and writes.

How does the machine handle those delays in a way that is invisible to the assembly language?

I am aware that extremely slow reads and writes, such as those to hard disks, are asynchronous and send interrupt signals to let the CPU know when they are finished, but from what I remember when writing x86 assembly in college, main memory loads appeared synchronous to your program.

You never have to explicitly tell the CPU to wait for the data to finish loading from main memory. How does the CPU pause the execution of the program until the fetch completes? Is there some logic wired into the Program Counter chip that pauses the program until the load is finished?

I have heard terms like "this instruction takes X cycles to complete". This implies that it is possible for one assembly instruction to take more than one clock cycle to complete.

How is that actually implemented in hardware so that the order of your program is kept correct?

There are lots of other factors in modern computers like multiple cores, memory caches, and instruction pipelining, which all affect how this works, but I would like to know how this "maintaining of execution order" is implemented at the most basic level.

TLDR;

If I wanted to make the Hack computer work with a memory chip that ran at a different clock speed than the cpu, how could I do it?

Thanks in advance, I am really enjoying the course,

Ray :-)

Re: Assembly language and chips that run at different speeds

cadet1620
Administrator
Delays associated with accessing memory are not readily visible to assembly language programs. If the CPU accesses memory that is slower than the CPU, the memory asserts a "wait" signal back to the processor. The processor waits, neither executing the current instruction nor incrementing the PC. In the simplest implementation the CPU just ignores the clock signal until the memory deasserts "wait".

I haven't tried implementing this in the Hack CPU, but I think that you would want to add "and not wait" to all of the c-instruction control signals for ARegister.load, DRegister.load, PC.load, PC.inc and writeM.
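As a rough illustration (this is my own sketch, not from the course materials), that "and not wait" gating could be modeled in Python. The signal names mirror the Hack CPU's control signals; the wait input is the hypothetical line asserted by the slow memory:

```python
def gated_controls(a_load, d_load, pc_load, pc_inc, write_m, wait):
    """Suppress every state-changing control signal while the memory
    asserts wait, so the CPU holds still until the access completes."""
    ok = not wait
    return {
        "ARegister.load": a_load and ok,
        "DRegister.load": d_load and ok,
        "PC.load": pc_load and ok,
        "PC.inc": pc_inc and ok,
        "writeM": write_m and ok,
    }
```

With wait asserted, nothing in the CPU changes state that cycle; once wait is released, the original control signals pass through unchanged.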

Since the Hack CPU has single cycle memory-to-memory instructions, a slow memory system must still accept write data without delay. The memory system would need to latch the address and write data in registers and then do its write cycle. If the CPU asks for a read before the write cycle completes, the memory would assert "wait", complete the write cycle and the read cycle, then deassert "wait". During both write and read cycles the CPU would be delayed by the "wait" signal, but once the memory system is ready the CPU will execute the instructions in normal order.
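A toy software model of that latching scheme might look like the following. The three-cycle write latency is an assumed figure for illustration only:

```python
class SlowMemory:
    """Toy model: accepts a write immediately by latching the address
    and data, then asserts "wait" if the CPU reads back before the
    internal write cycle has drained."""
    WRITE_CYCLES = 3  # assumed latency, not a real figure

    def __init__(self):
        self.data = {}
        self.busy = 0        # internal cycles left on the latched write
        self.latched = None  # pending (address, value)

    def write(self, addr, value):
        # Accept without delay; do the slow cycle in the background.
        self.latched = (addr, value)
        self.busy = self.WRITE_CYCLES

    def tick(self):
        if self.busy:
            self.busy -= 1
            if self.busy == 0 and self.latched:
                addr, value = self.latched
                self.data[addr] = value
                self.latched = None

    def read(self, addr):
        # The CPU is stalled by "wait" until the pending write finishes.
        wait_cycles = 0
        while self.busy:
            self.tick()
            wait_cycles += 1
        return self.data.get(addr, 0), wait_cycles
```

A read that arrives while a write is pending pays the remaining latency, but the program still observes its reads and writes in order, which is the point.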

This gets trickier when there are multiple slow memory sources, say both the RAM and the ROM in the Hack computer.


Instructions that take more than one clock cycle are actually more common than single-cycle instructions like the Hack computer's.

In the CPU, there is an instruction sequencer that advances every clock cycle, setting the control signals as needed for each phase of the instruction. The final phase of each instruction includes either incrementing or loading the PC.

An example of multi-phase instructions would be PUSH and POP.  Assume the CPU has an SP (stack pointer) register; then PUSH D would need to do two operations: RAM[SP] = D and SP = SP + 1. The PC would not be updated until the second phase of the PUSH instruction executed.
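That two-phase PUSH can be sketched as a toy sequencer that advances one phase per clock tick. (The SP register and the phase encoding are assumptions for illustration; Hack itself has no PUSH instruction.)

```python
class PushSequencer:
    """Toy sequencer for a two-phase PUSH D instruction.

    Advances one phase per clock tick; the PC increments only in the
    final phase, so execution order is preserved even though the
    instruction spans multiple cycles."""

    def __init__(self, regs, ram):
        self.regs, self.ram, self.phase = regs, ram, 0

    def tick(self):
        if self.phase == 0:                    # phase 1: RAM[SP] = D
            self.ram[self.regs["SP"]] = self.regs["D"]
            self.phase = 1
        else:                                  # phase 2: SP++, then PC++
            self.regs["SP"] += 1
            self.regs["PC"] += 1
            self.phase = 0
```

After the first tick the store has happened but the PC is unchanged; only the second tick advances the PC, so the fetch of the next instruction cannot start early.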

--Mark

Re: Assembly language and chips that run at different speeds

rayman22201
I waited 2 months and never got a response on Coursera lol.
Thank you so much for the quick and very thorough answer!

A wait signal is actually fairly intuitive, and makes perfect sense.
The multi-phase instruction sequencer also makes sense in a similar way.

I assume that this is the (very simplified) idea behind how "atomic" operations like Compare-and-Swap work on modern processors.

With multiple slow memory sources, is it not sufficient to have a wait signal for each slow memory that is then ORed together to generate a master wait signal?

In the Hack computer, I think a ROM wait signal would only need to be tied into ROM reads (PC.load and PC.inc) since, by definition ROM is read only, and the Hack computer only reads from the ROM once per cycle, to get the current instruction. Or is that totally off base?

Re: Assembly language and chips that run at different speeds

cadet1620
Administrator
rayman22201 wrote
With multiple slow memory sources, is it not sufficient to have a wait signal for each slow memory that is then ORed together to generate a master wait signal?
It's even easier than that.  The wait signal is usually a /wait (not wait) signal that is connected to V+ through a pull-up resistor so it is true by default.  Then all the devices that may want to assert /wait have "open drain" drivers that act like open switches when they are not asserting /wait and like connections to ground when they do assert /wait.  This way they can all be wired together with no logic required.
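In software terms (just a sketch of the electrical behavior, not a circuit), the pulled-up open-drain line behaves like an AND of every device's /wait output:

```python
def wait_line_n(device_wait_n):
    """Model of an open-drain /wait bus with a pull-up resistor.

    Each entry is one device's /wait output: True means the device is
    not asserting /wait (its driver is an open switch), False means it
    is pulling the shared line to ground.  The pull-up keeps the line
    high (True) only when no device pulls it low."""
    return all(device_wait_n)
```

The CPU then stalls whenever the line reads low; adding another slow device is just one more driver on the shared line, with no extra gates.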
In the Hack computer, I think a ROM wait signal would only need to be tied into ROM reads (PC.load and PC.inc) since, by definition ROM is read only, and the Hack computer only reads from the ROM once per cycle, to get the current instruction. Or is that totally off base?
You are thinking correctly.  The other signal that the CPU needs to make this work correctly with slow ROM is a readM signal that tells the RAM/Screen when the CPU is done reading the ROM and all the control signals, including addressM, associated with the instruction have had a chance to stabilize.  Then the RAM can assert /WAIT for its R/W cycle.

--Mark