Currently I am a master's student in an IT-related programme. After listening to the description of this course I instantly became interested, because it promises to explain computer logic from the ground up.
My ultimate goal is to build a very simple but physical computer. I am viewing this course as a conceptual foundation for my 'ultimate project'. I've always felt that I'm not really a computer scientist unless I build my own computer, even the simplest one.
I consider myself to have good general knowledge of programming (high-level languages as well as x86 assembly). I've also done some courses on circuit design, an operating systems course, and other courses related to TECS. Although I have studied the different layers of abstraction covered in this course, it is difficult for me to integrate all that knowledge so that I could build my own computer. Hopefully the gaps between the layers will be filled in by this wonderful course.
One more reason, although a very minor one, is to keep me sane while I am undergoing some boring and demanding IT management courses ...
As additional motivation I will post here the dates when I finish each chapter and its exercises.
I really hope that someday someone will post an extension to this course in which the first part is implemented in real hardware. I feel that this fundamental aspect is THE blind spot for the majority of Computer Science students.
Must look at how a DFF is implemented from combinational gates (it cannot be purely combinational — some feedback loop is needed to hold state).
I didn't really figure out on my own that an output pin can be duplicated by listing it twice on the right-hand side. I think Appendix A should state that explicitly, because it is not that intuitive.
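For future reference, here is what that duplication looks like in practice (a sketch; the chip context and the names `sum`, `sumCopy`, `negSum` are made up). An output pin cannot be read back inside the chip, so when a part's output must both leave the chip and feed another part, the `out=` connection is simply listed twice:

```hdl
// Sketch: the inner Add16's out drives both the chip's output pin "sum"
// and an internal wire "sumCopy" that other parts are allowed to read.
Add16(a=x, b=y, out=sum, out=sumCopy);
// sumCopy can now be used as an input elsewhere, e.g.:
Not16(in=sumCopy, out=negSum);
```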
I think the Hack CPU architecture can be considered a RISC design. Before this I programmed the 8086, which is a CISC design. I have mixed feelings about Hack assembly: on the one hand it is very cumbersome to do even simple things (e.g. base+offset addressing takes six instructions here: @base, D=M, @20, D=D+A, A=D, D=M, whereas on the 8086 this is a single instruction), but on the other hand it is very easy and fast to learn to program this CPU. All in all, I guess it is much easier to write compilers for RISC architectures, so normally no one writes assembly by hand for RISC processors, because compilers should be much better at that.
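For the record, here is that base+offset sequence written out as a commented Hack program (assuming the variable `base` holds an array's start address and we want the value at offset 20):

```asm
// D = M[M[base] + 20]   (base+offset load in Hack assembly)
@base    // A = address of the pointer variable "base"
D=M      // D = the base address stored there
@20      // A = the offset
D=D+A    // D = base + offset
A=D      // A = base + offset
D=M      // D = the value at base + offset
```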
I implemented multiplication by first forming the partial products R1 * 2^i (where i varies from 0 to 7) and then adding only those whose corresponding bit in R0's binary representation is 1. I had to adjust the test file because my implementation of multiplication takes at most ~550 cycles (127*127: ~550 cycles; 6*7: ~450 cycles). I wonder how the running time is determined on 80x86 CPUs, because it depends on the values of the operands. Probably the upper bound is taken, which means some CPU cycles are always wasted on those CPUs => this could be one reason why RISC processors are considered to consume less power...
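For reference, a minimal sketch of that shift-and-add idea in Hack assembly (not my exact solution; it assumes non-negative R0 and R1, and the variable names `shifted` and `mask` are made up). Since Hack has no shift instruction, both the partial product and the bit mask are doubled with `D+M`:

```asm
// R2 = R0 * R1 by shift-and-add (sketch, non-negative inputs assumed).
    @R2
    M=0          // result = 0
    @R1
    D=M
    @shifted
    M=D          // shifted = R1 * 2^0
    @mask
    M=1          // mask = 2^0
(LOOP)
    @mask
    D=M
    @R0
    D=D&M        // isolate bit i of R0
    @SKIP
    D;JEQ        // bit is 0 => skip the add
    @shifted
    D=M
    @R2
    M=D+M        // result += R1 * 2^i
(SKIP)
    @shifted
    D=M
    @shifted
    M=D+M        // shifted *= 2
    @mask
    D=M
    @mask
    M=D+M        // mask *= 2
    @mask
    D=M
    @LOOP
    D;JGT        // mask overflows to -32768 after bit 14 => exit
(END)
    @END
    0;JMP
```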
Had a puzzling bug which sometimes caused the screen not to be cleared after releasing a key. It turned out I was writing past the end of the screen memory, which happens to be the keyboard memory map. Probably the internal implementation of the keyboard listener doesn't write 0 into KBD on every cycle but only on key press / key release events, so my overwritten value was never restored, and the screen was never cleared. As a fun variation of this exercise one can write the keycode of the character repeatedly across the screen to get different patterns. Typing several characters in quick succession also yields different patterns.
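The fix boils down to a bounds check: screen memory spans 16384..24575, and KBD sits immediately after it at 24576, so the fill loop must stop strictly before KBD. A minimal sketch (the variable name `addr` is made up):

```asm
// Blacken the whole screen without ever touching KBD (sketch).
    @SCREEN
    D=A
    @addr
    M=D          // addr = 16384, first screen word
(FILL)
    @addr
    D=M
    @KBD
    D=D-A        // D = addr - 24576
    @DONE
    D;JGE        // stop strictly before the keyboard register
    @addr
    A=M
    M=-1         // set all 16 pixels of this word
    @addr
    M=M+1
    @FILL
    0;JMP
(DONE)
    @DONE
    0;JMP
```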
Looking forward to chapter 5, which hopefully will clarify the PC architecture at the hardware level.