How is the ALU designed? That is, how do you decide which functions to include, and how do you decide that certain computations lead to certain results? In the Hack ALU we observe that performing the computations of certain functions in a certain order produces the desired result. How do electrical engineers design that sort of logic? How much time does that kind of thinking and design process take? Does a single person do it? Will I be able to do it like the author of this book?
No, the Hack ALU is NOT typical of real world ALUs. But that's fine, because it's not meant to serve the same purpose as a real world ALU.
The Hack ALU is designed to teach certain concepts in a built-from-the-ground-up fashion, and so it is kept intentionally very simple, making it easy to understand both how to implement it and how to use it. While extremely elegant, it is NOT very efficient in a real-world sense because it lacks support for basic operations that (nearly) any real-world ALU would consider essential to getting acceptable performance -- just one example being the ability to shift/rotate the register contents in both directions.
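To make the missing-shift point concrete, here is a small illustrative sketch (my own, not from the book): on an ALU that can add but not shift, a left shift must be emulated by repeated doubling, costing one add per bit position instead of a single-cycle shift.

```python
MASK = 0xFFFF  # Hack words are 16 bits wide

def shift_left(value, n):
    # Emulate "value << n" using only addition, the way a Hack
    # program must, since the ALU has no shift operation.
    # Each doubling is one ALU add; a hardware shifter would do
    # the whole thing in one step.
    for _ in range(n):
        value = (value + value) & MASK  # 16-bit wraparound
    return value

print(shift_left(3, 4))  # 3 * 2**4 = 48
```

A right shift is even worse without hardware support, which is part of why real ALUs treat shifts as essential.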
Because it has six control signals, there are 64 potential operations it can perform. Some of them are redundant, but there are many beyond the 18 that are spec'ed that serve potentially useful functions. Why aren't they included? Because, like the 2-input NOR gate, they just don't happen to be needed in any of the later projects.
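You can see the 64-operation space for yourself by simulating the Hack ALU's control-bit semantics (zx, nx, zy, ny, f, no, as defined in the book) and enumerating every combination. The sketch below does that for one sample input pair and counts how many distinct results appear:

```python
from itertools import product

MASK = 0xFFFF  # 16-bit Hack words, two's complement

def hack_alu(x, y, zx, nx, zy, ny, f, no):
    """Hack ALU semantics: zero/negate each input, then add or AND,
    then optionally negate the output."""
    if zx: x = 0
    if nx: x = ~x & MASK
    if zy: y = 0
    if ny: y = ~y & MASK
    out = (x + y) & MASK if f else (x & y)
    if no: out = ~out & MASK
    return out

# All 64 control combinations for one sample input pair.
x, y = 7, 3
outputs = {bits: hack_alu(x, y, *bits) for bits in product((0, 1), repeat=6)}
print(len(set(outputs.values())))  # distinct results: fewer than 64, since
                                   # some control combinations are redundant
```

Running this shows exactly the point above: many of the 64 settings collapse onto the same function, and of the distinct ones, only 18 are given names in the spec.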
While I don't know for sure, I imagine that the ALU and the ISA (Instruction Set Architecture) for the Hack were the product of an evolutionary and iterative process, balancing the desire for a structurally simple ALU/CPU against retaining enough processing capability that the VM translator doesn't require extremely subtle tricks to implement its functionality in assembly.
As for whether a real-world processor would reduce the ALU's control inputs to five instead of six, probably not -- at least not at the ALU level. You want to minimize the need to decode signals, as decoding adds complexity and delay, which can reduce the maximum achievable clock speed and increase power and cost. At the CPU level it might very well combine all of the control signals into fewer bits in order to conserve bits in the instruction words, but probably only if there is value in doing so -- meaning if it allows the CPU to do things it otherwise couldn't, and the penalty in speed/cost/power is deemed acceptable in exchange.
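The trade-off can be sketched in a few lines. Suppose (hypothetically -- these opcodes and the table are made up for illustration) the CPU packed ALU operations into a 5-bit opcode and decoded it back into the six control bits. You save one instruction bit, but every instruction now passes through a decode step before the ALU can start:

```python
# Hypothetical decode table: 5-bit opcode -> (zx, nx, zy, ny, f, no).
# Encodings and entries are invented; in hardware this table would be
# combinational logic sitting in the critical path before the ALU.
ALU_DECODE = {
    0b00000: (1, 0, 1, 0, 1, 0),  # out = 0
    0b00001: (1, 1, 1, 1, 1, 1),  # out = 1
    0b00010: (0, 0, 1, 1, 0, 0),  # out = x
    0b00011: (0, 0, 0, 0, 1, 0),  # out = x + y
    # ... more entries, up to 32 opcodes for the 18 spec'ed operations
}

def decode(opcode):
    # The extra lookup is the cost of the denser encoding.
    return ALU_DECODE[opcode]
```

With six raw control bits, by contrast, the instruction bits drive the ALU directly and no decode logic is needed -- which is exactly how the Hack C-instruction works.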
As an aside, one of the things I'm thinking of having my graduate students do is to analyze their VM translator implementation and see if they can identify two of the 18 spec'ed ALU operations that they can eliminate, reducing the set to 16. That would allow for the possibility of freeing up two more instruction bits to be used for something else, such as additional registers, so that a compiler could perform some register allocation and associated optimization.