This is only one way that early relay computers have been improved.

There are many more improvements possible.

For example, in the computer in the book, the clock generated the timing signals for an instruction, then output nothing for the same amount of time, then generated the timing signals for the next instruction, then output nothing again, and so on. However, with a small improvement to the clock, those periods of outputting nothing can be eliminated, and that speeds up the computer by a factor of two.
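
As a rough sketch (the control circuitry here is invented for the illustration, not taken from the book), the clock's output can be modeled as a stream of named periods, one per cycle:

```python
# Sketch only: models the clock's output as a stream of named periods.
STEPS_PER_INSTRUCTION = 9  # the book's computer uses nine steps per instruction

def original_clock(num_instructions):
    """Timing signals for an instruction, then an equal stretch of nothing."""
    for _ in range(num_instructions):
        for step in range(1, STEPS_PER_INSTRUCTION + 1):
            yield f"step {step}"
        for _ in range(STEPS_PER_INSTRUCTION):
            yield "idle"

def improved_clock(num_instructions):
    """Timing signals for each instruction back to back, with no idle stretch."""
    for _ in range(num_instructions):
        for step in range(1, STEPS_PER_INSTRUCTION + 1):
            yield f"step {step}"

print(len(list(original_clock(100))))  # 1800 cycles for 100 instructions
print(len(list(improved_clock(100))))  # 900 cycles: twice as fast
```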

Also, the computer in the book does an instruction in nine steps. This can be reduced to seven steps by eliminating the two steps that copy the instruction address to memory and then copy it back to a register in the processor. Instead, the instruction address can be copied directly to another register while another step is being done.
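
A schematic comparison of the two step schedules (the step names below are placeholders invented for this sketch, not the book's actual step list):

```python
# Sketch only: the two steps that shuttle the instruction address out to
# memory and back are dropped; instead, the address is saved to a spare
# register at the same time as an existing step.
nine_steps = ["set memory address", "save instruction address to memory",
              "read instruction", "copy instruction address back to register",
              "increment instruction address", "set 'from data' address",
              "read 'from data'", "read 'to data'", "write result"]

seven_steps = ["set memory address",
               "read instruction; concurrently save instruction address to a spare register",
               "increment instruction address", "set 'from data' address",
               "read 'from data'", "read 'to data'", "write result"]

assert len(nine_steps) == 9 and len(seven_steps) == 7
```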

The number of steps required to do an instruction can be further reduced by reading all four words of the instruction into four registers in one step. This requires, in effect, breaking the memory up into four memories that can be read simultaneously. Now an instruction can be done in four steps (a sketch of the split memory follows the list):

1. Copy the instruction from memory to registers.

2. Copy the 'from data' from memory to a register.

3. Copy the 'to data' from memory to a register.

4. Copy the result from the 'back register' to memory.
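
Here is a minimal sketch of the idea, assuming instructions start at addresses that are multiples of four (an assumption made for the illustration, not taken from the book). Memory is split into four banks with consecutive addresses spread across them, so the four words of an instruction sit in four different banks and can be read out together:

```python
NUM_BANKS = 4

class BankedMemory:
    def __init__(self, words_per_bank):
        self.banks = [[0] * words_per_bank for _ in range(NUM_BANKS)]

    def write(self, address, value):
        # Consecutive addresses land in consecutive banks.
        self.banks[address % NUM_BANKS][address // NUM_BANKS] = value

    def read_instruction(self, address):
        # All four reads come from different banks, so they can happen
        # simultaneously; this one call stands for one clock step.
        row = address // NUM_BANKS
        return [self.banks[bank][row] for bank in range(NUM_BANKS)]

mem = BankedMemory(words_per_bank=1024)
for offset, word in enumerate([7, 100, 200, 300]):   # a four-word instruction
    mem.write(0 + offset, word)
print(mem.read_instruction(0))   # [7, 100, 200, 300], fetched in one step
```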

In the book, it took nine steps to do an instruction, and then the clock output nothing for the same amount of time, so an instruction took eighteen cycles. This can be reduced to four cycles (at the cost of more complexity), for a speed-up by a factor of four and a half.
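
Putting the improvements together, the cycle counts and speed-ups work out like this:

```python
# Cycles per instruction for each variant described above, and the speed-up
# relative to the original eighteen-cycle design.
variants = {
    "original clock (9 steps + 9 idle cycles)": 18,
    "improved clock (9 steps)": 9,
    "overlapped address copy (7 steps)": 7,
    "four memory banks (4 steps)": 4,
}
for name, cycles in variants.items():
    print(f"{name}: {cycles} cycles, {18 / cycles:g}x the original speed")
# The last line prints 4.5x, the factor quoted above.
```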

We want ways to GREATLY increase the amount of processing done per second. One way to do this is to change the switch type from relays to transistors, as we have seen. Another way to GREATLY increase the amount of processing (calculation / computation) done per second is to use parallel processing.

For example, consider a 32-bit computer like the one in the book. There are 32 address bits, and so over four billion addresses. If all of these addresses are used, then there are over four billion words times 32 bits per word, or over 128 billion bits, of memory. A 32-bit processor of the type in the book requires very few switches (relays or transistors) and very little wire compared to all that memory. Therefore, for not much more cost, one can have about 65,536 computers, each with only about 65,536 words (about 2,000,000 bits) of memory instead of about four billion words. Then you could do up to about 65,536 times as much computation per second.
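
The figures in that paragraph work out as follows:

```python
# The arithmetic behind the figures above.
bits_per_word = 32
addresses = 2 ** 32                       # 4,294,967,296 addresses (over four billion)
total_bits = addresses * bits_per_word    # 137,438,953,472 bits (over 128 billion)

computers = 65_536
words_per_computer = addresses // computers             # 65,536 words each
bits_per_computer = words_per_computer * bits_per_word  # 2,097,152 bits (about 2,000,000)

print(addresses, total_bits, words_per_computer, bits_per_computer)
```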

Next, we want to find a way to hook all 65,536 computers together so that they can all work on one large problem simultaneously. We want each of the computers to be able to use each other's memory (including tables), and we want to be able to program this group of 65,536 computers normally. It turns out that this can be done easily and simply by using virtual processors, as we will see.
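
One natural way to carve up the 32-bit address space across the 65,536 computers (an illustration only, not necessarily the virtual-processor scheme described later) is to let the upper 16 bits choose the computer and the lower 16 bits choose the word within that computer's memory:

```python
# Illustration only: one possible way to map a 32-bit address onto
# 65,536 computers with 65,536 words each.
def split_address(address):
    computer = (address >> 16) & 0xFFFF   # upper 16 bits: which computer's memory
    word = address & 0xFFFF               # lower 16 bits: which word in that memory
    return computer, word

print(split_address(0x0003_0007))   # (3, 7): word 7 in computer 3's memory
```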

