Daisy III: Sequential logic

All the Boolean and arithmetic chips built so far have been combinational. Combinational chips compute functions that depend solely on combinations of their current input values. These relatively simple chips provide many important processing functions (like the ALU), but they cannot maintain state.

Since computers must also be able to store and recall values, they need memory elements that can preserve data over time. These memory elements are built using sequential logic.
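
To make the distinction concrete, here is a minimal Python sketch (the gate and chip names are illustrative, not drawn from any particular hardware platform). A combinational chip behaves like a pure function of its current inputs, whereas a memory element carries state from one clock cycle to the next:

    # Combinational: the output depends only on the current inputs.
    def and_gate(a: int, b: int) -> int:
        return a & b

    # Sequential: the output also depends on previously stored state.
    class Bit:
        """1-bit register: if load is asserted during cycle t, the new
        value appears on the output from cycle t+1 onward."""
        def __init__(self) -> None:
            self.state = 0

        def tick(self, value: int, load: int) -> int:
            out = self.state        # emit the state stored earlier
            if load:
                self.state = value  # commit the new value for the next cycle
            return out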

Time

The act of “remembering something” is inherently time-dependent: you remember now what was committed to memory earlier. Thus, in order to build chips that remember information, we must first develop some standard means of representing the progression of time.

In most computers, the passage of time is represented by a master clock that delivers a continuous train of alternating signals. The hardware implementation is typically based on an oscillator that alternates continuously between two phases, labeled 0-1, low-high, tick-tock, and so on.

The time between the beginning of a tick and the end of the subsequent tock is called a cycle. Each clock cycle is taken to model one discrete time unit. The clock phase at any given time is represented by a binary digit and broadcast to every sequential chip throughout the computer platform.
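
A rough software model of such a clock might look as follows (the encoding 0 = tick, 1 = tock is an assumption made for illustration):

    from itertools import count

    def clock():
        """Yield an endless train of alternating phase signals:
        0 (tick) followed by 1 (tock); one tick-tock pair is one cycle."""
        for t in count():
            yield t % 2

    # The current phase is a single binary digit; in hardware it would
    # be broadcast to every sequential chip in the platform.
    phases = clock()
    print([next(phases) for _ in range(6)])   # [0, 1, 0, 1, 0, 1]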

Orchestration

Recall that combinational chips change when their inputs change, irrespective of time. In contrast, sequential chips ensure that their outputs change only at the point of transition from one clock cycle to the next, and not during the cycle itself. In fact, sequential chips are allowed to be in unstable states during clock cycles, requiring only that at the beginning of the next cycle they output correct values. This clock-controlled behaviour of sequential chips is used to synchronise the overall computer architecture.
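
The following Python sketch models this discipline with a D flip-flop, the classic elementary sequential gate (its use here is illustrative). The input is sampled only at the transition between cycles, so whatever the input does mid-cycle never reaches the output:

    class DFF:
        """D flip-flop: out(t) = in(t-1). The input is sampled only at
        the cycle boundary; mid-cycle fluctuations are invisible."""
        def __init__(self) -> None:
            self.state = 0

        def cycle_end(self, d: int) -> int:
            out = self.state   # output held stable throughout the cycle
            self.state = d     # sample the input at the cycle boundary
            return out

    dff = DFF()
    print(dff.cycle_end(1))    # 0 -- still showing the previous state
    print(dff.cycle_end(0))    # 1 -- the value sampled one cycle earlier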

To illustrate, suppose we instruct the arithmetic logic unit (ALU) to compute x + y where x is the value of a nearby memory element and y is the value of a remote memory element. Because of various physical constraints, the electric signals representing x and y will likely arrive at the ALU at different times. However, being a combinational chip, the ALU is insensitive to the concept of time—it continuously adds up whichever data values happen to lodge in its inputs. Thus it will take some time before the ALU’s output stabilises to the correct x + y result. Until then, the ALU will generate garbage.
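
To see the effect on a timeline, consider the following toy simulation (the arrival and settling times are invented for illustration): the ALU's output is meaningless until the slower operand has arrived and the adder logic has settled.

    # Hypothetical arrival times of the two operands, in nanoseconds.
    arrivals = {"x": 2, "y": 7}   # y travels from the remote element
    settle = 3                    # time for the adder itself to settle

    # The output is valid only after both inputs have arrived and the
    # logic has settled; before that instant it is garbage.
    valid_from = max(arrivals.values()) + settle
    for t in range(12):
        status = "valid x+y" if t >= valid_from else "garbage"
        print(f"t={t:2d} ns: ALU output is {status}")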

Is this a problem? Since the output of the ALU is always routed to some sort of sequential chip (a register, a RAM location, and so on), we don’t really care. All we have to do is ensure that the length of the clock cycle is slightly longer than the time it takes a bit to travel the longest distance from one chip in the architecture to another. Then we are guaranteed that by the time the sequential chip updates its state (at the beginning of the next clock cycle), the inputs it receives from the ALU will be valid. This, in a nutshell, is the trick that synchronises a set of standalone hardware components into a well-coordinated system.
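
One way to make this requirement concrete is to add up the worst-case delays along the longest signal path and pad the total with a safety margin (all figures below are made up for illustration):

    # Hypothetical worst-case delays, in nanoseconds, along the longest
    # path in the architecture.
    path_delays_ns = {
        "remote RAM -> ALU": 7,
        "ALU settling time": 3,
        "ALU -> register setup": 2,
    }

    # The clock cycle must be slightly longer than the total worst-case
    # delay, so every sequential input is stable before the next tick.
    worst_case = sum(path_delays_ns.values())
    cycle_ns = worst_case * 1.1                # 10% safety margin
    print(f"minimum clock cycle: {cycle_ns:.1f} ns "
          f"(~{1000 / cycle_ns:.0f} MHz)")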