Topics covered: nanocomputers, nanotechnology, entropy
Drexler's mechanical logic
K. Eric Drexler, "the father of nanotechnology", has designed, and subjected to a thoroughgoing analysis, a mechanical logic for nanocomputers. A mechanical logic has the disadvantage of being slower than an electronic one, but it has the major advantage that at the molecular level it is possible to design, and analyse the operation of, a mechanical logic with current molecular simulation software, and be reasonably certain that the design, if built, would work. By the time we get around to actually building molecular computers, our analytical tools will be better than they are now, and what's more, we'll be able to augment the simulations with physical experiments. So real nanocomputer designs won't have to be nearly so conservative as this one. In particular, they'll probably be electronic, and thus probably some orders of magnitude faster. But don't worry: this mechanical design is already plenty fast.
This formulation is sometimes called "rod logic" because instead of wires, it uses molecular-sized rods. (E.g. a nanometer in diameter and from ten to a hundred nanometers long.) Each rod represents a logic 0 or 1 by its position, sliding slightly along its length to make the transition.
To do logic, the rods have knobs on them which may or may not be blocked by something, preventing the rod from changing state. The "something" is simply other knobs on other rods, which block or don't block the first rod, depending on their state (see Fig. 2). The logic is clocked by pulling on each rod through the equivalent of a spring, so that it moves unless blocked. (We can draw a workable parallel to transistors, which block or don't block a clock from changing the voltage on a wire, depending on the voltage of another wire.)
The rods move in a fixed, rigid housing structure which might be thought of as a hunk of diamond with appropriate channels cut out of it (although it wouldn't be built that way). The rods are supported along their entire length so the blocking does not place any bending stress on them.
Any logic function is now simply constructed: for an AND gate, for example, take two input rods and place knobs so that each input rod blocks the output rod when it is in the "0" position. The output rod will be able to move to "1" only when both inputs are "1."
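To make the blocking rule concrete, here is a minimal Python sketch of the AND gate just described; the function name and the 0/1 position encoding are chosen for illustration, not taken from Drexler's design.

```python
# Toy model of a rod-logic AND gate: an input rod resting at "0" leaves a
# knob in the output rod's path, so the output can slide to "1" only when
# neither input blocks it.  (Illustrative encoding, not Drexler's.)

def rod_and(input_a: int, input_b: int) -> int:
    """Return the position the output rod reaches when its drive spring pulls it."""
    blocked = (input_a == 0) or (input_b == 0)
    return 0 if blocked else 1

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(f"A={a} B={b} -> output rod at {rod_and(a, b)}")
```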
(Now that we can build a single gate, aren't we just about finished, by virtue of Murdocca's design? Well, we'd still need clocking and some mechanism to handle a delay-line memory; but more to the point, that design produces a computer that is about a billion times slower than you could build with conventional logic designs!)
The motion of the rods is limited by the speed of sound (in diamond); but they are so short (e.g. one-tenth of a micron) that the switching times are still a tenth of a nanosecond. The speed of an entire nanocomputer of this kind of design will be limited by thermal noise and energy dissipation, which can produce enough variation in the shapes of the molecular parts to keep them from working right. Drexler gives a detailed analysis of the sources of such error in Nanosystems (chapter 12). The energy dissipated per switching operation is conservatively estimated at about 0.013 maJ.
This is less than 5% of the fundamental limit for bit destruction; as long, of course, as the gate in question doesn't destroy bits! Most of the logic design in a rod logic nanocomputer must be either conservative logic or retractile cascades. To demonstrate the difference, consider a NOT gate. One can implement a conservative NOT because it has an exact inverse (which happens to be itself). One could implement a conservative NOT in rod logic with a single gear meshing with racks on the input and output rods. A retractile NOT, on the other hand, would be a single rod crossing with knobs preventing the output from moving to "1" if the input was "1". The "retractile" part is that the output must be let back down easy, in such a way that the energy stored in the spring is retrieved, before the input is reset for the next operation.
If this is not done, e.g. if the input is released first, the output rod will return under force of its spring and the energy stored in the spring will be dissipated as heat. In order for this not to happen, the output rod must be returned first, and then the input may be.
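For a sense of scale, here is a quick numeric check of the "less than 5% of the fundamental limit" figure, assuming room temperature (300 K, which the article does not state) and taking maJ to mean milli-attojoules, i.e. 10^-21 J.

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
T = 300.0             # assumed room temperature (not stated in the article)

landauer_limit_J = k_B * T * math.log(2)        # kT ln 2: minimum cost of destroying one bit
landauer_limit_maJ = landauer_limit_J / 1e-21   # taking 1 maJ (milli-attojoule) = 1e-21 J
dissipation_maJ = 0.013                         # per-switching-operation estimate quoted above

print(f"kT ln 2 at {T:.0f} K is about {landauer_limit_maJ:.2f} maJ")
print(f"0.013 maJ is roughly {100 * dissipation_maJ / landauer_limit_maJ:.1f}% of that limit")
```

Under those assumptions the limit works out to about 2.9 maJ per bit, so 0.013 maJ is indeed well under 5% of it.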
One very powerful and widely used technique in logic design is called the PLA (programmable logic array). A PLA is readily designed in retractile cascade style; it also has a remarkably good match to the geometric constraints of the rod logic, which requires the input and output rods from any interaction to form a right angle. The PLA consists of three sets of rods: the inputs, the minterms, and the outputs (see Fig. 3). First the input rods are moved, i.e. set to the input values. Then the minterm rods are pushed; some of them move and some don't, depending on which inputs blocked them. In an electronic PLA each input is fed into these interactions both directly and in negated form; this need not be done in the rod logic, since the same effect can be had by altering the positions of the knobs. Sometimes the number of minterms can be reduced for the same reason.
After the minterm rods are pushed, the output rods are pushed, and the appropriate value is encoded in which ones actually move. Now, the important thing in preserving reversibility (what makes this a retractile cascade) is that after this operation, first, the output rods must be let back down gently; then the minterms let back down gently; and finally the inputs can be released.
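The three-stage push and the reverse-order release can be written down directly. The sketch below is a plain Python rendering of that sequence; the particular wiring, an XOR built from the four two-input minterms, is an invented example, not one from the article.

```python
# A PLA evaluated in the order described above: set the inputs, push the
# minterm rods, push the output rods; then (the retractile part) release
# the outputs, the minterms, and finally the inputs.  The release is only
# conceptual here, since no energy is modeled.

from typing import Dict, List

def pla_cycle(inputs: Dict[str, int],
              minterms: List[Dict[str, int]],
              outputs: Dict[str, List[int]]) -> Dict[str, int]:
    """A minterm rod moves iff the inputs match its pattern; an output rod
    moves iff at least one of its listed minterms moved."""
    moved = [all(inputs[name] == want for name, want in pattern.items())
             for pattern in minterms]
    return {name: int(any(moved[i] for i in terms))
            for name, terms in outputs.items()}

if __name__ == "__main__":
    # All four minterms of two inputs; XOR is the OR of minterms 1 and 2.
    minterms = [{"A": 0, "B": 0}, {"A": 0, "B": 1},
                {"A": 1, "B": 0}, {"A": 1, "B": 1}]
    outputs = {"XOR": [1, 2]}
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, pla_cycle({"A": a, "B": b}, minterms, outputs))
```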
Notice that in the figure, the input rods are at rest in the 0 position, while the output rods are at rest in the 1 position. (And in any given operation, exactly one of the minterm rods will slide to the left.) PLAs can implement any logic function necessary in a computer, although there are more efficient circuits for some of them that are commonly used instead. More crucial, however, to a full grasp of the mechanisms of a nanocomputer, is memory.
Registers and memory
Memory is a problem; if we follow the rules for conservative or retractile reversible logic, memory is impossible to implement. This is because any memory with a "write" function erases bits by definition.
In Drexler's rod-logic design, all the bit-erasing functionality is concentrated in the registers. The register design is fairly complex, to keep the energy dissipated in this process near its theoretical minimum.
The main problem is that for a physical system to retain one of two states reliably, which is what you want in a memory, there must be a potential barrier between the states that is significantly higher than kT, or thermal fluctuations will be sufficient to flip the bit at random. But in simple implementations, the height of the barrier determines how much energy is lost when the system changes state.
Consider an ordinary light switch. When you flip it, there's a spring that resists your finger until the halfway point, and then it snaps into place, dissipating all the energy you put into it as heat, vibration, and sound. (A "silent" switch is worse, since it dissipates by friction and you have to push all the way across.) The weaker the spring, the more likely that some vibration will flip the switch when you didn't intend it. The trick is to have some way to change the strength of the spring (or to have the effect of doing so).
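To put a number on "significantly higher than kT": the rate of spontaneous flips scales roughly with the Boltzmann factor exp(-E/kT), so a modest multiple of kT already makes accidental flips astronomically rare. The barrier heights below are illustrative, not taken from the article.

```python
import math

# Probability weight for a thermal fluctuation hopping a barrier of height E,
# expressed in units of kT, via the Boltzmann factor exp(-E/kT).
for barrier_in_kT in (1, 10, 30, 60, 100):
    print(f"E = {barrier_in_kT:>3} kT  ->  exp(-E/kT) ~ {math.exp(-barrier_in_kT):.1e}")
```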
Figure 4 shows a simplified version of Drexler's register. The bit it contains is reflected in the position of the shaded ball (in the real design it's more complicated, so that the value can be read!). (a) and (b) show the register when it contains 0 and 1 respectively. In (c), the barrier has been lowered and the ball is free to wander between both positions; this stage increases entropy. In (d), the register is reset to 0. The similarity to compressing a gas-filled cylinder is apparent; this is where ln(2) kT joules of work are converted into heat. Now to write the next bit, the input rod (on the right) is either extended (a 1, see (f)) or not (a 0, see (e)) and then the barrier is raised. Finally, the spring rod (on the left) is retracted to get back to (a) or (b). If a 1 was written, the input rod did work to compress the ball against the spring, but that energy can be retrieved when the spring rod is retracted. The mechanisms to do this are just the same as in the logic portions, e.g. having the rods mechanically coupled to a flywheel.
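The write cycle can be summarized as a little state machine. The toy class below follows the stages of Figure 4 as described, charging kT ln 2 only at the reset-to-zero step; the class name, the 300 K temperature, and the idealized bookkeeping are assumptions for illustration, not part of the actual design.

```python
import math
import random

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed operating temperature

class RodRegister:
    """Toy model of the register cycle: lower barrier, reset to 0, encode the
    new bit with the input rod, raise the barrier, retract the spring rod.
    Only the erase (reset) step dissipates energy, idealized at kT ln 2."""

    def __init__(self) -> None:
        self.bit = 0
        self.heat_J = 0.0                    # energy dissipated so far

    def write(self, new_bit: int) -> None:
        self.bit = random.randint(0, 1)      # (c) barrier down: ball thermalizes, old value lost
        self.bit = 0                         # (d) compress to 0: the kT ln 2 cost
        self.heat_J += K_B * T * math.log(2)
        self.bit = new_bit                   # (e)/(f) input rod encodes the new bit; barrier up;
                                             # spring-rod work is recoverable, so no heat here

if __name__ == "__main__":
    reg = RodRegister()
    for b in (1, 0, 1, 1):
        reg.write(b)
    print(f"stored bit = {reg.bit}, heat ~ {reg.heat_J / 1e-21:.1f} maJ over 4 writes")
```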
Registers like this which are going to be used to erase bits will tend to be located near heat sinks or coolant ducts; bit erasure is the largest component of power dissipation in the rod logic design. Memory can be implemented as lots of registers; registers occupy about 40 cubic nanometers per bit. Thus about 3 megabytes worth of registers fill a cubic micron. One would probably use register memory for cache, however, and use a mechanical tape system for main storage. The "tape" would be a long carbon chain with side groups that differed enough to be 1's and 0's. Since the whole computer is mechanical, the difference in speeds is not as bad as macroscopic tapes on electronic computers. Such a tape system might have a density in the neighborhood of a gigabyte per cubic micron. Access times for using a tape as a random access memory consist almost entirely of latency; if the length of individual tapes is kept to under 100 kbytes, this is in the tens of microseconds.
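The density figure is easy to check. A back-of-envelope script, using only the 40 cubic-nanometers-per-bit number quoted above:

```python
# Register-memory density: 40 cubic nanometers per bit.
NM3_PER_BIT = 40
NM3_PER_UM3 = 1_000 ** 3          # 1 cubic micron = 1e9 cubic nanometers

bits_per_um3 = NM3_PER_UM3 / NM3_PER_BIT
megabytes_per_um3 = bits_per_um3 / 8 / 1e6
print(f"{bits_per_um3:.2e} bits per cubic micron, about {megabytes_per_um3:.1f} MB")
# -> roughly 3 MB per cubic micron, matching the figure in the text.
```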
Motors
In order to drive all this mechanical logic we need a motor of some kind; Drexler has designed an electric motor which is nothing short of amazing. (Clearly this is of import well beyond computers.) The reason is that the scaling laws for power density are in our favor as we go down toward the nanometer realm. At macroscopic sizes, almost all electric motors are electromagnetic; at nano scales, they will be electrostatic. The motor is essentially a Van de Graaff generator run in reverse (but it works just fine as a generator, as do some macroscopic electric motors). The power density of the motor is over 10^15 W/m^3; this corresponds to packing the power of a fanjet from a 747 into a cubic centimeter. (It's not clear what you'd do with it if you did, though!)
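Scaling that power density to a cubic centimeter gives the headline number; the arithmetic is below (the 747 comparison itself is the article's, not something this check establishes).

```python
# Power available from a 1 cm^3 block at the quoted power density.
POWER_DENSITY_W_PER_M3 = 1e15     # from the text: over 10^15 W/m^3
CM3_IN_M3 = 1e-6                  # 1 cm^3 = 1e-6 m^3

power_W = POWER_DENSITY_W_PER_M3 * CM3_IN_M3
print(f"{power_W:.1e} W, i.e. about {power_W / 1e9:.0f} GW from a single cubic centimeter")
```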
Ultimately, the ability to make small, powerful motors is going to be more important for nanorobots than for nanocomputers per se. The speed advantage of electronics over mechanical logic is almost certain to drive the direction nanocomputer design takes.
Other logics for nanocomputers
Before going into other extensions of conventional digital logic, there is another form of nanocomputer that may appear earlier for technological reasons. That's the molecular biocomputer.
Imagine that a DNA molecule is a tape, upon which is written 2 bits of information per base pair (the DNA molecule is a long string of adenine-thymine and guanine-cytosine pairs). Imagine, in particular, this to be the tape of a Turing machine, which is represented by some humongous clump of special-purpose enzymes that reads the "tape," changes state, replaces a base pair with a new one, and slides up and down the "tape." If one could design the enzyme clump using conventional molecular biology techniques (and each of the individual functions it needs to perform is done somewhere, somehow, by some natural enzyme), you'd have a molecular computer.
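The Turing-machine picture is simple enough to write out. Here is a toy version in Python in which the tape is a list of base pairs and the "enzyme" just complements each pair and halts at the end of the tape; the machine and its transition rule are purely illustrative, not a proposal for an actual enzyme complex.

```python
# A toy Turing machine on a base-pair tape: each pair carries two bits, and
# the head reads a pair, rewrites it, changes state, and moves along the tape.

COMPLEMENT = {"AT": "TA", "TA": "AT", "GC": "CG", "CG": "GC"}

def run(tape):
    state, head = "SCAN", 0
    while state != "HALT":
        if head >= len(tape):                 # ran off the right end of the tape
            state = "HALT"
            continue
        tape[head] = COMPLEMENT[tape[head]]   # rewrite the pair under the head
        head += 1                             # slide one pair to the right
    return tape

if __name__ == "__main__":
    print(run(["AT", "GC", "GC", "TA"]))      # -> ['TA', 'CG', 'CG', 'AT']
```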
Other mechanical logics
Now, back to mechanical logic. Most macroscopic mechanical logic in the past has been based on rods that turn instead of sliding. It's reasonable to assume that similar designs could be implemented at the nano scale.
Electronic logic
It's clear that quantum mechanics allows for mechanisms that capture a single electron and hold it reliably in one place. After all, that's what an atom is. Individual electrons doing specific, well-defined things under the laws of quantum mechanics is what happens in typical chemical reactions. Clearly there is no basic physical law that prevents us from building nanocomputers that handle electrons as individual objects.
What is not so clear is how, specifically, they will work. Quantum mechanics is computationally far more expensive to simulate, and intuitively harder to understand, than the essentially "physical object" models used in mechanical nanotechnology designs thus far. Indeed, the designs are typically larger and slower than they would have to be in reality, simply to avoid having to confront the analysis of quantum effects.
Ultimately, however, nanotechnologists will be "quantum mechanics." Computers based on quantum effects will be even smaller, more efficient, and much faster than mechanical ones of the type presented above. They will use much the same logical structure: it's quite possible to design retractile cascades even in conventional transistors (where it's an extension of techniques called "dry switching" in power electronics and "hot clocks" in VLSI design).
There are schemes, with some mathematical plausibility, to harness quantum state superposition for implicit parallel processing. In my humble opinion, these will require some conceptual breakthrough (or at the very least, significant experimental clarification) about the phenomenon of the collapse of the Schroedinger wave function before they can be harnessed by a buildable device. Keep your fingers crossed!
Conclusion
Beyond certain rapidly approaching limits of size and speed, any computer must use logical reversibility to limit bit destruction. This is particularly true of nanocomputers with molecular-scale components, which, if designed according to standard current-day irreversible techniques, explode.
We can design nanocomputers today which we are virtually certain would work if constructed. They use mechanical parts that are more than one atom but less than ten atoms across in a typical short dimension. The parts move at rates of up to ten billion times per second; processors built that way could be expected to run at rates of 1000 MIPS. Such a processor, and a megabyte of very fast memory, would fit in a cubic micron (the size of a bacterium). A gigabyte of somewhat slower memory would fit in another cubic micron. A pile of ten thousand such computers would be just large enough to see with the naked eye.
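As a rough check on that last claim, assume each computer (processor plus both memories) occupies about two cubic microns; ten thousand of them, packed into a cube, then come to a few tens of microns on a side, which is right around the usual threshold of naked-eye visibility. The dense-packing assumption is for illustration only.

```python
# Back-of-envelope size of a pile of ten thousand such computers.
COMPUTERS = 10_000
UM3_EACH = 2.0    # assumed: ~1 um^3 processor + fast memory, ~1 um^3 slower memory

side_um = (COMPUTERS * UM3_EACH) ** (1 / 3)
print(f"a dense cube of {COMPUTERS} computers is about {side_um:.0f} micrometers on a side")
```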
FURTHER READING
Drexler, K. Eric: Nanosystems: Molecular Machinery, Manufacturing, and Computation, Wiley Interscience, New York, 1992
Proceedings of the Physics of Computation Workshop, Dallas, October 1993, IEEE Press (in press). (Particularly papers by Merkle, Hall, and Koller.)
Hennessy, J.L. & Patterson, D.A.: Computer Architecture: A Quantitative Approach, Morgan Kaufmann, San Mateo, CA, 1990
Watson, Hopkins, Roberts, Steitz, & Weiner: Molecular Biology of the Gene, Benjamin/Cummings, Menlo Park, CA, 1987 (4th ed.)
Leff, Harvey S. & Rex, Andrew F.: Maxwell's Demon: Entropy, Information, Computing, Princeton University Press, Princeton, NJ, 1990 (particularly papers by Landauer and Bennett)