Topics covered: nanocomputers, nanotechnology, entropy





"NANOCOMPUTERS
21st Century hypercomputing"
by J. Storrs Hall

Text taken from the magazine "Extropy", #10, vol. 4, no. 2, Winter/Spring 1993, Los Angeles, CA, USA




If the price and performance figures for transportation technology had followed the same curves as those for computers for the past 50 years, you'd be able to buy a top-of-the-line luxury car for $10. What's more, its mileage would be such as to allow you to drive around the world on one gallon of gas. That would only take about half an hour since top speed would be in the neighborhood of 50,000 mph (twice Earth's escape velocity).
Oh, yes, and it would seat 5000 people.
Comparisons like this serve to point out just how radically computers have improved in cost, power consumption, speed, and memory capacity over the past half century. Is it possible that we could see as much improvement again, and in less than another half century? The answer appears to be yes.
At one time, people measured the technological sophistication of computers in "generations": there were vacuum tubes, discrete transistors, ICs, and finally large-scale integration. However, since the mid-70's, the entire processing unit has increasingly come to be put on a single chip and called a "microprocessor." After these future advances have happened, today's microprocessors will look the way ENIAC does to us now. The then-extant computers will need a different name; we'll refer to them as nanocomputers. "Micro" does exemplify at least the device size (on the order of a micron) and instruction speed (on the order of a microsecond) of the microprocessor. In at least one design, which we'll examine below, the nanometer and the nanosecond are the appropriate measures instead.
Do we really need nanocomputers? After all, you have to be able to see the screen and press the keys even if the processor is microscopic. The answer to this question lies in realizing just how closely economics and technological constraints determine what computers are used for. In the mid-sixties, IBM sold a small computer for what was then the average price of a house. Today, single-chip micros of roughly the same computational power cost less than $5 and are used as controllers in toaster-ovens. Similarly, we can imagine putting a nanocomputer in each particle of pigment to implement "intelligent paint", or at each pixel location in an artificial retina to implement image understanding algorithms.
This last is a point worth emphasizing. With today's processing technology, robots operating outside a rigid, tightly controlled environment are extremely expensive and running at the ragged edge of the state of the art. Even though current systems can, for example, drive in traffic at highway speeds, no one is going to replace truck drivers with them until their cost comes down by some orders of magnitude. Effective robotics depends on enough computational power to perform sensory perception; nanocomputers should make this cost-effective the way microcomputers did text processing.
Beyond providing robots with the processing power humans already have, there is the opportunity of extending those powers themselves. Nanocomputers represent enough power in little enough space that it would make sense to implant them in your head to extend your sensorium, augment your memory, sharpen your reasoning. As is slowly being understood in the world of existing computers, the interface to the human is easily the most computationally intensive task of all.
Last but not least - and in some sense the most important application for nanocomputers - is their use as controllers for nanomechanical devices. In particular, molecular assemblers will need nanocomputers to control them; and we will need assemblers to build nanocomputers. (In the jargon of nanotechnology, an "assembler" is a robot or other mechanical manipulator small enough to build objects using tools and building blocks that are individual molecules.)

What is a nanocomputer?

Currently, the feature sizes in state-of-the-art VLSI fabrication are on the order of half a micron, i.e. 500 nanometers. In fifteen years, using nothing more than a curve-fitting, trend-line prediction, this number will be somewhere in the neighborhood of 10 nanometers; would it be appropriate to refer to such a chip as a nanocomputer?
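(As a rough check on that trend line, assuming purely for illustration that feature size halves at a constant rate, shrinking from 500 nm to about 10 nm over fifteen years corresponds to a halving time T of roughly

    $$ 500\,\mathrm{nm} \cdot 2^{-15/T} \approx 10\,\mathrm{nm} \;\Rightarrow\; T = \frac{15\,\ln 2}{\ln 50} \approx 2.7 \text{ years}, $$

which is consistent with the historical rate of feature-size scaling.)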
For the purposes of this article, no. We want to talk about much more specific notions of future computing technology. First, we're expecting the thing to be built with atomic precision. This does not mean that there will be some robot arm that places one atom, then another, and so forth, until the whole computer is done. It means that the "design" of the computer specifies where each atom is to be. We expect the working parts (whether electrical or mechanical) to be essentially single molecules.
We can reasonably expect the switches, gates, or other embodiments of the logical elements to be on the order of a nanometer in size. (If they are electrical, they may have to be farther apart than that because of electron tunneling.) In any case, it is quite reasonable to expect the entire computer to be smaller than a cubic micron, which contains a billion cubic nanometers.

Nanotechnology

This assembler-nanocomputer dependence sounds like a self-referential loop, and it is. But many technologies are that way; machine tools make precision parts used in machine tools. Bootstrapping into a self-supporting technology is not a trivial problem, but it's not an impossible one either.
Another self-referential loop in nanotechnology is slightly more complicated. We would like assemblers to be self-reproducing. This would allow nanocomputers and other nanotechnological products to be inexpensive, because each fixed initial investment would lead to an exponentially increasing number of nanomechanisms, rather than a linearly increasing one. But how can a machine make a copy of itself? The problem is that while we can imagine, for example, a robot arm that can screw, bolt, solder, and weld well enough to assemble a robot arm from parts, it needs a sequence of instructions to obey in this process. And there is more than one instruction per part. But the instructions must be embodied in some physical form, so to finish the process we need instructions to build the instructions, and so on, in an infinite regress. The answer to this seeming conundrum was given mathematically by John von Neumann, and at roughly the same time (the '50's) was teased out of the naturally occurring self-reproducing machines we find all around us, living cells. It turns out to be the same answer in both cases.
First, design a machine that can build machines, like the robot arm above. (In a cell, there is such a thing, called a ribosome.) Next, we need another machine which is special-purpose, and does nothing but copy instructions. (In the cell it's called a replisome.) Finally, we need a set of instructions that includes directions for making both of the machines, plus whatever ancillary devices and general operating procedures may be needed. (In the cell, this is the DNA.) Now we read the instructions through the first machine, which makes all the new machinery necessary. Then we read them through the second machine, which makes a new set of instructions. And there's our whole new self-reproducing system, with no infinite regress.
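As a rough sketch of this scheme (the names "constructor", "copier", and "TAPE" below are illustrative stand-ins for the ribosome, replisome, and DNA, not anything from the article), the following Python fragment reproduces both the machinery and the instructions without ever needing instructions that describe themselves:

    def constructor(tape):
        """'Build' the machinery described on the tape (here, new Python functions)."""
        machinery = {}
        exec(tape, machinery)          # "constructing" = defining the machines
        return machinery

    def copier(tape):
        """Duplicate the instruction tape verbatim, without interpreting it."""
        return str(tape)

    # The tape describes the two machines - and only the two machines.
    TAPE = """
    def constructor(tape):
        machinery = {}
        exec(tape, machinery)
        return machinery

    def copier(tape):
        return str(tape)
    """

    child_machines = constructor(TAPE)   # read the tape through machine 1: new machinery
    child_tape = copier(TAPE)            # read the tape through machine 2: a new tape
    assert child_tape == TAPE
    assert "constructor" in child_machines and "copier" in child_machines

The tape only ever has to describe the two machines; it is copied rather than self-described, which is what breaks the regress.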


General computer principles

When you get right down to it, a computer is just a device for changing information. You put in your input, and it gives you your output. If it's being used as the controller for anything from a toaster to a robot, it gets its input from its sensors and gives its output to its motors, switches, solenoids, speakers, and what have you.
Internally, the computer has a memory, which is used to store the information it's working on. Many functions, even one as simple as a push-on, push-off light switch, need memory by definition. But the computer also uses memory to help break the job down to size, so as to be able to change the data it receives in little pieces, one at a time. The more memory you allow, and the smaller the pieces, the simpler the actual hardware can be.
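As a concrete, if trivial, illustration of that point, here is a minimal Python sketch (the names are chosen purely for illustration): the same input, a press, produces different outputs depending on one bit of stored state.

    class ToggleSwitch:
        def __init__(self):
            self.on = False            # one bit of memory

        def press(self):
            self.on = not self.on      # same input, different result each time
            return self.on

    switch = ToggleSwitch()
    print(switch.press())   # True  (the light turns on)
    print(switch.press())   # False (the same input turns it off)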
There is a "folk theorem" in the computer world that the NAND gate is computationally universal. This is true in the sense that one can design any logic circuit using only NAND gates. However, something much more surprising is also true. Less than ten years ago, Miles Murdocca, working at Bell Labs, showed (in an unpublished paper) how to build a universal computer using nothing but delay elements and one single OR gate. Just one. Not circuits using arbitrarily many of just one kind of gate.
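To illustrate the folk theorem (this is emphatically not Murdocca's construction, which is a far stronger and quite different result), here is a minimal Python sketch building NOT, AND, OR, and XOR out of NAND alone, checked exhaustively against Python's own operators:

    def nand(a, b):
        return not (a and b)

    def not_(a):    return nand(a, a)
    def and_(a, b): return not_(nand(a, b))
    def or_(a, b):  return nand(not_(a), not_(b))
    def xor_(a, b): return or_(and_(a, not_(b)), and_(not_(a), b))

    for a in (False, True):
        for b in (False, True):
            assert and_(a, b) == (a and b)
            assert or_(a, b)  == (a or b)
            assert xor_(a, b) == (a != b)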
Murdocca's computer works by driving the notion of a computer down to its barest essentials. A computer is a memory and a device to change the information in the memory. Generally we add two more specifics: the information is encoded and changed under the rules of Boolean logic, and the changes happen in synchronized, discrete steps. To build a computer, then, we need some way to remember bits; some way to perform Boolean logic; and some way to clock the sequence of remember, perform, remember, perform, etc.
In computers from the earliest machines to date, information has been encoded either as the position of a mechanical part, as the voltage on a wire, or as a combination of the two (as in a relay-based computer). The major reason is that these are the easiest encodings to use in the logic part of the particular technology in question. It seems reasonable to expect to see these encodings at the nano level, for the same reasons.


Additional constraints for nanocomputers

About a year ago I had the occasion to design a nanocomputer using Eric Drexler's mechanical rod logic, which will be examined in detail later in this article. As someone who was used to the size and speed constraints of electronics, I was in my glory with this new medium. I went wild, adding functionality, pipelining, multicomputing, the works. I could build a supercomputer beyond the wildest dreams of Cray, the size of a bacterium! What I didn't do was pay any attention to "this crazy reversible computing stuff."
--Until I did the heat dissipation calculations. The problem is that there really is a fundamental physical limit involved in computation, but it represents an amount of energy so small (it's comparable to the thermal energy of one atom) that it is totally negligible in existing physical devices. But in a nanocomputer, it far outweighs all the other heat-producing mechanisms; in fact, my nanocomputer design had the same heat dissipation per unit volume as a low-grade explosive. Back to the drawing board...
Since the earliest electronic computers in the 1940's, energy dissipation per device has been declining exponentially with time. Device size has undergone a similar decline, with the result that overall dissipation per unit volume has been relatively constant (see the horizontal bar in Fig. 1). Historically, the portion represented by the thermodynamic limit for irreversible operations was completely insignificant; it is still in the parts-per-million range. However, with densities and speeds in the expected range for nanocomputers, it is extremely significant. Thus nanocomputer designers will be forced to pay attention to reversibility.
Efficient computers, like efficient heat engines, must be as nearly reversible as possible. Rolf Landauer showed in a landmark 1961 paper that the characteristic irreversible operation in computation is the erasure of a bit of information; the other operations can, in principle, be carried out reversibly and without dissipating energy. And as in heat engines, the reason reversibility matters is the Second Law of Thermodynamics, the law of entropy.
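To get a feel for the magnitudes involved, here is a back-of-the-envelope Python sketch of the Landauer limit, kT ln 2 per erased bit. The device count and clock rate are purely illustrative assumptions, not figures from the article or from any published design:

    from math import log

    k_B = 1.380649e-23          # Boltzmann's constant, J/K
    T = 300.0                   # room temperature, K

    E_erase = k_B * T * log(2)  # Landauer limit per erased bit, about 2.9e-21 J

    # Purely illustrative assumptions: a cubic-micron computer with 1e9 logic
    # elements, each erasing one bit per cycle at a 1 GHz clock rate.
    erasures_per_second = 1e9 * 1e9
    power = E_erase * erasures_per_second      # watts dissipated at the limit
    volume = (1e-6) ** 3                       # one cubic micron, in cubic metres
    power_density = power / volume             # watts per cubic metre

    print(f"Landauer limit per erased bit: {E_erase:.2e} J")
    print(f"Power at the limit:            {power:.2e} W")
    print(f"Power density:                 {power_density:.2e} W/m^3")

On those assumptions, even the theoretical minimum comes to a few milliwatts in a cubic micron, a power density on the order of 10^15 W/m^3; that is why avoiding erasure, rather than merely reducing the energy per operation, becomes the design issue.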



(to be continued)