Topics covered: nanocomputers, nanotechnology, entropy
Entropy
The subject of entropy seems to give rise to more misconceptions and disagreements than any other scientific principle. Relativity and quantum mechanics have been similarly abused, but much of the abuse is in the form of extensions of the concepts that are frankly metaphorical. Relativity and quantum mechanics do not, in general, apply to the everyday world, but entropy does. When people use metaphorical extensions of entropy on phenomena that are governed by actual entropy, confusion occurs.
It's possible, on the other hand, to give a metaphorical explanation of entropy that is carefully rigged to give all the same answers as actual entropy. Here it is; but if you'd prefer to take my word for it that nanocomputers must be reversible, you can skip to the next section.
Let us suppose that we are going to have a computer simulation of some closed physical system. We can have as high an accuracy as we like, but the total amount of information, i.e. the number of bits in the computer's memory, is in the end some fixed finite number. Now since the physical system we're simulating is closed, there will be no input to the simulation once it is started.
Since there is a fixed number of bits, say K, there is a fixed number of possible descriptions of the system the simulation can ever possibly express, namely 2^K of them. Now by the first law of thermodynamics (conservation of energy), the total energy in a closed system is constant. Thus we can pick all of the states with a given energy and call them "allowable"; the rest are forbidden. The first law constrains the system to remain within the allowable subset of states but says no more about which states within that set the system will occupy.
There is another constraint, however, in the sense that the laws of physics are deterministic; given a state, there is a single successor state the system can occupy in the next instant of time. (In the real world, this is more complicated in two ways: time and the state space are continuous, and quantum mechanics provides for multiple successor (and predecessor) states. However, the mathematical form of quantum mechanics (i.e. Hamiltonian transformations) gives it properties analogous to those of the model, so for perspicuity, we will stick with the discrete, deterministic model.) What is more, the laws are such that each state has not only a unique successor, but a unique predecessor.
Let's try to make this notion a little more intuitive. Each "state" in our computer simulation corresponds to some description of all the individual atoms in the physical system. For each atom, we know exactly where it is, exactly how fast it is going, exactly in what direction, etc. As we move forward in time, we can calculate all the electrical, gravitational, and, if we care, nuclear forces on that atom due to all the other atoms, and compute just where it will be some tiny increment of time in the future. Clearly, to just the same degree of precision, we can calculate exactly where it must have been the same tiny amount of time in the past. The math of the physical laws allows you to simulate going backwards just as deterministically as you can simulate going forwards.
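Here is a minimal sketch of that idea in Python (my own illustration, not part of the original argument). Positions and velocities are stored as integers, so the state really is a fixed number of bits, and the update rule is chosen so each state has exactly one successor and one predecessor; running the same rule in reverse retraces the trajectory exactly.

    # A toy, reversible 1-D "simulation" (hypothetical force law, chosen only
    # to illustrate the point).  Update rule:
    #   v' = v + force(x)   (force computed from the old position)
    #   x' = x + v'
    # which is undone exactly by:
    #   x  = x' - v'
    #   v  = v' - force(x)

    def force(x):
        # hypothetical toy force pushing the atom back toward x = 0
        return -1 if x > 0 else (1 if x < 0 else 0)

    def step_forward(x, v):
        v = v + force(x)
        x = x + v
        return x, v

    def step_backward(x, v):
        x = x - v
        v = v - force(x)
        return x, v

    x, v = 5, 2
    history = [(x, v)]
    for _ in range(10):
        x, v = step_forward(x, v)
        history.append((x, v))

    # Running the same number of steps backward retraces the exact trajectory.
    for expected in reversed(history[:-1]):
        x, v = step_backward(x, v)
        assert (x, v) == expected
    print("retraced exactly:", (x, v) == history[0])

Integers matter here: floating-point rounding would spoil the exactness, but with a fixed number of bits the backward step undoes the forward step bit for bit.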
So, suppose we have a simulation of a box which has a partition dividing it in half. There is some gas in one half, i.e. atoms bouncing around, and none in the other. Now suppose the partition disappears: the atoms that would have bounced off it will continue on into the empty half, which pretty soon won't be empty any more. The atoms will be distributed more or less evenly throughout the box.
What happens if we suddenly stop the simulation and run it backwards? In fact, each atom will retrace the exact path it took since the partition disappeared, and by the time the partition should reappear, the atoms will all be in the original half.
In reality, we don't see this happen. Remember that in our model there is a distinct causal chain of states from the state where the atoms are all spread out but about to move into half the box, to the state where they are actually in half the box. This means that the number of states from which the atoms are about to compress spontaneously (in some specific number of timesteps) is the same as the number of states in which they are all in one half of the box.
The important thing to remember is that the total energy (which is proportional to the sum of the squares of the velocities of the atoms) must be the same. If we used a simulated piston to push the atoms back into the original half, we would find a 1-to-1 mapping between spread-out states and compressed ones; but the compressed ones would be higher-energy states.
How many states are we talking about here? Well, suppose that all we know about any specific atom is which side of the box it is in, which we can represent with a single bit. If the box has just 100 atoms in it, there will be more than 1,267,000,000,000,000,000,000,000,000,000 states in which the atoms are spread around evenly, and one state where they are all on one side. A similar ratio holds between the number of states (with full descriptions) where the atoms are spread out, and the subset of those states where they are about to pile over into one side.
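If you want to check that count for yourself, the arithmetic is just 2 to the 100th power, one which-side bit per atom; a couple of lines of Python, purely as an illustration:

    # One bit per atom saying which half of the box it is in: 100 atoms give
    # 2**100 possible left/right patterns, only one of which puts every atom
    # on the chosen side.
    n_atoms = 100
    total_patterns = 2 ** n_atoms
    print(total_patterns)               # 1267650600228229401496703205376 (~1.27e30)
    print(total_patterns.bit_length())  # 101, i.e. about 100 bits of description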
It's clear that quantum mechanics allows for mechanisms that capture a single electron and hold it reliably in one place. Individual electrons doing specific, well-defined things under the laws of quantum mechanics is what happens in typical chemical reactions... there is no basic physical law that prevents us from building nanocomputers that handle electrons as individual objects.
We are now going to talk about entropy. In order to relate the simulation
model of a physical system to the way physical scientists view physical systems, we'll use the term "microstate" to represent what we have been calling a state in the simulation, i.e. one specific configuration of the system where all the bits are known. We'll use "macrostate" to refer to what a physical scientist thinks about the system. This means knowing the temperature, pressure, mass, volume, chemical composition, physical shape, etc., but not knowing everything about every atom.
Clearly, there are lots of microstates in a macrostate. The log of the number of microstates in a given macrostate is the entropy of the macrostate. (Physical entropies are generally given as natural logs, but we will talk in terms of base 2 logs with the understanding that a scaling factor may be needed to translate back.) To put it more simply, the entropy of a given macrostate is the number of bits needed to specify which of its microstates the system is actually in.
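As a toy illustration of that definition (the numbers are mine, using the 100-atom box from before and counting only the which-side bit of each atom's description):

    from math import comb, log2

    # Macrostate "exactly 50 atoms on each side" versus "all 100 on the left".
    micro_even = comb(100, 50)   # ways to choose which 50 atoms sit on the left
    micro_packed = 1             # only one way to have them all on the left

    print(log2(micro_even))      # ~96.3 bits: entropy of the spread-out macrostate
    print(log2(micro_packed))    # 0 bits: entropy of the all-on-one-side macrostate

Knowing only the macrostate "half on each side", you would need about 96 more bits to say which microstate the system is actually in; knowing "all on the left", you need none.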
With these definitions, the second law of thermodynamics is quite straightforward. If a proposed physical transformation maps a macrostate with a higher entropy into one with a lower entropy, we know it is impossible. Remember the causal chains of (micro)states: they can neither branch nor coalesce. Now suppose at the beginning of some physical process, the system was in a macrostate with a trillion microstates; we have no idea which microstate, it could be any one of them. Therefore at the end of the process, it can still be in a trillion microstates, each at the end of a causal chain reaching back to the corresponding original microstate. Obviously, the system cannot be in any macrostate with fewer than a trillion microstates, i.e. with a lower entropy than that of the original macrostate.
Now suppose I have a beaker of water at a specific temperature and pressure. It has, according to a physicist or chemist, a specific entropy. But suppose I happen to know a little more about it, e.g. I have a map of the currents and vortices still flowing around in it from when it was stirred. There are a lot of microstates that would be allowed by the simpler description which I know the water is not really in. Isn't its entropy "really" lower? Who gets to say which is the "real" macrostate, whose size determines the "true" entropy of the system?
The answer is that entropy isn't a property of the physical system at all, but a property of the description. After all, the real system is only in one single microstate! (Ignoring quantum mechanics.) This does sound a bit strange: surely the "true" entropy of any system is then 0. And we should be able to induce a transformation from this system into any macrostate we like, even one with much lower entropy than that of the original macrostate of the system as conventionally measured.
Let's consider the little box with the atoms of gas in it. The gas is evenly spread through the box, a partition is placed, and there is a Maxwell's Demon with a door to let the atoms through selectively. But the demon isn't going to try anything fancy. We're going to assume that we know the exact position and velocity of each atom in advance, so we will be able to provide the demon with a control tape that tells him when to open and close the door without observing the atoms at all. In fact, this would work: the demon can herd all the atoms into one side without expending any energy.
Why doesn't this violate the second law? Well, let's count up the causal chains. The entropy problem in the first place is that there are many, many fewer microstates in the final macrostate, namely the one with all the atoms on one side, than in the original, so that many original microstates must somehow map into a single final one. But with the demon at work, we can run the simulation backwards by running the demon backwards too; the sequence of door openings and closings that got us to our particular final microstate is clearly enough information to determine which original microstate we started from. Thus the final state, including the tape, is still in a one-to-one mapping with the original state, and the second law is not violated.
The curious thing to note about this gedanken experiment is that the demon can compress the gas without expending energy; what he cannot do is erase the tape! This would leave the system with too few final states.
What happens if the demon starts with a blank tape, instead of one where the microstate of the system is already recorded? Can he measure the system on the fly? Again yes, but only if he writes his measurements on the tape. Again the critical point is that the data on the tape serves to make the number of possible final microstates as large as the number of possible original microstates.
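To make the bookkeeping concrete, here is a toy, tape-driven demon in Python (entirely my own sketch of the scenario above, with a made-up 1-D box; it is not meant as a physical model). A first pass watches the atoms and writes its open/close decisions on a tape; a second pass replays that tape blind and herds atoms into the left half without ever measuring them. The tape is exactly the information that keeps the whole thing reversible.

    import random

    W, STEPS = 20, 4000                  # half-width of the box, number of steps
    random.seed(1)
    init = [(random.randrange(2 * W), random.choice((-1, 1))) for _ in range(8)]

    def run(atoms, tape=None):
        atoms, out_tape = list(atoms), []
        for t in range(STEPS):
            # would-be crossings this step (left half is positions 0 .. W-1)
            r_to_l = any(p == W and v == -1 for p, v in atoms)
            l_to_r = any(p == W - 1 and v == 1 for p, v in atoms)
            # policy: open only when something can come in and nothing can escape
            door_open = tape[t] if tape is not None else (r_to_l and not l_to_r)
            out_tape.append(door_open)
            moved = []
            for p, v in atoms:
                q = p + v
                at_door = {p, q} == {W - 1, W}
                if q < 0 or q >= 2 * W or (at_door and not door_open):
                    v = -v               # bounce off a wall or the closed door
                    q = p + v
                moved.append((q, v))
            atoms = moved
        return atoms, out_tape

    final_watched, tape = run(init)      # pass 1: demon observes and records
    final_blind, _ = run(init, tape)     # pass 2: demon only reads the tape

    assert final_watched == final_blind  # the tape alone reproduces the run
    print("atoms now in the left half:",
          sum(p < W for p, _ in final_blind), "of", len(init))

Note that the left-half population can only grow: the door opens only when no atom is poised to escape, so once an atom has been herded in, it stays. And, as in the discussion above, the tape is what you would have to erase to pretend the compression came for free.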
In practice, of course, the way one would obtain the same result, i.e. moving the atoms into half the box, would be to use a piston to compress the gas and then bring it in contact with a heat sink and let it cool back to the original temperature. Energy, in the form of work, is put into the system in the first phase and leaves the system, in the form of heat, in the second phase. At the end of the process the system is the same as the demon left it but there is no tape full of information. Clearly there is some sense in which the dissipation of heat is equivalent to erasing the tape.
In terms of the simulation model, the demon directly removes one bit from the position description of each atom (storing it on the tape). The piston compression moves a bit from the position to the velocity description, and the cooling process removes that bit (storing it in the heat sink). The entropy of the gas decreases, and that of the heat sink increases.
Of course, dissipating heat is not the only way to erase a bit. Any process that "moves" entropy, i.e. decreases it in one part of a system at the expense of another part, will do. For example, instead of increasing the temperature of a heat sink, we could have expanded its volume. Or disordered a set of initially aligned regions of magnetization (in other words, written the bits on a tape). Or any other physical process which would increase the amount of information necessary to identify the system's microstate. However, heat dissipation is probably the easiest of these mechanisms to maintain as a continuous process over long periods of time, and it is well understood and widely practiced.
A state-of-the-art processor, with 100,000 gates erasing a bit per gate per cycle, at 100 MHz, dissipates about 28 nanowatts due to entropy. (At room temperature. Each bit costs you the natural log of 2, times Boltzmann's constant, times the absolute temperature in kelvins, joules of energy dissipation, which comes to about 2.87 maJ (milli-attojoule, 10^-21 joules)). Since it actually dissipates 100 million times this much, or more, nobody cares. But with a trillion-fold decrease in volume and thousand-fold increase in speed, the nanocomputer is "a whole 'nother ball game."
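The arithmetic in that parenthesis is easy to redo (standard constants, room temperature taken as 300 K):

    from math import log

    k_B = 1.380649e-23                   # Boltzmann's constant, J/K
    T = 300.0                            # room temperature, K
    bit_cost = k_B * T * log(2)          # energy to erase one bit
    print(bit_cost)                      # ~2.87e-21 J, the "2.87 maJ" above

    bits_per_second = 100_000 * 100e6    # bits/cycle times cycles/second
    print(bit_cost * bits_per_second)    # ~2.9e-8 W, i.e. the ~28 nanowatts above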
Thus there are two new design rules that the nanocomputer designer must adopt:
(1) Erase as few bits as possible.
(2) Eliminate entropy loss in operations that do not erase bits.
We eliminate entropy loss in logical operations by what is known as "logical reversibility". Suppose we have in our computer registers A and B, and an instruction ADD A,B that adds A to B. Now in ordinary computers that would be done by forming the sum A+B, erasing the previous contents of register B, and then storing the sum there. However, it isn't logically necessary to do this; since we can recreate the old value of B by subtracting A from the new value, no information has been lost, and thus it is possible to design a circuit that can perform ADD A, B without erasing any bits.
Addition has the property that its inputs and results are related in such a way that the result can replace one of the inputs with no loss of information. However, many useful, even necessary, functions don't have this property. We can still use those functions reversibly; the only trick needed is not to erase the inputs! Ultimately, of course, you have to get rid of the input in order to process the next one; but you can always erase the output without entropic cost if you've saved the input.
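A few lines of Python make the point (my illustration, not a description of any actual circuit): the ADD that keeps A is a bijection on the register pair, and a function like AND, which isn't, can still be handled reversibly by writing its result into a fresh zeroed bit instead of overwriting an input.

    def add_ab(a, b):        # reversible ADD A,B: (A, B) -> (A, A+B)
        return a, a + b

    def un_add_ab(a, s):     # exact inverse: recover the old B by subtracting
        return a, s - a

    assert un_add_ab(*add_ab(17, 25)) == (17, 25)

    # AND would lose information if it overwrote an input: with A = 0, both
    # B = 0 and B = 1 give A AND B = 0.  Keeping the inputs and XOR-ing the
    # result into a zeroed scratch bit is invertible (apply it twice and the
    # scratch bit is back to zero).
    def and_keep_inputs(a, b, scratch=0):
        return a, b, scratch ^ (a & b)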
This leads to structures in reversible computation called "retractile cascades." Each of a series of (logical) circuits computes a function of the output of its predecessor. If the final output is erased first, and then the next-to-last, and so forth, the entire operation is reversible, and can be done (in theory!) without any energy dissipation.
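Here is what that looks like in a toy Python model (assumptions mine: the "circuits" are just functions on small integers, scratch cells start at zero, and "erasing" a result means uncomputing it back to zero by applying the same stage again):

    stages = [lambda x: x ^ 0b1010,      # hypothetical stage functions
              lambda x: (x * 3) & 0xFF,
              lambda x: x ^ 0b0101]

    def retractile(x):
        scratch = [0] * len(stages)
        prev = x
        for i, f in enumerate(stages):   # forward pass: compute, keep inputs
            scratch[i] ^= f(prev)        # cell was 0, now holds f(prev)
            prev = scratch[i]
        out = 0
        out ^= scratch[-1]               # reversible copy of the final answer
        for i in reversed(range(len(stages))):   # retraction: last stage first
            prev = x if i == 0 else scratch[i - 1]
            scratch[i] ^= stages[i](prev)        # returns the cell to zero
        assert scratch == [0] * len(stages)      # scratch reversibly "erased"
        return out

    print(retractile(0b0110))

The answer is copied out reversibly before the retraction; every intermediate result is then undone in reverse order, so nothing ever has to be thrown away into the heat sink.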
If we adopt these rules throughout our computer design, we can reduce the number of bits erased per cycle from around 100,000 to around 10.