- Entropy (arrow of time)
Entropy is the only quantity in the physical sciences that "picks" a particular direction for time, sometimes called an arrow of time. As one goes "forward" in time, the second law of thermodynamics says that the entropy of an isolated system can only increase or remain the same; it cannot decrease. Hence, from one perspective, entropy measurement is thought of as a kind of clock.
By contrast, all physical processes occurring at the microscopic level, such as mechanics, do not pick out an arrow of time. Going forward in time, an atom might move to the left, whereas going backward in time, the same atom might move to the right; the behavior of the atom is not "qualitatively" different in either case. In contrast, it would be an astronomically improbable event if a macroscopic amount of gas that originally filled a container evenly spontaneously shrank to occupy only half the container.
Certain subatomic interactions involving the weak nuclear force violate the conservation of parity, but only very rarely. According to the CPT theorem, this means they should also be time irreversible, and so establish an arrow of time. This, however, is not linked to the thermodynamic arrow of time, which is the main issue of this article, nor has it anything to do with our daily experience of time irreversibility.
The Second Law of Thermodynamics allows for the entropy to "remain the same". If the entropy is constant in either direction of time, there would be no preferred direction. However, the entropy can only be a constant if the system is in the highest possible state of disorder, such as a gas that always was, and always will be, uniformly spread out in its container. The existence of a thermodynamic arrow of time implies that the system is highly ordered in one time direction, which would by definition be the "past". Thus this law is about the boundary conditions rather than the equations of motion of our world.
Unlike most other laws of physics, the Second Law of Thermodynamics is statistical in nature, and its reliability arises from the huge number of particles present in macroscopic systems. It is not impossible, in principle, for all 6 × 10^23 atoms in a mole of a gas to spontaneously migrate to one half of a container; it is only "fantastically" unlikely -- so unlikely that no macroscopic violation of the Second Law has ever been observed.
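The "fantastic" unlikeliness is easy to quantify. A minimal sketch, assuming each atom independently occupies either half of the container with probability 1/2:

```python
import math

# Probability that all N gas atoms are found in one half of a container,
# assuming each atom independently occupies either half with probability 1/2.
# We work in log10 because the probability itself underflows to zero.
def log10_prob_all_in_one_half(n_atoms: float) -> float:
    # P = (1/2)**N, so log10(P) = -N * log10(2)
    return -n_atoms * math.log10(2)

N_A = 6.022e23  # Avogadro's number: atoms in one mole

print(log10_prob_all_in_one_half(100))  # ~ -30: already hopeless for 100 atoms
print(log10_prob_all_in_one_half(N_A))  # ~ -1.8e23: "fantastically" unlikely
```

Even for a hundred atoms the probability is around 10^-30; for a mole it is 10 to the power of minus 1.8 × 10^23, which is why no macroscopic violation is ever seen.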
The thermodynamic arrow is often linked to the cosmological arrow of time, because it is ultimately about the boundary conditions of the early universe. According to the Big Bang theory, the Universe was initially very hot, with energy distributed uniformly. For a system in which gravity is important, such as the universe, this is a low-entropy state (compared to the high-entropy state of having all matter collapsed into black holes, a state to which the system may eventually evolve). As the Universe grows, its temperature drops, which leaves less energy available to perform useful work in the future than was available in the past. Additionally, perturbations in the energy density grow (eventually forming galaxies and stars). Thus the Universe itself has a well-defined thermodynamic arrow of time. But this does not address the question of why the initial state of the universe was one of low entropy. If cosmic expansion were to halt and reverse due to gravity, the temperature of the Universe would once again increase, but its entropy would continue to increase, due to the continued growth of perturbations and eventual black hole formation. [Penrose, R. "The Road to Reality", pp. 686–734]
Mathematics of the arrow
The mathematics behind the "arrow of time", entropy, and the basis of the second law of thermodynamics derive from the following set-up, as detailed by Carnot (1824), Clapeyron (1832), and Clausius (1854):
Here, as common experience demonstrates, when a hot body "T1", such as a furnace, is put into physical contact, such as being connected via a body of fluid (the working body), with a cold body "T2", such as a stream of cold water, energy will invariably flow from hot to cold in the form of heat "Q", and given time the system will reach equilibrium. Entropy, defined as Q/T, was conceived by Rudolf Clausius as a function to measure the molecular irreversibility of this process, i.e. the dissipative work the atoms and molecules do on each other during the transformation.
In this diagram, one can calculate the entropy change ΔS for the passage of the quantity of heat "Q" from the temperature "T1", through the "working body" of fluid (see heat engine), which was typically a body of steam, to the temperature "T2". Moreover, one could assume, for the sake of argument, that the working body contains only two molecules of water.
Next, if we make the assignment, as originally done by Clausius:

S = Q/T

Then the entropy change or "equivalence-value" for this transformation is:

ΔS = Q/T2 − Q/T1

and by factoring out Q, we have the following form, as was derived by Clausius:

ΔS = Q(1/T2 − 1/T1)
Thus, for example, if Q was 50 units, "T1" was initially 100 degrees, and "T2" was initially 1 degree, then the entropy change for this process would be 49.5. Hence, entropy increased for this process, the process took a certain amount of "time", and one can correlate entropy increase with the passage of time. For this system configuration, subsequently, it is an "absolute rule". This rule is based on the fact that all natural processes are irreversible by virtue of the fact that the molecules of a system, for example two molecules in a tank, not only do external work (such as pushing a piston), but also do internal work on each other, in proportion to the heat used to do work (see: Mechanical equivalent of heat) during the process. Entropy accounts for the fact that internal inter-molecular friction exists.
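The worked numerical example can be checked directly. A minimal sketch, taking the entropy change as Clausius's equivalence-value ΔS = Q/T2 − Q/T1:

```python
# Clausius's "equivalence-value" for a quantity of heat Q passing from a hot
# body at temperature T1 to a cold body at T2 (with T2 < T1 on an absolute
# scale): delta_S = Q/T2 - Q/T1 = Q * (1/T2 - 1/T1), which is positive.
def entropy_change(Q: float, T1: float, T2: float) -> float:
    return Q * (1.0 / T2 - 1.0 / T1)

# The worked example from the text: Q = 50 units, T1 = 100 degrees, T2 = 1 degree.
print(round(entropy_change(50, 100, 1), 6))  # 49.5
```

Since T2 < T1, the result is always positive: heat flowing from hot to cold increases entropy, in line with the "absolute rule" described above.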
As an example, one could film a ball near the Earth as it moved up, slowed gradually to a stop, and then fell back down to the same position it started in. If all of the laws of physics were symmetrical one would expect it to take the same amount of time for the ball to go up as it did for it to come down, but as it turns out the film would show it taking longer on one leg of the journey than it did on the other. If one were to look at just this stretch of film one might conclude, incorrectly, that gravity was not symmetric. However, if the film were good enough to capture the motions of each air molecule it would tell a different story.
In that case what would be seen would be that, as the film was played in one direction, the ball would be constantly colliding with the molecules of air in front of it, transferring some of its momentum to them, and those collisions would be slowing it down. If the film was played in the reverse direction it would show molecules of air striking the ball from behind, and speeding it up. Once one takes into account both gravity and momentum the film shows that they are both symmetrical forces. This still leaves the asymmetry in the film unaccounted for. The second law of thermodynamics explains that asymmetry.
The second law of thermodynamics relates to the entropy of a system (in this case the 'system' is the ball, the surrounding air, and the Earth's gravity). It states that closed systems will tend to change from a state of higher order to a state of lower order. In the above example the system starts out with a large amount of kinetic energy (speed or motion) concentrated in the ball. As it moves through the air the ball transfers some of that energy to the air in the form of heat. Eventually the ball ends up in the same place it started out in, but the ball has less energy than it started out with and the air has the same amount more. A more orderly state has become less orderly since the energy has become less concentrated and more diffused.
To take the example one step further, one could say one continued to film long enough to see the ball strike the ground. In that case one would see an end state where "all" of the kinetic energy has been turned into heat and sound waves and transferred from the ball into the ground and air. There are many other examples of such processes, including friction and heat dissipation. Any irreversible process can be used as an "arrow" that points to a direction in time.
In 1867, James Clerk Maxwell introduced a now-famous thought experiment that highlighted the contrast between the statistical nature of entropy and the deterministic nature of the underlying physical processes. This experiment, known as Maxwell's demon, consists of a hypothetical "demon" that guards a trapdoor between two containers filled with gases at equal temperatures. By allowing fast molecules through the trapdoor in only one direction and only slow molecules in the other direction, the demon raises the temperature of one gas and lowers the temperature of the other, apparently violating the Second Law.
Maxwell's thought experiment was only resolved in the 20th century by Leó Szilárd, Charles H. Bennett, and others. The key idea is that the demon itself necessarily possesses a non-negligible amount of entropy that increases even as the gases lose entropy, so that the entropy of the system as a whole increases. This is because the demon has to contain many internal "parts" if it is to perform its job reliably, and therefore has to be considered a "macroscopic" system with non-vanishing entropy. An equivalent way of saying this is that the information possessed by the demon on which atoms are considered "fast" or "slow" can be considered a form of entropy known as information entropy.
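One quantitative form of this resolution is Landauer's principle, which underlies Bennett's argument: erasing each bit of the demon's record dissipates at least k_B·T·ln 2 of heat, so the demon's information really is entropy. A minimal sketch:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

# Landauer's principle: erasing one bit of the demon's record dissipates at
# least k_B * T * ln(2) of heat into the environment. Equivalently, each
# recorded "fast/slow" bit carries k_B * ln(2) of entropy.
def min_erasure_heat(bits: float, temperature: float) -> float:
    return bits * k_B * temperature * math.log(2)

# Minimum heat cost of erasing one bit at room temperature (300 K), in joules:
print(min_erasure_heat(1, 300.0))  # ~2.87e-21 J per bit
```

This tiny but nonzero cost per bit is what guarantees that the demon's bookkeeping increases total entropy at least as fast as its sorting decreases the entropy of the gases.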
An important difference between the past and the future, which has often been overlooked in past discussions, is that in any system (such as a gas of particles) its initial conditions are usually such that its different parts are uncorrelated, but as the system evolves and its different parts interact with each other, they become correlated. For example, whenever dealing with a gas of particles, it is always assumed that its initial conditions are such that there is no correlation between the states of different particles (i.e. the speeds and locations of the different particles are completely random, up to the need to conform with the macrostate of the system). This is closely related to the Second Law of Thermodynamics.
Take for example (experiment A) a closed box which is, at the beginning, half-filled with ideal gas. As time passes, the gas obviously expands to fill the whole box, so that the final state will be a box full of gas. This is an irreversible process, since if the box is full at the beginning (experiment B), it will not become only half-full later, except in the most unlikely situation where the gas particles have very special locations and speeds. But this is precisely because we always assume that the initial conditions are such that the particles have random locations and speeds. This is not correct for the final conditions of the system, because the particles have interacted between themselves, so that their locations and speeds have become dependent on each other, i.e. correlated. This can be understood if we look at experiment A backwards in time, which we'll call experiment C: now we begin with a box full of gas, but the particles do not have random locations and speeds; rather, their locations and speeds are so particular that after some time they all move to one half of the box, which is the final state of the system (this is the initial state of experiment A, because now we're looking at the same experiment backwards!). The interactions between particles now do not create correlations between the particles, but in fact render them (at least seemingly) random, "canceling" the pre-existing correlations. The only difference between experiment C (which defies the Second Law of Thermodynamics) and experiment B (which obeys the Second Law of Thermodynamics) is that in the former the particles are uncorrelated at the end, while in the latter the particles are uncorrelated at the beginning.
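A deliberately crude simulation can illustrate the statistical point of experiment A. The "dynamics" here is just re-randomizing which half of the box each particle occupies, a stand-in assumption for real collisions, but it already shows that the all-on-one-side state is special:

```python
import random

# Toy version of "experiment A": N non-interacting particles start in the left
# half of a box; each step, every particle independently ends up in a random
# half. The fraction in the left half relaxes to ~1/2 and, for large N,
# essentially never returns to 1.0 -- the statistical content of the Second Law.
def simulate(n_particles, n_steps, seed=0):
    rng = random.Random(seed)
    left = [True] * n_particles          # all particles start on the left
    history = [1.0]
    for _ in range(n_steps):
        left = [rng.random() < 0.5 for _ in left]  # re-randomize positions
        history.append(sum(left) / n_particles)
    return history

fractions = simulate(n_particles=1000, n_steps=50)
print(fractions[0])        # 1.0 -- the special, low-entropy initial condition
print(min(fractions[1:]))  # stays near 0.5; a return to 1.0 is astronomically unlikely
```

Running experiment C corresponds to demanding that the final `left` list come out all `True` by chance, which for 1000 particles has probability 2^-1000.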
In fact, if all the microscopic physical processes are reversible (see discussion below), then the Second Law of Thermodynamics can be proven for any isolated system of particles with initial conditions in which the particles' states are uncorrelated. In order to do this one must acknowledge the difference between the measured entropy of a system - which is dependent only on its macrostate (its volume, temperature etc.) - and its information entropy, which is the amount of information (number of computer bits) needed to describe the exact microstate of the system. The measured entropy is independent of correlations between particles in the system, because they do not affect its macrostate, but the information entropy does depend on them, because correlations lower the randomness of the system and thus lower the amount of information needed to describe it. Therefore, in the absence of such correlations the two entropies are identical, but otherwise the information entropy will be smaller than the measured entropy, and the difference can be used as a measure of the amount of correlations.
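The gap between the two entropies can be made concrete with a two-particle toy model: the Shannon entropy of the exact joint state drops when the particles are correlated, while the per-particle ("macrostate-level") description is unchanged. A minimal sketch:

```python
import math

def shannon_entropy(probs):
    # Shannon entropy in bits, skipping zero-probability outcomes.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Two "particles", each in state 0 or 1; joint outcomes (0,0), (0,1), (1,0), (1,1).
uncorrelated = [0.25, 0.25, 0.25, 0.25]  # joint = product of marginals
correlated = [0.5, 0.0, 0.0, 0.5]        # perfectly correlated: always agree

# Each particle separately looks maximally random (1 bit each) in BOTH cases,
# but the information entropy of the exact joint microstate differs:
print(shannon_entropy(uncorrelated))  # 2.0 bits -- equals the sum of marginals
print(shannon_entropy(correlated))    # 1.0 bit  -- correlations lower it by 1 bit
```

The 1-bit difference in the correlated case is exactly the mutual information between the particles, i.e. the "measure of the amount of correlations" referred to above.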
Now, time-reversal of all microscopic processes implies that the amount of information needed to describe the exact microstate of an isolated system (its information entropy) is constant in time. If there are no correlations between the particles initially, then this is just the initial thermodynamic entropy of the system. However, if these are indeed the initial conditions (and this is a crucial assumption), then such correlations will form with time, and for a time which is not too long the correlations between particles will only increase with time; therefore, the measured entropy must also increase with time [ [http://www.ucl.ac.uk/~ucesjph/reality/entropy/text.html Some Misconceptions about Entropy] ]. (Note that "not too long" in this context is relative to the time needed, in a classical version of the system, for it to pass through all its possible microstates - a time which can be roughly estimated as τe^S, where τ is the time between particle collisions and S is the system's entropy. In any practical case this time is huge compared to everything else.)
The arrow of time in various phenomena
All phenomena that behave differently in one time direction can ultimately be linked to the Second Law of Thermodynamics. This includes the fact that ice cubes melt in hot coffee rather than assembling themselves out of the coffee, that a block sliding on a rough surface slows down rather than speeding up, and that we can remember the past rather than the future. This last phenomenon, called the "psychological arrow of time", has deep connections with Maxwell's demon and the physics of information; in fact, it is easy to understand its link to the Second Law of Thermodynamics if one views memory as correlation between brain cells (or computer bits) and the outer world. Since the Second Law of Thermodynamics is equivalent to the growth of such correlations with time, it states that memory will be created as we move towards the future (rather than towards the past).
Current research focuses mainly on describing the thermodynamic arrow of time mathematically, either in classical or quantum systems, and on understanding its origin from the point of view of cosmological boundary conditions.
Some current research in dynamical systems indicates a possible "explanation" for the arrow of time. There are several ways to describe the time evolution of a dynamical system. In the classical framework, one considers a differential equation, where one of the parameters is explicitly time. By the very nature of differential equations, the solutions to such systems are inherently time-reversible. However, many of the interesting cases are either ergodic or mixing, and it is strongly suspected that mixing and ergodicity somehow underlie the fundamental mechanism of the arrow of time.
Mixing and ergodic systems do not have exact solutions, and thus proving time irreversibility in a mathematical sense is (as of 2006) impossible, and remains an exercise in hand-waving. Some progress can be made by studying discrete-time models or difference equations. Many discrete-time models, such as the iterated functions considered in popular fractal-drawing programs, are explicitly not time-reversible, as any given point "in the present" may have several different "pasts" associated with it: indeed, the set of all pasts is known as the Julia set. Since such systems have a built-in irreversibility, it is inappropriate to use them to explain why time is not reversible.
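The logistic map is a standard example of such a non-invertible iterated function; a minimal sketch showing that a "present" value generally has two distinct "pasts":

```python
import math

# The logistic map x -> r*x*(1-x) is a standard discrete-time chaotic system
# with built-in irreversibility: a "present" value y generally has TWO
# preimages (the roots of r*x*(1-x) = y), so its past cannot be reconstructed.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def preimages(y, r=4.0):
    # Solve r*x*(1-x) = y  =>  x = (1 +/- sqrt(1 - 4*y/r)) / 2
    root = math.sqrt(1.0 - 4.0 * y / r)
    return (1.0 - root) / 2.0, (1.0 + root) / 2.0

x_minus, x_plus = preimages(0.75)
print(x_minus, x_plus)                      # two distinct "pasts": 0.25 and 0.75
print(logistic(x_minus), logistic(x_plus))  # both map forward to 0.75
```

Iterating `preimages` backwards in time, the set of all such ancestors of a point is what the text calls the Julia set of the map.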
There are other systems which are chaotic, and are also explicitly time-reversible: among these is the Baker's map, which is also exactly solvable. An interesting avenue of study is to examine solutions to such systems not by iterating the dynamical system over time, but instead to study the corresponding Frobenius-Perron operator or transfer operator for the system. For some of these systems, it can be explicitly, mathematically shown that the transfer operators are not trace-class. This means that these operators do not have a unique eigenvalue spectrum that is independent of the choice of basis. In the case of the Baker's map, it can be shown that several unique and inequivalent diagonalizations or bases exist, each with a different set of eigenvalues. It is this phenomenon that can be offered as an "explanation" for the arrow of time. That is, although the iterated, discrete-time system is explicitly time-symmetric, the transfer operator is not. Furthermore, the transfer operator can be diagonalized in one of two inequivalent ways: one of which describes the forward-time evolution of the system, and one which describes the backwards-time evolution.
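In contrast to the logistic map, the Baker's map is invertible, so iterating it backwards is perfectly well-defined; a minimal sketch of the map and its inverse on the unit square:

```python
# The Baker's map on the unit square: stretch horizontally by 2, cut the
# result in half, and stack the right half on top. Unlike the logistic map,
# it is invertible: every point has exactly one past.
def baker(x, y):
    if x < 0.5:
        return 2.0 * x, y / 2.0
    return 2.0 * x - 1.0, (y + 1.0) / 2.0

def baker_inverse(x, y):
    if y < 0.5:
        return x / 2.0, 2.0 * y
    return (x + 1.0) / 2.0, 2.0 * y - 1.0

# Iterating forward twice and then backward twice recovers the starting point
# exactly (these coordinates are dyadic, so no floating-point error accrues):
p = (0.375, 0.625)
q = baker(*baker(*p))
print(baker_inverse(*baker_inverse(*q)))  # (0.375, 0.625)
```

The time-asymmetry discussed in the text thus lives not in the map itself, which is reversible, but in the spectral properties of its transfer operator.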
As of 2006, this type of time-symmetry breaking has been demonstrated for only a very small number of exactly-solvable, discrete-time systems. The transfer operator for more complex systems has not been consistently formulated, and its precise definition is mired in a variety of subtle difficulties. In particular, it has not been shown that it has a broken symmetry for the simplest exactly-solvable continuous-time ergodic systems, such as Hadamard's billiards, or the Anosov flow on the tangent space of PSL(2,R).
Research on irreversibility in quantum mechanics takes several different directions. One avenue is the study of
rigged Hilbert spaces, and in particular, how discrete and continuous eigenvalue spectra intermingle. For example, the rational numbers are completely intermingled with the real numbers, and yet have a unique, distinct set of properties. It is hoped that the study of Hilbert spaces with a similar inter-mingling will provide insight into the arrow of time.
Another, distinct, approach is by the study of
quantum chaos. Here, one attempts to quantize systems which are classically chaotic, ergodic or mixing. The results obtained are not dissimilar from those that come from the transfer operator method. For example, the quantization of the Boltzmann gas, that is, a gas of hard (elastic) point particles in a rectangular box, reveals that the eigenfunctions are space-filling fractals that occupy the entire box, and that the energy eigenvalues are very closely spaced and have an "almost continuous" spectrum (for a finite number of particles in a box, the spectrum must, of necessity, be discrete). If the initial conditions are such that all of the particles are confined to one side of the box, the system very quickly evolves into one where the particles fill the entire box. Even when all of the particles are initially on one side of the box, their wave functions do, in fact, permeate the entire box: they constructively interfere on one side, and destructively interfere on the other. Irreversibility is then argued by noting that it is "nearly impossible" for the wave functions to be "accidentally" arranged in some unlikely state: such arrangements are a set of measure zero. Because the eigenfunctions are fractals, much of the language and machinery of entropy and statistical mechanics can be imported to discuss and argue the quantum case.
Some processes which involve high-energy particles and are governed by the weak force (such as K-meson decay) defy the symmetry between time directions. However, all known physical processes do preserve a more complicated symmetry (CPT symmetry), and are therefore unrelated to the second law of thermodynamics, or to our day-to-day experience of the arrow of time. A notable exception is the wave function collapse in quantum mechanics, which is an irreversible process. It has been conjectured that the collapse of the wave function may be the reason for the Second Law of Thermodynamics. However, it is more accepted today that the opposite is correct, namely that the (possibly merely apparent) wave function collapse is a consequence of quantum decoherence, a process which is ultimately an outcome of the Second Law of Thermodynamics.
It currently seems that the ultimate reason for a preferred time direction is that the universe as a whole was in a highly ordered state at its very early stages, shortly after the big bang, and that any fluctuations in it were uncorrelated. The question of why this highly ordered state existed, and how to describe it, remains an area of research. Currently, the most promising direction is the theory of cosmic inflation.
According to this theory, our universe (or, rather, its accessible part, a radius of 46 billion light years around our location) evolved from a tiny, totally uniform volume (a portion of a much bigger universe), which expanded greatly; hence it was highly ordered. Fluctuations were then created by quantum processes related to its expansion, in a manner supposed to ensure that these fluctuations are uncorrelated for all practical purposes. This is supposed to give the desired initial conditions needed for the Second Law of Thermodynamics.
Our universe is probably an open universe, so that its expansion will never terminate, but it is an interesting thought experiment to imagine what would have happened had our universe been closed. In such a case, its expansion would stop at a certain time in the distant future, and it would then begin to shrink. Moreover, a closed universe is finite. It is unclear what would happen to the Second Law of Thermodynamics in such a case. One could imagine at least three different scenarios (in fact, only the third one is probable, since the first two require very simple cosmic evolution):
* A highly controversial view is that in such a case the arrow of time will be reversed. The quantum fluctuations - which in the meantime have evolved into galaxies and stars - will be in superposition in such a way that the whole process described above is reversed - i.e. the fluctuations are erased by destructive interference and total uniformity is achieved once again. Thus the universe ends in a big crunch which is very similar to its beginning in the big bang. Because the two are totally symmetric, and the final state is very highly ordered, entropy has to decrease close to the end of the universe, so that the Second Law of Thermodynamics is reversed when the universe shrinks. This can be understood as follows: in the very early universe, interactions between fluctuations created entanglement (quantum correlations) between particles spread all over the universe; during the expansion, these particles became so distant that these correlations became negligible (see quantum decoherence). At the time the expansion halts and the universe starts to shrink, such correlated particles arrive once again at contact (after circling around the universe), and the entropy starts to decrease - because highly correlated initial conditions may lead to a decrease in entropy. Another way of putting it is that as distant particles arrive, more and more order is revealed, because these particles are highly correlated with particles which arrived earlier.
* It could be that this is the crucial point where the wavefunction collapse is important: if the collapse is real, then the quantum fluctuations will not be in superposition any longer; rather, they will have collapsed to a particular state (a particular arrangement of galaxies and stars), thus creating a big crunch which is very different from the big bang. Such a scenario may be viewed as adding boundary conditions (say, at the distant future) which dictate the wavefunction collapse [ [http://www.arxiv.org/abs/quant-ph?papernum=0507269 Two-time interpretation of quantum mechanics (quant-ph/0507269)] ].
* Finally, highly smooth initial conditions may lead to a highly non-smooth final state. Highly non-smooth gravitational systems tend to collapse to black holes, so the wavefunction of the whole universe evolves from a superposition of small fluctuations to a superposition of states with many black holes in each. It may even be that it is impossible for the universe to have both a smooth beginning and a smooth ending. Note that in this scenario the energy density of the universe in the final stages of its shrinkage is much larger than in the corresponding initial stages of its expansion (there is no destructive interference, unlike in the first scenario described above), and consists mostly of black holes rather than free particles.
In the first scenario, the cosmological arrow of time is the reason for both the thermodynamic arrow of time and the quantum arrow of time. Both will slowly disappear as the universe comes to a halt, and will later be reversed.
In the second and third scenarios, it is the difference between the initial state and the final state of the universe that is responsible for the thermodynamic arrow of time. This is independent of the cosmological arrow of time. In the second scenario, the quantum arrow of time may be seen as the deep reason for this.
See also: History of entropy, Arrow of time

* [http://plato.stanford.edu/entries/time-thermo/ Thermodynamic Asymmetry in Time] at the online Stanford Encyclopedia of Philosophy