You'll Never Know


by Jim Holt

Impossibility: The Limits of Science and the Science of Limits
by John Barrow. Oxford University Press, 279 pp., $30 (May).

The most shocking thing I ever heard in Paris was not said by a poststructuralist or Maoist, let alone a deconstructionist. It was said by a mathematician, Ivar Ekeland, who used to be president of one of the University of Paris campuses. Over tea in his apartment near the Panthéon, Ekeland and I were talking about the solar system. Was it eternally stable? Or would minute gravitational perturbations among the planets cause the thing to crack up a few billion years down the line?

I knew this was a hard question. It had stumped Isaac Newton, who ended up arguing that God must be stepping in from time to time and making little adjustments to keep the planetary clockwork ticking. In the nineteenth century, Laplace thought he had proved that the dynamics of the solar system were self-sustaining. When Napoleon asked him where God fit in the scheme, Laplace replied, "Je n'avais pas besoin de cette hypothèse" (I had no need of that hypothesis). Then, early in this century, Poincaré showed that Laplace had been wrong: The mutual gravitational tugging of the planets was so complicated that the question of the system's stability was mathematically insoluble--not just practically, but in principle. I knew that, too.

Yet Ekeland went one further. The question was not merely hard, he maintained. It was not merely undecidable by mathematics. It was meaningless.

At this my reason rebelled. Surely, by the law of the excluded middle, the solar system was either stable or it wasn't. There had to be a fact of the matter as to whether it would eventually fall apart, I insisted--even if we would never know it.

But things turned out to be rather more subtle than I expected. As Ekeland explained it, if you look at the abstract higher-dimensional "space" consisting of all the possible positions and momenta of the nine planets, it turns out that the set of points for which the system would crack up is dense among those for which it would be stable. This means that arbitrarily close to any point corresponding to stability, there is another one corresponding to crack-up. In other words, the distance between stability and crack-up is literally infinitesimal. So the only way to determine whether the solar system is eternally stable would be to measure the positions and momenta of all the planets with absolute accuracy. But this would be a gross violation of Heisenberg's uncertainty principle. Not only do the exact position and momentum of a particle--whether an electron or a planet--resist simultaneous measurement; their mutual vagueness is built right into the world. It is not just futile to ask after these two numbers; according to quantum theory, it is meaningless. Not even God could know them (assuming He is bound by the same laws of logic). Therefore, it is meaningless to ask whether our solar system is stable.
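To spell out the quantum obstruction, the bound at issue is the familiar Heisenberg relation, given here in its textbook form rather than in Ekeland's words: the uncertainties in a body's position and momentum can never both vanish, because their product is bounded below by a multiple of Planck's constant,

\[
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}.
\]

Fixing the solar system's exact point in that higher-dimensional space would require Δx = Δp = 0 for every planet at once, which the inequality forbids; and on the orthodox reading of quantum theory, the missing digits are not merely hidden from us but do not exist to be known.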

This is, of course, just the sort of disconcerting thing a naive American tends to discover on a Paris holiday. But what sort of conclusion should one draw from it? Does it represent one of the limits of science?

I hadn't really thought about that until I read Impossibility, a new book by John Barrow, the Sussex astronomer and prolific popularizer. There has been much talk recently about the scientific enterprise running out of steam or crashing into impassable barriers. The phrase "the end of science" has been bandied about, especially by the journalist John Horgan, whose book of that title caused a terrific row when it was published in 1996. The ensuing debate struck me as boringly inconclusive, mainly because so many different meanings commingled uneasily in the phrase "the end of science": that we had learned all there was to know about the fundamental workings of the universe, so no more revolutions of Newtonian or Darwinian magnitude lay on the horizon; that our puny brains, evolved on the African savanna, could go no further in comprehending the Book of Nature, and our puny particle accelerators could never match the energies of the big bang; that the cosmos, though knowable in some small measure, is for the most part chaotic and unintelligible.

Impossibility is something more than a gloss on these contentious themes. Barrow has a novel angle: Science itself predicts that there are things it can't predict. A hallmark of pseudoscience, he observes, is that it promises total knowledge, offering to reveal the causes and qualities of all we behold. A sophisticated theory of the world, by contrast, implicitly contains its own epistemic limits. Take my opening example: Is it a limitation of science that it cannot answer the plain man's question, "Will the planets ever crash into one another?" In fact, science can do something more impressive. It can show that the very concepts and assumptions built into the question--shared by the plain man and Newton alike--are nonsense, as far as the world is concerned.

This science-of-limits peg is a promising one, and Barrow hangs a great deal of expository material on it. In his earlier books--notably Theories of Everything (1991) and Pi in the Sky (1994)--he has proved himself a dab hand at haute vulgarisation, blending deep ideas with teasing conjecture. Impossibility, however, shows signs of being a hasty pudding, over-egged with epigraphs--do we really need to be reminded of E.M. Forster's "Only connect," let alone James Bond's "Never say never again"?--and dubiously relevant topics. What do time travel and its related paradoxes, for instance, have to do with the limits of science? It is diverting to hear from Barrow about the "cumulative audience paradox," which says that if time travel were ultimately possible, events like the Crucifixion ought to have been thronged with billions of voyeurs from the future. But if time travelers threaten to go back and rearrange the past, that is a difficulty not for scientists but for historians (see, for instance, Adam Michnik's quip, "Socialism is a system in which the past cannot be predicted").

Some of contemporary science's limits are, of course, purely contingent. We know next to nothing, for example, about how the human mind works. Why? Because too few volunteers are willing to donate their brains to science before they are dead, the mental microcircuitry remains beyond our reach. Other limits, though, seem to be logically necessary, obstructing the inquiry of every possible civilization. Gödel's theorem may infect physics with incompleteness the same way it does mathematics, Barrow speculates. Worse still is the theory of cosmic inflation, which purports to account for what was going on just prior to the big bang. "Inflation acts as a cosmological filter," Barrow writes. "It pushes information about the initial structure of the Universe out beyond our present horizon where we cannot see it; then, it overwrites the region we can see with new information. It is the ultimate cosmic censor."

Even lowly sociology has its contribution to make to the science of limits. Consider how different research groups, dispersed around the globe, used to develop their own approaches to scientific problems in decades past. Today, by contrast, the global connectedness fostered by the Internet has destroyed much of this diversity, and each subject area tends to coalesce into one research group. As a result, Barrow observes,

single, central paradigms are now strongly reinforced, and young researchers become increasingly involved in detailed elaborations of them.... Interpersonal contact is reduced, and contact with books and printed journals is minimized. Paradoxically, these trends have common consequences: they remove the chance of discovering new things by chance.

For each sociological view of science, there is a corresponding poetic image. For Karl Popper, science was a growing tree, "springing from countless roots...which ultimately, high up, tend to unite into one common stem." For Vannevar Bush, the creator of the National Science Foundation, it was a mansion under construction. For the French physicist Pierre Duhem, it was a gradually moving tide. Each of these images, as Barrow observes, carries its own implications for the completion of inquiry. For my money, however, the most arresting pair of images--and the most humbling to science--centers on the main begetter of the scientific revolution, Isaac Newton. Whereas William Wordsworth saw Newton "voyaging through strange seas of Thought, alone," Newton pictured himself rather differently: "like a boy playing on the sea-shore and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me."

John Barrow himself is partly responsible for a rather potent image of the end of science, the Omega Point. In The Anthropic Cosmological Principle (1986), co-written with the physicist Frank Tipler, he conjectured that, if the expansion of the universe is eventually arrested and the whole show ends in a Big Crunch, the energy available just before the final moment will permit literally infinite information processing. All possible knowledge will be computed in a split second just before the end, and all scientific theories will be completed.

Alas, it looks as if that optimistic scenario will never be realized. The New York Times recently reported that scientists have decided pretty much conclusively that there is not enough gravitational matter in the cosmos to reverse the expansion. Things will end not in a Big Crunch but in a Big Chill. So much for the Omega Point and the (almost) complete scientific knowledge it promised.

Oh well. To paraphrase the gastronome Brillat-Savarin: The discovery of a new dish does more for the happiness of humankind than the discovery of a new scientific theory.
