The postwar revolution in meteorology that was ignited by digital electronic computers gave rise to a long dawn of dreamy ambition among weather forecasters and researchers. In the 1950s and 1960s, the study of weather was finally coming into its own as both a science and a service. At last, scientists had the technical wherewithal to apply the objective tools of the laws of physics to unraveling the old mysteries of the atmosphere. Those ingenious Numerical Weather Prediction computer programs were instruments of incredible power and scope, and the quickening pace of technological advance seemed to promise limitless potential. The whole turbulent, unwieldy problem had been compressed into something like an electronic bottle and at long last brought into the laboratory, where hypotheses could be tested and where increasingly sophisticated and accurate model atmospheres could bring every last detail into finer and finer focus. Not only had usable weather forecasts been achieved, but a workable model climate had been created.
It was a boom time. Returning World War II veterans had swelled the ranks of the science at universities, in the U.S. Weather Bureau, and in private industry. Governments were supplying increasing funds for its growth, fueling major research programs. The combatant nations found themselves with large air forces requiring elaborate meteorological services, and growing civilian aviation fleets were making increasingly challenging demands on weathermen. Technology was defining both the problem and the solution. Advances in computing power, rapidly enlarging networks of observations, and the successful launch of TIROS I, the world's first weather satellite, all inspired confidence that meteorologists could accomplish their new mission. Researchers and forecasters who long had wrestled with the mysteries of weather seemed free at last from the hoary veil of frustration and disappointment.
From long labor were coming the fruits of an old promise. In 1814, the great French mathematician Pierre-Simon Laplace had famously described a vision of a clockwork world that physical scientists had long taken as an article of faith. The present state of the universe, wrote Laplace, should be seen "as the effect of its prior state and as the cause of the one that will follow." He envisioned an "intelligence which at a given instant knew all of the forces by which nature is animated and the respective situation of the things of which nature is composed . . . nothing for it would be uncertain, and the future, like the past, would be present to its eyes." Laplace had inspired Vilhelm Bjerknes's call a century later for a rigorously scientific meteorology; and as recently as 1959, the Swede Tor Bergeron described the weather forecast as "the most important and promising but still unsolved Laplacian problem on our planet."
Free and flush, impressed with Numerical Weather Prediction and infected with the optimistic spirit of the era, weather scientists in the 1960s greeted the new day with soaring expectations. Some allowed themselves to dream of a time when forecasting would finally attain the heights of precision achieved by astronomy, its older and more venerated sister science. As in astronomy, the laws of Isaac Newton had finally taken their rightful place in meteorology, and now what remained for weather science was to achieve the same degree of exactitude. Like astronomers calculating the distant return of a comet, meteorologists could see themselves wielding powers of prediction farther and farther into the future.
More than that, the day was not far off when weather not only would be forecast to perfection, but its rains and winds would be tamed. Like the bounties of land and sea, the unruly products of the sky would be intelligently redesigned to the purposes of enlightened humanity. The dangers of fog and hail would be relics of the past. Man-made showers would prevent droughts. Man-made droughts would prevent floods. Hurricanes would be suppressed or steered harmlessly out to sea. And should enlightenment fail, John von Neumann's ambition was gaining new currency: weather control would become a weapon of war.
Even as weather scientists were chasing these goals, however, another line of research was beginning to cast a very different light on the character of the atmosphere and the problem of accurate forecasting. While most meteorologists concerned with weather prediction were trying to extend the range or improve the accuracy of forecasts, an uncommonly gifted researcher at the Massachusetts Institute of Technology was going his own way. Ed Lorenz was using his new Royal McBee computer to operate a stripped-down numerical forecasting model that posed this different, more fundamental question about the atmosphere: Just how predictable is it? In 1961 came an important answer that was a showstopping disappointment to the most optimistic weather forecasters.
In the hands of a less mathematically inclined scientist, the fundamental limits to the detailed predictability of the atmosphere might have gone undefined for years. In the event, it required a kind of intuitive leap that only a born mathematician was likely to make. "As a small child, I enjoyed playing with numbers," Ed Lorenz would recall of his days growing up in suburban West Hartford, Connecticut, as the son of a mechanical engineer in a home where the game of chess was popular recreation. He went to Dartmouth College, where he majored in mathematics, and then in 1938 he enrolled at Harvard University as a graduate student in math. Like Jule Charney, Lorenz was made a meteorologist in the 1940s by the exigencies of the impending war. He came down Massachusetts Avenue from Harvard to MIT to learn how to be a weatherman in one of the nine-month crash courses that trained thousands of military forecasters as the country entered World War II. He served in the Army Air Corps, forecasting conditions for bomber pilots out of weather centrals in Saipan and Okinawa. At the end of the war, after pondering his choices, he decided he could accomplish more for meteorology than for mathematics and accepted a research position at MIT. He would study dynamics and teach statistical forecasting for a time, occupying a special disciplinary realm between research and application: not predicting weather but, more fundamentally, investigating its predictability.
Some statistical forecasters were arguing at the time that certain of their techniques could achieve the same accuracy as Numerical Weather Prediction. In the winter of 1961, Lorenz was putting that claim to the test, running a stripped-down model atmosphere on his Royal McBee LGP-30 computer. If patterns of periodic behavior emerged, he reasoned, the statisticians might be right. Certain patterns repeated themselves, just like weather, although they were not regularly periodic. Along the way, however, he encountered a circumstance that looked at first like an accident or a computer glitch.
He had interrupted the running of the program and wanted to restart it at a midway point in its operation. To do this, he retyped into the computer the last values of the simulated weather variables it had printed out. And then he went down the hall to get a cup of coffee, leaving the Royal McBee to chug along noisily. When Lorenz returned, he found that the computer program, rather than repeating its previous patterns, now was cranking out very different kinds of "weather." After checking out the obvious explanations, like a blown vacuum tube, the mathematician began looking more closely at the numbers. An extremely small but enormously significant difference emerged. As programmed, the computer was operating with the variables carried out to six decimal places, although he had instructed it, when printing out its results, to round off the values to three decimal places, to thousandths. Lorenz had punched in these slightly abbreviated values as the new initial conditions. He suddenly realized that great differences in the weather patterns had grown from the minute differences between the two sets of figures.
"This was exciting," Lorenz recalled. "If the real atmosphere behaved in the same manner as the model, long-range weather prediction would be impossible, since most real weather elements are certainly not measured accurately to three decimal places. Over the following months, I became convinced that the lack of periodicity and the growth of the small differences were somehow related and I was eventually able to prove that, under fairly general conditions, either type of behavior implied the other. Phenomena that behave in this manner are now collectively referred to as chaos. This discovery was the most exciting event in my career."
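Lorenz's machine was running a twelve-variable forecasting model, but the effect he stumbled on is easy to reproduce today. The sketch below is a hypothetical illustration, not a reconstruction of his program: it integrates the three-variable system from his 1963 paper twice, once from a full-precision starting point and once from the same point rounded to three decimal places, as his printout had been. The starting values themselves are arbitrary.

```python
# A minimal illustration (not Lorenz's original twelve-variable model):
# the three-variable system from his 1963 paper, integrated twice from
# starting points that differ only past the third decimal place.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz (1963) equations by one forward-Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

def run(state, steps):
    """Integrate from a given (x, y, z) state for a number of steps."""
    x, y, z = state
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x, y, z

full = (1.234567, 2.345678, 3.456789)       # values held to six decimal places
rounded = tuple(round(v, 3) for v in full)  # what a three-decimal printout kept

a = run(full, steps=2000)     # 20 model time units
b = run(rounded, steps=2000)
print("full:   ", a)
print("rounded:", b)  # by this point the trajectories typically bear no resemblance
```

Swapping forward Euler for a higher-order integrator changes the details of each trajectory but not the outcome: any difference past the measured digits is amplified until the two runs are as unalike as two randomly chosen states of the system.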
Lorenz had discovered a characteristic inherent not only to the atmosphere but to many systems, although it would take years for physicists and biologists and others to locate his groundbreaking 1963 paper, "Deterministic Nonperiodic Flow," in the Journal of the Atmospheric Sciences and to absorb its meaning for their own disciplines. The key phrase is "sensitive dependence on initial conditions": small differences in initial conditions lead to large differences in outcome. This principle would come to be known as the butterfly effect, a coinage inspired by an only slightly fanciful question Lorenz posed as the title of a talk to the American Association for the Advancement of Science in 1972: "Predictability: Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?"
The principle of chaos ranges far and wide over the sciences. All of the signal noise that scientists had been smoothing out for the sake of finding patterns turned out to be more important than anyone supposed. Unstable systems of multiple variables act in a way that confounds intuition: small differences in input don't lead to small differences in output, but inevitably they lead to big ones. Writing in 1994, meteorologist Stanley David Gedzelman, at the City College of New York, described the impact of the discovery in the magazine Weatherwise. "For centuries scientists put on blinders, ignoring quirky and unpredictable phenomena in order to ferret out a tenuous sense of order in nature," he wrote. "This is the triumph of science, but it made us deny that chaos lurks everywhere. Ed Lorenz not only opened our eyes to the ever-present chaos in nature but also found its governing principles. He crowned 20th century meteorology with a discovery that irreversibly changed our view of the world."
Chaos eventually would become the business of physicians studying the irregular beats of the heart, of biologists trying to understand sudden spikes or collapses of animal populations, and of physicists, astronomers, chemists, and geologists reexamining unpredictable behavior in their fields.
To meteorology, its meaning was readily apparent, as Lorenz recognized immediately in 1961. He concluded in his landmark 1963 paper: "In view of the inevitable inaccuracy and incompleteness of weather observations, precise very-long-range forecasting would seem to be nonexistent." No matter how perfect the observations or how perfect the implementation of the laws governing the turbulent processes of the atmosphere, detailed weather forecasting beyond several days would still be impossible. The features of weather are variable in both space and time, differing from one place to another, one moment to the next.
Drawing the line between what weather forecasters can do and what they can't, finding the intersection of the upper limits of detailed daily forecasts and the lower limits of chaos, has been the subject of continuing research since the 1970s. Computer-generated Numerical Weather Prediction occupies a central role in all this. The computer giveth, and the computer taketh away. Before Numerical Weather Prediction, the effects of chaos were lost in the imprecision and subjectivity of forecasting. As meteorologist Philip D. Thompson later related, the issue of predictability "wasn't even a very sensible question" until numerical methods came along. As Lorenz and subsequent researchers illustrated, the main instrument for analyzing the problem was the same one that revealed it: the Numerical Weather Prediction model.
While many meteorologists at the time were disinclined to pursue a "bad news" study of their practical limits, the meaning of Lorenz's work was immediately recognized by the great Jule Charney, his friend and fellow researcher down the hall at MIT. Lorenz recalled that in the mid-1960s Charney was actively promoting the big international meteorological study known as the Global Atmospheric Research Program. One of the big selling points for government financial support was the aim of GARP to produce two-week weather forecasts. The discovery of chaos posed an immediate threat to such a widely heralded ambition. Lorenz recalled that Charney "became concerned that two-week forecasting might be proven impossible even before the first two-week forecast could be produced, and he managed to replace the aim of making these forecasts with the more modest aim of determining whether such forecasts were feasible."
Charney discussed the possibility of chaotic behavior in 1964 at a conference in Boulder, Colorado, of leading meteorological researchers from 10 nations. Between sessions, he persuaded the global circulation modelers to test their computer models for sensitive dependence, as Lorenz had done, by running two forecasts with minor differences in initial conditions. The modelers reported back that on average, the small errors in temperature or wind pattern doubled in five days. "The five-day doubling time seemed to offer considerable promise for one-week forecasts, but very little hope for one-month forecasts, while two-week forecasts seemed to be near the borderline," Lorenz concluded.
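The borderline Lorenz describes is simple exponential arithmetic. Assuming only the five-day doubling time the modelers reported, an initial analysis error has grown by a factor of 2 raised to the power of t/5 after t days, until it saturates at the natural variability of the atmosphere itself:

```python
# Illustrative arithmetic only: if small errors double every 5 days,
# an initial error has grown by 2 ** (days / 5) after a given number
# of days (until it saturates at the climate's own variability).
for days in (7, 14, 30):
    growth = 2 ** (days / 5)
    print(f"day {days:2d}: initial error multiplied by about {growth:.1f}")
```

A one-week forecast contends with roughly a 2.6-fold amplification, a two-week forecast with about 7-fold, and a one-month forecast with a 64-fold blowup that swamps any achievable accuracy in the initial analysis: exactly the pattern of promise, borderline, and hopelessness Lorenz describes.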
The discovery that atmospheric motions are so sensitive to small differences in variables also caused the retirement of the so-called analog method as a tool for forecasting the details of daily weather. A mathematical study by Lorenz in 1969 proved the futility of looking for a set of past conditions that so closely matched the present that it told the future. "There are numerous mediocre analogues but no truly good ones," he concluded. The effects of small errors were difficult to calculate, he said, because truly small errors were hard to find. The recognition that small differences in initial conditions grow into large differences in weather cast a whole new light on the subject. A forecasting technique based on the apparent similarity of bygone conditions was not going to trick a chaotic atmosphere into revealing the weather of its future.
While the two-week range remains the most commonly accepted limit for detailed weather forecasting on theoretical grounds, modern forecasting's practical limit remains closer to a single week. Extended forecasts of increasing skill have been issued by the European Centre for Medium-Range Weather Forecasts in Reading, England, where the most powerful supercomputers are employed, and by the U.S. National Weather Service's National Centers for Environmental Prediction in Camp Springs, Maryland, which always seems to lag behind in computer horsepower and forecasting skill. In any case, a detailed two-week forecast currently is so eroded by multiplying error effects that under the best circumstances it offers only a modest advantage over pure guesswork.
Short-range accuracy has been boosted by numerical models of greater sophistication running on computers of greater power and speed, and over the first 40 years of operational Numerical Weather Prediction the range of accuracy has been extended from three days to about six. Well short of the two-week borderline suggested by studies of chaos, the signs of diminishing returns are already beginning to show. Extending the same level of detail and skill out beyond four or five days is coming at an increasingly high cost. And still there are surprises, embarrassing busted forecasts that every winter draw attention to the state of a science that modern users of its products rather remarkably take for granted. A man who before everyone else understood the meaning of chaos to the science of weather forecasting, Edward Lorenz has a different point of view on the subject: "To the often-heard question, 'Why can't we make better weather forecasts?' I have been tempted to reply, 'Well, why should we be able to make any forecasts at all?"'