Numerical weather prediction techniques had continued to improve throughout the 1950s. As computer power grew and meteorologists continued to develop a more sophisticated theory of the general circulation of the atmosphere, numerical modelers designed increasingly complex forecasting models. In fact, it often appeared that computer architecture would prove to be the primary obstacle to accurate short- and long-term weather forecasts. As computers could handle more data and process them faster, meteorologists would be standing by to exploit their new capabilities.
This overriding theme in modeling met with a disconcerting halt in the early 1960s with a discovery made by the MIT meteorologist Edward N. Lorenz. Lorenz had been working on his own computer model of the atmosphere and had made numerous "runs" of the data. That is, he had written the program, given it data, and then run it on the computer to produce a forecast. Typically in this kind of work, after a given run the model is adjusted and run again. One day, Lorenz decided to save time by putting in data produced from a previous run and starting the calculation from the middle instead of the beginning. Doing other things while the machine computed the new results, Lorenz was stunned upon his return to see that the new answer was wildly different from his previous run. The newly forecast weather pattern was not even remotely similar to any of the previously calculated patterns—it was almost unrecognizable.
Puzzled, Lorenz looked back to see where he could have erred. To save paper, he had printed out the results from the earlier run to only three decimal places. After all, the calculation was probably accurate to only three decimal places, and the remaining trailing digits should not have made a difference. Lorenz had been confident that there would be no problem with entering the three-digit numbers into the middle of the computer program. He was wrong. So was the prevailing idea that small differences in initial conditions would not make a difference in model output.
Intrigued, Lorenz continued to study how model output was affected by the minutest changes in model input. He concluded that the smallest differences could lead to radically different forecasts and that the differences became greater as the forecast period lengthened. His discovery meant that modelers would find it more difficult to produce long-range (months and years) forecasts than they had previously thought.
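The effect Lorenz stumbled on can be reproduced in a few lines of code. The sketch below (not Lorenz's original twelve-variable model, but the simpler three-variable system he published in 1963, with the standard parameter values sigma = 10, rho = 28, beta = 8/3, and an illustrative initial condition) integrates two copies of the system that differ only because one starting value has been rounded to three decimal places, just as in the printout story above:

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system one small Euler step."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

# Two runs: full starting value vs. the same value rounded to
# three decimal places (a difference of about 0.0001).
a = (1.000127, 1.0, 1.0)           # illustrative "full precision" start
b = (round(a[0], 3), a[1], a[2])   # rounded, as on Lorenz's printout

max_sep = 0.0
for step in range(8000):           # 40 model time units at dt = 0.005
    a = lorenz_step(a)
    b = lorenz_step(b)
    sep = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
           + (a[2] - b[2]) ** 2) ** 0.5
    max_sep = max(max_sep, sep)

# The tiny rounding error grows until the two "forecasts" are as
# different from each other as two unrelated states of the system.
print(max_sep)
```

The two trajectories track each other closely at first, then separate exponentially until they are completely decorrelated, which is exactly why small rounding in the middle of a run produced a forecast Lorenz could not recognize.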
Lorenz's discovery that infinitesimal changes in the atmosphere can lead to profound differences in atmospheric behavior became known as the "butterfly effect," from the title of his 1972 talk "Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?" given at the 139th meeting of the American Association for the Advancement of Science. In fact, Lorenz had originally used flapping seagulls as a metaphor for the idea that the very slightest movement somewhere on Earth could change the weather thousands of miles away. The point remained the same: The atmosphere is inherently unstable with respect to small physical changes—it is chaotic. Climate change is just as likely to be a rapid event as a slow one—and the probability that a long-range forecast will ever be perfect is extremely small.