Virtual Atmosphere and Real Weather

Progress in numerical weather prediction continued throughout this decade. By 1952, Jule Charney and his Meteorology Project team had reached the point where it was time to talk about going "operational." In all scientific research projects there is a time when the technique under development needs to leave the controlled world of the laboratory, where scientists have virtually unlimited time to analyze and perfect data, adjust their methodology, and consult other scientists. A new methodology may work very well in the laboratory or computer center, but the true test of its worth occurs when it enters the "real world," where it must produce a usable product in limited time, with imperfect data and balky equipment. In the early 1950s, it was time for the virtual reality of numerical weather prediction to meet the real world.

Jule Charney

Jule Charney, Norman Phillips, Glenn Lewis, Norma Gilbarg, and George Platzman of the Meteorology Project at the Institute for Advanced Study, Princeton, New Jersey, 1952 (Photograph by project member Joseph Smagorinsky; AIP Emilio Segre Visual Archives)

Besides the need to convince theoretical and applied meteorologists that numerical weather prediction techniques accurately portrayed the future state of the atmosphere, meteorologists such as Charney in the United States and Rossby in Sweden faced three primary challenges. First, they needed a computer that would withstand the rigors of everyday use. John von Neumann's new computer (dubbed Johnniac) in Princeton could run the atmospheric models, but it had persistent hardware problems. The Swedish machine BESK, modeled on von Neumann's computer architecture, had similar problems but had been
built with operational use in mind. Second, they needed to be able to obtain data from around the world, sort out and remove the faulty observations, and feed the rest into the computer in a short amount of time. In the development phase, this often took weeks; for an operational forecast, meteorologists would have only a few hours to prepare the data. To solve this nontrivial problem, meteorologists worked with communications and computer specialists to develop automated data-processing techniques. Third, they needed more than just a handful of people to deal with the input of data and the interpretation of the results. This new breed of meteorologist needed a sense of the atmosphere and the mathematical skills to adjust atmospheric models until the computer-created virtual atmosphere looked like the real weather outside the window.

The Joint Numerical Weather Prediction Unit, a combined effort of the U.S. Weather Bureau, Air Force, and Navy, started producing its first operational weather maps in 1955, several months after Rossby's team and BESK began producing the same kinds of maps for Sweden. These first prognostic charts looked meteorological, but they were not as good as those produced by experienced weather forecasters. Instead of producing surface forecasts, which must take friction, and therefore topography, into account (a very difficult and time-consuming problem), the first models produced a chart for the 500-millibar (about 18,000-ft. [5,500-m]) level. Meteorologists chose this level because it is considered the midpoint of the atmosphere: half of the total mass of air in the atmosphere lies above this level and half below. Motion at this level largely determines what happens at the surface, and it was also the flight level for most airplanes of the time. With each computer run, meteorologists found additional problems with the models, which they revised and then put back into operation. Although it was not a fast process, continuous improvement allowed the gradual phasing out of hand-drawn charts in favor of the computer-generated versions that were sent electronically to civilian and military weather stations around the country and at sea.
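The mass-midpoint claim follows from hydrostatic balance, which says that the pressure at any level is the weight, per unit area, of all the air above it. A minimal sketch of the arithmetic, assuming a standard sea-level pressure of roughly 1,013 millibars:

\[
p(z) = g\,m_{\mathrm{above}}(z)
\quad\Longrightarrow\quad
\frac{m_{\mathrm{above}}(z_{1/2})}{m_{\mathrm{above}}(0)} = \frac{p(z_{1/2})}{p_{\mathrm{sfc}}} = \frac{1}{2}
\quad\Longrightarrow\quad
p(z_{1/2}) \approx \frac{1{,}013\ \text{mb}}{2} \approx 507\ \text{mb} \approx 500\ \text{mb},
\]

a pressure level that sits near 5,500 m (about 18,000 ft) in a standard atmosphere.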

Atmospheric models remained comparatively rudimentary throughout the 1950s because computers were not sophisticated enough to handle the large number of variables required to describe the atmosphere completely. Throughout the 20th century, computer models of weather, and later climate, were limited by the speed and memory of available computers. Nonetheless, the ability to test theoretical ideas about atmospheric behavior "quickly" (days instead of months) made numerical weather prediction a valuable tool for understanding the atmosphere. By the end of the decade, the USSR, Japan, Great Britain, and Germany had all established modeling groups in addition to those at the U.S. and Swedish centers. Their combined efforts led to rapid advances in scientific knowledge of atmospheric conditions and behavior.
