Ozone is another trace atmospheric gas, located in a layer between 15 and 50 kilometers (km) (9-31 miles) above the earth's surface in the stratosphere. Figure 5.5 displays the various layers of the atmosphere as well as the position of the ozone layer. It is in this highly chemically energetic region that ozone is constantly destroyed and regenerated via a series of photochemically catalyzed reactions: oxygen molecules migrating up from the troposphere absorb ultraviolet light and split into highly reactive oxygen atoms, which combine with nearby oxygen molecules to form ozone and heat. With solar radiation strongest above the equator, the photodissociation of oxygen is strongest there. It is from there that the newly formed ozone is carried around
Figure 5.5. Layers of the earth 's atmosphere. (Figure reproduced from Climate Change and Human Health, with the kind permission of the World Health Organization, Geneva.)
the earth and across the poles by stratospheric winds. Given the normal chemistry of the atmosphere, the level of ozone would be expected to remain constant. As we have seen, however, unique synthetic chlorine-containing chemicals have altered that. For more than two decades we have been witnessing, during a specific period each year, a severe decline in ozone levels. The "hole" in the ozone layer is produced each year during a 4-6-week period beginning in late September, which is the beginning of spring in the Southern Hemisphere. The area of the Antarctic ozone hole, about the size of the continental United States, has grown larger each year during this period.
Four to six weeks after the appearance of the Antarctic ozone hole, ozone from the Southern Hemisphere midlatitudes is carried to the South Pole by the atmosphere's general circulation patterns and replenishes the absent gas. The following spring the cycle is repeated. Ozone depletion due to the CFCs is not a theory; it is a demonstrated fact.
The synthetic halocarbon chemicals, once thought to be especially environmentally friendly, drifted into the upper atmosphere, where their chlorine atoms played hob with the ozone layer, destroying ozone molecules faster than they could be replaced. This was not a good omen for the folks beneath, who were, and continue to be, at increased risk of adverse health effects.
The 2005 springtime Antarctic ozone depletion, the ozone hole, began developing over the South Pole in late August. In early August, surface temperatures were -79°C (-110.2°F), the coldest recorded since the late 1990s. By September, ozone concentrations above the South Pole were 20% below the amounts measured in August. If this loss rate continues through October, the severity and size of the 2005 ozone hole will approach those of 2003, when the largest ozone hole previously measured opened over the Antarctic.
During March 2005, the ozone layer over Britain was reduced to half its normal thickness, a clear loss of shielding against UVA and UVB. The combination of the coldest Arctic winter and a high-pressure weather system over the North Atlantic had created ideal conditions for ozone loss. The ozone layer is usually 4-5 mm thick; in March it was down to 2.5 mm, and anything below 2 mm is considered a hole. A layer half its normal thickness will allow four times as much UV radiation to penetrate earthward [20, 21].
With the Montreal Protocol in effect, there is reason to be encouraged that continued decline in chlorine atoms in the upper atmosphere will prove effective and protective.
These half-dozen trace gases are just that, traces—minuscule amounts, yet they pack tremendous clout in that the temperature of our planet is controlled by them. So, yes, it does appear that humankind, with its addition to, and effect on, these gases does have the power to affect the forces of nature. As we shall now see, models are needed to simulate and quantify the climatic response to current and future human activity with respect to these trace gases.
But first, how do we know that we humans are in fact the culprits of the current and future warming? A salient question. Let us pursue the evidence, which comes from a number of sources.
First, carbon atoms in CO2 emitted by fossil fuels (coal, oil, peat, and natural gas) differ from the carbon atoms in CO2 from present-day plant material. As noted earlier, all living things contain carbon. Atoms of the same element that share its chemical characteristics but have different numbers of neutrons in their nuclei, and hence different atomic weights, are called isotopes. Carbon has three naturally occurring isotopes: carbon-12, -13, and -14. Carbon-12 has six protons and six neutrons; carbon-13 has six protons and seven neutrons; carbon-14 has six protons and eight neutrons. Carbon-12 and carbon-13 are stable isotopes, while carbon-14 (14C) is radioactive; that is, it emits radiation as it decays. Cosmic rays bombarding the upper atmosphere continually produce new 14C, replacing the 14C that decays away. Because of this constant replacement, the atmospheric ratio of 12C to 14C (12C/14C) remains roughly constant. As living things exchange carbon dioxide with the air, they take in carbon at this constant 12C/14C ratio. When living things die, the exchange stops and their 14C continues to decay away. Consequently, by determining the 12C/14C ratio, it is possible to determine the age of a sample of carbon. Ergo, the 12C/14C ratio in fossil fuels is far different from the ratio in present-day carbon-containing materials.
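The dating logic described above can be sketched numerically. The sketch below assumes only the standard exponential decay law and the well-known half-life of carbon-14, about 5,730 years; the function name is illustrative, not from the text.

```python
import math

C14_HALF_LIFE_YEARS = 5730.0  # half-life of carbon-14

def age_from_c14_fraction(fraction_remaining: float) -> float:
    """Estimate a sample's age from the fraction of its original
    carbon-14 that has not yet decayed (exponential decay law)."""
    if not 0.0 < fraction_remaining <= 1.0:
        raise ValueError("fraction must be in (0, 1]")
    decay_constant = math.log(2) / C14_HALF_LIFE_YEARS
    return -math.log(fraction_remaining) / decay_constant

# A sample retaining half its 14C is one half-life old:
print(round(age_from_c14_fraction(0.5)))       # 5730
# After 10 half-lives almost no 14C remains; fossil fuels, tens of
# millions of years old, contain effectively none:
print(round(age_from_c14_fraction(2 ** -10)))  # 57300
```

This is why CO2 from burning fossil fuels dilutes the atmosphere's 14C while CO2 from recent plant material does not.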
The carbon ratio of the CO2 trapped in air bubbles of ancient ice cores is very different from that of the CO2 trapped in air bubbles since the 1700s. Twentieth-century observational records and measurements on polar-ice-trapped bubbles show a 20-30% increase in CO2 over preindustrial levels; levels of CO2 in ancient ice cores, over 200,000 years, are some 25% less than current levels. Fossil fuels were formed tens to hundreds of millions of years ago, and the fraction of their carbon nuclei that was once radioactive is no longer so, while CO2 from relatively recent natural sources remains radioactive. As we have seen in Figure 5.2, atmospheric CO2 levels have been increasing steadily since recording began in 1958. Moreover, there is more CO2 in the atmosphere over the Northern Hemisphere than over the Southern Hemisphere; most human activity resides in the Northern Hemisphere, and it takes about a year for Northern Hemisphere emissions to circulate through the atmosphere and reach the Southern Hemisphere. Also bear in mind that about 95% of the total atmospheric CO2 is of natural origin. It is the roughly 5% added by human activity that appears sufficient to tilt the energy budget toward a warmer world, with its many predicted dislocations, based on model projections. This raises two questions: are the predictions reliable, and why models?
We first consider the models. Models here mean computer simulations, because our earth is far too large to bring into any laboratory, and far too complex, with far too many variables to test singly or even several at a time. The many forcings, both positive and negative, more often than not interacting simultaneously, can be managed only by supercomputers. Models do work, and with good reason: calculus, the mathematics of continuous change. The idea that change or motion can be represented by mathematical equations is the essence of computer modeling of global climate, with its ever-changing fluidlike atmosphere and broad range of interacting elements. If climate variables can be represented by an equation, or equations, then calculus can deal with them via differentiation and integration: differentiation computes the rate at which one variable changes with respect to another at any instant, while integration takes an equation in terms of rates of change and converts it into an equation in terms of the variables that do the changing. Evidence of continuously varying change, if required, is at hand: motor vehicles moving at changing velocities; birds in flight, wheeling and soaring; breezes on a balmy day; a baseball player preparing to steal a base. In the words of the ancient Greeks, "All things flow."
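As a concrete illustration of rates of change and integration at work, here is a minimal sketch of the simplest possible "climate model": a zero-dimensional energy balance in which the rate of temperature change is absorbed sunlight minus emitted infrared, stepped forward in time by repeated small increments (forward-Euler integration). The constants are textbook round numbers, not values from any real climate model.

```python
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR = 1361.0         # solar constant, W m^-2
ALBEDO = 0.30          # planetary albedo (fraction of sunlight reflected)
EMISSIVITY = 0.61      # effective emissivity, a crude greenhouse proxy
HEAT_CAP = 4.0e8       # heat capacity of an ocean mixed layer, J m^-2 K^-1

def dT_dt(T):
    """Rate of temperature change: absorbed solar minus emitted infrared."""
    absorbed = SOLAR * (1 - ALBEDO) / 4.0
    emitted = EMISSIVITY * SIGMA * T ** 4
    return (absorbed - emitted) / HEAT_CAP

def integrate(T0, years, dt_seconds=86400.0):
    """Forward-Euler integration: accumulate many small daily steps."""
    T = T0
    steps = int(years * 365 * 86400 / dt_seconds)
    for _ in range(steps):
        T += dT_dt(T) * dt_seconds
    return T

# Started cold, the model relaxes toward equilibrium near 288 K (~15 C):
print(round(integrate(270.0, years=50), 1))
```

Real GCMs solve vastly more equations at millions of points, but the principle, differentiate to get rates, integrate forward in time, is the same.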
The atmosphere and oceans are in constant motion, continually changing. If the foregoing is correct, the elements of climate, those that are known, should be reducible to mathematical statements whose solutions provide descriptions of climate over time. It makes sense. Calculus has worked for over 250 years; it needs no defense. But it is worth remarking that if, say, a model's description of drag forces on a plane's wing, or of the stability of a car ferry running in high seas, proves less than satisfactory during trial runs, then reruns are not a problem. Climate, however, cannot be studied under controlled conditions in the field or in a lab; it will not hold still for appraisal. Nevertheless, questions must be posed, and an essential one is this: Is the nature of climate, the process, sufficiently well understood to reduce to an appropriate set of equations? Do we need to know it all? How much comprehension is necessary to obtain credible answers? An answer is not elusive, but it must be understood that this vast natural phenomenon of planetary climate can be studied only by computer simulation. There is no other way; it's all we've got. Over the past 40+ years, since the first simulations in 1963, models have improved and become far more sensitive and reliable. The fact that relationships can be expressed in the language of mathematics does not mean that they are without fault or error. But models are grounded in established physical laws, including the laws of gravity and the conservation of energy, momentum, and mass.
It is this reliance on basic physical laws that lends credence to the prediction that a buildup of greenhouse gases will lead to an alteration in the earth's climate. Components, or coupled combinations of components, of the climate system can be represented by models of varying complexity. The most complex atmosphere and ocean models are referred to as general circulation models (GCMs) or air-ocean GCMs and, as noted, are expressed mathematically. Current models are solved spatially on a three-dimensional grid of points on the globe, as shown in Figure 5.6, with a horizontal resolution of 250 km and 10-30 vertical levels, or boxes. A typical ocean model has a horizontal resolution of 125-250 km and a vertical resolution of 200-400 m.
Consider a grid covering the surface of the earth and having a number of vertical layers or boxes: a network of points. The computer must calculate temperature, pressure, wind velocity, humidity, cloudiness, and dozens of other variables at literally millions of points, and each of these calculations must be repeated at timesteps of a few minutes. A typical model could have a horizontal resolution of about 6° longitude × 6° latitude with 10 vertical levels, which gives about 18,000 grid points, or fields, to be calculated. If there are only 8 variables at each grid point, 144,000 evaluations are needed at each timestep. A 25-year climate simulation with half-hour timesteps requires 438,000 timesteps. If 100 arithmetical operations are needed to produce one field at one point for each timestep, about 6 × 10¹² calculations are required. A very fast computer can calculate at a rate of 10⁹ operations per second, which indicates that such a climate simulation would require about 6000 seconds, roughly 100 minutes, of running time, and even this provides climate values only at points separated by gaps of hundreds of miles. Within each box there is only one value representing the climate within, say, 100,000 square miles. Supercomputers and ever more sensitive models are constantly reducing these gaps.
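The arithmetic in this paragraph can be reproduced in a few lines; every figure below comes directly from the estimates in the text.

```python
# Reproduce the back-of-envelope cost estimate for a coarse climate model.
lon_points = 360 // 6           # 6-degree longitude spacing -> 60 points
lat_points = 180 // 6           # 6-degree latitude spacing  -> 30 points
levels = 10
grid_points = lon_points * lat_points * levels
print(grid_points)              # 18000 grid points

variables = 8
fields = grid_points * variables
print(fields)                   # 144000 evaluations per timestep

timesteps = 25 * 365 * 48       # 25 years of half-hour steps
print(timesteps)                # 438000 timesteps

ops = fields * timesteps * 100  # ~100 arithmetic ops per field per step
print(f"{ops:.1e}")             # about 6e+12 operations in all

seconds = ops / 1e9             # at 10^9 operations per second
print(round(seconds / 60))      # about 105 minutes of compute
```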
Usually the first step attempts to simulate and quantify the present climate for several decades without any external forcing. The quality of these simulations is assessed by comparing them with the actual past climate. The next step would introduce external forcings by, for example, doubling the concentration
Figure 5.6. Modelers divide the earth's surface and atmosphere into a grid of "boxes" to more effectively manage their data. Here, an exploded view of a single box with its nine levels suggests the degree of complexity of the interacting variables. More than 17,000 boxes or climate units are used to obtain a "picture" of world climate.
of CO2 and running the model to a new equilibrium. The difference between the simulations provides an estimate of the change due to the doubling, and perhaps more importantly, the sensitivity of the climate to a change in radiative forcing. Other simulations would include combinations of GHGs, aerosol additions, and cloud and ocean effects. But these multiple variables do require supercomputers and enough time to achieve another equilibrium climate, which could take weeks or months.
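A doubled-CO2 experiment of this kind is often summarized with a back-of-envelope approximation: radiative forcing grows logarithmically with concentration, and equilibrium warming is roughly proportional to forcing. The sketch below uses the widely quoted 5.35 W/m² logarithmic coefficient and an assumed climate-sensitivity parameter of about 0.8°C per W/m²; both numbers are illustrative simplifications, not outputs of the models discussed here.

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified logarithmic radiative-forcing estimate, in W/m^2.
    The 5.35 coefficient is a commonly used approximation."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def equilibrium_warming(forcing, sensitivity=0.8):
    """Equilibrium temperature change for a given forcing, assuming a
    climate-sensitivity parameter of ~0.8 C per W/m^2 (an assumption)."""
    return sensitivity * forcing

delta_f = co2_forcing(560.0)   # doubled preindustrial CO2 (280 -> 560 ppm)
print(round(delta_f, 2))       # about 3.71 W/m^2 of extra forcing
print(round(equilibrium_warming(delta_f), 1))  # roughly 3 degrees C
```

The logarithm is why each successive increment of CO2 adds somewhat less forcing than the one before, and why modelers speak of warming "per doubling."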
Recently the National Center for Atmospheric Research, in Boulder, Colorado, introduced a powerful new version of a supercomputer-based system to model climate and project global temperature increases. This new system, the Community Climate System Model, version 3 (CCSM3), has indicated in early trial runs that the global temperature may rise more than its previous version had projected, if we humans continue to emit large quantities of CO2 into the atmosphere. CCSM3 has projected a temperature rise of 2.6°C (4.7°F) in a scenario in which atmospheric levels of CO2 are suddenly doubled, more than the 2°C (3.6°F) increase predicted by the earlier version. Although the developers of the new version believe that it is a more accurate model, they do not yet know why it is so much more sensitive to increased CO2 levels. Be that as it may, the fact that models are based on known physical laws and can reproduce many features of current and past climates gives increasing confidence in their reliability for projecting future climate. Although models are far from crystal balls, they have shown that warming would not occur uniformly over the planet: it is expected to be more intense at the higher latitudes, and greater in winter than in summer.
Recently, climate researchers at the Goddard Institute for Space Studies reported that their climate model was validated by actual measurements of the oceans' heat content. From both the model and the measurements, they reported that "the earth was currently receiving 0.85 ± 0.15 watts per square meter more energy from the sun than it is emitting into space." This imbalance, they inform us, "is confirmed by precise measurements of increasing ocean heat content over the past ten years". Indeed, this is the type of validation that will give models greater credibility and reliability.
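The size of that reported imbalance can be put in perspective with a one-line calculation: 0.85 watts per square meter, summed over the earth's whole surface for a decade, is an enormous amount of heat. The figures below are rough round numbers, not from the GISS paper itself.

```python
EARTH_AREA_M2 = 5.1e14      # surface area of the earth, square meters
IMBALANCE_W_M2 = 0.85       # net energy imbalance from the GISS result
SECONDS_PER_YEAR = 3.156e7

# Energy accumulated planet-wide over ten years, in joules:
joules_per_decade = IMBALANCE_W_M2 * EARTH_AREA_M2 * SECONDS_PER_YEAR * 10
print(f"{joules_per_decade:.2e}")   # on the order of 10^23 joules
```

Most of that heat ends up in the oceans, which is precisely why rising ocean heat content is the measurement that confirms the model.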
In 1988, the World Meteorological Organization and the United Nations Environment Programme created the Intergovernmental Panel on Climate Change, the IPCC, an assemblage of the world's foremost climate scientists who would seek to interpret the flow of computer-produced data from around the world. Working Group I was given the task of assessing the climate change issue; Working Group II was to assess the impacts of climate change; and Working Group III was to formulate response strategies. Over the ensuing years, three major reports have been published presenting their best estimates of warming scenarios. The most recent, the Third Assessment Report, was published in 2001. The Group I assessment projected three possible scenarios, shown in Figure 5.7, based on levels of CO2 emitted and the level of warming that each could engender. A fourth report is a work in progress and is expected to be published in 2007. However, indications of what could be expected were discussed at a recent IPCC meeting in Paris. Over the span of the three assessment reports, the range of the likely climate sensitivity to be expected by the year 2100 was 1.5-4.5°C (2.7-8.1°F). Certain model predictions indicated a modest warming, while others found that temperatures could rise by a scorching 4.5°C (8.1°F). Currently, scientist/modelers, using more powerful computers and equipped with a better understanding of atmospheric processes, are beginning to reduce uncertainty and appear to be reaching a consensus for a moderately strong rise in temperature. Evidence seems to converge on a 3°C (5.4°F) rise for a doubling of CO2, with a range of 2.6-4.0°C. As we shall shortly see, the disadvantages of such a warmer world far outweigh the advantages. When published in 2007, this will not be welcome news. But it may well shake up the skeptics.
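A note on the Fahrenheit figures above: because they describe temperature differences, they are converted by multiplying by 9/5 only; the +32° offset applies solely to absolute readings. Applying the absolute-reading formula to a difference would turn a 1.5°C rise into a nonsensical "34.7°F rise." A two-line check:

```python
def delta_c_to_f(delta_c):
    # Temperature *differences*: scale only, no offset.
    return delta_c * 9.0 / 5.0

def absolute_c_to_f(temp_c):
    # Absolute temperature *readings*: scale, then shift by 32.
    return temp_c * 9.0 / 5.0 + 32.0

print(round(delta_c_to_f(1.5), 1), round(delta_c_to_f(4.5), 1))  # 2.7 8.1
print(round(delta_c_to_f(3.0), 1))      # 5.4
print(round(absolute_c_to_f(1.5), 1))   # 34.7 (a reading, not a rise)
```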
The British government, for example, is fully convinced that rising global temperatures are not in its best interests. For them, global warming is approaching a critical point of no return. Beyond that point they are certain that drought, crop failure, and rising sea levels will be irreversible. The American
position, championed by President George W. Bush, argues that cuts in carbon emissions would sorely damage the US economy.
David A. King, Chief Scientific Advisor to the British government, recently scoffed at that idea. As he put it, "It's a myth that reducing carbon emissions necessarily makes us poorer." "Taking action," he said, "to tackle climate change can create economic opportunities and higher living standards." And he spelled it out with numbers: "Between 1990 and 2000," he observed, "Great Britain's economy grew by 30%, employment increased by 4.8%, and our greenhouse gas emissions fell by 12%."
I recently asked James Hansen of GISS whether he was still optimistic that global warming could be avoided. He was optimistic "in the sense that it is possible to limit climate change to a level that avoids the most serious problems, but only if concerted actions are taken . . . If they are not, if the U.S., China and India build huge infrastructures of conventional coal-fired power plants, then we are in trouble".
It is this "trouble" that many countries of the world tried to deal with at the First World Climate Conference in 1979—over two decades ago, as it morphed into the Kyoto Protocols—and are still grappling with.
The First World Climate Conference, held in Geneva, Switzerland, presented the initial evidence of the adverse effects of human interference with global climate and the consequences that could ensue. In 1988, the UN General Assembly enacted resolution 43/53, urging the "Protection of global climate for present and future generations of mankind," which recognized climate change as a concern for all people. It was also in 1988, as noted above, that the IPCC was created; 2 years later it issued its first assessment report.
It was evident that anthropogenic emissions knew no boundaries and that no country would be spared, as economic downturns in one area would lead to negative changes in others. A global treaty was negotiated, and in May 1992 the United Nations Framework Convention on Climate Change (UNFCCC) was adopted; at the Earth Summit Conference in Rio de Janeiro, countries signed on. But its provisions were insufficient to deal effectively with climate change. Firmer commitments were adopted in Kyoto, Japan, in 1997, and the Kyoto Protocols were born. But Kyoto's emission reduction requirements have been sticking points for the Bush Administration. What did Kyoto require? In general, parties to the Protocol, which the US government continues to resist, must reduce or limit their emissions relative to their 1990 levels. Developed countries must reduce their collective emissions of GHGs by at least 5% by the end of the first commitment period, 2008-2012. But reduction targets do vary among the first-world countries. The Russian Federation and Ukraine were especially favored: they are required only to stabilize their emissions, and bear in mind that Russia is the world's second highest emitter after the United States, with approximately 17% of the total. Australia, Iceland, and Norway could even increase emissions, but the countries of the European Union, Canada, Japan, and the United States were required to cut emissions in order to achieve the group's 5% goal.
As of April 2004, 124 countries had signed the Protocol. However, their total carbon emissions did not reach the 55% of CO2 emissions required for ratification until Russia signed on late in 2004. The United States, the world's largest emitter with 36% of the total, remains a holdout, believing there is insufficient evidence warranting a 7% reduction below its 1990 level. "Although it is important to set targets and timetables, the fundamental problem of climate change cannot be settled that simply" [26, 27].
Furthermore, under UNFCCC, the countries of the world pledged to avoid dangerous human interference with the climate. But "dangerous" was never defined, and over the past decade emissions have done nothing but climb. Current CO2 levels have not been exceeded over the past 420,000 years, and the rate of increase during the twentieth century has been unprecedented during the past 20,000 years. Indeed, this past century has set many unfortunate records.
On February 16, 2005, the Kyoto Protocol formally took effect, but there remains no agreement on what level of cuts would lead to climate stability. Too many people who ought to know better are urging more research, which is easier than making the difficult decisions required. Dr. James Hansen called for a sense of urgency in 1988, and in February 2005 he reiterated his initial urging: "I think that the scientific evidence now warrants a new sense of urgency." At the World Economic Forum meeting that February, Tony Blair, Prime Minister of Britain, urged the United States to join the industrialized nations in agreeing to curbs on GHGs, saying, "It would be wrong to say that the evidence of danger is not clearly and persuasively advocated by a very large number of entirely independent and compelling voices."
As the new, stricter accords take effect, there is some resentment. Europeans have set some of the most stringent emission-reduction targets, and they resent that the United States and China resist bearing the extra costs of emission reduction. More importantly, they know that the goal of curtailing emissions will not be realized without those countries, because the atmosphere is a global, not a local, problem. The fact that time is not on anyone's side has not been lost on a number of major international corporations, which are changing their behavior regardless of whether they are in Kyoto member countries. They know that career politicians will not take strong positions supporting emission reductions and jeopardize their elected offices, since climate change deals with a world well beyond their lifetimes.
Michael G. Morris, CEO of American Electric Power, had it right when he said that further delay will serve no one and will be detrimental. This was highlighted in July 2005, when the attorneys general of eight states (Connecticut, California, Iowa, New Jersey, New York, Rhode Island, Vermont, and Wisconsin) filed a lawsuit against the country's five largest power-plant CO2 emitters: American Electric Power, the Southern Company, TVA, Xcel Energy, and the Cinergy Corporation. These power companies account for about a quarter of the American power industry's CO2 emissions and about 10% of the nation's total CO2 emissions. Their suit, filed under the federal public nuisance doctrine, claims that the emissions clearly damage their states. That doctrine maintains that if a company's activities in one state cause harm in another, the state where the harm occurs may sue to halt the injurious activity. This is not a new law; apparently it has been applied for over 100 years in interstate pollution cases. Whether they win or lose is not the point. More to the point is the fact that global warming, or climate change, for those who prefer the less threatening expression, appears to be moving away from the political arena, where paralysis seems to have set in, to the movers and shakers, who may just force the necessary changes because it is in their best interests to do so.
Adding his clout, Jeffrey Immelt, the CEO of General Electric and one of the heavy hitters in the business world, made it clear in May 2005 that GE, the largest company in the United States, would double its investment in energy and environmental technologies that would prepare it for what he sees as a huge global market for products that help other companies—and countries such as China and India—reduce emissions of GHGs. Further, Immelt believes that mandatory controls on CO2 emissions are necessary and inevitable. Many companies are moving in this direction in spite of the lack of federal emission regulations, which confer a competitive advantage on those who do nothing.
The questions that linger are whether there is observable evidence of a global warming trend, what means are available for cutting emissions, and whether CO2 can be appropriately collected and disposed of.