It is difficult to address the sea level rise problem using tide gauge data, because water level records reflect many processes in addition to a secular trend. We have seen in Chapter 3 that tide gauge records appear very "noisy" due to seasonal-to-interannual and longer variations of water level. The purpose of this chapter is to use the longest tide gauge records, plus wind data and the appropriate physics, to see if there is hope for improving the signal-to-noise ratio. The results are very positive; interannual and longer variations of sea level on the U.S. east coast can be explained and modeled.

Some natural processes, such as the astronomical tides, have known, simple periods associated with them. It is surprising, therefore, to find energetic frequency bands (i.e., certain periods) in which the conspicuous variability is associated with an almost random forcing. The term "decadal" in the title of this chapter (and elsewhere) is used loosely. While there certainly is power at periods near a decade, power is actually spread over a wide frequency band. In normal usage the term implies "longer than a year and extending out beyond 10 years." If one should ask, "How much longer than a decade?" we must answer, with chagrin, "We don't know!" The periods of "decadal" sea level signals are longer than we can treat rigorously; existing measurement records are simply not long enough to support firm statistical conclusions about them.

Examples of this long-period variability are shown in figures found in other chapters of this book, especially Chapter 3, or, for example, in Woodworth (1990) or Douglas (1992). The decadal variability is not a new problem; Fig. 7.1 is patterned after the work of Hicks and Crosby (1974; or see, for example, Hicks et al., 1983). These are now-classic examples of sea level variability at a few U.S. tide gauges. Data such as these are available at the Web sites of the National Ocean Service and the Permanent Service for Mean Sea Level, and on the CD-ROM that accompanies this volume.

Figure 7.1 Sea level records from long-term stations along the coasts of the United States. The individual data points are the annual mean values; the solid line shows a low-passed version that retains variability at periods longer than 5 years but suppresses variability at periods shorter than 2.5 years.

These figures show how similar the variability can appear to be along our coasts, although a sharp eye will quickly find particular events at one location that are not duplicated at others. The records in Fig. 7.1 do, however, have in common an overall increase of sea level. In Alaska, by contrast, as in many high-latitude locations near glaciated regions, the long-term trend is clearly a fall of sea level. Geology, not oceanography, dominates at such locations.

Several important features in Fig. 7.1 deserve emphasis. First, while it is not apparent in the figures, there is an enormous amount of variability at higher frequencies that has been suppressed (or "filtered out"). Some details about the necessary signal processing are included in Appendix 7.1. Second, the noticeable bumps and wiggles are sometimes similar between New York and Charleston, as in the early 1970s, and sometimes not, as for example the large peak in the late 1940s at Charleston that is not found at New York. These features, perhaps to the surprise of many, are a result not of coastal processes but of the large-scale winds over the Atlantic Ocean.

The most important feature of Fig. 7.1 is that sea level rises over time at these mid-latitude stations—a phenomenon seen in nearly all long mid-latitude records. Typical values around the United States are approximately 2-4 mm/yr, or in ordinary terms, ~1 foot per century. While progress has been made in determining and understanding the worldwide increase of sea level, we are only now beginning to understand the causes of the decadal fluctuations. This topic is addressed later in this chapter.
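As a quick check of that conversion, taking 3 mm/yr (the middle of the range just quoted) as a representative rate:

$$ 3\ \mathrm{mm/yr} \times 100\ \mathrm{yr} = 300\ \mathrm{mm} = 0.3\ \mathrm{m} \approx 1\ \mathrm{ft\ per\ century}. $$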

The rise of sea level and its variability along mid-latitude coasts are well known; Sturges et al. (1998) have shown a direct connection between the decadal variability along the coast and similar fluctuations in the open ocean. In the central North Atlantic, however, at the latitudes of the subtropical gyre, there is but one island tide gauge: Bermuda. It is well known that at low frequencies the tide gauges at islands do a good job of representing the ocean on large scales; see, for example, Wunsch (1972) or Roemmich (1990). In contrast to the scarcity of sea level records in the broad oceans, along the U.S. east and west coasts there is a wealth of tide gauge data. The general features of these tide gauge records are a gradual rise of sea level embedded in a background of low-frequency variability; see, for example, the papers of Barnett (1984), Douglas (1991, and Chapter 3 of this book), Woodworth (1990), and Wunsch (1992).

Our main interest here is sea level at the coast. However, it is important for the analysis that follows to recognize one point: sea levels at mid-ocean tide gauges, such as Bermuda, or the equivalent variations derived from subsurface density data (Levitus, 1990), show variability much like that on the U.S. east coast: low-frequency variations having peak-to-peak amplitudes over 10 cm.

For examples of coastal sea level data, we show one record each from the U.S. east and west coasts. The records at Charleston and San Francisco are among the longest. Figure 7.2 shows the power spectra1 at these stations in two ways: the first panel, Fig. 7.2a, uses what most in this field would call the usual method, a log-log presentation. In the second panel, Fig. 7.2b, we use the variance-preserving form, which shows to the eye where (in frequency) most of the power lies. It would appear from the first panel that the power in these records is concentrated in a few "lumps" of energy in certain frequency bands. Even though these records are long, we should not attach much physical significance to the power in these specific frequency bands. If we cut the records into halves and compute the spectra for each half, we find for both of these long records (as well as for most others) that the high spots in one half correspond to the low spots in the other. That is, the distribution of power with frequency is seemingly random and varies enormously from decade to decade and from place to place.

1 These spectra are derived from the records of monthly data. The power peak at one year (frequency 0.0833 cycles/month) is typical of coastal gauges and is consistent with observations offshore. The annual cycle at these latitudes results largely from the annual cycle of stored heat. The other spikes are at submultiples of 12 months: 6 mo, 4 mo, etc.
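To make the two presentations in Fig. 7.2 concrete, here is a minimal sketch (not the code used for this chapter) of how one might compute a log-log and a variance-preserving spectrum from a monthly sea level series, and repeat the calculation on each half of the record to see how unstable the low-frequency "lumps" are. The synthetic record and all parameter choices (record length, segment length, noise levels) are illustrative assumptions.

```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Synthetic "monthly mean sea level" stand-in: a trend, an annual cycle,
# and red noise (an AR(1) process) to mimic the decadal variability.
n_months = 1200                      # 100 years of monthly values (illustrative)
t = np.arange(n_months)
trend = 0.2 * t                      # about 2.4 mm/yr, expressed in mm/month
annual = 60.0 * np.sin(2 * np.pi * t / 12.0)
red = np.zeros(n_months)
for i in range(1, n_months):
    red[i] = 0.95 * red[i - 1] + rng.normal(scale=15.0)
sea_level = trend + annual + red

def spectrum(x, fs=1.0, nperseg=256):
    """Welch spectrum of a detrended series; fs = 1 cycle per month."""
    return signal.welch(signal.detrend(x), fs=fs, nperseg=nperseg)

f, S = spectrum(sea_level)
f1, S1 = spectrum(sea_level[: n_months // 2])   # first half of the record
f2, S2 = spectrum(sea_level[n_months // 2:])    # second half

fig, (ax_loglog, ax_vp) = plt.subplots(1, 2, figsize=(10, 4))

# Panel A style: the ordinary log-log presentation.
ax_loglog.loglog(f[1:], S[1:], label="full record")
ax_loglog.loglog(f1[1:], S1[1:], label="first half")
ax_loglog.loglog(f2[1:], S2[1:], label="second half")
ax_loglog.set_xlabel("Frequency (1/month)")
ax_loglog.set_ylabel("Power density")
ax_loglog.legend()

# Panel B style: variance-preserving form, f * S(f) on a log frequency axis,
# so the area under the curve (to the eye) is proportional to variance.
ax_vp.semilogx(f[1:], f[1:] * S[1:])
ax_vp.set_xlabel("Frequency (1/month)")
ax_vp.set_ylabel("f x S(f)")

plt.tight_layout()
plt.show()
```

Comparing the half-record curves against the full-record curve shows how easily the apparent concentrations of low-frequency power shift from one piece of the record to another, even when the underlying process is unchanged.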

Figure 7.2 Power spectra of two long sea level records: San Francisco, California, and Charleston, South Carolina. Panel A shows the ordinary version, with log-log scales; Panel B shows the so-called variance-preserving form, in which the power in each frequency region is proportional to the area apparent to the eye. The smoothing is by 5 Hanning passes at the higher frequencies, decreasing toward lower frequencies to show the variance at the longest periods.

The stations that have the longest records usually show the long-term sea level rise most clearly. This is so because fluctuations on the order of 10-20 cm peak-to-peak are typical of the decadal sea level variability, and this variability, of course, obscures changes in the underlying trend; only in a sufficiently long record does the secular rise stand out above it.

7.1.1 Magnitude of the Long-Term Rise of Sea Level

Suppose we wish to determine the slope of the sea level signals at New York, for example (see Fig. 7.1). It is a trivial task to make a least-squares, straight-line fit to any set of data. If all we want is a good estimate, a pencil and a ruler will do. Yet suppose for a hypothetical record that the variability from decadal signals caused sea level to be unusually low at the beginning of our record, such as in the late 1920s at New York or 1945 at San Francisco.

And by essentially random events, suppose that in our hypothetical record the sea level was unusually high at the end, such as near 1920 at New York or 1940 at San Francisco. As discussed in detail in Chapter 3, such anomalies would lead to an apparent slope that is much too great compared with the true long-term trend. We only know that the computed trend is too great because, in our hypothetical example, we can see that the random fluctuations are low at the beginning and high at the end, and we can see this only with the benefit of an "adequately long" record. In general, of course, we do not have such insight.
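A minimal numerical illustration of this bias (our own sketch, with entirely synthetic numbers, not the tide gauge data themselves): fit a least-squares line to a record consisting of a known trend plus red "decadal" noise, and compare the recovered slope when the analyzed segment happens to start low and end high.

```python
import numpy as np

rng = np.random.default_rng(1)

true_trend_mm_per_yr = 2.0
years = np.arange(1900, 2000)                  # a 100-yr hypothetical record

# Red (AR(1)) noise standing in for decadal variability,
# roughly 10-20 cm peak-to-peak.
noise = np.zeros(years.size)
for i in range(1, years.size):
    noise[i] = 0.9 * noise[i - 1] + rng.normal(scale=25.0)   # mm

sea_level = true_trend_mm_per_yr * (years - years[0]) + noise

def fitted_slope(t, y):
    """Least-squares straight-line slope, in mm/yr."""
    return np.polyfit(t, y, 1)[0]

print("full record slope:    %.2f mm/yr" % fitted_slope(years, sea_level))

# Mimic the unlucky case described in the text: a shorter record that
# happens to begin during a low excursion and end during a high one.
start = np.argmin(noise[:30])                       # a low point early on
end = years.size - 30 + np.argmax(noise[-30:])      # a high point near the end
print("short, unlucky slope: %.2f mm/yr"
      % fitted_slope(years[start:end + 1], sea_level[start:end + 1]))
```

Because the shorter segment begins low and ends high, its fitted slope generally comes out larger than both the full-record slope and the true 2.0 mm/yr, exactly the kind of error described above.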

A person trained in signal processing will tell us that it is also possible to filter these decadal signals—to remove the undesired variations—to reveal the underlying trend. But two problems arise. First, what periods (or frequencies) shall we remove? This turns out to be a most difficult choice—one not provided by the external parameters of the problem. And second, after an appropriate filtering operation, it is necessary in high-accuracy applications to discard long segments at the ends of the record.2 After much filtering, therefore, little signal remains!

Figure 7.2 shows the spectra of two of the long-term tide gauges. The "decadal signals" causing the problems just described have their main power at periods longer than 100 months. (Although the absolute amount of power in the records is not large at these long periods, as can be seen in Fig. 7.2b, when we examine the low-frequency motions for the sea level rise problem this power becomes the dominant issue.) The power continues to increase out to the lowest frequencies; this tendency is called a "red spectrum," somewhat in analogy to red light appearing at the low-frequency end of the visible spectrum. It requires care to obtain a meaningful spectrum when the power is found at periods that are not short relative to the length of the record. Most standard methods designed for use in electrical engineering, for example, are constructed with the implicit assumption that one can obtain a signal of any necessary duration. Such methods are not appropriate in most oceanographic or geophysical situations. To remove this low-frequency power by careful filtering would require that we discard on the order of 100-200 months of signal at each end of the record. Different filtering methods produce somewhat different results, but this loss of data at the ends is an essential feature of accurate analysis. In other words, one is faced with the necessity of discarding on the order of 200-400 months of signal in total if common filtering techniques are to be employed. This is a major fraction of the length of most records! For a more familiar example, observing a single cycle of the semidiurnal tide would give us a nice observation of one cycle of something, but no knowledge about the fortnightly tides. A general rule of thumb in such analyses is that until we have seen roughly 10 cycles of a process we do not have enough information to determine meaningful statistics about it. As will be shown below, the seemingly random fluctuations in sea level data appear to have periods at least as long as 20 years, so the rule of thumb is not satisfied.
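As a rough sketch of the end-effect problem (our illustration, not the processing actually used for the figures in this chapter; the filter type and window length are assumptions): a convolution filter with a window long enough to suppress periods of order 100 months cannot produce trustworthy values within roughly half a window length of either end of the record, and sharper filters or repeated smoothing passes lose even more.

```python
import numpy as np

# Hypothetical monthly sea level record (mm); any real series could be used here.
rng = np.random.default_rng(2)
n = 1200                                  # 100 years of monthly values
sea_level = np.cumsum(rng.normal(scale=5.0, size=n)) + 0.2 * np.arange(n)

# A simple running-mean low-pass filter aimed at periods of order 100 months.
window = 121                              # months; an illustrative choice
kernel = np.ones(window) / window
smoothed = np.convolve(sea_level, kernel, mode="same")

# Near the ends the averaging window hangs off the record, so those values
# are unreliable and must be discarded (about half a window at each end).
half = window // 2
trusted = smoothed[half : n - half]

print("record length:          %d months" % n)
print("discarded at each end:  %d months" % half)
print("usable filtered length: %d months" % trusted.size)
```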

7.1.2 Determining a Change in the Rate of Sea Level Rise

Now on to the most disturbing fact about the problems associated with decadal variability. This discussion has focused on the difficulty of obtaining the slope of the curves of Fig. 7.1. But our true task here is to study the effects (if any) of man-made changes in the rate of sea level rise. That is, we would like to put a reliable straight-line fit to the first part of the record, a straight-line fit to the last part of the record where presumably anthropogenic effects would be more pronounced, and determine the change of slope where the two lines meet. This is a challenge for two reasons.

2 Some processing methods suggest "tricks", such as reflecting the signal back upon itself at the ends, that allow us to retain the full record length in the filtered result. These of course are equivalent to knowing the data before the record began and after it has ended. Such divine insight is desirable but difficult to achieve.

First, for the reasons described above, it is technically difficult. Second, because the low-frequency variability is as large as the "effect" we wish to document, different analysts will perceive different places in the record at which the slope seems to change. For example, the records at New York and Charleston suggest an apparent increase in the rate of rise beginning in the late 1920s. Is this the effect of man-made emissions, or just a random transient? Clearly, if such an effect is a real physical event, we would expect to find similar effects at all major tide gauges. To study this problem further, consider the tide gauge data taken at San Francisco in more detail.
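One simple way to formalize "a straight-line fit to the first part and another to the last part" is a continuous piecewise-linear (broken-stick) fit, scanning candidate break years and keeping the one with the smallest residual. The sketch below shows that idea; it is not the analysis used in this chapter, and the synthetic record, noise level, and candidate break years are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1880, 2000).astype(float)

# Synthetic record: 1.5 mm/yr before 1930, 2.5 mm/yr after, plus red noise.
rate_change_year = 1930.0
hinge = np.maximum(years - rate_change_year, 0.0)
noise = np.zeros(years.size)
for i in range(1, years.size):
    noise[i] = 0.9 * noise[i - 1] + rng.normal(scale=20.0)       # mm
sea_level = 1.5 * (years - years[0]) + 1.0 * hinge + noise

def broken_stick(t, y, t0):
    """Continuous two-segment fit with a break at t0; returns (sse, slope1, slope2)."""
    X = np.column_stack([np.ones_like(t), t - t[0], np.maximum(t - t0, 0.0)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return float(resid @ resid), coef[1], coef[1] + coef[2]

# Scan candidate break years and keep the best-fitting one.
candidates = np.arange(1900, 1981, 1.0)
fits = [(t0, *broken_stick(years, sea_level, t0)) for t0 in candidates]
t0_best, sse, s1, s2 = min(fits, key=lambda r: r[1])
print("best break year: %.0f" % t0_best)
print("slope before: %.2f mm/yr, slope after: %.2f mm/yr" % (s1, s2))
```

With red noise of this amplitude, rerunning the sketch with different random seeds moves the estimated break year and slopes around considerably, which is precisely the ambiguity described in the text: different analysts (or different noise realizations) will place the change of slope in different places.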

Figure 7.3 gives a closer look at the portion of the data prior to 1900. San Francisco has one of the few gauges where data of this duration are available. Sea level appears approximately stable if we examine the record from 1865 to 1900. However, if we chose to begin our analysis in the early 1880s, we would see a fall of sea level at a rate of 20 cm/100 yr for a few decades during the late 1800s! Therefore, if we believe, on the basis of a great quantity of other data, that there is a long-term general rise of sea level, we are forced to conclude that the period of sea level fall during the late 1800s at San Francisco is a transient anomaly—perhaps a random event.

Figure 7.3 Sea level at San Francisco, extending back to the 1850s, from monthly mean values. This plot shows only the portion before 1900, to emphasize the interval of declining sea level after the early 1880s.

If this feature is taken as an example of how large a random transient can be, we must conclude that such events can have durations as long as 20 years. This is not a happy conclusion, but we ignore it at our peril. How can we be confident that the next such fluctuation will not be longer?

The appropriate scientific question becomes: What phenomena cause these low-frequency signals? Can we understand them well enough to remove them on the basis of physical principles? We show in a later section that the culprit is wind over the open ocean.


7.2.1 Observed Signals in the Atlantic Ocean

A meaningful question to consider is: How big is the interdecadal sea level signal in the open ocean, and why is the open-ocean signal relevant to coastal variability? Workers in other fields are often surprised by the nature of low-frequency, open-ocean waves. The idea that an ocean wave can have periods of many years usually comes as an amazing revelation. Yet this is clearly the case. Figure 7.4, an updated version of the original figure shown in Sturges and Hong (1995), shows a comparison between sea level at Bermuda computed from a numerical model and the tide gauge record there. Because this comparison is strikingly good, we tend to believe that the model output is reliable to within a few centimeters at these long periods. The first point to notice is that these waves have amplitudes of nearly 20 cm peak-to-peak—the same as the decadal signals observed at the coast. These waves travel at a few centimeters per second (see Sturges et al., 1998, for details as to the wave speeds), so the travel time across the Atlantic is roughly 5 years or more. These are forced waves, and the forcing is from the large-scale winds over the ocean. The important point here is that these long, so-called Rossby waves travel only to the west. They strike the western sides of the ocean basins, that is, the east coasts of the continents, creating a sharp distinction between sea level variations on east coasts and west coasts.
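As a rough check on that travel time (the basin width and wave speed here are round numbers we assume for illustration):

$$ t \;=\; \frac{L}{c} \;\approx\; \frac{6\times10^{6}\ \mathrm{m}}{0.03\ \mathrm{m\,s^{-1}}} \;=\; 2\times10^{8}\ \mathrm{s} \;\approx\; 6\ \mathrm{yr}, $$

consistent with the "roughly 5 years or more" quoted above.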

7.2.2 A Rossby Wave Model

Rossby waves, the dominant low-frequency waves of the ocean and the atmosphere, are described by Platzman (1968). The dominant restoring force for these waves arises not merely from the Coriolis force, which is simply an effect of the rotation of the earth, but from the curious fact that, for waves of such large, planetary scale, the Coriolis force changes with latitude. Almost all large signals in the ocean or weather systems in the atmosphere propagate via Rossby waves. In the ocean, the long, low-frequency waves travel to the west. In the atmosphere (weather systems for example), by contrast, they travel to the east.
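For readers who want the standard formula behind this statement: in the long-wave (nondispersive) limit relevant to the model described below, the westward phase speed of a baroclinic Rossby wave is set by β = df/dy, the rate at which the Coriolis parameter f changes with latitude, and by the baroclinic Rossby radius of deformation R_n. With representative mid-latitude round numbers (our illustrative values, not ones quoted in this chapter),

$$ c_x \;=\; -\,\beta R_n^{2} \;\approx\; -\,\bigl(2\times10^{-11}\ \mathrm{m^{-1}\,s^{-1}}\bigr)\bigl(4\times10^{4}\ \mathrm{m}\bigr)^{2} \;\approx\; -0.03\ \mathrm{m\,s^{-1}}, $$

where the minus sign indicates westward propagation: a few centimeters per second toward the west, consistent with the speeds and basin-crossing times quoted above.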

Figure 7.4 Comparison between observed sea level at Bermuda and a Rossby wave model calculation. Sea level at Bermuda is shown by the dashed curve; the model output is shown by the solid curve. The model is forced only by open ocean winds. This is an updated version of the similar figure shown by Sturges and Hong (1995).

The model of wind forcing that we have used for the results presented here is patterned after Meyers (1979) and Kessler (1990) and was used by Sturges and Hong (1995) and by Sturges et al. (1998). The model that produced the solid curve in Fig. 7.4 concentrates on forcing at periods longer than a few years, at which the ocean's response comes primarily from long, nondispersive, baroclinic Rossby waves. Price and Magaard (1986) found that waves at these long periods travel within a few degrees of due west. Mayer and Weisberg (1993) made an important contribution by showing that the frequency content of the observed wind forcing is very much like that of sea level—a most valuable clue. Levitus (1990) was one of the first to show solid evidence that the ocean undergoes such low-frequency variations. Greatbatch et al. (1991), Ezer et al. (1995), and others have shown that in the open ocean, numerical models can reproduce the observed oceanic variability. Our focus here, of course, is on showing that such results also apply directly at the coast.

The model we have used is intentionally simple. We separate the ocean's response into a series of what are called vertical modes. It is easy to envision the way in which vertical modes work: they are exactly like the shapes of the various harmonics one can make by plucking a violin string—or the spring on a screen door. For the violin string, the tension is the same everywhere, so we get uniform sine waves; in the ocean, the shapes are a little different because the vertical density stratification (analogous to the string tension) changes with depth, but other than that the physical problem is the same. This gives the result that the low-frequency, large-scale wind-driven fluctuation of the sea surface contributed by the nth mode is described by the product of a vertical structure function (whose shape is standard and need not concern us here) and a pressure function of space and time, as sketched below.

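In notation of our own choosing (not necessarily that of the original: F_n(0) is the vertical structure function of mode n evaluated at the sea surface, p_n is the modal pressure amplitude driven by the wind, ρ0 is a reference density, and g is gravity), this relation can be written schematically as

$$ \eta(x, y, t) \;\approx\; \sum_{n} \frac{F_n(0)\, p_n(x, y, t)}{\rho_0\, g}, $$

that is, each vertical mode contributes to the sea surface displacement in proportion to its surface structure value times a pressure amplitude that evolves in space and time under the wind forcing.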
