Inductive and deductive climate models

It has become customary to define three categories of global climate models (Claussen et al. 2002; Renssen et al. 2004) (Figure 4.2).

• Conceptual models are made of a small number of differential equations designed to represent interactions between the major climate components. They are called inductive because the number of adjustable parameters is of the same order of magnitude as the number of differential equations (the number of degrees of freedom). Their primary purpose is to formulate a phenomenological theory of climate dynamics. This may cover problems as varied as the stability of the ocean circulation (Stommel 1961) or the astronomical theory of paleoclimates (Imbrie and Imbrie 1980; Saltzman and Maasch 1990; Paillard 2001). Conceptual models can produce very complex solutions that may even be chaotic. The conceptual models that can successfully be tuned to the climate record provide a structure to observations which, according to information theory (Leung and North 1990), may confer prediction skill on them.*
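To make the idea of a conceptual model concrete, here is a minimal sketch in the spirit of the Imbrie and Imbrie (1980) ice-volume model: a single differential equation in which ice volume relaxes toward an insolation forcing, responding faster when the forcing exceeds the state than when it lags behind. The sinusoidal forcing and the numerical time constants below are illustrative stand-ins, not the published values.

```python
# Sketch of a one-equation conceptual climate model: ice volume y
# relaxes toward a forcing x with an asymmetric time constant
# (faster relaxation when the forcing exceeds the state).
# The sinusoidal "insolation" and the tau values are illustrative only.
import math

def integrate(t_max_kyr=500.0, dt=0.5, tau_slow=42.5, tau_fast=10.6):
    """Euler integration of dy/dt = (x - y) / tau over t_max_kyr."""
    y = 0.0
    t = 0.0
    series = []
    while t < t_max_kyr:
        # Synthetic forcing with 41 kyr and 23 kyr components,
        # standing in for a real insolation curve.
        x = 0.6 * math.sin(2 * math.pi * t / 41.0) \
            + 0.4 * math.sin(2 * math.pi * t / 23.0)
        tau = tau_fast if x > y else tau_slow   # asymmetric response
        y += dt * (x - y) / tau
        series.append((t, y))
        t += dt
    return series

out = integrate()
print(out[-1][0], round(out[-1][1], 3))
```

Even a system this small produces a qualitatively asymmetric response, which is why such models are attractive for phenomenological theories of the glacial cycles; the number of tunable parameters (here three) is of the same order as the number of equations, which is the defining "inductive" trait described above.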

• Comprehensive climate models are built from first principles of physics (equations of motion, radiative transfer, etc.), numerically implemented on three-dimensional grids representing the atmosphere, the oceans, and sea ice.† The characteristic horizontal spatial scale of the grid is of the order of 100 km and the integration time step is a few hours (see Johns et al. (2006) for a recent example). Synoptic atmospheric variability is thus explicitly calculated. Comprehensive climate models are "deductive" because the number of constitutive equations is several orders of magnitude larger than the number of adjustable parameters. Phenomena occurring at spatial scales smaller than the model grid, such as convective cloud formation, are parameterized by means of phenomenological equations. Climate modelers establish these equations on the basis of local observations (soundings, aircraft measurements, surface data) and specialized models. A parameterization has to be "physically reasonable" and respect conservation principles (conservation of energy, entropy, momentum, etc.). In spite of these constraints, different mathematical formulations of a parameterization may seem to provide equally good results, and there is no easy way to know which one is best. This is what is called structural uncertainty.

* Saltzman (2002) considered it an "act of faith" that long-term climate dynamics may be described by some low-order model, similar to the belief in physics that the cosmos is governed by a "unified theory". There is no easy demonstration of this, but Hargreaves and Annan (2002) showed that the Saltzman and Maasch (1990) model does have significant skill in predicting climate over about 100 kyr.

† Technically, the discretized equations of motion may be solved directly on the grid (grid-based models). Another possibility (spectral models) is first to compute the spherical harmonics of the physical quantities and then to solve the equations of motion in this "conjugate" space. Differential operators, such as the Laplacian, are indeed more easily expressed in the conjugate space. The spatial resolution of a spectral model depends on the number of spherical transforms retained to perform the calculations. For example, T32 means a triangular (T) truncation to the first 32 spherical harmonics. This approximately corresponds to a resolution of 400 km × 400 km.
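The footnote's figure of roughly 400 km × 400 km for a T32 truncation can be checked with a rule of thumb: divide the Earth's circumference by the number of longitude points of the quadratic Gaussian grid commonly associated with a triangular truncation TN, about 3N + 1 points. The 3N + 1 rule is an assumption of this sketch; actual model grids vary.

```python
# Rough equatorial grid spacing implied by a triangular spectral
# truncation TN, assuming a quadratic Gaussian grid with about
# 3N + 1 longitude points (a common convention, used here as an
# approximation; real model configurations differ in detail).
EARTH_CIRCUMFERENCE_KM = 40075.0

def spacing_km(n_truncation):
    """Approximate equatorial grid spacing for truncation TN."""
    return EARTH_CIRCUMFERENCE_KM / (3 * n_truncation + 1)

for n in (21, 32, 63, 106):
    print(f"T{n}: ~{spacing_km(n):.0f} km at the equator")
```

For T32 this gives roughly 410 km, consistent with the value quoted in the footnote; higher truncations shrink the spacing proportionally.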

In practice, the parameters need to be tuned again so that the climate generated by the global model agrees with observed global-scale features of the climate system (e.g. the existence of a meridional overturning cell in the North Atlantic). This tuning process is too often viewed as a necessary evil about which little detail is given in the model documentation.

Fortunately, tuning tends to be better recognized as a natural step of model development. Model parameters are assigned uncertainty ranges (constrained by laboratory measurements and local observations), and the likelihood of a given parameter combination is estimated from the agreement between the model and a well-defined set of global observations (surface temperature, precipitation, satellite estimates of the radiative balance, etc.) (more detail is given below).
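The tuning step just described can be sketched as a simple sampling exercise. Everything below is a toy stand-in: the "model" is a pair of linear formulas, and the parameter names, observational targets, and uncertainty ranges are invented for illustration. A real exercise would run an actual climate model for each parameter combination.

```python
# Toy sketch of parameter tuning: sample parameter combinations within
# prior uncertainty ranges and weight each by how well a stand-in model
# reproduces a set of "global observations". All names and numbers here
# are invented placeholders, not values from any real model.
import random

random.seed(0)

PRIOR_RANGES = {"cloud_factor": (0.3, 0.7), "ocean_diffusivity": (0.5, 2.0)}
OBS = {"global_temp_C": 14.0, "precip_mm_day": 2.7}      # illustrative targets
OBS_SIGMA = {"global_temp_C": 0.5, "precip_mm_day": 0.2}  # assumed errors

def toy_model(params):
    """Stand-in for a climate model run: parameters -> diagnostics."""
    return {
        "global_temp_C": 12.0 + 4.0 * params["cloud_factor"],
        "precip_mm_day": 2.0 + 0.5 * params["ocean_diffusivity"],
    }

def log_likelihood(diag):
    """Gaussian misfit between model diagnostics and observations."""
    return sum(-0.5 * ((diag[k] - OBS[k]) / OBS_SIGMA[k]) ** 2 for k in OBS)

samples = []
for _ in range(1000):
    p = {k: random.uniform(*r) for k, r in PRIOR_RANGES.items()}
    samples.append((log_likelihood(toy_model(p)), p))

best = max(samples, key=lambda s: s[0])
print(round(best[0], 3), best[1])
```

The full set of weighted samples, not just the best one, is what carries the uncertainty information: parameter combinations that fit the observations comparably well define the plausible range of model behavior.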

Nowadays, the development of comprehensive models mobilizes large multi-disciplinary teams driven by the aim of providing reliable future climate predictions. These models must therefore include all processes relevant at the decadal to century time-scales with as much detail as computational power permits. This covers aspects as varied as soil dynamics, vegetation dynamics, ocean biogeochemistry, ice-sheet dynamics, and river hydrology. At the time of writing, a 100-year-long simulation of climate with a comprehensive climate model (e.g. 1.5° × 1.5° resolution) requires more than a month on a supercomputer with powerful data-storage facilities.

• Earth models of intermediate complexity (EMICs) fill the gap between conceptual and comprehensive climate models (Claussen et al. 2002). They resemble comprehensive climate models, but calculations are made on longer time-scales and larger spatial scales. The degree of parameterization is higher, which may imply a larger structural uncertainty. Yet, as in comprehensive models, the number of degrees of freedom in EMICs exceeds the number of adjustable parameters by several orders of magnitude. The EMIC category covers a range of models that may be used to study interdecadal to astronomical time-scales, depending on the model. Examples of EMICs used to study the Holocene are given in Table 4.1.

Table 4.1 Examples of Earth models of intermediate complexity (EMICs) used to study the Holocene

Model | Example of published applications over the Holocene | Reference
| Prediction of the next glacial inception | Loutre et al. 2007
| Vegetation-climate interactions at high latitudes | Crucifix et al. 2002
| Impact of freshwater input in the North Atlantic around 8000 years ago | Bauer et al. 2004
| Factor decomposition (see text) | Ganopolski et al. 1998
| Desertification of the Sahara in response to orbital forcing | Claussen et al. 1999; Claussen et al. this volume
| Changes in ocean and terrestrial carbon storages during the Holocene | Brovkin et al. 2002
Green McGill Paleoclimate Model | Analysis of the carbon budget | Wang et al. 2005a
| Existence of a climate optimum after the disappearance of the Laurentide Ice Sheet | Wang et al. 2005b
| Impact of freshwater input in the North Atlantic around 8000 years ago | Wang and Mysak 2005
ECBILT (coupled to a low-resolution ocean model) | The influence of changes in precipitation and temperature on the evolution of three representative glaciers | Weber and Oerlemans 2003
ECBILT-CLIO (coupled to a higher resolution ocean-sea-ice model) | See text | Renssen et al. 2005
| Impact of freshwater input in the North Atlantic around 8000 years ago | Renssen et al. 2002

Earth models of intermediate complexity and conceptual models are useful because they cover spatial and temporal scales for which comprehensive models are not suitable. Consider the glacial-interglacial cycles: the growth and decay of the total continental ice mass at the glacial-interglacial time-scale is of the order of a few centimeters of sea-level equivalent per year. This net change of one or two centimeters results from a difference between total evaporation, precipitation, melting, and freezing that is so small compared with the quantities themselves that it cannot be confidently estimated by a general circulation model (Saltzman 1988, 2002). In fact, even the present net accumulation rates of snow over Antarctica and Greenland are not accurately known (Rignot and Thomas 2002). Earth models of intermediate complexity provide a solution. They may be tuned to reproduce reasonable results over a given section of a glacial-interglacial cycle (e.g. the last glacial inception, as in Gallee et al. 2002) and then used to study other periods (Loutre et al. 2007).
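The signal-to-noise argument above is easy to quantify. With invented flux magnitudes (arbitrary units, chosen purely for illustration), a modest 2% error in each gross term already overwhelms a net residual of the size discussed:

```python
# Numeric illustration of the point above: a small net ice-mass change
# is the residual of large, individually uncertain fluxes.
# All magnitudes are invented for illustration (arbitrary units).
precip, evap, melt = 35.0, 20.0, 14.0    # hypothetical gross fluxes
residual = precip - evap - melt          # "true" net change: 1.0

rel_err = 0.02                           # a modest 2% error in each flux
worst_case = (precip * (1 + rel_err)
              - evap * (1 - rel_err)
              - melt * (1 - rel_err))
print(residual, round(worst_case, 2))    # the flux errors dwarf the signal
```

Here a 2% uncertainty in each term produces an error in the residual larger than the residual itself, which is why a general circulation model cannot be trusted to compute the net ice-mass budget directly.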

We have so far considered global climate models. Comparison with (paleo)data may make it necessary to resolve spatial scales smaller than those of a comprehensive climate model. The method demanding the least computing time is statistical downscaling (Murphy 1998), in which statistical relationships between climate variations at the synoptic scale (200-300 km) and the local climate are established and then applied. Two more elaborate strategies are documented in the literature: nesting and zooming (Giorgi and Mearns 1999). Under "nesting", the output of a global model is used to drive a regional dynamical model of the atmosphere that resolves horizontal length scales of the order of 30 to 50 km. "Regional" indicates that this high-resolution model covers only a defined region of the globe, such as Europe. Zooming is based on a comprehensive climate model featuring an irregular mesh refined over the region of interest in order to capture the smaller spatial scales.
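In its simplest form, statistical downscaling amounts to regressing a local record on a synoptic-scale predictor over a training period and then applying the fitted relation to model output. The sketch below uses synthetic numbers and ordinary least squares; real applications rely on observed station series and usually on more elaborate statistical models.

```python
# Minimal sketch of statistical downscaling: fit a linear relation
# between a synoptic-scale (~200-300 km) predictor and a local variable
# over a training period, then apply it to new model output.
# The training data below are synthetic, for illustration only.
def fit_linear(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Training period: large-scale temperature vs. a co-located station
# whose variations are damped relative to the large scale.
large_scale = [10.0, 12.0, 14.0, 16.0, 18.0]
local_obs = [11.5, 13.0, 14.5, 16.0, 17.5]

a, b = fit_linear(large_scale, local_obs)

# Downscale a new large-scale value taken from a climate model grid cell.
print(a * 15.0 + b)   # → 15.25
```

The fitted relation is only as good as the assumption that it remains valid outside the training period, a caveat that matters especially for paleoclimate applications, where the large-scale state may differ strongly from the modern one.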
