The classic methodology assumes a clear distinction between model input (initial conditions, boundary conditions, parameters) and output (the predicted climate). Boundary conditions and parameters are considered "known", and no formal uncertainty is attached to them. The model output is then compared with observations. There is usually a pair of experiments: one designed to produce a simulation of the pre-industrial climate, and another in which boundary conditions and certain parameters (such as greenhouse gas concentrations) are modified to simulate a past climate.
The Paleoclimate Modeling Intercomparison Project (PMIP) framed past climate simulations (mid-Holocene and Last Glacial Maximum) with different comprehensive climate models and organized a systematic comparison between the model output and paleoclimatic data (Joussaume and Taylor 2000; Braconnot et al. 2007) (Figure 4.3). Appropriate proxy models may facilitate the model-data comparison. Proxy models are calibrated to map climate model outputs onto observable quantities such as the dominant biome (Haxeltine and Prentice 1996), lake level, or glacier length (Weber and Oerlemans 2003). Climate models may also directly include the necessary equations to simulate observable features such as dust flux (Joussaume and Jouzel 1993a), oxygen (Joussaume and Jouzel 1993b), carbon (Marchal et al. 1999), and boron isotopic ratios (LeGrande et al. 2006).
It was shown that climate models correctly reproduce a number of observed features of the mid-Holocene climate: increased precipitation in the Sahel, decreased precipitation in South America, reduced sea-ice cover in the Norwegian Sea, northward advance of the boreal forest in Russia, and reduced frequency of El Niño events (see the recent reviews by Braconnot et al. 2004; Renssen et al. 2004; Cane et al. 2006; pioneering work is covered in Wright et al. 1993). It is then possible to decrypt the mechanisms of these climate changes by means of appropriate sensitivity experiments. One method, known as "factor separation" (Stein and Alpert 1993), consists in sequentially freezing certain components, such as vegetation or sea-ice distribution, that are normally calculated by the model. It was used by Ganopolski et al. (1998) to show that hemispheric warming in response to mid-Holocene orbital parameters results from feedbacks between boreal vegetation and sea ice (in the CLIMBER model) (see also Harvey 1988; Crucifix and Loutre 2002). More generally, feedback analysis of mid-Holocene experiments has highlighted the importance of ocean dynamics and vegetation, and justified including vegetation dynamics in climate models used for future climate prediction.
Figure 4.3 Change in annual mean surface temperature induced by switching from today's orbital forcing to that of 6000 years ago (see also Figure 4.4), as simulated by Paleoclimate Modeling Intercomparison Project (PMIP) models (the mean model response is displayed). The plot clearly shows the annual polar warming and equatorial cooling. Sensitivity experiments suggest that the annual tropical cooling is mainly caused by the increase in obliquity and the resulting decrease in annual mean insolation below 43° of latitude. The polar warming results from a combination of obliquity (larger annual mean insolation, concentrated in summer) and precession (larger summer and autumn insolation). Seasonal changes in insolation are translated into an annual temperature signal by the sea-ice feedback. Cooling of the northern subtropical deserts is a signature of enhanced summer monsoon precipitation and the resulting increase in surface evaporation. Note that the vegetation response was not taken into account in these experiments. (Data were supplied by Jean-Yves Peterschmitt and extracted from the PMIP database in Saclay, France.)
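The arithmetic behind factor separation is simple enough to sketch. With two factors (say, interactive vegetation and interactive sea ice), four runs are needed: a control with neither factor active, one run per single factor, and one with both. The pure contributions and the synergy term follow by subtraction. The function and the numbers below are illustrative placeholders, not output from any model discussed in the text.

```python
# Factor separation (Stein and Alpert 1993): decompose a simulated response
# into the pure contributions of two factors plus their synergy.

def factor_separation(f0, f1, f2, f12):
    """Return pure contributions of factor 1, factor 2, and their synergy.

    f0  -- control run (neither factor active)
    f1  -- run with factor 1 only (e.g. interactive vegetation)
    f2  -- run with factor 2 only (e.g. interactive sea ice)
    f12 -- run with both factors active
    """
    pure1 = f1 - f0
    pure2 = f2 - f0
    # Synergy: the part of the combined response not explained
    # by summing the two pure contributions.
    synergy = f12 - f1 - f2 + f0
    return pure1, pure2, synergy

# Hypothetical hemispheric-mean warmings (K) from four sensitivity runs
p1, p2, syn = factor_separation(f0=0.0, f1=0.4, f2=0.3, f12=1.0)
```

With these made-up numbers the combined warming (1.0 K) exceeds the sum of the pure contributions (0.7 K), so the synergy term (0.3 K) quantifies the feedback between the two components, in the spirit of the vegetation/sea-ice synergy identified by Ganopolski et al. (1998).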
Climate models are never in "perfect agreement" with data. For example, the "IPSL" (i.e. the climate model of the Institut Pierre-Simon Laplace in Saclay, France) model simulation of the mid-Holocene climate indicates, compared with the present day, increased aridity in central Eurasia and a northward advance of the boreal forest's northern limit in Canada, contrary to observations (Wohlfahrt et al. 2004). Climate modelers tend to be very defensive when it comes to model-data comparisons because discrepancies call the model performance into question. This attitude is unfortunate because model-data differences contain particularly useful information. There are at least two things the modeler would like to know about them. First, what is their cause? Are they due to a process badly accounted for, an inadequate boundary condition (e.g. incorrect specification of ice sheets), or a data misinterpretation? Second, does this disagreement affect the model's prediction of future climate change?
A new methodology is being formalized to address these questions, called "probabilistic inference with climate models" or "climate data assimilation" (Rougier 2006). The fundamental idea is to explicitly attribute uncertainty ranges to model parameters, which are then explored by performing large ensembles of sensitivity experiments. A likelihood is attributed to a parameter value depending on (i) the difference between the model prediction obtained with this parameter value and the data estimate, (ii) the data uncertainty, and (iii) prior knowledge of the parameter. This is data assimilation because observations (temperature, precipitation, etc.) provide explicit constraints on model parameters and/or boundary conditions.
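The three ingredients listed above combine in a standard Bayesian way: each ensemble member's parameter value is weighted by its prior probability times a likelihood that penalizes the model-data misfit relative to the data uncertainty. The sketch below assumes a Gaussian error model and stands a trivial `toy_model` function in for an expensive climate model; all names and numbers are illustrative assumptions, not from the text.

```python
import math

def toy_model(theta):
    # Stand-in for a climate model: maps a parameter value to a
    # predicted observable (e.g. a regional temperature anomaly, K).
    return 2.0 * theta

def posterior_weights(thetas, obs, obs_sigma, prior):
    """Weight each candidate parameter value by prior x Gaussian
    likelihood of the model-data misfit, then normalize."""
    w = [prior(t) * math.exp(-0.5 * ((toy_model(t) - obs) / obs_sigma) ** 2)
         for t in thetas]
    total = sum(w)
    return [x / total for x in w]

# Small ensemble of candidate parameter values, flat prior,
# one (hypothetical) data estimate with its uncertainty
thetas = [0.2, 0.5, 0.8, 1.1]
weights = posterior_weights(thetas, obs=1.0, obs_sigma=0.3,
                            prior=lambda t: 1.0)
best = thetas[max(range(len(thetas)), key=weights.__getitem__)]
```

In practice the ensemble is large, the observable is a field rather than a scalar, and emulators often replace the model itself, but the weighting logic is the same: parameter values whose predictions fall far from the data estimate, measured against the data uncertainty, receive small posterior weight.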
Climate data assimilation may help us to refine estimates of the phenomenological parameters used in parameterizations. It therefore provides a formal framework for "tuning" by clarifying which information is being used to constrain parameters, and yields probability distribution functions on both model input and output.
The trouble with this process is that it puts a high demand on computing resources because it requires numerous experiments. It has therefore been implemented in only a few cases using modern climatic observations (Murphy et al. 2004), plus a small number of studies using large-scale estimates of the Last Glacial Maximum temperature (Annan et al. 2005; Schneider von Deimling et al. 2006).
The examples above are based on steady-state experiments. Data assimilation also applies to transient experiments. In short experiments (i.e., shorter than the dissipative time-scales of the system), data assimilation is used to provide estimates of climate variables for which there is no direct observation (e.g. Goosse et al. this volume) by constraining initial conditions (cf. Jones and Widmann 2004; van der Schrier and Barkmeijer 2005). In long transient simulations, data assimilation may be used to constrain model parameters (Hargreaves and Annan 2002) (see below).
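Constraining a parameter with a transient record can be sketched with a zero-dimensional energy-balance model: each candidate value of the feedback parameter produces a simulated time series, and the candidate whose series best matches the record (in a Gaussian negative log-likelihood sense) is retained. The model, its numbers, and the synthetic "record" below are all illustrative assumptions, not the method of Hargreaves and Annan (2002) in detail.

```python
# Toy transient parameter estimation: fit the feedback parameter 'lam'
# of a zero-dimensional energy-balance model to a transient record.

def ebm_run(lam, forcing, dt=1.0, C=8.0):
    """Integrate dT/dt = (F - lam*T)/C with forward Euler.

    lam     -- feedback parameter (W m^-2 K^-1), to be estimated
    forcing -- list of radiative forcings F (W m^-2), one per step
    C       -- heat capacity (illustrative units)
    """
    T, series = 0.0, []
    for F in forcing:
        T += dt * (F - lam * T) / C
        series.append(T)
    return series

def misfit(lam, forcing, record, sigma=0.1):
    """Negative log-likelihood (up to a constant) of the record,
    assuming independent Gaussian observation errors of size sigma."""
    sim = ebm_run(lam, forcing)
    return sum(((s - r) / sigma) ** 2 for s, r in zip(sim, record)) / 2.0

forcing = [1.0] * 50                 # constant forcing, W m^-2
record = ebm_run(1.2, forcing)       # synthetic "observations", true lam = 1.2
candidates = [0.8, 1.0, 1.2, 1.4]
best = min(candidates, key=lambda lam: misfit(lam, forcing, record))
```

Because the synthetic record was generated with `lam = 1.2`, that candidate yields zero misfit and is recovered exactly; with a real paleoclimatic record the misfit surface is noisy, and the same likelihoods would instead be used to weight an ensemble of parameter values, as in the steady-state case.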