The Bulletin of the Atomic Scientists has published an admirably candid collation of what some high-profile climate modelers have to say to each other: The uncertainty in climate modeling.
Some cautionary excerpts follow, but it is best to read the whole thing.
Climate prediction works well for some variables and poorly for others
James Murphy
" are the results too dependent on debatable expert assumptions for some variables, hence precluding predictions that are more than a "best guess" and an uncertainty range?
In practice, the answer is likely to differ from variable to variable, possibly even from season to season. Probability distributions for average future changes in surface temperature and precipitation, for instance, may be within reach, because the main processes expected to drive such changes are captured in the current generation of climate models.
On the other hand, changes in, say, surface evaporation or wind speed are too dependent on basic physical assumptions, which vary between different models--the problem being that we currently lack the detailed observations and understanding needed to distinguish which of the different assumptions is correct. In the latter case, we might decline the challenge of probabilistic prediction. The important thing is to have quantitative methods of determining where to draw the line, rather than relying purely on expert judgement.
In UKCIP08, for example, we are handling this problem by combining results from two different types of ensemble data: .... For some climate variables this works reasonably well, implying that probabilities can reasonably be estimated. For other variables we find much wider divergence between the two types of ensemble data, implying that probabilistic predictions aren't yet feasible.
... We can only judge credibility by assessing whether climate models capture the relevant processes on a case-by-case basis. ... The potential benefits of waiting for better information from future generations of models need to be made very clear.
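As a rough illustration of the kind of quantitative line-drawing Murphy describes (not the UKCIP08 methodology itself; the ensembles, the Kolmogorov-Smirnov statistic, and the threshold below are all my own assumptions), one could compare two lines of ensemble evidence and only quote probabilities where they broadly agree:

```python
# A minimal, illustrative sketch (not UKCIP08's actual method): compare two
# ensembles of projected change for one variable and flag whether they are
# consistent enough to justify quoting a probability distribution.
# The ensembles, the statistic, and the 0.3 threshold are invented.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical projected changes (e.g., % change in a variable by the 2080s)
# from a perturbed-physics ensemble and a multi-model ensemble.
perturbed_physics = rng.normal(loc=12.0, scale=4.0, size=300)
multi_model = rng.normal(loc=11.0, scale=5.0, size=25)

# Two-sample Kolmogorov-Smirnov statistic: 0 = identical distributions,
# 1 = completely disjoint.  A small value means the two lines of evidence
# broadly agree on the shape of the uncertainty.
stat, _ = ks_2samp(perturbed_physics, multi_model)

THRESHOLD = 0.3  # arbitrary cut-off, for illustration only
if stat < THRESHOLD:
    print(f"KS = {stat:.2f}: ensembles broadly agree; "
          "a probability range may be defensible.")
else:
    print(f"KS = {stat:.2f}: ensembles diverge; "
          "decline to quote probabilities for this variable.")
```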
Climate modeling is still an abstraction of reality
Leonard A. Smith
... I doubt we can blame the media for many problems climate science seems set to bring upon itself. ... The pivotal question is whether today's models are able to provide additional relevant information, and if so, on what scales?
... Suppose a cup of coffee slips from my hand. I know enough physics to predict that it will fall to the floor. I'm also confident that if I reduce the distance it falls, it's less likely to shatter and create a mess. We know with this kind of certainty that the greenhouse effect has initiated global warming. Yet, I also understand that I don't know enough physics to predict how many shards will result if the cup shatters or where exactly the coffee will splatter. Nor can I usefully estimate a "critical height" below which the cup has an acceptably small chance of shattering. The fact that I know I don't know these things is valuable information for evidence-based decision support: It keeps me from making overconfident plans based on the "best available information." Instead, I act based upon what I do know--the cup is likely to shatter, and therefore, I plan to jump left.
But what if the physicist next to me offers to sell me a probability distribution from his state-of-the-art "falling fluid container" model? His implied probabilities describe whether or not my cup will shatter, what the shards will do if it does, and the trajectory of various drops of coffee after the cup lands (so we know which direction to move to avoid coffee splatter). High-resolution MPEG movies of the model are available for a small extra charge, and they look great!
Should I pay attention to him?
Do we believe that today's models can provide decision-relevant probabilities at a resolution of tens of square kilometers for the year 2060--or even 2020 for that matter? No. ... To accept a naive realist interpretation of model behaviors cast as a Bayesian probability distribution is, as mathematician and philosopher Alfred North Whitehead surmised, to mistake an abstract concept for concrete reality. ... Shorn of this context, model results have an aura of exactitude that can be misleading.
Interpreting climate predictions should be collaborative
Claudia Tebaldi
We all seem to agree that our state-of-the-art models aren't satisfactory representations of climate on Earth--at least not to the degree required to make decisions with them. We also agree that people are concerned with climate change ...
Beyond this common ground, we fall on different points of the spectrum between James's pragmatic approach, where he proposes giving decision makers information as our "best guess" about future outcomes nonetheless, and Lenny's highly skeptical position--namely, there's no hope in approximating the real world in any useful sense.
The reader who doesn't dabble in climate modeling or statistics is probably asking herself, "What am I to make of all this?" To which I would say, "That's exactly what I want you to think!"
Let me explain: If I can say anything for sure, it's that I don't want anyone to take a precooked climate projection--whether a single model or a multi-model ensemble, probabilistic or not--and run with it. Any decision will be best served by looking at the available observational and modeled information and listening to the opinion of climate modelers and climatologists. The experts will be able to form an integrated evaluation based on changes already observed,...
... By doing so, we may get closer to a full characterization of the uncertainties that we know exist. ... As for the unknown unknowns ... there's no way around those.
Probabilities aren't so scary
James Murphy
Policy makers and planners now accept the reality of man-made climate change and are turning their thoughts toward adaptation. Should they make their decisions now, or wait to see if better climate models can provide more precise information? ... Modeling centers have worked incredibly hard on this during the past decade, yet the range of projections hasn't changed much. ... There's no obvious reason to expect more than gradual progress ...
The scientific credibility of a probability distribution function therefore depends not on having near-perfect, error-free models, but on how well the methodology used to sample the uncertainties and convert the model results into probabilities summarizes the risk of alternative outcomes. This is by no means an easy hurdle, but one I believe we are capable of overcoming.
Of course, the probabilities must be accompanied by sensitivity analyses that educate users not to interpret the probabilities to too high a degree of precision. Room should be made for changing forecasts in the future. ... Perhaps we can be less frightened of telling users what we believe--and perhaps we can credit them with the intelligence not to interpret our current beliefs as everlasting promises.
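To make Murphy's point about sensitivity analyses concrete, here is a minimal sketch, with invented numbers and an invented skill weighting, of how a single headline probability can shift under two different but defensible assumptions about how to weight the models:

```python
# A minimal sketch of the kind of sensitivity analysis Murphy alludes to:
# the same ensemble, two different (hypothetical) weighting assumptions,
# and the resulting spread in one headline probability.  All numbers are
# invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ensemble of projected warming (deg C) for some region/date.
warming = rng.normal(loc=2.3, scale=0.8, size=40)

# Assumption A: every model counts equally.
equal_weights = np.full(warming.size, 1.0 / warming.size)

# Assumption B: models weighted by some (here random) skill score.
skill = rng.uniform(0.5, 1.5, size=warming.size)
skill_weights = skill / skill.sum()

def prob_exceeds(threshold, values, weights):
    """Weighted probability that the projected change exceeds `threshold`."""
    return float(np.sum(weights[values > threshold]))

p_equal = prob_exceeds(2.0, warming, equal_weights)
p_skill = prob_exceeds(2.0, warming, skill_weights)

print(f"P(warming > 2 C), equal weights: {p_equal:.2f}")
print(f"P(warming > 2 C), skill weights: {p_skill:.2f}")
# If the two answers differ appreciably, quote the probability only to that
# level of precision -- and expect it to change as models improve.
```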
In planning for the future, make room for changing forecasts
Leonard A. Smith
... Seeking a perfect relationship that would bring decision-support bliss might hinder the development of our warts-and-all models, which, while improving, are in no way converging towards perfection. ... Gavin notes that the average of many models is frequently "closer to the observations" than any individual model. While true, this doesn't suggest that such an average can support good decision making. Even when the model is perfect, averaging can be a bad idea. Consider the parable of ... three ... statisticians who cannot swim but desperately need to cross a river.
Each has a forecast of the change in the river’s depth from one bank to the other; in each forecast, there's a different point at which the river is so deep that the statisticians will drown. According to the average of their forecasts, it’s safe to ford the river. Even though the average is closer to the actual depth profile of the river than any one of the forecasts, action based on that information is not safe--if they cross the river, they’ll drown. ... There are few, if any, examples of purely model-based probability forecasts supporting good decision-making ...
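Smith's parable is easy to make concrete. In the toy numbers below (my own, purely illustrative), each individual forecast correctly warns that the crossing is fatal somewhere, the ensemble mean has the smallest error against the true depth profile, and yet acting on the mean leads straight into the deep water:

```python
# A toy version of the statisticians' river, with made-up numbers: each
# forecast has one point deep enough to drown, the true river is deep
# across its middle, yet the ensemble-mean profile never exceeds the
# drowning depth.
import numpy as np

DROWN_DEPTH = 1.8   # metres; deeper than this and a non-swimmer drowns

# Depth (m) at five points across the river, according to three forecasts.
forecasts = np.array([
    [0.5, 2.5, 0.5, 0.5, 0.5],   # forecast 1: deep channel near the near bank
    [0.5, 0.5, 2.5, 0.5, 0.5],   # forecast 2: deep channel mid-river
    [0.5, 0.5, 0.5, 2.5, 0.5],   # forecast 3: deep channel near the far bank
])
truth = np.array([0.5, 1.9, 1.9, 1.9, 0.5])   # the actual river

mean_forecast = forecasts.mean(axis=0)        # [0.5, 1.17, 1.17, 1.17, 0.5]

print("Each forecast says 'you drown somewhere':",
      (forecasts.max(axis=1) > DROWN_DEPTH).all())         # True
print("Mean forecast says the crossing is safe:",
      mean_forecast.max() <= DROWN_DEPTH)                   # True
print("Mean forecast is closest to truth (RMS error):",
      np.sqrt(((mean_forecast - truth) ** 2).mean()) <
      np.sqrt(((forecasts - truth) ** 2).mean(axis=1)).min())  # True
print("Crossing on the mean forecast's advice, you drown:",
      truth.max() > DROWN_DEPTH)                            # True
```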
Not all climate models are created equal
Claudia Tebaldi
It's exhilarating to see the fruits of climate research achieve such prominence in the media, political debate, and concerns of industrial and municipal stakeholders. As scientists, though, it's incumbent upon us not to mislead the lay audience. ... I wholeheartedly agree with Gavin that these kinds of probabilistic projections aren't appropriate for risk analysis and decision making under uncertainty and won't be for a long time.
Climate models produce projections, not probabilities
Gavin Schmidt
"I'm just a model whose intentions are good, Oh Lord, please don't let me be misunderstood," Nina Simone may as well have sung. Models are fundamentally necessary in all sciences. They exist to synthesize knowledge, to quantify the effects of counteracting forces, and most importantly, to make predictions--the evaluation of which is at the heart of the scientific method...
Climate models are amalgams of fundamental physics, approximations to well-known equations, and empirical estimates (known as parameterizations) of processes that either can't be resolved (because they happen on too small a physical scale) or that are only poorly constrained from data. ... Models are all limited by similar computational constraints and have developed in the same modeling tradition. Thus while they are different, they are not independent in any strict statistical sense.
Does agreement on a particular phenomenon across multiple models with various underlying assumptions affect your opinion on whether or not it is a robust projection? The answer, almost certainly, is yes. Such agreement implies that differences in the model inputs, including approach (e.g. a spectral or grid point model), parameterizations (e.g. different estimates for how moist convective plumes interact with their environment), and computer hardware did not materially affect the outcome, which is thus a reasonable reflection of the underlying physics.
Does such agreement "prove" that a given projection will indeed come to pass? No... the assumed changes in forcings in the future may not transpire...
So how should one interpret future projections from climate models? I suggest a more heuristic approach. If models agree that something (global warming and subtropical drying, for instance) is relatively robust, then it is a reasonable working hypothesis ... while it is seductive to attempt to corner our ignorance with the seeming certainty of 95-percent confidence intervals, the comfort it gives is likely to be an illusion. Climate modeling might be better seen as a Baedeker for the future, giving some insight into what might be found, rather than a precise itinerary.
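As a rough illustration of Schmidt's "working hypothesis" heuristic, here is a small sketch that flags a projected change as robust only when most models agree on its sign; the projection numbers and the 80-percent agreement criterion are my own invention, not anything from the roundtable:

```python
# A small sketch of the heuristic Schmidt describes: treat a projected change
# as a reasonable working hypothesis only when models built with different
# assumptions agree on it (here, simply on its sign).  The projections are
# invented numbers and the 80% sign-agreement rule is an arbitrary choice.
import numpy as np

# Hypothetical projected changes (e.g., % change in subtropical precipitation)
# from an ensemble of structurally different models.
projections = np.array([-8.1, -5.4, -11.0, -3.2, -7.7, 0.9, -6.3, -9.5])

dominant_sign = np.sign(np.median(projections))
agreement = np.mean(np.sign(projections) == dominant_sign)

if agreement >= 0.8:
    print(f"{agreement:.0%} of models agree on the sign: "
          "a reasonable working hypothesis.")
else:
    print(f"Only {agreement:.0%} of models agree on the sign: "
          "treat the projection as not robust.")
```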