by Judith Curry

I spotted this presentation by Arthur Dempster, Harvard statistician, in the Series on Mathematical and Statistical Approaches to Climate Modeling hosted by the Isaac Newton Institute for Mathematical Sciences.

Dempster is widely known as the co-originator of Dempster-Shafer Evidence Theory (see Wikipedia for an overview). Elements of evidence theory have been discussed on several previous threads (see Italian Flag, reasoning about floods).

I find this presentation to be quite provocative. Here are some excerpts:

**Two Cultures**

*One concern is the basic problem of trying to get physicists and statisticians on the same page. Statisticians think of themselves as dealing with data as “information with context”. . . and with statistical models as parts of a complex system of tools for extracting meaning from statistical data. Physicists on the other hand tend to think of models as approximations to scientific truth, with the ultimate goal of research being to arrive at representations and explanations of such truth. The two cultures are very different.*

*Almost in contradiction to their pure science backgrounds, it has become a basic function of physical climate modelers to inform policymakers and other real world stakeholders about possible alternative future climates. When used in this mode, climate models are treated as carriers of information, and so move closer to statistical models. Specifically, physical models become interpretable as information when their equations are regarded as approximating relations among the values of actual real world variables at successive points in actual time. Statistical models should similarly be regarded as describing probabilistic relations among unknown true values of such variables, including probabilistic time dependence.*

**Approaching Probabilistic Models: What are the Issues?**

*What are the problems and prospects for moving from weather to the longer time scales of climate change? What unknowns are predictable probabilistically on longer time scales, and which are not?*

*A fundamental issue concerns the nature of uncertainty. All can agree that predictions are uncertain. But what mathematics should be used when computing and reporting predictive uncertainties? Here the divergence of the two cultures is astonishing. Few physicists have training and hence knowledge of how the mathematics of probability, together with its relations to scientific uncertainty, has developed over 300 years into a formidable set of theoretical structures and tools. The identity of the academic discipline of statistics was transformed, especially over the middle decades of the 20th Century, by competing methodologies for addressing scientific uncertainty. This is not the place to delve into explaining what developed, and how there are differing viewpoints, with mine in particular lying outside the statistical mainstream.*

*I believe it is fair to say, however, that how physicists approach scientific uncertainty has been scarcely touched by fundamental developments within statistics concerning mathematical representations of scientific uncertainty. An indication of the disconnect is provided by the guidelines used by the IPCC in its 2007 major report, where the terms “likelihood” and “confidence” were recommended for two types of uncertainty reports, apparently in complete ignorance of how these terms have been used for more than 60 years as basic textbook concepts in statistics, having nothing whatsoever in common with the recommended IPCC language (which I regard as operationally very confusing). Another indication is that experts from the statistical research community constituted according to one source only about 1% of the attendees at the recent Edinburgh conference on statistical climatology.*

**Roles for Statistical Modeling**

*Physical modelers often refer to two basic sources of uncertainty when interpreting the output of a climate simulator, namely, uncertainty about initial conditions, and uncertainty from discretization, or transform truncation, of space/time variables. From my outsider’s perspective, I would prefer an emphasis on attempting to model and analyze only the unique actual climate system, instead of the current practice of running and analyzing a series of mathematical and therefore artificial climate systems. Of course, the same pair of uncertainty sources arise in the combined physical/statistical modeling approach that I am advocating.*

*My suggested model type is captured by the term “hidden Markov model”. The thing that is hidden is the actual past, present, and future of the real climate system, which is the domain of physical thinking and modeling that proceeds forward in time, such as may be represented, for example, by the equations of AOGCMs. Since the real processes are hidden, they cannot be directly simulated. Alongside the hidden system there is empirical data also linked to real space/time, and partially obscured from the actual system by observational error. The goal of hidden Markov analysis is to update posterior probability assessments of the true system, including limited ranges of past, present, and future, given stepwise accrual of empirical data. It is these probability assessments that should be updated sequentially. Once models are specified, this becomes a defined computational task for Bayesian or DS analysis. Fast algorithms are known. They may look like simulations from a physical model, but are conceptually different because they sample the posterior probability assessment from fused empirical and theoretical information sources, typically using MCMC methodologies.*

JC comment: hidden Markov models are something new to me; here is the Wikipedia description.

*One task for the statistical research community is to formulate and implement probabilistic space/time models, from which principles of statistical inference determine posterior probabilistic assessments of the true climate, to whatever level of detail the assumed state space permits. For the past up to the present, hidden Markov analyses, such as the familiar Kalman filter, or more complex versions thereof, fuse the information from the past concerning the actual process with information from the current empirical record. For predicting the future climate, there is no data, so statistical error models are no longer operational. The climate proceeds on its own with probabilistic uncertainty entering only through probabilistic uncertainty about the present state of the actual system. Predictive analysis then proceeds by forward propagation of probabilistic uncertainty.*

*The necessary discreteness of physical models suggests that they might best be regarded as tracking local averages across neighboring space/time regions. Because they are approximations to relations believed to hold for infinitesimal changes across time and space, they reflect model errors arising from the inability to represent natural processes in space/time domains smaller than discretization can capture. Physical modelers typically introduce “parameters” that attempt to adjust difference equations for the missing processes. I sense that statisticians could become more deeply involved in the development of probabilistic representations of such discretization errors, which in effect turn even the non-empirical component of the model into a parametrized stochastic process whose parameters need to be assessed through formal statistical inference tools (e.g., Bayes or DS).*
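As an editorial aside, the hidden Markov fusion Dempster describes is easiest to see in its simplest instance, the one-dimensional Kalman filter he mentions: a hidden true state propagated forward in time and sequentially updated against a noisy empirical record. The sketch below is a minimal toy with invented noise levels, not anything drawn from an actual climate analysis.

```python
import random

# Minimal 1-D Kalman filter: a hidden state evolves as a random walk and
# is observed through noise. This is the simplest case of the "hidden
# Markov" fusion of model and data described in the text. All variances
# below are made-up numbers for the toy example.

def kalman_1d(observations, x0=0.0, P0=1.0, q=0.01, r=0.49):
    """Sequentially update the posterior mean of the hidden state."""
    x, P = x0, P0
    estimates = []
    for y in observations:
        P = P + q                 # predict: propagate uncertainty forward
        K = P / (P + r)           # Kalman gain: weight on the new datum
        x = x + K * (y - x)       # update: fuse prediction and observation
        P = (1 - K) * P
        estimates.append(x)
    return estimates

# Simulate a hidden random walk and a noisy observational record.
random.seed(0)
truth, x = [], 0.0
for _ in range(50):
    x += random.gauss(0.0, 0.1)   # process noise, std 0.1 (so q = 0.01)
    truth.append(x)
obs = [t + random.gauss(0.0, 0.7) for t in truth]   # obs noise, std 0.7

est = kalman_1d(obs)
# The filtered estimates track the hidden state more closely than the
# raw observations do, which is the point of the fusion.
```

For climate-scale problems the scalar state would be replaced by a high-dimensional one and the exact update by ensemble or MCMC approximations, as the excerpt suggests.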

**The Problem of Chaos**

*Presumably most detailed physical climate models of the atmosphere predict chaotic instabilities in the real world climate system, analogous to those that make longer range weather forecasting a low skill enterprise. How might these instabilities impact the task of devising credible probabilistic predictions of long term trends in future climates? My response is to make a radical proposal, linking the recognized difficulty of predicting chaotic systems as the future time horizon grows with a fundamental change in how probabilities should be degraded on the same time scale.*

*The proposal is based on a weakening of Bayesian theory that I originally developed in a series of papers in the 1960s. Further developments were spearheaded by Glenn Shafer in the 1970s and 1980s, who gave the theory an AI spin, and named it the theory of belief functions. I now prefer to call it the DS (for Dempster-Shafer) calculus. It appears to be gradually gaining increased recognition and respect. A detailed exposition of DS is not possible in this note, but I wish to draw attention to two basic features of the DS system. The first is that probabilities are no longer additive. By this I mean that if p denotes probability “for” the truth of a particular assertion, here some statement about a specific aspect of the Earth’s climate in the future under assumed forcing, while q denotes probability “against” the truth of the assertion, there is no longer a requirement that p + q = 1. Instead these probabilities are allowed to be subadditive, meaning that in general p + q < 1. The difference 1 - p - q is labeled r, so that now p + q + r = 1, with r referred to as the probability of “don’t know”. (Note: each of p, q, and r is limited to the closed interval [0,1].)*
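As a small sketch of the arithmetic in the paragraph above (the numbers are invented, not from any real assessment), the DS triple simply tracks support for, support against, and the residual “don’t know” mass:

```python
# The (p, q, r) triple from the text: p is probability "for" an
# assertion, q is probability "against", and subadditivity (p + q <= 1)
# leaves r = 1 - p - q as the probability of "don't know".

def ds_triple(p_for, q_against):
    """Return (p, q, r), enforcing the DS constraints from the text."""
    assert 0.0 <= p_for <= 1.0 and 0.0 <= q_against <= 1.0
    assert p_for + q_against <= 1.0, "DS probabilities may be subadditive"
    return p_for, q_against, 1.0 - p_for - q_against

# Hypothetical numbers: 50% support for, 20% against, 30% "don't know".
p, q, r = ds_triple(0.5, 0.2)
```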

*It is only in the last two years that I have focused on trying to explain what is meant by the DS concept of “don’t know”. I was helped when I ran across a reference to the following remarks by economist John Maynard Keynes, remarks that I believe have not been taken sufficiently seriously:*

*By “uncertain” knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty; nor is the prospect of a Victory bond being drawn. Or, again, the expectation of life is only slightly uncertain. Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability, waiting to be summed. (Excerpted from “The General Theory of Employment”, Quarterly Journal of Economics, February, 1937, pages 209-223.)*

*A second fundamental feature of the DS calculus is a particular “rule of combination”, or principle, for combining information from different sources, such as physical models and empirical data concerning past and present. The DS rule is linked to a DS concept of independence, which might therefore be regarded as a severe restriction on its applicability, except that many special cases of the rule are routinely used with little overt concern about independence, including Bayesian combination of likelihood and prior, and Boolean logical combination. Independence in the mathematics of ordinary additive probabilities is another special case. Most models in the burgeoning field of applied probability can be viewed as constructed from many independent components.*
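A hedged sketch of the rule of combination on the smallest possible frame, {T, F} (the truth or falsity of a single assertion). The two input mass functions below are invented for illustration; nothing here is taken from an actual model or dataset.

```python
from itertools import product

# Dempster's rule of combination: intersect the focal sets of two
# independent mass functions, multiply their masses, discard the mass
# that lands on the empty set (conflict), and renormalize.

def combine(m1, m2):
    out, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            out[inter] = out.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb       # mass assigned to contradiction
    return {k: v / (1.0 - conflict) for k, v in out.items()}

T, F = frozenset("T"), frozenset("F")
TF = T | F    # the whole frame: the "don't know" focal set

m_model = {T: 0.6, TF: 0.4}            # hypothetical "physical model" source
m_data = {T: 0.3, F: 0.2, TF: 0.5}     # hypothetical "empirical" source
m = combine(m_model, m_data)
# Combination sharpens support for T while retaining some "don't know" mass.
```

Bayesian updating falls out as the special case in which both mass functions put all their mass on singletons.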

*The DS rule of combination is a powerfully inclusive tool of probabilistic analysis, with potentially important applications to probabilistic climate prediction. In particular, I hypothesize that DS-style probabilities of “don’t know” could come to be a basic way to separate unpredictable from predictable aspects of climate change. From the DS perspective, Bayesian inference cannot do this in a satisfactory way. A simple illustration may help to support my argument. It is easy to find on the web beautiful discussions and simulations of simple chaotic systems, beginning from the simple 3D model popularized by the late Ed Lorenz. The basic problem is that small perturbations in initial conditions often grow into large perturbations over “time”. A Bayesian approach puts a prior on the initial position, which may be tiny, yet soon is projected forward to a much more spread out marginal distribution. The result is typically a limiting predictive distribution over the whole system. In a weaker DS framework, the prior may simply be a small region about the initial point, where you “don’t know” where the true initial condition is in the small region, i.e., you have r = 1 for the (p, q, r) of that region. As time progresses, your predicted “don’t know” region with r = 1 grows, possibly taking over the whole system. It looks to me pretty obvious that the DS option of a logical (i.e., nonprobabilistic) analysis is needed to represent the fade-out of predictability for chaotic systems. Additive Bayesian predictive posterior distributions are unable to function in this way.*
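The fade-out of predictability Dempster describes is easy to reproduce numerically. The sketch below integrates the Lorenz-63 system (standard parameter values) from two initial states one part in 10^8 apart, using a crude forward-Euler step; the step size and horizon are arbitrary choices for illustration only.

```python
import math

# Two Lorenz-63 trajectories started a tiny distance apart: the gap
# grows roughly exponentially, so a small "don't know" region about the
# initial state inflates with lead time, as argued in the text.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)       # perturb x by one part in 10^8
gap0 = math.dist(a, b)
for _ in range(2000):             # integrate roughly 20 time units
    a = lorenz_step(*a)
    b = lorenz_step(*b)
gap = math.dist(a, b)
# gap is now many orders of magnitude larger than gap0, though still
# bounded by the size of the attractor.
```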

**The Problem of “Complex Systems”**

*Climate prediction not only has the problem of nonlinearity in dynamical systems, but also shares with the analysis of typical real complex systems the equally great trouble associated with the presence of a huge panoply of variables, subsystems, and possible feedbacks in play. I remember Rol Madden commenting, back in the 90s when I used to visit NCAR, that it would be a sheer accident if numerical experiments with GCMs were to give credible quantitative representations of the real climate, presumably including future effects of increasing GHG concentrations.*

*Suppose that the system was defined at the outset to include the full carbon and hydrological systems. Then add atmospheric and ocean chemistry, and then living and breathing systems everywhere. Much “don’t know” abounds simply about the present and recent past, let alone the fundamental problem of getting quantitative about the future.*

*What does this say about research priorities? Given the real climate system, characterized not only by fundamental “don’t know” coming from dynamical nonlinearities, but also fundamental “don’t know” coming simply from an inability to supply meaningful evidence-based priors in the presence of complexity, I believe that communities faced with needs for real predictions of complex systems should be investing in models of probabilistic prediction that provide measures of the type of “don’t know” described by Keynes, including the DS approach as a leading candidate for numerical implementations.*

**JC comments**: there are some powerful and new ideas here of relevance to climate modeling, notably the formal inclusion of “I don’t know”. I don’t quite understand all of this or how it might work in the context of climate modeling; I look forward to your interpretations and discussion.

The reason you don’t understand him is that he is not writing very clearly, and that we do not yet know how to solve the problem of the mathematics of complex systems – but his underlying arguments are correct.

As someone who works in economics, which relies upon the “Dynamic Stochastic General Equilibrium” model and its related calculus, and is even worse at prediction than weather forecasting, I can suggest expanding the problem he’s discussing to that field as well. And we will universally tell you that models are absolutely NOT predictive. Models are useful in explaining data. They help us understand the relationships that can emerge from data. But they are absolutely not predictive. Period. Models are properties of the data employed in them. They are not predictive. Or rather, they are currently non-predictive enough that we should not base policy on them. Economists will only say “we have nothing better yet”. They will not say their models are predictive. And if they do, they are trying to sell you something with a poor chance of return.

This fragility of prediction is Nassim Taleb’s warning in The Black Swan. It is also the source of the debate between Stephen Jay Gould and Richard Dawkins, wherein Gould suggests that evolution is driven by punctuated equilibria and Dawkins (wrongly) disagreed. Hume addressed this issue as “The Problem of Induction”: we cannot measure what is yet unknown. Astrophysicists have the benefit of long time scales and the error of aggregation working in their favor, but we still do not understand whether string theory, some variant of it, or something else altogether determines the behavior of matter in the universe. Mandelbrot attempts to solve the problem somewhat with fractal mathematics. But the truth is that our understanding of the primary causal relations in the mathematics of real world complex systems is simply too primitive to solve this category of problems.

Models are not predictive. They are explanatory. They are historical. But they are not predictive.

Curt, I agree with most of your argument, but on models not being predictive, it depends, of course, on what you mean by predictive – if you count policy assessment through scenario development as a type of prediction, then there are plenty of models that do that (our own model E3MG attempts to do this). In this way, models might say if X, then Y, though agreed that is predictive only in a weak way.

http://mitigatingapathy.blogspot.com/

There is nothing weak about a contingent prediction of the form “if X, then Y.” All predictions have this implicit form. For example, they assume the world will not be wiped away in the next second by a supernova.

Sorry, my message was very sloppy – I meant weak in the sense of being contingent on unrealistic assumptions: “if a tax pricing carbon at $500 a tonne is introduced, then…” for example.

http://mitigatingapathy.blogspot.com/

Without input to this thread from professional climate modelers, it will be hard to reconcile Dempster’s views with divergent opinions in the modeling community. I expect there will be areas of agreement and disagreement – the agreement revolving around notions of chaos and uncertainty, and the disagreement based on constraints on climate behavior founded in physics (e.g., the laws of thermodynamics) even when mathematical constraints may be absent. Unlike Curt Doolittle, I believe the constraints do provide for a degree of predictability, albeit never as good as one might wish, and the empirical data are consistent with this interpretation. This could be argued endlessly, but some expert perspective on all sides would be most helpful, because non-experts are likely to argue from strongly held preconceived beliefs founded on less than the full range of available theoretical and observational evidence, with any informative resolution difficult to achieve.

It is hard to see how the constraints can create predictability. Presumably the models obey the constraints, but their results range from almost no warming to 10 degrees or more. The onset of ice ages must also obey them, so temps may drop by 20 degrees or so within these constraints. That 30 degree range is hardly a prediction.

Hmm,

I see a 30 degree range as a prediction. I’m guessing Curt hasn’t done much work with physical models for prediction. They don’t emerge from the data.

GCMs are ab initio models, derived from first principles or physical laws.

So, for example, you ask me to predict how long a given car will take to run around a race track? I don’t go out and start by collecting a bunch of data and deriving a model from that data. I start with physics.

The first cut at the model will be crude. The car will be a point mass. The track will be flat. The model will make predictions. They will be wrong.

Then we look at what physical principles we missed. Oh, we forgot to model drag on the car. We model drag, but we don’t have a measured Cd. That’s OK, we guess at it until we can get actual data. We have tools for guessing; the frontal area is a good guide. Then we realize that we have to bank the corners and calculate friction, and then model how tires and suspension work, etc.

In the beginning the model is crude. A few physics equations. It’s wrong. We might USE it to get a handle on the importance of key things:

1. what generally happens if we increase horsepower?

2. what happens if we change tire compound?

Oops, we can’t answer #2 because it’s not in the model. So we model that.

Model development of complex physical systems is an iterative process. It never ends, because models are not true; they are useful. You have to SPECIFY the use before you evaluate the predictive skill of the model.

My aircraft, for example, may run a predictive model to estimate the distance I can fly till I run out of fuel. It does this by integrating a physical model forward based on certain assumptions:

1. You won’t hit any headwinds or pick up any tailwinds.

2. Your engine will burn fuel as it’s designed to.

3. It will produce thrust as it’s supposed to.

4. The atmosphere won’t change.

5. Your wings won’t fall off and you won’t get shot down.

It does predict. That prediction is useful. The underlying assumptions are almost never met, EXACTLY. Will it predict accurately to the foot? Nope.

It doesn’t have to. Does it warn me if I’m in danger? Yup.

Can I build a model that predicts how long it will take me to drive across town? Yes. The first model is a very crude physical model: as the crow flies.

Distance = Rate * Time

Time = Distance/Rate

I plug in the locations and it tells me that across town is 10 miles. I guess I can travel at 30 mph, and the answer is that it takes me 20 minutes to go across town. My appointment is in 5 minutes. Is my model useful? Yes: it tells me that I’m going to miss my appointment. So I call.
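The crude cross-town model above is literally two lines of arithmetic; here it is as code, with all inputs (10 miles, 30 mph, 5-minute deadline) taken from the example:

```python
# Crude "as the crow flies" trip model: Time = Distance / Rate.
# Inputs (10 miles, 30 mph, 5-minute deadline) come from the example above.

def trip_minutes(distance_miles, speed_mph):
    return 60.0 * distance_miles / speed_mph

eta = trip_minutes(10, 30)     # 20 minutes to cross town
appointment_in = 5             # minutes from now
will_be_late = eta > appointment_in
# Crude as it is, the model answers the question that matters: call ahead.
```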

I want to build a better model, so I model the actual roads, and I find that the actual route across town is 10.3 miles. I model stoplights, I model traffic. These models are built from a combination of physical law, probability theory, and some actual data. They make predictions. We use those predictions; they are skillful, usually. But how do I model accidents? For that I need to assimilate real-time data.

So I think Curt gets it quite wrong. We build physical models of the world all the time, and they do in fact predict. They do. They produce an output for a time that hasn’t occurred yet. The question is: is that output useful for some purpose? Tough question.

Steve –

“In the beginning the model is crude. A few physics equations. It’s wrong. We might USE it to get a handle on the importance of key things.”

Which is where we’re at with present-day GCMs. And what we should be doing with them. But we (or rather, the climate modelers) are not doing so. Instead, they’re telling the world that their models are sufficiently advanced that they can make real predictions that are accurate enough to set policy. Horse puckey. You know better than that and so do I. Not saying that they’re totally useless, only that they’re crude and incomplete. And likely to stay that way unless/until the “CO2 is the main (only significant) driver” attitude is overcome. CO2 has a role – but it’s not the leading lady as has been claimed. If it were, 20-30 years of research would have been more than sufficient to have nailed it down. And it hasn’t. So who is trying to “USE it to get a handle on the importance of key things”? Insanity is………..

I think you miss the point. The first point is that many people who have never worked with or built models continue to misunderstand them. Yet they rely on them daily. They are a part of our lives.

The second point is this: the first thing one would do is specify what measures one was trying to predict and why. And then one would specify what the acceptable uncertainties would be.

So for policy, for example, how close does a prediction of sea level have to be? Within a mm? Hardly. Within 10 cm? Within a meter?

So, before you start saying the models are not accurate enough, you have to work the problem BACKWARDS. One meter of sea level rise over 100 years would make quite a difference. So you start with the change you want to detect. We want, for example, to be able to predict a 30 cm rise in sea level over 100 years with 95% confidence.

When you have that metric, then of course you can begin the task of evaluating a model’s usefulness.

When you start from the premise that you want a simulation of REALITY, well then you will never get that. There’s one reality. Everything else is just a model.

Jim

“CO2 has a role – but it’s not the leading lady as has been claimed. If it were, 20-30 years of research would been more than sufficient to have nailed it down.”

That’s an odd statement claiming some knowledge. Saying that CO2 is not the leading lady necessarily implies that you have an understanding (model) that shows this is the case. Further, asserting that 20-30 years of research is enough is also odd. I find no law of logic, no physical law, and no empirical evidence that indicates 20-30 years is enough. How in god’s name would you go about validating such a claim? You might think or suppose that 20-30 years is enough. How do you test that?

Curt is using predictive in a special narrow sense, as am I. He clarifies this by saying “They are not predictive. Or rather, they are currently non-predictive enough that we should not base policy on them.” In short, predictive means good enough for important policy purposes. You, on the other hand, seem to be using predictive in the very broad sense of any statement about the future. Since we are using the word differently, we may not disagree.

In my case it is clear that a 30 degree range is not a policy relevant prediction. I also agree with Curt that economic models are not good enough for policy making, nor are climate models. To return to the thread topic, Dempster helps explain why this is so.

30 degrees is policy relevant if the policy question is this:

Should we plan to leave Earth because it will get 30 degrees warmer in the next 100 years?

You see, it’s quite easy to make any sort of model useful. A crude model is useful to answer some stupid policy questions.

You only have a car. You are 100 miles away from the next gas station. Your car averages between 10 and 30 MPG. You have one gallon of gas in the car. You’re sitting at a Shell station. It’s your policy to never buy gas at Shell. Do a crude model of how much gas you will burn getting to the next station. You know nothing about the terrain. Do you change your policy? Of course.
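The same point in code form: even the crudest range model settles this policy question decisively (the numbers are the ones from the example above).

```python
# Crude range model: Range = Gallons * MPG. With 1 gallon aboard,
# 100 miles to the next station, and mileage known only to lie between
# 10 and 30 MPG, even the best case leaves you stranded.

def range_miles(gallons, mpg):
    return gallons * mpg

worst = range_miles(1, 10)        # 10 miles of range
best = range_miles(1, 30)         # 30 miles of range
must_break_policy = best < 100    # even optimistic mileage strands you
```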

Your difficulty is that you look at proposed policies and you see that those policies are costly. You don’t like the policy, so you demand a lot of precision in the model. Instead, free your mind. Start from the question of what size of change in sea level would be interesting or probative to policy, ANY policy, not just the ones you don’t like.

The question of which policy is correct should be divorced from the question of how much certainty we need before we act.

Sorry, but I confine my time to present policy issues. I have a war to win.

I agree about Curt’s use of the word predictive. A model measures behavior relative to the terms used to define it. If a measured outcome differs from the “model” then either there is a missing or newly introduced variable that has not been accounted for, or one of the variables was poorly understood to begin with. There are some things that cannot be known. You can only predict the outcome of a horse race — you cannot “model” it.

Well, I think I’d have to disagree on that, and with Curt above. He says in only one place that models aren’t predictive enough. In three or four other places he emphasizes an absolute: “They help us understand the relationships that can emerge from data. But they are absolutely not predictive. Period. ”

This is not true, in my opinion, but it’s an interesting debate that comes up a lot in model building. Nassim Taleb’s point wasn’t that models can’t predict. His point is that models predict the rule, but they don’t predict the exception. If the model is “all swans are white,” then that model will be right most of the time. And it will be wrong every time there is an exception – every time a black swan comes along. Fortunately, this is rare. If it wasn’t rare, it would be built into the model.

And I also disagree about the point of building a model. Models help us to understand, yes. But they also help to predict. These are two sides of the same coin. We strive to understand the world so that we can predict the world. This is true for computer models, physical models, even mental models. A “rule of thumb” is a model … a statement of what is likely to happen: a prediction.

In fact, I would go further. I would claim that when models are used to understand or explain, it is so our predictions become better, but that models cannot produce explanation. Rather, explanation informs the models.

Oh, sure – there might be the special case where we understand this rule or that, and then we put together all the rules in a computer and generate the outcome. Then the model “explains” what happens when all these rules interact. But this is only because the computer can calculate the complex inputs better than we can.

And we might tweak the input parameters until the outcome is exactly right, within some error. So the model might be used to tweak our understanding. But as anyone who builds models knows, there are LOTS of potential combinations that produce the right output. The broad strokes have to come from us, not the models.

The problem with the word “predict” is that it implies that an event can have multiple outcomes. This is untrue. Every event is a discrete observation with only one possible outcome — the outcome that is observed. This is not a statement of determinism, but it does define the limitations of objective knowledge. There are things that we cannot know. There are things that are nothing more than guesses — it doesn’t matter if you assign a percentage to it and pretty it up with fancy terms, it’s still a guess.

Mosh – I see a big difference here in that race car dynamics, engineering and physics are pretty well studied and understood. Modern telemetry provides a lot of feedback data for analysis and comparison to expected performance. Areas of discrepancy are (or are not) bottomed out, and the results or otherwise are publicly seen in on-track performance. From what I see on the climate blogs, climate models are overestimating the rate of temperature change, and they appear to be operating in a poorly understood and immature area of science. IMO at this stage it would be a mistake to link any policy to them, except perhaps: “Society needs high quality, open and verifiable scientific research into weather and climate”.

Well, I used a simple example to get people to understand the silliness in saying that climate models don’t predict. Also, I think it’s a good example for understanding how the requirements for a model need to be specified up front. Also, we cannot let the perfect be the enemy of the good enough.

For example, San Francisco is going to allow a bunch of building on Treasure Island. Given what we know about potential tsunamis on the West Coast, and given that we might see up to 1 meter of sea level rise, I think it’s prudent to put both of those concerns on the table in doing the EIR. If our best science (immature as it is) tells us we have a chance of something happening, I think that policy makers should take that under advisement.

Simply, if a local community wants to take the warnings of AGW seriously in its policy decisions, I think they can.

That’s a different question from the level of knowledge I need to impose my will on people outside my community. Make sense?

Ok, now when King County, WA wants to eliminate a waterfront road (against the objections of the locals, and marooning the only local fishing dock) because of some resolution that was passed by some committee by a bunch of mainlanders mumbling something about taking climate change into account in future projects, is that equally wonderful?

See the problem? Once you invite those critters to the table, you never get rid of them.

The question is not whether they can, but whether they should. The answer is no. Believing the models without good reason is the heart of the problem.

A 30 degree range (over any time scale) may be a prediction, but it is not one that has any value. You don’t know whether to bring a coat or put on your shorts.

As I said, it depends. If I tell you the range will be from 20C to 50C, you know not to invest in a snowmobile.

Most knowledge is useful for someone, somewhere, in some situation. But there might be some useless knowledge, like whether there is an odd or even number of stars.

You’ll make better arguments if you avoid words like: any, always, no, never, impossible.

This is so far off base, I don’t know where to begin. I will give you the benefit of the doubt and assume that you have modeled some very simple things in your line of work, but your comments and examples make it very hard to give you that benefit. I will give you credit for getting a few high-level generalities about modeling right, but that shouldn’t be hard for anyone who knows even the least about models. Not much more, though.

1) You get the fact that models start out being crude or not very useful. It is not true in all cases that crude models are not useful, but I won’t quibble with you. It depends on what you set out to model and to what accuracy. If the accuracy needed for the particular application you have in mind is provided by the crude model, that is all you need.

Right after that you get just about everything wrong, starting with the statement: “It does predict. The prediction is useful”. You have no idea what it predicts or whether it is useful until the model has been validated and its error bands have been derived. No one will consider your model worthy simply because what you think is the right physics has been used to build it. It is actually possible to show, in straightforward mathematical terms, that a flip of a coin predicts better than your model, or that the cumulative error margin of the model output is so bad that it is worse than hiring a clairvoyant.

The question is not whether the model starts with complex math or physics underlying the entity you are trying to model. You could just start with a dumb model that says every leap year the temperature will rise by 0.2 degrees, without showing the underlying physics or math that led to that model. The question is how close that model comes to predicting the results. Until your model has been shown to predict the entity’s behavior with the degree of accuracy needed to answer the questions you are pursuing, your model is not useful, no matter how accurately you think it captures the physical phenomena and however complex the physics or math behind it.

This is even more true of complex system modeling, where the error in each subsystem you are modeling can accumulate to such horrendous degrees in the whole-system behavior that you won’t even give it the credence you give to clairvoyants, much less call it a “useful prediction” of any kind, however crude. This is true even when you manage to show that your individual subsystem models agree quite accurately with measurements.

But your examples are trivial, and the conclusions you reach about model usefulness from them are incorrect.

Fred,

I think the point of the article is that ‘professional climate modeler’ is an oxymoron.

I doubt that Dempster would agree with you. He is trying to make a serious point.

I think that every graduate student at some point is exposed to the Statistician’s Lament: the scientist goes to the statistician with her data and asks “How can I analyze my data?” The statistician answers “You can’t.” So the rule, more honored in the breach, is ‘go to the statistician BEFORE you collect your data,’ so that it can be collected in such a way that it can be properly analyzed later.

Steve McIntyre has created a cottage industry out of making this point to the paleo-climate crowd. Apparently, the problem extends into modelling as well. One wonders what professional statisticians do with their time – they certainly do complain of not being asked by scientists to apply their skills when appropriate.

Seems all rather obvious in hindsight, but alas, given Dempster’s own description as lying outside the statistical mainstream, it’ll all be dismissed as the words of a Statistical Truths Denier (not to mention the vague undertones of Climate Denial throughout the presentation. He forgot to mention how dire the situation is, and there’s no list of canaries in coal mines!!).

Next frontier: break the mental (and knowledge) barrier that makes climatologists believe they can run computer models with not a computer programming specialist in sight. It is the same identical story as with statisticians, really, with climate scientists specializing in the improper use of obsolete tools, without any clue about decades of development in another discipline and with machines too powerful for the users’ own good.

“… with not a computer programming specialist in sight.”

Yes, there is a particular kind of programming specialist that any system, and most especially any complex system, requires: a system tester. And computer models with numerous variables, numerous relationships among those variables, need great volumes of testing to ensure that the feedbacks among different relationships and variables actually perform the way “reality” does (rather than just the way that the “experts” expect it to perform).

Testing requires careful differentiation in the model results when any single variable or group of related variables changes in a certain way. Does it happen that way in the real world, taking account of all feedbacks and their impact upon other variables? A broad assumption, for instance of the sort that an increase in the percentage of atmospheric CO2 results in an increase in global temperatures, needs empirical evidence to be verified against, taking into account changes in absorption by the ocean, growth of forests and grasslands due to the increased CO2, the resulting net effect on CO2 and on the albedo of the regions affected, the resulting change in absorbed or reflected solar energy, etc., etc.

How do the modelers KNOW that the net effect of all of these subsequent changes as specified in their model actually matches what happens in reality? If they haven’t shown that the results are accurate in a short-term situation, then they don’t really know that their model works over the long term. I suspect that as long as it gives them the results they want to see – that net temperatures rise in the model – then that is good enough for them, regardless of whether they can show that such actually happened in the real world under the exact conditions of their model run. Programs need to be tested to verify that they produce realistic results.
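As a minimal sketch of what such a reality-referencing test might look like (everything here – the stand-in model, the “observed” numbers, and the tolerance – is invented for illustration, not taken from any real GCM):

```python
# Hypothetical validation check: compare model output to observations.
# run_model, OBSERVED and TOLERANCE are all invented for illustration.

def run_model(co2_ppm):
    # Stand-in for a model run: a toy linear temperature response.
    return 14.0 + 0.01 * (co2_ppm - 280)

# Invented "observed" global mean temperatures (C) at three CO2 levels.
OBSERVED = {280: 14.0, 330: 14.5, 380: 15.0}

TOLERANCE = 0.3  # degrees C; an assumed acceptance band

def validation_failures():
    # Return every case where the model misses the observation by more
    # than the tolerance; an empty list means this check passes.
    return [(ppm, run_model(ppm), t_obs)
            for ppm, t_obs in OBSERVED.items()
            if abs(run_model(ppm) - t_obs) > TOLERANCE]

print(validation_failures())  # → []
```

A real test suite would run the full model against curated observational datasets; the point is only that the pass/fail criterion references reality, not the modeler’s expectations.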

Further to Dempster’s analysis, West & Scafetta’s 2010 books may help with methods to grapple with the statistical complexity of climate:

“Disrupted Networks: from physics to climate change,” Bruce J. West and Nicola Scafetta, World Scientific Publishing Company (2010).

“Fractal and Diffusion Entropy Analysis of Time Series: Theory, concepts, applications and computer codes for studying fractal noises and Lévy walk signals,” Nicola Scafetta, VDM Verlag Dr. Müller (May 28, 2010)

Scafetta shows statistical evidence for solar influence on climate beyond conventional climate models.

Empirical evidence for a celestial origin of the climate oscillations and its implications

He addresses synchronization of coupled harmonic oscillators.

Scafetta and West (2010) address misunderstandings by Rypdal and Rypdal on such complexity analysis. Comment on ‘‘Testing Hypotheses about Sun-Climate Complexity Linking’’

There appears to be a need for much more effort on grappling both with the major statistical issues involved (as highlighted by Dempster and Scafetta) and with identifying natural causes that can have strong impacts on climate far beyond what is currently included in climate models (per Scafetta and Svensmark).

Paul.

Keynes, who wrote “A Treatise On Probability”, said: “The social object of skilled investment should be to defeat the dark forces of time and ignorance which envelope our future.” Investment being a problem of prediction. Prediction being a problem of time. Time increasing the effect of external forces. External forces being a function of amplitude, frequency and decay.

Of course there are models that work, if sufficiently simple. All formulae are models to some extent. The question that challenges any model is the amplitude and frequency of external forces not accounted for in the model’s assumptions, or easily derived from the data, and the method of its capture.

So, I would say that we’re in agreement if you say that models are tools that provide us with understanding, but only because prediction (rather than the often misstated ‘repeatability’) is the only test of scientific accuracy, and scientific accuracy is, in turn, the only test of our understanding. (So to speak.) But I’ll stick with the fact that models are demonstrably not predictive. They are simply the best tool that we have, given our understanding and the vast complexity required of complex systems.

Curt – Weather prediction more than one day in advance is far more accurate than it was several decades ago, based on fairly complex models that resemble climate models in some ways (and are very different in others, including their greater dependence on initial conditions to minimize chaos-induced errors). I don’t think one can generalize about model predictive skill, or even the skill of the complex models, because it depends on their objectives, the timeframe involved, and the availability of accurate input data. It’s certainly fair to say that model prediction is far from perfect, and that complex models will remain imperfect, but I believe that claims they will not be able to make reasonably accurate predictions may turn out to be an inaccurate prediction.

Fred,

‘Weather prediction more than one day in advance is far more accurate than it was several decades ago, based on fairly complex models that resemble climate models in some ways…’

While this is true, I wonder how much of this fact is due to probabilistic trends in real world data. That is, the model predictions depend on both forecasting models of the actual physics and historical data on what similar weather patterns have done in the past. In the last several decades, our ability to take tons and tons more weather data has improved with satellites. That helps us understand that when a specific front moves over Des Moines, there are only so many possibilities that have occurred with a given frequency.

I think there have also been great strides made in computing power, which greatly helps the physical forecasting from specific initial conditions, very similar to some climate models. I’m just not sure how much of the improvement in model predictions is those changes and how much is our improved historical dataset.

Maxwell – I’m sure it’s both, but I’m particularly impressed by the ability of forecasters, aided by models, to continually update the predicted track and intensity of hurricanes on an hourly basis, even though it’s doubtful that any hurricane exactly follows the trajectory of a previous one. I believe this requires what is essentially a modeling exercise – formulating algorithms based on general physical principles, parametrized with the aid of historical data, that can be applied to specific new data to yield predicted outputs even when the data, and consequently the outputs, are unique to the particular storm being tracked.

Here’s something I found on a quick browse – Hurricane Prediction Models.

Let me know when Scafetta agrees to release his code. Until then, it’s not very interesting work.

Errata

Nicola Scafetta

“Empirical evidence for a celestial origin of the climate oscillations and its implications”

Journal of Atmospheric and Solar-Terrestrial Physics 72 (2010) 951–970

http://www.fel.duke.edu/~scafetta/pdf/scafetta-JSTP2.pdf

Suppose it’s true that, due to well-studied physics and feedbacks and the insignificance of unknown feedbacks, carbon dioxide will lead to an increase in global mean surface temperature consistent with a climate sensitivity of 3ºC, and that the increase in temperature we’ve seen so far is indeed a consequence of the present increase in GHGs, equivalent to maybe 430 ppm CO2.

If we ran an idealized data set consistent with the above hypothesis through Dempster’s “hidden Markov” machinery, would that machinery be able to make a prediction about the different consequences for temperature in 100 years of continuing to emit CO2 vs. not continuing to emit CO2?

If so, I’d like to know how. I don’t see how it can make the necessary causal connections.

So an explanation of gravity in the past, is no basis for predicting gravity in the future?

No: as gravity (as we currently understand it) is constant (in relation to certain factors), there is no prediction required – only calculation. There’s a difference.

The simplest way to see the problem is how I put it above.

Model the time it will take you to drive from point a to point b, in a car with a top speed of 60mph.

You don’t collect data and model the data (a statistical model); you build a physical model based on physical law. Then you refine as necessary.
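The physical version of this trip-time example can be sketched in a few lines; the numbers (speed limit, light count, waits) are made up, and the point is only the progression from a crude physical bound to a refined physical model:

```python
# A toy version of the "physical" trip-time model described above.
# All numbers (top speed, speed limit, light count, waits) are invented.

def trip_time_v1(distance_miles, top_speed_mph=60):
    # Crudest physical bound: time can't beat distance / top speed.
    return distance_miles / top_speed_mph  # hours

def trip_time_v2(distance_miles, speed_limit_mph=35, n_lights=4,
                 avg_light_wait_hr=1 / 60):
    # Refinement: obey the speed limit and add expected time at lights.
    return distance_miles / speed_limit_mph + n_lights * avg_light_wait_hr

print(round(trip_time_v1(10), 3))  # crude lower bound, hours
print(round(trip_time_v2(10), 3))  # refined estimate, hours
```

Each refinement adds physical structure (speed limits, lights) rather than fitted data – the distinction the comment is drawing.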

And also, gravity is a MODEL; all physical laws are models.

In some views of things, gravity doesn’t exist.

http://bigthink.com/ideas/22877

Nor does rotation or forward motion of the solar system.

All of which has NO bearing on current science.

Currently precipitation HAS developed a very interesting pattern.

I like this simplified example of a model. Let’s assume you actually use this basic model for recommending some government policy. But when you “model the time it will take you to drive from point a to point b, in a car with a top speed of 60mph,” and compare your results to actual data collected when a series of such trips are made, what do you do when your model does not reflect the actual data? Ever?

Let’s say you have a bunch of physical models of the same simple problem, and none of them have been able to predict subsequent trip times. Does that tell you something about your models? Or do you deny that they need to be “validated” because your critics just don’t understand your sophisticated models? “Don’t worry, our models are fine, they will begin to be accurate in 50-100 years of travel. Trust us.”

Or can a critic logically say there is something wrong with your physical models because none of them can accurately predict the elapsed time of a series of trips? How explanatory are the climate models if they are not able to accurately predict the overarching issue for which they are being urged as evidence – the rate of global warming? If the prediction is wrong, it’s a pretty good guess that the explanation is as well.

Or do you deny that they need to be “validated” because your critics just don’t understand your sophisticated models? “Don’t worry, our models are fine, they will begin to be accurate in 50-100 years of travel. Trust us.”

What I’m describing is the FIRST STEP in validation: defining the conditions and decisions the model will be used for. Now, of course, that hasn’t been done. In the absence of that exercise, one side has said “they are good enough” and another side has said “they are horrible.”

Typical situation when there is no requirements doc.

Hi, Steven. I would say instead that your method is one way to build a model, but that you could also build a statistical model. In fact, I would go further, and say that this – a statistical, rule-based approach – is how most models are built.

Think about your example of driving across town. You “guess” how fast you can drive in town, and how long the lights take, etc. This guess is statistical … you don’t really drill down that much to first principles (how many cars are likely to be at each light, reaction times, propensity to speed, or even – absurdly – the performance range of every car likely to be in front of you). You take the statistics – the rule of thumb – that each light takes about X minutes, and there’s four lights, etc.
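The rule-of-thumb version of the same trip might look like this – invented past-trip data used to fit a single “minutes per light” statistic, with no first-principles physics at all:

```python
# The statistical ("rule of thumb") counterpart to the physical model:
# fit delay-per-light from past trips instead of deriving it from
# first principles. The past-trip data are invented.

past_trips = [  # (lights on route, total delay at lights in minutes)
    (2, 3.9), (4, 8.2), (3, 6.1), (5, 9.8),
]

# Least-squares slope through the origin: minutes of delay per light.
delay_per_light = (sum(n * d for n, d in past_trips)
                   / sum(n * n for n, _ in past_trips))

def predicted_delay(n_lights):
    # Prediction for a new route comes purely from the fitted statistic.
    return n_lights * delay_per_light

print(round(delay_per_light, 2))  # roughly 2 minutes per light
```

Nothing here knows why lights take about two minutes; the statistic simply summarizes the data, which is the point being made.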

In your earlier example, I WOULD use statistics to tell you about how well a certain car will perform on a certain track under specific conditions. I would gather data on a bunch of cars, use the data to figure out the pertinent differences between cars (controlling for drivers, weather, etc.) and make a prediction based on the distinguishing characteristics of the car in question.

You could use fundamental physics to design, say, a radically different fuel system. You may even have a hunch that the new system is better, because of X, Y, and Z. But you’re still gonna test drive it, just to see. It’s too easy to overlook a bunch of important things with fundamental principles. The new fuel system may make handling harder, for example, and it never reaches the potential that your fundamental principles predicted.

Reminds me of my favorite engineering joke: In theory, theory and practice are the same; in practice they aren’t.

I think historically many (most) physical laws were determined by interpreting experimental data and choosing a model that best explained the data. The exception to this seems to be primarily in the area of theoretical physics, but these new theories were/are mathematical extensions of previous theories, derived to be consistent with the existing theory. They would also interpret the old experimental data equally well. Physicists have been using statistical analysis of data for longer than there have been statisticians.

Doesn’t that just mean it’s a very simple model?

No, I don’t think so. I could be WAY off here, but in this example gravity is pretty much a constant – you’re not ACTUALLY modelling anything, you’re just determining its state according to a set number of criteria which behave in set ways.

Modelling (the climate) relies on chaotic, poorly understood principles being interpreted in such a way as to give (fairly) accurate representations of the ‘state’ of that system. It’s (in this context) used in place of knowledge to simulate unknown future changes and states.

There is a significant difference (to my simple mind) in a) the level of knowledge in both examples and b) the required outcomes.

Hope that makes sense.

I’m not ashamed to admit that the middle and last two sections of that post went above my head (with a sonic bang), but I think I got the gist of it. Just.

I REALLY like the HMM idea; it’s something I’ve never come across before, and although I’m not sure how you’d apply it to modelling the climate, or how effective it would be, my gut tells me it is a far better approach than relying on the (false) assumption that all the inputs are a) known and b) quantifiable – as the current climate scientists ‘believe’.

My question for the OP would then be: once operating under an HMM, is it then possible to ‘hone in’ on some of the unknown factors? I.e., to try to identify the more significant aspects (I’m of course aware that identifying the significance of unknown factors is very difficult – but could the HMM be used to demonstrate that there ARE unknown factors at play, and their likely strengths?)

Also, the (repeated) point that climate scientists are not only NOT statistically qualified but also barely mathematically qualified (the latter more alluded to than openly stated – at least that’s my interpretation) is very important.

Finally, the inclusion of a ‘we don’t know’ probabilistic outcome is VERY important too, as I think the almost binary response of the Bayesian method is very limiting, especially when it is applied to chaotic systems.

Very interesting stuff- which of course will be dismissed out of hand by the establishment.

I’ve played around with HMM in speech recognition, very cool.
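For readers who haven’t met HMMs: the core machinery is small. Here is a toy two-state example with the forward algorithm, which computes the likelihood of an observation sequence given hidden states; all the probabilities are invented for illustration:

```python
# Toy hidden Markov model: two hidden weather regimes, observable only
# through what we see outside. All probabilities are invented.

states = ("wet", "dry")
start = {"wet": 0.5, "dry": 0.5}
trans = {"wet": {"wet": 0.7, "dry": 0.3},   # P(next state | state)
         "dry": {"wet": 0.3, "dry": 0.7}}
emit = {"wet": {"rain": 0.8, "sun": 0.2},   # P(observation | state)
        "dry": {"rain": 0.1, "sun": 0.9}}

def forward(observations):
    # alpha[s] = P(observations so far, current hidden state = s)
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: sum(alpha[r] * trans[r][s] for r in states) * emit[s][obs]
                 for s in states}
    return sum(alpha.values())  # total likelihood of the sequence

print(round(forward(("rain", "sun")), 4))  # → 0.1985
```

In speech recognition the hidden states are phonemes and the observations are acoustic features, but the recursion is exactly this one.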

That’s interesting to know – tell me, what’s your impression of the system? Can you, as I queried above, use it to determine that there ARE significant unknowns at play (without necessarily having to identify them)?

Could be a really useful tool if that’s the case.

The way he uses “don’t know” is somewhat confusing, since the phrasing itself implies that we may find out in the future. But he uses it to mean “can’t know” — i.e., unknowable.

There’s a huge difference. When the r expands to nearly 1, you must then admit that you are dealing with intractable mystery!

Formally including “don’t know” appears to me very problematic, as it is taken to mean “don’t really have any preference”. In practice it’s not possible to draw a clear line like that. “Don’t know” very often means that I have some idea and some preferences, even a vague estimate of a PDF, but nothing precise. Replacing that with an absolute “don’t know” may lose essential valid information.

Dempster had in his presentation the example of a Lorenzian chaotic system, where he picked a small region, stating no knowledge within the region but excluding the outside. An alternative approach might have been a PDF of initial values. The two alternatives may lead in such examples to very different results, and I’m not at all convinced that the DS framework result is the better one in describing the state of knowledge.
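For concreteness, the (p, q, r) bookkeeping under discussion can be sketched for a binary question. This is a minimal rendering of Dempster’s rule of combination on a two-element frame – p for “true”, q for “false”, r as mass on the whole frame (“don’t know”) – with illustrative numbers, not anything from the presentation itself:

```python
# Sketch of combining two (p, q, r) sources on a binary frame using
# Dempster's rule of combination. Numbers below are illustrative.

def combine(src1, src2):
    p1, q1, r1 = src1
    p2, q2, r2 = src2
    conflict = p1 * q2 + q1 * p2            # mass assigned to the empty set
    k = 1 - conflict                        # normalisation constant
    p = (p1 * p2 + p1 * r2 + r1 * p2) / k   # combined support for "true"
    q = (q1 * q2 + q1 * r2 + r1 * q2) / k   # combined support for "false"
    r = (r1 * r2) / k                       # both sources say "don't know"
    return p, q, r

# A source 60% sure of "true" combined with a totally uninformative
# source (r = 1) is left unchanged:
print(combine((0.6, 0.1, 0.3), (0.0, 0.0, 1.0)))  # → (0.6, 0.1, 0.3)
```

The last line illustrates the property that distinguishes “don’t know” mass from a 50/50 prior: a vacuous source adds nothing, whereas averaging in a flat PDF would dilute the first source – which is the crux of Pekka’s objection.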

Rol Madden’s confession about the existing models seems refreshingly honest:

“I remember Rol Madden commenting, back in the 90s when I used to visit NCAR, that it would be a sheer accident if numerical experiments with GCMs were to give credible quantitative representations of the real climate, presumably including future effects of increasing GHG concentrations.”

Having said that, I would imagine that developing a new model from scratch using a new methodology would not be straightforward. Many of the existing models have evolved over many years, so it would be difficult to just knock up something completely new in quick time.

Does it make any sense to say we can have understanding without thereby also implying we have the ability to predict? To some extent anyway – it needn’t be perfect, of course.

For a list of all the Newton Institute seminars on modeling, see Seminars

Regarding Hidden Markov Modeling, I’m not sure how that could utilize important information about past processes determining heat distribution in the ocean, particularly the deep ocean, which play a critical role in climate evolution but would be very difficult to deduce from ongoing observations simply designed to update prior probabilities. Other examples are likely to exist as well, including deep ocean buffering and carbonate changes relevant to ocean acidification from rising atmospheric CO2.

I get sick of the brain-dead argument that the climate models are the best tool we have. The reality is, if your best tool still isn’t adequate for the intended purpose, then you shouldn’t use it, because it might be telling you lies. That this happens all the time in environmental modeling doesn’t seem to phase anyone, but it should. However, it’s even worse in the case of climate models because of the sheer number and uncertainty of the input parameters. The model becomes no more than a fancy mathematical wrapper around your beliefs and assumptions. The modelers quite simply fool themselves. As Christy continually points out, every test with real data proves that the models are hopelessly inadequate on nearly every level. So any paper that uses model results for impact predictions, instead of e.g. the statistics of past events, is utterly worthless.

Wish I’d written faze instead of phase.

In ecology, it has been known for decades that parametric uncertainty can propagate through a dynamic ecosystem model, and a large literature exists on this topic. Climate modelers seem to resist evaluating parametric uncertainty propagation (and the accompanying numerical method error). Instead, they want to take multiple models and only compare their output (but again, they want to take the highest warming trend as alarming while ignoring the lowest one). Multiple recent studies show that at regional scales, or for predicting hurricane trends, the models do not agree with recent historical data or with each other, sometimes by orders of magnitude, but still it is insisted (with no evidence) that 100-year global forecasts are OK.
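The Monte Carlo approach to parametric uncertainty propagation that the ecology literature uses is simple to sketch. The “model” below is a one-line stand-in with an assumed parameter distribution, purely to show the mechanics of draw, run, and summarize:

```python
# Monte Carlo propagation of parametric uncertainty through a toy model.
# Both the "model" and the parameter distribution are invented stand-ins.

import random

random.seed(1)  # reproducible draws

def toy_model(sensitivity, forcing=3.7):
    # Invented response: warming scales linearly with sensitivity.
    return sensitivity * forcing / 3.7

# Assume the uncertain parameter is Normal(mean=3.0, sd=1.0):
# draw it many times, run the model each time, sort the outcomes.
draws = sorted(toy_model(random.gauss(3.0, 1.0)) for _ in range(10_000))
low, high = draws[250], draws[-251]  # roughly a 95% interval

print(round(low, 1), round(high, 1))
```

The spread of outcomes, not a single run, is the model’s answer; with a real model each draw is a full simulation, which is exactly why the exercise is expensive – and why skipping it is tempting.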

I do like the idea of a “don’t know” category.

I don’t think I understand what makes this controversial, or even terribly interesting. From what I can tell, including “r” is analogous to including uncertainty bars, since the examples offered don’t present “don’t know” as a third truth value, but merely as a variable for holding certain information we have about the truth values. For example, one example offers testimony about color from a person with red-green color blindness as a source of color data for the proposition “red or green.” Obviously, that is experimental uncertainty, not anything inherently problematic about color values.

Although this may well be topical, since, I do believe, climate science generally (like economics) needs to become a lot more sensitive to what it doesn’t know, far more provocative would be an accounting for propositions that simply are neither true nor false, but in a third category. Now that would be provocative, since about 80% of everything we think we’ve proved mathematically relies on the axiom that all propositions are either true or (xor) false.

I would think that changing the math of probability theory is significantly different from adding uncertainty bars to narratives.

Yeah, so would I have – that’s rather my point. If all “r” does is quantify our subjective uncertainty – which appears to be the case – it’s not really changing the math of probability. Given the way it was presented, I’d thought it was a rather more radical proposition. The analogy that immediately springs to mind is quantum uncertainty. At first, one is inclined to think of it as merely a limitation on our ability to detect (e.g.) the location of quantum particles. But it turns out that the very concept of “location” simply does not mean the same thing at quantum levels as it does in the macroscopic world. I thought that was the type of distinction Dempster was trying to capture, but I conclude from the “red or green” example propositions that it’s not, really. Objectively, “red or green” will decompose into “red” and/or “green” simply by asking someone who’s not colorblind.

Tackling the issue from another angle, “r” appears to be a very different kind of entity from “p” or “q.” They hold information about the objective world, while “r” only holds information about our subjective vantage point. Which I believe would make Dempster’s work a useful refinement for practical applications of probability to empirical problems, rather than a substantive change to probability theory.

This is Bayesian logic, so all the probabilities are subjective.

We know that not all propositions are either true or false, including the statement “This sentence is false.” My own (naive) uncertainty about uncertainty (and Godel) is the extent to which undecidable propositions can be proven to exist if they are not in some indirect way self-referential. I’ll defer to those who argue affirmatively, but I still wonder.

This has nothing to do with Dempster’s work.

Yeah, I used to raise this issue in math classes to challenge the validity of proofs. Most of my profs simply disagreed – they said all meaningful statements are either true or false, and simply declared self-referential assertions to be mathematically invalid – i.e., “meaningless propositions,” equivalent to phrases like “chapters contact intentions.” Although I’ve tried for years, I haven’t been able to come up with anything that both seems like a meaningful proposition that is neither true nor false and is not self-referential. Certainly nothing that I could express in mathematical terms.

I like the Gödel angle, though. So the statement “this statement is false” isn’t a proposition – it’s an example of a set of parallel equations requiring only one equation to be overdetermined.

Well, Mr Dempster obviously has a problem:

He is either unable or unwilling to communicate.

Probably the latter.

What don’t you understand? More to the point, how much do you know about advanced Bayesian theory? Have you read any of Dempster’s papers?

Dr. Curry,

It would be interesting to have Nicola Scafetta comment on this. Scafetta is both a physicist and a statistician. Also, Orrin Pilkey is very experienced in computer models of shorelines.

I agree with Curt that the original post wasn’t very well written. At least, I had trouble following it. Maybe it’s me.

But since we’re talking about models, I thought I’d add something I’ve noticed lately. I’m starting to worry that climate modelers in general don’t really get the big picture of what they are doing. This is only based on two anecdotal examples, so I hope I’m wrong.

The first example was recently on a climate blog – maybe this one, maybe CA or Watts … don’t remember. But there was a modeler on there and he made a statement about Validation and Verification that I found astonishing. He said that V&V for climate models has nothing to do with making predictions and then comparing these predictions to the real world. He said, instead, that the V&V process is ONLY used to see if the code properly instantiates the known physical theories. If the model successfully matches the predictions of the theory, then the model has been validated.

The second example was in a similar vein. This was from a conference I attended last weekend, and the guy talking was a climate modeler and a self-described skeptic. (Although he didn’t define ‘skeptic,’ so I’m not sure where along the very wide spectrum he meant by that.) He was making a point defending GCMs, and said something like: if you have a particular parameter that represents a real, physical rule, then you cannot set that parameter outside of the acceptable physical range. For emphasis, he then said something like, “If X represents rainfall, and you set X to a negative number, that would be absurd. Even if your model then perfectly replicated the climate, you can’t do that.”

What these two examples have in common is that both people are concerned more with getting the physical properties of the model correct than with producing a useful model. A model that cannot predict is a useless model, in my opinion. It goes against the fundamental purpose of modeling. Even models that purport to explain do so in order to aid prediction. That is, “explanatory” models explain stuff so we understand it better, and in understanding we can make better predictions.

If I had a huge and complicated GCM that didn’t work until I changed the rainfall parameters to negative numbers, and then it worked perfectly? I would use it. I would use negative rainfall all day long and make millions. And I would change the label from “rainfall” to something different. Sure, it would be nice to know what that “something different” really is, and to know why the model works. But knowing why it works should be very much secondary to the fact that it does work.

I feel like I’m stating the bloody obvious. I simply can’t understand the frame of mind that says these models don’t need to predict anything, that that’s not their purpose. I feel like maybe I’m misunderstanding.

So I hope I’m wrong. But I worry that the climate modelers have gotten so used to not being able to predict the climate that they no longer try to predict the climate. Like, maybe they spent millions on a model, and years of work, and they’re very proud of it. And the model shows temperature going up the last ten years, but the temperatures didn’t get the memo and stayed flat instead. Or maybe the model shows that where temperature goes up in, say, Australia, droughts occur. And then the real world shows (I don’t know if this is true, just an example) that rainfall increases with higher temperatures.

These guys don’t like to say all that money and work was wasted – they don’t even want to admit it to themselves, which is perfectly understandable. That’s human nature. So they point out the 10 or 20 innovations in the model and focus on those, and they de-emphasize the missed temperature thing, or the missed drought thing. And next thing you know, prediction is no longer considered essential. V&V now has nothing to do with the real world. And somehow – maybe by group-think or something – everyone else is going along with this.

So those two anecdotes I mentioned above make me worry about this: worry that the climate modelers have lost sight of the big picture. But I hope I’m wrong. Maybe these two guys are the exception. Or maybe I missed a subtle or implicit point. I hope so. Because a climate model that doesn’t predict the climate is pretty useless in my opinion, no matter how accurate the parameter labels are.

Ted, I have expressed my opinion before, and I am not sure it is worthwhile repeating it. But here goes anyway. There is nothing wrong with climate models. The same basic equations are used for short term weather forecasts. Before any commercial aircraft takes off, the pilot must file a flight plan, based on the most recent forecast. Just consider how many flight plans are filed per minute on a worldwide basis.

On a recent flight from Ottawa to Heathrow, the pilot told us that the winds were so strong over the Atlantic, the flight would be half an hour less than scheduled. If we took off on time, we would arrive early, and then wait for permission to land. So we took off late, and arrived on time, saving half an hour of jet fuel.

When the proponents of CAGW looked at the problem all those many years ago, they were faced with an insuperable obstacle; you cannot do controlled experiments on the earth’s atmosphere. What they ought to have said when asked what happens when we add CO2 to the atmosphere was “We just don’t know”. Instead of this, they claimed, in effect, that the output of non-validated models was the equivalent of, and a substitute for, experimental data. This never was, still isn’t, and never will be true.

So, it is not the models that are wrong. It is the misuse of models by people whose academic qualifications mean they ought to have known better.

Ted,

If you had a model making accurate predictions which had negative rainfall, then it is a coincidence or series of coincidences – not predictions.

The goal of predictive climate models is a fool’s errand. It is not now and never will be possible to predict nature 100 years into the future.

Hi, Ron. Good catch … I was presuming a series of coincidences; i.e., it was hindcasting on, say, an annual basis for 100 years. If you make one parameter impossible (e.g., negative rainfall) and the model suddenly clicks into place and gets everything right, then it is more likely that the parameter is mis-labeled rather than just a string of coincidences.

I agree that you can never be sure with your predictions. You might look at the 60-year cycle and think that’s a pretty good rule of thumb, and temperatures will stay level or go down until 2030. But if you don’t know why temperatures have been steadily rising since the LIA, then the unknown parameter could kick in at any time and temps could go down more than anticipated. Of course, most of these hugely complicated models didn’t get even the 60-year cycle right, and they had temps still going up, sometimes even accelerating. Probably, the CO2 sensitivity was too high and the cosmic rays were discounted too much.

So yeah, I take your point. I guess I just disagree as to whether the question is a binary one: i.e., you either can predict nature 100 years from now or you can’t. I think instead it’s a continuum … that is,

how well can you predict nature. I think if you extrapolated from the last 130 years – taking the 60-year cycle into account – you would have a pretty good guess on global temperatures in 2100.

He said that V&V for climate models has nothing to do with making predictions and then comparing these predictions to the real world. He said, instead, that the V&V process is ONLY used to see if the code properly instantiates the known physical theories. If the model successfully matches the predictions of the theory, then the model has been validated.

Steve Easterbrook is who you’re thinking of (blog). He commented a few times here last fall, but unfortunately did not stick around to defend that and a few other questionable comments.

Wow. That’s a pretty damn good memory, Gene. Thanks.

Not a problem. I remember it because it was so disappointing that the conversation didn’t develop. So much of the discussion around models is very dogmatic and not very enlightening. Had someone with a background in it actually engaged, then it could have been a real learning experience.

Yeah, I saw something similar to your V&V anecdote in a documentary about climategate someone linked to. The narrator, after aggressively cross-examining a skeptic, then went to one of the “insiders” – I forget which – and asked him for evidence that the models reflect reality. The guy said, “Look at the cloud patterns. It looks just like the real world.”

I wanted to mock up a program that generated random patterns of golf tee silhouettes, and call it a model of the code of Hammurabi in cuneiform.

Newton’s model for gravity sets the speed of gravity to infinite, which is physically impossible. So, under the rules of Climate Science, Newton’s model of gravity is invalid.

What is more valuable? A model that uses impossible values and is 99.99999% correct, or a model that uses only possible values and performs no better than chance?

The value of a model comes from its ability to predict the future. Period. How it gets there is not important, because of the hidden variable problem.

The reason setting rainfall to a negative number works is because of a hidden variable that has not yet revealed itself. By focusing on only allowing known variables into a forecasting model, this guarantees the model can never be accurate unless and until you know all the inputs, which is unlikely to ever happen.

In other words, the more you try and restrict a model to what is known, the less able the model is to discover what is unknown.

The sound of quail calling in the dark for the rest of the covey,

The sprout that reaches the top of the soil to get to sunlight,

have more sense of purpose than numerical models.

“I don’t quite understand all of this or how it might work in the context of climate modeling, I look forward to your interpretations and discussion.” (JC)

To produce a viable, worthwhile, new prediction one must have sufficient data. The golden crown goes to the ones who find and sort and interpret and guess right first. Ohhhh yes! And publish via some medium that will hold up in court.

It is imperative to factor in all known variables. It is also critical to have a “feel” for where we are and where we are likely to go, in the near future, in the scheme of the “science” we are slogging around in; some call it the “Gift”. Few have it.

PS: Some call it “Luck”. (Heaven loves a duck who can float on the surface, dive and catch something worthwhile more than not, fly with the mob, and produce and raise a new brood.)

I am most impressed by the concept, however it is used, of ‘don’t know’ as a ‘truth value’ of any particular piece of data or element in a calculation. The Keynes quote: ‘About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.’ is [pardon the pun] key.

It is not that we just ‘don’t know’ — but that we acknowledge that there is ‘ no scientific basis on which to form any calculable probability whatever. ‘ In my personal logic system, I have a category of ‘things I don’t know’ but I also have one of ‘things I believe can’t be known [at this time]’. On a personal basis, this allows me to forgo beating my head against a wall trying to find answers to some questions — those in the ‘can’t be known’ category.

So Dempster assigns calculable probabilities of p (probability that this is true) = X, q (probability that this is not true) = Y, and r (the part we can’t know) = Z, so that p + q + r = 1.

The devil is in acknowledging how many of these p+q+r values really have a value of 0+0+1=1 … where we must acknowledge that ‘there is no scientific basis on which to form any calculable probability whatever. ‘ More errors are created by the failure to admit this than by any other means.

Too many papers include the working equivalent of: “we can’t really know for sure and calculations are difficult, but we assign a value of XXX which seems reasonable….”
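Dempster’s p, q, r bookkeeping can be made concrete with a small sketch of his rule of combination on a two-hypothesis frame {T, F}, where the mass on the whole frame is exactly the “don’t know” component r. This is only an illustration; the mass values below are invented.

```python
def combine(m1, m2):
    """Dempster's rule of combination on the two-hypothesis frame {T, F}.

    Each mass function maps 'T' (true), 'F' (false), and 'TF'
    (the whole frame, i.e. the "don't know" mass r) to values
    summing to 1. Returns the combined mass function.
    """
    # conflict: one source supports T while the other supports F
    K = m1['T'] * m2['F'] + m1['F'] * m2['T']
    t = m1['T'] * m2['T'] + m1['T'] * m2['TF'] + m1['TF'] * m2['T']
    f = m1['F'] * m2['F'] + m1['F'] * m2['TF'] + m1['TF'] * m2['F']
    tf = m1['TF'] * m2['TF']
    norm = 1.0 - K  # renormalize after discarding the conflicting mass
    return {'T': t / norm, 'F': f / norm, 'TF': tf / norm}

# a source of total ignorance (p = q = 0, r = 1) and one with some evidence
vacuous = {'T': 0.0, 'F': 0.0, 'TF': 1.0}
evidence = {'T': 0.6, 'F': 0.1, 'TF': 0.3}
```

Note that combining the totally ignorant source with any other source returns that source unchanged, which is exactly the behavior a “don’t know” category should have.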

I will propose a tweak:

p + q + r + s = 1

where s ~=1 is the probability that abstract probabilistic modeling assumptions are fundamentally flawed.

I base the preceding tweak in part on information gathered during 6 years around math/stats departments at 2 universities. The less bright stars in those departments just follow the bright sparks. And the bright spark leaders are doing a lot of wink, wink, nudge, nudge (wool over your eyes style).

As for the cross-disciplinary rift between the disciplines of physics & stats. It’s DEEP.

Dempster’s ideas are valuably stimulating even though s~=1.

This is an interesting topic. However, things may be simpler than they appear. Essentially, computer climate modelers adopt an analytical methodology. The idea is to reconstruct climate change by implementing a computer program that uses a set of known physical equations that presumably relate to the climate’s individual components.

Is this approach scientifically valid? The answer is that we first need to define what “scientific” means. The word “scientific” has several meanings, which are philosophically related, but they should not be confused.

Case #1.

For example, for most climate modelers “scientific” means that one takes a certain number of physical equations referring to some established fundamental physical theories, of the kind commonly found in textbooks. Then, he couples these equations in a way that is assumed to reproduce the Earth’s climate. The model is claimed to be “scientific” merely because the physical equations of which the model is made, taken alone, have been previously validated by proper experimental evidence. Under the above assumption, a climate model appears to be a “scientific” construction.

Case #2.

However, the above definition does not fundamentally fit the proper “scientific character” of a theory, as defined by the scientific method. The scientific method refers to “theories” more than “models” which are mathematical and physical realizations of a theory. A theory is “scientific” if a mathematical model based on it is capable of reproducing the observations and it is capable of forecasting, within some reasonable uncertainty, future observations. No other requirements exist. In particular, the scientific method does not require that a proposed theory must be explained by another theory, although finding an interconnection among physical theories is “elegant”.

What does the above mean? It means that somebody may build a theory (e.g., manifested in a general circulation climate model) based on already scientifically validated physical processes, and nevertheless the theory as a whole is not “scientific” because it does not reproduce the observations nor properly forecast them.

On the contrary, one may propose a theory which is not based on other existing and already validated theories, and nevertheless it is a legitimate scientific theory just because it agrees with the observations and forecasts them.

The problem within the climate science community is that computer modelers consistently advocate the above case #1 while the scientific method requires case #2.

In fact, simply using a set of already independently validated physical equations is not enough for building a “scientifically” valid climate theory. The initial equation set must be “complete”, the coupling and link mechanism equations should be known and complete as well, and various boundary and initial conditions must be known for properly solving the differential equations.

Because the above three conditions are not met, climate modelers try to solve the uncertainty problem by playing with the huge uncertainty of key mechanisms such as climate sensitivity to CO2. Through tuning, they succeed in obtaining a climate model output that vaguely gives a 1 K global temperature increase during the last century, and claim that they have understood how climate works and that the projections of their model for the future should be blindly trusted. Those who question the validity of such an approach and such claims are, in the best case, dismissed as “scientific morons,” and in the worst case, accused of being “criminals against humanity” who must be marginalized from society and the scientific community by any means.

However, the structural uncertainty about how to build a theory of a macroscopic complex phenomenon such as climate change by means of already known fundamental laws of physics and chemistry must be addressed in the way the scientific method allows, that is, using the main definition of “scientific theory” listed in the above Case #2.

That is, one should start looking for a working theory of climate that is not constructed on already known physical microscopic theories. Only once a working theory describing the climate’s macroscopic dynamics is established would it be appropriate to look for its links with other already established physical theories: that is, to look for more conventional physical mechanism explanations.

My papers, where a phenomenological model of climate change is proposed, should be interpreted under the above philosophy. In an above post, Dr. David L. Hagen has summarized some of my works and books where I explain the above philosophy. I thank David.

In particular, in this paper

N. Scafetta, “Empirical evidence for a celestial origin of the climate oscillations and its implications”. Journal of Atmospheric and Solar-Terrestrial Physics 72, 951–970 (2010)

http://www.fel.duke.edu/~scafetta/pdf/scafetta-JSTP2.pdf

it is shown that climate seems to be characterized by a set of periodic limit cycles around which the climate system chaotically fluctuates. One of these cycles is a 60-year cycle. The frequencies of these cycles are found in the natural gravitational oscillations of the solar system. This finding suggests that climate is resonating with and/or synchronized to natural astronomical oscillations. The paper shows that astronomical cycles match the temperature cycles with a probability of 96% and above, while current general circulation models such as the GISS ModelE would reproduce the same temperature cycles with a probability of just 16%.

Is the theory that I have proposed scientifically valid? Probably yes, because it is shown to be able to reproduce the global surface data patterns and has a chance of properly forecasting climate oscillations much better than the GISS ModelE, for example. Thus, my theory matches the main requirement of the scientific method. The theory suggests that about 60% of the warming observed since 1970 is due to natural cycles, while the IPCC, using GCMs, claims that 100% of the post-1970 warming is anthropogenic. My theory reconstructs the cooling, or momentary temperature pause, observed in the data since 2002, whereas the IPCC GCMs have all projected a significant warming that is not seen in the data, and so on.

Is the theory that I have proposed (that the climate is regulated by astronomical cycles) already explained by means of other established theories such as the fundamental laws of mechanics, thermodynamics, chemistry and so on? No, it isn’t.

Does this mean that the proposed theory is not “scientific”? No, because, rigorously speaking, the scientific method does not require that a proposed theory be explained by another theory. Future research may search for interconnections among scientific theories and consolidate, or eventually rebut, a proposed theory.

It is the above crucial philosophical point that many climate modelers may be missing, and the consequence is a twisted version of the scientific method in which a theory is deemed “scientific” only if it is constructed from already established theories, despite the fact that the proposed climate theories (i.e., general circulation models) are found not to be able to reconstruct or forecast climate dynamics.

Thus, I believe that the problem of physical uncertainty cannot be addressed by looking for some mysterious, magic statistical method. People simply need to apply the scientific method in the proper way. That is, if one does not yet understand the microphysics of a complex phenomenon (that is, if analytical computer climate models are not yet satisfactory), one should look at the phenomenon as a whole and try to understand the information implicit in its dynamics by reconstructing and forecasting it first. The microphysics problem is addressed later.

Nicola,

Thank you for commenting. In a less important side note of Dempster’s presentation, he claims there is a large cultural difference between physicists and statisticians. I thought it was an intriguing observation, although I am not sure if it is important. Since you are both a physicist and a statistician, you are familiar with both cultures. I cannot help but wonder if you agree with Dempster’s comments regarding these scientific cultures?

Ron,

I believe that there exists a large cultural difference between physicists and statisticians. The task of physicists is to understand natural phenomena over which they have no control. Statistics is a subset of mathematics; it has its own validity, and statistical theories are constructed by postulating a set of axioms. The axioms are chosen in such a way that one can later develop them into a set of theorems and, in this way, develop a full statistical theory. Internal logical consistency of the statistical theory is sufficient to validate the theory.

Physicists, instead, are bound by the scientific method to develop theories that are in reasonable agreement with natural data.

A problem that I usually notice is an abuse of statistical theories by natural scientists. Natural scientists often do not understand the axioms on which the statistical theories are built. For example, there is no check that the statistical axioms are indeed consistent with the natural phenomenon under study. Thus, there may be a misapplication of the statistical theory.

For example, once I had a sequence of data and I estimated its power spectrum. According to the usual statistical theories, the amplitude of the spectral peaks can be associated with a statistical confidence level. I found a set of peaks with an extremely low statistical confidence level, and the statistical theory would imply that such peaks are indistinguishable from random noise and therefore not relevant. However, later I realized that those peaks had a clear dynamical meaning; it was not noise, but important signal.

The truth was that the statistical theory was based on axioms that did not agree with the dynamics of my physical signal.

So, statistics can be easily misapplied in natural science if one does not understand its framework. And this happens all the time.
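Scafetta’s anecdote about spectral peaks and confidence levels can be illustrated with a toy periodogram (a naive pure-Python DFT; the series and its period are invented). A deterministic cycle buried in noise produces a dominant peak at the right frequency, but whether that peak clears a formal confidence threshold depends entirely on the noise model the statistical test assumes:

```python
import cmath
import math
import random

def periodogram(x):
    """Naive DFT periodogram: power at Fourier frequencies k = 1 .. n/2 - 1."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(1, n // 2)]

random.seed(0)
n = 128
# a deterministic cycle with period 16 samples, buried in unit-variance noise
x = [math.sin(2 * math.pi * t / 16) + random.gauss(0, 1) for t in range(n)]

power = periodogram(x)
peak = max(range(len(power)), key=lambda k: power[k])
# the injected cycle sits at Fourier index k = n / 16 = 8 (list index 7)
```

Under a white-noise null each bin’s power is roughly exponential with mean 1 here, so the signal bin stands far above a white-noise threshold; against a different (e.g. red-noise) null the same peak could be judged insignificant, which is the misapplication Scafetta describes.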

I beg to differ. A scientific theory may be wrong if it doesn’t reproduce the observations or properly forecast them, but that doesn’t make it not scientific.

There is no unique “scientific method” – there are many scientific methods practiced by the diversity of scientists. Some may be better than others (though of course the criteria of quality are themselves controversial), and criticism of the GCM research program is legitimate, but this is a very weak criticism.

Ok Paul,

now you are playing with the definition of the words.

By “scientific” I mean a theory that gives us appropriate knowledge about a given natural phenomenon. This comes from the Medieval Latin “scientificus”, which means “producing knowledge”, or from the Latin “scientia”, that is, “knowledge”. In natural science the knowledge refers to a natural phenomenon.

It is evident that if a proposed theory does not agree with the data that it is supposed to interpret, and/or does not reproduce or forecast further observations, that theory does not give us the proper “knowledge” about the phenomenon that it is supposed to interpret. Thus, in such an eventuality, the theory loses its “scientific” character; it may remain an interesting and possibly correct mathematical, computational or philosophical theory, but no longer a physical theory.

The “scientific method” is only one: look at the data, propose a theory, check that the theory reconstructs and/or forecasts the observations; if relevant discrepancies are found, propose an improved theory or another theory and repeat the process.

When you say “many scientific methods” you are referring to the “techniques” for investigating phenomena, which are many.

Perhaps you would be more satisfied with the word “objective” rather than scientific. Newton’s observations were both objective and scientific — the fact that Einstein offers a better explanation does not denigrate Newton’s achievement, nor does it mean that it is without value. Newton’s observations were true within the context of his knowledge. That’s all that can ever be said of “truth”. Man is not omniscient, and knowledge is stated in the form of discrete, linear propositions.

Jim,

There is a difference between a hierarchical order among scientific theories, in the sense that one is included in another as a subset (as happens between Newton’s classical mechanics and Einstein’s special relativity), and the case of a proposed scientific theory that is just plainly erroneous, and therefore non-scientific, because it does not agree with the data that it is supposed to interpret or has no forecasting capability.

Paul and Nicola, we had an interesting post by Mike Zajko on the scientific method

http://judithcurry.com/2010/11/09/the-scientific-method/

We have various concepts:

– scientific theory

– scientific method

– scientific process

To me the scientific process is the most meaningful. It is the process that leads gradually to the improvement and expansion of our knowledge about reality (where we again have a choice of philosophical views). A method is scientific as long as it contributes positively to the scientific process, and a theory is scientific if it has a role in the scientific process. I think that at least most natural scientists could agree substantially that the above presents necessary requirements.

Trying to go to more detailed definitions of the scientific method and the scientific theory, it is likely that contradictions start to build up. Whenever I have seen lists of requirements for the scientific method, I can soon invent cases that do not satisfy those requirements but which deserve to be considered methodologically valid science. It appears obvious that the same applies to scientific theories: any precise definition is too restrictive. Attempts to define what a scientific theory is tell us what the definer’s subjective experience is and what his or her subjective preferences are. The aim appears to be destructive rather than constructive, as the definition is likely to be used to attack some ideas that the person wishes to attack.

Pekka – I agree with your injunction to avoid unnecessarily restrictive definitions of how science should be conducted. Although I can’t read Nicola Scafetta’s mind, I have the sense he is suggesting that his celestial theory of climate variation deserves priority over model-related estimates of anthropogenic contributions because he interprets it to match the data better and therefore better fits his criterion of what is “scientific”. I don’t happen to agree with his interpretation, but I respect his right to it. However, that is the subject for a D&A post rather than this one, and I don’t think broad generalizations about scientific process are particularly useful in determining the relative merits of competing explanations of a specific set of observations. This is particularly the case when any explanation is likely to account for data only imprecisely, not necessarily because the underlying theory is bad, or because a different explanation can be fit to past data better, but because its predictions are subject to a range of errors that can be attributable to modifying factors. It would not be the case when competing theories are tested against future predictions over the course of long intervals with many opportunities to match reasonably well if valid or deviate radically if erroneous.

Fred,

A related issue is that science proceeds often through theories that are known to be in disagreement with experimental knowledge, but which contain novel valuable ideas. After a while someone will then improve on the theory getting rid of the most serious discrepancies. This is just one example, where a restrictive definition of science might be seriously detrimental for its development.

Any incomplete theory may be valuable in its own way. Few genuinely new ideas are finalized by the time they are first published. There is no way of formalizing rules to define, which theories should be accepted as scientific and which not. Often the best agreement is obtained at an early stage by some worthless model that just happens to fit the data best.

Thanks – I think Fred and Pekka have laid out the relevant issues pretty well. But I doubt Nicola will be persuaded.

Paul,

by what should I be persuaded?

:)

Do you believe that a theory is “scientific” even when it is found to be in disagreement with the data and has no forecasting capability?

Then, everything one thinks would be “scientific”, and science would be reduced to a sophistic discussion. :)

Nicola – you should be persuaded by what Pekka wrote, because it’s true.

“In disagreement with the data” is a relative term, as is “no forecasting capability”. I assume you are still talking about GCMs, which have some agreement with the data and some forecasting ability. They are clearly scientific models, whatever else one says about them.

Paul,

Pekka’s comment is quite reasonable. I simply did not get why you inferred that I could not be persuaded by such an argument.

Theories evolve in time and improvement is always possible. This works for any theory. Why do you think it is so hard to understand it?

I do not have anything against GCM in general.

The problem is that the “current” IPCC GCMs suffer from their incapacity to properly reconstruct climate oscillations, and from still-huge unresolved uncertainties such as, for example, the climate sensitivity to CO2 doubling. Thus, the scientific knowledge we can get from the current GCMs is still quite limited.

Therefore, the proponents of the AGW theory, which is based on such models, should also be more open to alternative theories.

Concerning the flaws with the Bayesian approach that Dempster wishes to correct:

Suppose we have a competing set of models M1,…,Mn and some observational data D. Application of Bayes’ theorem results in

p(M1|D)=p(D|M1)p(M1)/p(D)

p(D) = sum p(D|Mi)p(Mi) (a sum over all possible models)

If we consider 2 models M1,M2 then the normalizing constant is given by

p(D) = p(D|M1)p(M1) + p(D|M2)p(M2)

which corresponds to the p+q=1 case (after normalization) of two models which partition the space of all possible models. For models M1,..Mn (possible infinite) one can also write

p(D) = p(D|M1)p(M1) + p(D|M2)p(M2) + p(U)

p(U) = sum [p(D|Mi)p(Mi)], where Mi is neither M1 nor M2

Here p(U) represents the unknown. This corresponds to the p+q+r=1 case. How does one go about assigning a value to r or p(U)? What benefit does this provide when someone could simply claim r is extremely small?

Now, in Bayesian model selection it is recognized that one cannot in general enumerate all possible models M1,…,Mn, and so this type of analysis is not possible (and one wants to avoid assigning an arbitrary r or p(U)). Instead one presents the Bayes factor

p(D|M1)/p(D|Mi), where Mi is any other model

Bayes factors rank COMPETING models but do NOT tell us explicitly that a model has a high probability. It should be pointed out that interpreting Bayes factors can be controversial as well. Bayes factors can be computed because p(D) is not required, and so p(U) does not need to be guessed. Contrary to Dempster’s claim, this is a Bayesian way of recognizing ignorance about what we don’t know. There is no misleading statement about a model having a high probability just because we cannot think of anything else. I do not see how a proper application of Bayesian model selection is inferior to the approach of Dempster, or how it will be of benefit to climate modeling in any practical way. That being said, I am not claiming that the Bayesian (or any statistical) methodology has been correctly used in the climate science community.
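As a toy illustration of the Bayes-factor machinery above (nothing climate-specific; the two models are invented for the example), compare M1, a coin that is exactly fair, against M2, a coin whose bias has a uniform prior. Both marginal likelihoods have closed forms, so p(D|M1)/p(D|M2) is computable without ever specifying p(D) or p(U):

```python
from math import comb

def bayes_factor_fair_vs_uniform(k, n):
    """Bayes factor BF12 comparing M1 (theta = 0.5 exactly) against
    M2 (theta ~ Uniform(0, 1)) for k heads observed in n flips.

    Marginal likelihoods:
      p(D|M1) = C(n, k) * 0.5**n
      p(D|M2) = integral of C(n, k) * theta**k * (1 - theta)**(n - k)
                over theta in [0, 1], which equals 1 / (n + 1)
    """
    p_d_m1 = comb(n, k) * 0.5 ** n
    p_d_m2 = 1.0 / (n + 1)
    return p_d_m1 / p_d_m2
```

For 50 heads in 100 flips the factor favors the point-null fair coin; for 80 heads it swings sharply toward the alternative. In both cases the factor only ranks the two competing models and says nothing about their absolute probability, which is exactly the point made above.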

TomJ

Thank you.

It’s for posts like this and the one below by Paul Baer that I come to Climate Etc.

Thanks Bart. Glad I’ve had some time to contribute, and that you’ve found it worth while.

But there is a lot of noise here too! (And plainly I’m a sucker for some of it.)

Dempster’s work, and similar work, addresses some interesting and arguably very important issues, but it notably fails to address probability estimates of continuous variables, like the climate sensitivity.

Consider the following question: what is the probability distribution for the climate sensitivity, as conventionally defined?

With such an estimate, and a damage function, one can draw a policy conclusion based on an expected utility calculation. Using a different probability distribution, one can draw a different conclusion. But there isn’t any obvious way in which “don’t know” can enter into this calculation.

I do think that the question of how we choose among alternative probability distributions for parameters like the climate sensitivity – which, it must be kept in mind, are descriptions of our ignorance rather than descriptions of a stochastic property of reality – is a very important one. But unless Dempster has applied his theory to continuous variables, I don’t see how it helps us.

Paul, we discussed probability of climate sensitivities here

http://judithcurry.com/2011/01/24/probabilistic-estimates-of-climate-sensitivity/

Thanks – I saw that when it first came out, but it was in January and I was teaching two new classes. I’ve only rejoined the debate this week because my lecturing in those classes is over (thank heavens for student presentations).

I actually know that literature very well (part of my dissertation was on that topic) so it would have been impossible for me to make a short contribution!

Clearly that thread and this are closely related.

“don’t know” can enter the characterization of the climate sensitivity distribution as follows: if the estimates of sensitivity are simply opinions, this represents a probability distribution of “don’t know”. If, as Roy Spencer argues, the methods by which climate sensitivity have been derived in the past are invalid, then one could argue that the sensitivity can not be estimated by those methods (fundamentally unknowable with present methods, until we figure out the proper way to do it).

Paul:

In the context of using parametric or nonparametric models for climate (as Dempster recommends trying with HMMs), the prior distribution one uses for a parameter is considered part of the model itself in the Bayesian approach. As such, two otherwise identical models with different prior distributions on a particular parameter can be compared using Bayes factors, for example, and thus it should be possible to determine the better prior. Specifying priors that represent our opinions is the hard part (such as the well-known problem of defining a prior that represents complete ignorance about a parameter). Perhaps I do not understand the point you are trying to make here.

Although I am no statistician, introducing the “don’t know” concept into GCMs as suggested by Arthur Dempster appears to make good sense to me, in view of the many uncertainties and unknowns related to our climate.

The comment by the first poster, Curt Doolittle, points out:

There is a basic question, still unanswered, that is critical to the whole idea of establishing a robust statistical CO2/temperature correlation, and hence a strong argument for causation.

Here is a link to an essay entitled “Is climate chaotic or random?”

http://www.scienceheresy.com/2011_03/chaoticorrandom/ICCOR.pdf

This essay debates whether our climate follows the deterministic chaos model (as assumed by the IPCC) or the random walk model (as suggested by some statisticians, such as Gordon + Bye 1991). The author makes the point that under a deterministic chaos model the variance does not increase with time (i.e. with the number of throws of the die), but under the random walk model it does, making the outcome less predictable the further one goes into the future. [Nassim Taleb makes a similar comment about the problems with long-range predictions in his book The Black Swan.]

Not being a statistician, I cannot vouch for any of this, but the logical reasoning appears to make sense.
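The variance claim is easy to check numerically. The sketch below (my own illustration, not from the essay) compares an ensemble of Gaussian random walks, whose spread grows roughly linearly with time, against an ensemble of logistic-map trajectories standing in for deterministic chaos, whose spread stays bounded on the attractor:

```python
import random
random.seed(0)

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

N, T = 1000, 200  # ensemble size, number of time steps

# Random walk: x_{t+1} = x_t + eps; ensemble variance grows like t.
walks = [[0.0] for _ in range(N)]
for w in walks:
    for _ in range(T):
        w.append(w[-1] + random.gauss(0, 1))

# Logistic map (a stand-in for deterministic chaos): trajectories stay
# on a bounded attractor, so ensemble variance stays bounded.
maps = [[random.random()] for _ in range(N)]
for m in maps:
    for _ in range(T):
        m.append(4.0 * m[-1] * (1.0 - m[-1]))

print("walk var at t=10 :", variance([w[10] for w in walks]))
print("walk var at t=200:", variance([w[200] for w in walks]))
print("map  var at t=10 :", variance([m[10] for m in maps]))
print("map  var at t=200:", variance([m[200] for m in maps]))
```

The random-walk variance at t=200 comes out roughly twenty times its value at t=10, while the chaotic ensemble's variance hovers near the same bounded value at both times, which is the distinction the essay draws.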

This was published by the on-line magazine, Science Heresy, whose editor is John Reid, a retired PhD in atmospheric physics. For a link to the home page of the site see:

http://www.scienceheresy.com/

Max

thanks for the link

Max,

Good links. I just want to point out that there are only two ways a climate scientist can hold to the IPCC view described as “deterministic chaos”:

One – if they are unaware of the large natural variation of climate history of the planet.

Two – if they are aware but are a victim of double-think (holding two contrary beliefs in the mind at the same time).

Double-think, like group-think, is a major problem in climate science.

There appears to be a lot of discussion here on the question of whether or not (climate) models are “predictive”.

I would suggest that the past decade gives a good empirical check.

Max

Well, since I did mention Dempster-Shafer on a previous thread I should at least drop by and comment on this one!

Dempster-Shafer methods are simply an extension of existing Bayesian methods. They remove some, but not all, of the issues surrounding this type of analysis. It is simply a toolbox defining terms and relationships between the terms, and it requires an underlying model and many subjective assumptions to feed into it.

It is curious to see many people defend Bayesian approaches as somehow better than a Dempster-Shafer approach. Dempster-Shafer is a generalisation of Bayesian methods; in fact, a Bayesian approach is a special case of the Dempster-Shafer approach. So if someone complains that some aspect of Dempster-Shafer is difficult to define, they have to understand that they are still defining it by choosing Bayesian methods; they are still making an equally arbitrary assumption, but perhaps they feel more comfortable because it is hidden in the assumptions of the method rather than having to be explicitly stated.
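For readers who want to see the generalisation concretely, here is a minimal sketch of Dempster's rule of combination (all numbers hypothetical). When every focal set is a singleton, the rule reduces to the normalised product of two probability distributions, i.e. ordinary Bayesian conditioning; mass on larger sets carries the extra "don't know" structure:

```python
def combine(m1, m2):
    """Dempster's rule: intersect focal sets, multiply masses, and
    renormalise away the conflict (mass falling on the empty set)."""
    out, conflict = {}, 0.0
    for s1, a in m1.items():
        for s2, b in m2.items():
            s = s1 & s2
            if s:
                out[s] = out.get(s, 0.0) + a * b
            else:
                conflict += a * b
    return {s: v / (1.0 - conflict) for s, v in out.items()}

A, B = frozenset({"A"}), frozenset({"B"})
AB = frozenset({"A", "B"})

# Singleton-only masses (the Bayesian special case): Dempster's rule
# is just the normalised product of the two distributions.
prior = {A: 0.5, B: 0.5}
likelihood = {A: 0.8, B: 0.2}
print(combine(prior, likelihood))  # mass 0.8 on A, 0.2 on B

# With mass on the whole frame ("don't know") in both inputs, some
# mass stays on the frame after combination: uncertainty is retained
# rather than forced onto one hypothesis.
print(combine({A: 0.3, AB: 0.7}, {B: 0.2, AB: 0.8}))
```

The first call reproduces exactly what Bayesian updating would give; the second has no Bayesian counterpart, which is the sense in which the Bayesian machinery is the constrained special case.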

Techniques like Dempster-Shafer and Bayesian methods work very well where the requirements of the model are easily specified and understood. A common example is data fusion. You know what your data is describing, and you can create a model of it, and you have a full and clear understanding of how that model relates to the real world.

But when looking at more fundamental, structural issues about how climate is viewed – such as the valid issues raised by Tomas Milanovic and others – there is no way you can cater for these in a model.

Ultimately, both Dempster-Shafer and Bayesian methods suffer from this problem. The models cannot be fully articulated. Arguably, Dempster-Shafer is better in this situation because the hypotheses you are choosing from need not be complete. But in practice, even for Dempster-Shafer, the model is still subjective and is ultimately a better representation of the bias of the experimenter than any meaningful measure of the climate.

And I’m really struggling to see a compelling application of HMMs to climate models. Sequential Monte Carlo methods, likewise, are popular buzz terms in my field at the moment. They have their own narrow uses, but are often applied far and wide because they are in vogue and often attract funding. I think I will remain sceptical until I see some concrete evidence of a meaningful application here.

I will paraphrase Jaynes here: Bayesian analysis is a methodology by which two people with the same data, model, and priors will arrive at the same result (posterior). My issue with r still stands: it is arbitrary. I do not claim that Bayesian methods let me define r any better; my example was simply to point out that Bayesian model selection already recognizes that there are unknown models (i.e. that r exists), and because of this, Bayesian methodology only compares models we know (it solves a different but still useful problem). How does limiting the discussion to comparing defined models with defined priors amount to Bayesian methodology making hidden, arbitrary assumptions?

As for HMMs or non-parametric methods in climate modelling, the reasoning is as follows: these methods use simple building blocks which are combined to produce a more complicated model. Once the building blocks are defined, this approach attempts to define a model from the data directly; in effect, it attempts to build a model for you from the data. In the context of HMMs, the aim is to determine the transition probabilities of the hidden Markov process along with each state’s output distribution. This concept of allowing the data to build the model is very appealing, and is why so much research is done in the field, but HMMs have not proven to be a holy grail in the areas in which they have been used.
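A minimal two-state example (hypothetical “warm”/“cool” regimes with made-up numbers) may help show those building blocks: a transition matrix between hidden states, a per-state output distribution, and the forward recursion that combines them to score an observed sequence:

```python
# A toy two-state HMM: hidden regimes, transition probabilities,
# per-state output distributions, and the forward algorithm.

states = ["warm", "cool"]                     # hidden regimes (illustrative)
init = {"warm": 0.5, "cool": 0.5}
trans = {"warm": {"warm": 0.9, "cool": 0.1},  # persistence within a regime
         "cool": {"warm": 0.2, "cool": 0.8}}
emit = {"warm": {"hi": 0.7, "lo": 0.3},       # discrete outputs per state
        "cool": {"hi": 0.2, "lo": 0.8}}

def forward(obs):
    """Forward algorithm: p(observations), summed over all hidden paths."""
    alpha = {s: init[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit[s][o] * sum(alpha[r] * trans[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())

print(forward(["hi", "hi", "lo"]))  # likelihood of the observed sequence
```

Fitting an HMM to data means choosing `trans` and `emit` (e.g. by Baum-Welch) to maximise this likelihood; that is the sense in which "the data builds the model" once the building blocks are fixed.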

TomJ,

If you look at the mathematical formalisation of DS methods, you will see that by adding some simple constraints, DS methods collapse onto Bayesian methods. These additional constraints are the “hidden assumptions” that are implicitly made.

As I pointed out, in situations where it is reasonable to define a model – say, disparate sensor data fusion, where something is known about the type of objects being sensed – approaches like Bayesian and DS are useful because an objective model can be defined in a reasonable way. With the structural issues at hand for climate science, there are too many “hidden assumptions”. Of course two people using the same model and priors will get the same result, but so what if they disagree on which model and priors are correct? For example, Bayesian methods have been used to estimate climate sensitivity, but others argue that linearisation of climate sensitivity is meaningless in a system with complex nonlinear dynamics. The different methods we refer to are valuable tools for solving a problem, but we are putting the cart before the horse: we don’t know which problem to solve.

DS should have some advantage in this situation over Bayesian methods, since there is an allowance for this uncertainty, and I suspect expressing this uncertainty is the goal of its introduction. But as you rightly point out, there is no way of objectively defining it.

I have seen HMMs and SMCs usefully applied, but again they have a narrow application and they tend to be used far too widely. I appreciate lazy researchers like methodologies where they can just throw data at a problem and hope the methodology comes out with a meaningful answer. It reminds me of the popularity of using Neural Networks to solve just about every problem you could think of in the 1980s. Perhaps someone will surprise me at some point and make a breakthrough in climate using them, but I’m not going to hold my breath.

Dr. Curry,

Talking about predictions – WUWT has a hoot of a story about 50 million climate refugees by 2010, predicted in 2005. It seems some journalist decided to look for them in the countries said to be most at risk. The only problem is that, according to recent censuses, populations are up across the board. The UN tried to disappear the prediction from their website, but it was found in the Google cache, and the enlarged map was not disappeared at the same time (although I believe it is gone now). Check it out. I think it is worth a blog post here.

http://wattsupwiththat.com/2011/04/15/the-un-disappears-50-million-climate-refugees-then-botches-the-disappearing-attempt/