How far should we trust models?

by Judith Curry

In economics, climate science and public health, computer models help us decide how to act. But can we trust them? – Jon Turney

Aeon has a very good article by Jon Turney entitled A model world. Excerpts:

 As computer modelling has become essential to more and more areas of science, it has also become at least a partial guide to headline-grabbing policy issues, from flood control and the conserving of fish stocks, to climate change and — heaven help us — the economy. But do politicians and officials understand the limits of what these models can do? Are they all as good, or as bad, as each other? If not, how can we tell which is which?

In this new world of computer modelling, an oft-quoted remark made in the 1970s by the statistician George Box remains a useful rule of thumb: ‘all models are wrong, but some are useful’. He meant, of course, that while the new simulations should never be mistaken for the real thing, their features might yet inform us about aspects of reality that matter.

 ‘The art is to find an approximation simple enough to be computable, but not so simple that you lose the useful detail.’

Because it’s usually easy to perform experiments in chemistry, molecular simulations have developed in tandem with accumulating lab results and enormous increases in computing speed. It is a powerful combination. 

More often, though — and more worryingly for policymakers — models and simulations crop up in domains where experimentation is harder in practice, or impossible in principle. And when testing against reality is not an option, our confidence in any given model relies on other factors, not least a good grasp of underlying principles.

[W]e seem increasingly to be discussing results from models of natural phenomena that are neither well-understood, nor likely to respond to our tampering in any simple way. [A]s Naomi Oreskes notes, we use such models to study systems that are too large, too complex, or too far away to tackle any other way. That makes the models indispensable, as the alternative is plain guessing. But it also brings new dimensions of uncertainty.

First, you might be a bit hazy about the inputs derived from observations — the tedious but important stuff of who measured what, when, and whether the measurements were reliable. Then there are the processes represented in the model that are well understood but can’t be handled precisely because they happen on the wrong scale. Simulations typically concern continuous processes that are sampled to furnish data — and calculations — that you can actually work with. But what if significant things happen below the sampling size? Fluid flow, for instance, produces atmospheric eddies on the scale of a hurricane, down to the draft coming through your window. In theory, they can all be modelled using the same equations. But while a climate modeller can include the large ones, the smaller scales can only be approximated if the calculation is ever going to end.

Finally, there are the processes that aren’t well-understood — climate modelling is rife with these. Modellers deal with them by putting in simplifications and approximations that they refer to as parameterisation. They work hard at tuning parameters to make them more realistic, and argue about the right values, but some fuzziness always remains.

When the uncertainties are harder to characterise, evaluating a model depends more on stepping back, I think, and asking what kind of community it emerges from. Is it, in a word, scientific? And what does that mean for this new way of doing science?

What’s more, the earth system is imperfectly understood, so uncertainties abound; even aspects that are well-understood, such as fluid flow equations, challenge the models. Tim Palmer, professor in climate physics at the University of Oxford, says the equations are the mathematical equivalent of a Russian doll: they unpack in such a way that a simple governing equation is actually shorthand for billions and billions of equations. Too many for even the fastest computers.

The way to regard climate models, Edwards and others suggest, is — contrary to the typical criticism — not as arbitrary constructs that produce the results modellers want. Rather, as the philosopher Eric Winsberg argues in detail in Science in the Age of Computer Simulation (2010), developing useful simulations is not that different from performing successful experiments. An experiment, like a model, is a simplification of reality. Deciding what counts as a good one, or even what counts as a repeat of an old one, depends on intense, detailed discussions between groups of experts who usually agree about fundamentals.

Of course, uncertainties remain, and can be hard to reduce, but Reto Knutti, from the Institute for Atmospheric and Climate Science in Zurich, says that does not mean the models are not telling us anything: ‘For some variables and scales, model projections are remarkably robust and unlikely to be entirely wrong.’

But we might have to resign ourselves to peering through the lens of models at a blurry image. Or, as Paul Edwards frames that future: ‘more global data images, more versions of the atmosphere, all shimmering within a relatively narrow band yet never settling on a single definitive line’.

Generalisations about modelling remain hard to make. Eric Winsberg is one of the few philosophers who has looked at them closely, but the best critiques of modelling tend to come from people who work with them, and who prefer to talk strictly about their own fields. Either way, the question is: ought we to pay attention to them?

Reto Knutti in Zurich is similarly critical of his own field. He advocates more work on quantifying uncertainties in climate modelling, so that different models can be compared. ‘A prediction with a model that we don’t understand is dangerous, and a prediction without error bars is useless,’ he told me. Although complex models rarely help in this regard, he noted ‘a tendency to make models ever more complicated. People build the most complicated model they can think of, include everything, then run it once on the largest computer with the highest resolution they can afford, then wonder how to interpret the results.’ 

JC comments:  I think Turney’s analysis is insightful, and very well written to serve the public understanding of this complex issue.

The epistemology of computer simulations is a growing subspecialty in the philosophy of science, and we are even seeing the development of a community of philosophers of science that focus on climate modeling.  I have been avidly reading this literature, and Eric Winsberg is definitely someone who is providing insights.

There is also a growing number of climate modelers, and of climate scientists who use and examine climate models, who are considering these same issues regarding the epistemology of climate models. This reflection, both from within and beyond the community of climate modelers, is very healthy for using these climate models effectively to advance scientific understanding and for uses in decision making. Many of the posts at Climate Etc on climate modeling are in this vein.

The issue of model complexity raised by Knutti is an important one; I have a draft post on this topic awaiting publication of the relevant paper.

293 responses to “How far should we trust models?”

  1. If models are going to be treated in the same manner as an experiment, then they should be designed to be falsifiable, and if falsified, they should be discarded.
    If climate models project a future temperature over a period, and then fail to describe the measured temperature, they are falsified. They should then be discarded, and the modelers should spend time analyzing why they were wrong in their choice of constants.
    This does not happen. This means we cannot treat climate models as experiments, nor their outputs as hypotheses; instead they are mathematical projections of scientists’ biases.

    • All models are false. There isn’t a single one I ever worked with that wasn’t false. You ride in planes that were made with false models. Your country is defended with stuff that was built based on false models. Further, how do we discard a model that has the law of gravity in it? What parts do we throw away and what parts do we keep? Every model uses the laws of math. Do we throw everything in the model out? How do we decide which parts to keep and which parts to junk?
      It’s not as easy as “discarding” the “models”, because the model, like any theory, is a complex combination of known math, known physics, uncertain inputs, and less certain physics. That’s part of the reason that models, while false, are not falsified. They are never thrown away in total.

    • Steven Mosher, “That’s part of the reason that models, while false, are not falsified.”

      true, but if you are modeling a Cessna and the model indicated airspeed is Mach 10 you might pause a moment before announcing it to the world.

    • The intellectual model that Doc uses to determine right and wrong is falsifiable, and it has been false many times.

      Thus, Doc’s brain should be discarded.

    • Joshua, “The intellectual model that Doc uses to determine right and wrong is falsifiable, and it has been false many times. ”

      No, it is just more semantics. Complex computer models are never falsifiable because they are never complete. So models have differing degrees of usefulness. Climate models are consistently high, which would mean either some common physics assumption is wrong and shared, or some common bias is wrong and shared. Since they are consistent they are useful.

      Now is the bug in the physics or the physicists?

    • I don’t fully understand models, but it appears to me that many IPCC models are very much about ~2100, and not very much about 2013. They have that cone; I understand it’s important, but…

      If one “knows” a 2100 model is “wrong” at about 2013, how is that a guarantee it will not be “right” at 2100?

      Complex computer models are never falsifiable because they are never complete.

      Doc’s mental models are never complete either.

    • Steven Mosher blathers ignorantly:

      All models are false. There isn’t a single one I ever worked with that wasn’t false.

      They were not all false for the purposes to which they were put.

      That is a component of the falsifiability criterion. One that you consistently pretend away, as otherwise it would keep you from making your “I am so much more knowledgeable than you stupid skeptics” self-aggrandizements.

      You don’t know what a model is (Hint: Doesn’t necessarily have a goddam thing to do with a computer or “laws of math”), and you are not equipped to discuss their assessment.

    • capt.

      “true, but if you are modeling a Cessna and the model indicated airspeed is Mach 10 you might pause a moment before announcing it to the world.”

      Of course. But that doesn’t address my point.

      The point I make here has been made before by Quine and Duhem.

      Let’s see if I can explain by way of a simple example.

      Suppose I am building a physics model of a car.

      I start with something simple:

      Flong = Ftraction + Fdrag + Frr

      Where Flong is the longitudinal force, Ftraction is the traction force,
      Fdrag is the drag force and Frr is the rolling resistance.

      I turn this into a simulation of a car moving in a straight line and I predict that
      if I add 300hp the car will achieve a top speed of 104mph.

      I take this to the track and I measure the actual top speed. I get 95mph.

      Is the model falsified?

      Well, the first thing to understand is that most people MISUNDERSTAND the falsifiability criterion. The model is falsifiable. That means it makes a prediction that CAN IN PRINCIPLE be checked against observation. Climate models are falsifiable in principle. What is not falsifiable in principle is metaphysics.

      But the real issue is what do we do in PRACTICE when a model fails. They always do.

      My model predicted 104 mph. We measured 95. Is the model falsified?
      Well, we are faced with a choice. We know this:

      1. Some part of the model may be wrong.
      2. All parts of the model may be wrong
      3. The data may be wrong.
      4. We just had a fluke. Shit happens.

      That’s all we know. Now we usually address 4 by repeating the experiment.
      For number 3, like the recent measurements of faster-than-light signalling, folks pore over the experimental design. And we do things over. It depends how expensive repeating a test is. When a GCM predicts temperatures 100 years from now, don’t we have to wait for the experiment to finish? And then don’t we have to repeat it before we just toss the model out?

      Then come 1 & 2. My model above uses addition. Maybe addition is the part of the model that is wrong. Ordinarily we don’t change the laws of math to make models work. Unless we are doing renormalization, in which case we do futz around with math because the model works so well. So, which term in the equation is wrong? Is the whole thing wrong? How do we isolate the wrong part and fix it? The general form looks correct; maybe we need more detail. And then we see that the rolling resistance changes as the tires get hotter and bigger, and our physical model had that as a constant… so we add detail. We don’t toss out the general form; we elaborate the terms.
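
      As a rough sketch of the toy model above (a hedged illustration only: the power, drag area, mass and rolling-resistance numbers are invented, chosen so the predicted top speed lands near the 104 mph in the example, and stand in for no real car):

      ```python
      import math

      def top_speed(power_w, cd_a, crr, mass_kg, rho=1.225, g=9.81):
          """Speed at which power-limited traction balances drag plus rolling
          resistance, i.e. the root of P/v - 0.5*rho*CdA*v^2 - Crr*m*g."""
          def net_force(v):
              f_traction = power_w / v              # power-limited traction
              f_drag = 0.5 * rho * cd_a * v ** 2    # aerodynamic drag
              f_rr = crr * mass_kg * g              # rolling resistance
              return f_traction - f_drag - f_rr

          lo, hi = 1.0, 150.0                       # bracket the root, in m/s
          for _ in range(60):                       # bisection
              mid = 0.5 * (lo + hi)
              if net_force(mid) > 0:
                  lo = mid
              else:
                  hi = mid
          return lo

      v = top_speed(power_w=100 * 745.7, cd_a=1.0, crr=0.015, mass_kg=1500)
      print(f"predicted top speed: {v * 2.237:.0f} mph")   # ~105 mph
      ```

      Compare the printed figure with a measured 95 mph and you face exactly the choice above: which term, if any, do you revise?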

    • capt

      “Climate models are consistently high, which would mean either some common physics assumption is wrong and shared, or some common bias is wrong and shared. Since they are consistent they are useful.”

      There are many possibilities.

      1. The inputs to the models were wrong. Models are driven by forcings. Get the forcing wrong, and you will mispredict.
      2. Missing forcings: for example, natural internal variability.
      3. Missing physics.
      4. Incomplete physics.
      5. Physics that is just flat wrong.
      6. We’ve only run one experiment. Shit happens.

      That is why the notion that there is a simple decision: FALSIFIED! is wrong.
      There are many choices, and none of those choices can be ruled out by simply looking at the result.

    • JJ.

      Go read my previous comments. Models are wrong. Some are less wrong than others. Some are useful. The use depends on the purpose. The purpose is decided by the user. You have no standing to say whether a model is useful or not, since you are not a user.

    • Steven Mosher | December 16, 2013 at 10:37 pm |

      “All models are false.”

      More time spent studying linguistic philosophy would help (or perhaps hinder, depending on the sport that is being played).

    • “[JJ has] no standing to say whether a model is useful or not, since [JJ is] not a user.”

      Perhaps JJ is an unwilling user. That might give JJ standing.

    • k scott denison

      Mosher states: “You ride in planes that were made with false models.”
      —–
      True enough, but you’ll note that someone actually built, tested, tested again, retested, and then allowed passengers to ride in them.

      So what is the equivalent that has been done with climate models?

      Answer: none. Because their predictive ability is non-existent.

    • I’ve been building computer models (non-climate) most of my working life. I would not subscribe to the idea that all models are false; it’s just that some are more false than others.
      In order to model anything you have to know and understand all the inputs and their interdependencies, then you have to understand the processes these inputs undergo, and the same again for the output. If you can’t do all of these, all you end up with is a sophisticated random number generator.
      Since I don’t think we know all of these things, I cannot see how any climate models can be successful.

    • lemiere jacques

      Models are always wrong… but sometimes, under some hypotheses that can be tested (and falsified), they have limited skill.

      It appears to me that these limitations are never clearly explained… and we don’t know what models are able to predict.

      For no rigorous reason, but a convincing one, hindcasting skill in a particular domain limits predictive skill in the same domain.

      On the other hand, models are based on an empirical set of knowledge of a system… if a model predicts that you are leaving the known domain… you have a problem. Once you know the plane can’t fly any more, no need to go further…

    • “You ride in planes that were made with false models.”

      Airplane rides are the validation process.

      Aspects of models which led to crashes were rejected to the point that they were no longer significant (or you wouldn’t fly).

      Climate model errors that have not been subject to validation ( because they are prognostic far in the future ) have not yet been purged.

    • Heh, here’s the posish: These GCMs were good enough to let the airplane of CAGW take off. They are good enough to let it fly around, though flight is turbulent. They aren’t good enough to allow it to land.
      ====================

    • Joshua, “Doc’s mental models are never complete either.”

      Maybe, but then they could be much more complete than you think. While the models themselves may not be falsifiable, the modelers should have standards/justification for the usefulness of the models. Most of the “nits” that the skeptics point out are things they believe the modelers should have already pointed out.

      Tropical tropospheric warming was a prediction that isn’t panning out as expected. Global sea ice, polar amplification, mid-northern-latitude amplification, absolute surface temperature estimation, direction of the meridional overturning circulation, Brewer–Dobson circulation, ENSO/PDO/AMO replication, rate of ocean heat uptake etc. etc. are outside of what most would consider “normal” tolerance. Some would consider the combined flaws an indication that the models have “issues” and are not ready for prime time.

      The modelers look for bits and scraps of evidence that they have not completely screwed the pooch in order to assure their adoring followers and preserve their livelihoods, which some would recognize as “desperation”.

      Skeptics think that believers are idiots for trusting failing models while believers think skeptics are idiots for not “trusting” science. What is your honest opinion of the quality of the models?

    • Cap’n –

      What is your honest opinion of the quality of the models?

      My honest opinion is that they are flawed, as are any models. I think that it is impossible to determine, yet, the magnitude of the flaws, as the time scales are as yet too short to give enough context for measuring their magnitude (we could just have variability within expected bounds). And yet, while the models themselves project ranges with error bars that could inform adult discussions about evaluating the risk of perhaps improbable but potentially significant outcomes in the face of uncertainty, we have squabbling that ignores the true nature of the models and the true structure of modeling (with guilty parties on both sides of the great climate change divide).

      All models are flawed, so the question becomes which models are useful. Do climate models give us a better basis for understanding the likelihoods of climate change 200 years out than if we had no such models? It is hard for me to believe otherwise. Do the mental models of “skeptics” give us a better basis for understanding the likelihoods of climate change 200 years out than if we had no such models? It is hard for me to believe otherwise. When I see people arguing that we’d be better off without either, and the basis of their argument is a fallacious binary conceptualization that models being flawed renders them invalid, I question their judgement.

      Skeptics think that believers are idiots for trusting failing models while believers think skeptics are idiots for not “trusting” science.

      And both are simplistic strawmen that serve to do little other than confirm biases. People who make such arguments, IMO, are doing little other than arguing with their own fallacies and fantasies.

    • Steven Mosher | December 17, 2013 at 1:22 am |

      Go read my previous comments.

      I have. They are egotistical, obstructive, and wrong.

      Models are wrong. Some are less wrong than others. Some are useful.

      Yes, dear. I have a sign to that effect hanging over my desk. Stop acting like you are the only one in the room who could possibly have heard that little aphorism, and begin acquainting yourself with the fact that it doesn’t mean what you imply it does.

      The ones that are useful are falsifiable. Your actions seek to render models unfalsifiable, demonstrating that Judith has framed the question incorrectly. How far we should trust a model should be explicitly stated as a component of the model. With climate models, it never is. That is because climate modelers act like you. The germane question is, how far should we trust the modelers – and the people that stump for them? Zero.

      DocMartyn’s post is correct, which is why you seek to distract from it. The current crop of numeric climate models is too wrong for their putative purpose. They are only correct enough for their intended purpose – one for which correctness is not a criterion. Politics.

    • joshua, “And both are simplistic strawmen that serve to do little other than confirm biases. People who make such arguments, IMO, are doing little other than arguing with their own fallacies and fantasies.”

      Right, so it is useless to argue unless there is actually some meat on the bone worth fighting over. If I bring up that list of model issues, all I get is straw.

    • Of course the models don’t get thrown away in total. They get revised as knowledge expands. IMO the real problems are:

      1) Institutional/political resistance to changes that don’t go the “right” way.
      2) The focus of effort on GCMs out of proportion to a wider variety of models and observations. Yes, the IPCC takes these other things into account, but at the end of the day it’s the spaghetti graph of GCMs that gets the attention.

      The GCMs are not yet ready to predict 1) weather 2) decadal variability and 3) century scale climate change at the same time using the same mechanisms. Whether they eventually will be is a different question.

      I don’t know of any reason why predicting, say, the GMST in 2100 is any better done with a GCM than a variety of observationally constrained empirical models.

    • If the model says the airplane will go up and instead, the airplane goes down, you fix the model before you fix the airplane.

    • @StevenMosher, I don’t see the strawman argument about car top-speed models as valid. The robustness of physics modelling is good enough that numerous models of real cars run pretty good simulations on home gaming computers. Good enough for Formula One drivers to practice on them.

      The robustness of engineering models is that structures designed to stay up or hold together seem to do a pretty good job of that. Engineers will happily overengineer something to ensure it doesn’t fail under unlikely scenarios. Climate modellers don’t admit that they are modelling unlikely scenarios. And if we let GCMs model unlikely scenarios, they probably run away out of control forever, rather than demonstrating any corrective response.

      I think DocMartyn is more correct. Climate models would be better if they had error bars generated at the same time from the known errors in the input parameters. As well as seeing the “number” of global temperature, why can’t we also see the predictions of all the input parameters, since the individual parameter predictions can also be used to verify the correctness of the models? For instance, volcanic activity has such a large influence on climate via aerosols/dust. Events are random and with large impact. The models should show exactly when they predict the next volcanic event, or what sort of average values they use if not predicting actual events. We should be able to compare ongoing real-world data on parameters to projections of those.

      I get the impression the GCMs may run like a 3-year-old drives a car on an arcade-style driving game. Constraining parameters bump the answer back towards the real world, random numbers give the impression of real-world chaotic variation, and at the end of the day the answer comes out roughly right, just the way the 3-year-old can “drive” the car all the way round the track.

      GCMs should demonstrate paleoclimate resistance to perturbation of global temperature within the known bounds of paleoclimate proxy data. There is no question that climate models could do a lot better than at present.
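
      As a rough sketch of the error-bar suggestion above (a hedged toy: the stand-in “model” and both input distributions are invented numbers, nothing from a real GCM):

      ```python
      import numpy as np

      rng = np.random.default_rng(1)

      def toy_model(sensitivity, forcing):
          """Stand-in for a climate model: warming = sensitivity * forcing."""
          return sensitivity * forcing

      # Assumed input uncertainties (illustrative values only)
      sens = rng.normal(0.8, 0.2, 10_000)   # C per (W/m^2)
      forc = rng.normal(3.7, 0.4, 10_000)   # W/m^2

      runs = toy_model(sens, forc)
      lo, hi = np.percentile(runs, [5, 95])
      print(f"central {runs.mean():.2f} C, 90% range [{lo:.2f}, {hi:.2f}] C")
      ```

      The spread of the outputs is exactly the error bar that the input uncertainties imply, and publishing the input distributions alongside the projection would let anyone check both against ongoing observations.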

    • Doc Martyn

      The models project greenhouse warming of 0.2C per decade based on anticipated emissions of GH gases.

      The emissions of GH gases continue unabated at levels equal to or exceeding those projected, with atmospheric concentrations reaching record levels, but there is no GH warming – instead there is cooling of around 0.04C per decade for more than a decade.

      Have the models been falsified?

      If not, how many more decades of no warming would it take to falsify the models?
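
      As a rough sketch of the statistics behind that question (the interannual noise level is an invented round number, and ignoring autocorrelation understates the record needed):

      ```python
      import math

      def years_to_distinguish(trend_per_year, sigma, z=2.0):
          """Smallest n with |trend| > z * SE(OLS slope) for n annual points
          of white noise; the OLS slope variance is sigma^2 * 12 / (n^3 - n)."""
          n = 3
          while True:
              se = sigma * math.sqrt(12.0 / (n ** 3 - n))
              if abs(trend_per_year) > z * se:
                  return n
              n += 1

      # 0.2C/decade projection vs ~0.1C assumed interannual noise
      print(years_to_distinguish(0.02, 0.1))   # ~11 years on these assumptions
      ```

      On these toy numbers, a decade or so of flat data is where a 0.2C/decade projection starts to be distinguishable from no warming; serial correlation in real temperatures pushes that out considerably further.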

      Max

    • “Mosher
      1. The inputs to the models were wrong. Models are driven by forcings. Get the forcing wrong, and you will mispredict.
      2. Missing forcings: for example, natural internal variability.
      3. Missing physics.
      4. Incomplete physics.
      5. Physics that is just flat wrong.
      6. We’ve only run one experiment. Shit happens.

      That is why the notion that there is a simple decision: FALSIFIED! is wrong.”

      No Mosher, you are completely wrong.
      Falsified means that the model has failed to match reality.
      Why a model has failed is the next step that real scientists look at. The models evolve as they go through iterations, and the fittest survive. The models get better and better at describing whatever aspect of reality they have been designed to emulate.
      If you don’t have evolution, you end up with a stagnant superstition and not an investigation of how reality is and how it works.
      That you, an intelligent and thoughtful man, support this crap is beyond me. I have no problem with ‘black box’ models that state ‘this input equals this output, and we can describe but not understand why’; such is gravity.
      Your support for GCMs, in light of their ‘projections’ being used as the basis of huge economic policy decisions, is to me inexplicable.
      You know how the sausages are made, and insist that they are filled with the finest cuts, but know they are mostly bloody sawdust. Inexplicable.

    • I think I perhaps need to expand my comment above about linguistic philosophy.

      Models don’t exist in isolation. Without “reality”, whatever that might be, a model can’t exist. And with the model comes a whole set of rules (more often implicit) about how it represents “reality”.

      It is within this context that discussions about the correctness or otherwise of the model lie. Once that is clear, discussions about whether the model is a true representation of reality or not fall away.

      Some models can be always true (when they are isomorphic with the reality they are representing), some contingent, others undecidable, and some always false.

      It all depends on the rules.

    • Steven

      Your idea of modeling is just plain wrong. It is one thing to say that models are not 100% (or whatever percentage, depending on the coarseness of the model) true to the actual phenomenon they are modeling. It is entirely another thing to say all models are false, which is far from the truth. Either that or you have to define what you mean by “all models are false”, if you have something else in mind other than what is commonly understood by that statement. I have seen you make this wrong statement over and over. I haven’t had much time to respond, and frankly don’t have much to expand beyond this reply now, either.

      The model you build of a phenomenon has to be “true” to the actual phenomenon you are modeling only to the extent of the behavior or component of the phenomenon you are studying. If, for example, you are only studying the air flow over the wings of the plane you are modeling, you can simplify or ignore many aspects of the plane, for example whether there is an overhead bin or not. That doesn’t make your model false. Far from it. Your model of a plane will be false if it fails to replicate the air flow over the wings accurately within a certain % of the actual airflow of the specific plane you are modeling over a specific air-space you are modeling. If you are trying to say no model is 100% accurate, that is close to the truth, although you can build models with 100% accuracy, again depending on what you are modeling.

      In the case of models which make predictions about the future, there is absolutely a way to declare the model valid or not (true or false, if you want it that way). Not all models are by default invalid or false in this case. Say your model output is a prediction of future temperature, future rainfall, future sea levels etc.; at some point (usually when you think your model is mature enough) you have to compare your model output with observations. Until your model is mature, it is not valid by default. When your model is mature, if your model output is off by more than a threshold % (a different threshold for different fields/quantities, determined by how divergence beyond that threshold impacts the significance of that quantity), then it is not a valid model. For example, if your future global temperature prediction is off by say 25%, 1 degree predicted vs 0.75 degrees observed, and a 0.25 degree difference is in fact significant, the model is not valid; otherwise it is a valid model.

      I think blanket declarations that models are false are wrong.
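
      A minimal sketch of the validation rule described above (the 25% threshold and the numbers are just the example from this comment; real thresholds would be field- and quantity-specific):

      ```python
      def is_valid(predicted, observed, rel_threshold):
          """Mature-model check: the relative error against observations
          must stay strictly inside a field-specific threshold."""
          rel_error = abs(predicted - observed) / abs(predicted)
          return rel_error < rel_threshold

      # 1 degree predicted vs 0.75 degrees observed, 25% threshold -> not valid
      print(is_valid(predicted=1.0, observed=0.75, rel_threshold=0.25))
      ```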

    • Shiv and everyone else: no one seems to understand what Mosh means by “all models are false”. Would it help you all to grasp his meaning if we change it to “all models are approximations of the object being modeled”?

      And approximations are not equal to the object being approximated. Therefore as a representation of the object, the approximation is not 100% correct and if not 100% correct, it must be, to some degree, false.

      Or, in other words, the map is not the territory.

      http://en.wikipedia.org/wiki/The_map_is_not_the_territory

    • jeez, that is essentially Popper’s math-and-object point on falsifiability:
      2 + 2 = 4 can’t be falsified, as maths ‘is’; however, 2 apples and 2 apples = 4 apples can be falsified, as you can count out two pairs of apples into a box and then count the contents.
      A model is built to give insight into an underlying property of a measured parameter. A polynomial fit to a waveform is not a model, as it is essentially informationless outside the fitted bounds.
      A polynomial fit of GISS, used to estimate the temperature in 2100, is worse than the null: 2100 will be like 2013.
      Models can be worse than useless; they can give you an answer that is worse than the no-change default.

    • Models have finite shelf-lives.

  2. Racing tipsters survive in the marketplace if they can show a track record of successful forecasting. And their skill is tested quickly and often. Bad tipsters are soon eliminated.

    We should adopt the same approach to climate models. There are said to be nearly 100 of them. Eliminate the bottom 95, then let natural selection work to choose the best of the remainder.

    And the only valid way to work out which (if any) are any good at all is to regularly and publicly test their predictions against Gaia’s actions. Weakest goes to the wall. Everything else is just missing the point.
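
    As a rough sketch of what that shakeout could look like (everything here is invented for illustration: the model names, their forecasts, and the observed value):

    ```python
    def shakeout(forecasts, observed, keep=5):
        """Rank models by absolute forecast error against what actually
        happened, and keep only the `keep` best performers."""
        ranked = sorted(forecasts, key=lambda name: abs(forecasts[name] - observed))
        return ranked[:keep]

    # 100 hypothetical models forecasting a decadal trend (C/decade)
    forecasts = {f"model_{i:02d}": 0.10 + 0.002 * i for i in range(100)}
    print(shakeout(forecasts, observed=0.05))   # the five closest forecasts
    ```

    The test has to be repeated against each new stretch of observations, since a single test period can reward the merely lucky.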

    • This won’t work either; you will be keeping the five that happened to get lucky over the time frame. You can make the same mistake with racing tipsters.

    • Racing tipsters survive in the marketplace if they can show a track record of successful forecasting.

      hmmm.

      Looks like someone is fetishizing the free market again…

      Only 24% of professional investors beat the market over the past 10 years, yet as of December 31, 2012 there was over three times as much invested in actively managed funds ($7 trillion) as invested in index funds.

      But perhaps we should follow Latimer’s statist advice, and run the bottom 95% of professional investors out of town on a rail.

    • At least there is some degree of individual choice in whether to trust racing tipsters and stock market analysts/investors bearing models.

      Judith Curry’s statement:
      “This reflection, both from within and beyond the community of climate modelers, is very healthy for using these climate models effectively to advance scientific understanding and for uses in decision making”
      – appears to start from the assumption that climate models SHOULD be used for making decisions.

      Some of us absolutely DO NOT accept that as a given.

    • At least there is some degree of individual choice in whether to trust racing tipsters and stock market analysts/investors bearing models.

      As citizens living in a representative democracy, we have collectively decided to fund researchers and academics. Just because you are on the less popular side of that decision does not mean that the decision is not a collective choice of individuals. Does it make any sense to say that as an individual, I have no choice in whether we invaded Iraq, or whether we have laws that ban drunk driving? What would you suggest? Should all decisions to fund any research be run by you, first, so you can exercise your choice?

    • Joshua | December 16, 2013 at 11:13 pm |

      Racing tipsters survive in the marketplace if they can show a track record of successful forecasting.

      hmmm.

      Looks like someone is fetishizing the free market again…

      That someone would appear to be Joshua, whose fixation is such that he sees reference to it in “racing tipster”.

      Should all decisions to fund any research be run by you, first, so you can exercise your choice?

      All decisions to fund any research funded by me should absolutely be run by me first. You want to waste your own money on pointless thumbtwaddling, knock yourself out. Hell, I’ll help you spend it. I got thumbs.

      The current economic recession was caused, in no small part, by overconfidence in very complex numeric models. Overconfidence is easy to come by when the benefits accrue to a few people who have nothing to lose but other people’s money.

      Climate models are financial derivative models writ large.

    • Actually Joshua, I was only thinking as far as restricting the use of shoddy climate-models in taking important macro-economic decisions, not de-funding the research for the shoddy climate-models.

      But now that you mention the idea, I can see that it may have some merits.

    • @miker63

      ‘you will be keeping the five that happened to get lucky over the time frame’

      Exactly! If they are any good they will be the ones that ‘get lucky’. The others won’t. There is no point in keeping ‘unlucky’ models

      And next time around maybe only three ‘get lucky’.

      You don’t get to win the European Cup (or Superbowl) by being unlucky and losing your games. You get there by winning.

      And I do not see climate modelling as a job creation scheme for any Tom, Dick or Harriet who wants to spend thirty years playing with computers. Every growing industry goes through a process of consolidation – and now is the time for this vast, and highly unsuccessful, one.

    • @joshua

      Dunno about ‘fetishising the free market’. But I am noting that it has a mechanism for sorting those who can make good predictions from those who can’t. The world of climate models hasn’t even got as far as being able to sort the extremely bad from the appallingly bad. Because it’s clear there are no good ones.

      How else would you judge between the 100 climate models?

    • Latimer –

      How else would you judge between the 100 climate models?

      Perhaps the worst of those models is more accurate at predicting the climate 200 years out than your seat-of-the-pants, bulls**t-meter, politically aligned, selective “skepticism”? At least it is a possibility that should be considered when we’re evaluating how to respond to the risk of developments that might have significantly detrimental impact.

      The question becomes what are you using as a standard of evaluation. Should it be perfection? No model is perfect, including (I know this may be a shock to you), your mental model. Does the 99th worst climate model, as flawed as it may be, give us more useful information about energy policy than using no model? Does it give us more useful information than that which is the output of your mental model?

    • Clairvoyants, like racing tipsters, rely on repeat customers in a commercial world. Some are very successful (financially); does this mean that they too are evidence of market forces weeding out failed forecasting? Of course not. Racing tipsters, stocks-and-shares tipsters, clairvoyants, etc. stay in business despite the accuracy of their predictions.

    • @joshua

      ‘Perhaps the worst of those models is more accurate at predicting the climate 200 years out than your ………………………….“skepticism?”

      Since I’m not in the habit of making forecasts of any description, we’ll never know. It is not possible to make a comparison between a prediction and a null.

      But among those that *do* claim to make such forecasts, we currently seem to have no way of distinguishing whether they are any good or not. That is an unhealthy state of affairs. And you then ask some very pertinent questions about what level of perfection we want.

      In part that will be answered when we have some objective method of finding how close to perfection we can get. Which at the moment seems to be an entirely disregarded question, even though it should be at the very heart of modelling’s progress.

      Let’s start with my sensible way of weeding out the vast majority of dross. And when we are down to just two candidates, then it will be sensible to go through such a debate. But first we have to free ourselves from the thrall of the failed 98. That’ll take a decade or more. Let’s park your comments until the 2020s.

    • @me

      ‘Racing tipsters, stocks-and-shares tipsters, clairvoyants, etc. stay in business despite the accuracy of their predictions.’

      Nope. You are certainly right about clairvoyants and you may have a point about stocks and shares tipsters but you are dead wrong about racing tipsters.

      I chose them as my example precisely because they are obliged to make very specific forecasts which are evaluated very frequently and very publicly.

      Example: if I forecast the 3:30 race at Wolverhampton as 1st Fast Horse, 2nd Middle Nag, 3rd Knackered Glue Pot, and the actual result is 1st Nobody Ever Heard of Me, 2nd Who?, 3rd Even More Knackered, then any punter who paid for my advice is unlikely to follow my next one in the 7:30 at Windsor, nor tomorrow in anything at Lingfield.

      The tipster has to have a decent average – certainly better than the standard racing odds – and it is evaluated several times a day by people who are putting real money on their advice. Bad tipsters do not make fortunes. Their clients go elsewhere to somebody who can help them to make money, not lose it. And the name for a tipster without clients is ‘unemployed’.

    • me, @ 10:05 helps explain the survival of the madness of catastrophic climate mongering.
      ==========

    • As Wretchard had it: ‘From each according to his gullibility; to each according to his greed.’

      H/t DrJ.
      =====

    • Joshua:

      “Only 24% of professional investors beat the market over the past 10 years…”
      “But perhaps we should follow Latimer’s statist advice, and run the bottom 95% of professional investors out of town on a rail.”

      Too harsh. We should just stop using them, and many have that choice: the ability to move their money into something like an S&P 500 index fund. What we seem to have is trust in the financial experts that is unwarranted, given their poor performance.

      Picking an S&P 500 index is the same as saying: I can predict very little that is of value. Yet doing so beats 76% of the experts, according to Joshua’s cite.

    • That progressive elimination method would be a big step in the right direction. If all models are mistaken, some are more mistaken than others, and those should be excluded from the ensemble.
      I build my own models iteratively. They need to show some skill even in early stages, with a few pieces. Start with limited tests, just a concrete piece of the whole. Then it’s a process of adding more pieces and checking, then adding more pieces and checking again. If they can’t show any skill in early stages, I probably don’t have a clean start, and it’s back to the drawing board before multiplying the damage.
      What would that checking look like? I suggest a different test. How do they do on real climate features like the 1930s warming, or the Little Ice Age? If they have no clue about those features, how are they supposed to get it right next time? What, they only predict results when the main factors that drove us in the past aren’t driving us any more? Why would anybody buy that assumption?
      Yeah, getting it right is hard work. But failing to get it right has its price, too.
      The fact that people still buy advice from the advisors who can’t beat a cheap, dumb index fund does say something about the power of salesmanship and un-confident investors. But you won’t see smart investors putting any of their own money there.
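
      A minimal sketch of that stagewise check (the reference series, the candidate output, and the tolerance are invented placeholders, not real data):

      ```python
      import math

      def rmse(model_series, reference_series):
          """Root-mean-square error of model output against a known feature."""
          return math.sqrt(sum((m - r) ** 2 for m, r in zip(model_series, reference_series))
                           / len(reference_series))

      # Anomalies over a hindcast window such as the 1930s warming (made up)
      reference = [0.00, 0.05, 0.12, 0.18, 0.21]
      candidate = [0.01, 0.03, 0.10, 0.20, 0.25]

      if rmse(candidate, reference) > 0.1:   # tolerance is an assumption
          print("no skill yet: back to the drawing board")
      else:
          print("passes this stage; add the next piece")
      ```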

    • Naw, Joshua, you don’t need to run them out of town on a rail.

      In the free market society, they will die a natural death.

      Max

    • @roberto

      I like your way of building models and your suggested methodology of elimination.

      But what really matters is that there needs to be some sensible way of showing that the models have any predictive skill at all, and eliminating the weak.

      At the moment all we have is teams of modellers happily playing away in their own little sandboxes, assuring themselves that they are doing a great job. And occasionally peering into an adjacent one and agreeing in IPCC reports that they are all doing a great job too, and that they all deserve to continue… but maybe with a bigger box and finer-grained sand.

      Fine for what they grandly title ‘the modelling community’. Self assessment is always a comfy place to be.

      But Joe Public, who is paying for all this playtime, is getting a bad deal twice from ‘the community’. First he is paying way too much blood and treasure to finance way too many models. And second, none of them are fit for the purposes he is paying for.

      The modest proposal of a skill-based shakeout goes some way to rectify both these problems. And gives all the modelling teams a strong incentive to get their models a lot closer to reality very quickly.

  3. If you don’t understand what’s “under the hood” you don’t understand the model. The output is less than useless and those who give credence to the results are no better informed than crystal ball readers.

    Modeling a system that isn’t understood in the first place is, of course, an exercise in futility.

    • David Springer

      Yeah but it can sometimes inspire great works of art. Look at all the beautiful creations in stone, metal, wood, and precious gems that went along with Greek and Roman gods for instance. Or the zodiac and astrological signs. If climate boffins had something like that going it might not be such a waste and history might treat them kindly. Otherwise it’s just more phlogiston and should be buried in a shallow grave and forgotten as quickly as possible.

    • “Modeling a system that isn’t understood in the first place is, of course, an exercise in futility.”

      You do realize that universities are actually adding programs in “data mining?” Really, they call them that, with no hint of irony.

    • “Modeling a system that isn’t understood in the first place is, of course, an exercise in futility.”

      Wrong. There is nothing uniquely human about finding the model of a system.
      One can program a computer to look at data and construct a model of the data.
      You might look at machine learning. Now clearly an algorithm that finds a model of a system has no consciousness, and so can have no understanding, yet it can find a model of a system. A neural net would be a good example.

      Once upon a time I used a differential game engine to discover a system of tactics for supermaneuverable aircraft that had not even been built. The algorithm did not understand aircraft or tactics or what the other pilot was thinking, but it made discoveries. It took a long time, because it had to explore a parameter space exhaustively, but it built models having no understanding whatsoever.
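
      A minimal sketch of that point (the hidden system and the noise level are made up for illustration): least squares recovers a usable model from data alone, with no understanding of the system behind it.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(-5, 5, 200)
      y = 2.0 * x**2 - 3.0 * x + 1.0 + rng.normal(0, 2.0, x.size)   # hidden system

      coeffs = np.polyfit(x, y, deg=2)   # the "model": found, not understood
      print(coeffs)                      # close to [2, -3, 1]
      ```

      Whether the recovered model is useful is a judgment the algorithm cannot make, which is the question micro raises below.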

    • But Steven, didn’t you have to review the parameter space you uncovered and select for strategies that would be useful?

      I spent 15 years supporting commercial electronic circuit simulators (of all kinds: digital gate, digital behavioral, analog, timing, transmission line, etc.). They are very useful, but most of the support calls I took were to explain what the simulator was saying and why. The point is that just as you can learn a great deal from a simulation, if you ask the wrong thing you get worthless drivel; as the user, if you don’t know what you’re doing, it’s easy to think the drivel means something, especially when it’s the result you want, like CO2 is the control knob…

    • Diogenes

      The biggest problems that any models can have are GIGO and the fact that many of them cannot be checked against reality.

      Add to that that they can be misused to produce a preconceived result based on agenda-driven science (in the case of climate science following the IPCC consensus process), and they become practically worthless as prognosticators of anything meaningful.

      Max

    • Micro. No.

      Google HMM

  4. “The epistemology of computer simulations is a growing subspecialty in the philosophy of science, and we are even seeing the development of a community of philosophers of science that focus on climate modeling.”

    Don’t tell the polymaths.

    • Are there no limits to the ways academics will find to waste public money? I imagine there’ll soon be a community of sociologists examining the community of philosophers of science focussing on climate models.

      And then communicators of the results of the sociological examination of the community of the philosophers of science that focus on climate science.

      Then interpreters of the communications of the above….etc etc etc.

      Has McDonald’s stopped hiring, and is this a disguised way to keep the unemployment figures down?

    • Latimer Alder,

      I am preparing a proposal for a grant to study the epistemological implications of didactic analyses of the impacts of polymathic explorations of climate model parameterizations.

      Or has that already been covered?

    • David Springer

      Been there. Done that.

    • Latimer, ” I imagine there’ll soon be a community of sociologists examining the community of philosophers of science focussing on climate models.”

      +1, and don’t imagine it, count on it.

    • NW, is that a community of “sociologists”, or should it be “scientologists”?

  5. Computers are too fast; tracking different runs of a model or ensemble, with a little tweeky-weeky here, a tweaky-weaky there, is substituted for “epistemology” of the model design or of testable hypotheses. Long, deep, and dark are the dead ends that such tinkering can excavate.

  6. Dr. Strangelove

    Computer modeling in economics and climate science is not only wrong but also useless. The economy and the climate are far too complex to be accurately represented by computer models. Sure, you can try to model them, but the errors will be 10–20 times larger than the predicted values. The forecast is therefore useless. No better than a random guess. So what’s the use of modeling? Anybody can make a random guess.

    Computer modeling is useful in less chaotic systems like celestial motion and weather (up to 5 days only). With the obvious shortcomings of computer modeling in economics and climate science, why on earth do we still use it? It’s for political reasons. Advocate-scientists need to convince policy makers and the public of their cause. Applying the scientific method would expose the non-scientific nature of their cause. Computer models are used to create the illusion of science.

    • Dr S, I was involved with, and directed the use of, many exercises of computable general equilibrium models. Typically this was to assess the merits of alternative policy options (including do nothing) or major projects for which government support was sought. We weren’t trying to predict the future, but to estimate, say, the likely difference in outcome with policy A compared to policy B over ten years. I believe that this modelling was useful, and in the project cases, it was empirically proven. In every case while I worked for the Queensland Government, our modelling showed seriously negative net outcomes for the state, and often the failure of the project. Our modelling and other analysis was never faulted, but in every case was ignored; the projects proceeded, and the outcomes were as we had indicated. This would have been a costly lesson, but nothing was learned. My team was generally vilified, we often had negative career consequences while those backing the failures were promoted and never held to account, but the models showed their value.

      Economic forecasting is another matter.

    • Faustino, I feel your pain. But think of yourselves as the Red Team, and maybe someday the 982-0 record will be noticed.

    • Nat, did you see my belated response to your take on altruism?

      http://judithcurry.com/2013/12/09/pathological-altruism/#comment-423973

    • Faustino, yes I did, and found it interesting. In the end, though, I didn’t understand how your view related to the two I offered. Not to say those are the only options but, I wasn’t sure where you were situating your view, if you get what I mean. At present there are a lot (!) of theories concerning what I prefer to call “apparently other-regarding behavior.” In my opinion, all of them leave something essential out. Put differently I suspect there is substantial heterogeneity of causes, so that a mixture model is almost certainly necessary to account for the range of what we do.

    • Nat, I think that the definitions of altruism which you derived from denizens’ comments (defining altruism as “getting a good feeling from efforts to help others,” in one case when the others benefitted, in the other whether or not the actions taken were beneficial) were wrong.

      My definition, based on my experience and understanding was that “Altruism involves acting with a volition to help others with no thought of benefit to self. To be effective, the actions must be based on wisdom and understanding.” So the critical elements are not the feeling engendered by altruism, but the selfless volition which prompts it and the wisdom and competence which make it effective.

    • Dr. Strangelove

      Faustino
      If the outcome you’re looking for is the effect on GDP of some tax policy, yes, that can be quantified. You simply hold all else equal, then vary one variable and see its effect on GDP. I’m referring to predicting the financial markets, which have a big impact on the whole economy (especially when they collapse). Models are useless for this purpose. You cannot model the behavior of all the financial players. Random walk rules. Though fundamental analysis, not fancy computer modeling, is useful in the long run. That’s why Warren Buffett can outperform the market over 40 years.

    • nw –

      Put differently I suspect there is substantial heterogeneity of causes, so that a mixture model is almost certainly necessary to account for the range of what we do.

      Except your description was of two discrete causes. I’d say that it would be extremely difficult to look at those causes as being anywhere near mutually exclusive with any given act. First, it’s awfully hard to get into someone’s head to make a determination on their motivation along the lines of your two possible motivations. Second, consider the following: Suppose I shout out to someone about to be hit by a truck and he moves out of the way just in time. And once he gets across the street he pulls out a gun and kills three people while robbing a bank. I initially have a good feeling because of the impact on someone else’s welfare, and then I later change my evaluation of the consequences of my action. Nothing changes about my initial act.

      The problem with how people want to exploit the notion of “pathological altruism” is that they (1) don’t recognize the subjectivity in determining the sign of the impact on others’ welfare and (2), even looking beyond that issue, want to reverse-engineer from their assessment of impact on others’ welfare to draw conclusions about the motivation for the act, when in fact they lack evidence to draw conclusions about such inherently ambiguous attributes of human psychology.

      In the end, it’s just more pointing fingers at the “other.” Rather ironic since the criterion for deciding who to point out is a subjective evaluation of altruism.

    • “Josh

      it’s awfully hard to get into someone’s head to make a determination on their motivation along the lines of your two possible motivations”

      Not that it has ever stopped you making determinations of the reasons people are motivated to question various studies. Indeed, you believe yourself to be an expert in understanding the motivations of ‘denialists’.

    • Doc –

      Not that it has ever stopped you making determinations of the reasons people are motivated to question various studies.

      I assume that everyone’s “motives” here are pretty much the same.

      Read up on the differences between “motivated reasoning” and “motives” and get back to me. We’ll talk.

    • Doc, Joshua did a really, really good take down of Dana on another thread. It brought tears to my eyes. I refuse to criticize him for a week.

    • “Joshua did a really, really good take down of Dana on another thread”

      Stamping on bugs does not qualify one to take on Wladimir Klitschko.

  7. Given the persistent and universal warm bias in climate models (see: http://meteorologicalmusings.blogspot.com/2013/07/the-inability-of-climate-science-to.html ) I would say they tell us little that is useful in the way of predictions. This doesn’t just apply to raw temperatures. Consider Emanuel and others telling us after Katrina that hurricane frequency and strength were going to increase, when the opposite is what has occurred ( http://policlimate.com/tropical/global_running_pdi.png ).

    So, while the models may have some use for research, at the present time they are dangerous as policy tools because they are so inaccurate when measured against the real world.

  8. My question is this–if models cannot hindcast [replicate past temps], why in the name of Poseidon are they given any credibility for predicting the future? Lysenko rides again, it would appear.

  9. Others have made the same point. Let me put it in my own words. If a model cannot foretell the future with consistency, then it is worse than useless. There is an onus on the users of computer models to convince whoever they are advising that the models are capable of answering the decision maker’s questions. This has not even been attempted. It is just: “we have a model; here are the answers.”

    Physics cannot tell us what happens as we add more and more CO2 to the atmosphere. In the end, Mother Nature will tell us, since it seems to be inevitable that we will burn all the fossil fuels we feel we need. The danger is that the warmists may have already done some damage to the economies of the developed nations, and if decision makers go on believing the nonsense that climate models can tell us what is going to happen as more CO2 is added to our atmosphere, then the damage could be catastrophic.

    And where should the blame be laid for this sorry state of affairs? At the feet of the learned scientific societies, led by the Royal Society, and the American Physical Society, who have endorsed this nonsense.

    • +100

      These “learned scientific societies”, which you cite, have all become complicit in the big bamboozle, for political reasons.

      But the principal culprit in this “sorry state of affairs” is the IPCC and its forced consensus process.

      The models are being misused to ostensibly provide scientific evidence in order to support a preconceived political agenda based on a questionable paradigm.

      And many of the politicians of this world are just as culpable, in supporting this nonsense, either out of ignorance or misguided political convictions.

      Max

    • Max, you write “But the principal culprit in this “sorry state of affairs” is the IPCC and its forced consensus process”.

      I don’t often disagree with you, but in discussing the “race for the bottom”, I don’t really blame the IPCC. They were told to prove that CO2 was the devil incarnate, and that is what they did, or rather claimed that they had done.

      The major culprits were the learned scientific societies. If they had done what they were supposed to do, ab initio, then this whole problem would have been nipped in the bud.

  10. I never trusted “models” and will not start to trust them in the future. Tarot cards, crystal balls and model “predicting” are for the most ignorant Warmists & Fakes.

  11. How much we trust a model depends on the stakes.

  12. In chemistry, it is far cheaper to do it by computer than in the laboratory, but how many still take the trouble to do both when there is no pressing reason? More specifically, how many INDIVIDUALS are even trained to do both? Not very many in my experience.

    Scientists who don’t have to bother collecting actual data can also spend more time cranking out papers to advance their career if they don’t have to spend time away from the keyboard.

  13. Instead of having to resign ourselves to peering through the lens of models at a blurry image, perhaps we’re choosing the shadows on the wall of Plato’s prison cave over the light of day in the real world.

  14. Matthew R Marler

    “The epistemology of computer simulations is a growing subspecialty in the philosophy of science, and we are even seeing the development of a community of philosophers of science that focus on climate modeling.”

    Another good field is the mathematical/statistical modeling of neurophysiology, with a long history since Hodgkin and Huxley, a good recent review by Eugene Izhikevich (Dynamical Systems in Neuroscience, MIT Press, 2007 — indeed MIT Press has a series of books on this topic), and an annual summer course at the Woods Hole Oceanographic Institution.

    Model extrapolations and modeling experiments are useful for guiding research, but model results should not be trusted outside the range in which the model has been tested and shown to be accurate. In my opinion, anyway. “The epistemology of computer simulations” ought to be as interesting as the epistemology of mathematical derivation. Either way, the result is implied by the formulation, but may not be at all clear from merely studying the formulation. Think of the work that has been required to figure out all the consequences that follow from assuming that the speed of light is constant in each medium (air, vacuum, water, glass, etc.). When can you tell that the derived consequences are properties of the natural world?
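
    A minimal sketch of that warning, in Python with invented data: a cubic tuned on a “tested” range tracks the truth there, then fails badly outside it.

    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0, 5, 30)                # the tested range
    y_train = np.tanh(x_train) + rng.normal(0, 0.02, x_train.size)

    coeffs = np.polyfit(x_train, y_train, deg=3)   # fit only inside the range

    for x in (4.0, 10.0):                          # inside vs. far outside
        print(x, "fit:", round(float(np.polyval(coeffs, x)), 3),
              "truth:", round(float(np.tanh(x)), 3))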

  15. When the uncertainties are harder to characterise, evaluating a model depends more on stepping back, I think, and asking what kind of community it emerges from. Is it, in a word, scientific? And what does that mean for this new way of doing science?

    This advocates evaluating science in terms of the community that produces it. That strikes me as a fundamentally bad idea. This approach abandons the notion of objective reality and truth which is the bedrock of science.

    Science is that which scientists do. Scientists are people who do science.

    And off we all go merrily into the self-referential wilderness where the social scientists live. This is a place where there is no such thing as truth, only many “equally valid” opinions; and ultimately what you believe, and who is believed, is nothing but politics and who you know.

    • Except for “where the social scientists live,” +1

    • except scientifically science is what scientists do. You might like it to be ruled by philosophy, but on the facts science has done just fine solving problems, and philosophy does nothing. Science is not wedded to a notion of “objective truth”; science is wedded to what works. “objective truth” is a wheel that doesn’t turn in science. A theory works to explain what was and predict what will be. Saying it does so because it’s “objectively true” adds nothing.

    • Steven, this privileges scientists over philosophers? A little differently, philosophy may need to answer to empiricism, but as a professional empiricist, I frequently feel that empiricism needs to answer to philosophy too. I don’t think science is some hermetically sealed language game that is wholly unaffected by what other humans think–whatever one thinks ought to be true.

      Is it descriptively correct to say that empirical disciplines do not worry about philosophy? Here is one of my favorite funny academic essays. I suspect you will enjoy it:

      http://languagelog.ldc.upenn.edu/myl/PullumMoaners.pdf

      Sure, the author is ranting against linguists who spend time agonizing about methods, rather than getting on with linguistics. But you can’t help but notice that Pullum thought it worth his time to write the essay. That’s because an appreciable number of linguists are visibly affected by philosophers. Maybe it’s bad philosophy in your (and Pullum’s) opinion, but the other linguists are interested in it and seriously affected by it.

      A group of UK scholars in my field are such hard-core militants about methods that they titled a paper “A Popperian Test of (whatever).” Yes, that’s a hilarious thing to do. But there it is: they did it. If you want to be a hard-core empiricist about what Linguistics (or Experimental Econ) is, do observations like these oblige you to admit that science isn’t some hermetically sealed dynamic process, merely “what scientists do”?

      I guess one can escape by saying, “Well if science listens to philosophy, then that listening is merely what scientists do.” But is that useful or interesting or surprising to anyone?

    • Nat, great essay. Some of it could surely be re-presented as comment on some climate scientists:

      Text: Classical mechanics … makes assertions which not only are never confirmed by everyday experience, but whose direct experimental verification is impossible.
      Interpretation: In physics they say things that no one can possibly check upon. I just don’t see why you trust those nerds in their white coats more than you’re prepared to trust me.

      Pullum cites a quote which could very well have been uttered post-Climategate, without tongue-in-cheek: “Feyerabend offers, tongue in cheek, a recipe for the destruction of science. Deadpan, he presents a purported methodology for modern scientists that will allegedly take them in the footsteps of their great heroes such as Galileo: develop theories that are in conflict with known facts; lie about the observational support for them; maintain them stubbornly in the face of objections; defend them by means of dishonesty and bluster.”

    • David Springer

      Steven Mosher | December 17, 2013 at 12:47 am |

      “except scientifically science is what scientists do. You might like it to be ruled by philosophy, but on the facts science has done just fine solving problems, and philosophy does nothing. Science is not wedded to a notion of “objective truth”; science is wedded to what works. “objective truth” is a wheel that doesn’t turn in science. A theory works to explain what was and predict what will be. Saying it does so because it’s “objectively true” adds nothing.”

      Nonsense. “What works” is a qualifier for an objective truth. How do we know that general relativity is an objective truth? One indicator is that when you apply it to correct an atomic clock riding in a GPS satellite for time dilation caused by changes in relative velocity and gravitational field strength, “it works”. Science is indeed wedded to objective truth. Philosophy and religion are wedded to subjective truths. Write that down.

    • Mosher, it says on my Certificate: Doctor of Philosophy.
      The idea of testing models of reality against reality, and rejecting those that don’t match, is a branch of philosophy and predates computers.
      The downside of this approach is that reality kicks one in the teeth on a regular basis, as there is only one reality, whereas scientists are able to produce a myriad of models. If you can’t deal with the kick in the teeth, then you manufacture models which you either can’t or will not test against reality, and play the ‘I am a scientist and I am better than you unwashed scum’ card.
      Reality, however grim, is true. For all your waxing lyrical about what models are and how they should be treated, you would not allow cancer researchers to play as fast and loose with their models of chemotherapy as you allow these ‘climate scientists’ to play with global temperature.

    • David Springer

      Doc I think you’re suffering from AIDS – Acquired Ironic Derisive Syndrome. The exact cause is unknown but it’s most often associated with frequent, intimate contact with reality.

    • “A theory works to explain what was and predict what will be. Saying it does so because it’s “objectively true” adds nothing.”

      Mosher’s a smart guy, and I respect him, but even he can’t do much more than apply glossy lipstick to the pig known as climate science modeling.

  16. “[D]eveloping useful simulations is not that different from performing successful experiments. An experiment, like a model, is a simplification of reality. Deciding what counts as a good one, or even what counts as a repeat of an old one, depends on intense, detailed discussions between groups of experts who usually agree about fundamentals…”

    Does this comparison make anyone else queasy? And is it even right about experiments? I do both (simulations and experiments) and I really don’t believe they are “not that different.” But what say you?

    • From 100K feet one can see gross similarities.

    • Some experiments are done directly at full scale under conditions identical to those that apply in real use. That is, however, extremely rare. Almost all experiments are done to confirm or refine models that are expressions of theory. It is not far from the truth to say that all experiments are worthless without models. The models may be very simple, but a model is in virtually every case involved in making it possible to take advantage of the results in an application under even slightly differing conditions. The simplest model assumption may be that of continuity, which says that the outcome does not change much when conditions change very little.

      In experimental work it is more typical that the setup is simplified in order to make the experiment tell about a specific phenomenon separately and accurately. Experiments of this kind have been the most important for the development of physics. The results of reproducible, carefully executed laboratory experiments would be next to useless without a model that expresses the theory. (Every formula of physics is part of a model.)

      As models are needed to give empirical results some meaning, empirical results are needed to make models relevant for the real world. The symbiosis of models and experiments is the content of science. Scientific knowledge can be improved both by making more experiments or observations while setting models aside during that work, and by studying models separately, referring only to other models and theory during that work; but somewhere in the background there must be a connection between the models and the empirical data.

    • Interesting comment, Pekka. But I’m wondering about your direct answer to the question: are you saying that trying to create some mutually exclusive distinction between models and experiments is (in most cases) unsustainable (and thus, in your view, it is true that they are “not that different”)?

    • When the model is too big to hold in your own head, it is propitious to remember it.

    • Joshua,

      Observing the environment, either just by some measurements or by experiments which take advantage of artificially created conditions, produces raw data. That step is certainly different from theory or model development and from figuring out the consequences of the theories and models. Neither activity alone qualifies as science (when we exclude pure mathematics, and the part of philosophy that is distinct from actual science, as is often done). Only their combination is science.

      As soon as we start to give some meaning to the raw data, we also use models, and it’s common to consider that kind of use of models to be part of the empirical work.

      Mathematics studies relationships and also properties of models without any reference to their relationship with observables of the real world. Many methods and tools useful in science are, however, developed that way. That is clearly a totally separate activity from doing empirical work.

      Whenever mathematical methods are used with input, direct or indirect, from empirical observations, we have a chance of learning something new about the real world. That may be based either on more empirical data or on development in the methods. Often the development of methods is linked directly to the nature of the empirical data. That may make it impossible to draw a line between empirical work and model development. In this sense the activities may be equal.

  17. When I think of computer models I am reminded of the four color map theorem proof. The theorem basically says no more than four colors are required to color a map so that no two adjacent regions have the same color. After long existing as a conjecture, it was proven in the mid-1970s, as I recall. It was proven by a computer program that eliminated a large number of possible cases, after some preliminary analytical narrowing of the possibilities. The proof could not be carried out, or even followed in detail, by a human. At the time I engaged in a number of lively discussions about whether a computer proof was acceptable. The issue was, and is, what is the nature of, and in fact the point of, knowing something. One side argued that it doesn’t matter how a theorem is proven; we learn the same regardless. The end result is just knowing whether it is true. Another side argued that the point of proving a theorem was not just to know whether the statement is true or not, but to gain the understanding (by proving the theorem) of WHY it is true, and thereby better understand nature, or at least math. The latter side argued that one can proceed by assuming it is true and establish whatever results that leads to via appropriate logic. The absolute truth of the statement is not the issue; the understanding necessary to prove it is the issue.

    I think a similar view could be applied to computer models of more complex physical processes, such as fluid flows, computational chemistry, astrophysics, and even climate science. If the computer model gives outputs that cannot be understood in large part, and reasonably accurately approximated (to agree with observations at that level of approximation) by a human with a piece of paper and a calculator, then the computer is not advancing our understanding. It may be adding to our knowledge, but not our understanding. The point is not just to know the results but to understand them.

    But I guess climate science is not math or physics. I am still unsure what climate science is exactly, but I get the nagging feeling that the discipline could use a thorough taxonomy based on time, space, and energy scales. It appears (to an outsider) as if many of the computer models used mix phenomena across multiscale and multiphase boundaries without a prior rigorous delineation of boundaries or clearly defined conservation or symmetry properties. Perhaps an axiomatic approach to climate science? Oh well, it is late at night and time to get off the soapbox.

    • +1. I do sometimes feel like the seven blind men touching the elephant when I deal computationally with a hairy likelihood function. I can only grope at some distant thing, and I know I don’t really see it.

    • A fan of *MORE* discourse

      F.A.H. opines “If the computer model gives outputs that can not be understood in large part and reasonably accurately approximated (to agree with observations at that level of approximation) by a human with a piece of paper and a calculator then the computer is not advancing our understanding.”

      You are right F.A.H. And fortunately “What we seek often is near to hand” (Borges, Averroes’s Search).

      Close study, sustained over many decades, of by-hand-checkable Hansen-style energy-balance climate models has convinced an overwhelming majority of climate scientists that (in Dave Appell’s words) Hansen-style energy conservation reigns supreme!

      Conclusion  Close study of common-sense energy-balance models and their societal/moral implications is recommended to you, F.A.H!


    • I am still unsure what climate science is exactly

      I’m unsure whether the concept of climate science is appropriate. Is there a separate science of that name, or is climate just a subject of study of geosciences?

      Climate is defined to be a description of distribution of states of weather (averages, variabilities, etc.). Can studying the distributions be a science of its own? Is it only confusing that we discuss climate science? There are certainly issues that are important specifically for understanding climate, but does studying those make it a separate science?

    • Tomas Milanovic

      I am here with Pekka.
      Climate science is science applied to climate. So one could talk about river science, bug science, cloud science or rocket science.
      And of course it just defines the field to which science applies so it defines nothing interesting or relevant about the science itself.
      The science itself is any necessary scientific domain – thermodynamics, quantum mechanics, non-linear dynamics, fluid mechanics, geophysics.

      To that one could add biology, biochemistry, chemistry, agronomy …
      As it is obvious that nobody masters all of the fields necessary to study the climate, there is no such thing as “climate science”.

    • A fan of *MORE* discourse

      Tomas Milanovic wrongly asserts [without reason or reference] “As it is obvious that nobody masters all of the fields necessary to study [the climate →] medicine, there is no such thing as [‘climate science’ →] ‘medical science.’”

      Dogmatically wrong assertion by Tomas Milanovic, inspiring scientific links by FOMD!

      It is a pleasure to help enlarge your conception of the 21st century scientific enterprise, Tomas Milanovic!


  18. As was once said by another, Judith Curry asks the interesting question of whether we should depend on data or models. The answer is: if we had data from the future, we would use that.

    • @eli rabbett

      But the corollary of your insight is *not*

      ‘And therefore we should rely absolutely on any old untested unvalidated junk that calls itself a climate model’

    • But Eli, we do get in a lot of data for testing. But it’s (mostly) regional; if you insist on global averages you’re down to a single data point (or of that order) every month. Hopeless. Can it really be true that everything about earth’s climate is chaotic except only the global average surface temperature? Seems unreasonable; even the global surface is not a closed system, given that people are now considering major heat flow into the deep ocean.

    • The thing is that we keep talking about the models, with not much discussion of the data.

      We need reasonable answers from both, and more discussion of attempts to model past climate as well.

      We know what high CO2 environments have done in the past, so I think we already have the answer: stop burning coal.

    • Bob Droege, “We know what high CO2 environments have done in the past, so I think we already have the answer, stop burning coal.”

      Why? Because “coal” is inherently evil and the spawn of the devil, prosperity? In the US right now about 50% of the coal-related pollution is due to 10% of the aging power plants. If all coal-fired power plants were upgraded to 1990s technology, the improved efficiency and environmental controls would bring coal in line with standard emission requirements.

      Coal is just another tool in the toolbox. Efficiency is the key. China right now has mainly higher-efficiency coal power plants, with scrubbers and particulate arrestors that are not being used – likely as a political bargaining chip.

      http://web.mit.edu/newsoffice/2008/china-energy-1006.html

      If the US took a rational approach to coal use standards, with phase-out as newer technology developed, it would solve more problems than “demonizing” coal does. A bigger problem is the land use needed to grow replacement biofuels, a consequence of the irrational stance on coal and “alternatives”.

    • OMG, something ER said that I agree with.

      We are forced to use models. We are not forced, however, to have unquestioning faith in them when setting policies that are otherwise harmful.

    • No Capt,
      It’s not that coal is the demon, it’s that you get the most CO2 from burning coal.
      You are right, China is burning coal to get the most watts for every dime spent on coal. It would cost them more to burn low sulfur coal, or to operate their scrubbers, so they don’t.
      I’m not demonizing coal or taking an irrational stance on CO2 emissions, but I’ll put you down as a defender of the coal burners.

    • Bob Droege, ” but I’ll put you down as a defender of the coal burners.”

      You could do that though I think of myself more as a keeping my options open kind of guy. Germans must be that way too since they are planning more coal plants. Luckily, Germany didn’t do anything stupid like outlawing coal, so now that they need it because their nation has a different fear factor, they can use it without requiring tons of legal and regulatory exceptions.

  19. Science (and discovery) happens when models fail. That’s how we discover new things we didn’t understand before. Insisting that a model represents reality is profoundly unscientific. That, sadly, is why “climate science” is at present a deeply troubled discipline. Rather than trying to find measurements to break the models, researchers seem to be trying to shade the measurements to fit the model.

    Climate models do not have a history of skilful predictions to validate them, yet somehow because they include some physics known to be true we are supposed to treat them as Gospel. The physics is undeniable; CO2 is indeed a greenhouse gas, and the concentration in the atmosphere is indeed increasing. Worldwide temperatures have indeed been rising in the last century.

    But to go from these simple observations and truths to detailed predictions (e.g. snowless winters in Britain, an ice-free Arctic, etc.) is truly absurd.

  20. Climate models are computer programs, mostly written in Fortran. Some of the code dates back 20 years. I doubt whether any one person has a grip on all the software for a given model. Some light-hearted examples:

    #climate GCMs are robust :
    if (AGW < 0) {AGW = -AGW;}

    #climate GCMs include all known forcings:
    if (redsky@night(d)) {clouds(d+1) = 0;}

    #climate GCMs show enhanced warming :
    H2O = 3;
    AGW = H2O*CO2;

    #climate GCMs include aerosol forcing:
    ASOL = (AGW - HAD4)*3.5;
    AGW = AGW - ASOL/3.5;

    #climate AR5 uses ensemble of GCMs:
    @H2O = [1.2,1.5,2.0,2.5,3.0,4.0,5.0];
    @CMIP5=H2O*CO2;

    #climate GCMs find missing heat !
    hellfire = nskeptics*10^15 joules;
    AGW = AGW - hellfire;

  21. “[I]n this new world of computer modelling, an oft-quoted remark made in the 1970s by the statistician George Box remains a useful rule of thumb: ‘all models are wrong,’”

    By 1970 computer modelling was well established. In Australia we were well into computer simulation for the British MOD. With the Woomera missile range we could offer a complete weapon evaluation facility. I took on the British Bloodhound system. You see, if an air force needed a new fighter, they built a prototype, found an intrepid pilot to fly it and put it through its paces; but there was no room for a pilot in those robots of the skies, and a human pilot could never survive the acceleration.

    The supersonic Bloodhound flew like a normal aircraft – it had to bank to turn – and no one had ever written a complete 3D mathematical model, including experiments to validate it in detail against flights at the Woomera range. We could take signals like wing angle or guidance output from telemetry and feed it into the model to make sure the response was the same. I wrote the model in 1959, so my colleagues built a special computer to run it, since no commercial computers then becoming available could run it.

    This exercise was a success because the model was subject to a rigorous validation. Guidance, autopilot, wing actuators, aerodynamics, and thrust and fuel consumption equations were all checked separately, right down to production tolerances. This threw up some unexpected results, but it is the only way to achieve confidence in a mathematical model.

    So how far should we trust models? Only to the extent that they have been properly validated.

  22. David Springer

    http://en.wikipedia.org/wiki/Trust,_but_verify

    Simplez.

    I said this at the beginning of the thread. It’s THE definitive answer. Everything following is just academic woolgathering, or sniveling about the cost of same.

  23. We often hear the trite remark ‘All models are wrong, but some are useful’

    I’m still waiting to hear how this applies to climate models. What have we learnt that we would not know without them? Recall, please, that Arrhenius made a good prediction with a piece of log paper and a ruler back in 1907.

    What have we got for our $100bn ‘investment’? Because to this observer it seems that the answer is very close to nothing at all.

    • Latimer

      Your query begs the question ‘good for what?’

      My impression is that the more complex GCMs have some potential usefulness in simulating the interactions between different parts of the climate system – in a way, almost hypothesis testing. For example, can the models conceivably explain the pause by transferring more energy to the deeper oceans? If so, what are the mechanisms, and what was the trigger for the switch away from heat gain in the atmosphere?

      The implied question though is ‘are complex GCMs useful at predicting / projecting future temperature trends?’. At present the answer is tending towards ‘not very’.

      As we know that even the most complex and computationally intense model will eventually have to rely on parameterisation and modeller’s assumptions, I think Dr Knutti’s last comments bear considering – are the modellers going down a blind alley in attempting to develop ever more ‘realistic’ models that can only be run very few times, or would it be more useful for climate projections to run more simplified models that can be run thousands of times over a much wider envelope of initial conditions?
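
      To make the last option concrete: a toy version in Python of “run a simple model thousands of times”. The model is a zero-dimensional energy balance, dT/dt = (F − λT)/C, with the feedback parameter λ drawn from an assumed range; every number here is an illustrative assumption, not a tuned value.

      import random

      def run_ebm(forcing, lam, heat_cap=8.0, years=200, dt=0.1):
          """Euler-integrate dT/dt = (forcing - lam*T)/heat_cap."""
          temp = 0.0
          for _ in range(int(years / dt)):
              temp += dt * (forcing - lam * temp) / heat_cap
          return temp   # warming (deg C) reached after `years`

      random.seed(1)
      runs = sorted(run_ebm(3.7, random.uniform(0.8, 2.0)) for _ in range(5000))
      n = len(runs)
      print("5th-95th percentile warming: %.2f - %.2f C" % (runs[n // 20], runs[-n // 20]))

      The spread of such an ensemble maps the assumed parameter envelope onto a warming envelope – the kind of exploration a model needing months per run cannot provide.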

    • “Arrhenius made a good prediction with a piece of log paper and a ruler back in 1907.”

      Yup. And he wasn’t a watermelon.

    • Watermelon or not, the problem is that after all this work, we still do not know whether AGW is likely to cause warming of around 1.5C on doubling of CO2 (and hence be no problem for humanity at all as constrained by the availability of fossil fuels to consume) or result in warming of 4.5C (and become a potential problem for humanity and our environment over the course of the next 100 years or so).

      And the models aren’t getting us any closer to that key answer (which is really the only answer we need in order to plan for any climate changes that may occur).

      So I’d say they have been worthless to date.

      Max

    • David Springer

      re; what did we learn from climate models

      We learned that some physical assumption shared by all the models causes them to predict more warming than happens in reality.

      We also learned that climate boffins have a difficult time accepting the fallibility of a consensus of experts.

  24. A fan of *MORE* discourse

    Question  Should we trust climate-change models?

    Answer  As long as we see Earth’s seas keep rising and Earth’s polar ice-caps keep melting, we should trust that Hansen’s climate-science worldview is broadly correct.

    ———–

    Question  Should we trust Chris Monckton/WUWT/slogan-shouting demagogues/corporate shills/cycle-chasing denialists/elderly crackpots/hyper-libertarian vigilantes?

    Answer  Ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha

    ———–

    Question  Is it efficient, prudent, sane, and moral? … or is it short-sighted, willfully ignorant, greedy, and wrong? … to burn all earth’s carbon-energy resources for cheap power?

    Answer  It’s short-sighted, willfully ignorant, greedy, and wrong.

    ———–

    These questions and answers are not complicated, Climate Etc readers!


    • Should we “trust” Fanny?

      As Santa says, “ho-ho-ho!”.

      Max

    • A Fan of More One-sided Shouting wrote:

      Question Should we trust climate-change models?

      Answer As long as we see Earth’s seas keep rising and Earth’s polar ice-caps keep melting, we should trust that Hansen’s climate-science worldview is broadly correct.

      Oh wow.

      I don’t think I have ever seen evidence of less understanding of the scientific process. I count a minimum of 4 things wrong in one sentence.

      Do you even think about what you are posting? Or is it entirely Pavlov-reflexive?

  25. Climate models including anthropogenic effects must be able to make specific predictions which can then be tested by experiment. Otherwise alternate theories which can reproduce observed temperatures are equally valid no matter how outlandish – see for example here. Testing predictions of models is the only way to make progress in science.

    For example some specific predictions of GCMs could be:
    1. Tropical hot spot in upper troposphere.
    2. Cooling of stratosphere.
    3. Changes in water vapor levels in upper atmosphere
    4. Quantitative regional warming predictions, e.g. poles versus tropics.
    etc.

    As far as I know none of the above have yet been definitively measured by experiment. The clincher would be to make a new unambiguous prediction which can then be tested by experiment.

  26. It may be unfair, but climate models remind me of the Cambridge Econometrics model for the UK economy. This was very high profile in the early 1980s, presented as the definitive, very detailed, sophisticated model of all sectors and regions of the economy, featuring thousands of equations. It also had other agendas. One was to vindicate what one might call Cambridge Economics (Keynes onwards) and demonstrate the ‘folly’ of the then government. As it happens, although by far the most detailed model around at the time, its macroeconomic forecasts were poor compared to other models. A ‘palace coup’ effectively removed the founding academics (memory serves?). Apparently it is now quite successful at providing forecasts etc. on a micro level across Europe.

    Not sure quite what the moral of this is, but something about complexity not being the Holy Grail of modelling a large system; rather, fewer well-chosen equations.

  27. Tomas Milanovic

    Computer models should not be distinguished by what they do, because they only do, and will always only do, additions of integers and basic logical operations (AND/OR). Considered under this aspect, all models do exactly the same thing with different human-produced algorithms.
    They should be distinguished by purpose, by the field to which they are applied. It is this purpose that allows us to know whether they can be useful or useless. I will show what I mean with 3 examples.
    .
    Chess model
    ==========
    Chess is a very complex system governed by very simple rules. While the number of possible chess moves in a game is finite, it is so astronomically huge that no computer will ever be able to examine them all. That’s why, if there is one day a proof that white always wins (or that there is always a draw) if both players select the best moves, this proof can only be generated by a human brain and not by a computer.
    .
    A chess computer model is one of the easiest models to build. One just runs through the decision tree and stops at some horizon which is far beyond what a human can do. As chess is a holistic game (i.e. not the move but the whole position counts), the trick is then to weigh positions and select the one with the highest weight.
    This trick is based on some empirical rules and will vary from model to model.
    .
    This model is relatively useful. After all, it beat Kasparov and would easily beat any Climate Etc poster.
    But this model is not playing chess, and it makes hopeless blunders when a human player uses his ability to understand the position holistically and makes a move whose significance lies beyond the computer’s horizon.
    The fact that only the best have this ability at a level sufficient to beat the model’s brute-force horizon changes nothing about the fact that the computer can’t understand chess.
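    As an aside, the scheme just described fits in a few lines. A sketch in Python, with a deliberately trivial toy game standing in for chess (no real chess engine is assumed; the point is only the horizon-limited search):

    def minimax(state, depth, maximizing):
        """Depth-limited search: beyond `depth` the engine is blind."""
        if depth == 0 or state >= 21:                     # horizon, or game over
            return evaluate(state)                        # empirical position weighting
        scores = [minimax(state + move, depth - 1, not maximizing)
                  for move in (1, 2, 3)]                  # toy move generator
        return max(scores) if maximizing else min(scores)

    def evaluate(state):
        return state % 4                                  # a crude, tunable heuristic

    print(minimax(0, 6, True))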
    .
    CFD (Computational Fluid Dynamics)
    =============================
    This system is much more complex than chess because it is continuous and infinite-dimensional.
    But the advantage compared to chess is that here we know exactly the equations governing the dynamics – Navier-Stokes.
    The problem is that N-S can’t be solved analytically in the general case, and it is not even known whether a unique continuous solution exists (one of the Clay problems, worth $1M for whoever finds the proof).
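    (For reference, the standard incompressible form of the equations meant here, with velocity u, pressure p, density ρ, viscosity μ and body force f:
    \rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0 .)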
    .
    However if we take a simple monophasic system with pressures and viscosities in a friendly behaving domain (e.g. air or water in standard conditions), the challenge is “just” to find some numbers that solve a PDE system, and computers are good at finding numbers.
    The only difficulty – and it is indeed a make-or-break condition – is to be sure that the necessarily finite procedure will converge to the infinite continuous set of numbers which are the solution, at least for a finite time horizon.
    What helps here is that we can prove (with paper and pencil) many useful properties of the equations, which act as guardians so that the computer doesn’t start to compute nonsense.
    What also helps is that we can do an experiment and validate the computed numbers by comparing them to the real flow. And it works; CFD is sometimes useful.
    But, and this is the fundamental caveat, the usefulness of CFD is dramatically constrained.
    .
    First it is constrained in space – the resolution needed to ensure convergence is so high that CFD can only be used on small space scales (of the order of meters).
    Second it is constrained in time – after a finite time horizon the convergence is no longer guaranteed.
    Third it is constrained in the parameter space – the pressures, velocities and temperatures must lie in a small bounded domain for which the validity of the numerical treatment has been proven.
    Violate any of the 3 conditions above and the computer will only produce garbage. While the chess computer model was universally valid, the CFD computer model is only conditionally valid.
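    The convergence requirement above can at least be checked on problems with known solutions. A minimal sketch in Python: solve u_t = u_xx on [0, pi] with u(x,0) = sin x (exact solution e^(-t) sin x), refine the grid, and confirm the error falls at the expected second-order rate.

    import math

    def heat_error(nx, t_end=0.5):
        dx = math.pi / (nx - 1)
        dt = 0.4 * dx * dx                 # explicit scheme needs dt < dx^2/2
        u = [math.sin(i * dx) for i in range(nx)]
        t = 0.0
        while t < t_end:
            step = min(dt, t_end - t)
            u = [0.0] + [u[i] + step * (u[i-1] - 2*u[i] + u[i+1]) / (dx*dx)
                         for i in range(1, nx - 1)] + [0.0]
            t += step
        return max(abs(u[i] - math.exp(-t_end) * math.sin(i * dx))
                   for i in range(nx))

    print(heat_error(21))   # coarse grid
    print(heat_error(81))   # 4x finer: error should drop roughly 16x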
    .
    Climate model
    ===========
    This is CFD and chess, power infinity.
    Here we have an extremely messy, complex, non-homogeneous system. Compared to chess, the brute force of combinatorics leads nowhere, and compared to CFD, we don’t know the equations governing the system.
    .
    More precisely – we know some of them, like N-S or quantum mechanical radiation transfer, but we don’t know them all. Besides, the conditions of the system violate all the necessary conditions which made the finite CFD approach useful in some cases.
    – The space scales are too large (thousands of km), so there is no chance that the computed numbers will converge to a solution of any equations, even if we knew them, which we don’t.
    – The time scales are far beyond the horizon where the simulation can be trusted.
    – The system is polyphasic, undergoing phase changes (ice, clouds, continents), so the parameters are outside the validity range of CFD-like methods.
    – No controlled experiment can be done to validate the model’s predictions.
    Of course, by constraining the model to conserve energy, momentum and mass (often by totally artificial methods), the computed numbers won’t look totally stupid, because any evolution respecting conservation laws is potentially allowed.
    The problem is that among the infinity of orbits obeying the conservation laws, only one will be chosen by the system, and understanding which one and why is what we call understanding the system’s dynamics.
    .
    Considering the above, the climate models can safely be considered useless, producing no new or relevant understanding. The only limited usefulness I can see is that the computed numbers, provided one doesn’t stretch the time horizon unreasonably far, probably belong to the possible states of the system, even if in no way sure or necessary.
    I said probably because by definition there is no proof of this belief – and there can’t be, because we don’t know the full set of equations the system obeys, and the computer alone can’t produce any new insight into this question.

    • Tomas,

      Two comments:

      1) As long as we don’t know whether the N-S equation has unique solutions, we may not know whether it’s correct to require that more and more detailed numerical approximations of N-S should converge to some solution. One alternative is that making the approximation more detailed shows convergence up to a point, but diverging behavior beyond that.

      Perhaps we should also remember that real fluids are not a case of continuum, but formed from discrete particles, and that N-S is not a correct equation for real fluids at distances of the order of mean free path of the particles or smaller.

      2) The critique you presented of the GCMs is correct from one essential point of view. They are not good bottom-up models built along the lines you describe. That leaves open whether they are good enough top-down models for many purposes. It’s possible that those conservation laws that can be taken into account, in combination with the other constraints introduced, limit the set of models such that every model constructed correctly to satisfy the set requirements predicts the statistical properties of the expected climate with a useful accuracy, given the right forcings as input.

      The use of GCMs for IPCC projections is obviously based on believing that what I said to be possible is in fact rather likely. Because this is, in my view, the only potential justification for such use of models, discussing the value of GCMs should not be based on their failure to meet those criteria they surely don’t meet, but on whether they meet the second set of criteria, which is also sufficient for their usefulness. The problem recognized by many modelers is that it is very difficult to tell whether a model satisfies this second set of criteria. There are no formal methods for deciding that based on the presently available information.

    • @Tomas

      “I said probably because by definition there is no proof of this belief and there can’t be because we don’t know the full set of equations the system obeys and the computer alone can’t produce any new insights in this question.”

      Exactly. And what you described is the ‘easy part’.

      In addition to the mathematical difficulties – which I understand exist, but which are otherwise to me just meaningless buzzwords – we have the problem, from the modeler’s pew, of a large number of model inputs that are known to influence climate but whose magnitude and timing are unknown and unknowable.

      Examples would include:

      The output of the sun: TSI, spectral distribution of the TSI, solar eruptions, variations in the sun’s magnetic field and hence the solar wind, etc.

      Variations in the earth’s magnetic field, magnitude and orientation.

      Cosmic radiation.

      Plate tectonics affecting the size and shape of the ocean basins, and by extension, ocean currents. Also affects land features. Slow, but definitely influences climate.

      Undersea eruptions: magma, superheated water, and other chemicals. These produce point sources of heat at random locations and times, which set up convection currents and, again by extension, influence ocean currents. The undersea eruptions also build seamounts and even new islands, further affecting ocean currents.

      Volcanic eruptions. Location, magnitude, nature of material ejected, and timing: all random and unknowable, but definitely having an immediate effect on climate.

      And a whole litany of other ‘climate drivers’ some of which we may not even be aware of yet.

      Using models based on theory and controlled wind tunnel experiments to predict aircraft performance is VERY useful in producing airplanes that fly and burn fuel at the rate promised to the airline customers.

      Climate models, based on the axiom that ACO2 is the primary driver of climate and fed by undersampled and ‘adjusted’ empirical data and a laundry list of SWAGs, are ALSO very useful, as catweazle666 and others have pointed out. But not for predicting climate. Not now; not ever.

    • Pekka Pirilä | December 17, 2013 at 7:11 am |

      That leaves open whether they are good enough top-down models for many purposes. It’s possible that those conservation laws that can be taken into account, in combination with the other constraints introduced, limit the set of models such that every model constructed correctly to satisfy the set requirements predicts the statistical properties of the expected climate with a useful accuracy, given the right forcings as input.

      I think it is possible; I just don’t think what has been built meets this possibility. The proof of this is that their output doesn’t match reality. CO2 may be a control knob, but it’s not the only control knob.

    • The models model anomalies and not actual temperature, which makes the phase transitions of water somewhat problematical.

    • No GCM models anomalies; they all model actual temperatures and other actual variables. The results may often be presented as anomalies, calculated by subtraction from the model results.

    • I always appreciate your informational comments here, Tomas. And Pekka.

  28. In 2005 solar physicists Galina Mashnich and Vladimir Bashkirtsev bet climate modeler James Annan that global temperatures would be cooler during 2012–2017 compared to 1998–2003.

    Perhaps the Russians’ model had to do with an early stadium-wave hypothesis, plus solar variability. In four years, or less, the models can be compared.

    • According to GISS, 1999 and 2000 were very cold:

      1999: 40
      2000: 41

      By comparison, 2012 and 2013 are broilers:

      2012: 57
      2013: 59 (Dec 1, 2012 thru Nov 30, 2013)

      (GISS anomalies, in hundredths of a degree C.)

      It’s going to be very hard for Annan to lose.
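
      The comparison behind the bet is just a difference of period means. A sketch in Python, using the pairs quoted above as stand-ins (the full 1998–2003 and 2012–2017 windows would be needed to settle it):

      early = [40, 41]   # 1999, 2000 (GISS, 0.01 C units)
      late  = [57, 59]   # 2012, 2013

      diff = sum(late) / len(late) - sum(early) / len(early)
      print("later minus earlier: %+.1f (0.01 C units)" % diff)   # prints +17.5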

    • It’s going to be very hard for Annan to lose.

      Right. As long as Jimbo’s merry men are massaging the data it’s a cinch.

      Max

    • Prove it.

  29. Tomas Milanovic

    As long as we don’t know, whether N-S equation has unique solutions, we may not know, whether it’s correct to require that more and more detailed numerical approximations of N-S should converge to some solution.
    Formally this is correct. But IF they have (and this is expected for regular enough initial conditions) then the numerics must be defined and verified so that they converge. This is a must for any numerics.

    Perhaps we should also remember that real fluids are not a case of continuum, but formed from discrete particles, and that N-S is not a correct equation for real fluids at distances of the order of mean free path of the particles or smaller.

    This has already been verified. A system of about 100 molecules was solved with explicit molecular interactions. The difference from N-S was small (I don’t have the paper handy, but it can easily be googled). This invalidates the objection for CFD, where the minimal length is the Kolmogorov length.
    It’s possible that those conservation laws than can be taken into account in combination with the other constraints introduced limit the set of models to such that every model constructed correctly to satisfy the set requirements predicts the statistical properties of expected climate with a useful accuracy given right forcings as input.
    Yes, this develops further my use of “probably”. You’ll agree that if one looks at that possibility with a cool head, it really looks like a crazy faith based on absolutely nothing. This joins the comments we exchanged about ergodicity on another thread. I wouldn’t expect you to dive headlong into such crazy speculations ;)

    • Tomas,

      One more point is that the N-S equation is derived from the same principles that are used in building climate models. The models are typically written directly from the conservation laws, not by explicitly approximating the N-S equation. That also means it’s possible to estimate errors at each step of model construction, using empirical observations to tell about the actual properties of the atmosphere, and specifically about details that cannot be described by the model.

      Considering the models as derived directly from the conservation laws also helps in formulating tests for various subsystems or isolated phenomena.

      The above doesn’t allow for reliable or objective estimates of model validity, but success at the various steps of the approach adds to the plausibility of the usefulness of the resulting models.

    • The essence of Navier-Stokes is that dissipation can be described by a linear, viscous process. This is of some relevance to forced convection, i.e. wind tunnels in which thermal gradients are negligible. It has little dissipative relevance to natural convection, with dissipation determined by extremely nonlinear thermal processes. I suggest Landau and Lifshitz, Fluid Mechanics, Chapter V, as a reference. The steady-state Carnot description of normal (z) thermal flux dissipation provides a simpler picture and a secular 2nd-variation indication of a minimum, and approximate solutions are going to err on the high side for surface temperatures (WiP).

    • Quondam,

      I checked that chapter of Landau and Lifshitz. What’s discussed there is certainly essential for understanding the atmosphere, but I cannot see what the actual point of your comment was. Much of the content of that chapter is discussed in some way in every presentation of the physics of the atmosphere, but you must have some specific point in mind.

    • Pekka,

      Thank you for taking time to check out Chapter V. The subject of this thread is trust in models. I certainly agree that GCMs are not Navier-Stokes solutions – it’s not really clear what they are solutions to. While they may start with some fundamental conservation expressions, embedded in the text are little tweaks of questionable provenance (Hansen, 1983). NS is often implied to be the path to understanding climate, but I take exception. The atmosphere is not in equilibrium; at best we can approximate it as a steady state and, in my mind, its essential characteristic is the constant work required to keep it so. NS implies this work can be expressed through a linear viscosity tensor and material flux between boundaries. Chapter V shows that NS requires an additional thermal term when thermal gradients occur and the viscous term becomes of minor import (Eq. 56.4), with the steady state maintained by an energy flux between boundaries – a physically quite different process. In either case, linear dissipation is assumed. While that may be a useful assumption for isothermal forced convection, it bears no relevance for free convection in the troposphere.

      “A very curious case of convection is the flow which occurs in a fluid between two infinite horizontal planes at two different temperatures, that of the lower plane being greater than that of the upper plane.”

      Should I have any trust in an NS model of the troposphere? Let me count the ways.

    • Quondam,

      Have you had a look at textbooks of atmospheric science, or at a more specialized book like Vallis: Atmospheric and Oceanic Fluid Dynamics or Washington and Parkinson: Three-Dimensional Climate Modeling? From them it should be fully clear that the role of convection is fully recognized. That doesn’t mean that it’s fully solved, but your comment seems to indicate that you think it’s not even recognized, or that it has been ignored without major efforts at solving the problems.

      I don’t know enough to judge properly the severity of remaining problems, and would certainly like to learn more.

  30. Are we scientists, standing on the shoulders of giants, or are we simply elevating ourselves in Towers of Babel?

  31. We know exactly what the success rate of the economic models is – based, I believe, on the Black-Scholes model (acronym BS – significant or what!) – because the global economy is still attempting to recover from the economic setback of ~2008.

    Anyone who believes they can use a computer – any computer, no matter how large and powerful; all that happens when you chuck more power at the problem is that you get the wrong answer faster – to predict the future state of a massive, effectively infinite system (infinite in fact, if speculation concerning galactic-level periodicity is correct), containing an unknown number of feedback loops, of some of which we are unsure even of the sign, and, worse, subject to extreme sensitivity to initial conditions, is either highly deluded or a total charlatan.

    Ironically, it was a meteorologist – Edward Lorenz – who initially pointed this out.

    The fluttering of a butterfly’s wing in Rio de Janeiro, amplified by atmospheric currents, could cause a tornado in Texas two weeks later.

    This statement is as true now as it was when he originally uttered it.

    As to climate models being useful, currently they appear to be very useful indeed – for the purposes of extortion, and little or nothing else.

    • And then add in Mother Earth’s infinitely elastic and unknowably powerful homeostatic mechanisms and you have a real can of worms.

    • “As to climate models being useful, currently they appear to be very useful indeed – for the purposes of extortion, and little or nothing else.”

      Exactly. Climate models have proven and continue to prove to be the most useful computer models ever devised. For their intended purpose.

      And their success has nothing whatsoever to do with their ability to predict, with fractional-degree precision, the effects of ACO2 on the TOE.

  32. It’s not difficult. The ‘useful’ models are those which can simulate reality with some degree of accuracy. Hindcasting of course cannot be taken as anything other than tuning – a necessary but not sufficient condition. To quote Lindzen, it’s like doing an exam with the answers in front of you. Ergo the tests of the climate models are correct predictions and present-day spatial correctness, but they abjectly fail both, so they are not useful except in telling us that there is something missing. Certainly of no use for policy though!
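
    In statistical terms, Lindzen’s “exam with the answers” is the difference between in-sample and out-of-sample error. A minimal sketch in Python with invented data: tune freely on the “hindcast” period and the fit looks splendid; score the same tuned model on data it never saw and the error explodes.

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 1.0, 60)
    series = np.sin(6.0 * t) + rng.normal(0, 0.1, t.size)

    t_fit, y_fit = t[:40], series[:40]        # "hindcast": answers in hand
    t_new, y_new = t[40:], series[40:]        # the actual exam

    coeffs = np.polyfit(t_fit, y_fit, deg=6)  # tune with many free knobs
    def mse(tt, yy):
        return float(np.mean((np.polyval(coeffs, tt) - yy) ** 2))
    print("in-sample MSE:", round(mse(t_fit, y_fit), 4))
    print("out-of-sample MSE:", round(mse(t_new, y_new), 4))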

    • Yet even with the answers in front of them, the models differ from each other and from actual global mean temperature by up to 4 deg C and differ in terms of precipitation everywhere and according to any measure of precip (seasonality, storm size, regional patterns, days of rain, you name it).

    • Craig Loehle,
      “Yet even with the answers in front of them, the models differ from each other and from actual global mean temperature by up to 4 deg C and differ in terms of precipitation everywhere and according to any measure of precip (seasonality, storm size, regional patterns, days of rain, you name it).”

      Other than that, and reversing ocean currents, they are doing a marvelous job :)

    • That’s not at all the way that it works.
      Those that can, do. Those that can’t, make up stuff.

      Lean is following the right approach.
      http://contextearth.com/2013/12/18/csalt-model-and-the-hale-cycle/
      Full speed ahead.

  33. Mother Earth’s infinitely elastic and unknowably powerful homeostatic mechanisms

    AKA “attractors”, once again courtesy of Ed Lorenz.

    One of these days modern climate scientists might catch up with him.

  34. A professor in hydrology said in Spanish that governments “simulate” in order to “dissimulate” – simular and disimular are close together in Spanish.

  35. Freeman Dyson had very relevant statements to make with respect to models. He stated:

    “I have studied their climate models and know what they can do,” Prof. Dyson says. “The models solve the equations of fluid dynamics and do a very good job of describing the fluid motions of the atmosphere and the oceans. They do a very poor job of describing the clouds, the dust, the chemistry and the biology of fields, farms and forests. They do not begin to describe the real world that we live in.”

    An experience he had with Enrico Fermi is also enlightening:

    “Dyson learned about the pitfalls of modelling early in his career, in 1953, and from good authority: physicist Enrico Fermi, who had built the first nuclear reactor in 1942. The young Prof. Dyson and his team of graduate students and post-docs had proudly developed what seemed like a remarkably reliable model of subatomic behaviour that corresponded with Fermi’s actual measurements. To Prof. Dyson’s dismay, Fermi quickly dismissed his model.
    “In desperation, I asked Fermi whether he was not impressed by the agreement between our calculated numbers and his measured numbers. He replied, ‘How many arbitrary parameters did you use for your calculations?’ I thought for a moment about our cut-off procedures and said, ‘Four.’ He said, ‘I remember my friend Johnny von Neumann [the co-creator of game theory] used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.’ With that, the conversation was over.”
    …. Only years later, after Fermi’s death, did new developments in science confirm that the impressive agreement between Prof. Dyson’s model and Fermi’s measurements was bogus….

    See http://www.canada.com/nationalpost/news/story.html?id=985641c9-8594-43c2-802d-947d65555e8e

    JD
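
    Von Neumann’s quip is easy to make literal. A sketch in Python: a model with four free parameters (a cubic) passes exactly through any four measurements whatsoever, so agreement with the data, by itself, demonstrates nothing. The numbers below are arbitrary.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([2.7, -1.3, 0.8, 5.5])           # any four values at all

    params = np.polyfit(x, y, deg=3)              # four free parameters
    print(np.allclose(np.polyval(params, x), y))  # True: a "perfect" fit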

    • Freeman Dyson participated in Oak Ridge’s work on climate modelling:

      And there you got a very strong feeling for how uncertain the whole business is, that the five reservoirs of carbon all are in close contact — the atmosphere, the upper level of the ocean, the land vegetation, the topsoil, and the fossil fuels. They are all about equal in size. They all interact with each other strongly. So you can’t understand any of them unless you understand all of them. Essentially that was the conclusion. It’s a problem of very complicated ecology, and to isolate the atmosphere and the ocean just as a hydrodynamics problem makes no sense.

      Thirty years ago, there was a sort of a political split between the Oak Ridge community, which included biology, and people who were doing these fluid dynamics models, which don’t include biology. They got the lion’s share of money and attention. And since then, this group of pure modeling experts has become dominant.

      I got out of the field then. I didn’t like the way it was going. It left me with a bad taste.

      Syukuro Manabe, right here in Princeton, was the first person who did climate models with enhanced carbon dioxide and they were excellent models. And he used to say very firmly that these models are very good tools for understanding climate, but they are not good tools for predicting climate. I think that’s absolutely right. They are models, but they don’t pretend to be the real world. They are purely fluid dynamics. You can learn a lot from them, but you cannot learn what’s going to happen 10 years from now.

    • Oops, dismissed carbonates, which, I agree, are not in as close contact with the other reservoirs. Otherwise, excellent points. I particularly liked that climate models were useful, but not for predicting climate.
      =========

  36. If, for example, the GFS has a distinct north and east bias for hurricanes five days out, we can use that information to make predictions. It is, as has been pointed out here, both wrong and useful.

    It seems we have enough evidence to conclude, at least on a preliminary basis, that the models have a distinct warm bias. This is useful information, even if they are not correct.

  37. A fan of *MORE* discourse | December 17, 2013 at 4:26 am | Reply

    Question: Should we trust climate-change models?
    Answer (translated from Fanny’s statement): Well, they seem to provide a reasonable explanation, and if we tweak the inputs a bit they come close, so therefore they MUST be correct, right?

    Question: Is it efficient, prudent, sane, and moral to burn all earth’s carbon-energy resources for cheap power, or is it short-sighted, willfully ignorant, greedy, and wrong?
    To some it seems very important that this carbon be reserved for a later population which, by the end of the century, will be seven times wealthier than we are now (IPCC figures). The little problem of developing countries being unable to develop, or to educate and restrain population growth the way the developed world has, will evidently be solved by a massive international bureaucracy redistributing the fruits of an equally massive taxation and wealth-redistribution scheme.

    Bonus Question: Should we trust bureaucrats and government-employed scientists, sitting on the receiving end of one of the biggest cash-cow systems of research grants ever seen on this earth, to be dispassionate and independent? To ignore the pressure of stated government policy, the green lobby, and the ‘noble cause’ of ‘saving the world’, and really concentrate on the correct answers? Or do you think they may just take the path of least resistance and continued access to research grants, remaining on the gravy train rather than risking the career-trashing path of being marginalized, excluded and ostracized?

    And a personal question to Fanny. What is it with the links you so freely provide? Sometimes I am foolish enough to follow them, and they lead me into all sorts of vaguely related, but not very meaningful, nooks, crannies, and quiet little places. Are they provided solely for the purpose of distraction?

    • Don’t expect FOMD to understand your points. It is not in his political interests to do so. His answer likely will consist of a simple rejection, followed by some kind of rhetorical flourish, likely a rhetorical question. One thing he will not do is consider your arguments, then respond thoughtfully, taking points you made into consideration, even if only to refute them.

    • “It’s really fun. It’ll be even more fun when we can afford to have the fourth wall installed. How long you figure before we save up and get the fourth wall torn out and a fourth wall-TV put in? It’s only two thousand dollars.”

      H/t R. Bradbury.
      ==========

  38. And when testing against reality is not an option, our confidence in any given model relies on other factors, not least a good grasp of underlying principles.

    “Our confidence” argues for decision-makers to be informed scientists/engineers. It may be OK for scientists/engineers to be the decision-makers for research or engineering projects, but in government public policy the decision-makers are politicians and the people who elect them, and they must have confidence in the decisions. Saying “trust the models” is a form of tyranny unless the models have undergone the most rigorous forms of validation showing they are useful for a public-policy purpose.

    A model which cannot be tested and validated for a given purpose is no better than soothsaying for that purpose. The current state of climate models is that they have not been validated for any decade-or-longer climate forecast, so any use of them to justify public policy is no better than guesswork.

  39. Steven Mosher | December 16, 2013 at 10:37 pm |

    All models are false. There isn’t a single one I ever worked with that wasn’t false. You ride in planes that were made with false models.

    I sometimes wonder what Mosher is trying to tell us:

    Perhaps:
    A. Models are never precisely accurate, but they can be very useful? Sounds fair, but how accurate do they have to be? In the case of a plane, if they are close enough that there is no catastrophe in the flight tests, they have been useful. But sometimes they are wrong in aerodynamic and rocket-propulsion calculations by a factor of several (*see below) and still everyone survives. So does that tell us anything about how accurate or inaccurate GCMs may be?

    Or
    B. I am Steve Mosher, the smartest guy here, so the rest of you are, by definition, simply wrong. (Oh, and PS: I prefer to offer a few cryptic phrases and not explain myself at any point, lest doing so raise strange doubts about my brilliance in some inferior mind out there.)

    I am pretty sure we may be stuck on answer B. (Whilst fully acknowledging the possibility that Mosher may in fact really be intellectually, if not communicatively, brilliant.)

    *April 12, 1981. First shuttle launch (STS-1). Columbia:

    The next anomaly struck moments after liftoff…. “Are we lofting, FIDO?” ….. “Lofting, Flight…” .. the shuttle … eventually got back onto its trajectory.
    (Later) Hutchinson pointed to the velocity plot. “I thought it was going right off the top of the screen…” Which, indeed, it almost did. Incorrect modeling of the solid boosters’ exhaust plume and aerodynamic effects caused the orbiter to fly at a higher angle of attack than the wings were designed to take. Years later, STS-1 commander John Young said: “We had fully deflected needles on the ADI. It was a pretty big mess. But that is why you fly. To learn what you need to do.”

    And, re the heat protective tile losses of that flight:

    When the solid rocket boosters ignited at liftoff, the resulting pressure wave reflected off the pad and slammed into the shuttle at loads four times the predicted amount…… Every subsequent launch used a water-suppression system…

    “But That’s Why You Fly,” Air & Space/Smithsonian, July 2004, p. 16, by Terry Burlinson

  40. Trust models? In my opinion, only as far as you yourself understand them, which is rarely the case. Orrin Pilkey and Linda Pilkey-Jarvis have written a book about modelling called “Useless Arithmetic: Why Environmental Scientists Can’t Predict the Future.” While not specifically about climate modelling, they survey many futile attempts to model natural processes: predicting cod fishery yields, environmental impact statements, climate forecasts, beach erosion, Yucca Mountain drainage, and more. Such models often involve approximations or guesstimates. They may also depend strongly on initial values which are poorly known but have a strong influence on the outcome. Worse, “adjustments” may be required to make them correspond to a reality that evades calculation, and these “adjustments” are nothing more than fudge factors. The models are opaque to their users, so political pressure can be, and has been, exerted to get the “right” answer, which is then passed off as “scientific” fact. One result is that we have no more codfish in the North Atlantic Ocean. The Pilkeys conclude that none of these models can be trusted to give quantitative results. To them the modeling of natural processes is simply “useless arithmetic,” which becomes a hundred times more so when projected a century into the future.

  41. There is a simple matter that demands clarity but that climate science has refused to address from the get-go. John Costella asks the key question: is a tree really a good thermometer?

  42. Antonio (AKA "Un físico")

    “The issue of model complexity raised by Knutti is an important one; I have a draft post on this topic awaiting publication of the relevant paper.”
    Anyone can see that complex reality (finance, for example) is harder to model than simpler reality (traffic-jam prevention, say). I have a draft at:
    https://docs.google.com/file/d/0B4r_7eooq1u2VHpYemRBV3FQRjA
    that will never be formally published, but that explains why climate models are not trustworthy. The main reason is not the complexity of climate but the time-scale point RC5 in that draft.
    If Judith Curry is interested, she could add a new post on her blog about this time-scale issue in climate-change models. In that case, I will be commenting.

  43. “How far should we trust models?”

    This topic is just another example of the not-science of the Global Warming Discussion.

    Andrew

  44. Deserves another printing:
    rgbatduke says:
    June 13, 2013 at 7:20 am

    Saying that we need to wait for a certain interval in order to conclude that “the models are wrong” is dangerous and incorrect for two reasons. First — and this is a point that is stunningly ignored — there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!

    This is reflected in the graphs Monckton publishes above, where the AR5 trend line is the average over all of these models and in spite of the number of contributors the variance of the models is huge. It is also clearly evident if one publishes a “spaghetti graph” of the individual model projections (as Roy Spencer recently did in another thread) — it looks like the frayed end of a rope, not like a coherent spread around some physics supported result.

    Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they were uncorrelated random variates causing deviation around a true mean!

    Say what?

    This is such a horrendous abuse of statistics that it is difficult to know how to begin to address it. One simply wishes to bitch-slap whoever it was that assembled the graph and ensure that they never work or publish in the field of science or statistics ever again. One cannot generate an ensemble of independent and identically distributed models that have different code. One might, possibly, generate a single model that generates an ensemble of predictions by using uniform deviates (random numbers) to seed “noise” (representing uncertainty) in the inputs.

    What I’m trying to say is that the variance and mean of the “ensemble” of models are completely meaningless, statistically, because the inputs do not possess the most basic properties required for a meaningful interpretation. They are not independent, their differences are not based on a random distribution of errors, and there is no reason whatsoever to believe that the errors or differences are unbiased (given that the only way humans can generate unbiased anything is through the use of e.g. dice or other objectively random instruments).

    So why buy into this nonsense by doing linear fits to a function — global temperature — that has never in its entire history been linear, although of course it has always been approximately smooth so one can always do a Taylor series expansion in some sufficiently small interval and get a linear term that — by the nature of Taylor series fits to nonlinear functions — is guaranteed to fail if extrapolated as higher order nonlinear terms kick in and ultimately dominate? Why even pay lip service to the notion that R^2 or p for a linear fit, or for a Kolmogorov-Smirnov comparison of the real temperature record and the extrapolated model prediction, has some meaning? It has none.

    Let me repeat this. It has no meaning! It is indefensible within the theory and practice of statistical analysis. You might as well use a ouija board as the basis of claims about the future climate history as the ensemble average of different computational physical models that do not differ by truly random variations and are subject to all sorts of omitted variable, selected variable, implementation, and initialization bias. The board might give you the right answer, might not, but good luck justifying the answer it gives on some sort of rational basis.

    Let’s invert this process and actually apply statistical analysis to the distribution of model results Re: the claim that they all correctly implement well-known physics. For example, if I attempt to do an a priori computation of the quantum structure of, say, a carbon atom, I might begin by solving a single electron model, treating the electron-electron interaction using the probability distribution from the single electron model to generate a spherically symmetric “density” of electrons around the nucleus, and then performing a self-consistent field theory iteration (resolving the single electron model for the new potential) until it converges. (This is known as the Hartree approximation.)

    Somebody else could say “Wait, this ignores the Pauli exclusion principle and the requirement that the electron wavefunction be fully antisymmetric.” One could then make the (still single-electron) model more complicated and construct a Slater determinant to use as a fully antisymmetric representation of the electron wavefunctions, generate the density, and perform the self-consistent field computation to convergence. (This is Hartree-Fock.)

    A third party could then note that this still underestimates what is called the “correlation energy” of the system, because treating the electron cloud as a continuous distribution through which electrons move ignores the fact that individual electrons strongly repel and hence do not like to get near one another. Both of the former approaches underestimate the size of the electron hole, and hence they make the atom “too small” and “too tightly bound”. A variety of schemes are proposed to overcome this problem — using a semi-empirical local density functional being probably the most successful.

    A fourth party might then observe that the Universe is really relativistic, and that by ignoring relativity theory and doing a classical computation we introduce an error into all of the above (although it might be included in the semi-empirical LDF approach heuristically).

    In the end, one might well have an “ensemble” of models, all of which are based on physics. In fact, the differences are also based on physics — the physics omitted from one try to another, or the means used to approximate and try to include physics we cannot include in a first-principles computation (note how I sneaked a semi-empirical note in with the LDF; although one can derive some density functionals from first principles (e.g. the Thomas-Fermi approximation), they usually don’t do particularly well because they aren’t valid across the full range of densities observed in actual atoms). Note well, doing the precise computation is not an option. We cannot solve the many-body atomic state problem in quantum theory exactly any more than we can solve the many-body problem exactly in classical theory, or the set of open, nonlinear, coupled, damped, driven, chaotic Navier-Stokes equations in a non-inertial reference frame that represent the climate system.

    Note well that solving for the exact, fully correlated nonlinear many electron wavefunction of the humble carbon atom — or the far more complex Uranium atom — is trivially simple (in computational terms) compared to the climate problem. We can’t compute either one, but we can come a damn sight closer to consistently approximating the solution to the former compared to the latter.

    So, should we take the mean of the ensemble of “physics based” models for the quantum electronic structure of atomic carbon and treat it as the best prediction of carbon’s quantum structure? Only if we are very stupid or insane or want to sell something. If you read what I said carefully (and you may not have — eyes tend to glaze over when one reviews a year or so of graduate quantum theory applied to electronics in a few paragraphs, even though I left out perturbation theory, Feynman diagrams, and ever so much more :-) you will note that I cheated — I slipped in a semi-empirical method.

    Which of these is going to be the winner? LDF, of course. Why? Because the parameters are adjusted to give the best fit to the actual empirical spectrum of carbon. All of the others are going to underestimate the correlation hole, and their errors will be systematically deviant from the correct spectrum. Their mean will be systematically deviant, and by weighting Hartree (the dumbest reasonable “physics based” approach) the same as LDF in the “ensemble” average, you guarantee that the error in this “mean” will be significant.

    Suppose one did not know (as, at one time, we did not know) which of the models gave the best result. Suppose that nobody had actually measured the spectrum of carbon, so its empirical quantum structure was unknown. Would the ensemble mean be reasonable then? Of course not. I presented the models in the way physics itself predicts improvement — adding back details that ought to be important that are omitted in Hartree. One cannot be certain that adding back these details will actually improve things, by the way, because it is always possible that the corrections are not monotonic (and eventually, at higher orders in perturbation theory, they most certainly are not!) Still, nobody would pretend that the average of a theory with an improved theory is “likely” to be better than the improved theory itself, because that would make no sense. Nor would anyone claim that diagrammatic perturbation theory results (for which there is a clear a priori derived justification) are necessarily going to beat semi-heuristic methods like LDF, because in fact they often do not.

    What one would do in the real world is measure the spectrum of Carbon, compare it to the predictions of the models, and then hand out the ribbons to the winners! Not the other way around. And since none of the winners is going to be exact — indeed, for decades and decades of work, none of the winners was even particularly close to observed/measured spectra in spite of using supercomputers (admittedly, supercomputers that were slower than your cell phone is today) to do the computations — one would then return to the drawing board and code entry console to try to do better.

    Can we apply this sort of thoughtful reasoning to the spaghetti snarl of GCMs and their highly divergent results? You bet we can! First of all, we could stop pretending that “ensemble” mean and variance have any meaning whatsoever by not computing them. Why compute a number that has no meaning? Second, we could take the actual climate record from some “epoch starting point” — one that does not matter in the long run, and we’ll have to continue the comparison for the long run because in any short run from any starting point noise of a variety of sorts will obscure systematic errors — and we can just compare reality to the models. We can then sort out the models by putting (say) all but the top five or so into a “failed” bin and stop including them in any sort of analysis or policy decisioning whatsoever unless or until they start to actually agree with reality.

    Then real scientists might contemplate sitting down with those five winners and meditate upon what makes them winners — what makes them come out the closest to reality — and see if they could figure out ways of making them work even better. For example, if they are egregiously high and diverging from the empirical data, one might consider adding previously omitted physics, semi-empirical or heuristic corrections, or adjusting input parameters to improve the fit.

    Then comes the hard part. Waiting. The climate is not as simple as a Carbon atom. The latter’s spectrum never changes, it is a fixed target. The former is never the same. Either one’s dynamical model is never the same and mirrors the variation of reality or one has to conclude that the problem is unsolved and the implementation of the physics is wrong, however “well-known” that physics is. So one has to wait and see if one’s model, adjusted and improved to better fit the past up to the present, actually has any predictive value.

    Worst of all, one cannot easily use statistics to determine when or if one’s predictions are failing, because damn, climate is nonlinear, non-Markovian, chaotic, and is apparently influenced in nontrivial ways by a world-sized bucket of competing, occasionally cancelling, poorly understood factors. Soot. Aerosols. GHGs. Clouds. Ice. Decadal oscillations. Defects spun off from the chaotic process that cause global, persistent changes in atmospheric circulation on a local basis (e.g. blocking highs that sit out on the Atlantic for half a year) that have a huge impact on annual or monthly temperatures and rainfall and so on. Orbital factors. Solar factors. Changes in the composition of the troposphere, the stratosphere, the thermosphere. Volcanoes. Land use changes. Algae blooms.

    And somewhere, that damn butterfly. Somebody needs to squash the damn thing, because trying to ensemble-average a small sample from a chaotic system is so stupid that I cannot begin to describe it. Everything works just fine as long as you average over an interval short enough that you are bound to a given attractor, oscillating away, things look predictable — and then, damn, you change attractors. Everything changes! All the precious parameters you empirically tuned to balance out this and that for the old attractor suddenly require new values to work.

    This is why it is actually wrong-headed to acquiesce in the notion that any sort of p-value or R-squared derived from an AR5 mean has any meaning. It gives up the high ground (even though one is using it for a good purpose, trying to argue that this “ensemble” fails elementary statistical tests). But statistical testing is a shaky enough theory as it is, open to data dredging and horrendous error alike, and that’s when it really is governed by underlying IID processes (see “Green Jelly Beans Cause Acne”). One cannot naively apply a criterion like rejection if p < 0.05, and all that means under the best of circumstances is that the current observations are improbable given the null hypothesis at 19 to 1. People win and lose bets at this level all the time. One time in 20, in fact. We make a lot of bets!

    So I would recommend — modestly — that skeptics try very hard not to buy into this and redirect all such discussions to questions such as why the models are in such terrible disagreement with each other, even when applied to identical toy problems that are far simpler than the actual Earth, and why we aren’t using empirical evidence (as it accumulates) to reject failing models and concentrate on the ones that come closest to working, while also not using the models that are obviously not working in any sort of “average” claim for future warming. Maybe they could hire themselves a Bayesian or two and get them to recompute the AR curves, I dunno.

    It would take me, in my comparative ignorance, around five minutes to throw out all but the best 10% of the GCMs (which are still diverging from the empirical data, but arguably are well within the expected fluctuation range on the DATA side), sort the remainder into top-half models that should probably be kept around and possibly improved, and bottom half models whose continued use I would defund as a waste of time. That wouldn’t make them actually disappear, of course, only mothball them. If the future climate ever magically popped back up to agree with them, it is a matter of a few seconds to retrieve them from the archives and put them back into use.

    Of course if one does this, the GCM-predicted climate sensitivity plunges from the totally statistically fraudulent 2.5 C/century to a far more plausible and still possibly wrong ~1 C/century, which — surprise — more or less continues the post-LIA warming trend with a small possible anthropogenic contribution. This large a change would bring out pitchforks and torches as people realize just how badly they’ve been used by a small group of scientists and politicians, how much they are the victims of indefensible abuse of statistics to average in the terrible with the merely poor as if they are all equally likely to be true with randomly distributed differences.

    rgb

  45. Modelers deal with them by putting in simplifications and approximations that they refer to as parameterization. They work hard at tuning parameters to make them more realistic,

    That is a curve fit and not really a model.

    • A point I have made many times. However, calling fitting ‘tuning’ and calling predictions ‘projections’ is considered enough to move them from being fits into being models.

    • You’re not dealing with a “curve fit” when feeding in white noise gives you a ‘hockey stick.’

    • Models often use curve fits. In fact some of the most reliable models use curve fits.

    • @steve Mosher

      ‘some of the most reliable models use curve fits’

      Specific examples, including measures of reliability, would help the general understanding of your point.

    • Curve fits are models: they’re statistical models. They’re used all the time in about every branch of science, + economics.

      Most scientific models are a hybrid of physical and statistical models, with some parameters obtained by curve-fitting. I mean really, that has to be true, unless you’re solving Schrodinger’s equations at every time step.

    • For example, applying a curve to current solar data tells us our sun just died.

    • @wagathon

      Well its certainly dark now here in UK. So maybe you’re right!

    • Listen to me and I can bring it back to life. Well, I can bring it up over the Eastern Horizon tomorrow morning, at least. You’re on your own after that.
      =============

    • Latimer.

      Sure, a simple example. When a missile comes off the rail of an aircraft it will weathervane into the relative wind. This is known as missile tip-off.

      you can see some details here

      http://www.dtic.mil/dtic/tr/fulltext/u2/285042.pdf

      Now, when you are building a model (MLE) to predict whether or not you should launch a missile, you want to know if you might have a tip-off problem;
      that is, will the missile tip off so much that it loses lock on the target? In short, as you pull your nose onto the target, the missile will tip off and move away from the target in the first few milliseconds of flight. If it tips off too much, you will lose lock at launch. And now that aircraft can fly at higher AOA (angle of attack), tip-off is even more pronounced.

      here is some background on MLE
      http://www.cs.odu.edu/~mln/ltrs-pdfs/NASA-94-tm109057.pdf

      Now, while we could calculate the precise angle of tip-off, the time this takes would render the calculation moot, as the conditions would change during the calculation. Since the response is generally a second-order lagged response with some overshoot, we can fit the detailed response with a variety of parametric curves that can be implemented as a quick look-up.
      They are not exact, but you don’t have time for an exact solution.

      There are thousands of examples like this.

    • @Kim

      That’s great, thanks. Please arrange for the reappearance of the Sun in UK tomorrow.

      BTW – do I need to do any rituals to make this happen? Sacrifice my first-born? Erect a few windmills? Send money to Greenpeace or FoE?

      Or does it all happen sort of automagically obviating the need for all the dumb religious type faith thingies? I only have so many first-born to spare and I’m running low since the global cooling scare of thirty years ago.

      Wouldn’t it be nice if the Climate Gods made their frigging minds up rather than being so capricious?

    • Models often use curve fits. In fact some of the most reliable models use curve fits.

      A curve fit is never a model.

      They call it that but that does not make it true.

      You can call a “reliable curve fit” a model, but it is still a curve fit and not a real model. It is a curve fit that got tweaked to match data. That is not a model.

      Models are not tweaked or tuned. If you tweak and/or tune, it is not a model.

    • @steven mosher

      Thanks for that. A good example. Now I understand your point.

    • The classics teach us
      the gods are capricious,*
      climate models likewise.

      Beth-the-serf.

      *Serfs understand capriciousness.

  46. Models often use curve fits. In fact some of the most reliable models use curve fits.

    Models are way too skinny these days! They look like they haven’t eaten for weeks…. years!!!!! They don’t look like real women anymore! I wish we saw more full-figured models on the runways….

    Oh… We’re talking about computer models???

    Never mind.

  47. Throw out the models and climatism becomes a nascent ethical-sociopolitical teaching.

  48. So many use generic arguments to prove climate models useless. Such proofs are seldom valid. Generic arguments may point to limitations and warn of potential pitfalls, but in most cases more specific considerations are needed to draw valid conclusions about the usefulness of a particular model in a particular application.

    • Pekka, “but in most cases more specific considerations are needed to draw valid conclusions on the usefulness of a particular model in a particular application.”

      True, but what is the current “particular application” of the model ensemble mean? Since that application is leading to the selection of individual models instead of the ensemble mean, what is the new “particular application”? And when that fails, what will the next one be?

    • The long Robert G. Brown quote posted above gives pretty specific critiques and suggestions about how to evaluate and improve climate simulations:

      1. Stop relying on simple correlation of simulation output and empirical time series because the underlying process is driven by turning points, regime switches, etc., and models tuned to correlate within one regime will fail to predict these important turning points.
      2. Stop treating averages across simulators as statistical samples or populations because the differences among these are not random i.i.d. variables.
      3. Toss out all but the best simulators and try to improve those. To be consistent with 1, the determination of “best” would have to be by means other than simple time-series correlation with the instrumental record. Presumably, we would look for things like responses to known forcings, fits with spatial correlations across regimes, and other higher-order properties.

      Do you see anything wrong with this agenda, which seems to deviate from current practice?

    • “but in most cases more specific considerations are needed to draw valid conclusions on the usefulness of a particular model in a particular application.”

      Specific considerations such as they don’t work?

      As an engineer, I take considerations like that very seriously, because if it don’t work, I don’t get paid – or worse.

      Funny thing: my clients tend to object to smoking craters. Perhaps it’s different in the climate-model industry.

    • The first decade of the new millennium falsified GCMs.

    • Ensemble mean is clearly not an AOS model. It might be considered a special kind of model for the expectation values and some other statistical indicators, but it’s not at all a model for the dynamic behavior.

      I would not call it a model, only a method for getting some values. How useful it is for that is, again, something that has no generic answer.

    • Stevepostrel,

      The quote of Robert G. Brown was too long, and didn’t seem interesting enough to justify that length.

      Concerning the points you picked from there

      1) It’s clear that climate models should not be tuned to fit historical data that’s strongly affected by internal variability.

      2) Ensemble averages over many models should not be considered to have statistical significance similar to that of an appropriately created ensemble from a single model. Even an ensemble of runs from a single model is no better without good justification of its statistical representativeness.

      Non-representative averages may have some value, but that’s certainly not obvious.

      3) Better success in early tests is not at all a guarantee that a model is more suitable as a basis for later development. Thus I don’t agree on this point.

    • Curious George

      Pekka, could you please provide a list of physical/chemical/biological factors which play different roles in weather models versus climate models? For me, a weather model is falsifiable. A climate model usually hides behind a long-term perspective (“I can tell you confidently that the world will be hot in 2100, but I can’t tell you anything about the weather next week”). I have an uncomfortable feeling that my tax money is used to pay charlatans.

    • @ Curious George

      ” I have an uncomfortable feeling that my tax money is used to pay charlatans.”

      Will you be less uncomfortable once you accept the obvious and replace ‘feeling’ with ‘certainty’?

  49. Walt Allensworth

    As a humble physicist I can only say that we should trust models only as far as they provide reasonable results in predicting the real world.

    F=ma, while ultimately wrong in a relativistic world, does a fine job predicting many things to first order – so it’s useful. For example, paths of bullets out to a few hundred yards, or how fast your car does 0-60mph.

    Say we didn’t know that F=m*a, say that we only knew that F=m*k, where k is some unknown constant. We could do some experiments and estimate k, and that would be useful, probably, to first order. HOWEVER, what we just did there was abandon a scientific method of proposing a theory of how the world works, and we punted back to “curve fitting” to simply get a proportionality constant that roughly models the real world. BIG DIFFERENCE.

    It seems to me that this is exactly what the climate science community has done. In their case, to first order, they are saying something like T = T0 + {%CO2}*k, and now they are curve-fitting to get an estimate of k (the temperature sensitivity to CO2) in order to predict the future value of T. They call this a “climate model” but it’s far more ‘stupid’ than that: it’s simply curve-fitting without really having the foggiest idea what basic physics underlies the model. In AR5 this is freely admitted. Our knowledge of the sensitivity of global temperature to {%CO2} has not improved despite $100 BILLION spent over the last 30 years.

    What we seem to have learned in the last 30 years is that Hansen and many dozens of other preeminent climate modelers don’t have a clue as to what is going on. If they did, they would not give a nebulous range for ‘k’ of 1.5 to 4.5 degrees per CO2 doubling. If things were truly this simple they would give a VALUE for k, based on empirical data, and we’d be done.

    Things are just not this simple, though the liberal left has tried to boil the whole global warming thing down to this: T=T0+{%CO2}*k.

    Take for example the differing male vs. female stimulus/response models, described well in the linked photo…

    http://www.pcbdesign007.com/articlefiles/48180-fig%2025-01.jpg

    I submit that the climate behaves like a woman, and not like a man, and to quote Einstein, “Not everything that can be counted counts, and not everything that counts can be counted.”

    • Walt, you write “If things were truly this simple they would give a VALUE for k, based on empirical data, and we’d be done.”

      I agree completely with most of the substance of what you have written. But the piece I have copied is an exception. The problem the warmists faced, ab initio, was that it is simply impossible to actually measure the value of climate sensitivity, your “k”. We cannot do controlled experiments on the earth’s atmosphere. So they pulled a fast one, and pretended that it is not necessary to actually measure climate sensitivity; that they can use meaningless, hypothetical estimations and the output of non-validated models instead. If this is obvious to me, it must have been obvious to the scientists who started CAGW. That is why I am convinced that they did know, and that they have perpetrated a hoax.

      This is the utterly non-scientific approach that they have used. Unfortunately they managed to convince just about all the learned scientific societies, led by the Royal Society and the American Physical Society, and that is why we are in the mess in which we currently find ourselves.

      Now the problem is that the Royal Society, and the rest of The Team are deep in a hole and refuse to stop digging. How we resolve this mess I have no idea. Hopefully Mother Nature will soon provide the requisite empirical data to show that CAGW is wrong. We can always hope.

  50. We’ve learned a few things, and nothing of what we have learned has come to us from modeling. For example, we know that increases in atmospheric CO2 do not cause runaway global warming. We know that the Earth’s atmosphere does not act like a greenhouse at all. We know that humanity needs more energy, not less. We know that global warming is a Left-vs.-right issue, which means it has nothing at all to do with science. We know that the ME warm and LIA cold periods really, really did exist. We know that natural variation exists and that climate always changes. We know that the self-professed climatologists of Western academia are not interested at all in the future well-being of polar bears or unemployed Americans or Africa’s poor or glaciers or ice on the North Pole, or even in what they’ve done to help tear down the credibility of science. We know that no one knows the future.

  51. David Springer

    Best answer:

    As far as we can throw them.

  52. Lauri Heimonen

    Judith Curry:

    “Finally, there are the processes that aren’t well-understood — climate modelling is rife with these. Modellers deal with them by putting in simplifications and approximations that they refer to as parameterisation. They work hard at tuning parameters to make them more realistic, and argue about the right values, but some fuzziness always remains.” (Jon Turney)

    Climate models are mere tools. To reach a working solution to any complicated climate problem, these tools ought to be properly used. At present the complexity of climate change appears to make the model calculations adopted by the IPCC inadequate for reaching any working solution to mitigate climate warming. There are obvious faults both in the models themselves and in the parameters used. And even if the models and the other parameters were fine, as long as the recent long-term increase of CO2 content in the atmosphere is assumed to be controlled by anthropogenic CO2 emissions, the simulations carried out would remain inadequate as a basis for actions to mitigate climate warming.

    In my comment at http://judithcurry.com/2013/12/13/week-in-review-8/#comment-425381 I have argued that the mere recent increase of anthropogenic CO2 emissions can raise atmospheric CO2 by only 0.005 ppm per year. The recent total CO2 increase in the atmosphere has instead been about 2 ppm per year, of which, according to natural laws, the anthropogenic share is 0.08 ppm; the total increase of CO2 content in the atmosphere has been caused by warming of the global sea surface, especially in the areas where the sea-surface CO2 sinks are.

    One has to understand that the CO2 level in the atmosphere depends jointly on all CO2 sources and all CO2 sinks. The CO2 sinks determine how much CO2 from the total CO2 emissions stays in the atmosphere. The anthropogenic share of the total CO2 emissions determines the anthropogenic share of the total increase of CO2 in the atmosphere. For instance, as the anthropogenic share of recent total CO2 emissions has been about 2%, the anthropogenic share of the recent total increase of CO2 content in the atmosphere has likewise been about 2%. This means that the anthropogenic share of the recent total atmospheric increase of 2 ppm CO2 a year has been only 0.08 ppm CO2 a year.

    IPCC scientists have tried for 25 years to find evidence for AGW, but they have not managed to do it. Because the long-term increases of CO2 content follow warming and not vice versa, and because the anthropogenic share of the recent long-term CO2 increase in the atmosphere has been minimal, the CO2 emissions from fossil fuels cannot have been any essential cause of the recent global warming.

    • Big Al’s got himself the monopoly for spreading misinformation!

      (But, hey, he already got an Oscar and a Nobel for doing it.)

  53. Curious George

    There is a distinction between models and experiments. In an experiment, we prepare a simplified reality and then ask Mother Nature: What would you do in this situation? A modeler assumes the role of God and never asks anybody.

  54. OK all, how about this for a test.
    The transition between hindcast and forecast is day zero.
    If the models have any possibility of being true, it follows that the population distribution of model-minus-real errors in the hindcast should have the same properties as in the forecast. Thus, day 0 represents a mirroring point where we can interrogate the model forecast from day 0 to day n against the model hindcast from day 0 back to day minus n. The hindcast acts as an internal control for the forecast: the population distributions of the errors (model minus reality) should have the same mean and variance if the models forecast as well as they hindcast.
    If the model errors are statistically different, forecast versus hindcast, the model is falsified.

    • Too simple! Take a spring-mass system with friction. Observe it for a moment to obtain parameters that allow a DE model to fit the observed motion. Because of the friction, the future motion will damp out errors relative to the past.

    • Doc, please clarify this for me? Are we supposed to regard observations during the hindcast period as “unassimilated,” that is, not used in the estimation of any model parameters requiring estimation?

      People use ‘hindcast’ in different ways.

  55. I just read the full article by Jon Turney. It’s well written, and on this reading I didn’t notice anything that I would disagree with. The article does not, however, give a full picture of the important issues related to modeling — giving a full picture in one article would probably be impossible. I didn’t notice in the article anything significant and relevant to climate models that has not been discussed many times on this site, but a good article is worth reading even when it repeats points made before.

    One point brought up by Reto Knutti was about making the models more and more complex and all-encompassing. One may well question whether that is a good direction. It’s certainly nice when a model includes all phenomena that affect climate over a very long period: one model run then also considers the interaction between melting ice sheets and the atmosphere, or changes in vegetation, and nothing that has been modeled will be forgotten. That’s good, however, only if the subsystems and their interactions are modeled correctly; including too many details in one model adds to the risk that something is wrong, and in an unfortunate case two errors may amplify each other. Some of the submodels may be considerably more uncertain than others, and the worst components may spoil the outcome.

    It might, after all, be better to have someone look at the information produced by one submodel before it’s fed to another part of the model system. The hugely different time scales often make that possible, and handling those different time scales is an additional problem in putting the overall model together as one model.

    The interaction of the oceans with the atmosphere is certainly so important and complex that handling both in the same model is very useful, but the uncertainties related to that interaction, and to the resulting behavior of the oceans, are among the main weaknesses of large models.

    =======

    My response to the title of the post repeats what I have written a couple of times before.

    I trust that models are useful and help scientists greatly in learning about the Earth system. They are good enough for that. I wouldn’t, however, place similar trust in specific numbers obtained as output from model runs. I would prefer final results that have been studied further by competent modelers who then give their interpretation. They should also report the raw model output, but not as the final result. If their personal judgement is that a model result is biased, they should not hesitate to say so. Presenting model results only as they come out is the wrong kind of objectivity, because it’s known that no model describes the real world faithfully.

    Good modelers know the most likely biases, and they often have a fair feeling for the uncertainties. They should state all of that openly, and in a way that adds to the trust in their quest for objectivity.

    • Your platitudes cover a multitude of sins.
      ==========

    • Should we factor in polar bear births and deaths?

    • Curious George

      Pekka: Unfortunately, climate is a very complex system. If we get lucky, it may be possible to model a subset of it, thus simplifying the task. However, I don’t believe that there is a subset which gives consistently correct answers over decades while giving consistently incorrect answers over weeks. I am all for finding a good simplified subsystem to model: but let’s see how good it is at weather forecasting.

    • Curious George,

      I don’t believe that models have any difficulties in predicting climate for many weeks, they cannot predict weather, but that’s a different issue.

      • Pekka,

        I don’t believe that models have any difficulties in predicting climate for many weeks, they cannot predict weather, but that’s a different issue.

        If you believe that being right means more than just getting the aggregate global average temperature close, they fail even the low bar you’ve set.

    • “I don’t believe that models have any difficulties in predicting climate for many weeks.”

      More explanation is never going to help here.

    • The Threadwinnah, and still Champeen, Pekka Pirila.
      ============

    • Curious George

      Pekka, of course there are models — for example, the famous radiative model of an N2-O2-CO2 atmosphere above a black body with perfect heat conductivity and zero heat capacity — which can’t predict weather at all. That model is very useful for illustrating the principle of the greenhouse effect, but I don’t trust its predictions for our planet; clouds, for example, play a major role. I think we agree that a good climate model should be based on physics, maybe chemistry and biology, and not be oversimplified. For me, an ability to model weather is crucial; “studies [of final results] by competent modelers” are no substitute for a verdict by Mother Nature.

  56. Does anybody know how climate scientists decide that their models are “working”? How many models have been discarded/corrected/tweaked because they didn’t show the requisite warming? Is there a particular climate-science procedure for weeding out the models that run too cool?

    • Fyfe’s findings are based on a study of 117 GCM simulations over a 20-year period, comparing model predictions to the observed rate of warming. From 1993 to 2012 the “global mean surface temperature… rose at a rate of 0.14 ± 0.06 °C per decade,” and the observed warming over the last 15 years of the period was “not significantly different from zero.” GCMs, however, simulated a “rise in global mean surface temperature of 0.30 ± 0.02 °C per decade.” Compared to the actual rate of warming, the simulated rate was more than double; over the last 15 years, simulations ran more than four times higher than observations. Needless to say, the “null hypothesis that the observed and model mean trends are equal” is rejected: statistically, there is but a 1-in-500 chance these GCMs are actually looking at the same planet we live on.

      [See Fyfe, J.C., et al., “Overestimated global warming over the past 20 years,” Nature Climate Change, Vol. 3 (Sept. 2013).]

    • PCMDI at Lawrence Livermore lab runs an ongoing comparison of lots of models, with backup databases. It is up to each modeler to evaluate his own work and tweak or junk the model. The comparisons don’t seem to affect which models get used in IPCC work.
      Scott

  57. With respect to climate change, all models which project any warming due to CO2 can be ignored, since there is no evidence that any warming has ever occurred due to greenhouse gasses. It is all a hypothesis, disproven by the facts. Can anyone dispute this?

    • Lauri Heimonen

      You are right! Look for instance at my comment above!

    • David Springer

      Not ignored. Fixed. Climate models without CO2 warming cannot replicate 1979-1999. Climate models with CO2 warming cannot replicate 1999-present.

      The prime suspect causing the error remains failure to model the water cycle well enough. If the earth were an arid rock with unchanging albedo, it would be a far simpler proposition for a numeric model. The earth is a water world where all three phases of H2O can exist at once. The capacity of H2O to carry latent and sensible heat both horizontally and vertically is a huge complication. The quite drastic changes in albedo between ice, ocean, and cloud add a great deal more complexity. Variations in insolation, both in total power and in power spectrum, add even more unpredictability. What we need to do is not try to predict what will happen in the future, because that’s quite likely beyond our ability for the foreseeable future, but rather develop the capacity to deal with regional and global climate change whichever way the cookie crumbles.

  58. Mosh keeps claiming that planes are built with false models, but no plane was ever built and flown based simply on a model (and the current finite-element models HAVE been tested exhaustively). There are wind tunnels and test flights and 100 years of experience building planes, including many that fell apart or fell down. The Wright brothers’ greatest invention was the wind tunnel, so they could test their wing designs and controllers, which is why they didn’t die during testing. Likewise, the claim that GCMs are “just physics” is not true — there are many aspects of these models that are empirical or a “parameterization” [i.e. a kludge]. Consider clouds, for example. There is no “physics” such as a “law of clouds” which gives precise results for the formation, size, movement, optical reflection, etc. of clouds in response to weather. Yet clouds can give positive or negative feedback to climate depending on small changes in their behavior (as Spencer has noted).

    • True. The academics’ GCMs “are extremely oversimplified,” says Freeman Dyson. “They don’t represent the clouds in detail at all. They simply use a fudge factor to represent the clouds.”

    • Simply untrue.

      “Mosh keeps claiming that planes are built with false models, but no plane was ever built and flown based simply on a model (and the current finite element models HAVE been tested exhaustively). There are wind tunnels and test flights and 100 yrs of experience building planes including many that fell apart or fell down.”

      1. FEA is limited.
      2. FEA predictions are all wrong, i.e. imperfect.
      3. Wind tunnel tests are all wrong, i.e. imperfect.
      4. Flight test is limited, and one doesn’t test every aspect of the plane.

      The point is simple. All models are wrong, imperfect. All testing is imperfect and incomplete.

      Example: the F/A-18 was designed to have a certain level of survivability during combat. What was that level? How was it tested?

      You don’t know. But I do. The survivability was “tested” with a model.
      Live fire testing was extremely limited because shooting up millions of dollars of planes is not cool.

      Example: crew escape systems. Tell me how they are designed and tested. You can’t.

    • Craig

      The best example of a model that is consistently wrong yet effective is a missile-guidance model. The model represents both the path of the target and the position of the missile. It is consistently wrong, but instead of falsifying the model, the model is set up to feed the prediction error back into itself to change the model and improve the prediction. The model continues to be wrong up until the end, at which time the missile explodes. At that point, if it is not too wrong, the target is destroyed. It is wrong all along, but right enough to get the job done.

      In fact, because of the time constraints, sometimes you forgo the better model because of the processing time.

      The issue with GCMs is that nobody has defined what it takes to blow up a target. How right do they need to be to get the job done, and what is the job?

      Predicting the future exactly is NOT the job.

    • “Live fire testing was extremely limited because shooting up millions of dollars of planes is not cool.”

      Yes, generally models are used to lower costs. But if you are just playing around with models, then it’s not about lowering costs; it’s more about entertainment. Models are also used in film-making to lower costs, and such models can be played with too.

      So, other than as forms of entertainment, models are used in some way to lower cost. One could model an atmosphere so that one doesn’t have to take multiple, costly measurements, for example.

      But have climate models reduced costs?

      Here is a nice picture of how proportional navigation works. The model starts out seriously wrong, over-leading the target. At this stage nobody suggests “falsifying” the model; basically you feed the errors back to improve the solution.

      You could think of climate science the same way, except that they are not feeding back the errors. It is more a problem of HOW they are responding to their imperfect predictions RATHER THAN the fact that a model made an imperfect prediction. That is because they oversold models as truth engines. They are not.

      http://media.moddb.com/images/articles/1/86/85205/auto/1_1_lead_vs_pure.jpg

      • You could think of climate science the same way, except that they are not feeding back the errors. It is more a problem of HOW they are responding to their imperfect predictions RATHER THAN the fact that a model made an imperfect prediction. That is because they oversold models as truth engines. They are not.

        Steven, I fully agree with this. To take it one step further: I don’t think they provide a valid projection for a world of increasing CO2, but they are sold as if they do. That is my issue with climate science: they took a hypothesis and tried to turn it into policy, and the scientists who should know better didn’t protest. And not just any policy, but one that would turn the entire world’s economy on its head.
        I find this the height of arrogance and stupidity. If they hadn’t done this, I’d consider the matter much as I do dark matter: something to spend a little money on and go figure out. If they hadn’t done this, I wouldn’t care if people made up data for places that are never actually measured :)

    • “All models are wrong” is a 100% content-free statement. It is a straw man dressed up as a truism.

      All political systems are imperfect. That does not prevent us (or at least some of us) from knowing with absolute certainty that fascism and communism are horrible political systems.

      GCMs are not just “wrong” in the metaphysical sense. They are badly wrong in the one area for which they have been advertised as being useful – predicting GAT 10, 50 and 100 years in the future.

      And describing something as “useful” is useless unless you define useful for what.

      GCMs are useless at predicting future global average temperatures as the basis for massive policy changes.

    • They are badly wrong in the one area for which they have been advertised as being useful – predicting GAT 10, 50 and 100 years in the future.

      Wow! I guess that GCMs have been around a lot longer than I thought.

    • “…the best example of a model that is consistently wrong yet effective is a missile guidance model.”

      I’d guess that in most cases it turns into a tail chase, rather than attempting something like a 90-degree intersection hit. Perhaps that is what the GCMs are trying to do: predicting, while the Zen approach is to follow. The missile guidance model is told to follow, not to know what the target will do. The model may be made with an understanding of what the target can do – for instance its turn radius, so that the missile cannot be out-turned. The target is bounded by assumptions, or can be. We’d judge the model with field tests and look to see if the missiles do in fact intersect the targets at an acceptable rate.

    • GaryM:

      “All models are wrong” is a 100% content-free statement. It is a straw man dressed up as a truism.

      It’s also wrong. While we cannot create models which perfectly describe our universe, we are quite capable of creating models which describe aspects of our universe in sufficient detail for some purpose or another.

      Suppose we want to know whether a force will be positive or negative along some axis. A model created for this problem would need only give + or – as an answer. Such a model could be very crude yet still be perfectly accurate. That would mean it perfectly describes the scope it covers, hence not “wrong” at all.

      Additionally, while we may not be able to model our physical reality precisely, there are other things we may want to model. For example, one might want to model how data flows through a program or calculation. There is no particular reason that couldn’t be done precisely.
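      A minimal sketch of such a sign-only model (a hypothetical balance-beam example, not from the comment above): it ignores nearly everything about the physics, yet within the narrow scope it was built for it is exactly right.

      ```python
      # A deliberately crude model that is exactly right within its scope:
      # it answers only "which pan of an equal-arm balance goes down?"
      # Hypothetical example for illustration.

      def force_sign(m1: float, m2: float) -> int:
          """+1 if the left pan (mass m1) wins, -1 if the right pan
          (mass m2) wins, 0 if balanced."""
          if m1 > m2:
              return +1
          if m1 < m2:
              return -1
          return 0

      # The model ignores lever arms, friction, air buoyancy and so on,
      # yet for the narrow question it is scoped to, it is not "wrong" at all.
      assert force_sign(2.0, 1.0) == +1
      assert force_sign(1.0, 3.0) == -1
      assert force_sign(1.0, 1.0) == 0
      ```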

    • David Springer

      At some point aircraft flight and weapon systems are tested in flight and in combat. That’s typically called the point where the rubber meets the road for people familiar with the whole design cycle from concept to production.

    • David Springer

      The best made models of mice and men often go awry.

      Write that down.

  59. When fiction writers give us a glimpse of a superhuman, it usually comes with an explanation of how the powers came about; being bathed in a massive dose of cosmic radiation is what they usually share in common. In the schoolteacher version, what binds us Earth-killing global-warming superhumans together is that, according to their global circulation models, we Westerners – who provide the very modernity the climate alarmists enjoy even as they blame and righteously rail against us – are destroying the Earth with our CO2.

  60. With regard to climate models: when a model does not include an important variable, can it have any validity? The heat emissions from our energy use contain four times the energy accounted for by the actual measured rise in atmospheric temperature, yet nowhere in the models is heat emission included. It is assumed that CO2 is the prime variable contributed by anthropogenic activity, and the models have shown themselves to be woefully inadequate. Let’s go back to the start, include both CO2 and heat emissions as prime variables, and see where that leads us. Would JC concur with this suggestion?

    • The Old Farmer’s Almanac predicted global cooling based on the solar cycle, not human heat emissions. Was the Medieval warm period caused by too many Roman bonfires? Was the LIA caused by the shutdown of French nuclear power plants?

    • Philip Haddad,

      You are up against people who believe the increase in temperature close to a roaring bonfire is surely the result of the measurable increase in CO2 above the conflagration.

      If you are right, then there should be increased temperatures near a heat source. An increase in CO2 is a result of oxidation causing heat; it follows heat production, not the other way round.

      Alas, the inmates are demonstrably in charge of the asylum. Don’t worry, be happy. This present insanity will pass, I’m sure!

      Live well and prosper,

      Mike Flynn.

    • David Springer

      Waste heat is undoubtedly a major player in urban environments and the surrounding areas. The urban heat island effect is very well known, but urban areas comprise less than 1% of the earth’s surface and simply cannot significantly affect the global average temperature of the lower troposphere.

    • Roger Pielke Sr wrote a chapter in one of his books about heat impacts in urban and disturbed farmed areas as the land cover moved from natural covers, such as tallgrass prairies, to wheat fields – and that is not even including all the urban heat island impacts of buildings, asphalt and concrete. There is lots to measure on regional scales to study variations in local climate.
      Scott

  61. Steven’s point that there are always aspects of reality not captured by a model, i.e. “all models are false”, is correct – or, more specifically, a correct interpretation of Popper (and the Duhem-Quine thesis).

    On the other hand (which is really the same hand) the commenter who suggests that we would reject a model depending on what problem we are trying to solve, also offers a correct interpretation of Popper.

    There has been much theoretical and practical work, since Popper. While many would agree that Popper’s insights still point in the right direction, it is important to understand the historical context that informs those ideas; and in so doing, to recognize that some of those ideas were ridiculously (albeit appealingly) heroic.

    • Heh, this is no century for heroes.
      =======

    • Martha, which aspects the models are missing is the important part. Earth is 70% water; water is assumed to produce 66% of the “enhanced” GHE, which is a linear assumption; and water vapor/clouds are directly dependent on ocean surface temperature and wind speed. The models don’t get the SST correct, nor do they get the surface wind dynamics right, which means they don’t get clouds and atmospheric water vapor right.

      To boot, since the surface winds drive Ekman transport, the models don’t get ocean surface currents right, meaning they can’t replicate ENSO, the PDO, the AMO and most of the other “Os” without training.

      Some would consider coupled ocean-atmosphere models a tad less than suboptimal under those circumstances. Now, if you train some of the models with observed temperatures, they can replicate the PDO and the centennial-scale Pacific “oscillations”, but that pesky centennial-scale oscillation implies longer-term persistence, which would reduce the CO2-equivalent forcing impact.

      So we have models that appear to be capable of being useful, but the results produced are counter to the theory the models were designed to prove. What’s a Climate Scientist to do?

  62. The obvious answer to the banner question is: “Only as far as we can throw them.”

  63. According to the data from NASA and the Hadley Centre, the global mean temperature pattern has not changed since records began 160 years ago.
    This single pattern has a long-term global warming rate of 0.06 deg C per decade and an oscillation due to ocean cycles ( http://bit.ly/nfQr92 ) of 0.5 deg C every 30 years, as shown in the linked graph.
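    Reading that claim literally – a 0.06 deg C/decade trend plus a 60-year cycle with a 0.5 deg C swing – the pattern is easy to sketch. A minimal sketch that reproduces only the claimed shape, not any observational dataset:

    ```python
    # Sketch of the claimed pattern: a 0.06 C/decade trend plus a 60-year
    # cycle with a 0.5 C peak-to-trough swing. The parameters are the
    # comment's claims, not values fitted to any dataset.
    import numpy as np

    years = np.arange(1850, 2014)
    trend = 0.006 * (years - 1850)                            # 0.06 C per decade
    cycle = 0.25 * np.sin(2 * np.pi * (years - 1850) / 60.0)  # +/-0.25 C, 60-yr period
    pattern = trend + cycle

    print(f"net change 1850-2013: {pattern[-1] - pattern[0]:+.2f} C")
    ```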

  64. John Robertson

    Trusting models?
    Abandoning reason.
    When a modelled result fails to match reality, this tells you little about reality and much about the model.
    Models can be said to produce data, but only about the modelling.
    They are useless for creating a mental map of a process we do not yet understand.
    Hence Feynman is most correct, and the belief that the output of model runs is anything more than information about the model is dubious at best, delusional if confused with reality.

  65. It looks like Hansen’s models have been a huge failure:
    http://news.google.com/newspapers?id=llJeAAAAIBAJ&sjid=AWENAAAAIBAJ&pg=5501,1378938&dq=james-hansen&hl=en

    James Hansen 1986: Within 15 Years Temps Will be Hotter Than Past 100,000 Years.
    Hansen said the average U.S. temperature had risen from one to two degrees since 1958 and is predicted to increase an additional 3 or 4 degrees sometime between 2010 and 2020.
    http://hauntingthelibrary.wordpress.com/2011/01/06/james-hansen-1986-within-15-years-temps-will-be-hotter-than-past-100000-years/

  66. The conversation should first be about GCMs and whether they can simulate the current climate, and the answer is yes: enough is known about the fluid equations, the heating from the sun, water effects, and radiative transfer to give a good rendering of the climate, including pole-to-equator temperature gradients and seasonal changes around the world. Try running a GCM without CO2 or with the wrong amount, and it will fail, for the reason that the CO2 we have helps to determine the climate we have, along with our distance from the sun and the net albedo of earth’s various surfaces. To the extent that GCMs have succeeded in simulating climate and are used as tools for understanding it, they can also be used for climate change studies, especially with what by climate standards are small perturbations that may change temperatures by only a fraction of the global and seasonal range. So, to disqualify a climate model, first you have to disqualify the GCM on which it is based. If it passes as a GCM, it passes as a climate model for small (~1% forcing) perturbations like doubling CO2.

    • Models can’t predict the AMO or the PDO. That means we don’t understand heat transfer well enough to program it into the models. The same forcing can cause a snowball earth or an ice-free earth depending on how heat is transferred. We are a long way from understanding the climate, and climate models have no use other than perhaps helping to identify what we don’t know.

    • steven, the AMO and PDO are consequences of chaotic ocean motions, and the fact that a model can’t track a particular chaotic fluctuation doesn’t disqualify it from defining a mean state. Chaotic motions are initial-value problems, not boundary-value problems as climate is. It is the same fallacy as “you can’t tell if it will rain next week, so you can’t predict climate change”, but I think people are unable to see that the AMO and PDO are the ocean’s version of weather fluctuations, happening on its longer time scales. Ocean “weather” is slow motion compared to the atmosphere, with certain patterns taking years to play out.
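      A minimal illustration of that initial-value vs boundary-value distinction, using the textbook Lorenz-63 system rather than a GCM: two runs from nearly identical starting points decorrelate completely (the “weather”), yet their long-run mean state (the “climate”) comes out essentially the same.

      ```python
      # Lorenz-63 toy: trajectories are an initial-value problem, long-run
      # statistics are not. Standard parameter values; crude Euler stepping
      # is fine for illustration.
      import numpy as np

      def lorenz_z(x0, steps=200_000, dt=0.001, sigma=10.0, rho=28.0, beta=8/3):
          x, y, z = x0
          zs = np.empty(steps)
          for i in range(steps):
              dx = sigma * (y - x)
              dy = x * (rho - z) - y
              dz = x * y - beta * z
              x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
              zs[i] = z
          return zs

      a = lorenz_z((1.0, 1.0, 1.0))
      b = lorenz_z((1.0, 1.0, 1.000001))   # tiny initial perturbation

      # The two "weather" histories end up uncorrelated...
      print("late-trajectory correlation:",
            np.corrcoef(a[-50_000:], b[-50_000:])[0, 1])
      # ...but the "climate" (the mean state) barely moves.
      print("mean z:", a.mean(), b.mean())
      ```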

    • Jim, so how many years before the chaos averages out? How many longer variations of chaos are there? How do you know it is chaotic if you don’t know what causes it? It isn’t a fallacy to state that if you don’t understand why something happens, then you don’t understand why something happens. It doesn’t matter whether you will know anytime soon or not. Let me know when you can explain the Gulf Stream reconstruction that shows it slowing from the MWP to the LIA and then speeding up again into the present warm period.

    • To the extent that GCMs have succeeded in simulating climate and are used as tools for understanding it, they can also be used for climate change studies, especially with what by climate standards are small perturbations that may change temperatures by only a fraction of the global and seasonal range.

      They respond poorly under perturbations, such as volcanic singularities, the (CMIP5) models being worse than their predecessors. The failure in the dynamical response to volcanics (which give rise to dissipative structures) suggests systemic failures in the models’ dynamics, e.g. Driscoll 2012.

      http://onlinelibrary.wiley.com/doi/10.1029/2012JD017607/abstract

    • The conversation should first be about GCMs and whether they can simulate the current climate, and the answer is yes,

      How do you figure that? They don’t generate a pause, nor do they generate regional climates correctly; the only reason they get global temperature even close is that they average temperatures for the entire world.

      Try running a GCM without CO2 or with the wrong amount, and it will fail,

      Well duh, that’s because they programmed it into the GCMs to be the main driver.

    • maksimovich, volcanic dust effects are less well known than CO2’s. CO2 is just radiative physics; its quantity is well known and it is well mixed, while dust has properties and quantities, varying by location, that are less well known. Besides which, volcanic cooling is simulated, so I don’t know what you are referring to in the first place.

    • Mi Cro, well, radiative physics is programmed in, but maybe that is what you meant by “duh”.

      • Radiation physics would get a CS of ~1 C, and from station data it doesn’t even look that high.
        As for the “chaotic oceans”, your view on that appears to be wrong as well: the major currents aren’t all that chaotic, and the AMO/PDO are emergent states, not weather.

    • Radiation physics gets you 33 K warmer with GHGs than without, and that is not just CO2; H2O matters too in accounting for that. The 33 K is accounted for by these models, which allow water vapor to increase over warmer water. They would have a hard time accounting for the current level of tropical moisture without this effect.
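      The 33 K figure is a back-of-envelope radiative balance: the effective radiating temperature implied by the solar constant and the planetary albedo, versus the observed mean surface temperature. A quick check with standard textbook numbers:

      ```python
      # Effective radiating temperature of Earth vs observed surface mean.
      # Standard textbook values; nothing here is fitted.
      S0 = 1361.0        # solar constant, W/m^2
      albedo = 0.30      # planetary albedo
      sigma = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4

      T_eff = (S0 * (1 - albedo) / (4 * sigma)) ** 0.25   # ~255 K
      T_surface = 288.0                                   # observed global mean, K

      print(f"effective temperature: {T_eff:.0f} K")
      print(f"greenhouse effect: {T_surface - T_eff:.0f} K")  # ~33 K
      ```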

      • Radiation physics gets you 33 K warmer with GHGs than without, and that is not just CO2.

        This might be (I know it is warmer than it’s supposed to be, no argument there). But I really like getting my hands dirty (even metaphorically), so I went out and got an IR (8-14µ) thermometer, good to -60F (+9/-9F @ -58F), and on clear skies at 41N/81W it reads below -60F; day/night doesn’t matter. It’s so cold it overloads the temperature-stabilizing circuit and the measured temp drops to less than -90F. The point is that right in the band (8-14µ) where CO2 is supposed to be returning all of that DWLR, it is very, very cold.

    • Jimmy dee enumerates the reasons why the GCMs give a good rendering of the climate, but the renderings ain’t matching reality. Uh, oh.

    • Curious George

      Jim – I would like to “Try running a GCM without CO2 or with the wrong amount”. Please tell me where to get code and what hardware I need.

  67. steven, the GCMs can simulate climate (decadally averaged seasons) without knowing the exact state of the ocean circulation. That varies it by only 0.1 degrees. It is not a problem.

    • steven, the GCMs can simulate climate (decadally averaged seasons) without knowing the exact state of the ocean circulation. That varies it by only 0.1 degrees. It is not a problem.

      Of course it is; it’s most likely the source of the pause.

    • Exactly, the pause is ocean weather plus perhaps a little solar effect. Tell steven this.

    • Jim, I’ll take your non-response response as meaning you don’t know what caused the 200-year trends. Don’t feel bad; I don’t think anyone knows. Ignoring problems rarely makes them go away, but try it if you want.

    • steven, the GCMs can simulate climate (decadally averaged seasons) without knowing the exact state of the ocean circulation.

      NO, they generate output that sometimes matches real data, but that does not really mean they simulate climate. They tweak and tune to get a match, but they don’t know what makes temperature disagree with their forecasts. They don’t know why it snows more when they promised snow would be a thing of the past.

  68. Points about models of – say – missile launches are disingenuous at best.

    It is quite straightforward and rapid to ascertain the real-world performance of the missile, tweak both the model and the missile, and try again, with various AOAs and other parameters, until the differences between model and real-world performance are small enough that the models are very informative indeed. This is a closed-loop system.

    Climate models are an open-loop system: it is impossible to adjust the climate to see whether modifications are reflected in the model, so there is a whole dimension missing.
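    A schematic of that closed loop (hypothetical function names and numbers, nothing like a real test program): run the model, run the real test, measure the gap, adjust, repeat. The point is that the live-fire step has no climate analogue.

    ```python
    # Schematic of closed-loop model tuning with made-up numbers.
    # For climate there is no equivalent of live_fire_test().

    def model_range(drag_coeff: float) -> float:
        """Toy model: predicted missile range in km for a given drag coefficient."""
        return 120.0 / drag_coeff

    def live_fire_test() -> float:
        """Stand-in for an expensive real-world test; returns measured range in km."""
        return 80.0

    drag = 1.0
    for _ in range(20):
        measured = live_fire_test()
        gap = model_range(drag) - measured
        if abs(gap) < 0.5:                     # model and reality agree: loop closed
            break
        drag *= model_range(drag) / measured   # tweak the model and try again

    print(f"tuned drag: {drag:.2f}, model range: {model_range(drag):.1f} km")
    ```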

    As for the comment that models don’t “have any difficulties in predicting climate for many weeks”, that’s not reassuring when my energy bills are increasing exponentially based on model estimates of climate many decades – in fact almost a century – in the future, and OAPs are dying because they can’t afford to both eat and heat their homes.

    Climate models are killing people.

  69. George, you can google “general circulation model GCM code” and find a number of GCM models you can download. You will have to figure out how to turn the CO2 knob down, though in some cases it’s in a parameter file.

  70. Curious George

    No. I might select a model Jim D does not approve of.

  71. “Mosh keeps claiming that planes are built with false models, but (Mosh said:)
    no plane was ever built and flown based simply on a model

    (and the current finite element models HAVE been tested exhaustively). There are wind tunnels and test flights and 100 yrs of experience building planes including many that fell apart or fell down.”

    The NASA Space Shuttle, an airplane/spaceship, was flown with live pilots on EVERY flight. There were NO flight tests without pilots.

    Wind tunnel tests, Computational Fluid Dynamics and computer simulations were used to predict how it would perform, and we nailed it.

    Some of the same people who made us successful in space are now studying climate. We went to the moon in a decade, and understanding climate is not going to turn out to be harder than we can deal with.

    There was no way to flight-test early airplanes without a pilot.
    The hundred years of experience included many airplanes that were never flight-tested without a pilot.

  72. A Funny for the Econs and the Philosophs

    https://medium.com/the-nib/27188bafcd62

  73. Pingback: Weekly Climate and Energy News Roundup | Watts Up With That?

  74. Pingback: IPCC CO2 Science vs. Majority of Peer-Reviewed Research - Page 5 - US Message Board - Political Discussion Forum

  75. Pingback: Recent Energy And Environmental News – December 30th 2013 | PA Pundits - International

  76. Pingback: How far should we trust models? | ajmarciniak

  77. TimTheToolMan

    Mosher writes “Suppose I am building a physics model of car.”

    There is a fundamental difference between a model that models the “now” and one that models the future. We’re quite good at weather modelling because it’s in the “now”.

    Suppose you modelled your car at 104 mph and it only reached 94 mph… well, that’s fine until you use the model to predict how far it will go on a tank of fuel.

    Switch the model to a rocket. It would be disastrous if the same thing happened and the rocket wasn’t able to reach the velocity needed to get into the correct orbit to slingshot it around various planets and on to its ultimate goal. That goal might be a very “out of reach” target indeed.
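    A toy version of the car example (made-up simplification: cruising is drag-dominated, so power at top speed scales with the cube of speed and range scales inversely with the drag constant). On those assumptions, a 10% top-speed error compounds into a ~35% range error:

    ```python
    # How a modest top-speed error compounds in a derived prediction.
    # Assumes drag-dominated motion: power at top speed ~ k * v**3, and
    # range at a fixed cruise speed ~ 1 / k. Made-up simplification.
    v_model, v_true = 104.0, 94.0          # mph: modelled vs measured top speed

    drag_error = (v_model / v_true) ** 3   # factor by which the model understates drag
    range_error = drag_error               # range prediction inherits the same factor

    print(f"model understates drag by {100 * (drag_error - 1):.0f}%")
    print(f"so it overstates range by {100 * (range_error - 1):.0f}%")
    ```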

  78. Pingback: The Wind Farm Scam | Bookfall