A few things that caught my eye this past week.
Global savings from global warming
At MasterResource, Chip Knappenberger has a provocative post entitled Global savings: Billion dollar weather events averted by global warming. Excerpts:
For every billion-dollar weather disaster identified as being ‘consistent with’ human-caused global warming, there are probably several other potential billion-dollar weather disasters that human-caused global warming averted.
I have begun to compile a list of averted billion-dollar weather events during the past year “consistent with” anthropogenic global warming. A full list would necessarily be much longer, because there are certainly many more events we could never know about, since they did not rise to an extreme that would be recorded.
Hurricane Debby, June 2012. Hurricane Debby never formed. Increased vertical wind shear “consistent with” expectations from global warming prevented the development of tropical storm Debby into hurricane Debby. Damage from tropical storm Debby has been estimated at $250 million, with 5 direct and 3 indirect fatalities from the storm. Had global warming not helped to inhibit the growth of the storm system, these totals may well have been higher, exceeding a billion dollars. (For more information on the life of Debby, see here.)
Now that is a really novel angle on the global warming-disaster meme.
Der Spiegel has an interview with Hans von Storch. Excerpts:
SPIEGEL: Would you say that people no longer reflexively attribute every severe weather event to global warming as much as they once did?
Storch: Yes, my impression is that there is less hysteria over the climate. There are certainly still people who almost ritualistically cry, “Stop thief! Climate change is at fault!” over any natural disaster. But people are now talking much more about the likely causes of flooding, such as land being paved over or the disappearance of natural flood zones — and that’s a good thing.
SPIEGEL: Yet it was climate researchers, with their apocalyptic warnings, who gave people these ideas in the first place.
Storch: Unfortunately, some scientists behave like preachers, delivering sermons to people. What this approach ignores is the fact that there are many threats in our world that must be weighed against one another. If I’m driving my car and find myself speeding toward an obstacle, I can’t simply yank the wheel to the side without first checking to see if I’ll instead be driving straight into a crowd of people. Climate researchers cannot and should not take this process of weighing different factors out of the hands of politics and society.
SPIEGEL: Just since the turn of the millennium, humanity has emitted another 400 billion metric tons of CO2 into the atmosphere, yet temperatures haven’t risen in nearly 15 years. What can explain this?
Storch: So far, no one has been able to provide a compelling answer to why climate change seems to be taking a break. We’re facing a puzzle. Recent CO2 emissions have actually risen even more steeply than we feared. As a result, according to most climate models, we should have seen temperatures rise by around 0.25 degrees Celsius (0.45 degrees Fahrenheit) over the past 10 years. That hasn’t happened. In fact, the increase over the last 15 years was just 0.06 degrees Celsius (0.11 degrees Fahrenheit) — a value very close to zero. This is a serious scientific problem that the Intergovernmental Panel on Climate Change (IPCC) will have to confront when it presents its next Assessment Report late next year.
SPIEGEL: How long will it still be possible to reconcile such a pause in global warming with established climate forecasts?
Storch: If things continue as they have been, in five years, at the latest, we will need to acknowledge that something is fundamentally wrong with our climate models. A 20-year pause in global warming does not occur in a single modeled scenario. But even today, we are finding it very difficult to reconcile actual temperature trends with our expectations.
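As an aside, for readers curious how figures like the “0.06 degrees Celsius over 15 years” are typically derived, here is a minimal Python sketch. It is my own illustration with made-up numbers (not HadCRUT or any real dataset): it fits an ordinary least-squares trend to 15 annual temperature anomalies and compares the fitted rate with the roughly 0.25 degrees Celsius per decade that von Storch says most models would have expected.

```python
# Toy illustration (synthetic data, not real observations): estimate a
# 15-year linear trend in annual temperature anomalies and compare it
# with an assumed model-expected rate of ~0.25 degC per decade.
import numpy as np

rng = np.random.default_rng(0)

years = np.arange(1998, 2013)                      # 15 annual values
true_trend_per_year = 0.004                        # ~0.06 degC over 15 years
anomalies = (0.4 + true_trend_per_year * (years - years[0])
             + rng.normal(0.0, 0.08, size=years.size))  # internal variability

# Ordinary least-squares fit: anomaly = intercept + slope * year
slope, intercept = np.polyfit(years, anomalies, 1)

print(f"Fitted trend:        {10.0 * slope:+.2f} degC per decade")
print(f"Model-expected rate: {0.25:+.2f} degC per decade (assumed, per the interview)")
```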
Lucia has a very good post at the Blackboard, discussing a new paper entitled Strengthening of ocean heat uptake efficiency associated with the recent climate hiatus. Lucia summarizes the main points of the paper to be:
- There has been a hiatus in surface warming, and this hiatus represents a statistically rare event in climate models.
- They suggest this hiatus can be explained as arising from the Pacific Decadal Oscillation.
- They further suggest that those (statistically rare) periods of ‘hiatus’ in surface warming seen in models also correspond to periods of enhanced ocean heat uptake in the models.
Lucia provides some interesting commentary, and the discussion in the comments is also worth reading.
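On the first point, the “statistically rare in climate models” claim is usually assessed by counting how often trends as small as the observed one turn up in model runs. Here is a rough, self-contained sketch of that bookkeeping using synthetic “model-like” series (a forced trend plus AR(1) internal variability); every parameter value is invented for illustration, and nothing here is taken from the paper or from Lucia’s post.

```python
# Rough sketch: how often does a 15-year trend at or below the "observed"
# value occur in synthetic model-like runs (forced trend + AR(1) noise)?
# All parameter values are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

n_runs, n_years = 1000, 15
forced_trend = 0.02          # degC per year: assumed forced warming rate
phi, sigma = 0.5, 0.1        # AR(1) persistence and innovation std (assumed)
observed_trend = 0.004       # degC per year: illustrative "hiatus" trend

years = np.arange(n_years)
count = 0
for _ in range(n_runs):
    noise = np.zeros(n_years)
    for t in range(1, n_years):
        noise[t] = phi * noise[t - 1] + rng.normal(0.0, sigma)
    series = forced_trend * years + noise
    slope = np.polyfit(years, series, 1)[0]
    if slope <= observed_trend:
        count += 1

print(f"Fraction of synthetic runs with a 15-year trend <= observed: {count / n_runs:.3f}")
```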
WUWT has elevated the previous comment by rgbatduke into a main post, and rgbatduke responds with an even longer and more provocative comment. A few excerpts:
[T]here is no reason to think that the central limit theorem and by inheritance the error function or other normal-derived estimates of probability will have the slightest relevance to any of the climate models, let alone all of them together. One can at best take any given GCM run and compare it to the actual data, or take an ensemble of Monte Carlo inputs and develop many runs and look at the spread of results and compare THAT to the actual data.
In the latter case one is already stuck making a Bayesian analysis of the model results compared to the observational data (PER model, not collectively) because when one determines e.g. the permitted range of random variation of any given input one is basically inserting a Bayesian prior (the probability distribution of the variations) on TOP of the rest of the statistical analysis. Indeed, there are many Bayesian priors underlying the physics, the implementation, the approximations in the physics, the initial conditions, the values of the input parameters. Without wishing to address whether or not this sort of Bayesian analysis is the rule rather than the exception in climate science, one can derive a simple inequality that suggests that the uncertainty in each Bayesian prior on average increases the uncertainty in the predictions of the underlying model. I don’t want to say proves because the climate is nonlinear and chaotic, and chaotic systems can be surprising, but the intuitive order of things is that if the inputs are less certain and the outputs depend nontrivially on the inputs, so are the outputs less certain.
I therefore repeat to Nick the question I asked on other threads. Is the near-neutral variation in global temperature for at least 1/8 of a century (since 2000, to avoid the issue of 13, 15, or 17 years of “no significant warming” given the 1997/1999 El Nino/La Nina one-two punch, since we have no real idea of what “significant” means given observed natural variability in the global climate record that is almost indistinguishable from the variability of the last 50 years) strong evidence for warming of 2.5 C by the end of the century? Is it even weak evidence for? Or is it in fact evidence that ought to at least some extent decrease our degree of belief in aggressive warming over the rest of the century?
I make this point to put the writers of the Summary for Policy Makers for AR5 on notice that if they repeat the egregious error made in AR4 and make any claims whatsoever for the predictive power of the spaghetti snarl of GCM computations, if they use the terms “mean and standard deviation” of an ensemble of GCM predictions, if they attempt to transform those terms into some sort of statement of probability of various future outcomes for the climate based on the collective behavior of the GCMs, there will be hell to pay, because GCM results are not samples drawn from a fixed distribution; they thereby fail to satisfy the elementary axioms of statistics and render both mean behavior and standard deviation of mean behavior over the “space” of perturbations of model types and input data utterly meaningless as far as having any sort of theory-supported predictive force in the real world. Literally meaningless. Without meaning.
The probability ranges published in AR4’s summary for policy makers are utterly indefensible by means of the correct application of statistics to the output from the GCMs collectively or singly. When one assigns a probability such as “67%” to some outcome, in science one had better be able to defend that assignment from the correct application of axiomatic statistics right down to the number itself. Otherwise, one is indeed making a Ouija board prediction, which as Greg pointed out on the original thread, is an example deliberately chosen because we all know how Ouija boards work! They spell out whatever the sneakiest, strongest person playing the game wants them to spell.
And for the sake of all of us who have to pay for those sins in the form of misdirected resources, please, please do not repeat the mistake in AR5. Stop using phrases like “67% likely” or “95% certain” in reference to GCM predictions unless you can back them up within the confines of properly done statistical analysis and mere common wisdom in the field of predictive modeling — a field where I am moderately expert — where if anybody, ever claims that a predictive model of a chaotic nonlinear stochastic system with strong feedbacks is 95% certain to do anything I will indeed bitch slap them the minute they reach for my wallet as a consequence.
Predictive modeling is difficult. Using the normal distribution in predictive modeling of complex multivariate systems is (as Taleb points out at great length in The Black Swan) easy but dumb. Using it in predictive modeling of the most complex system of nominally deterministic equations — a double set of coupled Navier-Stokes equations with imperfectly known parameters on a rotating inhomogeneous ball in an erratic orbit around a variable star with an almost complete lack of predictive skill in any of the inputs (say, the probable state of the sun in fifteen years), let alone the output — is beyond dumb. Dumber than dumb. Dumb cubed. The exponential of dumb. The phase space filling exponential growth of probable error to the physically permitted boundaries dumb.
I assert — as a modest proposal indeed — that we do not know enough to build a good, working climate model. We will not know enough until we can build a working climate model that predicts the past — explains in some detail the last 2000 years of proxy derived data, including the Little Ice Age and Dalton Minimum, the Roman and Medieval warm periods, and all of the other significant decadal and century scale variations in the climate clearly visible in the proxies. Once we can predict and understand the gross motion of the climate, perhaps we can discern and measure the actual “warming signal”, if any, from CO_2. In the meantime, as the GCMs continue their extensive divergence from observation, they make it difficult to take their predictions seriously enough to condemn a substantial fraction of the world’s population to a life of continuing poverty on their unsupported basis.
Let me make this perfectly clear. WHO has been publishing absurdities such as the “number of people killed every year by global warming” (subject to a dizzying tower of Bayesian priors I will not attempt to deconstruct but that render the number utterly meaningless). We can easily add to this number the number of people a year who have died whose lives would have been saved if some of the half-trillion or so dollars spent to ameliorate a predicted disaster in 2100 had instead been spent to raise them up from poverty and build a truly global civilization.
That is why presenting numbers like “67% likely” on the basis of gaussian estimates of the variance of averaged GCM numbers as if it has some defensible predictive force to those who are utterly incapable of knowing better is not just dumb, it is at best incompetently dumb. The nicest interpretation of it is incompetence. The harshest is criminal malfeasance — deliberately misleading the entire world in such a way that millions have died unnecessarily, whole economies have been driven to the wall, and worldwide suffering is vastly greater than it might have been if we had spent the last twenty years building global civilization instead of trying to tear it down!
Even if the predictions of catastrophe in 2100 are true — and so far there is little reason to think that they will be based on observation as opposed to extrapolation of models that rather appear to be failing — it is still not clear that we shouldn’t have opted for civilization building first as the lesser of the two evils.
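For readers who do not work with model ensembles, the core statistical complaint in these excerpts (that the cross-model “mean and standard deviation” is not a sampling distribution) can be made concrete with a small toy sketch. This is my own illustration, not code from rgbatduke or from any GCM analysis: each synthetic “model” carries its own fixed structural bias plus run-to-run internal variability, and the spread across models measures disagreement between the models rather than sampling error around a shared truth.

```python
# Toy sketch: cross-model spread vs. within-model run-to-run spread.
# Each "model" has a fixed structural bias (the models are not draws from
# one common distribution), plus internal variability across repeated runs.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)

structural_bias = np.array([0.6, 0.1, -0.3, 0.9, -0.5])  # degC, one per model
runs_per_model = 20
internal_sd = 0.1            # run-to-run internal variability (assumed)

# Simulated end-of-century warming for each run of each model
runs = structural_bias[:, None] + rng.normal(
    0.0, internal_sd, size=(structural_bias.size, runs_per_model))

within_model_sd = runs.std(axis=1).mean()   # noise around each model's own answer
cross_model_sd = runs.mean(axis=1).std()    # disagreement between the models

print(f"Typical within-model spread : {within_model_sd:.2f} degC")
print(f"Cross-model spread of means : {cross_model_sd:.2f} degC")
# The second number reflects structural differences between the models;
# reading it as sampling error around a shared truth presumes the models are
# random draws from one fixed distribution, which is the assumption being
# challenged in the excerpts above.
```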
Head to WUWT and read the entire post.