by Judith Curry
Use of state-of-the-art statistical methods could substantially improve the quantification of uncertainty in assessments of climate change.
The latest issue of Nature Climate Change has another interesting commentary, by Katz, Craigmile, Guttorp, Haran, Sanso, and Stein: Uncertainty analysis in climate change assessments [link; behind paywall]. Excerpts:
Because the climate system is so complex, involving nonlinear coupling of the atmosphere and ocean, there will always be uncertainties in assessments and projections of climate change. This makes it hard to predict how the intensity of tropical cyclones will change as the climate warms, the rate of sea-level rise over the next century or the prevalence and severity of future droughts and floods, to give just a few well-known examples. Indeed, much of the disagreement about the policy implications of climate change revolves around a lack of certainty. The forthcoming Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5) and the US National Climate Assessment Report will not adequately address this issue. Worse still, prevailing techniques for quantifying the uncertainties that are inherent in observed climate trends and projections of climate change are out of date by well over a decade. Modern statistical methods and models could improve this situation dramatically.
Uncertainty quantification is a critical component in the description and attribution of climate change. In some circumstances, uncertainty can increase when previously neglected sources of uncertainty are recognized and accounted for. In other circumstances, more rigorous quantification may result in a decrease in the apparent level of uncertainty, in part because of more efficient use of the available information. Nevertheless, policymakers need more accurate uncertainty estimates to make better decisions.
The recent IPCC Special Report Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation10, on which the IPCC AR5 relies, does not take full advantage of the well-developed statistical theory of extreme values. Instead, many results are presented in terms of spatially and temporally aggregated indices and/or consider only the frequency, not the intensity of extremes. Such summaries are not particularly helpful to decision-makers. Unlike the more familiar statistical theory for averages, extreme value theory does not revolve around the bell-shaped curve of the normal distribution. Rather, approximate distributions for extremes can depart far from the normal, including tails that decay as a power law. For variables such as precipitation and stream flow that possess power-law tails, conventional statistical methods would underestimate the return levels (that is, high quantiles) used in engineering design and, even more so, their uncertainties. Recent extensions of statistical methods for extremes make provision for non-stationarities such as climate change. In particular, they now provide non-stationary probabilistic models for extreme weather events, such as floods.
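To make the statistical machinery concrete, here is a minimal sketch (not from the paper; the annual maxima are simulated) of fitting a GEV distribution and computing a return level, with a normal fit shown for contrast. Non-stationary extensions of the same idea let the location or scale parameter depend on time or another covariate.

```python
# Minimal sketch (illustrative only): fit a GEV to simulated annual maxima and
# compute the 100-year return level. Values are invented, not observations.
import numpy as np
from scipy import stats

# Pretend these are 60 years of annual maximum daily precipitation (mm).
annual_max = stats.genextreme.rvs(c=-0.2, loc=50, scale=10, size=60,
                                  random_state=0)

# Fit the GEV; scipy's shape convention is c = -xi, so negative c means a
# heavy (power-law-like) upper tail.
c, loc, scale = stats.genextreme.fit(annual_max)

# 100-year return level: the quantile exceeded with probability 1/100 per year.
rl_100 = stats.genextreme.ppf(1 - 1.0 / 100, c, loc=loc, scale=scale)
print(f"fitted shape c = {c:.2f}, 100-year return level ~ {rl_100:.1f} mm")

# A normal fit to the same maxima will typically sit well below the GEV tail,
# which is the underestimation of return levels the commentary warns about.
mu, sigma = stats.norm.fit(annual_max)
print(f"normal-fit '100-year' level ~ {stats.norm.ppf(0.99, mu, sigma):.1f} mm")
```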
It has recently been claimed that, along with an increase in the mean, the probability distribution of temperature is becoming more skewed towards higher values. But techniques based on extreme value theory do not detect this apparent increase in skewness. Any increase is evidently an artefact of an inappropriate method of calculation of skewness when the mean is changing.
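A toy illustration of the kind of artefact being described (my own, not the authors' calculation): when the mean shifts in an accelerating way, the skewness of the pooled sample tends to come out positive even though the variability about the trend is symmetric.

```python
# Toy example: symmetric noise around an accelerating mean shift. Pooling the
# raw values tends to give positive sample skewness; detrending removes it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = np.arange(1900, 2014)
noise = rng.normal(0.0, 0.1, size=years.size)                    # symmetric variability
trend = 1.3 * ((years - years[0]) / (years[-1] - years[0]))**2   # accelerating mean shift

temps = trend + noise
print(f"skewness, trend left in:   {stats.skew(temps):+.2f}")
print(f"skewness after detrending: {stats.skew(temps - trend):+.2f}")
```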
Box 1 | Recommendations to improve uncertainty quantification.
- Replace qualitative assessments of uncertainty with quantitative ones.
- Reduce uncertainties in trend estimates for climate observations and projections through use of modern statistical methods for spatio-temporal data.
- Increase the accuracy with which the climate is monitored by combining various sources of information in hierarchical statistical models.
- Reduce uncertainties in climate change projections by applying experimental design to make more efficient use of computational resources.
- Quantify changes in the likelihood of extreme weather events in a manner that is more useful to decision-makers by using methods that are based on the statistical theory of extreme values.
- Include at least one author with expertise in uncertainty analysis on all chapters of IPCC and US national assessments.
If these recommendations are adopted, the improvements in uncertainty quantification would thereby help policymakers to better understand the risks of climate change and adopt policies that prepare the world for the future.
JC comments: I applaud the publication of this essay by Nature Climate Change. This paper comes from the Geophysical Statistics Project at NCAR, of which I am a fan.
I support most of the recommendations made in this paper. But I have one big-picture cautionary concern. Uncertainty quantification is a growing buzzword in the climate modeling community. Wikipedia defines UQ as the science of quantitative characterization and reduction of uncertainties in applications; it tries to determine how likely certain outcomes are if some aspects of the system are not exactly known. What’s not to like about UQ?
Sometimes the uncertainty monster is just too big to characterize, implying big areas of ignorance. The biggest area of ignorance is observations that we didn’t make in the past. In this case, at best we may be able to state the sign of a trend, or put some bounds around what we are estimating, but the nature of the uncertainty and the level of ignorance should be carefully evaluated before attempting to quantify the uncertainty in the context of a pdf (which has the potential to mislead). My concerns along these lines are fully explicated in my uncertainty monster paper.
The bottom line, and in this I agree 200% with the paper, is that climate science would HUGELY benefit from greater incorporation of statistical expertise. The relative lack of this expertise is one factor that contributes to overconfidence in the IPCC conclusions.
That’s a good line from Nature Climate Change. Meanwhile Bloomberg is reporting other concerns.
Yes, it makes a change from the usual, leaking what is NOT in the document.
“The World Bank says the planet is on course to warm by 4 degrees Celsius by 2100”
LOL This is a diversionary tactic. Watch your wallets. They’ll have them and be off running while you are momentarily confused.
Andrew
This, from the Bloomberg article, clearly shows why more accurate descriptions are important:
“U.S. and European Union envoys are seeking more clarity from the United Nations on a slowdown in global warming that climate skeptics have cited as a reason not to “panic” about environmental changes…”
______
The distinction between tropospheric warming and changes to Earth’s energy balance is a very specific distinction to make, yet a very important one. Far too nuanced for the general public perhaps, but when scientists (who ought to know better, and shame on them if they don’t) talk about “global warming” when they mean tropospheric temperature change, the issue gets muddied, and of course, this is exactly the intent for some.
R Gates
Ref my remark yesterday.
From the book ‘Climates of Hunger’ by Bryson and Murray, written in the 1970s.
They coined the phrase ‘human volcano’ in reference to the potential impact of human-generated particulates on climate. They suggested that widespread changes in farming, more ploughing, dry climates and drought caused extensive dust clouds, i.e., a ‘climate feedback’.
They did not specifically use the ‘carbon’ bit.
Unfortunately for you I have now copyrighted the terms ‘human carbon volcano’ and ‘Anthropocene’, so you will have to ask my permission to use them. I am working on a scale of fees at present.
tonyb
“Sometimes the uncertainty monster is just too big to characterize, implying big areas of ignorance. The biggest area of ignorance is observations that we didn’t make in the past. In this case, at best we may be able to state the sign of a trend, or put some bounds around what we are estimating, but the nature of the uncertainty and the level of ignorance should be carefully evaluated before attempting to quantify the uncertainty in the context of a pdf (which has the potential to mislead). My concerns along these lines are fully explicated in my uncertainty monster paper.”
Precisely what I was thinking, in my imprecise, layman’s way of course. In fact, when it comes to the most fundamental question of all, that is, will it get warmer or cooler, we simply don’t know. In that case, the sign remains unknown. That we don’t even know this, after all the time and money, and despite manifestly bogus claims of near certainty, is, to borrow K.T.’s most deeply regretted word, a travesty.
In terms of actual climatic changes to do with CO2… we really don’t know anything. That, from a policy standpoint, is the bottom line.
Judith
Interesting that you feel that statistical input and expertise is sorely lacking.
Virtually all of the hundreds of papers I have read on climate science subjects from sea levels to sea surface temperatures, temperature anomalies to arctic ice extent, rely heavily on a statistical interpretation.
Out of every one hundred papers, how many would you guess are properly and thoroughly statistically correct?
Should papers be revisited and reanalysed, as was done recently by Trenberth?
Tonyb
Statistics is used in most fields of science, and it’s probably used badly in all of them. The problems are perhaps discussed most in medicine, where tens of papers have been published on the inadequacy of the methods. This overview has a number of such references:
http://www.smw.ch/docs/pdf200x/2007/03/smw-11587.pdf
Climate science is just one of many in this respect, probably not any better or worse.
Yes, but climate science is the only field haranguing the public with dodgy statistics in an effort to get us to bomb ourselves back to the Stone Age with carbon taxes and looney green subsidies.
Pekka
Thanks for the link. As Don remarks, climate science has a special weight and importance as it affects us all (although he said it more colourfully).
That is why I would be interested to know how many climate science papers are flawed enough to make their results of dubious value. I would be especially interested to know how many of the IMPORTANT and INFLUENTIAL papers are flawed.
My belief, from delving into the history, is that all too many pillars of climate change are taken as correct and data are derived from them. There is no better example than the silly belief that we know global sea surface temperatures to 1860, or that missing the September ice melt and ignoring Russian data allows construction of a valid Arctic ice extent for the 1920-1940 period.
Not only do we have enormous uncertainties in the first place, but these uncertainties are then dissected and analysed by people who don’t seem to always know what they are doing.
tonyb
Pekka.
I would agree about medicine. I deal with predictive methods of sudden death and the use of expensive therapy that has numerous complications to prevent a fairly uncommon event. Any argument is clearly statistical, and this pervades virtually all fields of medical research. Personally, I have found the statistical methods required in my research to be extremely challenging, and I have found that I always have to go back to basic theory to understand what I am trying to achieve – a process that usually takes me months.
I would comment that medics are, as a general rule, not particularly good at quantitative reasoning and depend on impressions that are backed up with a bit of statistical fluff. I have been struck by many statistical arguments that have been advanced in climate science that have parallels to medicine and are, in my view, somewhat unsophisticated.
The second thing is that medical statistics is a proper discipline and now has much more influence in experimental design and interpretation. This is welcome, but it has been a long slow grind.
Until climatology really engages with statistics, and it is taught at a reasonably high level in undergraduate and postgraduate classes so that practitioners can apply fundamental statistical reasoning to their work, I think there will be a continuing problem.
RC, I work in a research dept at the Texas Medical Center. We have a Biostatistics department whose members have to be inducted into all trials prior to their design, and who supervise all the data collection and management.
I must admit that, as a Ph.D., I have changed my view of clinicians as scientists over the last four years.
In the UK, I also worked with many clinicians, mostly surgeons, and they had neither the backing of statisticians nor the inclination to seek them out.
@Doc Martyn
I am glad that you are revising your view of clinicians as scientists. I think that this is probably due to the rise of the MD/PhD (which I did rather oddly in biomedical engineering).
As regards statisticians, yes they are very helpful, but they can be led down false paths themselves and are equally likely to become fascinated by “state of the art” statistics. The trick, I think, is to have sufficient background to be able to argue the fundamentals of the statistical reasoning and to be able to do most of the modelling oneself.
Medicine has taken an obvious advantage over climate science; that is, constant and useful clinical feedback. The opportunity is there in climate science, but much befogged.
================
TonyB
Hurray for more Uncertainty Quantification! What is it?!
The international standards community established guidelines for evaluating and expressing measurement uncertainty, e.g., NIST TN1297 in 1994; the BIPM updated its guidance (the GUM) in 2008.
Yet in a Google Scholar search, out of 2,670,000 papers listing “climate”, only 136 also mention “climate” with “type b uncertainty” (or 36 that mention “climate” with “Type B error”).
i.e., only about 0.005% know about Type B uncertainty or error.
So out of every 100,000 climate science papers, about 5 might be statistically correct by including Type B errors. Not bad for a science claiming 95% probability!!!
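For readers unfamiliar with the jargon, here is a minimal sketch of what a GUM-style budget combining Type A (statistical) and Type B (non-statistical, e.g. instrument specification) uncertainties looks like; all numbers are invented for illustration.

```python
# Minimal GUM-style uncertainty budget (illustrative numbers only).
import numpy as np

# Type A: standard uncertainty of the mean from repeated readings.
readings = np.array([14.92, 15.05, 14.98, 15.10, 14.95])   # e.g. degrees C
u_a = readings.std(ddof=1) / np.sqrt(readings.size)

# Type B: the manufacturer quotes a +/-0.2 C bound; assuming a rectangular
# distribution, the standard uncertainty is the half-width divided by sqrt(3).
u_b = 0.2 / np.sqrt(3)

# Combined standard uncertainty and expanded uncertainty (coverage factor k=2).
u_c = np.sqrt(u_a**2 + u_b**2)
print(f"u_A = {u_a:.3f} C, u_B = {u_b:.3f} C, u_c = {u_c:.3f} C, U(k=2) = {2*u_c:.3f} C")
```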
I quote “The forthcoming Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5) and the US National Climate Assessment Report will not adequately address this issue. Worse still, prevailing techniques for quantifying the uncertainties that are inherent in observed climate trends and projections of climate change are out of date by well over a decade. ”
If this is true, and I am sure it is, then the forthcoming IPCC AR5 report is a travesty. The senior scientists of this, supposedly, august body are putting out a report which is clearly utter and complete scientific nonsense.
Where is the outrage from the scientific community, led by our hostess?
Jim, Re our hostess, I used to share your sentiments. I couldn’t understand how this stuff didn’t enrage her, since she clearly understands what’s going on…
And yet I’ve come to respect her measured approach. J.C. has the ear of the MSM these days, which for someone so far out of the mainstream (whether you think she qualifies as a skeptic or not… and I surely do), is quite a feat. Had she been invoking fire and brimstone from on high all this time, she’d be labeled one more crackpot denier. In fact she is in some quarters…
Once again I agree, Pokerguy; she seems to take a measured approach without stepping on toes, or at least on few. The more the public hears from her, no matter the pulpit, the more effect she has.
pokerguy, you write “And yet I’ve come to respect her measured approach.”
I wonder who you think will be more effective in the coming weeks; Marc Morano or Judith Curry. I would put my money on Marc.
Nice work, Jimmy. That’ll get ‘er.
“Reduce uncertainties in climate change projections by applying experimental design to make more efficient use of computational resources. ”
+1
“applying experimental design”
You mean they haven’t done any experimental design yet?
Andrew
There is probably some experimental design employed already; however, you can employ very specific experimental designs (DOEs) to vary inputs in a deliberate pattern. The experiments are run to obtain the outputs, and an ANOVA generated from the results tells you which input variables most influence the output and by how much. Partial or full factorial experiments can be run. The method allows one to understand how variables interact, or not, and by how much, and which inputs need to be carefully controlled and which are not too important. It is used frequently in the design of new products and processes, or to gain better quality control of a poorly understood process. The idea is to build quality into a product or process by determining the design or process conditions that keep it in the range where it always works properly; you can also use it to set the tolerances needed to maintain control.
Now imagine using this methodology to run simple climate models. One could run a DOE varying different inputs considered important to the output of interest, such as temperature, and gain some understanding of how sensitive temperature is to the inputs of interest. It would help refine how models run in ways that would in turn be used to improve future model design.
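As a concrete sketch of the idea (mine, not the commenter's): a two-level full factorial design run through a toy zero-dimensional energy-balance model, with main effects read off the runs. The parameter names and ranges are assumptions chosen only to show the mechanics.

```python
# Two-level full factorial design over three inputs of a toy energy-balance
# model; estimate the main effect of each input on equilibrium temperature.
import itertools
import numpy as np

def toy_equilibrium_temp(solar, albedo, emissivity):
    """Toy zero-dimensional balance: emissivity * sigma * T^4 = solar*(1-albedo)/4."""
    sigma = 5.67e-8
    return (solar * (1.0 - albedo) / (4.0 * sigma * emissivity)) ** 0.25

# Two levels (low, high) per input; purely illustrative ranges.
levels = {"solar": (1360.0, 1362.0), "albedo": (0.29, 0.31), "emissivity": (0.60, 0.62)}
names = list(levels)

runs, responses = [], []
for combo in itertools.product([0, 1], repeat=len(names)):   # 2^3 = 8 runs
    params = {n: levels[n][c] for n, c in zip(names, combo)}
    runs.append(combo)
    responses.append(toy_equilibrium_temp(**params))

runs, responses = np.array(runs), np.array(responses)

# Main effect of each factor: mean response at its high level minus at its low level.
for j, n in enumerate(names):
    effect = responses[runs[:, j] == 1].mean() - responses[runs[:, j] == 0].mean()
    print(f"main effect of {n}: {effect:+.2f} K")
```

When each model run is expensive, a fractional factorial or Latin hypercube design does the same job with far fewer runs, which is presumably the point of the paper's recommendation about making more efficient use of computational resources.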
With a 10-fold reduction possible in satellite uncertainty, there must be some room for improvement. See Nigel Fox, TRUTHS project.
As a statistician (loosely speaking), I can’t possibly object to improved statistical techniques.
But with the caveat that we not believe the results of uncertainty analysis too much. Every method of analysis is based on an underlying mathematical model that makes simplifying assumptions, both for tractability and out of ignorance. This means that estimated uncertainty is likely too low. Or to put it another way, it’s difficult to account for what we don’t even know we don’t know.
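One concrete way estimated uncertainty comes out too low, sketched on simulated data (my own toy example, not anything from the paper): an ordinary least-squares trend fit that ignores serial correlation in the residuals reports too small a standard error; the AR(1) effective-sample-size adjustment below is one standard correction.

```python
# OLS trend standard error vs. an AR(1) effective-sample-size adjustment,
# on a simulated series with zero true trend but persistent noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, phi = 100, 0.6                      # series length, AR(1) coefficient
t = np.arange(n, dtype=float)

y = np.zeros(n)
for i in range(1, n):                  # AR(1) noise, no real trend
    y[i] = phi * y[i - 1] + rng.normal(0.0, 0.2)

res = stats.linregress(t, y)
resid = y - (res.intercept + res.slope * t)

r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]      # lag-1 autocorrelation
n_eff = n * (1 - r1) / (1 + r1)                    # effective sample size
se_adj = res.stderr * np.sqrt((n - 2) / (n_eff - 2))

print(f"naive trend s.e.          = {res.stderr:.4f}")
print(f"AR(1)-adjusted trend s.e. = {se_adj:.4f} (n_eff ~ {n_eff:.0f})")
```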
A question for Judith: How common is the use of robust statistical techniques in climate science? How sensitive are the results to corrupted or erratic data?
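As a small illustration of what the robustness question is about (synthetic data, invented numbers): the slope from ordinary least squares moves noticeably when one observation is corrupted, while the Theil-Sen estimator barely notices.

```python
# OLS vs. the robust Theil-Sen slope when a single data point is corrupted.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = np.arange(50, dtype=float)
y = 0.02 * x + rng.normal(0.0, 0.1, size=x.size)   # true slope 0.02
y[10] += 5.0                                        # one corrupted observation

ols = stats.linregress(x, y)
ts_slope, ts_intercept, ts_lo, ts_hi = stats.theilslopes(y, x)
print(f"true slope 0.020 | OLS {ols.slope:.3f} | Theil-Sen {ts_slope:.3f}")
```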
It’s great to see all these great papers coming out over the last week or so in tip-top journals just in time to be included in the latest IPCC report. Hopefully the deniers will finally shut up about how the consensus is gaming the system.
Actually it’s too late; the cutoff for publication was last April.
Howard said:
“Hopefully the deniers will finally shut up about how the consensus is gaming the system.”
___
Unfortunately, no they won’t. They have a religion to preach.
“just in time to be included in the latest IPCC report”
LOL They won’t be included. In fact, in some of the Climategate emails there are behind-the-scenes discussions about known errors in studies that were left in for adoption into AR4 — only to be revised once it was too late to make corrections to the IPCC final report.
You ought to read them some time.
Katz is an old colleague of mine at NCAR and an excellent statistician. However this bit from their paper is just wrong:
“policymakers need more accurate uncertainty estimates to make better decisions”
And the reason it is wrong is what?
totally agree, I was going to flag that but I’ve also made that point so many times (not that it seems to be sinking in).
Judith, perhaps lacking context here, but I don’t understand your and Roger’s point. As a long-term policy adviser, I would agree that “policymakers need more accurate uncertainty estimates to make better decisions.” You need to know the uncertainties both in the issue under consideration and in the proposed alternative approaches to dealing with it. How else can you determine an appropriate response?
If you know that uncertainties are large, then you would seek flexible responses which can easily be modified as circumstances or assessments develop over time, rather than adopting, e.g., a large-scale, inflexible response. You would probably want to avoid over-commitment until the uncertainties were reduced – if that approach had been followed in Australia, which adopted highly expensive and poorly-designed emissions-reduction policies, we’d be in much better shape now both economically and to cope with whatever emerges.
“We demand rigidly defined areas of doubt and uncertainty!”
― Douglas Adams, The Hitchhiker’s Guide to the Galaxy
+1
I must be missing something here.
It seems to me that policymakers are much better served if Climate Scientists state that sea-level increase will be 6 cm +/- 0 cm, in contrast to 200 cm +/- 194 cm. The associated mitigation tasks are significantly different, IMO.
This is possibly a discussion for other posts.
How about that in two decades that temperature might rise from 287.95 to 288.62 +/- 0.36 degrees?
actually the next post (coming tomorrow) is on sea level rise, stay tuned
Dan, no, what we want is the best estimates available including their degree of uncertainty. If the best estimate is -194 to +200, you’d want to allow for the higher outcome but be very flexible in your response, not over-committing as you have other priorities besides sea-level change. You would also, of course, want a time-scale, and would want to monitor what befell and whether estimates were revised. If you are told 0 to +6 cm, you’d say, no problem, forget it, move on.
“Dan, no, what we want is the best estimates available including….”
What we want is quantifiable (and thus, falsifiable) estimates. No more “quite, possible, maybe”.
“If it can’t be expressed in figures, it’s not science it’s opinion.” – Long, Lazarus
This seems to turn on a rather pedantic interpretation of the phrase.
If this is interpreted as saying policymakers would make better decisions if they had better information about the uncertainty I’d argue that was a tautology on most reasonable definitions of “better decisions”.
Pielke Jr. and by concurrence Curry:
However this bit from their paper is just wrong:
“policymakers need more accurate uncertainty estimates to make better decisions”
——————–
Hmmm, that ‘bit’ is at the end of a paragraph that starts
Uncertainty quantification is a critical component in the description and attribution of climate change.
Such an opening statement on the critical nature of uncertainty quantification suggests to me that Pielke and Curry here are a little quick on the draw or at least might have developed the thought a little more. And I thought ex cathedra assertions were passé.
From another perspective: Policy decisions are in reality complex, e.g., entailing multiple decisions within decisions. Then there are the attendant niceties of setting priorities, value of information, value of engineering, etc. which all benefit from improved uncertainty estimation. Per Faustino, have we a contextual disconnect?
Consider something like a ‘rate of inflation’ used in present value calculations – there is surely uncertainty there; but hey, can we just take some ‘stock’ values off the shelf and say we are only demonstrating things anyway? For seminars, classes, and whitepapers, yes. For policy, no. Context.
Then it seems odd, or even inconsistent, to me that in a discussion where some adaptation of experimental design into a decision process is seen as an improvement, the value of more accurate estimation of uncertainties can be so readily dismissed – ex cathedra. Again, context.
Heh, heh. Maybe we are advocates of including good statisticians except when our work/judgement is involved.
So I repeat Don Monfort’s question:
“And the reason it is wrong is what?”
It couldn’t houight!! (hurt :) )
These are good recommendations but be clear about what they involve. First, climate scientists, some of whom have just enough of a statistics background to get into trouble, will have to yield the floor to those whose expertise better enables them to make defensible uncertainty assessments. That will be a tough sell to those who reflexively react with indignation whenever non-climate scientists dare to tread on their turf. Second, the difficulty of the task is great, even for experts, and the difficulty is compounded when so much of the focus has gravitated to explaining extreme weather events (a regrettable diversion IMHO). Random deviations from centrality are not independent, nor is it easy to characterize the nature of the dependence. Extremes are highly sensitive to model specification, which makes this a likely candidate for an ill-posed problem.
The trouble is that people who really understand statistics and people who really understand climate tend to be disjoint sets. But you likely need to be good at both to do a first-rate job of assessing uncertainty.
McIntyre is good because he has a good grasp of statistics, and through his experience investigating and auditing the dodgy work of supposed climate experts, has ended up knowing more than most of them.
Good applied statisticians successfully collaborate with experts in other fields such as medicine without being experts in those fields themselves. I don’t see why it has to be different with climatologists.
tom, you are right about McIntyre, and see how popular he is with climate scientists.
“Use of state-of-the-art statistical methods could substantially improve the quantification of uncertainty in assessments of climate change.”
Definitely not. The focus needs to be on elimination of false assumptions underpinning inference; otherwise all you’re getting is statistically-paradoxical p-values & confidence intervals. A beautifully elaborate abstract framework should never be allowed to seduce where assumptions are unhinged from reality.
I agree. Without a proper physical framework, one is trying to make probabilistic assertions about a deterministic system that one doesn’t understand.
Paul, RC, true, if your basic understanding is false or inadequate, then it doesn’t matter how good your statistical techniques are. I’ve seen with econometric modelling how critical the assumptions made are; you can’t determine appropriate assumptions without thoroughly understanding the underlying economics. Since I was at LSE (1961-64), the emphasis in teaching economics has moved away from that critical understanding towards esoteric mathematical techniques. Which is why graduates I recruited reckoned they learned more in 3-4 months working with me than in 3-4 years at reputable universities. So you need to combine good understanding with good techniques, improving both through successive iterations.
Faustino wrote: “the emphasis in teaching economics has moved away from that critical understanding towards esoteric mathematical techniques. Which is why graduates I recruited reckoned they learned more in 3-4 months working with me than in 3-4 years at reputable universities.”
Exactly.
I was fortunate to have 2 strong mentors — one (a statistician) with a healthy respect for conceptual understanding of fundamentals (in contrast with rote application of canned procedures) and the other (a non-statistician) with exceptionally lucid field-based awareness of aggregation criteria & consequent philosophical implications for (naively attempted) stat inference.
Tsk ! Team and members – of – the – I – P – C – C.
Like Chief and Kim keep on remindin’ y’all, yer gotta
git the clouds right. As RC Saumarez says above,
the proper physical framework MATTERS. AND as
Faustino is tellin’ yer below, good statistical techniques
don’t matter zilch if the built – in – “ass -umptions”
don’t amount ter a hill of beans.
Tsk!
Attaching a tuned volatility rake and a simple tachometer to the engine lit up some dynamical attractors Beth — like a flashing Christmas tree.
(ozone supplement to go with cautionary p.3 paragraph)
The observations are robust even if the detail in the solar time series is thrown away by converting to simple “low” (-1) & “high” (+1) values.
Simple, rigidly-constrained observations that lay nakedly bare strictly false & strictly unphysical mainstream modeling assumptions that have been illegitimately asserted with undeservedly overbearing hubris for decades.
I just came across a relevant June 2010 e-mail to Roy Spencer:
“In other words, they assume causation in only one direction (feedback) is occurring”. I’m an economist, involved in computer modelling of economic issues since 1966 (off and on) and having drawn on a great deal of modelling in my policy work. The assumptions made are critical to the outcomes, so often when different modellers have arrived at different conclusions, that difference is totally dependent on the differing assumptions. That is, the model shows us only what was assumed, it doesn’t reveal the truth or otherwise of the assumed relationship. You can’t accept the conclusions of modelling in any field unless the relationships underlying the model are well understood and proven. The argument that only climatologists are able to evaluate AGW models is false, the statistical techniques and modelling issues are common to many fields, and the AGW case is dependent on computer modelling.
Observation of solar-terrestrial-climate attractors is robust against the following:
1) switching summary methods.
2) changing the resolution of the data (e.g. from monthly to annual).
3) substituting atmospheric angular momentum data for earth orientation data.
4) substituting the famously “ironed flat” TSI reconstruction for sunspot numbers.
5) converting sunspot numbers to simple “low” (-1) & “high” (+1) values. (The proposed comparatively tiny adjustments to solar records also have no effect.)
#5 is the clincher that underscores the physical importance of frequency shift.
Regards
I think you are missing the point that you can’t do what you want to do without incorporating state-of-the-art statistical methods on the ground floor. It is that understanding that helps model the system appropriately (and highlights the limitations in the data and the models).
@ HAS
Please carefully consider the possibility that maybe you’ve overlooked a few key things:
http://judithcurry.com/2013/08/30/pause-politics/#comment-372135
If you can reproduce Donner & Thiel’s (2007) Figure 4, you’re ready for more serious discussion of the points I raise here.
Even a crude back-of-the-envelope exploratory method applied loosely by investigator A will do orders of magnitude better than the most infinitely-exquisite stochastic method applied precisely by investigator B if cross-disciplinary investigator A is aware of attractor structure (via earth orientation data rigidly constrained by the law of conservation of angular momentum) while uni-disciplinary investigator B remains blinded by paradoxically-false inferential model assumptions.
(With literally orders upon orders of magnitude more climate data, investigator B could directly discover the attractor.)
We have a strategic opportunity to raise the level of cross-disciplinary discussion by an order of magnitude if people learn to independently reproduce Donner & Thiel’s Figure 4. I hope you will make the effort.
Regards
It seems a curious response to rebut a suggestion that state-of-the-art statistical methods can add value, by pointing to literature that (ahem) could benefit from more sophisticated statistical methods.
Both the improvement & extension (of Donner & Thiel’s (2007) methods) I’ve engineered are mathematically trivial. When you are willing and able to reproduce their Figure 4, there will be something worthwhile to discuss. By the way: I DARE you to TRY to publicly disparage their methods in a substantive manner.
I don’t think IPCC-style climate science is ready for the statistical methods that Judith is advocating. Too much is missing. Too much is based on the models themselves not on actual scientific evidence. Without any recognition of ocean oscillations in the models, what use is a statistical method? With the hydrological cycle reaction apparently underestimated by a factor of about three, what use is a statistical method? With no understanding of clouds, what use is a statistical method? And that’s just three examples from a very long list.
Mike Jonas | August 31, 2013 at 2:33 am |
Without statistical methods, how can one recognise ocean oscillations, or even tell that the hydrological cycle reaction is apparently underestimated by a factor of about three? Without statistical methods, how can you understand clouds?
HAS,
You need to distinguish between raw data exploration and statistical inference, WHICH DIFFER FUNDAMENTALLY.
Stat inference IS BASED ON MODELING ASSUMPTIONS.
Data exploration need not be.
The fundamental distinction is of crucial importance philosophically.
There’s a phenomenon known as statistical paradox. (Simpson’s Paradox is a special, well-known case of this.) It arises when key conditioning variables (usually referred to as lurking variables) are overlooked in summaries. It’s not rare in nature; on the contrary, it’s pervasive.
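For concreteness, a textbook-style toy example of the reversal Simpson's paradox produces (numbers invented): the within-group slopes are negative, but pooling the groups while ignoring the lurking grouping variable gives a positive slope.

```python
# Simpson's paradox in miniature: the pooled trend has the opposite sign to
# the within-group trends once the grouping variable is ignored.
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0]); y1 = np.array([4.0, 3.5, 3.0, 2.5])
x2 = np.array([6.0, 7.0, 8.0, 9.0]); y2 = np.array([9.0, 8.5, 8.0, 7.5])

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

print("within-group slopes:", slope(x1, y1), slope(x2, y2))          # both -0.5
print("pooled slope:       ", slope(np.r_[x1, x2], np.r_[y1, y2]))   # +0.75
```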
I’m not suggesting mainstream climate stat inference suffers from paradox. I’M DEMONSTRATING IT using rigidly-constrained observations and robust methods.
If you can figure out how to independently reproduce Donner & Thiel’s (2007) Figure 4 (click “similar methods” link top-right p.1 of STC101), we’ll be able to have an efficient exchange about this.
(If you deeply understand the simple, rock solid, powerfully beautiful concepts underlying their methods, you’ll be able to do the calculations in a few spreadsheet columns.)
PV
I’d just mildly make the observation that state-of-the-art statistical methods are as important to induction as they are to deduction, and help sensitize one to avoiding rock-solid claims when data mining (it’s much more like sand in this territory).
HAS,
You’re attempting to argue from the general to the specific (a logical fallacy).
I suggest you roll up your sleeves if your aim is to fairly comment with integrity on the specific context at hand:
Donner & Thiel (2007):
“In this letter, we introduce an advanced statistical framework that allows an appropriate analysis of these relationships.”
Suggestion: Reproduce their Figure 4. If you get that far independently, then I can efficiently walk you through the rest of the steps. I’m not here for games. Please either be serious or stop engaging me.
I was just commenting on your general claim that state-of-the art statistical methods definitely couldn’t improve the quantification of uncertainty in climate science.
I did also observe that better use of statistical techniques could improve the paper you later refer to, presumably as an example of one that proved your point (even though, as I’m sure you are aware, the specific can’t prove the general).
You should perhaps just ponder what the claimed significance of the correlation related to Fig 4 in that paper really means, and whether with better experimental design they could have done better.
HAS,
You’re not even making an effort to read carefully what I’m saying. I’m not referring to a “correlation” in figure 4. I’m asking people to learn the skills necessary to reproduce figure 4. If I see that they can get that far independently, I’m willing to guide them through application of the method in another context with different data.
I conclude that you are not serious enough to advance the discussion.
The method includes the statistical analysis (my original point) and until you understand that and take it into account, anyone following you into an analogous analysis would be wasting their time.
No. It’s a univariate series, so there’s no correlation. Your game is obfuscation.
I could be churlish and observe that Fig 4 deals with the Northern and Southern Hemispheres (i.e., two independent variables), but putting that aside, the statistical analysis deals with the performance of the series over different ranges across time. You can do statistical analysis with a univariate series in this way.
JC wrote: “The biggest area of ignorance is observations that we didn’t make in the past. “
The biggest area of ignorance (currently) is aggregation criteria.
Even with breathtakingly exquisite methods, all you guys will ever get is paradoxical estimates if you can’t or won’t take the time to get the data sorting (taxonomy) right. The implicit assumptions of spatial symmetry & temporal uniformity FAIL diagnostics (catastrophically).
Statisticians alone won’t be able to help you with the aggregation issue. Earth orientation experts on the other hand can help you.
This is a cross-disciplinary problem. If you leave out the earth orientation people, opting to just work with the statisticians, you’ll be pushing dead-ends for as long as you plow.
I sincerely hope no one wastes funding on such dead ends.
The statisticians know about the law of large numbers for sure, but without sensible interpretation of HARD constraints from the law of conservation of angular momentum (that only earth orientation experts appear to have), you guys are just wasting more time & resources while squandering away more public trust in the long run.
Infinitely theoretically-exquisite stat inference based on FALSE assumptions is a total waste of everyone’s time & resources. Why even think about going there?
I hope you will stop and think.
Collaboration is not an either-or proposition. Statisticians shouldn’t isolate themselves from the experts with whom they collaborate. But that being said, there will be times when they have to dig deep into their own area of expertise just as their cohorts have to do in theirs.
The Geological Society of Australia’s draft ‘Climate Change Statement’ (p6 here http://www.gsa.org.au/pdfdocuments/publications/TAG_165%20TAG.pdf) is a cautious, conservative statement.
What a contrast between this and the strong advocacy statements by the Royal Society, National Academy of Sciences, American Geophysical Union and Australian Academy of Sciences.
Why don’t the IPCC and the various science bodies adopt similarly conservative wording for their statements?
Would the IPCC and the other bodies be more effective if they did?
“…is that climate science would HUGELY benefit from greater incorporation of statistical expertise.” – Judith Curry.
This must have been brought up before: given the consensus situation (which is another topic), bringing in more scientists from other fields to work on the issue, even in limited ways, might lessen consensus tensions. A lateral movement drawing on existing scientists may help.
“[…] the probability distribution of temperature is becoming more skewed towards higher values. But techniques based on extreme value theory do not detect this apparent increase in skewness. Any increase is evidently an artefact of an inappropriate method of calculation of skewness when the mean is changing.”
Whoever wrote this is exposing their lack of conceptual depth & independence.
Specifically:
“But techniques based on extreme value theory do not detect this apparent increase in skewness.”
If so, then why not adjust the methods so they can (instead of falsely assuming that it can’t be done)?
The malleability of methods is limited only by vision.
A sound practitioner is able to engineer new methods from scratch as new types of patterns are encountered that are not detectable by conventional/existing methods that were designed to do something else.
Perhaps because the increase in skewness isn’t an artefact of nature?
My impression is that the paper is too optimistic on the power of more modern methods.
When issues like extreme events are considered, the amount of relevant data is unavoidably small – otherwise the events would not be called extreme. Trying to get hold of the tails requires assumptions, as data alone cannot tell much about them. It’s easy to agree that the tails tend to be fatter than Gaussian tails, quite possibly close to some power law. The bad news is that rather small changes in the parametrization of the tails – or equivalently some other rather small changes in assumptions – may influence the outcome drastically. As long as we are looking at variability not understood well theoretically, the limits of what statistics can tell are not so favorable.
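Pekka's point about tail sensitivity can be made concrete with a small sketch (parameter values illustrative only): modest changes in the GEV shape parameter barely move the median but move the 100-year return level a lot.

```python
# Sensitivity of a high quantile to the GEV shape parameter xi: the median
# barely changes while the 100-year return level shifts substantially.
from scipy import stats

for xi in (0.10, 0.15, 0.20):          # modest changes in the tail index
    c = -xi                             # scipy shape convention: c = -xi
    med = stats.genextreme.ppf(0.50, c, loc=50, scale=10)
    rl = stats.genextreme.ppf(0.99, c, loc=50, scale=10)
    print(f"xi = {xi:.2f}: median ~ {med:.1f}, 100-year level ~ {rl:.1f}")
```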
Clear errors of statistical analysis can certainly be avoided. Some of the sentences from the paper refer explicitly to the paper of Hansen, Sato, and Ruedy, that I really don’t like (and told that strongly enough here), and that’s not the only example.
“My impression is that the paper is too optimistic on the power of more modern methods. ”
I read it much more as a reminder of how rudimentary much of the statistical analysis used in climate science is.
The main point is that if you used better statistical analysis you wouldn’t claim much of the rubbish that currently seems to pass muster.
The recommendations give me a different impression, some other sentences say that even more directly.
Why not try RADAR…
http://www.wunderground.com/news/mega-canyon-bigger-grand-canyon-discovered-under-greenlands-ice-sheet-20130830
O’-right. Imagine a river flowing once, now under a mile or more of ice.
& even more cold stuff on the way for tomorrow too.
NASA… uses the old, slow radar.
“Using data collected over the past few decades by NASA…”
WHO knew.
‘According to Fig. 5, a series of intense El Nino events (high red color intensity) begins at about 1450 BC that will last for centuries. In that period normal (La Nina) conditions have but disappeared. For comparison, the very strong 1998 El Nino event scores 89 in red color intensity. During the time when the Minoans were fading, El Nino events reach values in red color intensity over 200.’ http://www.clim-past.net/6/525/2010/cp-6-525-2010.pdf
http://s1114.photobucket.com/user/Chief_Hydrologist/media/ENSO11000.gif.html?sort=3&o=132
Such persistent and intense El Niño periods would certainly devastate modern Australian agriculture. Luckily – it seems that a return to dominant La Niña conditions is more likely and the US will be in centuries-long drought.
The real point however is that historic extremes go far beyond what was experienced in the 20th century. A starting point would be to document the limits of natural variability – to evolve impact analysis scenarios – and simultaneously enhance the resilience of communities globally to climate extremes. In this, anthropogenic climate change seems more of a distraction than otherwise. In a coupled, nonlinear system mode shifts proceed as emergent properties of internal system reorganisation. It is utterly unpredictable despite the notions of ordered forcing.
Michael Ghil schematises the differences – http://s1114.photobucket.com/user/Chief_Hydrologist/media/Ghil_sensitivity_zps369d303d.jpg.html?sort=3&o=47 – real climate is chaotic and random.
The answer for climate sensitivity is for instance…. wait for it… γ in the linked diagram.
http://s1114.photobucket.com/user/Chief_Hydrologist/media/Ghil_fig11_zpse58189d9.png.html?sort=3&o=0
http://www.atmos.ucla.edu/tcd/PREPRINTS/Math_clim-Taipei-M_Ghil_vf.pdf
Now we need 1000s of times more computing power to find out what the question is.
A sensitivity of γ might certainly include 1 degree or 6 degrees. However, it seems credulous in the extreme to suggest that this is a definitive range.
‘In each of these model–ensemble comparison studies, there are important but difficult questions: How well selected are the models for their plausibility? How much of the ensemble spread is reducible by further model improvements? How well can the spread be explained by analysis of model differences? How much is irreducible imprecision in an AOS?
Simplistically, despite the opportunistic assemblage of the various AOS model ensembles, we can view the spreads in their results as upper bounds on their irreducible imprecision. Optimistically, we might think this upper bound is a substantial overestimate because AOS models are evolving and improving. Pessimistically, we can worry that the ensembles contain insufficient samples of possible plausible models, so the spreads may underestimate the true level of irreducible imprecision (cf., ref. 23). Realistically, we do not yet know how to make this assessment with confidence.’ http://www.pnas.org/content/104/21/8709.full
Uncertainty at this time is absolute. Julia Slingo and Tim Palmer put it like this.
‘Figure 12 shows 2000 years of El Nino behaviour simulated by a state-of-the-art climate model forced with present day solar irradiance and greenhouse gas concentrations. The richness of the El Nino behaviour, decade by decade and century by century, testifies to the fundamentally chaotic nature of the system that we are attempting to predict. It challenges the way in which we evaluate models and emphasizes the importance of continuing to focus on observing and understanding processes and phenomena in the climate system. It is also a classic demonstration of the need for ensemble prediction systems on all time scales in order to sample the range of possible outcomes that even the real world could produce. Nothing is certain.’
By ensemble they mean perturbed model ensembles – something still in infancy.
The issues around the quantification of uncertainty are overshadowed by the lack of good hypotheses and good experimental design in climate science. Many recent papers have been much better in this respect and the quality of the science has been much improved, which is most pleasing to see. Now if we can all deal better with our own confirmation biases the standard of debate will rise exponentially!
It’s certain that climate will keep changing BUT there isn’t such a thing as GLOBAL warming.
and in this I agree 200% with the paper
That confuses me. But then, I have not read the whole paper. If it comes out from behind the paywall, could you please share it?
The hardest uncertainties to quantify are unknown unknowns. So even really good uncertainty estimates tend to be overoptimistic. This is a double-edged sword, of course.
Matt, the statement that confused you was Judith’s agreement “that climate science would HUGELY benefit from greater incorporation of statistical expertise.” Given the heavy use of statistics in climate modelling, and frequent criticisms of poor techniques used in climate modelling, I’m surprised at your response.
My confusion arose from Pielke Jr and Judith agreeing that the paper is “just wrong” in saying that “policymakers need more accurate uncertainty estimates to make better decisions.” Whatever reservations might be held about the quality of those estimates, as a (past) policy adviser I would consider having the best estimates crucial, as opposed to being given projections with no indication of their uncertainty. (cf my comment at 8.42 pm 30/8 above)
In trying to quantify uncertainty you are manufacturing more bogus certainty. Which was the problem in the first place.
mosomoso, if you don’t accept the basis of the models, then, yes, uncertainty estimates are irrelevant. If you are making decisions based on the models, then they are relevant, though, of course, if the models are not fit for purpose then your decision-making basis would only apparently be better if you had uncertainty estimates.
Good point, Faustino. I accept that there is a relationship between a Dinky Toy model of a Lamborghini and an actual Lamborghini. The relationship of a climate model to climate is similar, though much weaker. Should someone wish to somehow measure that weakness, or “uncertainty”, I quite understand. We all need our hobbies! Someone who collects compressed tea and follows St. George can hardly look down on other people’s enthusiasms.
See my comment below. In a sense, statistics is a post facto discipline. Lack of skill and fitness for purpose in numerous other disciplines sabotages the models (not to mention reliance on them) ab initio.
It is gob-smacking that the Jackasses of all Sciences, Masters of None who call themselves Climate Scientists dismiss and disregard input and critiques from the real disciplines they presume to merge and meld.
On the one hand, this assessment banks on “the well-developed statistical theory of extreme values”.
On the other, the JC & PW book “Thermodynamics of Atmospheres and Oceans” (p. 357) says: “Applications of this type of linear feedback analysis have been made to the climate system, justified by considering only small perturbations to the radiative flux and surface temperature.”
So then, Judith, are you sure that you agree 200% with this assessment?
I am writing an essay about this. Maybe scientists can find in it all the common sense that has been lost.
The relative lack of this expertise is one factor that contributes to overconfidence in the IPCC conclusions.
The relative lack of multiple areas of expertise is one factor that contributes to overconfidence in the IPCC conclusions. There, FIFY. The list includes mathematics, physics, biology, modelling, programming, and others.
When uncertainty is already 100% – because some things are totally unknown – how can “better and more modern statistical methods of calculating uncertainty” improve the assessment? Can they calculate uncertainty to be 200%? Is that more useful to policymakers than 100%?
The problem is not lack of adequate methods of calculating uncertainty, the problem in climate science is one of willfully denying and intentionally ignoring the uncertainty.