by Judith Curry
I am starting to see some encouraging signs that people (including the IPCC) are paying more attention to the uncertainty issue as it relates to climate change. Nature has an editorial on this issue that summarizes the situation as:
IPCC members last week considered the best way to quantify uncertainty. They are not alone in needing to do so — the media must also take a firm line when it comes to scientific reporting.
The public seems to want and expect an acknowledgement of uncertainty. If there is anything to be gleaned from the silly Scientific American survey, it is that uncertainty scores big:
1. Should climate scientists discuss scientific uncertainty in mainstream forums?
- No, that would play into the hands of the fossil fuel lobby: 3.0%
- Yes, it would help engage the citizenry: 90.3%
- Maybe, but only in serious venues: 6.7%
Well, this survey has attracted a self-selecting sample. But for reference, 34% of the respondents said we should do something about climate change (the implication being that a majority of these respondents also voted for uncertainty). Only half of these respondents (17.5%) thought the IPCC was an effective group of government representatives, scientists, and other experts (with 81.8% thinking it was a corrupt organization prone to groupthink with a political agenda). My interpretation of this is that besides being an essential element of science, acknowledging uncertainty increases the public credibility of the science and the scientists. Overconfidence comes across as selling snake oil.
I would like to think that my arguments are helping to elevate this issue in importance, but you can never be sure how many people are actually reading the blog (I’ve no idea how to relate individuals to the number of blog hits) or who is actually reading it other than the people who post comments. Well, at least one person on the “Hill” has been reading the blog: a congressional staffer who contacted me about possibly testifying in a possible hearing (more on that if/when it materializes).
From my perspective, the most interesting (and potentially important) development on the uncertainty front is the preparation of a special issue in the journal Climatic Change (founding editor Steve Schneider) entitled Framing and Communicating Uncertainty and Confidence Judgments by the IPCC. I have been invited to submit an article, and I have accepted.
So I am submitting an article to a mainstream journal criticizing the IPCC (in my invitation, it was noted that they expect me to submit a critical paper). I view this as strong evidence that I am not an apostate: I am working within the system to try to change the system. But perhaps my assertion should be checked by Michael Tobis with an Italian flag analysis, supervised by Keith Kloor.
This thread is an opportunity to extract and summarize the highlights of previous uncertainty threads, and make some suggestions for what I might include in the paper (and bring newcomers to Climate Etc. up to speed on what has been our major topic of discussion).
Background
IPCC’s foundation for characterizing and communicating uncertainties and confidence levels is described by Moss and Schneider (2000). The “Guidance Paper” by Moss and Schneider recommended steps for assessing uncertainty in the IPCC Assessment Reports and a common vocabulary to express quantitative levels of confidence based on the amount of evidence (number of sources of information) and the degree of agreement (consensus) among experts.
The actual implementation of this guidance in the AR3 and AR4 WG1 Reports focused more on communicating uncertainty rather than on characterizing it (e.g. Peterson, 2006), adopting “judgmental estimates of confidence” whereby a single term (e.g. “very likely”) characterizes the overall confidence. Since physical scientists generally prefer to consider uncertainty in objective terms, why did the WG1 authors choose the subjective perspective, or judgmental estimates of confidence, and focus mainly on communicating uncertainty rather than evaluating it? Peterson (2006) suggests that lack of time given other competing priorities and the simplicity of using a scale in a table rather than a more thorough examination of uncertainties enabled the assessors to continue with only a minimal change to their usual working methods. Defenders of the IPCC uncertainty characterization argue that subjective consensus expressed using simple terms is more easily understood by policy makers.
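For reference, the AR4 guidance maps the likelihood vocabulary to probability ranges roughly as follows (my paraphrase of the guidance note):
- Virtually certain: greater than 99% probability
- Very likely: greater than 90%
- Likely: greater than 66%
- About as likely as not: 33% to 66%
- Unlikely: less than 33%
- Very unlikely: less than 10%
- Exceptionally unlikely: less than 1%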
The recommendations on characterizing and communicating uncertainties made by the IAC that recently reviewed the IPCC were:
“All Working Groups should use the qualitative level-of-understanding scale in their Summary for Policymakers and Technical Summary, as suggested in IPCC’s uncertainty guidance for the Fourth Assessment Report. This scale may be supplemented by a quantitative probability scale, if appropriate.”
“Chapter Lead Authors should provide a traceable account of how they arrived at their ratings for level of scientific understanding and likelihood that an outcome will occur.”
“Quantitative probabilities (as in the likelihood scale) should be used to describe the probability of well-defined outcomes only when there is sufficient evidence. Authors should indicate the basis for assigning a probability to an outcome or event (e.g., based on measurement, expert judgment, and/or model runs).”
“The confidence scale should not be used to assign subjective probabilities to ill-defined outcomes.”
“The likelihood scale should be stated in terms of probabilities (numbers) in addition to words to improve understanding of uncertainty.”
“Where practical, formal expert elicitation procedures should be used to obtain subjective probabilities for key results.”
At the recent IPCC meeting in Busan, the IPCC prepared a response to the IAC recommendations; the relevant statement regarding the treatment of uncertainty and confidence levels is:
The Panel decided to improve the IPCC guidance on evaluation of evidence and treatment of uncertainty. It is implementing the six recommendations in the IAC Review as part of a broader package of updates to procedures and guidance notes. The Panel noted with appreciation the Draft Guidance Note for Lead Authors of the Fifth Assessment Report on Consistent Treatment of Uncertainties (Appendix 4) and requested the Co-Chairs of Working Groups I, II and III to present the final document to the Panel at its next Session. The final document should provide more detail on traceable accounts, the evolution of the guidance since AR4 and explain how each of the six recommendations in the IAC review is addressed. The Panel urges the Co-Chairs to take any necessary steps to ensure that the guidance note is implemented in the development of its work.
In Appendix 4 to this document, further clarity is provided on their proposed methods for treating uncertainty in the AR5.
Well, these are steps in the right direction, but barely scratch the surface of issues that I and others have raised in Climate Etc.’s series of uncertainty threads:
- The uncertainty monster
- No consensus on consensus
- What can we learn from climate models?
- The culture of building confidence in climate models
- Do IPCC’s scenarios fail to comply with the precautionary principle?
- Overconfidence in IPCC’s detection and attribution. Parts I, II, III
Special issue in Climatic Change
The editors have invited papers on the topics of assessing the treatment of uncertainty in previous IPCC reports, the IAC critique, the new guidelines established for the AR5 reports, the social psychology of communicating and understanding uncertainty, how information on uncertainty is used in the policy process, perspectives from critics of the IPCC (i.e. me), other approaches for communicating uncertainty, perspectives from the user community, and the defense/security community perspective on exploring uncertainty.
Well, it still seems like the emphasis remains on communicating uncertainty rather than actually understanding, characterizing and reasoning about it. The perspective from the defense/security community should be really interesting; what a novel idea for the IPCC, that uncertainty should actually be explored.
My article has a word limit of 3000 words. I’ve already written well over 10,000 words at Climate Etc. on the subject, and I’m just getting started. I would appreciate your thoughts on ideas about what should be included in my paper, including arguments that you thought were effective (or not) in the previous uncertainty threads. Your help in identifying the most important comments in the previous threads (made by yourself or by someone else) would be most appreciated.
Moderation note: please keep this issue focused on the broader issues of uncertainty and the IPCC. Discussion about WHAT we are uncertain about (e.g. the temperature record, climate models, whatever) should be conducted at the Disagreement thread or a previous Open Thread.
Congratulations!
Sources of uncertainty
1. The temperature record has been pruned and homogenized. This is a source of uncertainty.
2. UHI (the urban heat island effect).
3. Coverage – for example in the Arctic and Russia, coverage is fairly low.
4. Related to coverage is interpolating from nearby temperatures to obtain estimated values for the missing areas.
5. I haven’t read about this one – but it seems to me that a proxy (for example O18 in ice cores), by its very nature, would be an average, and tend to dampen the cold and hot swings over some period of time. I would think this would have some effect on uncertainty – wholly apart from the uncertainty of the accuracy of the proxy itself. (A toy illustration follows this list.)
I am sure there are many more – but those occur to me at this time.
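On item 5, the damping effect is easy to see with a toy calculation. A minimal sketch, assuming the proxy behaves like a simple moving average of the true signal (an idealization; real proxy formation is messier):

```python
import numpy as np

# Toy illustration for item 5: if a proxy records a running average of
# the true signal, the amplitude of the swings is damped.
t = np.arange(200)
true_signal = np.sin(2 * np.pi * t / 50.0)          # idealized hot/cold swings
window = 25                                          # proxy averaging window
proxy = np.convolve(true_signal, np.ones(window) / window, mode="same")

print("true swing amplitude :", round(true_signal.max() - true_signal.min(), 2))
print("proxy swing amplitude:", round(proxy.max() - proxy.min(), 2))
# The proxy's range is visibly smaller: the averaging smooths away part
# of the real variability, a bias separate from measurement error in
# the proxy itself.
```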
Hi, Judith. I want to first offer generic thanks for being a level-head in a rising sea of hype and providing a forum for more detailed, technical, and open-minded discussion.
On the topic of uncertainty, especially as regards decision making under risk, I have myself been rather frustrated with imprecise reasoning with regards to uncertainty scaling.
Specifically, one will often hear from activists that “uncertainty is not your friend” as a rebuttal to proponents of inaction. They apply “risk management 101” — i.e., symmetric probability of a large effect (pick your poison — cryospheric loss, transient or equilibrium temperature rises, etc).
However, as someone experienced with a wide variety of applied risk management, I must say this is naive with regard to risk management “201”, if you will — that the shape of the distribution matters, not merely its location and scale. Specifically, if the 3rd moment is large, it could easily be that the upper bound from paleoclimatic considerations is much tighter than a lower bound from models with a variety of feedbacks and infelicities. In that sort of scenario, percentile points or VAR style risk analysis does not yield the same kind of decision surfaces as a naive symmetric (or even Gaussian) shape because the “upper limit” scales differently with the lower limit.
This rather long bombast could perhaps simply be briefly summarized as error bars are often asymmetric, this asymmetry can have huge decision repercussions, and finally the entire issue seems generally side-stepped from what I can tell looking into source documents.
This is perhaps a less substantial issue than the behavioral risk ideas you were also interested in, but I thought you might want to give a little attention to such matters in your upcoming writing if you hadn’t already planned on doing so.
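To make this point concrete, here is a toy Monte Carlo (all numbers arbitrary illustrations, not climate estimates): two distributions with identical mean and variance but different third moments put their 5th and 95th percentiles, the raw material of a VaR-style analysis, in quite different places.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Symmetric case: a Gaussian with arbitrary mean 3 and sd 1.5.
sym = rng.normal(3.0, 1.5, n)

# Skewed case: a lognormal rescaled to the SAME mean and sd, so only
# the shape (third moment and beyond) differs between the two.
raw = rng.lognormal(0.0, 0.8, n)
skewed = 3.0 + (raw - raw.mean()) * (1.5 / raw.std())

for name, x in [("symmetric", sym), ("skewed", skewed)]:
    p5, p95 = np.percentile(x, [5, 95])
    print(f"{name:9s} mean={x.mean():5.2f} sd={x.std():4.2f} "
          f"5th={p5:5.2f} 95th={p95:5.2f}")
# Identical location and scale, but the tail bounds differ -- so a
# decision rule keyed to percentiles reacts differently even though
# "risk management 101" sees the same mean and variance.
```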
cb, this is EXACTLY what I am talking about. I am personally not interested in the social psychology of communicating and understanding uncertainty; I am interested in understanding and characterizing uncertainty so that it can be used as information in risk management strategies. Also, next week I am starting a series on decision making under uncertainty; I am a novice on this subject and hope to get input from experts like yourself.
I would like to weigh in on cb’s comment. Not only is the approach just 101 with respect to scaling, there are other symptoms of rent seeking in the methodology of the risk assessments. This has to do with assumptions. For most risks, the precautionary principle (PP) is a rhetorical device, not a risk assessment and ranking. Using the PP skews the implementation of the methodology because of the assumptions inherent in the PP. When one has large uncertainty, assumptions may no longer be just one part of the matrix; they can determine both the ranking within the matrix and the ranking of viable alternatives. One example is the ranking of costs/benefits of known world problems, where climate change came in with the worst ratio of about 30. Take cap and trade as the proffered solution, and compare it to RPJr’s growing our economy as insurance to afford our eventual solution. The assumptions of the PP preclude a scenario where we address the 30, grow our economy, adapt and mitigate, and solve CO2 emissions. However, consider that these very items (solving the 30, mitigation, etc.) should or will be done. It is only with the PP, where rational ranking is thrown away for the end result, that RPJr’s suggestions cannot be chosen as the preferred solution. When you think of making a decision about risk management and climate change, the identity discussion at RPJr’s site is an excellent start. Then consider how one gets from there to what is being proposed, and ask yourself: does it make economic, risk-based sense?
“Also, next week I am starting a series on decision making under uncertainty”
If you regard decision making for mining projects under geological uncertainty (this is a VERY common situation) as relevant, I can provide many, many examples from reality – plus the actual consequences. There is also a huge literature on this.
Judith,
I am not sure what you want or what I am talking about.
I may have got the meaning of “risk management strategy” wrong, or at least a different interpretation. I see it as, ideally, an ongoing risk-reduction strategy, with risks in both directions: a strategy to steer a course that minimises these ongoing risks from year to year. Practically, I see it as any non-disastrous course that tries to maximise happiness, since people value present happiness and discount future risk.
You give us:
“I am interested in understanding and characterizing uncertainty so that it can be used as information in risk management strategies.”
This begs the question of what information the management strategies require, which is important if you have limited resources with which to provide and characterise information.
Some information whose uncertainty distribution we fret about out of scientific interest may not be all that relevant to the management strategy, while things that we do not worry much about may be of great importance to the strategy.
Here are a couple of somewhat daft analogies.
If it is a “making a cake” problem, a scientific approach might concentrate on precision in the ingredient amounts, which is the information input suited to scientific replication. A cook does not need anything like that degree of precision in the ingredients, but does need to know something about time constants, e.g. what is the sensible rate of adding ingredient B to A. The cook knows when to stop; that part is built into the management system.
Similarly, in order to know if an artillery shell can hit a target, one needs all sorts of precise information, i.e. artillery tables and atmospheric data, and still has considerable errors in delivery to worry about.
In order to know if a guided rocket can hit a target one needs to know rather different information, like its manoeuvrability, responsiveness, etc.
If we currently do not know which management strategies will work, it would seem sensible to try and test them.
Let us say we gave a model over to a management team and let them bring scenarios regarding the art of what is possible, politically, economically, etc. Give them feedback from the model and see if they can fly it without crashing it. Will the combined model/management system over-react and twitch erratically, will it be too sluggish and get into high-amplitude oscillations, or will it be sweet? Maybe it will take a few runs before the management system can adapt to the characteristics of the model and it can be flown successfully. If it can be flown, start throwing in black swans and see if it underperforms under CO2 threat compared to no threat (some black swans might always crash it). Start trying different models with different sensitivities and other factors; build a comprehensive management system that is not model-sensitive. Obviously it is the world that is going to have to be flown, not the models, but unless they are complete junk they should be able to at least dry-run our management strategies, and if the real world diverges then I guess the models will play catch-up.
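This “fly the model” test can be caricatured in a few lines. A toy sketch only (a one-box linear system and a policy that ratchets its lever in proportion to the gap; all gains invented): it reproduces the failure modes listed above, sluggishness at low gain and twitchy overshoot at high gain.

```python
import numpy as np

def run_policy(gain, steps=60):
    """Toy closed loop: a sluggish one-box system steered toward a
    target by a policy that ratchets its lever by gain * (gap).
    Nothing here is a real climate or economic model."""
    x, u, target = 0.0, 0.0, 1.0
    history = []
    for _ in range(steps):
        u += gain * (target - x)        # policy reacts to the current gap
        x += 0.5 * (u - x)              # system responds with a lag
        history.append(x)
    return np.array(history)

for gain in (0.1, 1.0, 3.5):
    traj = run_policy(gain)
    print(f"gain={gain:3.1f}  final={traj[-1]:5.2f}  "
          f"max overshoot={traj.max():5.2f}")
# Low gain: slow, sluggish approach. Moderate gain: settles reasonably.
# High gain: the combined model/management loop overshoots and rings --
# the "twitch erratically" failure mode.
```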
I do not trust the smorgasbord approach that the IPCC seems to favour, a wealth of scenarios and outcomes dealing with situations that we have no intention of experiencing.
I always seem to harp on about climate sensitivity being the last of our worries, but I expect, and sincerely hope, that it is. If we can fly the beast whether sensitivity is 2C, 4.5C, or even 12C, its value does not matter now, and hopefully may never be revealed.
To get back to the point, hopefully someone is sorting out what the input requirements of the management system are going to be and the precision required. To me it seems sensible to enquire as to the strategists’ requirements before rushing off and trying to fulfil them based on assumptions.
If we do not know what information, and to what precision, is required then that needs fixing.
In reality we are treating the problem in an ongoing piecemeal fashion; I do not see any real hope of us “committing” to undertake any action that is likely “to guarantee” that we don’t crash the planet. We have an ongoing and hopefully adaptive (not in the climate sense) approach to both mitigation and adaptation (in the climate sense).
I realise that none of this helps with quantifying the risks, but it is about quantifying what needs quantifying.
Alex
Alex, very well said, this is pretty much the theme of my decision making under uncertainty (dmuu) series. I think the scientists are asking the wrong questions at this point in order to provide useful information, because the science has been torqued into this particular direction by the UNFCCC/IPCC overemphasis on identifying a CO2 stabilization target.
Judith, I don’t know what simulation capability there is at Ga Tech, but Alex has an interesting proposition. At present, the EU has some cap and trade, and the US EPA is starting to address CO2 under Title V of the CAAA. Perhaps your class would benefit from working out scenarios such as the EU and the US implementing BAU, the EU and US invoking cap and trade, and the Copenhagen accord becoming international law. Then start looking at the strategies that Alex has pointed out, use the Kaya identity that RPJr employs, and start simulating world economies and what costs and benefits the 30 challenges represent, and which scenario ends up with a world economy that does not crash. Here is a paper that is a good read that highlights some of the discussion. http://dspace.mit.edu/bitstream/handle/1721.1/1587/Vienna.pdf?sequence=1
My bet is that RPJr is right that the iron law of economics will have to be accounted for before good policies can realistically be expected to fly mothership Earth.
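For readers unfamiliar with the Kaya identity invoked above, it decomposes emissions into population, affluence, energy intensity, and carbon intensity. A minimal sketch with invented round numbers (not a real scenario):

```python
# Kaya identity: CO2 = P * (GDP/P) * (Energy/GDP) * (CO2/Energy).
# All figures below are invented round numbers, for illustration only.

def kaya(pop, gdp_per_cap, energy_per_gdp, co2_per_energy):
    """Annual emissions in kg CO2 from the four Kaya factors."""
    return pop * gdp_per_cap * energy_per_gdp * co2_per_energy

# Hypothetical baseline: 7e9 people, $10k/person/yr, 5 MJ/$, 0.07 kg CO2/MJ.
baseline = kaya(7e9, 1e4, 5.0, 0.07) / 1e12    # convert kg to Gt
# Same world with the carbon intensity of energy halved:
decarbonized = kaya(7e9, 1e4, 5.0, 0.035) / 1e12

print(f"baseline:                {baseline:5.1f} GtCO2/yr")
print(f"halved carbon intensity: {decarbonized:5.1f} GtCO2/yr")
# The identity makes the trade-offs explicit: any emissions path is a
# product of population, affluence, efficiency, and fuel mix, which is
# why the "iron law" (people will not sacrifice affluence) bites.
```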
John, this is a really interesting suggestion. Unfortunately the only course I teach is the thermodynamics course. But I am in the early stages of proposing a new degree at Georgia Tech in Environmental Systems, and this would make the foundation for a very interesting course.
I agree with RP Jr that the Iron Law needs to be accounted for as a necessary condition, but it’s not sufficient: further, a convincing argument is needed for securing the common interest at the level where decisions are actually made. This is the political will piece. The global arguments being used won’t work.
That is one of the reasons I posted the link. I do not like the PP as usually applied. However, the paper I linked gives a good accounting of a successful approach and its reasons. Not that I necessarily agree with the claims as stated. But this accounting, as outlined, becomes part of the test for determining the usability of the policy outside of just economics. RPJr has a nice piece today that shows some of the problems with classical approaches.
http://rogerpielkejr.blogspot.com/2010/10/disasters-wanted-math-of-capitalizing.html I don’t know if you have spent time with WGII or WGIII, but the authors could use some training in communicating with the public, such as using a baseline $/decatherm or other standardizations, so that one does not have to run through 10 citations and a spreadsheet just to see whether a result is reasonable.
Good luck with your academic initiative. I do think that if we are going to become a global village in reality, we will need leaders who have been trained for it. Our history of wars, environmental damage, etc., indicates that “learning the hard way” may need to be relegated to the history bin along with slavery, social Darwinism, and other failed ideas.
And the Piltdown Man.
Annan suggests that your view (that surface temperatures show only a 70% warming signal, where the IPCC calls the warming unequivocal) is a bit extreme, and I agree with him. If you had to expand your uncertainty bands so much further away from mainstream opinion, you might want to make a stronger case than the WUWT crowd does.
From an old, retired Arctic hydrologist, whose philosophies are based on the premise that ‘If you can’t measure it, I don’t believe it’, the concept of assigning billions (trillions?) of dollars on climate mitigation based upon someone’s subjective feeling seems totally unreal. For me, best guesses and gut feelings should never be the basis for expensive decisions. I believe that is the fundamental reason for the scepticism of IPCC pronouncements, and might be an essential element to emphasize in your paper.
Well said Jack. There is no uncertainty. None of the models have been validated, and none of the output of the models ought to be given any meaning in physics. It is impossible to say what the effect of adding CO2 to the atmosphere is, using the “scientific method”
So what have you measured, Jack?
Have you looked at the current information available, and evaluated it by your philosophies, which surely must mean, ‘If you have measured it, you ought believe it?’
What do you believe about the changes in measures of Arctic and Antarctic ice mass, volume, extent, thickness, start and end dates of summer ice retreat, and similar measures for Arctic and subarctic permafrost and ground frost?
Isn’t a sceptic one who measures, and aren’t we more apt to say that one who neglects to measure is an ostrich?
Given the level of criticism of the measurement tools currently adapted to discussing climate, from the old and largely unimproved weather station model, you have to be aware of their shortcomings.
Wouldn’t your philosophies also inspire you to say that whatever we’ve got to measure, the equipment we’re using is not well-suited to the job and we’re clearly not going about the job of measurement like someone who means it?
I mean, something as important to you as to make it a philosophy, shouldn’t that be pretty core to your character and determine your actions?
At least enough to count and measure how many of the people out there are making pronouncements on their gut, as you appear to be doing, and how many have actually been measuring, as the IPCC have demonstrated that they do?
Bart R. writes “So what have you measured, Jack?”
Jack does not seem to reply. It is not what Jack has measured, but what the IPCC has not measured. The logic is that for a doubling of CO2, one first estimates the change in radiative forcing; then one estimates what this change does to global temperatures, without feedbacks; then one estimates the feedbacks. None of these quantities has been measured.
In particular, the amount global temperatures rise as a result of a change in radiative forcing, without feedbacks, can NEVER be measured, since any attempt to do so would be confounded by the feedbacks. This particular number ought to be abhorrent to all true physicists; a hypothetical, meaningless number that can never be measured.
Until you have established that adding CO2 to the atmosphere causes a significant rise in global temperatures, every other measure – Arctic sea ice, hurricanes, etc – could easily be caused by natural changes. There is nothing to connect a change in CO2 level to any weather or climate phenomena.
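For concreteness, the estimation chain described above, in its standard textbook form, looks like the following. The log-forcing fit and the Planck response value are conventional model-derived numbers, not direct measurements, which is the commenter’s point:

```python
import math

# Conventional chain: log forcing fit (Myhre et al.) plus the Planck
# (no-feedback) response. Both are model-derived conventions.
def forcing(c, c0):
    """Radiative forcing in W/m^2 for a CO2 change from c0 to c (ppm)."""
    return 5.35 * math.log(c / c0)

PLANCK_RESPONSE = 3.2          # W/m^2 per K of surface warming

dF = forcing(560.0, 280.0)     # a doubling of CO2
dT_no_feedback = dF / PLANCK_RESPONSE

print(f"forcing for 2xCO2:   {dF:.2f} W/m^2")         # ~3.7
print(f"no-feedback warming: {dT_no_feedback:.2f} K")  # ~1.2
# Water vapor, cloud, and ice-albedo feedbacks then multiply this
# number -- and that no-feedback intermediate is exactly the quantity
# argued above to be unmeasurable in isolation.
```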
“This particular number ought to be abhorrent to all true physicists; a hypothetical, meaningless number that can never be measured.”
Yes. Absolutely. Which is why no true physicist pays for gasoline, or taxes, as they involve such meaningless figures without any basis that can be resolved by physical experiment.
Your suggested Existential approach notwithstanding, some physicists manage to descend into the real world of concerns long enough to come up with clever and acceptable proxies for incommensurables. No physicist ever measured the laser effect before the first laser was built. By your logic, Albert Einstein is no true physicist.
One need establish a relationship between CO2 and temperature to establish a relationship between CO2 and any other measure? That seems a bit of a needlessly strict condition.
Ocean acidification, for example, seems all but entirely independent of a proven temperature relationship.
Even temperature dependent functions may have characteristics that allow for cunning devices to connect human cause with observed effects while bypassing the question of temperature entirely, for example using arguments about the properties of adding external forcings on the behavior of complex systems, compared to systems without addition of forcings.
While you have an opinion, and I don’t mean to disrespect it, the opinion does not flow from the logic you present.
At the same time, your opinion dismisses what measurements there are with a blithe handwave, which is contrary to the spirit of science.
Another example of a meaningless and unmeasurable number in physics would be the absolute velocity, relative to the background of space. Michelson and Morley *did* measure it, and then Einstein came along and told them that it was meaningless. But surely a clever physicist could come up with an acceptable proxy?
There’s no problem in principle with relating CO2 to effects on temperature – the problem is with the intermediate idealisations. Although I’d note that the result might not be a function at all – if there are internal forcings, there may be no single-valued map from CO2 level to climate to find.
Exactly!
I think the physicist was Newton, for the range of cases where the figure was meaningful (one, trivial).
But then it follows that we don’t so much care about CO2’s effects on temperature as external forcings’ effect on climate.
So long as there is a detectible external forcing, perturbation of the unpredictable system.. oh look, a shiny bangle!
Bart R. writes “One need establish a relationship between CO2 and temperature to establish a relationship between CO2 and any other measure? That seems a bit of a needlessly strict condition. ”
I know Judith does not approve of talking politics in a scientific discussion, but I cannot help it in this instance. I require the very strictest of science, and particularly physics, if our politicians are going to spend billions of dollars on trying to “decarbonize” society. This is partly my tax dollars, and I really do object strongly. There is no proper physics to show that as you add CO2 to the atmosphere, temperatures will rise catastrophically. That is enough for me to say to my Member of Parliament, STOP!!!!!
In science removed from politics, scientists do whatever they want. But when science meets politics, as CAGW has, then the rules change. That is the issue which you are avoiding.
I was speaking of logic, not politics.
A needlessly strict condition in logic is one that has additions put on it where no such restrictions are necessary to obtain the same result, or where faulty conditions are put on the problem that make obtaining a true result impossible.
Saying, “less than four, and also less than ten,” is needlessly strict. If you have less than four, you already have less than ten. The added stipulation is redundant and so inelegant.
Saying, “list every letter of the alphabet; you have fifteen characters to do it in,” is needlessly strict.
Statements that carry around the baggage of needless conditions and strictures hamper analysis and waste my valuable time. They’re a source of error and a favorite tool of sophists.
So, when I say ‘needlessly strict,’ what I mean is biased, prejudicial, error-generating, skewed, senseless, ill-considered, thoughtless, improperly formed, or just plain wrong.
Take your pick.
It’s your tax money.
Jack, have you seen these measurements?
http://www.arctic.noaa.gov/reportcard/greenland.html
Use of the fallacy of appeals to fear by minimizing the uncertainties in global warming science is by far the most commonly used fallacy I have encountered in my personal search for truth about AGW. Not only is this true of the media, journalists, environmental groups and politicians who are trying to compel me to agree with them about AGW, but also, and sadly, scientists and scientific institutions are huge contributors to this problem. But for those like me who know a fallacy or two, no amount of fear mongering on anyone’s behalf will make their claims any more valid than they are or are not. I applaud what you’re doing. It is long overdue.
Re: “… Use of the fallacy of appeals to fear by minimizing the uncertainties in global warming science….” It also supports adoption of the “Precautionary Principle” by the EU and IPCC SAR WG III.
Sunstein, Cass R. 2005. Laws of Fear: Beyond the Precautionary Principle. Cambridge, UK: Cambridge University Press.
Sunstein, Cass R. 2003. Beyond The Precautionary Principle. Working Paper #38. Public Law and Legal Theory. University of Chicago, January. http://www.law.uchicago.edu/files/files/38.crs_.precautionary.pl-lt.pdf
Dr Curry:
The article above is another in your series of welcome, thought provoking and interesting presentations on this blog.
I have only one quibble. You say;
“Overconfidence comes across as selling snake oil.”
That should read;
“Overconfidence in future projection is selling snake oil.”
Statements such as “We do not know, but we think …” are much more convincing than assertions that “the Greenland ice sheet will collapse in X years if present trends continue”.
The public knows such overconfidence is “selling snake oil”, and that knowledge is a major reason for the collapsing public acceptance of AGW ‘projections’. The overconfidence provides a hint of doubt for most people, and that doubt seems to be confirmed when astonishing errors such as the Himalayan glacier fiasco come to light.
IPCC Authors would do well to remember their classics; Nemesis followed Hubris.
Richard
Well, since my comments are apparently very on point, a few stylized facts may be illustrative, and a few issues pertinent. If the distribution is lopsided as described, a tight upper bound and a soft lower bound, then as the scale or variance or “uncertainty” increases, you could actually get a negative average. That’s right — the mode and median might be for positive delta temp, but the expected value might be negative. I do not think climate variables have such extreme skewness, though in such a scenario “utility optimizers” might see close to zero expected shift and little reason for action while “risk optimizers” might see large worst-case bounds and a lot of reason.
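The “positive median, negative mean” possibility is easy to exhibit with a toy two-outcome mixture (numbers invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# 90% chance of a modest +1 outcome, 10% chance of a severe -15 outcome.
draws = np.where(rng.random(n) < 0.9, 1.0, -15.0)

print("median :", np.median(draws))            # +1.0, the typical outcome
print("mean   :", round(draws.mean(), 2))      # about -0.6, the expected value
print("5th pct:", np.percentile(draws, 5))     # -15.0, the worst-case bound
# A "utility optimizer" watching the mean sees roughly nothing worth
# acting on; a "risk optimizer" watching the 5th percentile sees a
# disaster -- the divergence described above.
```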
I read Mike Hulme’s whole book on disagreement and this idea that a population has a range of risk aversion doesn’t come up, but seems very relevant to me. Historically, over thousands of years, humanity has been highly under-insured and under-prepared for climate risk of the ordinary natural variation sort. If one views this as a revealed preference for climate risk, it could be that nearly 100% allocation of resources to “forecasting and adaptation” is appropriate, at least for specifically climate spending. Obviously energy is pricey and there is a lot of independent non-climate argument for cheaper supplies.
Another issue is estimating skewness or moments, which is really quite tricky if the upper and lower bounds come from entirely different sources, which is my understanding of the situation. Specifically, model variation ranges can easily be symmetric, heavy-tailed, or even skewed the other way toward large moves, but these regimes are precisely the poorly calibrated regimes with little validation from empirical data. So one might then get into a credibility fight between paleo-estimators and modelers.
I personally think a reasonably rational subset of the population observing these things has internally estimated a negative skewness — the raving Venusian disaster scenarios have gotten more and more panning, and smaller moves more and more attention. So, in psychological accessibility terms, a lot of people have probably picked up on this and lean toward inaction through a non-conscious skewness analysis.
A final point for this comment is actually about risk normalization or relative risk. The global shift in climate variable X, according to various analyses that I have seen, is actually small if you divide by the scale of regional variation over century-plus timescales. Put another way, the added insurance for a global shift is small compared to the total insurance one ought to be paying. This seems to be true of temperature, precipitation, and even perhaps most surprisingly of sea level. I looked at some tide gauge data for the US, and local in time and space variations driven by undulation of the oceans can actually exceed 1 meter – some multiple of the expected global shift (not the diurnal tidal variation itself, but the equilibrium). Needless to say, most places are not prepared for such undulations at all. It can be disingenuous to prepare for and attend to a smaller (and more theoretical) global shift in variable X when one is wholly unprepared for a much larger and less theoretical local variation, albeit one which is harder to forecast.
The more certain global risk may afford risk aggregation or pooling, but the world does not generally have extant economic systems for doing that sort of thing, and it may be unreasonable to expect it. If some chaotic undulation of sea level had wiped out the Maldives 50 years ago, this kind of issue might be more salient in both the popular and economic imaginations. How much is any one player really obligated to “pay” for risky settlement decisions of others? Should the scale be normalized by the fraction of global risk, or should the fraction relate to the increment of local risk? Radically different wealth transfer properties would hold depending on that answer. There was a Supreme Court decision about this a couple of years back that I believe did not really address this important local-risk question. A less climate-y but analogous question might be: if you build a poorly constructed city on top of a fault line in Haiti, yes, you can hope for generosity and good will from the world, but can you negotiate or demand it at the UN? For a _single_ disaster the answer is easier than for 100 to 150 years of little disasters.
Thank you, very helpful. Re the distribution issue, I have been arguing (see the thread on scenarios) that this is too uncertain for a pdf, that a bunch of possible scenarios (beyond the IPCC’s range on both ends) is more appropriate, and that we have failed to imagine the possible black swans. If you haven’t read that thread, I would appreciate your feedback (either on this thread or the other one).
cb
One more climate-y case is ozone depletion. There were a lot of ecosystem concerns about increasing UV but the concern that carried the day was UV and non-melanoma skin cancer. Reagan’s Council of Economic Advisers projected a toll of 5,000,000 US non-melanoma skin cancer deaths between present and 2165 (I think). This came to 10 trillion dollars. Protection of ozone layer has cost a tiny, tiny fraction of that and it involved international treaties and regulations and transfers of money from developed to developing countries (all anathema to the ideological crowd). So white people with an aversion to putting on sun screen drove protection of the ozone layer (the projections were driven by the incidence of non-melanoma skin cancer as a function of latitude in the US, the rather small fatality rate and pretty good projections for UV and ozone loss – and oh yes, Reagan’s skin cancer on his nose).
The very real ecosystem concerns were not calculable and were of most concern to many observers. So, it was not really sustainability that led in the end to success. Tanning was the driver. Or golf without a hat.
We have a similar problem with climate. Many studies are showing that the lowest ends of the warming projections are the least likely. But the absence of historical aerosol data will make attribution arguments about the middle of the last century fraught with difficulty. The lack of certainty about the carbon cycle and economic cycles will make projections far from current conditions very difficult. The satellite measurements of incoming and outgoing radiation are showing that CO2 has added enough energy to drive lots of changes. This is probably why Lindzen is creeping up on the IPCC’s lower bound on sensitivity. (He has gone from 0 to about 0.8.) Although we are unable to track the added energy with the certainty that we would like (see Trenberth’s lament), the First Law and radiation measurements have pretty much killed the “wait for a decade and it will cool again” argument. It is warming and will continue to do so. Reasonable projections of the future may come from extrapolating the last 50 years, until you get beyond the current bio-geochemical regime. If we knew where that tipping point was we would be much better off. Look at the physically comprehensible, reasonable trends that have already shown themselves. Strengthen the Hadley circulation and see more drought in Africa and S. Europe. Watch habitats and plant zones migrate poleward. It has happened and will continue and intensify. That is why the Director of National Intelligence’s report (under Bush) on the impacts of climate change is quite interesting – look for climate refugees increasing in the next 20 years.
But the warming will not kill a bunch of rich folks and ecosystem costs are difficult to substantiate. So the cost-benefit analysis might not drive changes needed for sustainability. The problems with waiting until problems get more evident are the accumulation of CO2, its long lifetime and the ocean storage and release. Chemistry shows that the peak CO2 will be held for a thousand years. So the time scales are getting up to the times needed to melt Greenland. That is sustainability – but too long term for the next election cycle.
Your discussion suggests that we may just screw this up. Humans have chosen ideology and dogma over commonsense in the past (God would not do this to us. The Invisible Hand would not allow it). It is called history. This generation is imposing increasing climate costs on many of the poor (whose suffering does not drive cost-benefit studies much and who can not afford adaptation) and stealing margin for error from the future.
We may not prove it in the future tense.
Regards,
Chuck Wilson
“This came to 10 trillion dollars. Protection of ozone layer has cost a tiny, tiny fraction of that and it involved international treaties and regulations and transfers of money from developed to developing countries (all anathema to the ideological crowd).”
How do you figure protection was a tiny fraction? You’re talking 10 trillion over nearly 200 years. My industry has spent hundreds of billions on refrigerant and equipment changes to protect against an ozone hole for which we had minimal history and data; we find that the hole has probably always been there, that closing the hole will lead to more warming, and that the base calculations were not verified.
D’Aleo, CCM, Joseph. 2011. ‘Ozone hole’ shenanigans were the warm-up act for ‘Global Warming’. Scientific Blog. ICECAP. January 7. http://icecap.us/index.php/go/joes-blog/ozone_hole_hoax_was_the_preview_for_global_warming/
Unless uncertainty bands on future projections include negative temperature changes (cooling), then they are biased and meaningless (which they are anyway because the models are not validated).
Trenberth in an interview with IEEE Spectrum would seem to still disagree on how to convey uncertainty.
Love this part “..Scientists almost always have to massage their data, exercising judgment about what might be defective and best disregarded…”
Mr “the data must be wrong” himself. If you suspect the data, find out why it may be wrong. You do not just sweep it under the rug and hope no one notices.
How to Fix the Climate-Change Panel
Questions for climate modeler and IPCC insider Kevin E. Trenberth
October 2010
http://spectrum.ieee.org/energy/environment/how-to-fix-the-climatechange-panel
Spectrum: It seems to me the most damaging thing about the disclosed e-mails was not the issue of fraud or scientific misconduct but the perception of a bunker mentality among climate scientists. If they really know what they’re doing, why do they seem so defensive?
Trenberth: What looks like defensiveness to the uninitiated can just be part of the normal process of doing science and scientific interaction.
Scientists almost always have to massage their data, exercising judgment about what might be defective and best disregarded.
When they talk about error bars, referring to uncertainty limits, it sounds to the general public like they’re just talking about errors.
——
The Trenberth interview is a classic example of what is wrong with the IPCC. People have been wondering who my closing message was intended for in the heresy post (“So how have things been going for you lately?”). I didn’t want to name names, but since they have come forward so beautifully to show that the shoe fits, Trenberth is one, and Mann, Ehrlich and Rahmstorf are others.
Further, the letter published by Mann, Ehrlich and Rahmstorf in Nature is a perfect example of the “merchants of doubt” meme that I referred to in the heresy post. They state “Nature should have pointed out to its readers that Greenberg has served as a round-table speaker and written a report for the Marshall Institute.” Attempting to smear someone who says something you don’t like by implying that they are in the pay of big oil or brainwashed by right-wing think tanks is the merchants of doubt meme.
It seems at this stage of the game that the UN is the wrong venue for climate science. This is why politics and the science have become so entwined. Remove the science from the UN and develop it, as well as any summaries of it, in a science venue, not a political one.
Judith
Congratulations for raising these uncertainty issues. Some thoughts as an outside observer of IPCC’s current climate “science”:
1. One underlying systemic uncertainty is that IPCC is making an argument from ignorance.
2. IPCC apparently assumes its de facto mandate is to identify catastrophic anthropogenic global warming, and to raise coordinated political action to avoid that by carbon mitigation (cap and trade).
3. It excludes ocean oscillations, solar modulation of cosmic rays impacting clouds, the planetary impacts on earth’s Length of Day, synchronizing effects etc.
4. It ignores the systemic UHI temperature errors and systemic “global warming” due to “homogenizing” (“filling in”) temperatures.
5. It “fits” (kludges) the late 20th century global temperature changes.
6. It “finds”/assumes large positive climate feedback for water vapor and clouds, ignoring reports of negative feedbacks and low climate sensitivity.
7. Consequently, it finds “incontrovertible” (>90%) anthropogenic CO2 driving climate change.
8. Its “gatekeepers” bias reports by cherry picking warming/harmful results, and excluding cooling/beneficial papers, as exposed by Climategate and found by NIPCC.
9. It projects fossil fuel use far above industry projections of recoverable light oil and coal, and all “peak oil”/”plateauing oil”, “peak coal” forecasts etc.
10. Consequently IPCC “finds” catastrophic consequences for such anthropogenic global warming.
11. Current global temperatures are running substantially below Hansen’s and IPCC’s projections.
12. IPCC’s projections fit very poorly when compared with subsequent temperature observations – systemically pushing to the extremes of, or beyond, IPCC’s uncertainty estimates.
13. A number of models by “climate realists” incorporating ocean oscillations, full solar impacts etc, find natural forcings dominate anthropogenic forcings.
14. IPCC has ignored / violated almost all principles of scientific forecasting for public policy.
15. IPCC has ignored/excluded the principles of double/triple blind testing requirements found necessary in public medical research to exclude researcher bias.
16. IPCC ignores/minimizes international cost/benefit rankings for global humanitarian projects such as the Copenhagen Consensus.
17. Arguments / advocacy focus on alarmist climate mitigation, with little serious attention to the more practical, less costly climate adaptation.
18. Identifying and eliminating such systemic biases requires reexamination from the foundations on up by an independent well funded “red team” with a specific mandate to address the full scientific and forecasting uncertainties, including exposing all systemic biases.
See previous posts for links or email for references.
Disclosure: I consider myself a pragmatic “climate realist”, having written a 330 page report on solar to mitigate climate change. I am working on fossil and solar fuels to tackle “peak oil”, “accommodate” global warming, and help alleviate 2/3rds world poverty.
David,
except that 2, 3, 4, 5, and 6 are simply factually wrong. Read IPCC AR4 WG1. None of the things you list is ignored. Maybe you do not like the way they are handled.
It’s a First Law problem. Radiation in, CO2 forcing, and radiation out are changing in ways that make it clear that CO2 and feedbacks trap more than enough energy to account for the internal energy changes we have seen (warmer oceans, warmer surface, ice melting). The satellite data show that. Given your resume, you understand the First Law.
Wonderful that you are working on solar. Carbon capture and storage may lead to accommodation of global warming. Natural gas may be a good thing in transition.
Regards,
Chuck Wilson
Judith,
Unfortunately, uncertainty about 100 year projections of global warming and the consequences thereof involves both “known unknowns” and “unknown unknowns”. I suspect that expert evaluation of uncertainty is strongly focused on the former and mainly ignores the latter. After all, experts are ‘experts’ based on an expectation of their understanding of the subject matter at hand. Experts routinely overestimate certainty exactly because they overestimate their level of expertise.
Agreed. I touched on these issues in the earlier uncertainty threads.
Judith
Technically, there is a major systemic underestimation of the Type B (“bias”) errors. See NIST on Uncertainty of Measurement Results
as extrapolated to include the full uncertainty of the ongoing experiment on “earth’s future climate”.
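For readers unfamiliar with the NIST/GUM vocabulary, a minimal sketch of how Type A (statistical) and Type B (judgment-based) components combine, with invented numbers:

```python
import math

# Type A: evaluated statistically from repeated observations.
readings = [20.12, 20.09, 20.15, 20.11, 20.13]      # invented data
n = len(readings)
mean = sum(readings) / n
s = math.sqrt(sum((r - mean) ** 2 for r in readings) / (n - 1))
u_a = s / math.sqrt(n)              # standard uncertainty of the mean

# Type B: assigned by judgment, e.g. a suspected systematic bias known
# only to lie within +/-0.05, treated as rectangular per the GUM.
u_b = 0.05 / math.sqrt(3)

u_c = math.sqrt(u_a ** 2 + u_b ** 2)   # combined standard uncertainty
print(f"u_A = {u_a:.4f}   u_B = {u_b:.4f}   combined u_c = {u_c:.4f}")
# The complaint above, in these terms: if u_B is systematically
# underestimated, the combined uncertainty (and every confidence
# statement built on it) inherits that bias.
```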
A major unrecognized systemic uncertainty is that of ignoring the principles of scientific forecasting for public policy. See
Global Warming Audit
including
GLOBAL WARMING: FORECASTS BY SCIENTISTS VERSUS SCIENTIFIC FORECASTS
Consider the practices and attitudes exposed by ClimateGate.
Contrast the stringent methods and procedures established by FDA for new drugs, to protect the public.
Surely the $65 trillion in proposed expenditures to mitigate catastrophic anthropogenic global warming justifies measures at least as rigorous as those established by the FDA, to quantify and distinguish natural vs anthropogenic causes and their changing interactions (vs short run “incontrovertible” correlations).
David, can you expand on the Type B bias errors you see in the climate problem.
When becoming an expert, one thing one does, besides acquiring basic facts and techniques, is to put together a “story” of your field: what it knows, and what it can do. At one time the “story” of geology was fixed continents, and the reaction to moving continents was foaming at the mouth and the invention of land bridges all over the oceans to explain organism distributions. The benefit of a story is that it gives more of a feeling of coherence to a field. The problem with it is that the annoying inconsistencies are left out. This is the big picture problem. At the big picture level, we feel good about ourselves as experts, but the cost is to downplay uncertainties. Take criminal psychology: experts here would claim to be able to explain criminal behavior, but have virtually no ability to predict who will kill again or when. Their story does not account for the actual uncertainty in their knowledge.
Ah yes, the puzzle analogy, discussed in the Frames thread.
Specifically expose “incontrovertible” – As Hal Lewis pointed out, very few things in Physics are “incontrovertible”. e.g., With each improvement in metrology, physicists continue to examine the validity of the speed of light, inverse squared law for gravity, and Einstein’s general relativity etc. Optical measurements are now entering 17 significant digits.
It appears that even the sign of water vapor/cloud feedbacks is not confidently “known”, let alone the magnitude. Models fitting global temperature to PDO/AMO parameters fit much better than to CO2. For the IPCC to claim that its projections (out to 100 years, with 90% certainty that they are due to anthropogenic causes) are “incontrovertible” appears to be the height of scientific hubris!
Hmm. Your post there is more relevant to the kind of “meta uncertainty” ideas here [ and promised :-) ] than the commentary on that thread which focused on scenarios, oil, etc. So, I’ll comment here, but about your post.
The modal logic approach may be apt. When you cannot nail down a PDF, you can have a PDF of PDFs and so on, with distributions over function space. Those can be hard integrals to deal with, so you can discretize things and work with nested sums instead. I suspect but do not know that there are deep mathematical analogies or equivalences between a hierarchy of discretized probability coupled with “regular logic” and those possibility theory or modal logic approaches. Climate researchers or putative IPCC contributors with a lot of experience in layers upon layers of distributed continuous phenomena may find a “hierarchy of pdfs” approach easier to conceptually manipulate.
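A toy version of the “PDF of PDFs” mechanics (all ranges invented; nothing here is a climate value): sample the parameters of the distribution first, then the quantity of interest, and compare the resulting mixture to a single best-guess PDF.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Level 1: unsure which PDF is right, so draw its parameters from
# hyper-distributions (invented ranges).
mu = rng.uniform(1.5, 4.5, n)       # uncertain location
sigma = rng.uniform(0.3, 2.0, n)    # uncertain spread

# Level 2: draw the quantity of interest from each sampled PDF.
hier = rng.normal(mu, sigma)

# A single "best guess" PDF built from the mean parameters.
single = rng.normal(3.0, 1.15, n)

for name, y in [("hierarchical", hier), ("single pdf", single)]:
    lo, hi = np.percentile(y, [5, 95])
    print(f"{name:12s} sd={y.std():4.2f}  5-95% range=[{lo:5.2f}, {hi:5.2f}]")
# The nested version pushes probability further into the tails: the
# meta-uncertainty widens the bounds even though every component PDF
# looks tame on its own.
```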
I think the hard part is still likely constructing the input from the climate research papers or opinions in differing domains: probability or possibility or modal logic or whatever. Every climate researcher I’ve spoken to, including Hansen, has this pattern of “the strongest argument/evidence for P comes from domain Q, but for R from domain S.” This would be something between the Type B bias mentioned elsewhere here and the typical statistical issues. Given that, it might be better to see what kind of imperfect data sets you can form (or find), and the style of their imperfections. If you want to think in inverse modeling as your other post suggests, that seems valid-“ish”, but I agree it is multidimensional — not merely radiative forcing, but aerosols and land use. Some kind of multi-D “fit” may be needed to apply it as intended, if that is even possible.
The true Type B bias is inestimable or only weakly estimable, and scrupulously attended to it can rapidly lead one to a position of radical skepticism (“nothing is known”). One hears that regress a lot in climate argument, but it is ultimately not productive even of alterations in research strategy to improve things.
I do think one thing re: the precautionary principle and black swans. Nuclear, bio, or chemical terror gone awry, ordinary plagues, and even catastrophic cooling (which seems like it can be as rapid as a decade or two) all seem potentially much faster and much more catastrophic. Simply because we have much less refined machinery to project these risks does not mean they aren’t there. I don’t like the precautionary principle because its parochial application seems to mostly serve only a rhetoric — the psychological accessibility of just one threat to inspire ponying up resources or support, applied to just one problem before Congress by ranting former vice presidents. Applying the PP consistently to all known threats would yield impractically expensive responses, which just makes people “shut down”. The PP is a typically one-dimensional analysis that generalizes badly to high dimensions (badly in the sense of guiding action). Wittingly or not, we end up committed to adaptation as our ultimate strategy for almost-black-swan existential threats…partly by definition and partly for lack of affordable alternatives. Given this, singling out climate as the one dimension to rally around smells like a politically biased selection or askance arguing.
Indeed, something really not discussed much is what the true black swan risk of climate variation (warmer or cooler) under no human influence is. This goes back to the question I was raising about how one normalizes risk for the purposes of insurance sharing. Do you apportion cost based on expected returns, or on ratios of risk at various tail junctures, or what? Most discussion of this seems to presuppose well formed answers or theories of numerical fairness, but I have not heard much well formed discussion. This is kind of a 2nd order concern to your decision making under risk… call it group decision making under distributed risk with individuated risk aversion, if you will. :-) If you have multiple players and they are under some common, known risks and other individual risks and so on, what kind of group apportionment is appropriate?
As with all my comments, I don’t assert I have answer to hard questions, but just intend to inspire more subtle considerations than I generally hear.
thank you thank you thank you. Re the black swan issue, yes, I am more interested in what might happen in terms of natural variability, or natural variability in combination with AGW. The absurdities of the PP (and where they lead) are also something that I am very interested in (this will be discussed in the dmuu series). I am just learning all this; I am in total agreement that we need both a broader framing and more subtle considerations than have been the standard in this field.
Just a side note. You will soon need to add an acronym decoder.
Try http://climateaudit101.wikispot.org/Glossary_of_Acronyms
Hit-or-miss updating lately, but it’s wiki-format, so please feel free to contribute!
I haven’t seen anything better.
Best, Pete Tillman
“Nuclear, bio, chemical terror gone awry, ordinary plagues, and even catastrophic cooling (which seems like it can be as rapid as decade or two) all seem potentially much faster and much more catastrophic. Simply because we have much less refined machinery to project these risks does not mean they aren’t there.” …. glad that you brought this up… while the models have been able to (apparently) simulate ice age climates and much earlier warm climates, I’m not aware of any success of the models in depicting the rapid flickering of the ATMOSPHERIC portion of the climate system that, as you say, has taken place in the space of a decade or two in the extra-tropical latitudes (swings of 10C in average annual temperature). These events are not unique, but we have not been able to reproduce them in our simulations; how do we assign a probability of recurrence of these events, and error bars?
I fear that the simulations that have been tuned/kludged/parameterized to approximate 20th century time series of global temperatures introduce uncertainty through a number of pathways. The global temperature “match” that has been modeled for the 20th century seems to have been achieved without getting the synoptic climatology right. This is unsettling to me. It reminds me of a Taylor series or a linear regression model that can operate without a working model of how reality operates. As long as “x” remains within the range of possible “x’s” that the Taylor or regression approach is tuned for, f(x) will be well accounted for. As “x” wanders away from what those approaches were tuned for, f(x) can diverge wildly from the real world. How do we know when “x” in the climate system is wandering outside the bounds within which GCMs tuned to 20th century reality can produce credible results? How do we express this in terms of uncertainty?
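The Taylor-series/regression worry is the classic out-of-sample problem; a toy fit shows the failure mode (synthetic data and an invented functional form):

```python
import numpy as np

rng = np.random.default_rng(3)

# "Reality": a smooth function we pretend not to know.
truth = lambda x: np.sin(2.0 * x)

# Tune a 5th-degree polynomial to noisy observations on [0, 1] only --
# the analogue of tuning a model to the 20th century record.
x_train = rng.uniform(0.0, 1.0, 40)
y_train = truth(x_train) + 0.02 * rng.normal(size=40)
coeffs = np.polyfit(x_train, y_train, 5)

for x in (0.5, 1.0, 1.5, 2.5):      # drift outside the tuned range
    print(f"x={x:3.1f}  truth={truth(x):6.2f}  fitted={np.polyval(coeffs, x):8.2f}")
# Inside [0, 1] the fit is excellent; as x wanders out of the tuning
# range the polynomial diverges from reality, with nothing inside the
# fit itself to warn you that it is happening.
```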
Concerning GCMs, the models have had difficulty in realistically depicting the SYNOPTIC CLIMATOLOGY. Dr. Hart at Florida State University, for example, has noted that the models seem to strengthen the tradewinds in response to their lack of success in generating tropical cyclones. The models have great difficulty in generating blocking in a climatologically realistic manner, even for a single season in the future. While error bars are currently routinely assigned to temperature projections, how would one estimate the uncertainty associated with getting the synoptic climatology right, and the uncertainties that would be inflicted upon global and regional decadal or longer projections by a departure from a correct synoptic climatology over time?
Richard, this is a critical issue for the climate models to be useful at regional scales.
Further to your point, this paper
Woollings, T, 2010. Dynamical influences on European climate: an uncertain future. Phil Trans. Royal Soc. A 368:3733-3756.
shows that models disagree on major features of European climatology, both with the data and with each other. As examples of key atmospheric processes affecting Europe that models currently do not simulate well, Woollings cites several: the location of the jet stream over northern Europe in most models diverges from reality; zonal flow is biased too far south in most models; the models cannot simulate or explain the North Atlantic Oscillation with sufficient magnitude to match historical data; and heat waves and droughts, such as the summer 2010 Moscow heat wave and fires, are caused by blocking, a process the models are unable to simulate.
Being a “detail” rather than “big picture” person myself, these discrepancies bother me.
The long-term AMO follows Arctic temperatures with an approximately 3+ year delay (the time it takes for Arctic ice to move from the Fram Strait to the Denmark Strait). On the other hand, Arctic temperature closely correlates (R = 0.9434) with the average of the Arctic GMF.
ok, this is really interesting.
Not to those who build their models on the CO2 primacy. http://www.vukcevic.talktalk.net/CO2-Arc.htm
The Arctic GMF is only an indicator, not the instigator.
Judith
This is an example of the IPCC’s systemic bias of ignoring a whole genre of similar evidence and exploratory or published models that do not follow the primary CO2 causation meme. These are often deprecated by the IPCC, even when they show better correlation than CO2 along with preliminary physical causal arguments.
Science can only advance by exploring such data and by developing and testing models like this to identify and quantify the issues relative to other explanations.
I have made the suggestion on another thread that if climate science had studied rival and/or contradictory theories concurrently, and evaluated each with due regard for the null hypothesis, as it should have, the resulting body of research could be evaluated according to Occam’s Razor (which explanation of the observed data requires the fewest assumptions?, if I have it right), rather than solely, or at all, with respect to “certainty”.
Dr. Curry, David L. Hagen and TomFP
thank you for your comments.
I think the GMF is not the driver but just a coincidental indicator, which correlates to a degree with the global temperature reconstructions:
http://www.vukcevic.talktalk.net/LL.htm
I may have isolated, in the most basic form, some elements of the possible so-called ‘driver’ force, as illustrated here:
http://www.vukcevic.talktalk.net/CET-NAP.htm
but it is beyond my capacity to process adequately; some help (in the way of guidance, methodology, and presentation) from a research institution would be welcome.
Dr. Curry: If I may ask an elementary question which I think you have dealt with but I didn’t quite absorb your answer…
It seems to me that there are two kinds of uncertainty — the uncertainty one knows and the uncertainty one doesn’t.
When IPCC writers present estimates of likelihood or confidence, are they basically concatenating all their plus-or-minus-this and plus-or-minus-that numbers (wherever those numbers come from), or are they also somehow including the uncertainty that there may be large unknowns they just don’t know about, or biases they are not accounting for?
Much of my personal skepticism about climate change comes from the latter uncertainty. Environmental scientists have aided and abetted a rather sorry history of fear campaigns that proved wrong. I’d have difficulty putting numbers to this, but it seems to me that there is something about the way environmental science is conducted that biases it towards catastrophic predictions.
Now maybe the climate change scientists have got it right this time, by hard work or good luck, but having bought into most of the previous scares, I am now wary even before I see the hockey stick graphs etc.
Huxley, it’s difficult to know what actually gets incorporated into their “expert judgment,” but it seems that the ontic uncertainties and the unknown unknowns receive relatively short shrift. More discussion on all this on the Uncertainty Monster thread.
Dr. Curry: That’s worse than I thought.
I can’t find the link now, but some time ago I read a web page explaining that IPCC writers weren’t pulling SWAGs out of the air but were following a careful protocol for determining uncertainties at each stage of a determination, then calculating the overall certainty as a product of the intermediate uncertainties.
At the time I read that I thought, well, it’s not as arbitrary as I thought and I was somewhat comforted. But you seem to be saying it’s not nearly as nice as that.
In any case my concern for the process of environmental science which is somehow biased towards catastrophic predictions remains my little problem and that of any other layperson who has noticed that the science deck is stacked for catastrophe.
Scientists refuse to be accountable for all the times they yell “Fire” in a crowded theater and nonetheless insist that their next prediction is as good as gold according to their studies and models, then pout and complain when ordinary citizens stop taking the latest science scare seriously.
The horror of it is that scientists will eventually be right and maybe climate change is that time.
huxley – I think both fall within Type B uncertainty, in contrast to statistically quantifiable Type A. See the NIST Uncertainty Guidelines.
David: Not real useful. Get back to me when you “know” something and whether it applies to the IPCC and when you can be bothered to explain it in a sentence or two.
For others:
Type A evaluation: method of evaluation of uncertainty by the statistical analysis of series of observations,
Type B evaluation: method of evaluation of uncertainty by means other than the statistical analysis of series of observations.
http://physics.nist.gov/cuu/Uncertainty/basic.html
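To make the distinction concrete, here is a minimal sketch of a Type A evaluation in Python (the observation values are invented purely for illustration):

    import statistics

    # Hypothetical repeated observations of the same quantity (illustrative numbers)
    obs = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2]

    mean = statistics.mean(obs)
    s = statistics.stdev(obs)          # sample standard deviation
    u_A = s / len(obs) ** 0.5          # Type A standard uncertainty of the mean

    print(f"mean = {mean:.3f}, Type A standard uncertainty = {u_A:.3f}")

A Type B component, by contrast, would be assigned by judgment (instrument specifications, prior knowledge) rather than computed from a series of observations like this.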
huxley
See NIST:
Type A is based on measurements – e.g., the standard deviations.
Type B is guesstimating everything else, where you don’t have the statistics – some components you “know” are there, and you guesstimate their magnitude.
The others you DON’T know are there. Those are the harder ones to “quantify”! When different ways of measuring do not have overlapping error bars, heads up for Type B.
Ocean oscillations should have been modeled and reported within Type A. However, I understand that the IPCC excluded them (e.g., as being averaged out). I don’t think they included them in Type B either. Consequently, models that give greater correlation/causation than CO2 trends are ignored.
Consideration of risk and uncertainty is standard fare in engineering projects. After all, there is never so much money or time available as to allow a foolproof plan for completion of a project. Something always goes wrong, or at least that is the expectation. The Black Swan Theory is just a normal consideration. :-)
Communicating risk and uncertainty is also part of engineering. The customer or client agency wants to know how likely it is that the project will be completed on time and on budget. An engineer must be able to describe what the risks are at each stage of the project and what will be done if a problem is encountered. He had better have a ‘Plan B’ available to mitigate it, even if it takes more time or money, as long as he can quantify it.
The point I am trying to make is that when it comes to communicating about risk and uncertainty, you do not need to rely only on university text books. You can call upon the expertise of practicing engineers.
From my own perspective on another subject, I find that people around me are quite comfortable with the National Weather Service predictions that include statements such as “20% chance of rain” or “70% cloud cover.” However, descriptions such as “most”, “likely”, and others leave folks floundering for meaning. Instead of having a table that says “90% should be described as very likely” or some such thing, it would be better to just use the percentage. Steps of 10% would be just fine. Us unwashed masses deal with percentages daily. Translating them into prose does not improve communications.
In fact, in the example of error bars, it would be perfectly fine to say something on the order of “Our best guess is XXX, but the uncertainty in our calculations means it could be 40% higher or 30% lower.” The idea is to communicate. Besides, answers like that are likely to elicit questions about the uncertainty, allowing an opening for teaching about the science (or maybe making a pitch for funding!)
Areas of uncertainty that are glossed over:
the merging of disparate proxy records
the reliability of the proxies
the accuracy of the records in general
the influence of circular reasoning and group think (confirmation bias)
the lack of historical examples
the absence of predicted manifestations
the truly wide range of historical variability
modern sensing vs. historical records and proxies
climate signal vs. weather
To name a few.
CB:
“Applying PP consistently to all known threats would yield impractically expensive responses which just makes people “shut down”.”
Great comment. The PP is like the fountain of youth: obviously impossible, but still desired by so many that it leads to very bad allocation of resources. In the face of proposals to apply the PP to global warming, a “shut-down” is the only sane alternative. Most people recognize (explicitly, or just based on ‘feeling’) that while we are all going to die, there is a limit to the expenditure of resources that people will support to delay that eventuality.
Some, and in many cases most, proposed actions to reduce a specific risk are not rationally justifiable, even when that risk is reasonably well defined. In the case of global warming, the risks posed by the consequences of warming are both poorly defined and undefinable at present. Application of the PP to such a poorly defined risk leads always to demands for the expenditure of essentially unlimited resources. If application of the PP to global warming issues were simultaneously outlawed everywhere, this would be a good first step to making rational public policy on global warming.
Judith,
I share CB’s POV on decision-making when facing uncertainty. Estimating the subjective probability of alternative future uncertain outcomes is reasonably straightforward using, e.g., the Delphi Method. However, decision-makers face another dilemma: “Can I live with an adverse outcome?” IMO, many so-called policy makers don’t face this dilemma because they are not personally accountable for their decisions. Having been the “responsible executive” in a for-profit firm for approving competitive proposals for large R&D aerospace/defense programs involving significant science/engineering uncertainties, and having managed them throughout their various phases, I can assure you that the “risk dilemma” influenced my decisions.
Speaking as an engineer who has done a fair amount of modeling in stress analysis and heat transfer (subjects that are much better understood than climate), I find it difficult to believe anything approaching the 90% confidence level I see in much of the climate discussion. When asked for a presentation of my work, I try to state my confidence level based on how well I know the final use of the work. That is, are they looking to produce a product, or was my work done to see if the idea is/was plausible and whether we should pursue it further?
In the climate science arena, they both want to begin production (think windmills, biofuels, solar, and carbon trading) and still say more research is needed. It’s okay to say you don’t know what will work but worry that inaction will make things worse. It is not okay for a climate scientist or a railroad engineer to make specific recommendations about adaptation policy without being very explicit about the uncertainty of their research. The only thing I have a 90% confidence level in is that it has gotten warmer in the last 30 years. I am not, however, all that worried about it, even after all the hysterics about catastrophe due to “climate disruption” or whatever they want to call it in the future.
To convince me of something, your argument is better served by displaying both your best argument and the most likely reason that you might be wrong. Give me the chance to decide.
Thank you,
Barry Strayer
I worry about another engineering issue: multiple uncertainties. In cases where I have been involved, where a result is dependent on factor A and on factor B and on factor C and so on, the uncertainties multiply and you get huge uncertainty in the result. If there is a small uncertainty in the temp record, and in the aerosol forcing, and in the response of evaporation to surface roughness, and so on, then an engineer would multiply these small uncertainties together. If there are 10 factors, then you would have to be 99 percent sure of each of them to get to 90 percent for the final result. I too am puzzled that you could ever get to a figure like 90 percent for climate forecasts.
Yes, this is relevant for the contingent argument for attribution that I used in Part III. I thought I was being generous by saying the confidence was as high as the confidence in the premise with the highest confidence (40%); it might really be lower as all these uncertainties accumulate. How do you get 90%? This is exactly the question I was asking in Part III.
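For concreteness, a minimal sketch of the arithmetic behind the multiplication point above (independence of the factors is the engineer’s assumption here, not an established fact):

    # Joint confidence of a conclusion resting on N independent factors,
    # each individually known with confidence p.
    for p, n in [(0.99, 10), (0.90, 10)]:
        print(f"{n} factors at {p:.0%} each -> joint confidence {p ** n:.1%}")

    # Inverting: to reach 90% overall across 10 independent factors,
    # each factor must be known with confidence 0.9**(1/10), i.e. ~98.9%.
    print(f"{0.90 ** (1 / 10):.3%}")

Ten factors at 99% each give roughly 90% joint confidence, exactly as stated; ten factors at 90% each give only about 35%.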
It is a little discouraging that this is even an issue. It is not that difficult to convey uncertainty. Clinical trials and epidemiological studies virtually all include assessment of uncertainty, such as 95% confidence intervals. Although the media often omit discussion of uncertainty, scientists in this field are generally skeptical of one another’s results and that is often reflected in interviews presented by the media. There is a broad recognition in the epidemiology research community that health issues in humans or animals interacting with the environment are remarkably complex and that we do not know all the key independent variables. Being accustomed to these types of studies, I was amazed at the degree of certainty expressed in climate science in later IPCC reports based on data that would be regarded as erratic noise around the baseline in a biomedical study.
Stephen, epidemiologists are very, very good at predicting the negative consequences of a reduction in vaccination rates in advanced economies, and the benefits of introducing or extending vaccination programs in developing countries.
One of the greatest scientific achievements of the 20th century has to be the elimination of smallpox. Surely. And we’re nearly there for polio in humans and rinderpest in animals – a great endorsement of good science and good, worldwide health policy thoroughly and persistently implemented.
When it comes to the _details_ of predicting the nasty consequences of failure to vaccinate, epidemiologists can tell you about numbers of likely cases, deaths and permanent injury for particular diseases. But … they can _not_ tell you which particular people will be affected first, worst, at all or not at all.
Not a bad analogy to climate science in fact. Climate science can only tell us about the world at large (as epidemiologists tell us only about whole populations susceptible to disease). They cannot specify when, where, how often, in which order, various impacts will show up.
Judith,
Flow diagrams always help, as does simplification. For example:
Simplification: Natural warming = No concern.
Human-induced warming = Concern.
Do the benefits from fossil fuels outweigh the concern for human-induced warming, given that if the warming were natural there would be no concern?
Of course any change in climate is a concern for society; however, given the confusing nature of the issue, it helps to ignore some aspects. The RS Climate Change Summary of the Science largely ignored impacts:
“The impacts of climate change, as distinct from the causes, are not considered here.”
The deficiency is not so much in the communication of uncertainty, but in what is done about it. Because of this you get the situation as I reported here:
http://landshape.org/enm/files/2010/10/Critique-of-DECR-EE.pdf
where alarming conclusions and uncertainty are dutifully reported on models that have no skill at all!
It’s reasonable for scientists to worry about uncertainty, but when the science moves into policy, a certain hurdle must be applied to determine if the results are worth reporting at all. This is not being done.
It’s a bit like getting work done on your house. You expect a certain baseline of skill. If all of the available contractors do no better than random actions, you don’t decide to use them all, as is done with model ensembles! Same with policy advised by models: there needs to be a benchmarking of skill decided on beforehand.
This is all basic forecasting practice, well understood by engineers and largely ignored in climate science, which is why so many outsiders express frustration with the field. The IPCC, in this view, is purposely set up to provide a direct pipeline from scientists to policymakers, thereby short-circuiting the accepted validation practices whereby models earn trust and their applicability is justified.
Good point.
Judith, just in case it might have slipped past your notice, the word “Framing” in that title… framing DOES mean pretty much the same thing as putting spin on it. As in the same thing the gonad-less panels looking into Climategate were doing. The same aim as “hiding the decline.” The same hiding of uncertainty as has gone into the last three IPCC summaries (much of that, I acknowledge, came AFTER the scientists were done with their part; to the chagrin of more than a few, the politicians’ summaries did not reflect the science all that well).
Just sayin’…
Oops on the wall-to-wall italics there. (I REALLY wish WordPress had a “Preview”!)
I’m trying to find a template like the one i’m using that also has a preview function, no luck yet tho
Suggest CA Assistant. with greasemonkey. The Atahualpa theme on WordPress is very flexible, (or see Themeframe in beta).
Feet2thefire closed his italics with an “i /” instead of “/i”. If that can be changed, the problem might be fixed.
I don’t know WordPress or how easy it really is to configure, but these links might help. They say it’s “easy”. (You could maybe check with a more experienced blogger like Anthony Watts?)
http://lorelle.wordpress.com/2006/04/01/comment-live-preview-placement/
http://wordpress.org/extend/plugins/live-comment-preview/
(The first links to several downloads at the bottom. You can click on ‘Installation’ in the second one to get some instructions. Obviously this is at your own risk.)
You also ought to be able to log in as administrator and edit comments. If you edit feet2thefire’s comment to fix or take out the italics, that should hopefully fix this page.
OK, I will check with my WordPress person and also charles the moderator (who has been offering helpful technical advice). Thanks.
Alternatively, try DELETING feet2thefire’s post and then adding the text back in under a new administrator post?
Will this stop the italics?
@Ed Forbes:
Gawd, it makes me want to puke, hearing a scientist make such a statement and not get tarred and feathered for having such an attitude.
I once read (sourced, even) that 85% of all C14 dates that are spit out by the labs are “disregarded” by scientists “exercising judgment.” That book stated that the main reason was that the C14 dates didn’t fit the expected period, so they were tossed with the reason given as “obviously contaminated samples.” Even if that 85% is wrong but high, high-handedly tossing them out WITHOUT A SOLID, DOCUMENTED reason is not science. It utterly reeks of fudging the data. After reading that, the uncertainty that I read into C14 dates is about 5 times what is reported. I don’t trust their dates at all. And little I’ve read since then (20 years ago) has changed my mind.
In fact, one of the issues I’ve brought up a few times at WUWT and CA is that people are ONLY focused on the temp levels, but the X-value in all the graphs ALSO has an uncertainty to it. And when the homogenization is then done, high peaks and low valleys in the data begin to cancel each other out, producing much flatter curves – combined curves that don’t reflect the ups and downs of the individual curves on spaghetti charts. As witness to that, there are over 800 individual studies that show a MWP, but the homogenized Mann data doesn’t show the MWP – is it that Mann was fudging the data, or that the process used flattened the curve (partially due to dating uncertainty, but also due to the flattening of the averaging process)?
And then there is the “Great dying of the stations”, with an inordinate number being rural ones. That is a form of “disregarding” data, too. Regardless of what CRU and Mann say in their looks at the averaging of rural records, the individual rural stations DO show much less warming than the Hockey Stick Team finds. I believe the selection of stations needs to be looked at, but also the averaging process they use.
Judith,
This is an attempt to close the “wall of italics” [which has also incorporated all text in the sidebar when full content of post and comments are viewed. ]
It may or may not work; if it doesn’t, please feel free to delete this comment … or try editing this comment by changing the “/em” above to “/i” … if neither of these fixes the problem, I suspect it may lie in feettothefire’s
http://judithcurry.com/2010/10/29/uncertainty-and-the-ipcc-ar5/#comment-6533
Judith
My main gripe is with what they call “unprecedented” recent warming, while the data show that the two warming periods of the last century are of identical magnitude, about 0.45 deg C in 30 years, as shown in the following plot.
http://bit.ly/de8ihf
Judith, they must acknowledge that the recent warming is not unusual.
Thanks Judith. We all love science, and it must only be about the truth.
Yes, this is one of the elephants in the room
I would like to see the tree ring charts extended to 2009 with no instrumental record appended to the end. I understand it might be a contentious reconstruction, but it would reduce the uncertainty of the hockey stick chart in my eyes. Another source of uncertainty in that department: if only trees at the tree line make good thermometers, are we assured those trees were at or near the tree line for 1000 years?
The uncertainties should clearly be discussed (especially among the scientists themselves!) and quantified in some way – and more importantly, they should influence the qualitative wording of various summaries, too. The main problem is that some people don’t want to raise the standards in this way because they literally thrive because of the lousy standards of the IPCC. The IPCC science’s very poor quality is what allows them to get lots of money, travel, influence things, and so on.
While the probabilities that a statement is true are sometimes mentioned numerically, the numbers are actually not calculated. At this level, it is not genuine science; it is just quoting of some subjective opinions of people who may be paid as scientists but who don’t actually act as scientists.
If one wants to determine the probability that the warming in XY years will be greater than UV, or whether the contribution of AB to the warming in period CD exceeded EF, one must actually make some calculation, with the right error margins, and the p-values and probabilities must be calculated in the standard scientific ways of the probability calculus.
This is really never done by the IPCC. That’s a shame, because one good source of information about a problem is usually capable of producing much more accurate ideas about the probabilities of various statements than 10,000 vague hints that are subjectively evaluated. I think that whoever disagrees with the previous sentence either has no experience with science, or is not thinking scientifically at all.
If there are lots of arguments to support the opinion that an effect exists, they come at various confidence levels. Some of them may be 2-sigma arguments but there will be 5-sigma arguments, too. Because of very different p-values, a 5-sigma argument is stronger than thousands of 2-sigma hints. If there’s no 5-sigma argument, it’s probably because the assertion is not valid and the 2-sigma hints are just misleading noise.
There are many years left for AR5 to bring climate science somewhat closer to a quantitative science. If it won’t be possible to calculate the probabilities, the new report will be just useless. Whether it is equally alarmist or less alarmist (or more alarmist?) than the 2007 report, it will still just reflect the political and subjective opinions of a particular ad hoc ensemble of people, not science itself.
It may happen that no assertion about the climate can be defended at a high enough confidence level for it to make sense to talk about the “sigmas”. In that case, the conclusion should be that we just don’t know anything that matters for the climate change debate, and that no existing theory or climate model is more reliable than simple extrapolations of the observed past history – which are, by the way, completely harmless.
What I find more important is that the IPCC should declare, at the very beginning, that AR5 will make no assumptions, e.g., about the key question of whether any problem exists at all. If the IPCC doesn’t declare very clearly that it will study this question without bias, its work will be the same politicized cherry-picking kitsch movement it has been so far.
Also, I think it’s useless for the IPCC to be just a “collector of literature”. The literature contains work of many different levels of quality and impartiality, and the community is clearly unbalanced when it comes to many questions, such as the political ones. The only meaningful IPCC-like organization would have to present a meaningful, self-sufficient “big scientific paper” by itself. Such a “big paper” has to contain all the relevant calculations, not just links to papers that may be wrong, confusing, or cherry-picked in some way.
There are reasons why I think that these completely basic requirements for the IPCC won’t be simultaneously satisfied and it would be the wisest decision just to abolish this institution. There are many hints that it can’t be reformed because the wrongdoing, corruption, and bias are written into its very pedigree.
It’s hard to dispute the thrust of Luboš Motl’s points, though it may be better not to attribute to malice what can be adequately explained by incompetence.
This is clearly not about the goodwill or competence of the contributors to the IPCC, who largely have sterling reputations, and to a one far outstrip me — and most of the blog-reading world — in technical expertise in their field.
Organizations also can be rated in terms of their competence. The Capability Maturity Model, for example, rates organizations on such qualities as process optimization and continuous process improvement.
I find great irony in a critique by a scientist of a large organization which contains absolutely no terms from the science of organizations.
Such little poisoning-the-well tidbits as “some people don’t want to raise the standards in this way because they literally thrive because of the lousy standards”, while on-topic as a criticism of standards, are absolutely lacking in any reference to measurement of those standards.
Pot. Kettle.
> Because of very different p-values, a 5-sigma argument is stronger than thousands of 2-sigma hints.
This statement is certainly not true for carpentry. If a piece fits reasonably well however you measure it (length, width, angle, etc.), chances are it will fit. If you only know that it fits absolutely well on one parameter and disregard all the other ones, chances are you’re not a very efficient carpenter.
This statement may be true for science, but that begs to be proved as a formal argument, if we are to raise the standards of blog science. If that formal argument is possible for that kind of use of “stronger”, that is. Making such a statement just after having probed the intentions of many unidentified scientists might not be very scientific itself, after all.
Note the differences between forestry and carpentry, science and engineering – discovery vs. design, scientific forecasting vs. adaptation (and “mitigation” only where obviously cost-effective, like energy efficiency).
I’d like to see the importance of 5-sigma in forestry.
“I’d like to see the importance of 5-sigma in forestry.”
Would you?
OK, then. 2-sigma (assuming 1D Normal distribution) corresponds to a probability of one in twenty, while 5-sigma corresponds to one in 1.7 million.
When a logger cuts a tree down, he wants a certain confidence that it’s not going to kill him. Is his theory about where to cut, where to stand, which way to run if things go wrong, right enough? He cuts down around 40 trees a day, for say 200 days a year – making 8000 trees every year, any one of which could kill him.
Thus, the difference between 2-sigma and 5-sigma is the difference between a life expectancy of about half a day and a life expectancy of about 200 years. (Or one death per year for every 200 loggers.)
I think Lubos didn’t quite phrase it right (it takes about five independent 2-sigma events on the same hypothesis to exceed a 5-sigma event), and what he meant was that thousands of 2-sigma events on different hypotheses can come along and nobody will pay any attention, because at the rate one is gathering data (and Lubos is, I suspect, talking about the identification of new particles at the LHC, which means very fast), meaningless one-in-twenty probabilities occur every day. But a single 5-sigma signal is a lot more significant, and people will pay a great deal of attention.
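For anyone who wants to check these figures, a minimal sketch (two-sided normal tails via erfc; the “one in twenty” quoted above is the conventional rounding of 1 in 22):

    import math

    def two_sided_tail(k):
        # P(|Z| > k) for a standard normal variable
        return math.erfc(k / math.sqrt(2))

    trees_per_day, days_per_year = 40, 200   # as stated in the comment
    for k in (2, 5):
        p = two_sided_tail(k)                # per-tree chance of disaster
        expected_trees = 1 / p               # trees survived on average
        days = expected_trees / trees_per_day
        print(f"{k}-sigma: 1 in {1 / p:,.0f}; life expectancy "
              f"~{days:,.1f} days (~{days / days_per_year:,.1f} years)")

This reproduces the comment’s arithmetic: roughly half a day at 2-sigma, and a bit over 200 years at 5-sigma (1 in about 1.7 million per tree).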
Of course, a single 5-sigma signal is a lot more significant. Lumo’s point is more than that: he seems to be arguing that 2-sigma evidence is unconvincing for him and that we must strive for 5-sigma signals.
Your example is interesting. So two days of cutting trees correspond to 80 trees. I’d like to know what “cutting a tree” means in climate policy (theory).
I’d also like to point out that we might not have waited until reaching 5-sigma assurance of human safety before sending lumberjacks into the woods. See for instance:
http://books.google.com/books?id=dnwvTXEMFM0C&pg=PA900&lpg=PA900&dq=forestry+accident+lumberjacks&source=bl&ots=zzsdlBpipF&sig=_J8vnQzyyanxx1_sNkJnYIYLwDs&hl=fr&ei=1zTMTKewLsL7lwfImIXmCA&sa=X&oi=book_result&ct=result&resnum=1&ved=0CBUQ6AEwAA#v=onepage&q=forestry%20accident%20lumberjacks&f=false
“I’d like to know what “cutting a tree” means in climate policy (theory).”
Well, the obvious analogy would be emitting CO2. But I don’t propose to take it too literally.
While 95% has acquired a somewhat clichéd status in science, it’s not the only choice, nor necessarily the most appropriate. It depends on priors, costs, and the availability of other options. But the big question for climate policy here is not whether 90% is sufficient (although it’s certainly arguable that it’s not), but whether the IPCC’s 90% assessment is based on anything more than gut feeling and political negotiation on the part of committees of vested interests. Should we be told?
The intuition that 2-sigma is dangerous comes from thinking about trees, not man-years or whole industries. A tree is a count term; “CO2 emission” is not. There is also a real possibility that we are conflating evidence-based reasoning with risk management. Unless we come up with a better interpretation, the analogy with forestry breaks down very soon.
I agree that “2-sigma” is unnecessary. But I doubt that replacing it with this “5-sigma” desideratum is justified by more than a gut feeling. The IPCC’s “gut feeling” can be explained as being provided by the experts.
That we’re being told over and over again that the IPCC has “vested interests” is mostly a narrative. While interesting in itself, this narrative is independent of the question at hand.
Speaking of independence, the “events” that are analyzed here are not independent of each other. Treating them as independent simply contradicts the idea that each line of evidence points toward something like a systematic convergence.
Willard,
The count/non-count distinction is not germane in this case. I had originally written that the analogy would be to x tons of CO2 emission, where x was some number to be determined, but decided that it was more detail than was appropriate to such a flaky analogy. It would automatically lead on to attempts to calculate the number x, and no doubt further criticism of the calculation, and yet more rabbit holes to dive down. All of which would miss the original point that the question of confidence is important even in more practical activities like forestry, and that 2-sigma/95% is very often not sufficient.
It’s quite possible that 5-sigma isn’t sufficient either. Maybe you would want 6 or 7 sigma.
But all of that is a question for the politicians and economists. The only question we have for the climatologists is: ‘How did you calculate this number?’
If there is indeed reason to believe that it is 90%, then we can start from that point to discuss policy. But given what I know of the uncertainties, I don’t see how it can have been calculated to be anywhere near as high as 90%. I rather suspect that it is based on what they thought was the minimum number that would get the politicians to pick the policy they favoured.
I obviously have no evidence for that, beyond the fact that my model of climate scientists can only explain their actions with that assumption in place, but if I assert without calculation or explanation that it is more than 90% probable, would that be acceptable to you? ;-)
However, all of this is dancing around the point, which is that unspecified ‘expert opinion’ is not an acceptable scientific methodology for deriving numerical uncertainty estimates in the headline conclusion of the justification for restructuring the entire world economy.
Incidentally, if you don’t assume independence, then multiple lines of evidence wouldn’t accumulate as fast (if at all). Observations pointing towards a systematic convergence can still be independent.
Nullius in verba,
I agree that providing numerical uncertainty estimates based on educated guesstimates should rest on sound scientific methodology. For me, they’re more like “from 1 to 10, how would you rate your confidence in X” kinds of statements. Criticizing a logic that might not have been there looks unsound to me. To improve the IPCC’s estimations, one has to suggest a better methodology, at least one that does not require revoking everything we know about climate.
I believe we can agree that comparing “we’re 95% sure that the Arctic ice is melting” (a fictitious statement) with “we have a 5% chance of dying by cutting a tree” (the analogy) has many shortcomings.
I believe that you’re asking for something that either is in the IPCC report already, or it’s not. I presume it’s not there. So you’re basically asking for something that has not been done. And you’re implying that this **must** be done for the IPCC report to begin a discussion on policy. I disagree with this assumption.
There is no need to know with 7-sigma certainty that the Arctic ice is melting to start doing something. Starting to do something does not imply that we “restructure the entire world economy”, whatever that means. If we also know that the ocean is getting more acidic, that the glaciers are retreating, and so on, you get as close to a smoking gun as you need to start doing something.
Agreed. They sound like “from 1 to 10” statements to me too.
But why is it necessary to provide a better alternative before you can criticise a logic that is not there? *Do* we actually ‘know’ anything much about climate, or are we just fooling ourselves that we do? Why do we need to do anything, and what would be best for us to do?
This ‘precautionary’ approach is essentially Pascal’s Wager in disguise. Pascal said that if there was no God, then there would be no penalty for believing, and if there was a God, then belief would buy you an eternity of bliss, when disbelief would condemn you to eternal torment. With infinite stakes, it doesn’t matter what the odds are. It therefore makes perfect sense to be making sacrifices to Tlaloc the Climate God and reducing overpopulation, ‘just in case’. The logic is the same.
And as it happens, the clerics say ‘God’s justice is certain’ which is even higher than 90% – although it isn’t quite clear what their evidence for this is. But never mind. We can start doing something anyway.
It perhaps isn’t immediately apparent what the flaw in this logic is. Certainly a lot of people have found it very convincing over the ages (and I’m OK with that – I believe in freedom of belief). But there are plenty of us who are not convinced, and do not wish to sacrifice our lives, our prosperity, and our future to such nebulous maybes without knowing in great detail why it is necessary and how you can be so certain.
If you want to take action voluntarily and stop using energy, then please, feel free. But if you’re going to make *me* join you in that, by force of regulation, then I want to know it has been done properly. Also, as a scientist, I’d appreciate it if you were to stop claiming this is science unless you’re really using the scientific method to calculate those numbers. I find people making stuff up and calling it ‘science’ very upsetting.
I’m not ready to pursue the discussion on the precautionary principle with comments that have three-word lines. I understand your concern, but I believe that practical reasoning sometimes needs something like it.
I’m still happy we agree on that much. I thank you for your patience and your openness.
Understood. I’m not suggesting that you do so on this occasion – we’ve wandered a bit off topic and it would be as well to pick a different subject – but there’s nothing to stop anybody putting in a short note to say ‘continued below’ and then start a new comment thread at the bottom.
This threaded/nested model has certain advantages, but it would work a lot better in a template where the page width was wider or adjustable. I guess it’s not part of the WordPress way of doing things.
Dear Professor Curry,
I want to raise one point based on my personal experience. I am a scientist who works in manufacturing of high tech products as a manager.
In my industry we are used to various types of “Six Sigma” programs and we make decisions within complex processes under uncertainty almost on a daily basis.
And even if we have a six sigma process and therefore a high confidence level, we find colleagues who object. The difference is, we do not call those colleagues “deniers”; rather, we encourage them to express their views in order to learn and improve our knowledge and our process.
I think, the IPCC and climate science can learn a lot from industry.
An uncertainty level is not an absolute number, but rather a guideline to base decisions on. It is utterly wrong to call somebody a denier if he objects even under a high confidence level.
However, I do observe managers who always push for a consensus decision even under uncertainty and who try to bully the skeptical engineers into a consensus. Those are usually the managers who shy away from taking responsibility in case the decision turns out to be wrong.
Best regards
Guenter Hess
Guenter Hess, I also work in manufacturing. I have found that because they are thinking and anticipating potential problems, “deniers” are actually facilitators of success. Instead of shying away from taking responsibility, they take responsibility for solutions that help or determine success. Has this been your experience?
John F. Pittman, I agree; one has to value every opinion. My experience is that considering the sceptical viewpoint moving forward facilitates success. And yes, to be sceptical of the mainstream is the first step toward taking on responsibility, in my experience. I try to encourage this. I do not want camp followers in my team who always say what they anticipate I want to hear as a consensus view. Consensus is no concept for improvement.
Best regards
Guenter
There are serious problems with the IPCC models, because to justify high AGW they assume ‘global dimming’, mostly from polluted clouds, of nearly half the present median AGW [AR4]. Yet, as far as I can tell, apart from thin clouds there’s no experimental proof of this. So the theory is wrong, and predicted AGW must be reduced by a factor of >=3.
Here’s the theory: http://pubs.giss.nasa.gov/docs/1974/1974_Lacis_Hansen_1.pdf . Eq. 19, adapted from Sagan and Pollack for Venusian clouds, is a quotient of expressions involving [1-g].tau: g the Mie asymmetry factor, tau the optical depth. It apparently works: as tau increases so does albedo; pollution increases tau therefore albedo.
However, despite coming from Sagan, it’s very wrong. Mie’s solution of Maxwell’s equations assumes a plane wave. After the first scattering, most energy is concentrated in peaks: 10^7 [relative] at 15 microns diameter, 10^5 at 5 microns [polluted]. So at the next interaction the boundary conditions have changed dramatically, and you can’t apply Mie’s analysis. [The same applies to all multiple scattering physics, probably quite a lot.]
Eventually the radiation becomes isotropic [and g becomes meaningless], yet measured albedo shows angular dependence not expected for a Lambertian emitter. Consider a non-absorbing cloud with 0.7 albedo: 30% of the energy emerges diffusely from the base, the same diffusely from the top, and 40% is pseudo-geometrical backscattering as the wave loses geometrical information – a form of shielding that pollution switches off, an alternate AGW.
In 2004, soon after experiment disproved ‘cloud albedo effect’ cooling, Twomey, the researcher who predicted that pollution increases the albedo of thin clouds, was given a prize. He had warned that his analysis could not be extended to thick clouds. This NASA website substitutes his theory with a claim of more ‘reflection’ from the higher surface area of water droplets: http://geo.arc.nasa.gov/sgg/singh/winners4.html : it’s incorrect physics.
Conclusions:
1. Sagan’s Venus greenhouse effect analysis was based on wrong theory.
2. Early work at NASA, apparently triggered by the fear the same could happen on Earth, was based on the same theory.
3. Thus ‘cloud albedo effect’ cooling is imaginary and could be heating.
4. There’s no justification for predicted high CO2-AGW; it could be zero.
5. I’m disturbed at NASA’s continued use of incorrect science after a key assumption, without which the high CO2-AGW hypothesis is incorrect, was disproved.
6. The IPCC programme and leadership needs radical revision.
Phillip Bratby (October 29 at 3:15 pm) raises a very valid issue regarding potential cooling. We have a situation where the current paradigm has produced a posse of scientists all chasing the wrong suspect down the wrong street.
Human history and geological records appear to confirm what one would logically expect, that negative climate oscillations correlate with hardship and deprivation, particularly during a major glacial advance cycle. The last one came close to wiping our ancestors out. If there is to be a future climatic catastrophe in the human time scale, it will almost certainly be the advance of the next full glaciation.
Contributing to uncertainty is the inability of climate models to model turbulence: uncertainty regarding turbulent airflow within modeled cells, at the boundaries of cells, at the interfaces of air and water, turbulent water flow, and a host of micro systems that contribute, in the big picture, to the constrained chaos observed either as weather or climate. If complex statistical models cannot predict the flight of a locust swarm in a field (i.e., where it will head next), reflecting constrained chaotic behavior, then predicting a multidimensional system like climate over prolonged periods of time is not within our current capabilities either. A framework of building upon what we know in limited systems would give us confidence in our ability to tackle larger and larger systems. I just don’t see that in the current “climate.”
Using my rather pedantic medical analogies (see previous posts), the problem is not one of formal type II errors, but of type I errors, in which the null hypothesis is falsely rejected.
A key to this is a careful analysis of the distribution that one might expect from a particular physical process – naive statisticians instinctively start from the assumption of a normal distribution, which may not hold for deterministic, or semi-deterministic, processes that are not driven by purely random, stationary errors. There are tons of examples of this in medicine, which I maintain has many parallels with the uncertainty issues in climate science.
The second key is prediction. However difficult, and irrespective of the ingenuity required, the uncertainty issues in climate science will be resolved by falsifiable hypotheses and the increasingly sophisticated measurements that are now possible.
When most scientists discuss the certainty of their conclusions in non-mathematical terms, they use the phrases “statistically significant” or “not statistically significant”, traditionally meaning a p value of 0.05. If we publish conclusions with substantially weaker support than this, our journals are going to be full of so many incorrect papers that they won’t be worth reading or citing.
Environmental advocates and policymakers are desperate for information about the dangers of GW, so climate scientists have relaxed the standards normally associated with statistical significance. When a sub-section of an IPCC report lists four conclusions that are “likely”, statistics tells us to expect all four to be correct only about 1/3 of the time. Add in the obvious bias of many investigators, the possibility of systematic error, the existence of pal review, the bias of the lead authors of the IPCC, and the motivations of the politicians who have editorial control over the SPM, and the situation is hopeless. There is no sensible reason to believe anything that the IPCC judges “likely”; you may as well decide what to believe by flipping a coin.
If one looks at the positively obscene term “more likely than not”, careful analysis shows that it really means “less likely than not”. Anything called “more likely than not” should not meet the criteria for “likely” or better (p<0.33), leaving most of the remaining 66% of the probability distribution curve in the “wrong answer” category.
Contrast the behavior of the IPCC with that of the FDA. Before approving a drug, the FDA gets all of the raw data from every clinical trial and performs an independent statistical analysis. They don't simply consider the patients who completed the study; they also perform an "intent-to-treat" analysis which includes patients who may drop out of the study because of side effects. The normal requirement for approval is two independent double-blind clinical trials with a p value <0.05 showing the treatment group did better than the control group. For patients with no other treatment options (cancer patients who have failed chemotherapy), one such clinical trial will suffice. When a clinical trial shows that the whole treated population failed the p<0.05 standard but that some subset of the treated population (for example, the female patients or patients with a particular mutation) had a p value <0.05, the FDA will not approve the drug. They will insist on seeing an independent clinical trial performed limited to the patients who are expected to be responsive. After all, the whole patient population in the original study could be sub-divided at least 20 different ways, making the p<0.05 criterion meaningless. The drug company, of course, will argue that they understand why only the female patients respond, or those with a particular mutation, but the FDA will respond: "If you really believe this explanation, why didn't you limit your earlier clinical trial to this sub-population?"
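The sub-division point is the standard multiple-comparisons arithmetic; a one-line check (treating the 20 slices as independent, which real subgroups are not, though the order of magnitude stands):

    # Chance of at least one spuriously "significant" subgroup at p < 0.05
    # among 20 independent slices of data with no real effect:
    print(1 - 0.95 ** 20)   # ~0.64, roughly a two-in-three false-positive rate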
Surely, you mean 16 joint dependent studies with p value of 0.05 to produce about a 1/3rd probability?
4 joint dependent studies come out above 4/5’s, I think?
4 studies of unknown interdependence are somewhere between at least 4/5’s and possibly as high as 99.99375%, no?
Or is there some new theory of destructive interference of statistical inference that I’ve missed in my reading?
Though to argue cross-benches, aren’t there sometimes cases where FDA results are not merely suspect, but spectacularly questionable through and through?
Sorry, not sure I understood that either.
Frank said four examples of a ‘likely’ outcome, which means likelihood in the range 0.66 to 0.9. The probability of all four coming out that way assuming independence (and the hypothesis) would be between 0.66^4 = 0.19 and 0.9^4 = 0.66. I’m not sure where “about 1/3” came from, but it is within the range. Maybe he picked 0.75 as a representative value.
For 0.95^16 you get a probability 0.44 of them all coming out as predicted. To get about 1/3 you need 21 trials. If the trials are not independent, then you can get anything from 0 to 0.95.
Or did I misunderstand what you (both) meant?
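The arithmetic in this exchange is easy to verify; a minimal check of the quoted figures (independence assumed throughout, as in the comments):

    # Probability that all N conclusions hold if each is independently correct
    # with probability p (the IPCC "likely" band is roughly 0.66 to 0.90).
    for p, n in [(0.66, 4), (0.90, 4), (0.95, 16), (0.95, 21)]:
        print(f"{p}^{n} = {p ** n:.2f}")
    # 0.66^4 = 0.19, 0.90^4 = 0.66, 0.95^16 = 0.44, 0.95^21 = 0.34 (~1/3)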
Oh, you understood me, at least in part.
I was just mathematically wrong, or at best vague.
Him, I’m still not sure I understand, though your explanations get me part way there, if we make a lot of unstated assumptions explicit.
As for a result above p=0.95 for studies each of p=0.95, I’m (as my math deficit shows) rusty, but I was sure I’d once read of ways of obtaining higher confidences from lower ones with the appropriate relationships between results, though not as a general rule.
For example, I have to condemn four criminals, or let them all go if even one is innocent, each only 5% likely individually to be guilty. My odds of setting any man free is above 95%.
Frank,
I didn’t understand your point about “more likely than not”. Can you clarify? Did you mean “about as likely as not”? If not, how did you get p less than 0.33?
Also, I wasn’t always clear whether you were talking about the probability of a hypothesis being true or the likelihood of a particular outcome being observed given the hypothesis. The IPCC seem to mix the concepts up too: their definitions talk about likelihoods, but then they often use them to describe their assessment of what appear to be hypotheses.
Sorry to be replying to my own comment – but this might be important.
It just occurred to me: what if the IPCC are *not* mixing up likelihood and confidence? What if they really *did* mean a likelihood?!
I’ll explain.
The conclusion that is causing all the fuss is:
“Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”
Note: they use the phrase “very likely”.
In box 1.1 of IPCC AR4 WG1 chapter 1, “The uncertainty guidance provided for the Fourth Assessment Report draws, for the first time, a careful distinction between levels of confidence in scientific understanding and the likelihoods of specific results. […] Confidence and likelihood as used here are distinct concepts but are often linked in practice.” They then introduce *two* different sets of terminology: one for confidence, and one for likelihood. The former would use the term “very high confidence” whereas the latter uses “very likely”.
So given their use of the words “very likely” in the conclusion, it seems quite possible that they actually meant a likelihood!
Why is this significant? Well, ‘confidence’ (or “degree of confidence in being correct”, as the IPCC put it) expresses the final belief in the truth of a hypothesis or statement, taking all the uncertainties into account. A likelihood is the probability of a particular outcome or observation **given that the hypothesis is true**.
Thus, you would expect a statement like “there will be more hurricanes in the Atlantic” to be given a likelihood, because it is the estimated probability of an outcome assuming their models are correct, while a statement like “anthropogenic CO2 caused most of the late 20th century warming” is not an observation or an outcome, but an explanation, and so you would expect it to be given a confidence.
But they didn’t! They expressed it as a likelihood, which hints that it might be some sort of back-cast prediction: that *assuming* their models are reliable, that CO2 causes so much warming, that sensitivity is high, and that the measurements and calculations of the global mean temperature anomaly are accurate, *then* the probability of anthropogenic CO2 causing more than 50% of that warming is more than 90%.
At which point, it becomes clear that all our debating the reliability of the models for predicting climate a century ahead is completely irrelevant. All they’re saying is that they’re 90% sure their models are modelling late 20th century warming as mostly anthropogenic. You still have to multiply that by your confidence in the models and assumptions and measurements to get any idea of whether you should believe it or not.
If this is true, this is an absolute masterpiece! I’ve seen weasel words before, but this ought to earn a prize!
This is a helpful analysis. I read their likelihood and confidence statements, then figured the authors weren’t using them in any consistent way.
It is possible that this is what Michael Tobis is blathering on about over at his blog criticizing my Italian flag analysis of the situation. One way of interpreting the IPCC statement is that, based on the evidence they have (climate models), it is very likely that most of the warming is anthropogenic. But the climate models say very likely that all of it is. And the confidence in this statement should be low, because of the low confidence in the contingent premises. That is not how the statement gets interpreted or sold by the IPCC. So the likelihood/confidence approach is a different way of reasoning about this than my evidence-based Italian flag. If this is correct, a real light bulb has gone off that hopefully will illuminate Tobis, Annan, etc., who are using my Italian flag analysis as proof positive that I am an idiot. (PDA, pay attention.)
Judith,
You’re smart. They’re smart. You’re not an idiot. Neither are they. This kind of taunting is annoying in my children’s schoolyard. Let not the blathering get the better of us.
I believe that Tobis, Annan et al are figuring that you’re not presenting an analysis in any consistent way. Prove them wrong.
Shall we see the beginning of a formalization of the IPCC’s understatements, or not?
Hmm. Just had a very quick look at Tobis’s blog. It looks to me like a point of pedantry about the way you phrased something (where it was obvious what you meant), and a criticism that you weren’t already familiar with the technical Bayesian machinery (hardly unusual, even among scientists), which made it difficult to figure out how your newly invented terminology matched up with the conventional language. As a result, he seems to have misunderstood what you meant. The rest is just the perfectly normal condescending snarkiness that “Team” climate scientists show to anyone who disagrees with them. That’s a tactic that might work on those who lack confidence in themselves, but it just looks immature to professional academics. I pay no attention, or regard it as a plus. It doesn’t exactly help their popularity.
But enough of that. I thought you might be interested in a way of connecting the Bayesian approach to your ‘Italian flag’. It’s based on a simple Venn diagram approach to Bayes’ theorem.
First, draw a big box that represents everything possible. Now draw a circle inside it to represent the hypothesis, model, theory, assumption or however we want to describe our beliefs about the way the world works. Label it H. Now draw a second set using a curve that overlaps about 90% of the first circle, but is a bit vague and wiggly in the part outside the circle. This represents the set of circumstances in which the outcome or observation actually happens. Label it O, and shade it in.
Now we interpret our diagram so that area is proportional to probability. (Hence, the area of the surrounding box is 1.) What we first want to know is what proportion of the surrounding box is covered by the second set O. But we only know about the bit inside the circle, H. We know that 90% of the circle is shaded: we say P(O|H) = 0.9. (This is all that the IPCC's statement means, if I'm right in my suspicion.) The intersection region, taken as a proportion of the big box, is the proportion of the circle shaded, times the area of the circle. We write this P(O|H)P(H). But we also have the shaded area outside the circle, which is the proportion of the region outside the circle that is shaded, times the area of the outside. That's written P(O|¬H)P(¬H).
We add the two together to get P(O) = P(O|H)P(H) + P(O|¬H)P(¬H). This is just the sum of the two bits of the wiggly region O.
Now let’s say we observe O. So we know that the inside of O is all there is. Given this information what is the probability that H is true? That is to say, what proportion of the wiggly region O is covered by H?
Well, we have written down the areas of both bits already. The ratio is just P(O|H)P(H) / P(O) and it represents the fraction H inside O or P(H|O). Similarly, we can look at the proportion of O that lies *outside* the circle, and get P(¬H|O) = P(O|¬H)P(¬H) / P(O).
This is all very nice, but the boundary of O outside the circle is vague and uncertain. We don't know how likely the observation is to happen if our model/hypothesis/whatever is not true. But we can circumvent this point if we divide one of the expressions we have just obtained by the other, so that P(O) cancels out.
This gives:
P(H|O) / P(¬H|O) = [ P(O|H)P(H) / P(O) ] / [ P(O|¬H)P(¬H) / P(O) ]
= [ P(O|H)P(H) ] / [ P(O|¬H)P(¬H) ]
= [ P(O|H)/P(O|¬H) ] [ P(H)/P(¬H) ]
This is just the expression I gave before.
Now we can use a trick to recover the probabilities, because we know that the areas inside and outside add up to 1. That is P(¬H|O) = 1 – P(H|O) and P(¬H) = 1 – P(H). This means that the expression P(H)/P(¬H) holds exactly the same information as P(H), and the two forms can be converted back and forth freely. Similarly with the other one.
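To make the area bookkeeping concrete, here is a minimal sketch in Python (my own illustration; the prior of 0.5 and the 0.3 for P(O|¬H) are pure assumptions, not the IPCC's numbers) that computes the posterior both directly and via the odds form, so you can check that the two routes agree:

```python
# Illustrative numbers only: a prior P(H) and the two conditional
# probabilities read off the Venn diagram above.
p_H = 0.5               # prior probability of the hypothesis H (assumed)
p_O_given_H = 0.9       # shaded fraction inside the circle: P(O|H)
p_O_given_notH = 0.3    # shaded fraction outside the circle: P(O|~H) (assumed)

# Total probability of the observation: the two pieces of the wiggly region O.
p_O = p_O_given_H * p_H + p_O_given_notH * (1 - p_H)

# Posterior directly from Bayes' theorem.
p_H_given_O = p_O_given_H * p_H / p_O

# The same answer via the odds form, in which P(O) cancels.
odds = (p_O_given_H / p_O_given_notH) * (p_H / (1 - p_H))
print(p_H_given_O, odds / (1 + odds))   # both ~0.75
```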
Now, the way I interpreted your 'Italian flag' was that the two sections inside the circle were coloured red and green, and the box outside of the circle was left white. (Think of it as the flag turned inside-out!) Since all the Bayesian stuff is now just areas of blobs on a Venn diagram, hopefully you will be able to piece together how it fits in with what you were saying. (And perhaps how Michael Tobis misunderstood it.)
To take it all a step further, you can convert the multiplication above to an addition by taking logarithms. This gives a much more intuitive picture of confidence being simply added and subtracted as you accumulate multiple lines of evidence. Convert back to a probability when you’re done.
So our ‘confidence’ before the observation is:
log(P(H)/(1 – P(H)))
Note, we convert ‘scales’ from confidence c to probability p and back using the two functions:
c = log(p/(1 – p)) and p = 1/(1 + 2^-c)
(logs are to base 2 here.)
Our confidence after the observation is:
log(P(H|O) / (1 – P(H|O)))
And the evidence added by the observation is:
log(P(O|H)/P(O|¬H))
This says that an observation is evidence to the extent that its probability changes depending on whether H is true or not.
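Here is a minimal sketch in Python of those two conversion functions (base-2 logs, as above); the specific numbers are my own illustration, not anything from the IPCC. It shows that the round trip is exact and that independent lines of evidence simply add on the confidence scale:

```python
import math

def confidence(p):
    """Probability -> log-odds 'confidence', in bits: c = log2(p/(1-p))."""
    return math.log2(p / (1 - p))

def probability(c):
    """Log-odds 'confidence' -> probability: p = 1/(1 + 2^-c)."""
    return 1 / (1 + 2 ** -c)

p = 0.75
c = confidence(p)            # log2(3) ~ 1.585 bits
print(probability(c))        # 0.75 again: the round trip is exact

# Two independent observations, each with likelihood ratio
# P(O|H)/P(O|~H) = 3, each add log2(3) bits of evidence.
prior = confidence(0.5)              # 0.0 bits: even odds
posterior = prior + 2 * math.log2(3)
print(probability(posterior))        # 0.9, i.e. odds of 9 to 1
```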
The IPCC have provided only the top part of this: P(O|H). They have no idea about the bottom part P(O|¬H), so they can’t do the rest of the maths to give a confidence in H. Of course, to be any use whatsoever as an assessment of detection and attribution, this is precisely what they would need. So it’s a marvellously misleading way of giving the impression that they have answered the question without actually having done so. Sheer brilliance!
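To put a number on that last point, a minimal sketch (illustrative figures, assuming a prior of 0.5; none of these are IPCC values): holding P(O|H) = 0.9 fixed, the unreported P(O|¬H) swings the posterior anywhere from near-certainty down to below the prior.

```python
# Hold P(O|H) = 0.9 fixed and vary the unreported P(O|~H).
p_O_given_H, p_H = 0.9, 0.5   # the prior of 0.5 is an assumption

for p_O_given_notH in (0.01, 0.3, 0.9, 0.99):
    p_O = p_O_given_H * p_H + p_O_given_notH * (1 - p_H)
    posterior = p_O_given_H * p_H / p_O
    print(p_O_given_notH, round(posterior, 3))
# 0.01 -> 0.989, 0.3 -> 0.75, 0.9 -> 0.5, 0.99 -> 0.476
```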
Dear Nullius in Verba
Excellent analysis. I think you are correct. It is all based on the underlying assumption that the models are correct. The uncertainty of modelling is therefore not included.
Moreover, consider the following sentence in AR4 WG1 Chapter 10 Box 10.2 page 798.
“Most of the results confirm that climate sensitivity is very unlikely below 1.5°C.”
However, obviously there are results that contradict this.
So, in scientific language, considering Box 10.2 in the AR4, the results span a range for equilibrium climate sensitivity between 1°C and 10°C, and that is the state of our knowledge. So calling somebody a "sceptic" or a "denier" or a "heretic" for believing that the climate sensitivity is only approximately 1°C is clearly a political argument and not based on science.
Best regards
Guenter Hess
(Un)certain(ties)
Here are 4 types and scales of (un)certainty:
1) Confidence limits applied to a limit in a convergent focus. AGW is a reality. It is uncertain how quickly AGW will proceed and what eventual path it will follow.
Please note: by considering this uncertainty, one has implicitly accepted and assumed into the background context that AGW is fact.
There is no consideration given to revisiting the possibility that AGW might not be so.
2) Uncertainty derived from an ill-formed problem.
This type of uncertainty is described in retrospect as “We were correct given the understanding that was available to us at that time. Unfortunately we had misconstrued the problem as it was presented to us. There were important aspects which we failed to consider.”
The domain of consideration is premature and incomplete. Reality (of the bigger picture) converges to a value which is exterior to the anticipated range of solutions.
3) Open ended, diverging, unbounded uncertainty.
This type of uncertainty is impossibly large. It motivates the 4th type of (un)certainty by way of inflection.
4) Certainty in the context of an open ended unknowable.
This type of perspective looks upon uncertainty/error as converging upon the certitude of mistake. “I may not know what I don’t know but I can say something with a high degree of confidence as to what I know is wrong.”
There are at least 2 certitudes of mistake in AGW science:
1) The perils of ‘Dogmatism’ as explained here
2) The Inflationary Bubble as brought about by drawing awareness to a newly identified valuable commodity.
The inflating bubble activity extends far far far beyond cap-and-trade schemes. It permeates every aspect of environmental resource allocation and usage. It is the giant which overshadows all scientific consideration.
“Separate draft guidance notes on the treatment of uncertainty, presented in Busan by IPCC working group co-chairs, suggest that, where evidence and understanding are overwhelming, IPCC authors could jettison uncertainty qualifiers altogether and present research findings as statement of fact.”
Overwhelming? Let's see if we can collectively agree on their definition of "overwhelming" with any degree of certainty.
Post normal science or bucket of cold water #3?
Well, maybe a statement like "CO2 is a gas in the atmosphere" could be allowed to pass without uncertainty qualifiers. But I agree, this looks a bit scary. Seems like they are trying to find excuses to go to higher confidence levels.
Facts: “Water vapor and carbon dioxide in the atmosphere absorb and radiate thermal energy. Clouds absorb and radiate energy.”
Anything beyond that gets into climate process and modeling details filled with a very wide range of uncertainties – from highly quantitative Line By Line radiative modeling – to trying to determine the sign, magnitude and impacts of clouds and precipitation.
Roy Spencer on the magnitude of climate model uncertainties:
How Do Climate Models Work?
With insight like that from climate experts, I will take extra precautions in evaluating climate projection statements.
Climate energy imbalance signal is below the noise level:
Spencer observes:
When the direct anthropogenic climate signal is below the noise level, we should proceed with extra caution when evaluating the “projections” of climate models. This requires stringent objective evaluation and validation using all the principles of scientific forecasting for public policy and chemical process control.
Dr Curry
Could you comment on the following :
Proxies based on the Greenland glaciers (google the GRIP project), specifically the latest 10Be records, may be highly suspect!
Here I show a direct copy of the 10Be data from
A 600-year annual 10Be record from the NGRIP ice core, Greenland
by Berggren et al. from
GEOPHYSICAL RESEARCH LETTERS, VOL. 36, L11801, doi:10.1029/2009GL038004, 2009
Received 6 March 2009; revised 20 March 2009; accepted 1 April 2009; published 2 June 2009.
http://www.vukcevic.talktalk.net/CET&10Be.htm
The 10Be data are inverted and superimposed on the CET record.
Anyone in contemporary science will tell you that this can't be (?).
Either solar output is directly responsible for England's temperature variations (except for minor deviations between England and Greenland), or the 10Be Greenland glacier (GRIP project) data are useless (?!).
I’ll flag this to look at later, I’m not all that familiar with this issue but it looks important.
Thank you.
Hi,
Interesting. Which CET dataset did you use, and how was it processed?
I know that the relationship between solar activity and the Armagh Observatory temperatures has been investigated; perhaps you should look for the Armagh data.
A casual observation is that we are governed by cloud cover; is there a proven link between cosmic rays and cloud cover modulation? Also, the attributable temperature variance looks large (stdev ~0.5°C; do you have a value?). We are maritime, so inherently sluggish, which indicates that the forcing would have to be large, e.g. >1 W/m², too big for standard solar.
Interesting and thanks.
Alex
Nature has an editorial on this issue that summarizes the situation as:
IPCC members last week considered the best way to quantify uncertainty. They are not alone in needing to do so — the media must also take a firm line when it comes to scientific reporting.
I do hope Nature regards itself as part of ‘the media’. ;)
Peterson (2006) suggests that lack of time given other competing priorities and the simplicity of using a scale in a table rather than a more thorough examination of uncertainties enabled the assessors to continue with only a minimal change to their usual working methods.
A lot of climate graphs don't have error bars on them. But the real issue here is that if the uncertainty in the models' projection output were actually displayed on the graphs, the public at large would see just how uncertain climate science is about the future of the global temperature trend, owing to the difficulty of modeling cloud feedbacks. Even the 1 to 3°C of warming overstates certainty, since it is 2°C plus or minus 50%.
There is uncertainty due to the IPCC's use of the argument from authority; see William M. Briggs's posts.
What confuses me here is that uncertainty and risk seem to be attempts at quantifying ignorance (or, maybe more accurately, limited understanding). Without getting too 'Dick Cheney' here, quantifying an unknown in an objective way seems to be an impossible task. The "expert opinion" is essentially the subjective attempt to resolve this. The best way to resolve this issue seems to be to bring the gaps in our knowledge to the forefront and guide the scientific community to resolve these issues with further research. The IPCC do not seem interested in this process. Unknowns appear to be described and then put away to make room for the AGW theory. The IPCC documents are surely intended to drive policy, and the authors (the vocal ones at least) all seem to have a very definite idea about what direction that policy should be moving. Why cloud the issue with uncertainty?
Judith, this may be too late, but with regard to your Climatic Change article I'd love to see a paper focussed around the deconstruction of the IPCC statement below, the more confrontational the better! ;)
“Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”
(Minor point: I had the understanding that "most" in this sentence effectively meant all (90-100%). Where do you get the idea that the IPCC allow this to be 51-95%? CAGW seems to demand that it mean all.)
What is evident is that the major players disregarded natural influences such as the PDO and AMO etc., without full understanding even being there now, twenty years on.
How can the public believe that CO2 is the culprit when the experts have disregarded anything that does not fit the conclusion they set out to prove?
The next ten years will make things a lot clearer.