by Judith Curry
I am currently digging into the treatment of uncertainty in the IPCC AR5, pursuant to Part I and the paper that I am writing for Climatic Change.
Comparison of AR4 and AR5 treatments of uncertainty
I spotted this interesting document that compares the treatment of uncertainty in AR5 vs AR4. Some key changes:
Evidence and Agreement. The AR4 guidance (paragraph 12) presented calibrated language to describe the amount of evidence and degree of agreement regarding a finding in qualitative terms. The AR5 guidance (paragraph 8) extends this approach to incorporate explicit evaluation of the type, amount, quality, and consistency of evidence, with a modified set of summary terms. Author teams are instructed to make this evaluation of evidence and agreement the basis for any key finding, even those that employ other calibrated language (level of confidence, likelihood), and to provide a traceable account of this evaluation in the text of their chapters.
Confidence. The AR4 guidance (paragraph 13) presented quantitatively calibrated levels of confidence intended to characterize uncertainty based on expert judgment regarding the correctness of a model, analysis or statement. The AR5 guidance (paragraph 9) retains these terms, but no longer defines them quantitatively. Instead, levels of confidence are intended to synthesize author teams’ judgments about the validity of findings as determined through their evaluation of evidence and agreement, and to communicate their relative level of confidence qualitatively.
Likelihood. The AR4 guidance (paragraph 14) presented the quantitative likelihood scale, to be used when describing a probabilistic assessment of a variable or its change, or some well defined outcome having occurred or occurring in the future. The AR5 guidance (paragraph 10) retains this scale, more explicitly instructing authors to base likelihood assignments on quantitative analysis and noting that three additional terms were used in AR4 in limited circumstances and may be used in AR5 when appropriate. The AR5 guidance also is more explicit about the relationship and distinction between confidence and likelihood, and encourages the presentation of more precise probabilistic information (e.g., percentile ranges, probability distributions) instead of likelihood when possible.
Annex B of the doc presents their response to the IAC review committee.
For reference, the latest draft of the AR5 uncertainty guidelines that I can find are in Appendix 4 of this doc.
JC’s draft paper
Well, I think the AR5 guidance for treating uncertainty definitely takes some steps in the right direction; it remains to be seen how these guidelines are actually applied. But I am getting the impression that the issue of uncertainty is being taken more seriously by the IPCC.
My (tardy) paper for the special issue of Climatic Change is still in draft form; when it is complete I will post it for comments. The section headings that I have tentatively decided on are:
Reasoning About Climate Uncertainty
1. Introduction
2. Framing of the climate change problem
3. Uncertainty, ambiguity, indeterminacy, and ignorance
4. Consensus, disagreement and justification
5. Reasoning about uncertainty
6. Conclusions
The target length is 3000 words, which is pretty tight to summarize all of my concerns. Note that in preparing to write this paper, I went through the previous uncertainty threads and found many useful comments. I look forward to your further ideas and input on this topic.
Moderation note: this is a technical thread, comments will be moderated for relevance.
Confidence? Likelihood? Expert opinion?
How eminently fudgeable! To call these vagaries “quantitative” is risible.
I came away with the same thought. Although some of it looks like a nod to rigour, most of it looks like a framework for more consistent handwaving.
A vexing challenge will be to tailor probability/uncertainty estimates to meet the needs and expectations of two different audiences – policymakers and media who respond to the SPM, and scientists who hope to assimilate the material in the main reports.
I’m not sure there is any perfect way to address the SPM audience. In general, they are averse to the notion of uncertainty – they don’t want to be told what “might be” but what “is”. (I’m reminded of a complaint expressed by the late Senator Ted Kennedy, who wished that there were more one-handed scientists among those who testified to Congress. Too many, he lamented, would remark “We believe the outcome will be such and such. On the other hand…”)
It might be best for the SPM to provide relatively simple LOU statements and probability or confidence estimates, accompanied by advice to visit the main reports and their references for a detailed picture of how these values were arrived at. In fact, if the SPMs are “one-handed” enough, the extent of uncertainty in the main reports, however broad, will probably have little impact. In any case, the recommendations cited in the new guidelines seem reasonable for the main reports, but it would be optimistic to believe they will silence controversy or claims of bias.
Fred, my take on this situation is that once the scientists do a really good job of characterizing the uncertainty in a rigorous and traceable way, it isn’t too difficult to figure out how to communicate this to the public (and anyone who needs more info or is confused can go back to the detailed uncertainty analysis in an appendix or whatever). As it stands, too much expert judgment and gut feeling goes into the uncertainty assessments, by people who are ever aware of the politicization of the subject.
they don’t want to be told what “might be” but what “is”.
I run into this in my line of work all the time – it can be a tightrope, but refusing to offer false certainty is key. Giving someone a certain answer where none exists will eventually backfire.
it would be optimistic to believe they will silence controversy or claims of bias
Agreed, but the goal should be to minimize the reasonable controversy. There will be 9-11 conspiracy theories until the end of time, but those that spout them don’t seem to carry much credibility with the general public (even when the nutjobs are fairly well known).
I think the ‘one-handed’ quote was originally Harry Truman’s, about economists.
One of the changes recommended is to include more quantitative probability estimates. While this approach has the clear advantage that the readers are likely to understand better the message that the authors wish to transmit, I find the change very problematic. Most of the important uncertainties cannot be expressed using objectively determined probabilities or pdf’s, as they are to a significant part due to other than statistical causes.
The recommendation may lead to the use of probabilities, when this gives a seriously wrong picture of the nature and extent of the uncertainties. Consequently the change may well be a step in the wrong direction.
The real problems of using science as support of decision making are not helped by this change.
I agree. There is a failure to recognize scenario uncertainty, and attempts such as making a pdf out of sensitivities are misleading.
http://judithcurry.com/2011/01/24/probabilistic-estimates-of-climate-sensitivity/
However, I think better quantitative estimates of, say, the error in surface temperature measurement and of integral values such as average global temperature are very much needed.
As an additional potential negative consequence, I think that the change would just strengthen the present problem that public discussion concentrates to a large extent on fighting over whether the IPCC’s estimates of the physical changes are justified or not, while it should concentrate much more on how to make decisions in the presence of very large uncertainties in the other factors related to potential policy decisions. Adding more (imaginary) accuracy through additional numerical estimates to the best-known part is likely to make the situation even worse.
There is a well known tendency to give too much weight in all decision making to the best known factors and too little to the more difficult ones even when they are finally more important.
”There is a well known tendency to give too much weight in all decision making to the best known factors and too little to the more difficult ones even when they are finally more important.”
The most important are sunlight and albedo. These are both technologically easy to change and well known.
Passerby late in the evening sees a man frantically looking around the base of a streetlight. “Can I help, my good man? Did you lose something?”
“Yesh. Los’ my watch. Over there in th’ road.”
“But why are you looking here, then?”
“Light’sh [hic] better.”
from Chapter 27 of the pioneering volume Philosophy of Science as Explicated in Very Old Jokes (Herkimer & Sons, 1927)
Pekka – What do you mean by “objectively determined probabilities”? Are you referring to countable events, like the frequency of “heads” in a coin toss? I have a sense that very few climate probabilities can be based on a historical record of the frequency of occurrences, and that many relevant probabilities entail outcomes that have never happened before. Can these be objectively determined? If not, how should they be dealt with?
Fred,
I used the expression to describe anything on which more or less every knowledgeable person would agree when trying to avoid bias. This may require that the sources of uncertainty can be described quantitatively using methods of statistics or from the technical limitations of the methods used.
As soon as expert judgments are the decisive basis for the estimates, I have great doubts on the justification of numerical estimates.
“Most of the important uncertainties cannot be expressed using objectively determined probabilities or pdf’s, as they are to a significant part due to other than statistical causes.”
Those are not uncertainties. Those are ignorance, and should be presented as such.
Good point, ignorance plays a big role in the paper I am writing. Ignorance isn’t just greater uncertainty; it is almost an orthogonal concept. However, anything that you can’t put a pdf to isn’t necessarily ignorance; e.g. scenario uncertainty shouldn’t be characterized by a pdf, but it is certainly not an instance of ignorance. And in between uncertainty and ignorance is indeterminacy (which is a concept that relates to known areas of ignorance).
The ‘one-handed expert’ comment attributed to Ted Kennedy was actually formulated by President Harry Truman, in relation with economists.
Mmm. What, really, is the point of all this? If it’s in the context of climate science, fine. If it’s in the context of public policy, er, not fine.
You don’t need 3000 words to deal with it if we’re talking about public policy. I believe the reason you started this blog was that you have concluded that the answer is ‘we don’t really know’. We certainly don’t know with enough confidence to beggar ourselves and attempt to morally blackmail a bunch of developing nations into continuing to be beggars.
Look back at that post you made about the scale of the alternative energy challenge. Unless you’re 99.999% certain that something very bad is going to happen, no-one is actually going to be willing to engage in the massive spending required. (This is why the catastrophists know that they have to wildly overstate how bad it will be and their certainty it will happen. e.g. Hansen).
I keep thinking with your blog that either the whole subject is only of interest to climate scientists and climate science groupies (because it has no direct relevance for public policy) or you would have to drastically re-think your degree of uncertainty (become a born-again catastrophic AGW alarmist). You seem to be continually on the verge of saying ‘It’s OK, just ignore us until we sort the science out’. (Which will be maybe 50 years). But no-one wants to move their own specialty from the center of public attention and spending back to being … irrelevant. Do they?
Well, this is not the message I have been trying to get across, you may not have read my series of posts on uncertainty (look at the category uncertainty below the list of recent posts.) My overarching concern is that overconfidence by the IPCC is torquing the science in a narrow direction, and is bad for the policy making process.
Would it be fair to characterize the state of play as, “The things we think we have a good handle on are not the factors that would drive policy, and the factors that would drive policy are in areas where a lot of work needs to be done?”
The biggest problem is no one can say what the effect of making any change will be. Increase the CO2 use by ocean organisms? Might throw us into an ice age. If you can’t predict, you can’t control, and nobody can predict yet.
We can’t predict much but we continue to have traffic laws, tax laws, medical regulations etc.
The mental image I have is of riding in a truck down a long, steep grade. A sign says, “Bridge Out”.
Which is the rational thing to do: treat the sign as a joke or start to slow? The climate change deniers always treat the sign cavalierly and dismiss any effort for caution as irrational.
That’s begging the question. The disagreement is whether there is a sign at all, and what it says if there is one, and how well it describes the actual hazard. Additionally, you use a faulty analogy in the first place.
While we’re citing logical errors, all you’ve done is sweep the dirt into another corner.
It’s worse than that. The area is well known for highwaymen, and he sees lots of covert movement and gleams from weapons behind bushes and trees next to the road. So the proper response is: “Floor it!”
No.
Judy – As you know, I respect your expertise in this and other areas, but when it comes to our level of confidence that anthropogenic greenhouse gas emissions are a major contributor to global warming, I believe the evidence supports a level higher rather than lower than what I view as a conservative IPCC estimate. My own estimate would entail a probability value >99 percent.
This conclusion rests on a concept that is itself hard to subject to probability quantitation, but is critical, I believe, for accurate evaluation. In essence, probabilities of relatively low magnitude, when derived from independent approaches to a problem, can converge to yield very high overall probability. For climate science, I would cite, among other items, the paleoclimatologic data from many different eras, the data from the past century, the physics of radiative transfer, the ARM measurements, satellite quantitation of water vapor, the failure of models to reproduce trends without including anthropogenic forcing, and the need to accept high-end estimates for alternative climate drivers to deny a significant role for anthropogenic forcing.
A simple example illustrates the general principle. Imagine a hypothesis tested by 10 independent techniques, each yielding a probability value of 0.5 (50% probability) for the truth of the hypothesis. The strength of each result is weak. However, the combined probability that all 10 are wrong would be 2^-10, leaving a 1023/1024 likelihood that the hypothesis is correct.
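The combined-probability arithmetic in this hypothetical can be checked mechanically. This is a toy sketch of the 10-study example only (the function name and numbers are illustrative, not drawn from any climate analysis), and it assumes full independence, as the comment stipulates:

```python
# A toy check of the 10-study example above: each line of evidence supports
# the hypothesis, but each is only 50% likely to be correct on its own.
# Assuming full independence, the hypothesis is wrong only if every line
# is wrong at once.
from fractions import Fraction

def combined_support(p_each, n):
    """P(hypothesis correct) = 1 - P(all n independent lines wrong)."""
    return 1 - (1 - Fraction(p_each)) ** n

print(combined_support(Fraction(1, 2), 10))  # 1023/1024
```

Exact rational arithmetic makes the 1023/1024 figure transparent; the whole result hinges on the independence assumption, which is debated in the replies.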
Of course, independence is critical for evaluations of this type. If the 10 approaches all tested the same thing, the 0.5 value for each would translate into 0.5 overall – a 50% chance of being either right or wrong. This principle will undoubtedly create disagreement as to the independence of separate approaches to climate problems, with some approaches likely to fall into the semi-independent category. On the other hand, the list I cited above comprises dozens to hundreds of unique studies with varying degrees of independence.
Overall, thousands of separate datasets have contributed to a final estimate, and I find it unlikely that most of them warrant a p value as low as 0.5. Much of the criticism of consensus estimates entails challenges to the certainty of individual studies. The argument appears to be, “Study A clearly could be wrong (for reasons specific to that study), as could study B, and study C. With all those uncertainties, how can we have any confidence in the consensus view?”
Even if those challenges are quantitatively valid, they leave unexamined the importance of convergence. When more than one thousand datasets converge toward a conclusion, convergence is likely to be the overriding determinant of the probability of a conclusion. Ignoring it, either from unawareness or design, will almost certainly yield highly erroneous results.
Fred your argument would be more convincing if you considered the counter argument, that 20th century climate variability can be explained by natural causes, and allow proponents of the alternate explanation to provide evidence for their arguments. There are multiple lines of evidence to support the alternative explanation (we can argue about how good they are, in the same way that we can argue about the quality of evidence on the other side).
The theory of argument justification (e.g. Betz 2009; see also the Tesla doc) invokes counterfactual reasoning to ask the question “What would have to be the case such that this thesis were false?” The general idea is that the fewer positions supporting the idea that the thesis is false, the higher the degree of justification for the original thesis.
Argument justification requires that you consider both sides of the argument. And you should let proponents of the other side of the argument produce the evidence supporting the other side of the argument. For example, how can you dismiss the role of solar variability when our level of understanding of solar forcing is characterized as “low” and indirect solar effects are in the territory of borderline ignorance? Is natural internal variability a more convincing explanation for the 1940’s temperature bump and the subsequent cooling than kludged aerosol forcing? You get the picture.
In a Bayesian analysis with multiple lines of evidence, you could conceivably come up with enough multiple lines on both sides to produce high confidence levels. This is called the ambiguity of competing certainties. If you acknowledge the substantial level of ignorance surrounding this issue, the competing certainties disappear (this is the failing of Bayesian analysis, it doesn’t deal well with ignorance) and you are left with a lower confidence level. So the multiple lines of evidence argument isn’t convincing unless you make a serious effort to look at the argument “what would have to be the case such that this thesis were false?” and consider multiple lines of evidence from the other side.
This idea has a long heritage in philosophy and the scientific method. From the wikipedia: http://en.wikipedia.org/wiki/Regress_argument
Another escape from the diallelus is critical philosophy, which denies that beliefs should ever be justified at all. Rather, the job of philosophers is to subject all beliefs (including beliefs about truth criteria) to criticism, attempting to discredit them rather than justifying them. Then, these philosophers say, it is rational to act on those beliefs that have best withstood criticism, whether or not they meet any specific criterion of truth. Karl Popper expanded on this idea to include a quantitative measurement he called verisimilitude, or truth-likeness. He showed that even if one could never justify a particular claim, one can compare the verisimilitude of two competing claims by criticism to judge which is superior to the other.
I disagree. The counter argument becomes strong only if confirmed by the same type of convergence I cited above. The possibility of a dominant solar contribution, for example, would translate into a high probability only if thousands of different datasets derived from independent approaches to solar variability all supported that explanation with at least moderately high probability for each approach. That isn’t the case.
In fact, solar, chaos, LIA “recovery”, or all other invoked competitors illustrate exactly the point I was trying to make. Even if each of those is a plausible possibility, they simply represent the other half of the 50% example in my above comment, and would fail to make a dent in the 1023/1024 probability in that hypothetical example. From a probabilistic perspective, invoking a non-trivial probability for individual alternatives makes no difference. The only thing that would matter would be a convergence of evidence in support of one or more of the alternatives. That hasn’t happened, and so I believe the very high estimate I cite appears to be supported on the basis of current data.
That is why I used the 50/50 example to illustrate the non-comparability of converging evidence on one side vs non-converging data on the other. The converging side overwhelms the alternative possibilities and largely determines the result.
I would certainly welcome alternative views on this, but it would be helpful if they were illustrated by quantitative example of competing hypotheses.
Ok, why do you think it is >50% and not 40%? Is 40% all that implausible? I suspect the real answer is somewhere between 20 and 70%; we can put a bound on this that seems reasonable. But greater than 90% confidence in the likelihood of greater than 50%? Very hard to justify IMO. The IAC review of the IPCC recommended that the probability scale for confidence used in AR4 is inappropriate; I agree.
If the probability for 10 converging datasets is 20 percent for each, the overall probability is no longer 1023/1024, but 89.3 percent, which still strongly supports the hypothesis.
Ok, do the same analysis for the other argument, and let me know what you come up with. Better yet, let somebody who is convinced by the other argument come up with the evidence that supports their argument. On the italian flag thread, we discussed:
1 a) How much information would you ideally wish to have in order to be confident in providing a judgement in support of, or against, the proposition?
1 b) In relation to the hypothetical ideal, how much information do you actually have on which to base a judgment?
2 a) Assuming that the evidence is of high quality and trustworthy, what support does it give to the dependability of the hypothesis?
2 b) How much faith do you have that the information on which you have based your judgement is of high quality and trustworthy?
Confidence in your belief is given by 1 – (1b/1a)(2b/2a). NO WAY do I come up with a number that is > 90%.
The 89.3 percent value is 1 – 0.8^10, where 0.8 is the probability that the alternative is correct in each case.
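As a quick numerical check of that arithmetic (again a toy calculation, using only the numbers from this hypothetical):

```python
# The same combined-probability arithmetic for the weaker case: each of the
# 10 datasets is only 20% likely to be correct on its own, so 0.8 is the
# probability that the alternative is right in each case.
p_alternative_each = 0.8
n_datasets = 10
combined = 1 - p_alternative_each ** n_datasets
print(round(combined, 4))  # 0.8926, i.e. roughly 89.3 percent
```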
I don’t think the principle I describe is very vulnerable to challenge per se, although I would be interested in how it might be challenged. Rather, weaknesses in the argument will involve the degree of independence of separate modes of evaluation. My claim would be that with thousands of separate evaluations, it is unlikely for there to be insufficient independence to sustain a high probability.
Fred-
“The 89.3 percent value is 1 – 0.8^10, where 0.8 is the probability that the alternative is correct in each case.”
This is, of course, subject to the restriction that you show these 10 are independent of each other. How do you intend to do that?
Yes, I made the earlier point that independence was a requirement, and so this hypothetical example was based on the assumption of independence.
Right, you assume independence and use the result above to say it couldn’t reasonably be other than AGW. So far you’re just talking about a theory which can’t be used for this situation.
I assumed that questionable trustworthiness would be part of the reason for assigning a probability as low as 20 percent to a particular dataset, and therefore not a separate consideration.
Below, in response to other comments, I will try to make my point clearer. In particular, I assert its validity as long as challenges to any individual evaluation do not consist of positive evidence for an alternative but merely assert that one or more named alternatives are possible because the original hypothesis can’t be assigned a very high probability. I should have clarified that above. If, for example, a high positive probability can be shown for solar variation excluding a significant CO2 role, then the role of CO2 becomes significantly less probable. If the challenge is simply that “you haven’t proved it’s CO2 – it might be something else”, and no evidence for the “something else” is proffered – then the role of CO2 has not been diminished but merely remains in need of further confirmation from other studies. I’ll illustrate with examples below.
Continuing – to help readers intuit what I’m trying to describe, consider solar changes and chaotic fluctuations as competitors with CO2 at 50 percent probability each. The reason they don’t compete well with CO2 is that they compete with each other rather than supporting each other. The strength of support for any hypothesis grows to the extent that independent evaluations support the same result.
And here you’re referring to statistical independence. I’m not sure this has been shown.
I illustrated what I had in mind regarding independence in a response below to Brandon – Comment
Fred your argument would be more convincing if you considered the counter argument, that 20th century climate variability can be explained by natural causes, and allow proponents of the alternate explanation to provide evidence for their arguments. There are multiple lines of evidence to support the alternative explanation (we can argue about how good they are, in the same way that we can argue about the quality of evidence on the other side).
Part of the case Fred made is that we are unable to account for late 20th century warming by pointing to “natural” forcings. In any case, it is up to the proponents of this counter argument to make their case – Fred is in no way preventing them from presenting their evidence. As for your “multiple lines of evidence”, where are they?
Well, for starters the proponents of the counter argument have been institutionally marginalized by the IPCC. And a reminder, the counter argument needs to explain more than 50% of the variability by natural causes (not 100%).
Well, for starters the proponents of the counter argument have been institutionally marginalized by the IPCC.
Nice attempt at diversion, but no dice. The proponents of the supposed counter arguments have plenty of space in which to stake their claim.
And a reminder, the counter argument is to explain more than 50% of the variability by natural causes (not 100%).
Technically not true, because the total warming effect is not strictly limited to 100% of observed warming, i.e. we know that at least some warming has been offset by the effects of aerosols and the recent pronounced solar minimum.
But in any case, please go ahead and provide an explanation which accounts for >50% of warming through specific “natural” causes.
The proponents of the supposed counter arguments have plenty of space in which to stake their claim
Where is this supposed space? Which journal(s)? I believe Fred is claiming that the thousands of papers/evaluations that assume warming due to CO2 should be assumed to be independent evaluations without regard to the lack of publishing opportunities for alternatives. This is problematic for 2 reasons – first because nearly all of those papers are written with CO2 warming as a basic assumption, which means that they’re not independent. And secondly because, as Dr Curry indicated, the IPCC has skewed the debate such that alternatives have been considered by only a small percentage of qualified scientists – therefore the alternative papers/evaluations are fewer in number as well as being, in many cases, refused publication. As I said last night – numbers are not a valid criterion. There may well be different approaches buried in those “thousands” that would qualify as “independent”, but one would have to justify each one as to its “independence”. Given the “consensus” as a base from which most of those papers are generated, I think that would be extremely difficult in practice – with a vanishingly small probability of success.
Technically not true, because the total warming effect is not strictly limited to 100% of observed warming , ie we know that at least some warming has been offset by the effects of aerosols and the recent pronounced solar minimum.
I think not. The total warming effect IS 100% of observed warming. The offset you appeal to is a part of that warming and the effects of aerosols and the recent pronounced solar minimum are integral to the system that produced the warming – not outside of the system or separately accountable.
“In a Bayesian analysis with multiple lines of evidence, you could conceivably come up with enough multiple lines on both sides to produce high confidence levels. This is called the ambiguity of competing certainties.”
Judy – In rereading your comment, I noticed that statement. It’s intriguing, but could you elaborate? In line with my principle of convergence, one can do a simple Bayesian calculation, for example, where anthropogenic causality (A) and some other mechanism (?solar forcing – S) are assigned equal prior probabilities of 0.5, and where 999 independent studies support A at a p value of 0.5 and one supports S at 0.5. This yields a probability of the truth of A close to 1.0. It would remain close to 1.0 until the number of S studies rose considerably. Is this type of weighting of the pro and con elements what you are referring to?
Fred,
In the Bayesian approach we compare two alternatives A and S, preferring A if the empirical result is more likely to be observed when A is true than when S is true. The fact that S merely allows for the result is of lesser value than the fact that A predicts it. If in situations A and S the result is one of a set of equally possible outcomes, then A is preferred if this set is smaller for A than for S.
In the Bayesian approach we do not start by looking at the arguments and declaring whether they support A or S. Instead we take each study separately and check what the likelihood of its realized outcome is if the hypothesis A is valid, and what its likelihood is if S is valid. The ratio of these likelihoods transforms the prior probabilities into the posterior probabilities.
Pekka – I analyzed the hypothetical data in the manner you appear to describe, utilizing empirical results from 999 hypothetical A studies and one S study. The A outcome is predicted with p close to one, and the S outcome with p close to zero. This is because S would be extremely unlikely to yield 999 independent results supportive of A, whereas it would not be improbable for A to yield 999 results favoring it with only one favoring S.
For simplicity, I made it a calculation with assigned probabilities rather than pdfs, but I don’t think that changes the concept. The calculation, based on the 999/1000 dataset (D), is that
P(A|D) = [P(D|A) × P(A)] / P(D). I assigned A and S equal prior probabilities of 0.5, so P(A) = 0.5. P(D) will be close to 0.5, because it will be close to the probability of A, with S (the other 50 percent prior probability) very unlikely to yield D – i.e., only 1/1000 outcomes in its favor. P(D|A) will be close to one, because if A is true, the probability it will yield an outcome in its favor in 999/1000 attempts is close to one.
What also follows is the main point I have been trying to make, but perhaps less clearly than I intended. The numerical quantity of results favoring one outcome vs another is critical when all results are associated with a moderate level of uncertainty. If there were 10 S results, the probability of A would remain very high, but it would diminish significantly with 200 S results out of 1000.
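The 999-out-of-1000 example can be sketched as a standard Bayesian update. The per-study likelihoods below (each study favors A with probability 0.75 if A is true, 0.25 if S is true) are hypothetical numbers chosen purely for illustration, not values from the thread:

```python
import math

def posterior_A(k_for_A, n, p_given_A=0.75, p_given_S=0.25, prior_A=0.5):
    """Posterior P(A | k of n independent studies favor A), assuming each
    study favors A with probability p_given_A if A is true and p_given_S
    if S is true (binomial likelihoods; the binomial coefficient cancels
    in the ratio, so it is omitted)."""
    log_like_A = k_for_A * math.log(p_given_A) + (n - k_for_A) * math.log(1 - p_given_A)
    log_like_S = k_for_A * math.log(p_given_S) + (n - k_for_A) * math.log(1 - p_given_S)
    # Subtract the max before exponentiating so large n doesn't underflow.
    m = max(log_like_A, log_like_S)
    like_A = math.exp(log_like_A - m)
    like_S = math.exp(log_like_S - m)
    # Bayes' rule with the two likelihoods and the prior.
    return prior_A * like_A / (prior_A * like_A + (1 - prior_A) * like_S)

print(posterior_A(999, 1000))  # 1.0 to machine precision: 999/1000 is wildly unlikely under S
print(posterior_A(500, 1000))  # 0.5: an even split returns the prior unchanged
```

How quickly contrary results erode the posterior depends entirely on the assumed per-study likelihoods, which is exactly the subjective modeling problem Pekka points to: two scientists with different likelihood models can reach very different posteriors from the same data.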
Fred,
Fine. I think it was not totally clear from what you wrote, although the discussion has been precisely on the correct way of interpreting the significance of empirical data and other arguments.
There has been discussion on the importance of the prior, but with a large number of independent observations the correct modeling of the likelihoods resulting from the alternative hypotheses tends to become more important. In the absence of a well defined physical model to connect the hypothesis to the full set of possible observations, this is also a very difficult factor, one which may involve a lot of subjective influence – and lead two scientists to very different results based on the same empirical data.
fred this is part II of the italian flag (hope to get to it soon).
I’ll look forward to that, Judy. I believe the convergence principle I enunciated has been insufficiently utilized in assigning confidence levels to conclusions about anthropogenic influences. If adequately included, it would raise those levels above IPCC estimates. I’m dissatisfied with the job I’ve done on this thread in describing the principle, because my explanations have been scattered into too many comments, none of which is complete. I don’t want to belabor the point further here, therefore (except in response to comments from others), I’ll wait for the new thread.
2^-10 is the probability the hypothesis is wrong, leaving the 1023/1024 probability it is correct.
I think you have this wrong. 2^-10 is the probability that every experiment will show the hypothesis wrong. 1-(2^-10) is the probability that at least one experiment shows the hypothesis to be true.
I wasn’t referring to experiments with a yes/no answer. My calculation and yours come out to the same value, but for different reasons. In my hypothetical example, none of the 10 studies is definitive. Each ends up with only a 50 percent confidence that the conclusion is correct. (For example, a study may find that the conclusion is correct if every measurement is accurate to within 1 percent, but also may be unsure about some of the measurements – hence the less than 100 percent confidence).
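For concreteness, the number both commenters arrive at (by different routes) is just this arithmetic; whether the independence assumption behind it is justified is what the rest of the thread disputes:

```python
# Ten independent lines of evidence, each with only a 50% chance of
# pointing the right way; all of them are wrong together with probability
p_all_wrong = 0.5 ** 10                  # 2^-10 = 1/1024
p_conclusion = 1.0 - p_all_wrong         # 1023/1024, Fred's figure
```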
If a study ends up with a 50% confidence that the hypothesis is correct based on some measurement, how would you characterize the other 50% confidence? That the hypothesis is incorrect if reality is outside of the measurement, or just that there’s no knowledge of the truthfulness of the hypothesis?
If it’s that the hypothesis is incorrect, your reasoning should work in both directions and show high confidence that the hypothesis is incorrect simultaneous with high confidence that it is true.
In the no knowledge case, I’m really not sure how to handle this. A quick search finds:
http://en.wikipedia.org/wiki/Dempster%E2%80%93Shafer_theory
which seems to address this. This talks about a confidence having a lower bound of belief (strong evidence the hypothesis is true) and an upper bound of plausibility (based on evidence that another hypothesis is true), but uses the idea that confidence from each study is spread over a known set of hypotheses. Unfortunately in Dempster, “The normalization factor has the effect of completely ignoring conflict and attributing any mass associated with conflict to the null set.”
Since I saw elsewhere you used this syntax, how would you write the result for an individual study? P(A+|S1) = 0.5 (The probability of A being true is 0.5 based on performing the study S1) or P(S1+|A+) = 0.5 (The probability the study S1 shows A is true is 0.5 assuming A is true). The latter seems like a representation of confidence, but still needs to be multiplied by P(A+).
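The Dempster–Shafer framing linked above can be made concrete. In the sketch below each study assigns mass 0.5 to A and mass 0.5 to the whole frame {A, S} (“no knowledge”) – the second reading asked about; the masses are hypothetical.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: multiply masses, intersect focal sets, then
    renormalize away the conflicting (empty-intersection) mass."""
    combined, conflict = {}, 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

A = frozenset({"A"})
THETA = frozenset({"A", "S"})      # total ignorance: A or S

# one study: 50% belief in A, 50% "no knowledge at all"
study = {A: 0.5, THETA: 0.5}

m = study
for _ in range(9):                 # combine ten such studies
    m = dempster_combine(m, study)
```

Combining ten such studies yields belief 1 − 2^-10 in A – Fred’s 1023/1024 – while the remaining 2^-10 of mass stays on “don’t know” rather than on “A is false”, which is exactly the distinction between the two readings.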
Fred
Per Judy’s argument, look at the presentations by David Archibald and consider what it would take to disprove each of them.
On your example of 50% for 10 different ways of measurement, the converse would be that if this were a decision between two options, and the other option had the other 50%, then each would be equally likely given the data, and the 1023/1024 would be a false analysis.
How do you know you are seeing causation rather than correlation?
What if natural changes caused the change in clouds, which caused the change in ocean temperature, which caused the major CO2 increases?
What tests can you provide to prove which is the cause and which the consequence?
e.g. temperature leads/CO2 lags in the ice core data.
David – If you consider my comment carefully, you will see that my conclusions don’t require disproof of alternatives. Even if every alternative among the numerous ones that have been discussed is assumed to be 50 percent probable, that would not change the result, which emanates simply from the mathematics.
Regarding correlative evidence, it is only one part of the total data converging to substantiate a significant role for CO2. However, regarding your specific point, the ice core temperature changes can only be adequately explained by postulating that after initial orbital forcing, the resulting temperature response triggered secondary feedbacks that sustained and perpetuated it. It can be shown that rising CO2 resulting from a warming ocean suffices to provide a mechanism for the heightened and persistent rise in temperature.
This point illustrates one of the principles I tried to enunciate. Other evidence regarding CO2 forcing is entirely independent of ice-core data and their interpretation. It is this independent convergence toward one particular conclusion – in this case involving CO2 – that magnifies the probability estimate.
I didn’t state that the way I intended. Even if in every circumstance the possibility of an alternative were 50 percent, the overall result would not change. Positive evidence for an alternative would in fact diminish the probability that the original proposition is correct. I explain in more detail below why this has little relevance for most evidence regarding anthropogenic contributions.
Fred,
I’m afraid that just doesn’t make sense.
The probability of a paper being correct is in no way affected by the results of other papers.
That would only hold true if every possible paper which could possibly be written, for and against, had been written and published.
Obviously, that scenario cannot remotely exist except as a thought experiment.
In any case, the probability is that of the paper’s findings not being spurious, and not a measure of how much it ought to be believed.
Fred
If (AND ONLY IF) you had n cases, each showing a 50% probability of anthropogenic warming, then I agree with you on the mathematical consequences of 1-2^-n.
However, the issues I raised throw all that out the window. e.g.
“What tests can you provide to prove which is the cause and which the consequence?”
Without identifying cause and effect the rest is meaningless.
Foundationally this means we need to discover and quantify ALL significant natural causes vs consequences before you can make that argument.
The actual physics could very well be just the opposite of your conclusion. The cloud uncertainties alone are so great that we do not know even the sign or which is the cause and which the effect.
Until you grapple with that, the rest gives the appearance of great confidence – but has little meaning.
The Aristotelians could easily have given hundreds of examples of probabilities from Aristotle – but Galileo’s evidence showed them all wrong.
See my further comments on bias etc. at Consilience of Evidence
http://judithcurry.com/2011/02/17/on-the-consilience-of-evidence-argument/#comment-43943
Fred
“Even if every alternative among the numerous ones that have been discussed is assumed to be 50 percent probable, that would not change the result, which emanates simply from the mathematics.”
The math ONLY holds if the 50% probable is all for one cause and the other 50% is spread over numerous causes. If all n examples are 50% between only 2 causes, the final probability is still 50:50, NOT 1-2^-n.
The lemming effect does not make for accurate science.
You only refer to recent ice core data. To accurately quantify natural causes, you have to also explain why in older ice core data the CO2 lags the temperature (not leading it), and why an order of magnitude higher CO2 correlated with substantially reduced temperatures, NOT higher temperatures.
Without understanding those differences, the argument over modern CO2 and temperature is but an argument from ignorance, with possible correlation, but not causation, of AGW CO2 with temperature.
Fred, I think there are several problems with your statistical model. (The fact that it yields such strange results should be a warning.)
First you are confusing evidence with tests. The fact that some piece of evidence is consistent with AGW is not a test of the hypothesis. A test requires a prediction that is made before the evidence is known.
Second, almost all of what you seem to be citing is not in fact evidence for AGW. For example, the core fact that temperatures supposedly rose during the 20th century is not evidence for AGW, simply because there are other possible explanations. The only actual evidence for AGW is that which is not evidence for any of these other hypotheses. That is, if event E is consistent with, or implied by, both hypotheses H1 and H2, then E is only evidence for the proposition (H1 or H2). E is not evidence for H1, nor for H2, alone. This fallacy is everywhere, with people claiming that warming alone is evidence for AGW. It is not.
Third, you also have to include all the evidence against AGW in your so-called convergence. Every year in which CO2 levels rise and temperature does not, for example, is arguably a test that failed.
In short, if life were this simple, then inductive logic would be easy. Every accused person would be convicted. However, this flawed model certainly explains why you are so convinced that AGW is true.
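David’s second point can be put in Bayes-factor form: evidence equally likely under H1 and H2 leaves the posterior at the prior, no matter how much of it accumulates. A minimal sketch with illustrative numbers:

```python
def update(prior_h1, p_e_h1, p_e_h2):
    """One Bayesian update of P(H1) on evidence E, frame = {H1, H2}."""
    num = p_e_h1 * prior_h1
    return num / (num + p_e_h2 * (1.0 - prior_h1))

p = 0.5
for _ in range(1000):              # 1000 pieces of ambiguous evidence
    p = update(p, 0.9, 0.9)        # E equally likely under H1 and H2
# p stays at 0.5: such evidence supports (H1 or H2), not H1 alone
```

Only evidence that discriminates between the hypotheses (p_e_h1 ≠ p_e_h2) moves the posterior off the prior.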
David – all the counterfactuals you cite are included in assigning a low rather than very high probability to individual evaluations. By themselves, therefore, these evaluations are weak. Taken together in support of a single proposition, they become very strong.
You are illustrating one aspect of my comment – the fact that objections to individual evaluations, even when valid, may have very little effect on the final result, if a hypothesis is supported by multiple other sources of information. It is the multiplicity that creates this result.
Fred –
I’ve had to go back and start over on this thread because I have too many questions/comments. I will NOT address ALL of those right now, but there are a few that need some airing. Keep in mind that everything I say may be wrong (I am – keeping it in mind, that is).
First – I believe that you’re focussing on convergence without considering that it works both ways. But I’m not willing to debate that point right now. Or probably ever – it is, after all, your focus, not mine.
Second – a question and a comment – are you saying that the thousands of papers that have been written on this subject constitute thousands of data sets that justify the role of CO2 as THE major driver, and thus enough “independent” evaluations/data bases all (or at least many) of which produce 50% probability? If this is the case, it needs to be said that most of those papers were undoubtedly written with that proposition as a basic assumption/boundary condition. Meaning that I don’t think they constitute thousands of “independent” data points. Keep in mind that there are also hundreds (if not thousands) of papers that are written with “other assumptions/boundary conditions” and I believe they have exactly the same weight as the “thousands” because they’re not “independent” either. From my POV, you get only one point for those thousands.
Example – as I wrote on another thread recently – When Hitler commanded “his” scientists to write papers to refute Einstein they produced 100 papers. And they all failed. But Einstein’s comment was that “If I were wrong, it wouldn’t take 100 papers, one would be enough.” Same principle, different application. 100:1 or 1000:1 makes no difference if the “1” is correct. Not that I’m claiming “correct” for the alternatives, only that “independence” is not a function of numbers on one side vs numbers on the other side.
Third is that you seem to be viewing CO2, solar, chaos, etc as competitors rather than as additive (or subtractive) components of the system. Personally, I have no belief in this interpretation. I’m a systems engineer – I think in systems – and the planet’s climate is a system. Systems have component parts (subsystems, if you will) – and there is rarely one single component that is all-important – or in competition with other “components”. Rather, there are many components, each of which has its own part to play in the operation of the system. The climate is no different in that respect. So I view the CO2 vs all other possible impacts as an artificial division. I believe the “system operation” demands ALL of the components and cannot be artificially separated.
Finally, I think you’re reversing the null hypothesis in that by your method the ONLY way other influences could have any effect on the equation would be to PROVE that CO2 has no effect. And that’s not the sceptics job. Rather, it’s the job of the “consensus” to prove that CO2 is the main driver. And you won’t convince very many sceptics – or even Joe Sixpack – with a counter-intuitive analysis.
Jim – Too many points to address thoroughly. I follow the literature in the climate, geophysics, and general science journals. I’ve encountered many hundreds of datasets that provide independent evidence for major CO2 warming effects, including a dominant role in the warming of the past half century. I’ve seen almost none that offer evidence contradicting this. I do agree that anthropogenic emissions and other factors can each affect climate, and as long as the other factors are not invoked to exclude a major role for anthropogenic emissions, I have no problem with this. The balance is a subject for a different discussion.
Proof in the sense you mention is impossible (as in most areas of science), but very high probability is something that can be calculated from the data.
That’s the best I can offer for now.
Oh, come on. In the case of the earth, the all-important single component is the *sun*.
And in a sense, what GHGs do is to *harvest* the sun’s energy, so it can be used to heat the earth.
Which, overall, is a good thing.
Now, the actual question facing us today is … how much of this good thing is too much of a good thing, as we add more GHGs and therefore retain more of the sun’s energy?
But without the sun, there’d be no discussion (nor intelligent life on earth).
If the Sun quits on us then there’s no more argument, is there? When do you expect that to happen? :-)
You didn’t notice that I said “rarely” – not “never”.
And to answer your question we first need to answer the question – What are the actual effects of those GHG’s that you’re so terrified of? That has yet to be answered.
And then there’s the question – what will it cost to do what you think is necessary? That hasn’t been answered either. Where’s the risk analysis? And where’s the cost/benefit analysis? Haven’t seen any of those yet. Certainly not by anyone who’s qualified to do them.
Re: Jim Owen,
Nordhaus did a cost-benefit with his DICE model some time ago, and mitigation costs net trillions, and adaptation about breaks even.
http://www.nybooks.com/articles/21494
The only option that is a strong positive is a hypothetical “low-cost backstop”, a cheap low-carbon energy source.
Nordhaus, btw, is a quite liberal AGW believer, and assumes AGW is real.
As for the “backstop”, it may be in hand within a year or two. Here’s a recent update:
http://focusfusion.org/index.php/site/article/lpp_2011_new_years_summary/ Here’s hoping! It would render the entire issue moot. (Costs under 5% of best conventional, <1% of renewables, fast deployment, easy distributed sourcing, etc.)
Fred, after my post you posted a clarification that is very important. You say “challenges to any individual evaluation do not consist of positive evidence for an alternative but merely assert that one or more named alternatives are possible because the original hypothesis can’t be assigned a very high probability.”
You therefore Exclude any and all positive evidence for alternative hypotheses. Your method therefore only applies to the special case where there is multiple weak evidence for a hypothesis and No Evidence for any alternative. It may well work there, an example being conviction on purely circumstantial evidence, there being no other suspects.
But the climate debate is very, very far from this special “one hypothesis” case. It is virtually the opposite, because there are multiple hypotheses, with considerable weak evidence for each. That is what unsettled science means.
In fact there are two issues here. First, you confine the case to “challenges to the evaluation” of evidence for the hypothesis (call it H1). If we include ambiguous evidence that supports H1 by 50% and an alternative H2 by 50%, then no matter how much evidence we find the result is still 50-50.
Even worse, we must also look at all the weak evidence for H2, which is not about H1 at all. In this case we will get the same wildly strong result that you got for H1. H1 and H2 will each be found to be virtually certain, which is a contradiction, a reductio ad absurdum.
In short, your method may work where there is only one hypothesis supported by evidence, but that is not the case with climate change. Climate change is a paradigm of the problem of multiple hypotheses each supported by weak evidence. This is a normal stage of science by the way.
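The reductio can be made explicit: run the convergence arithmetic once for H1 and once for H2 on the same ten ambiguous results (the hypothetical 50% figures used earlier in the thread), and the two “certainties” sum to well over 1, so they cannot both be probabilities of exclusive alternatives.

```python
p_h1 = 1.0 - 0.5 ** 10    # convergence arithmetic read as favoring H1
p_h2 = 1.0 - 0.5 ** 10    # the identical arithmetic read as favoring H2
total = p_h1 + p_h2       # ~1.998: incoherent if H1 and H2 are
                          # mutually exclusive alternatives
```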
David, isn’t this also a problem for the Italian Flag? That is, once all hypotheses receive – as they are beginning to – due attention, may not all these sorts of analyses end up assigning high degrees of confidence to incompatible hypotheses? I say “may”, since I suppose this will not invariably be the case, but there seems to be no mechanism for EXCLUDING such an outcome. That is surely a problem?
Yes, this is called ambiguity. If you do a proper accounting of uncertainties and ignorance then the ambiguity should disappear, with a proper acknowledgement of uncertainty and ignorance. Bayesian multiple-lines-of-evidence reasoning leads to ambiguity if you actually consider the other side’s argument.
Tom, people can always get it wrong, but I think the IF is well designed to handle this problem. That is, the white area is all about the uncertainty created by alternative hypotheses and their evidence. This uncertainty is very different from the uncertainty due to the evidence directly for and against the hypothesis being flagged.
This is especially true in the climate case, because the alternatives are epistemic, or “meta-scientific” if you like. That is, the skeptics are not generally arguing that solar activity is the cause of the warming, rather they are pointing out that we do not know that it is not and there is evidence that it is. This is the logic of the speculative phase of scientific progress.
Fred,
I’m afraid I don’t understand your reasoning at all.
Taken at face value, the concept of several weak results combining to produce a strong result is so contrary to everything I know about statistics, that I can only conclude that I must have seriously misunderstood what you wrote.
Peter – The reason you didn’t understand is that I explained it less clearly than I should have, leaving confusion between the probability that alternatives to a consensus hypothesis can’t be excluded and the probability that the hypothesis itself is false. I’ve been more precise below – for example, at 6:44 PM – comment 42265
Fred,
A very good analysis. I would also ask for the large amount of work into climate sensitivity based on multiple different lines of research which supports a sensitivity of 2-4.4C to be taken into consideration.
Fred:
What you seem to be saying here is that uncertainties are lower than we think because a large number of studies have confirmed that greenhouse gases warm the earth up. But we have known this for a long time, and knowing it now is of little help. What policymakers want to know is what the impacts of continued GHG emissions are going to be a hundred or more years down the road, plus or minus. And we still don’t know enough about how the earth’s climate works to be able to give them a meaningful plus-or-minus answer.
As JJ noted in an earlier post, it’s important to make a distinction between uncertainty and ignorance, which raises the question of exactly how ignorant are we? Inverting the “Level of Scientific Understanding” entries in the column of AR4 Table SPM.2 gives us the official view. We understand (i.e. are not ignorant of) the impacts of greenhouse gases. We are moderately ignorant of the impacts of ozone and moderately to highly ignorant of surface albedo and aerosol direct impacts. We are highly ignorant of aerosol cloud albedo effects, stratospheric water vapor impacts and also of the impacts of the sun. We are very highly ignorant of the potential impacts of everything not listed on the table, including cosmic ray flux, UV radiation and natural ocean cycles. In other words, we really don’t know much about anything – except greenhouse gases.
We are highly ignorant of aerosol cloud albedo effects, stratospheric water vapor impacts and also of the impacts of the sun. We are very highly ignorant of the potential impacts of everything not listed on the table, including cosmic ray flux, UV radiation and natural ocean cycles. In other words, we really don’t know much about anything – except greenhouse gases.
Which is pretty much why we have, in Fred’s words –
the failure of models to reproduce trends without including anthropogenic forcing, and the need to accept high-end estimates for alternative climate drivers to deny a significant role for anthropogenic forcing.
I once suggested that the need to include either anthropogenic forcing or high-end estimates for alternative climate drivers was due to the LACK of input of other factors (as noted above). Got blown off, but I didn’t care to argue the point at that time.
But in my mind this is all moot simply because if the theory is correct then it should explain the presently known discrepancies and uncertainties without resort to convergence of probabilities.
This could have been used to justify, for example, the Standard Model in Particle Physics some years ago and thereby justify NOT building the LHC. Is that reasonable? I don’t think so but YMMV.
I also think that the convergence would need considerable defense. If nothing else, the assignment of probabilities is debatable and the handling of the probabilities for multiple alternative factors (and combinations thereof) needs to be re-examined. I tend to agree with Peter that the results, as presented, are counterintuitive.
But basically, I’ll stick with the idea that no matter how grand the edifice, it only takes ONE contrary fact to bring it down – regardless of probabilities. There are multiple examples of this phenomenon in science history.
Jim:
Indeed. When 93% of the forcings are anthropogenic, 93% of the model response will be anthropogenic too. Your horse is bound to win if it’s the only one running.
This is similar in concept to a meta-analysis, or perhaps a Bayesian approach. My preference is to look at where the upper and lower bounds have been reasonably conclusively established – this is the error that you have to deal with in decision making. Using some “best guess” or probability distribution adds value if you’re using some kind of optimizing function, but I’d argue that there is no rational optimizing function to use, and it’s more important to have a decision which deals with the upper and lower possibilities. But here I’m getting ahead of myself – solving a problem presupposes you have a problem to solve.
«uncertainty based on expert judgment».
There is only one widely used method for that purpose (in my opinion): the Delphi method or Delphi technique.
One can look at http://pareonline.net/pdf/v12n10.pdf for it.
It needs a precise protocol, especially several iterations for the feedback process.
I never see that in IPCC documents nor the word Delphi commonly used.
So I assume that nothing in the IPCC uncertainties comes from a rational forecast.
Experts no, guesstimates yes.
this is really interesting, haven’t come across this before.
There’s a lot of literature on it. There are more advanced techniques which claim to be better, but Delphi has been studied extensively. I used it as little as possible, since it’s cumbersome. If all you’re looking for is numerical estimates, there are faster, simpler ways that have been used extensively.
A Delphi or its derivatives is good for possibly all of it (certainly once values are included), but the area of risk has its own numerical techniques as well – as an overview:
http://www.cob.ohio-state.edu/~butler_267/DAPapers/WP970009.pdf
The linear technique mentioned is the one I’ve used most often for quantitative estimates, but the distribution was assumed, with the difference between the “least” and “most” estimates used to estimate the standard deviation of the distribution. These things can be made arbitrarily complicated, and it’s more important to follow good procedure in developing the questionnaire than it is to get increasingly detailed. The questionnaire can lead to major biases which are difficult to deal with.
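The exact “linear technique” isn’t specified above, so as an assumption here is one common three-point scheme of that family (PERT-style – not necessarily the one the commenter used) that turns least / most-likely / most estimates into a mean and a standard deviation:

```python
def three_point(least, likely, most):
    """PERT-style three-point estimate: a weighted mean, with the
    least-to-most spread standing in for the standard deviation."""
    mean = (least + 4.0 * likely + most) / 6.0
    sd = (most - least) / 6.0
    return mean, sd
```

For example, three_point(2, 4, 9) gives a mean of 4.5 and a standard deviation of about 1.17.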
“I never see that in IPCC documents nor the word Delphi commonly used.”
They violate two basic rules:
1) Participants are anonymous
2) Estimates are disclosed to the facilitator
Either violation is expected to result in biased estimates due to halo, anchoring, etc.
I think your Chapter 2 “Framing of the climate change problem” is the critical one to get right.
This scientific endeavour is purposeful, namely to help political decision makers respond to potential changes in our global environment. So the information provided by science (and this includes not only best estimates of parameters, but also best estimates of variability, uncertainty and what is unknown) needs to specifically be designed with those decisions in mind.
I’ve said before that this science is applied and mission oriented. It isn’t simply about the advancement of knowledge. There is a client (whether governments or the global community) that will act on what is produced.
Because the science is supporting decision making under uncertainty my view is that this Chapter needs to explicitly discuss:
1. The risks that come from the uncertainty and the unknown and the need for science to support decision makers in managing these (arguably a much higher priority than dealing with what is known); and
2. What is material to the decision making.
The tendency in decision making under uncertainty is to focus on the known and to spend too much time on teasing out the immaterial just because it is known. It’s the case of the drunk crawling around under the street lamp looking for his car keys. When asked where he lost them he points over to the dark beyond, but says “the light’s much better over here”.
I should reflect some more on this issue. It is critical.
this section i actually have a draft of, comments appreciated:
Framing of the climate change problem
An underappreciated aspect of uncertainty in climate change is associated with the questions that do not even get asked because of the way that the climate change problem has been framed. Frames are organizing principles that enable a particular interpretation of a phenomenon (e.g. de Boerg et al. 2010). De Boerg et al. (2010) state that: “Frames act as organizing principles that shape in a “hidden” and taken-for-granted way how people conceptualize an issue.” Risbey et al. 2005 state that decisions on problem framing influence the choice of models and what knowledge is considered relevant to include in the analysis. De Boerg et al. (2010) further state that frames can express how a problem is stated, who is expected to make a statement about it, what questions are relevant, and what range of answers might be appropriate.
The decision making framework provided by the UNFCCC Treaty provides the rationale for framing the IPCC assessment of climate change and its uncertainties, in terms of identifying dangerous climate change and providing input for decision making regarding CO2 stabilization targets. In the context of this framing, certain key scientific questions receive little attention. In the detection and attribution of 20th century climate change, Chapter 9 of the AR4 WG1 Report all but dismisses natural internal modes of multidecadal variability in the attribution. Further, the impacts of the low level of understanding of solar variability and its potential indirect effects on the climate are not explored in any meaningful way in terms of its impact on the confidence level expressed in the attribution statement. In the WG II Report, the focus is on attributing possible dangerous impacts to AGW, with little focus in the summary statements on how warming might actually be beneficial to the climates of Canada, Russia and northern China.
Further, the decision analytic framework associated with setting a CO2 stabilization target focuses research and analysis on using expert judgment to identify a most likely value of sensitivity/warming and narrowing the range of expected values, rather than fully exploring the uncertainty and the possibility for black swans, dragon kings, and wild cards, not to mention the prospect of natural climate variability. The concept of “imaginable surprise” was discussed in the Moss-Schneider uncertainty guidance documentation, but consideration of such possibilities seems largely to have been ignored by the AR4 report. The AR4 focused on what was known with certainty and on narrowing the uncertainties and raising the confidence levels in an attempt to identify a most likely future scenario. The most visible failing of this strategy was the neglect of the possibility of rapid melting of ice sheets on sea level rise in the main summary conclusion in the Summary for Policy Makers (e.g. Oppenheimer 2007; Betz 2009). An important question to ask is what is the true black swan risk of climate variation (warmer or cooler) under no human influence. Without even asking this question, there seems little to base a judgment upon regarding the relative risk of anthropogenic climate change.
The presence of sharp conflicts with regard to both the science and policy reflects an overly narrow framing of the problem. Until the problem is reframed or multiple frames are considered by the IPCC, these conflicts are not going to be resolved, and the scientific debate will continue to ignore crucial pieces of the puzzle and the policy deliberations will likely continue to be stymied.
The general theme of the importance of uncertainties in formulating policies extends very far and wide. I am trying to get my own thoughts in better order and have started to put related texts on my previously almost empty blog (linked to my name).
My present texts are general and without links. They try to indicate how hopelessly wide-reaching the problems are. I hope to get to more specific issues soon, with references. The ones that I am most familiar with are technology development and the use of large-scale models to analyze energy system scenarios for the future, but there are also many other issues that I hope to address soon.
Climate science is not in a central role in what I expect to write.
I’m adding your site to the blog roll, to make sure I keep up with it.
Judith,
You have misspelled my name. You may leave off the dots, but, please, change my family name to Pirila (or Pirilä).
Pekka
oops i am lousy at spelling names, will fix
I’m distracted by the hungry mouths that need feeding, so haven’t spent any time on this but thought I should perhaps mention that this kind of decision making under uncertainty is endemic in social policy, and in economic policy (for example) it can happen with real time consequences and there is a strong body of quantitative literature sitting behind it (eg monetary policy management as a topical case in point). (Health is another area).
For this reason I was going to have a bit of a poke around in the literature on things like Knightian uncertainty and the like and how this is handled in decision making theory. Others might save me the journey.
On reflection though I do think Chapter 2 should at least include some high level explicit treatment of the various kinds of uncertainty, the strategies for dealing with them in decision making, and the types of information required by decision makers as a consequence.
HAS,
I know perfectly well that this post cannot present any part of the argument well. Books have been written on these problems, or parts of them. I have been pondering for a while how to get started with more specific writing and ended up with this. If it does not serve others well, it should at least set some context for my following postings. I hope that at least some of them will not take too long to appear, although I have not been particularly successful in maintaining regular writing on my Finnish-language blog, which has often presented issues of more local interest.
“Further, the impacts of the low level of understanding of solar variability and its potential indirect effects on the climate are not explored in any meaningful way in terms of its impact on the confidence level expressed in the attribution statement. ”
Woot! Well put Judith.
I’d have said something clumsy about stepping in the doo piles of invisible elephants.
On the subject of other potential causes for phenomena not being properly considered once a hypothesis is elevated to King Theory, check my latest post for some century old wisdom.
http://tallbloke.wordpress.com/2011/02/13/t-c-chamberlin-multiple-working-hypotheses/
RE: “In the WG II Report, the focus is on attributing possible dangerous impacts … with little focus … on how warming might actually be beneficial to the climates of Canada, Russia and northern China.”
I get the impression many skeptics assume that AGW will look like a more or less uniform increase of a few degrees throughout the world. In fact, there are very good reasons to think a change of a couple of degrees in average temperature (together with other man-made trends) would seriously rearrange ocean currents and wind patterns, such that even in a relatively moderate scenario many agricultural areas would become unproductive; some of the worst-case ice melting scenarios could destroy the value of perhaps trillions of dollars’ worth of the most desirable real estate on earth …
It might very well be that a few areas would profit from a few degrees of extra warmth, but the potential for damage seems far greater than the potential for benign possibilities.
Fred Pearce, recent topic of conversation here wrote what looks to me like a good book on the subject. I wonder what Judith thinks of it.
If much of the world gets together to try to minimize the possible downsides, one thing certain in my opinion is that it won’t be politically feasible to live up to the Copenhagen recommendations or the like. Unless the real climate begins to show stronger signs of risk, the efforts would start to dwindle; but in the other event, nations would begin building a framework for cooperation that would serve us as, over decades, the picture of what is likely to happen becomes clearer.
Problem definition in the political arena is challenging. To borrow loosely from the original Kepner-Tregoe book, the problem is the difference between the perception of current reality and the desired reality. So far, the definitions seem to be “man-made CO2 is increasing, which will lead to increased global temperature, leading to climate catastrophes” and “CO2 should stop increasing.” This is an oversimplification in a way, but not far off the public discussions. Until the problem definition is reasonable, few with training on the topic will give any recommendations much credence.
Considering the “framing” issue, which may well affect the direction of the IPCC, the following history may be relevant:
IPCC. 2004. IPCC Anniversary Brochure. December.
http://www.ipcc.ch/pdf/10th-anniversary/anniversary-brochure.pdf
I have not yet finished connecting the dots, but it appears that anthropogenic global warming due to greenhouse gasses appeared early as a policy issue, and that the UNFCCC excluded natural variability from its considerations.
From the history (Anniversary Brochure):
1979: man’s activities may cause extended changes of climate
1985: Cited the Role of Carbon Dioxide and of Other Greenhouse Gases in Climate Variations and Associated Impacts
1987: Increasing concentrations of greenhouse gases may impact socio-economic patterns
1988: IPCC. Review national/international policies related to the greenhouse gas issue (and) Scientific and environmental assessments of all aspects of the greenhouse gas issue. (IPCC does not recommend policies)
1988: UNGA charged IPCC with review of adverse climate change, global warming, response strategies(Delay,Limit, Mitigate) and international legal instruments
1989: Calling for wider participation from developing countries
1990: (FAR) topics including greenhouse gases and aerosols, radiative forcing, greenhouse effect resulting in additional global warming, processes and modelling, observed climate variations and change, and detection of the greenhouse effect in the observations.
1990: Intergovernmental Negotiating Committee (INC)
1992: IPCC Supplementary Reports. greenhouse gas emissions, regional distributions of climate change and impacts. Issues in: energy and industry, agriculture and forestry, sea level. Evaluated emissions scenarios.
1994: UNFCCC and SBSTA (Subsidiary Body for Scientific and Technological Advice)
1994: IPCC definition of climate change: natural internal processes, external forcings, anthropogenic changes to atmosphere and land use.
1994: UNFCCC definition of climate change: directly or indirectly to human activity altering global atmosphere which is in addition to natural climate variability.
1995: SAR introduced socio-economic aspects and agreed two Co-chairs for each WG (a developed and a developing country)
1995: SAR WG I greenhouse gas increased, anthropogenic aerosols tend to negative forcing, climate has changed, balance of evidence is human influence, many uncertainties
1995: Synthesis. Dangerous Anthropogenic Interference, adaptation and mitigation options to achieve ultimate objective of the UNFCCC. Again, balance of evidence suggests human influence, surface temperature up 1C-3.6C, sea level up 15-95cm (6” – 37”). Significant reductions in net greenhouse gas emissions are technically possible and economically feasible by utilizing an array of technology policy measures.
The history continues through 2007 and the publication of the FAR. I have not organized notes as yet, but the general framework appears to be unchanged.
yes, the turning point was the 1992 UNFCCC treaty. policy cart before the scientific horse
Pooh, Dixie | February 14, 2011 wrote: “The history continues through 2007 and the publication of the FAR. I have not organized notes as yet, but the general framework appears to be unchanged.” I finally completed the notes, from several sources including the Site Map of the UNFCCC
http://unfccc.int/home/items/993.php
COP-10 (Buenos Aires, December 2004) adaptation and response, capacity-building for developing countries, global observing system, development & transfer of technologies, financial mechanism, funding to assist developing countries, forestation, Kyoto Protocol, GHG inventories, land use.
COP-11 (Montreal, 2005) Enhancing implementation of the Convention, work program for SBSTA, LDC (Least Developed Country) Fund, development & transfer of technologies, land use, forestry, administrative initiatives for articles of the Kyoto Protocol, including national systems:
Definition of national system: “A national system includes all institutional, legal and procedural arrangements made within a Party included in Annex I for estimating anthropogenic emissions by sources and removals by sinks of all greenhouse gases not controlled by the Montreal Protocol, and for reporting and archiving inventory information.”
COP-12 (Nairobi, 2006) Special climate change fund, financial mechanism, technology transfer. “The ultimate decision-making body of the Convention is the Conference of the Parties (COP), which meets every year to review the implementation of the Convention. The COP adopts COP decisions and resolutions, published in reports of the COP. Successive decisions taken by the COP make up a detailed set of rules for practical and effective implementation of the Convention.”
SBSTA report: adaptation to climate change, air transport, certified emission reductions, climate impacts, developing countries, emissions, fuels, hydrochlorofluorocarbons, hydrofluorocarbons, maritime transport, methodology, REDD (Reducing Emissions from Deforestation in Developing Countries), research and systematic observation, technology transfer, vulnerability assessment.
COP-13 (Bali, 2007) Action Plan/Road Map. Deforestation emissions, technology transfer, financial mechanism, Global Environment Facility, extension of the mandate of the Least Developed Countries Expert Group, global observing systems for climate.
COP-14 (Poznań, 2008) …Clear commitment from governments to shift into full negotiating mode next year in order to shape an ambitious and effective international response to climate change, to be agreed in Copenhagen at the end of 2009. Parties agreed that the first draft of a concrete negotiating text would be available at a UNFCCC gathering in Bonn in June of 2009.
Kyoto Adaptation Fund Board to have legal capacity to grant direct access to developing countries. Progress on important ongoing issues that are particularly important for developing countries, including: adaptation; finance; technology; reducing emissions from deforestation and forest degradation (REDD); and disaster management.
Bonn Climate Change Talks – March 2009
COP-15 (Copenhagen, December 2009) Clean development mechanism, amending Annex I to the Convention to add Malta, guidance on REDD+, draft decisions on adaptation, technology, and capacity-building. However, the Bali Roadmap negotiations could not be concluded and negotiations will continue in 2010.
One observer (Ken Hughes, Sierra Club delegate) wrote: “One intriguing development is how to create a mechanism for meaningfully send funds, technologies, and other resources to developing countries to mitigate and adapt. The institutional framework is not there yet to ensure that it’s money well spent.”
COP-16 (Cancun, 2010): Mukahanana-Sangarwe, Mrs. Margaret, and Anonymous Chair. 2010. Possible elements of the outcome. Note by the Chair. Ad Hoc Working Group on Long-term Cooperative Action under the Convention. Cancun, Mexico: IPCC, November 29.
http://unfccc.int/resource/docs/2010/awglca13/eng/crp02.pdf
Mrs. Margaret Mukahanana-Sangarwe is a delegate from Zimbabwe. Zimbabwe is a “developing country”. Development is hindered by the regime of Robert Mugabe, whose role in the destruction of Zimbabwe’s economy can be found on Wikipedia.
As chair, Mukahanana-Sangarwe proposed that developed countries transfer US$100B per year (or else 1.5% of GDP, later dropped) to developing countries.
In case you were curious about the objective of the UNFCCC, which directs the SBSTA and influences the COPs:
UNFCCC. 1992. United Nations Framework Convention On Climate Change. United Nations.
http://unfccc.int/resource/docs/convkp/conveng.pdf
Article 2: OBJECTIVE
“The ultimate objective of this Convention and any related legal instruments that the Conference of the Parties may adopt is to achieve, in accordance with the relevant provisions of the Convention, stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system. Such a level should be achieved within a time frame sufficient to allow ecosystems to adapt naturally to climate change, to ensure that food production is not threatened and to enable economic development to proceed in a sustainable manner.” emphasis added
Judith, may I suggest the following for your first par:
“Climate change has hitherto been framed in a way that has prevented important questions from being entertained. The uncertainty surrounding CO2 as a climate driver must therefore be deepened by the existence of rival and/or complementary hypotheses which are only now beginning to receive due attention.”
De Boerg et al. would have done well, IMO, to substitute “subliminal” for their “hidden and taken-for-granted”, but there’s nothing you can do about that!
Dr Curry
This is presumptuous but with further thought this in abbreviated form is what I’d suggest covering in Chapter 2:
Commence with a discussion of what governments etc need this information for:
– risk management
– making trade offs between options to intervene, optimising based on costs and benefits
– assess the robustness of responses
– etc
To do this they need not only information about what the science tells us; they need information about the uncertainties and the unknowns. It is the uncertain and the unknown that are in many respects the most important, because it is here that the political process needs to do one of its unique jobs: form and represent a consensus of the populace at large. This is precisely what the IPCC has failed to deliver.
[I think this overall point that “the client needs to know” is the lynchpin to the argument for much greater transparency about uncertainties and the unknown, and why the IPCC framework needs to be broadened.]
Science can quantify uncertainty in many areas, but there are large areas where it can’t. This may be because the science is controversial or because it is simply unknowable today (“there be dragons”) or inevitably unknowable.
Just as science can help inform by being quantitative about the subject matter, so it can be about the state of the science. It can and should report the range and clustering of the views about what is controversial, uncertain and unknown. [I think that missing this point is what led people down the PNS alleyway.]
For the political process when dealing with these uncertain/unknown areas it is the range of views and the reasons for the disputes that are as important as where the majority sit, because it is here that issues of robustness of response are more likely to be of interest than simple optimisation.
Adding to this reason to be careful about the consensus we have:
– the paradigm argument De Boerg et al. (2010) [I can’t find this reference via Google]
– the focus on the measurable stuff as the stuff of science which biases the consensus [which is perhaps an extension of the paradigm argument]
– the risk that the consensus reflects value choices made by the scientists, value choices that do not reflect expertise and should therefore more properly be made by the electorate.
– [I’m sure there’s more]
My argument for this kind of approach is that it strengthens the case for the scientific community reporting on the uncertain and the unknown much more systematically in this endeavour. It gives you IMHO a more solid platform to build on.
All for what it’s worth :)
good points, thx!
Thanks curryja,
In addition, one can look at http://www.forecastingprinciples.com/
sponsored by the International Institute of Forecasters.
This site contains a list of papers (of various interest) in http://www.forecastingprinciples.com/index.php?option=com_content&task=view&id=78&Itemid=130
established by the special interest group on global warming.
thx, this one i’ve seen
Peter and others – I realize that I didn’t describe precisely how I perceive positive evidence for a consensus conclusion about CO2 based on less than very high probabilities in individual cases. Here is an illustration:
Consider ice core data related to CO2 concentrations that appear to support a CO2 cause/warming effect relationship if the data are accurate. Consider then a challenge that claims (a) that the temporal relationship is too uncertain for high confidence; and/or (b) the ice core concentrations may be invalid because of migration of gas out of the bubbles at an unsuspected rate.
If this leads to only a 50 percent probability estimate that the cause/effect relationship is correct, it does not also constitute a 50 percent probability that CO2 does not cause warming. The 50 percent uncertainty simply represents a failure to prove the causal relationship; it does not constitute a disproof.
Another example involves the observation that as temperatures have risen, the increase has been greater in winter than summer, and at night than during the day. This is evidence for a greenhouse effect, but it is not conclusive. Conceivably, unsuspected or unmeasured changes in aerosols or in wind currents might somehow have mediated the same phenomenon. Again, the uncertainty will leave the probability at less than 100 percent, while not adding evidence to disprove the hypothesis.
Many of the challenges to the consensus conclusions involve exactly this: uncertainty rather than disproof. As long as what is applied to the thousands of datasets I’ve referred to is uncertainty regarding supportive evidence, rather than positive evidence that the consensus is wrong, support for the consensus remains very strong. A suggestion that an alternative is possible does not change this. Only evidence that works, in a positive sense, in the direction of excluding the consensus will reduce the strength of that support.
Of course, attempts have been made to do this, but they would need to be very conclusive to counteract the support accumulated from a large multitude of positive studies, even if each of these was characterized individually by far less than 100 percent certainty.
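The convergence arithmetic invoked here can be sketched numerically. Under the (strong) assumption that each line of evidence is statistically independent, and that each merely fails to prove rather than disproves (Category B in the terms used later in this thread), the probability that at least one line correctly demonstrates the conclusion grows quickly with the number of lines. The probabilities below are purely illustrative, not drawn from any actual dataset.

```python
# Illustrative sketch of "convergence of evidence" under an
# independence assumption. Each p is the probability that a line of
# evidence correctly demonstrates the conclusion; 1 - p is treated
# as mere uncertainty, not as evidence against the conclusion.

def combined_support(probs):
    """P(at least one line correctly demonstrates the conclusion),
    assuming the lines are statistically independent."""
    failure = 1.0
    for p in probs:
        failure *= (1.0 - p)  # all lines simultaneously inconclusive
    return 1.0 - failure

# Ten weak, independent lines of evidence, each only 50% conclusive:
print(combined_support([0.5] * 10))  # 0.9990234375
```

The multiplication is only legitimate if the lines really are independent and if probabilities are meaningful at all; shared models or common assumptions reduce the combined figure, which is exactly the objection raised further down the thread.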
Fred
There are two physical explanations for the ice core CO2/temperature relationship – the temperature change caused the CO2 change or the CO2 change caused the temperature change. The first explanation is physically plausible. The second assumes that decreases of 80ppm in atmospheric CO2 caused four Ice Ages, and that all other impacts, including changes in sea level, ocean currents, land-mass distribution, albedo, planetary tides, solar cycles etc. were either insignificant or were feedback effects.
Both mechanisms together can be shown to be the best quantitative explanation for the magnitude and duration of temperature change.
Do you have a reference for this that I could look at?
Roger – I’m afraid my reply may not satisfy you. The answer is that I have seen references to this effect and can undoubtedly retrieve them, given time. I will try to do that. One thing I am learning from these blogs is that it is not sufficient for me to scrutinize data so as to persuade myself of a particular reality – I should also save my sources in some easily retrievable and well organized format (other than my memory) so that I can pull out the appropriate ones when challenged. That will allow me to answer all questions – with the probable result that few minds will be changed anyway. Oh well.
I’ll look for a good source on CO2 as a feedback response to warming initiated by other climate influences. I do know that they exist, and that glaciations and deglaciations are far too extensive for the temperature changes to be explained simply by the triggering factors and ice/albedo feedback.
Fred,
I really sympathize with you. Finding what you know is out there is a bear. (Capturing it is the mama bear: tagging, cross-referencing, and annotating.)
I’ve only skimmed the surface, but the database is already ~3.5 GB, with >2,500 entries.
And I can’t immediately find the reference you are looking for (CO2 as a feedback response to warming, such as outgassing from oceans). :-(
“That will allow me to answer all questions – with the probable result that few minds will be changed anyway. Oh well.”
Fred – please don’t give up. I (and I suspect many others) have found my way to Climate etc precisely because Dr Curry has attracted a group of thoughtful and respectful commentators, many of whom know what they are talking about and are prepared to discuss the issues with few of the insults and assertions of bad faith that disfigure so many other fora. If you continue to treat us seriously, I am sure we can cut you some slack about mislaying your references.
Here is one source – Glacial Termination
Insolation changes are too weak to explain the magnitude and duration of deglaciation. The evidence indicates that these changes triggered Antarctic deglaciation. The resulting rise in CO2 (after about an 800 year lag) contributed significantly to Northern Hemisphere deglaciation. Deglaciation proceeded over 5000 years, during which rising CO2 continued to drive it.
Fred:
Thank you. You are not the only one who flounders around looking for papers you know exist but can’t find.
Interesting you should cite the Caillon et al. paper, because this is the one the skeptics seized on to prove that CO2 did not cause the Ice Ages. (“This confirms that CO2 is not the forcing that initially drives the climatic system during a deglaciation. Rather, deglaciation is probably initiated by some insolation forcing which influences first the temperature change in Antarctica … and then the CO2.”) Caillon et al. did go on to state: “This sequence of events is still in full agreement with the idea that CO2 plays, through its greenhouse effect, a key role in amplifying the initial orbital forcing”, but they were just toeing the party line. They presented no evidence to back this statement up and provided no citation.
It’s more than toeing the party line and less than absolute proof. Almost all calculations of the direct effect of insolation changes on albedo show the effect to be far too small to account for the deglaciation. It’s therefore necessary to invoke feedbacks, and the change in CO2 concentration fulfills that role, whereas there’s no obvious alternative mechanism that would completely suffice.
I believe that the claim that the paper proves that CO2 did not cause the ice ages is a straw man argument. There have been few if any plausible claims that it did. It seems to me to be part of a tactic of dismissing the role of CO2 by showing that not every temperature change during the past 4 billion years was due to CO2.
Fred:
I agree. I don’t think the Caillon paper proves anything either way. But the ice core records are not a good example if you want to demonstrate a connection between temperature and CO2. The consensus of opinion seems to be that Ice Ages were caused by something else altogether.
Fred –
Buried on page 3 of your referenced document –
This confirms that CO2 is not the forcing that initially drives the climatic system during a deglaciation.
Which also saves me some time looking for the same general statement from another source.
Also –
One thing I am learning from these blogs is that it is not sufficient for me to scrutinize data so as to persuade myself of a particular reality – I should also save my sources in some easily retrievable and well organized format (other than my memory) so that I can pull out the appropriate ones when challenged.
I have the same problem. Lots of references, but lack of organization. But I’m working on it.
Neither can explain why the cycles recur like clockwork.
There’s a third known factor: world circulation currents determine the CO2 transport from where it’s being generated up to the cold regions. This is not random, it’s systematic, and there is definitely not perfect mixing – the mixing would itself be expected to be a function of the circulation patterns. CO2 variation in ice cores would be expected to be a sensitive function of the circulation patterns, which would be expected to change during the time periods involved. There is also another factor here – the polar regions, particularly in the northern hemisphere, act as sinks for CO2. Assuming that this mechanism is basically quasistatic over the periods of interest is a faulty assumption – NASA’s data show it to be varying over various time scales.
Time after time, if the investigators don’t know, they assume. They don’t have any choice. But then there is not a lot of creativity in proposing physical mechanisms which make the interpretation of the results uncertain.
To be clear, the obvious limitations on the ice core work assume that variation in the following things isn’t important:
1) Spatial distribution of CO2 sources
2) Circulation patterns
3) Mixing
4) CO2 sinks
If you were designing a chemical reactor with these assumptions, you’d be in real trouble.
Fred,
One example of problems in your argumentation.
You refer to the comparison between winter and summer. Here in Helsinki (I live a little outside) the July temperatures have varied in 1971-2000 over a range of 24C (+7C to +31C) and the January temperatures over a range of 43C (-34C to +9C). The variations are much larger in winter than in summer. Therefore warming for practically any reason is likely to be stronger in winter than in summer. This kind of effect must be taken into account in applying Bayesian logic.
You mentioned in your previous comment that independence of the arguments is essential. The independence may be easy to prove for the actual empirical data, but it is a big problem, when the data can be used only with support of models. The risk of common errors and cross-influence related to these models is very large and its significance is difficult to estimate.
One problem of paleoclimatological data is that it consists of numerous very different pieces of information. This might be a proof of independence of the data sources, but the problem comes from the complicated analysis involving numerous assumptions and models. It is likely that these models are built knowing results of other research groups in a way that adds to the risks of positively correlated errors.
The fact that many papers (or simultaneous other utterances of the authors) make additional, sometimes openly speculative, comments on how the results are consistent with strong climate sensitivity casts doubt on the authors’ ability to counteract the introduction of bias.
I do believe that there is strong evidence for AGW and I do believe that AR4 is not badly off the mark, but repeated observations of likely bias make it very difficult for me to judge precisely where the best objective interpretation of the data would lead our conclusions.
Having read some of the reviewers comments in the AR4, I suspect uncertainty is in the eyes of the beholder.
Ask someone like Vincent Gray what the uncertainties are and then ask, say, a Karoly or a Moolten. Answers will be chalk and cheese.
To more formally describe the importance of the convergence principle in support of anthropogenic causality, I would state it as follows:
A. Very few data sources on this conclusion are amenable to a probability estimate of the form: “The probability that anthropogenic causality is true is p. The probability that it is false is 1 – p.”
B. Instead, the vast majority take the form: “The probability that the data can correctly be interpreted as demonstrating anthropogenic causality is p. The probability that the data are insufficient to demonstrate this result is 1 – p.” Here, 1 – p is not the probability that the conclusion is false, but merely the uncertainty about its truth.
The distinction is critical. The confusion between A (disproof) and B (uncertainty) leads to a very substantial underestimate of a valid probability for anthropogenic causality, based on the large number of data sources that fall into category B and the few that might be assigned to category A.
I have cited examples above of category B probabilities. How often will evidence for some alternative climate mechanism constitute an example of a Category A disproof type of probability value? Such would occur only when the alternative is inconsistent with the coexistence of anthropogenic causality. This is rarely possible, because quantitative estimates are rarely precise enough for one possible mechanism to exclude an important role for all others. As an example, early twentieth century solar forcing can be assigned a greater role than anthropogenic forcing in mediating observed trends, but the participation of both in proportion to calculated potencies, in conjunction with other factors, known and unidentified, is consistent with the data. Nevertheless, Category A examples should always be evaluated seriously based on the evidence in each case.
I hope it is clear from the above that what we are discussing are probabilities, and not certainties – indeed, the operative term that defines Category B is uncertainty. With that in mind, and with uncertainty as a focus, Judy, of what you are writing about, I hope you will consider adding the convergence principle to other perspectives on probability you are already addressing in formulating conclusions. Without it, I believe an important element of how we evaluate climate data will be missing.
Fred, I intend to write about the fallacy of this line of reasoning when there is substantial uncertainty and ignorance, and probabilities aren’t even justified as part of the analysis.
I would just add that there is a rough analogy between Category A/B confusion and type I and type II errors in significance tests. The analogy is inexact, however, because with anthropogenic causality data, a failure to reject a null hypothesis is often more reflective of data uncertainty than of a weak effect. Even if climate sensitivity is high, for example, demonstrating that would often fail in the light of data inadequacy.
If the effect is large, then how come the data are inadequate?
And if the data are inadequate, then how do you know the effect is large?
Exactly. The apocalyptic consensus of climate science is unable to answer that question in a meaningful way.
Well stated, by the way.
No.
That’s simply stating how much a hypothesis ought to be believed.
Which is, or should be, meaningless to the science.
In any case, you got A wrong. It should read, “The probability that the finding is not spurious is p. The probability that it is spurious is 1 – p.”
No – findings are findings. Their relationship to conclusions is what is at issue.
Fred, I’d agree with your application here if there were more knowledge and actual independence, or better yet, what I refer to as orthogonality here. There isn’t.
I’m glad we agree on the principle – that’s a start. It would now be worthwhile to revisit the literature to assess the quantity, conclusiveness (as a p value), and independence of approaches to anthropogenic causality and its quantitation. My impression, from having looked at thousands of papers in dozens of climatology, geophysics, and general science journals, is that at least a few hundred would be good candidates.
In response to a comment by Brandon, I gave some examples of independent approaches – Comment
It’s worth pointing out your first and third examples weren’t independent in the sense required for simple combination with convergence. This means you wouldn’t be able to simply multiply out their resultant probabilities.
I probably should have added details that would clarify their independence. A temporal relationship between CO2 and temperature is valuable because it is the item of specific interest, but its uncertainty resides in the fact that we typically can’t identify and quantify confounding variables. A temporal relationship between a different phenomenon (e.g., volcanism) and temperature may be of interest because we can accurately adjust for confounding variables, but its uncertainty lies in the assumption that its climate sensitivity parameter can be extrapolated to CO2. The uncertainties in each case lie within the realm of high certainty in the other.
Fred Moolten, your clarification in no way establishes independence. Your first example has us considering a temporal relation between CO2 and temperature. To determine such, we need to (try to) account for confounding factors. One such factor is volcanism. In your third example, you then consider this very factor and try to extrapolate from it. That is not independent.
Now then, a lack of independence is not some fatal problem. It simply means the convergence of evidence is less demonstrative. For example, if you assigned a probability of 50% to independent factors, convergence would mean the two combined give you a total probability of 75%. Without independence, this number drops (by a factor of their dependence).
This means to combine your examples you need to first find the extent of their dependence. Once you’ve done this, you can down-weight their convergence by the appropriate amount.
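A minimal sketch of that arithmetic, treating the overlap as a single hypothetical fraction of the second piece of evidence that is already implied by the first:

```python
# Two pieces of evidence, each supporting a hypothesis with probability 0.5.
p1, p2 = 0.5, 0.5

# Fully independent: the hypothesis fails only if both pieces fail.
independent = 1 - (1 - p1) * (1 - p2)  # 0.75, as in the comment above

def combined(p1, p2, overlap):
    """Down-weight the second piece by the (hypothetical) fraction of it
    already implied by the first, then combine as if independent."""
    effective_p2 = p2 * (1 - overlap)
    return 1 - (1 - p1) * (1 - effective_p2)

# Full independence recovers 0.75; full dependence adds nothing beyond p1.
assert combined(p1, p2, 0.0) == 0.75
assert combined(p1, p2, 1.0) == 0.5
```

The single "overlap" number is of course a cartoon of dependence, but it shows why the combined figure lands somewhere between 50% and 75%.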
Volcanism needn’t interfere discernibly with the first scenario, which can be evaluated over intervals with only minor and fairly constant background volcanic activity. In fact, some of the “energy balance” efforts to estimate climate sensitivity utilize multiple intervals of one year or less, and can separate out (and discard if necessary) those intervals affected by significant eruptions. I was referring to confounding factors that can’t be easily identified, so that one can’t confidently conclude that major adjustments are unnecessary.
I certainly agree though with your larger point – in the real world, independence is relative, and the degree of independence must be ascertained.
You cannot simultaneously “account for” something and be independent of it, nor does it matter if other factors are more problematic for your result. If your results are changed due to considering a factor, your results are not independent of that factor.
I don’t think the overlap in this case is particularly extensive, nor do I think this specific issue is all that important. However, you claim proper handling of convergence of evidence would lead to greater certainty than that of the IPCC. Claiming such while exaggerating independence discredits your point.
I’ll let anyone else reading these exchanges draw their own conclusions.
Your response doesn’t make any sense to me. You gave two “independent” examples. I pointed out a clear relation between the two which indicates they are not independent. You tacitly acknowledged this by describing how one impacts the analysis of the other.
Quite frankly, I expected you to say something like, “Yeah, the two are not completely independent. However, any overlap there is isn’t significant enough to matter for their convergence of results.” I would still disagree with you describing them as “independent” without any qualifier, but at least then the distinction would be clear.
I really don’t get this. Your point would in no way be weakened by acknowledging the lack of total independence in your examples. A single sentence acknowledging some merit in my point would settle everything, and it would do so without shutting down communication.
What is the point in shutting down the conversation?
To demonstrate the issue I’m discussing, allow me to offer a simple example. Imagine you have nine blocks of three shapes (circle, square and triangle) and three colors (red, blue and green), with each block being unique from the rest. If I ask you what the odds are of randomly selecting a block which is red or a circle, what is the correct answer?
Obviously, the answer is five out of nine. There are three red blocks and three circular blocks. The overlap between the two is one (the red circle), so you have to subtract it out, leaving five qualifying blocks. This is the same issue as with convergent evidence. Any time multiple pieces of evidence overlap, the overlap must be accounted for to avoid double-counting.
Of course, the amount of overlap can be small or large. With the numbers from my previous post, the answer isn’t 75%, but it could be 74%. Or it could be 55%. We can’t know which it might be until we figure out the extent of overlap.
Simply calling things “independent” won’t work.
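The nine-block example can be checked by brute-force enumeration:

```python
from fractions import Fraction
from itertools import product

# The nine unique blocks: every shape-color combination.
shapes = ["circle", "square", "triangle"]
colors = ["red", "blue", "green"]
blocks = list(product(shapes, colors))

# Count the blocks that are red or a circle.
hits = [b for b in blocks if b[0] == "circle" or b[1] == "red"]
prob = Fraction(len(hits), len(blocks))

# Inclusion-exclusion gives the same 5/9: 3/9 + 3/9 minus the red circle.
assert prob == Fraction(3, 9) + Fraction(3, 9) - Fraction(1, 9) == Fraction(5, 9)
```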
Fred, I have been hoping that someone would tackle your “convergence” argument in this thread head-on, and various people have taken on bits of it. But it turns up in various guises, and it needs to be nailed properly.
The proposition you wish to establish with high confidence is that “anthropogenic greenhouse gas emissions are a major contributor to global warming.” No evidence can establish the truth of this proposition, because the proposition is incomplete. It cannot be tested until the word “major” is given an operational definition. (As a grammatical rather than logical difficulty, your statement contains two implicit propositions, the first of which (global warming is occurring) must be established before the second has any meaning. But that is minor.) This bears on the question of whether a particular study supports there being a “major” contribution. The IPCC has defined “major”, and if you accept their definition you would have to consider whether each of the studies you have read supports a “major” contribution or just a contribution. I don’t see that you claim to have made that distinction, and I am not sure that you could have.
Next, you assert that there are large numbers of independent studies (“thousands” at one point) which provide evidence in support of this proposition. But many of the studies cannot provide such evidence: paleoclimate studies, for example, cannot directly give evidence about the effects of anthropogenic emissions, because such emissions were not happening at the time. What they can do is to allow us to test our understanding of the climate system, and that understanding is itself evidence about the causes of contemporary global warming. But you massively double-count if you present all of the studies that support our understanding and also count the models themselves (as you do, twice over, when you say that the models fail to reproduce trends without including anthropogenic forcing and also that we must accept high-end values for other drivers if we exclude anthropogenic effects; this is a single argument expressed in two different sets of words). In fact, the evidence here is the models, not the studies that support the models – the supporting studies may mean that we have high confidence in the predictions of a single model, but not that we can treat the predictions of the model in a new situation as having been confirmed thousands of times by the thousands of studies in quite different situations that we used to build up our confidence in the models. What we might have is 90% confidence in the description offered by one model, not 50% confidence in each of thousands of independent predictions of AGW.
Next, you misuse Bayes’s Theorem in the way that you combine your streams of evidence. Bayes’s Theorem allows us to update the probability that a hypothesis is true given evidence about related observations; it can be applied sequentially to each of many pieces of evidence, and the probability that the hypothesis is true builds up very quickly towards 1 in exactly the way you calculate. But what is needed in the update is not what you claim: it is NOT the probability that the hypothesis is true given the next piece of evidence, but the probability of observing the next piece of evidence given that the hypothesis is true (and also that probability given that the hypothesis is false). If our next piece of evidence, X, has a 50% chance of being observed if theory A is true, and has a 50% chance if theory B is true, then the Bayesian update does not bring the probability of A upwards towards 1, but somewhat closer to 50% (from either above or below 50% based on our previous evidence). In fact, if there is also a 30% chance of observing X if theory C is true and a 10% chance if theory D is true, the Bayesian update will pull the probability that A is true down below 50%. And the contrary theories do not need to be the same in every iteration. It is possible that in an ice-bubbles study the alternative hypothesis is gas contamination, in a tree-ring study it is physical damage, and in a historical temperature set it is incomplete corrections for UHI. In each case, if the alternative is about as consistent with the evidence as the conventional explanation, the conventional explanation will not gain ground, even if it is a contender in every single study.
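That update is easy to sketch numerically. The four theories and likelihoods below are the made-up ones from the paragraph above, with equal priors assumed:

```python
def bayes_update(priors, likelihoods):
    """One Bayesian step: posterior proportional to prior * P(evidence | theory)."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

# Equal priors over four competing theories.
priors = {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25}
# Evidence X: 50% chance of being observed under A or B, 30% under C, 10% under D.
lik = {"A": 0.5, "B": 0.5, "C": 0.3, "D": 0.1}

post = bayes_update(priors, lik)
# A gains only modestly (to about 0.36) and stays exactly tied with B:
# the update is driven by P(X | theory), not by any per-study
# "probability that the hypothesis is true."
```

Run sequentially over many studies, A pulls away from its rivals only when P(X | A) consistently exceeds P(X | the alternatives); equal likelihoods leave the relative odds untouched no matter how many studies accumulate.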
That is why scientists need to design careful studies which demonstrably eliminate every alternative explanation in that particular study (that is, they must show that the probability of observing result X under contending theory B is some very small amount such as 0.01); and they have to repeat this feat again and again and again. There is no short-cut principle by which weight of numbers of unconvincing studies can eventually overwhelm all opposition: your “convergence” mantra is a fallacy, not a principle.
Finally, your formulation of the question at issue is wrong. You present it as a true-false hypothesis, but it is actually a measurement problem. Most people whose views deserve to be taken seriously have no doubt that adding CO2 and its friends to the atmosphere will warm the planet. The $64 question is: how much? If doubling CO2 will warm the planet by 0.1C, we have nothing much to talk about – the atmospheric scientists can be left in peace to get on with their work. If doubling CO2 will raise the temperature by 20C, we have a great deal to be concerned about (and I would want to withdraw a policy comment I made above). What matters is the magnitude, and the difficult problem is to measure the magnitude, given the complexity of the feedback processes and the fact that some of them (and even some of the forcings) are incompletely understood and even more incompletely calculable. There may be debate about which magnitude is to be measured: suppose it is the equilibrium temperature sensitivity. Suppose, further, that several careful studies of different kinds have estimated this to be 1.5+/-0.7, 4.2+/-2.9, 0.6+/-0.3, where the +/- ranges are 95% confidence intervals. (For the avoidance of doubt, I just made these numbers up.) All of these studies clearly support a value greater than zero, but they are inconsistent with each other. Are they supportive of the AGW hypothesis? Yes, of course, but that was never at issue. The average estimate is 2.1, but that is completely inconsistent with two studies and only marginally consistent with the other. Assuming normally distributed errors in each study, the maximum-likelihood estimate of the sensitivity is 0.77, but this most-likely estimate is still extremely unlikely, with a probability density there of only 0.001 (reflecting the mutual inconsistency of the estimates). 
That is, this made-up evidence overwhelmingly supports both of the propositions that (1) the true sensitivity is greater than zero, AND (2) we do not at all understand what is going on. The problems might be measurement errors, inconsistent definitions of what is being measured, or processes that we do not suspect. The only way to get a credible estimate is to work out why the estimates vary and to design new measurements that correct the problems; we have not done that until all of the estimates agree within observational error.
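For what it’s worth, the arithmetic of the made-up example can be reproduced. Assuming Gaussian errors, the maximum-likelihood combined estimate is the precision-weighted mean of the three estimates:

```python
import math

# The made-up estimates above: (mean, 95% half-width).
estimates = [(1.5, 0.7), (4.2, 2.9), (0.6, 0.3)]

# Convert 95% half-widths to standard deviations (half-width = 1.96 sigma).
data = [(mu, half / 1.96) for mu, half in estimates]

# Precision-weighted mean = maximum-likelihood combined estimate.
weights = [1 / sigma ** 2 for _, sigma in data]
mle = sum(w * mu for w, (mu, _) in zip(weights, data)) / sum(weights)
# mle comes out near 0.77, as stated above.

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# The joint likelihood at the MLE is tiny (a few parts per thousand),
# reflecting the mutual inconsistency of the three estimates.
density = 1.0
for mu, sigma in data:
    density *= normal_pdf(mle, mu, sigma)
```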
So when you say that you have read many studies and they overwhelmingly support the AGW hypothesis, can you tell us how many of these studies provide (independent) estimates of the sensitivity, and have you noticed whether these estimates are mutually consistent? Some such estimates exist, but there seem to be not very many of them. If the IPCC writers have cherry-picked the studies to which they give credence (or the literature is biased against publishing studies which give estimates away from the consensus), then the persuasiveness of the claims quickly becomes much less than is thought. So such allegations of (even unconscious) bias in the process are, if at all credible, very damaging to our confidence in the consensus views – these debates matter for scientific reasons, not just as political point-scoring. BTW, suppressing the highest of my made-up estimates would be as damaging as suppressing the lowest, because each action would convey the wrong impression that our understanding is better than it really is.
None of this is to say that your belief about AGW is wrong; I share at least some of your views. But the argument that you present for it is utterly bogus.
Paul, thank you very much for this. I have been struggling with a convincing way to counter the multiple lines of evidence argument used by the IPCC, this is much better than what I have come up with. I am going to elevate this to a post, to focus discussion on this.
You’ll have to excuse me for my very abbreviated response to Paul, but he misrepresents my argument, and it is the misrepresentation rather than the argument itself that he then disputes.
It’s true that I didn’t define “major”, because I was only trying to point out that claims that anthropogenic warming is trivial can be shown to be almost certainly wrong based on the convergence principle. That includes claims that climate sensitivity is very low.
My use of Bayes theorem was correct – it involves the probability of a hypothesis given the evidence, which necessarily involves the probability of the evidence given the hypothesis as opposed to the alternative hypotheses, as well as the prior probabilities of the competing hypotheses. Please refer to standard formulations of the theorem for this principle.
The logic underlying the convergence principle was also correct.
Dr. Curry,
I wrote a blog post that appeared on WUWT this morning relating to AR5, you and other top climate scientists. If you have not seen it, I would love to know your opinion of the idea I put forward.
See http://wattsupwiththat.com/2011/02/13/a-modest-proposal-in-lieu-of-disbanding-the-ipcc/
hi ron, i spotted that. I totally agree with the general idea that the opposing arguments need to be put forward in a systematic way, part of argument justification that i described previously on this thread. However, whether something like this can happen under the auspices of the IPCC, I view that as doubtful. A more comprehensive and credible NIPCC effort might be the better way to go, but organizing this through advocacy groups such as Heartland wouldn’t work IMO in terms of accusations of bias. To be credible, this would need to be led by the academics (as you suggest) and not advocacy group types. So there are substantial challenges to actually pulling something like this off, but I agree that this is needed to sort out the scientific ambiguities.
Yes, I don’t think it is likely the IPCC will agree to the proposal – not because it does not have value but because they have invested too much in the advocacy position they have taken. But I think it would be a mistake not to put the request to the IPCC. If the proposal is not put forward to the IPCC, some people will criticize the effort for ignoring the IPCC.
I am glad to hear you favor the idea. Please give it a little thought. This would be a chance to do an assessment correctly. I am certain the IPCC does some things correctly but they also make some glaring mistakes.
Dr Curry, I haven’t tried them, but there are web-based Delphi and real-time Delphi tools (I’m not familiar with how good these are), which could mean many, many scientists could participate from their offices. No travel, no real expenses; it could be done in the time it takes to make a few blog posts. With such a low barrier to participation, I would think even the CRU people would be willing to participate.
Who would want to be left out of the largest scientific sampling of climate scientists? And if they didn’t show up, who would miss them?
yes i am looking at this, seems really interesting
“Author teams are instructed to make this evaluation of evidence and agreement the basis for any key finding, even those that employ other calibrated language (level of confidence, likelihood), and to provide a traceable account of this evaluation in the text of their chapters.”
WOW. More Orwellian doublespeak (or possibly I’m just too dense to get it). What does this mean?
I just hope AR5 doesn’t again try to use “real” numbers, like real science papers use, like “better than 90% confident” to express uncertainty, when there is absolutely no way to assign a probability to their GUESSES! If they do, IPCC will again be the butt of many jokes, and Josh will have a heyday!
Upthread, Fred Moolten says:
A simple example illustrates the general principle. Imagine a hypothesis tested by 10 independent techniques, each yielding a probability value of 0.5 (50% probability) for the truth of the hypothesis. The strength of each result is weak. However, the probability that all ten results are wrong would be 2^-10, representing a 1023/1024 likelihood that the hypothesis is correct.
This is kind of true, but it says “hypothesis tested by 10 independent techniques.” That’s incorrect. It doesn’t matter how independent the techniques are. What matters is how independent their conclusions are. This might seem like a trivial correction, but later Moolten says:
I’ve encountered many hundreds of datasets that provide independent evidence for major CO2 warming effects, including a dominant role in the warming of the past half century. I’ve seen almost none that offer evidence contradicting this.
There aren’t “hundreds of datasets” which provide independent evidence. The fact the origins of these datasets are “independent” doesn’t mean the evidence is independent. Either this is a misunderstanding by Moolten of how the evidence converges, a misunderstanding of the nature of the evidence, or it is just poor phrasing.
Ultimately, giving a numerical amount of evidence means nothing. Evidence doesn’t converge all to the same point. Instead, there are basically tiers. Datasets A, B and C may converge to support Point D. Point D may converge with Point G (which stems from datasets E and F) to support point H. While there are five datasets, there are only two independent points.
In a somewhat related issue, I’ve never been able to find a good breakdown of the claimed support for climate sensitivity, despite how much people talk about there being “multiple lines of evidence.” There should be a simple breakdown of the various lines of evidence. Each line of evidence could then be broken down into its various evidentiary components. This could then basically be turned into a flowchart.
Doing so would make clear where the uncertainties lie, as well as how large they are. This would be good for educating people as well as providing a clear outline which could then be a basis for discussions. It should be a relatively simple task for something like the IPCC, yet there seems to be no effort to do anything like it.
I have to disagree, Brandon, but perhaps we are having a semantic quibble. What must be independent are the approaches to the problem. For example, one approach to climate sensitivity is to observe the temporal relationship between CO2 and temperature, correcting as much as possible for confounding variables. If a perfect correction were possible, we could draw a conclusion with very high certainty. As it is, we can draw a similar conclusion with only modest certainty. A second, independent approach is to compute CO2 forcing from radiative transfer codes, the Stefan-Boltzmann law and observed CO2 measurements, utilize the Clausius-Clapeyron equation to estimate the water vapor response, and then compute (along with other feedbacks) an overall response. Again, as most observers know, this could in theory yield conclusive evidence but in practice is surrounded by substantial uncertainty, and therefore provides tentative answers. A third example is to measure the temperature response to an identifiable forcing other than CO2 (e.g., a volcanic eruption) and extrapolate the results to CO2 on the assumption (unproved) that sensitivity is independent of the nature of the forcing. Many other examples could be cited. Despite uncertainty surrounding each, their convergence toward a greater degree of certainty is the principle I am emphasizing.
The number of datasets, derived from independent approaches to evidence from over 400 million years, and which converge to substantiate the existence of strong anthropogenic causality certainly exceeds one hundred. If the question were simply how many such independent datasets bear on climate sensitivity per se, the number will be less, but still very possibly over one hundred.
I do agree that the IPCC, despite a creditable job in chapters 8 and 9 of AR4 WG1, should seek a more systematic approach to identifying the number, degree of independence, and conclusiveness of the lines of evidence.
Fred Moolten, you said the techniques need to be independent. I say the conclusions need to be independent.
You offer as examples three lines of evidence which can be used to support the idea of AGW. Each of these uses an “independent” technique. However, each also results in an “independent” conclusion. As such, your examples meet both of our requirements. This obviously could not contradict my position (nor yours).
Ultimately, the technique used to come up with a conclusion isn’t important. If the same technique can come up with two independent conclusions, it doesn’t matter that the technique used for both is not “independent.” Moreover, independent techniques can produce non-independent conclusions. Because of these two issues, you cannot measure convergence based upon the techniques used, even if those techniques can often be a useful proxy for independence.
The number of datasets, derived from independent approaches to evidence from over 400 million years, and which converge to substantiate the existence of strong anthropogenic causality certainly exceeds one hundred. If the question were simply how many such independent datasets bear on climate sensitivity per se, the number will be less, but still very possibly over one hundred.
The number of datasets supporting something means nothing. What matters is how many independent lines of evidence there are. Unless you mean to tell me there are over a hundred independent lines of evidence, your talk about datasets means little.
As a side note, the idea of convergence requires you detail exactly what is being converged to. Convergence to the point, “Anthropogenic emissions of CO2 increase the average temperature of the planet” is quite different than if you use actual numerical values.
Don’t know if this helps, but the easiest way to see the fallacy of what Fred is suggesting (and I’m not necessarily arguing this, just using it to demonstrate the fallacy) is if CO2 increase and temp increase are correlated but not causal.
I would add that in fact the interesting proposition isn’t whether CO2 does cause temp increases, it is the degree to which this occurs. The IPCC proposition is that AGW causes more than 50% of the 20th century temp increase (was it 90% confidence?).
Accumulating a large number of studies that simply show CO2 correlates with temp increases doesn’t help, although showing that prior-period CO2 increases don’t add significantly to the explanatory power of models of temp increases over the 20th century will tend to raise doubt about a causal relationship.
Correlation is useful information, but the anthropogenic causality concept relies on a large array of evidence that substantiates the causal element. The importance of correlation is not only that it is confirmatory, but that when other sources of the correlation are adjusted for, a temperature change in response to a CO2 change more clearly signifies a causal relationship in its own right.
I don’t know whether you are familiar with the epidemiology of medical and health-related events (e.g., cigarette smoking and cancer), but many of the same principles apply there as well.
Fred
There is however a qualitative difference between the medical example and the CO2-and-temp climate example. The smoking-cancer link wasn’t established by epidemiological studies alone; rather, experiments that controlled confounding variables were required.
CO2 and temp is more like attributing 20% of crime to absent fathers at 3 years old (and being male of course). It’s a complex problem not least because of the problems running experiments. I’m not sure if you had a chance to look at the Jarl K. Kampen paper I mentioned over at the Spatio-temporal chaos thread, that concludes with some useful advice in this regard.
Re the IPCC (yr comment below) you’re probably right, I was simply making the point that the argument is about degree.
I would argue that CO2-mediated warming has not been substantiated by correlational studies alone, but by an array of evidence that includes the spectroscopy of infrared absorption by CO2, field measurement of CO2-mediated changes in radiative flux, and observational data confirmatory of predicted changes in atmospheric water vapor, ice, and (with some disagreement) clouds.
The correlative data are powerful, but they are complemented by theoretical and observational quantitation.
Fred
What is the form of the relationship between CO2 and global temp i.e. if the relationship is something like:
T(t) = f(CO2(t-1), …, CO2(t-n)) (or however you care to characterise it)
what does f(…) look like?
Briefly (because it has been covered in the radiative transfer thread and elsewhere):
ΔT = λ · ΔF, where ΔF is the change in radiative flux at the tropopause, given as
ΔF = 5.35 ln(C/C₀)
and λ is the climate sensitivity parameter.
In other words, temperature changes as a function of the ln of the CO2 concentration (within the range of concentrations relevant to foreseeable levels of CO2).
Current estimates center around a temperature increase of about 3 C for a doubling of CO2.
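A quick numerical check of those two formulas. The value of λ below is not from the comment; it is back-solved from the ~3 C central estimate, so treat it as illustrative:

```python
import math

def delta_F(C, C0):
    """Change in radiative flux at the tropopause (W/m^2) for a CO2
    change from concentration C0 to C."""
    return 5.35 * math.log(C / C0)

lam = 0.81  # climate sensitivity parameter in K/(W/m^2) -- assumed value

dF = delta_F(560, 280)  # a doubling of CO2
dT = lam * dF
# dF is about 3.7 W/m^2, so dT is about 3.0 C, matching the estimate above.
```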
That’s a pretty simple model.
Is lamda a constant and what’s the lag between CO2 increase and Temp rise?
It’s an inconstant constant. It is determined both empirically and through models, and in each case is generally considered to remain constant over multidecadal intervals or even centuries. In theory, it could eventually vary, but at a time and climate too far from where we are now for meaningful assessment.
Temperature starts to rise within nanoseconds of a rise in CO2. However, the climate system is characterized by enormous thermal inertia, due mainly to the heat capacity of the oceans. As a consequence, rises sufficient to be distinguished from noise will typically require years or even a few decades at the current rate of CO2 increase.
So you think we can approximate the temp change from a base year by the weighted sum of ln CO2 impulses over the last few decades or so (with weightings a function of t and recent weightings being comparatively small – in fact not significantly greater than 0) plus some noise?
BTW what kind of noise?
The noise is everything else – anthropogenic and natural – that causes temperature to vary.
Climate models use changing CO2 with time as boundary conditions, but also include to the extent available the other variables – solar, aerosols, ENSO and other internal climate fluctuations, volcanism, etc. The output is a curve of temperature change from the starting year over the course of subsequent years.
Climate models run without anthropogenic factors yield curves showing interannual bumps and dips, but a flat long-term average. When anthropogenic factors, including CO2, are added, they yield a rising temperature curve in accordance with the logarithmic relationship.
An example is Hansen 1988. Compare fig. 1 (noise) with fig. 3 top (anthropogenic).
So by definition you are saying noise is everything that doesn’t cause the recent temperature rise (it only causes fluctuations around a long term average), and CO2 is what does?
Have you tried to fit say a ARIMA model to the global d temp time series to describe this noise and then see if adding lagged ln CO2 to it adds any significant explanatory power? It’s the minimum empirical test one would expect this model to pass under the circumstances.
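For what it’s worth, a toy version of that minimum test might look like the following. The data are entirely synthetic and the fit is ordinary least squares rather than a full ARIMA model, so this only sketches the shape of the comparison:

```python
import math
import random

random.seed(0)

n = 200
# Hypothetical smooth CO2 series (ppm) and its log.
ln_co2 = [math.log(280 + 0.5 * t) for t in range(n)]

# Synthetic temperature: responds to lagged ln CO2, plus AR(1) noise.
temp, noise = [], 0.0
for t in range(n):
    noise = 0.6 * noise + random.gauss(0, 0.1)
    temp.append(2.0 * ln_co2[max(t - 1, 0)] + noise)

def rss_of_fit(y, xs):
    """Residual sum of squares of OLS of y on columns xs (plus intercept)."""
    cols = [[1.0] * len(y)] + xs
    k = len(cols)
    # Normal equations X'X beta = X'y.
    A = [[sum(a * b for a, b in zip(cols[i], cols[j])) for j in range(k)]
         for i in range(k)]
    rhs = [sum(a * yi for a, yi in zip(cols[i], y)) for i in range(k)]
    # Gaussian elimination (X'X is positive definite, so no pivoting needed).
    for i in range(k):
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            rhs[j] -= f * rhs[i]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (rhs[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    fitted = [sum(b * col[t] for b, col in zip(beta, cols)) for t in range(len(y))]
    return sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))

y = temp[1:]                 # temperature from year 1 on
lag_temp = temp[:-1]         # its own lag (the pure "noise" model)
lag_co2 = ln_co2[:-1]        # lagged ln CO2

rss_null = rss_of_fit(y, [])                   # intercept only
rss_ar = rss_of_fit(y, [lag_temp])             # noise model alone
rss_full = rss_of_fit(y, [lag_temp, lag_co2])  # noise model + lagged ln CO2
# For data generated this way, lagged ln CO2 adds explanatory power:
# rss_full < rss_ar < rss_null.
```

On real data the interesting question is whether the reduction in residual variance from adding lagged ln CO2 is statistically significant (e.g., via an F-test), which is exactly the comparison being asked for.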
No, “noise” is everything that happens when the model is run without the perturbation of interest. Whether temperature rises or not is irrelevant. In the case of CO2, it rises, whereas anthropogenic aerosols cause the temperature to fall.
That’s not exactly how the term “noise” is generally used, but I’m assuming that you are arguing the GCM stabilises at some point (I’m unclear why this should happen, what happens if it doesn’t, or what happens if it stabilises on the wrong global temp and/or shows other biases).
But anyway I repeat my second question.
You have now used your GCM to develop a hypothesis about the relationship between CO2 and global temp, have you tried to verify this empirically along the lines I suggest?
Fred,
Concerning the last 50-100 years, the increase in CO2 concentration is certainly affected very little by the increase in temperature. Simple observations and balance calculations tell without doubt that the totally dominating reason for the increase in CO2 concentration is anthropogenic, and in particular the use of fossil fuels. Various natural processes have acted as feedbacks. The sign of the feedback depends on the selected system boundary (setting the boundary to include only the atmosphere leads to a strong negative feedback, but “feedbacks of feedbacks” and some other factors contribute in the positive direction).
When the correlations are used in the analysis of preindustrial and paleoclimatological data, the nature of correlations is not as simple. The natural interpretation involves strong positive feedbacks, but they may be restricted to particular historical situations. The feedbacks are certainly not linear, which makes drawing quantitative conclusions even more difficult.
HAS – A small point regarding your comment, but I don’t think the IPCC attributed most 20th century warming to anthropogenic factors, but rather that anthropogenic greenhouse gas emissions account for more than 50 percent of warming in the later decades of the century.
So Mr. Moolten, you say that of the total 0.7C we’ve experienced in two 30-year periods of warming since 1900 or so (1910-1930, 1970-2000), around 0.5C-0.6C took place after 1970 (HadCRUT), and of that half was anthropogenic, say 0.25-0.3C? In your opinion, what would that suggest about the climate sensitivity, and furthermore, what was the natural cause of the rest? Sun? ENSO? PDO? Clouds? Volcanic activity? How were they quantified?
Yes, I am aware that my questions above cannot be answered with reasonable certainty. Still one has to wonder the debate of 2C “critical limit”, often cited by MSM and politics as some kind of target, as we are nowhere knowing what kind of changes in atmosphere and various natural cycles would cause that – not to mention when that might happen, if ever.
@anander So Mr. Moolten, you say that of the total 0.7C we’ve experienced in two 30-year periods of warming since 1900 or so (1910-1930, 1970-2000), around 0.5C-0.6C took place after 1970 (HadCRUT), and of that half was anthropogenic, say 0.25-0.3C?
Based on this model the anthropogenic component ought to be closer to 0.46 °C, with the rest being a 0.14 °C swing attributable entirely to ocean oscillations (AMO and PDO). Note that the unexplained residues (black curve) at 1970 and 2010 are essentially the same, so no net change there despite some intermediate fluctuation.
The IPCC did not say “exactly 50%,” they just said “more than 50%.” This is consistent with the anthropogenic part being 75%.
Of course I don’t understand any of the statistics (the obvious reply being, “that is obvious”).
A simple example. Say there is one FACTOR that will make a manned “mission to Mars” successful. Getting this factor “correct” depends on 100 other independent factors, all determined with a low degree of confidence. However, since these all converge with a low probability, one is highly confident that the critical Factor is known.
I don’t think I am going on that rocket.
One comment on all these data points derived from independent data sets. As we have seen, the GISS, HadCRUT, and Met Office temperature data sets are not independent at all, despite that argument being used as proof. In fact they all basically use the same original data set. So I would say that all of these “independent” data sets are not independent. Far from it.
Many of the arguments are based on a few so-called knowns that are in fact debatable.
If A=B=C=D=E…. life is good…. but if anywhere along there, say C does not = D, then things suddenly start to go to crap. It goes to the idea that it only takes one thing to prove a theory wrong.
If the 100 other factors completely determined the outcome, and if they were all truly independent, then even if each were associated with a 20 percent certainty and an 80 percent uncertainty, confidence that the Factor is correct would be well over 99 percent. It would be incorrect with a probability of 0.8^100, which is very small.
There is a critical distinction, however, that must be made regarding values such as these. If 20 percent were the probability the Factor is correct and 80 percent that it is incorrect, then it would be dangerous to go on the mission. If, however (as in my climate analogy), the 80 percent is simply the level of uncertainty (Did we measure it correctly? Did it change since the last measurement?), then the high confidence estimate applies.
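A minimal sketch of the arithmetic behind this argument, assuming (as in the mission-to-Mars example, which is mine to paraphrase, not an official calculation) that the Factor is wrong only if every one of the 100 independent lines of evidence is wrong:

```python
# Hedged sketch: each line of evidence is wrong with probability 0.8,
# and the Factor fails only if all 100 independent lines fail at once.
p_line_wrong = 0.8   # per-line uncertainty from the example
n_lines = 100

p_factor_wrong = p_line_wrong ** n_lines   # roughly 2e-10
confidence = 1.0 - p_factor_wrong          # well over 99 percent

print(f"P(Factor wrong) = {p_factor_wrong:.2e}")
print(f"Confidence      = {confidence:.10f}")
```

Of course, this only holds if the lines really are independent, which is exactly the point disputed elsewhere in this thread.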
It’s just like the Drake equation, which cannot be used to draw conclusions of any kind. Any result is worse than useless. Much worse, because it can give an illusion of a meaningful result.
I think anyone familiar with the literature on quantifying uncertainty, and specifically with a focus on results relevant for policymaking, is aware that the main points about uncertainty are already well known in climate science and part of over a decade of focused discussion of analytics.
I don’t think there’s anything new about Judith’s concerns. I’m not sure why she thinks her concerns are significant in ways that have not been adequately appreciated and understood in the climate science community or new in relation to results for policymaking.
So I look forward to her paper.
Martha
I have a background in policy analysis, political decision making and, to some extent, statistical inference. I, unlike you, find that the treatment of uncertainty etc. in climate science deserves a “must try harder”.
A simple test I’d suggest for you to try is to go through recent climate science papers and count how often the output from climate models is used as if it is empirical data with no robust attempt to quantify the uncertainty involved.
When scientists have a reasonable quantitative estimate of likelihood, why does the IPCC want to reduce this quantitative estimate to a verbal term (such as “likely”) that means different things to different readers? This question is especially pertinent for readers who aren’t familiar with the alternative terms that could have been used to describe likelihood. What is gained by this vagueness? Why do scientists tolerate it? The IPCC’s SPMs cover technically difficult subjects; the amount of technical difficulty added by including quantitative estimates of likelihood is trivial in comparison. Reporters, activists, and policymakers wanting to quote such statements will have an incentive to learn what they mean, so they can respond to questions from editors or the public.
If a single die is rolled, different people will use different verbal terms to describe the probability that the outcome will be a 6 or a number >4, but both scenarios get lumped into the category of “unlikely” by the IPCC. For me personally, neither of these outcomes is “unlikely” in a scientific sense, and readers would make far more of a quantitative statement such as “17% likelihood” or “33% likelihood” than they do of “unlikely”.
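To make the lumping concrete, here is a rough sketch of the AR4/AR5 likelihood vocabulary as a lookup; the band edges are my paraphrase of the guidance note, not an official implementation:

```python
def ipcc_likelihood(p):
    """Map a probability to an (approximate) IPCC likelihood term."""
    if p > 0.99:
        return "virtually certain"
    if p > 0.90:
        return "very likely"
    if p > 0.66:
        return "likely"
    if p >= 0.33:
        return "about as likely as not"
    if p >= 0.10:
        return "unlikely"
    if p >= 0.01:
        return "very unlikely"
    return "exceptionally unlikely"

# The two die rolls from the comment above:
print(ipcc_likelihood(1 / 6))  # rolling a 6, ~17%
print(ipcc_likelihood(2 / 6))  # rolling 5 or 6, ~33%
```

Note that the >4 case lands right on the 33% band edge, so whether it is called “unlikely” or “about as likely as not” depends on rounding – which is itself part of the vagueness problem.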
I was at the park yesterday and half of the park was covered in snow. Seems like a sign of global cooling to me. Why, imagine if that ice continues to grow… this is how stupid and retarded the whole concept of global warming is.
Perhaps the good doctor would be well advised to read the blog rules before posting. A personal attack like that contained in your first sentence is frowned upon here. Also, there are a couple of heavy hitters that are frequent commentators that will rip an argument such as yours to shreds. I, myself seldom post as most of the time I am quite content to learn from others whose knowledge way surpasses mine in this debate. Stick around, listen and learn, because jumping in guns blazing, is a sure way to end up full of holes.
that message was deleted, thanks for flagging it
I’m afraid this is going to be another mammoth post- apologies!
I’m really interested in what you’ll come up with, Judith; it’s not an easy subject to write on or to get your ideas across well on.
With regard to a few of your proposed sections:
“2. Framing of the climate change problem”
This has been touched upon above, but i think it is worth saying again that it is the political framing of the issue that has caused most of the issues/disagreements. Granted there are seen to be many issues with the science but it is the response to the criticisms, the tone of the debate and the way the ‘message’ is reinforced via the political and media-based entities that REALLY get people’s goats (poor goats).
If i were you I’d do everything i can to try to re-frame the issue as a scientific debate ONLY. Try to crowbar the political and the environmental aspects from the debate and return to the science.
This is actually far more important than I’d imagine most people realise. The uncertainties and the deficiencies in the theory (such as they are) tend to be glossed over or ‘forgiven’ due to the scale, the seriousness and the imminence of the perceived danger (cAGW). These aspects of the debate seem to be pulling the science up by its bootstraps- i.e. the overall theory is lent more credence not because of the solid science, but because of its potential seriousness.
This is never more evident than in the decision making processes in the UN, the IPCC and (particularly) the UK govt. ‘We’ constantly hear that the issue is so serious that we cannot let a few issues with a data set/misquoted references/incorrect references/suspect methods etc etc get in the way of saving the planet. I genuinely think that this has not only given the theory undue impetus but it has also blurred the actual science around the issue. Please note I’m not saying which way the issue leans (I’m going through a ‘try to prove myself wrong phase’), only that the debate is becoming increasingly difficult to engage in as the political and environmental issues are allowed to muddy the science.
3. Uncertainty, ambiguity, indeterminacy, and ignorance
This is the most important section to my (fairly simple) mind. I am not an expert in calculating uncertainty, so would leave this to the far more qualified people on the thread, but my concerns would be about representing the ‘collective uncertainty’.
I.e., if you have aspects A, B, C, D and E, which make up and support Theory X, how do you calculate and present the uncertainty of the individual aspects and of the theory, X, as a whole?
For example- if the following uncertainties/error limits apply:
A= 5%
B= 4%
C= 35%
D= 40%
E= 2%
What is the level of uncertainty surrounding X? Is it the average uncertainty, the upper band uncertainty (the 40%) or is it/should it be an accumulative factor of them all (i.e. > than 40%)? It seems to me that regardless of the ‘low’ values assigned to these uncertainties, the sheer prevalence of them should RAISE the level of uncertainty (though of course you’d have to have a base-line level of uncertainty that was classified as normal and un-accumulative, if you follow).
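One common answer, assuming (hypothetically) that Theory X stands only if every one of A-E holds and that the five listed uncertainties are independent, is that the combined uncertainty compounds well past the largest single one:

```python
# Hedged sketch: if X requires all of A..E, and the listed uncertainties
# are independent, the probability that everything holds is the product
# of the individual "holds" probabilities.
uncertainties = {"A": 0.05, "B": 0.04, "C": 0.35, "D": 0.40, "E": 0.02}

p_all_hold = 1.0
for u in uncertainties.values():
    p_all_hold *= (1.0 - u)

combined_uncertainty = 1.0 - p_all_hold
print(f"combined uncertainty ~ {combined_uncertainty:.0%}")  # ~65%, above the worst single 40%
```

If X instead survives as long as any one of the lines holds, the arithmetic flips and the combined uncertainty becomes tiny; which structure applies to a given theory is exactly the modelling question the comment raises.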
4. Consensus, disagreement and justification
Now the difficult one; strictly speaking and following my comments on your section 2, the first word- consensus should be harshly criticised. After all, strictly speaking consensus has no place in science proper. It’s comforting to know your position is supported by your peers/other experts, but it is zero ‘proof’ that your position is correct. Scientific history is littered with such examples of the ‘burst-consensus’. I feel, quite strongly, that the opening of this section should detail the perils of consensus and attempt at least to relegate the term ‘consensus’ back to the political arena where it belongs. Though note- this is not the same as dismissing a large body of evidence, only the ‘artificial’ protection of a consensus opinion.
The second point, disagreement, I think you’ll have fun with- I’d go to town on ‘them’, but remain constructive.
The third, justification also ties in to my points on your section 2.
As I suggested in section two, the justification of the actions (i.e. impending doom etc etc) is used to shore up the science (whether it needs it or not) and again this needs to be separated. The justification needs to be tied, firmly and with chafe-free rope (!), to the uncertainties. Some sort of critical limit needs to be established- i.e. once the uncertainty falls below X then Y percentage of actions Z become applicable.
How you’d do this, I have no idea; it requires far more thought than I can dedicate to it just now- but that’s where my thinking takes me.
Now all make yourselves a nice brew for sitting through that twaddle!
The real issue is disagreement about policy, and the disagreement about policy has very much to do with uncertainties. But these indisputably great uncertainties are on the level of the consequences of policy choices, and perhaps to a lesser extent on the level of the possibility of really catastrophic outcomes: outcomes that almost everybody considers very unlikely, but that a few on the extreme edge regard as likely results of continued use of fossil fuels.
In Europe the environmentalist approach that emphasizes the possibility of catastrophic outcome and applies the precautionary principle explicitly or implicitly has a strong influence as can be seen in the decisions taken by EU. In most other parts of the world this line of thinking is not as strong. The uncertainties of the climate science are not really essential for the controversy, but the climate science is being used as a tool in the debate. Discussing the real uncertainties and ethical issues of the policy disagreement appears too difficult and abstract to the dedicated proponents of both sides. Therefore they both have projected the controversy to disagreement on the climate science.
Solving better the imaginary disagreement on climate science is not going to resolve the real problem. It can be resolved only, when people are ready (willing and capable) to discuss the real sources of disagreements.
Pekka, your claim that the real issue is disagreement about policy is ambiguous. The real issue in the USA is whether or not there is a problem calling for action. Is that a policy decision in your terms? It obviously brings in the scientific issue. Or are you referring to the policy issues regarding which actions to take? We have those too, mostly regarding the bogus concept of clean energy.
David,
Most people start from being convinced about the policy conclusion. Then they work backwards and bend their thinking about climate change to support the conclusion.
People do not know enough about science, or about the mechanisms that link science to rational decisions, to do it the other way. Actually it is highly questionable whether anybody knows enough about the other steps to deduce correct policy actions from any certainty about climate science, unless the science in future gives a result well outside the limits considered likely by mainstream science.
Further on this logic.
Because the initial and strongest conviction is on the policy conclusion, getting one of the erroneous beliefs on the basic science corrected is not going to change much, as there are always other details or steps to hang on to, some of them even fully legitimate issues of disagreement.
Dr Curry
I am not persuaded by the approach that the uncertainties and gaps in the scientific knowledge must be reduced and described accurately so that the policymakers can get on with their work. The economic theory of information starts with the question “what decision has to be made?” – only once that is settled can we decide what information is relevant, how accurate it needs to be, and how much it is worth spending to improve which bits of it. I know the IPCC was given the mandate to provide scientific evidence for the policy-makers, but IMO that is one cause of the muddle that the IPCC has got into.
If we start with the possible policy decisions, we can rule out massive voluntary decarbonisation of the economy. It is just not going to happen for the simple reason that no democratic government would survive introducing effective decarbonisation on the scale talked about. Governments without democratic legitimacy (eg China) are even more terrified of their population and less inclined to demand massive sacrifices in living standards. Some countries have introduced timid schemes (governments and interest groups like subsidies for things like wind power), but anything with teeth gets pulled back pretty quickly just before it might start to bite.
More plausible are decisions that make sense without climate change and will make even more sense if climate change turns out to be a problem. I will believe the Europeans are serious about climate change when they stop subsidising their coal industry – but that would put miners out of work in an industry that would go under (as the British industry did) without active state help. Subsidies are economic nonsense, so they should be abolished anyway, and if CO2 is a problem then a subsidy for coal is nonsense on stilts. It would be sensible for Americans to stop building cities on coastlines in hurricane zones – these cities get regularly damaged as it is, so this makes sense even if climate change turns out not to be catastrophic. (Not that existing cities should be removed, but they should stop putting more capital in harm’s way.) Australians have largely dried up the flow in the Murray-Darling river system, partly in order to irrigate rice and cotton. Even if the climate does not change, one might consider it insane to grow rice in a semi-desert continent, when nearby countries with lots of rain can grow any amount. And on and on.
None of these examples require international agreement; they simply require governments to face down their own domestic interest groups. Also, none of them requires the science to be settled. Whether the temperature rises 0.1C or 1C or 10C and sea levels rise 10cm or 10m over the next century, these actions would have been beneficial; just much more beneficial if the change is more extreme.
For decisions like these, we have enough information now to go ahead. Over the next few decades, the scientific estimates will become clearer and the policy actions can be revised or extended accordingly. (Anyone contemplating the political difficulty of getting even these policies implemented will understand why thoughts of turning off the power stations or forcing most vehicles off the road are just fantasies.)
More difficult are policies that make no sense unless serious climate change is very likely to occur. These will mostly not be implementable until there is (a) no real doubt about the reality of the change (b) reasonably accurate measurements of the magnitudes and (c) reliable regional forecasts of what is going to happen where. These are needed because there needs to be enough information to make fairly good cost-benefit estimates for implementing a particular policy in a particular country. My judgment is that we now have (a) but not (b) or (c). Here, better science will help; but often it will not be needed, because the time-frames are often fairly loose. For example, some crops will no longer grow in their current locations. I live near Wellington, New Zealand. Rough estimates suggest that over the next century Wellington’s climate might become like Auckland’s, Auckland’s like Sydney’s, Sydney’s like Brisbane’s, and Brisbane’s like Singapore’s. Casual observation suggests that people already live in these places, and that various forms of agriculture are successful nearby. Farmers in our part of the world regularly adopt new crops and new methods over time-scales of a decade or so, and they can be expected to adjust as conditions change in future. No government action or policy response is required at all; farmers can just be left to get on with it. Subsistence farmers have less margin for error but they are not helpless, and climate changes over periods of decades, with appropriate external assistance, should not usually produce intolerable stresses.
Reframing the problem, not as “what uncertainties in the science do we need to identify and resolve”, but as “what political choices need to be made in the near future”, produces a quite different sense of how to proceed. I have not presented a complete list of issues, of course, and there may be some matters for which it may be important to reduce specific scientific uncertainties quickly. But whether such matters exist, and what they are, needs to be decided by looking at policy options, not by looking at science.
Paul, excellent post, your statement is pretty much consistent with what I have been saying. My issue is not so much about resolving uncertainties (see my uncertainty monster post), but rather understanding and characterizing them. This kind of information is important in making policy decisions, and overconfidence in the science and future projections can result in costly and/or ineffective policies. The no-regrets type of policy decisions (e.g. your 3rd and 4th paragraphs) is what I have been referring to as robust decision making and no-regrets policies; I agree that this is the route that makes the most sense. Your statement “what political choices need to be made in the near future” is spot on; I will remember this phrase and use it!
Excellent post. I’d argue that the question being asked in the political arena is not answerable on a scientific basis yet. Either change the question, increase the scientific knowledge, or both.
BTW, in my opinion, asking the wrong question is the single biggest area where unexpected risk can enter into a decision making process. It’s roughly equivalent to specifying a software design. How software is specified has a huge impact on the work that has to be done and the final quality of the product. Everyone reading this has seen bad software due to poor specifications.
Preferable to UNFCCC policy based evidence making, I suggest.
Dr. Curry, I think Paul Dunmore’s comment on risk management and policy highlights one of the side issues that tend to bog down discussion. Using Fred Moolten’s comments on this thread as an example, almost all of what has been discussed rests on a variety of assumptions. In a systematic approach to a problem, such as one would take in a risk management scenario, assumptions are required to be stated and examined within the framework of the model. In Fred’s model, he has assumed independence, yet opponents point out the role of prevailing assumptions and non-independence.
Whether one looks at the well-reasoned comment by Paul, or the discussion of Fred’s point, the framework of the assumptions, in many cases, determines whether fruitful discussion can be achieved. Your #3 on the “unknowns” is good. I think #2 on framing needs something added concerning the role of our assumptions: in the mix of science and policy that climate change is, these assumptions have a way of framing and even moving the target, which can spawn ineffective communication, i.e. tribes talking past each other. You say “An underappreciated aspect of uncertainty in climate change is associated with the questions that do not even get asked because of the way that the climate change problem has been framed”. I think that somewhere, and #3 may be the better place, the role that assumptions play, both good and bad, should be included. To me, this is especially true when there are at least two sides trying to use both assumptions inherent to policy and science to “win” for their side.
I like the idea of this post; I think blogs can do great service as Open Notebooks. In that spirit, here’s some links to consider for your reading stack.
Special issue of JCP on Uncertainty Quantification; of particular interest to NWP and climate folks might be this paper. I’m sure that Martha’s right about climate scientists already knowing all there is to know about UQ; it’s just disappointing to see that nobody brought their GCM out to play for this special issue ; – {
Having said that, this thesis is a promising indication that we might see some contributions from climate scientists in the future at the brand new International Journal of Uncertainty Quantification. Based on the quality of work that’s come out of some of the folks on the editorial board, I think this would be a good one to watch.
“The AR4 guidance (paragraph 13) presented quantitatively calibrated levels of confidence intended to characterize uncertainty based on expert judgment regarding the correctness of a model, analysis or statement.”
Now look. Since when did scientific certainty depend on “expert judgment”? And since when, for anybody from Freddy’s mother to a retiring nuclear physicist, could an opinion be held with 75%, 90%, or 95% certainty, and how could any way be found to distinguish between them?
All this reminds me of growing up in rural Wisconsin, where the old German farmers had a way of weighing their cows:
First, you lay a long, strong log across another log, so they cross exactly at the middle.
Then, you put the cow on one end of the log.
Then you pile stones, very carefully, on the other end of the log, until they exactly balance the cow.
Then you try to guess how much the stones weigh.
[I thought this scientific how-to might be useful in a technical thread.]
How important is uncertainty?
Who knows?
How can it be reduced?
Hard to say.
How can we stop politicians from making self-serving assumptions and twisting science and scientists to suit their ends?
Definitely impossible.