by Judith Curry
My paper “Reasoning about climate uncertainty” has now been published online at Climatic Change; it looks like mine is the first of the papers in the special issue, entitled Framing and Communicating Uncertainty and Confidence Judgments by the IPCC, to make it online.
Also of relevance, there is a new working paper from the LSE Grantham Research Institute entitled “Scientific uncertainty: a user’s guide” (h/t Bishop Hill).
Reasoning about climate uncertainty
Judith Curry
Received: 1 April 2011 / Accepted: 14 June 2011 / © The Author(s) 2011.
Climatic Change DOI 10.1007/s10584-011-0180-z
This article is published with open access at Springerlink.com
Abstract. This paper argues that the IPCC has oversimplified the issue of uncertainty in its Assessment Reports, which can lead to misleading overconfidence. A concerted effort by the IPCC is needed to identify better ways of framing the climate change problem, explore and characterize uncertainty, reason about uncertainty in the context of evidence-based logical hierarchies, and eliminate bias from the consensus building process itself.
You may recall the previous thread here where I posted an earlier draft of the paper for comments. I GREATLY appreciate the comments that I received on this thread, and in my acknowledgements I made the following statement:
Acknowledgements. I would like to acknowledge the contributions of the Denizens of my blog Climate Etc. (judithcurry.com) for their insightful comments and discussions on the numerous uncertainty threads.
It’s always interesting to go back and read your own paper after not thinking about it for a few months. I’m still pretty happy with it, but upon rereading section 2, it seems to have gotten a bit muddled in all the editing.
Once the special issue is published, we will have a thread discussing the other papers.
Scientific uncertainty: a user’s guide
Seamus Bradley
Abstract. There are different kinds of uncertainty. I outline some of the various ways that uncertainty enters science, focusing on uncertainty in climate science and weather prediction. I then show how we cope with some of these sources of error through sophisticated modelling techniques. I show how we maintain confidence in the face of error.
Contents
1 Two motivating quotes
1.1 Laplace’s demon: ideal scientist
1.2 On Exactitude in Science
1.3 Characterisations of uncertainty
1.4 Outline
2 Data gathering
2.1 Truncated data: imprecision
2.2 Noisy data: inaccuracy
2.3 Deeper errors
2.4 Meaningfulness of derived quantities
3 Model building
3.1 A toy example
3.2 Curve fitting
3.3 Structure error
3.4 Missing physics
3.5 Overfitting
3.6 Discretisation
3.7 Model resolution
3.8 Implementation
4 Coping with uncertainty
4.1 Make better measurements!
4.2 Derivation from theory
4.3 Interval predictions
4.4 Ensemble forecasting
4.5 Training and evaluation
4.6 Robustness
4.7 Past success
5 Realism and the “True” model of the world
6 Summary
IMO the chapters on Model building and Coping with uncertainty are the most interesting. There are aspects of the coping with uncertainty chapter that I don’t agree with, particularly the section on robustness. A main theme of the paper is “ways we have of maintaining confidence in our predictions despite these errors.” I don’t think the paper makes a very convincing case for this.
New paper by Jonassen and Pielke Jr
Just after posting this, I spotted this on Pielke Jr’s blog:
Jonassen, R. and R. Pielke, Jr., 2011. Improving conveyance of uncertainties in the findings of the IPCC, Climatic Change, 9 August, 0165-0009:1-9, http://dx.doi.org/10.1007/s10584-011-0185-7.
Abstract. Authors of the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) received guidance on reporting understanding, certainty and/or confidence in findings using a common language, to better communicate with decision makers. However, a review of the IPCC conducted by the InterAcademy Council (2010) found that “the guidance was not consistently followed in AR4, leading to unnecessary errors . . . the guidance was often applied to statements that are so vague they cannot be falsified. In these cases the impression was often left, quite incorrectly, that a substantive finding was being presented.” Our comprehensive and quantitative analysis of findings and associated uncertainty in the AR4 supports the IAC findings and suggests opportunities for improvement in future assessments.
Some excerpts:
If we confine our attention to those findings that refer to the future, one can ask how many IPCC findings can be expected to become verified ultimately as being accurate? For example, if we consider findings that refer to future events with likelihood in the ‘likely’ class (i.e., >66% likelihood) then if these judgments are well calibrated then it would be appropriate to conclude that as many as a third can be expected to not occur. More generally, of the 360 findings reported in the full text of WG1 across all likelihood categories and presented with associated measures of likelihood (i.e., those summarized in Table 2 below), then based on the judgments of likelihood associated with each statement we should logically expect that about 100 of these findings (~28%) will at some point be overturned.
Although the IPCC has made enormous contributions and set an important example for global assessment of a vexing problem of immense ramifications, there remain clear opportunities for improvement in documenting findings and specifying uncertainties. We recommend more care in the definition and determination of uncertainty, more clarity in identifying and presenting findings and a more systematic approach in the entire process, especially from assessment to assessment. We also suggest an independent, dedicated group to monitor the process, evaluate findings as they are presented and track their fate. This would include tracking the relationship of findings and attendant uncertainties that pass up the hierarchy of documents within AR5. Strict rules for expressing uncertainty in findings that are derived from (possibly multiple) other findings are needed (see, e.g., the second example in the Supplementary Material).
It is not the purpose of this note to discuss other, related scientific assessments of climate change knowledge; but, we do note that our preliminary analysis of the U.S. Global Change Research Program Synthesis and Assessment Products suggests a far less systematic application of the guidance supplied to authors of those documents and far less consistent application of the defined terms. We believe that the concerns we have expressed here, and the resulting recommendations, apply more broadly than the IPCC process.
You can find the full text here. The full dataset of IPCC findings is online here.
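For readers who want to check the arithmetic in the first excerpt above, the “~100 of 360” figure is a plain expected-value calculation. Here is a minimal sketch in Python; the per-category counts below are invented for illustration (the real ones are in the paper’s Table 2), and each category’s lower likelihood bound is treated as if it were the exact probability:

```python
# Expected number of findings that would fail to verify, if the stated
# likelihoods are taken at face value and treated as well calibrated.
# NOTE: the per-category counts here are hypothetical, for illustration only.
findings = {
    # category:             (stated likelihood bound, count)
    "virtually certain":    (0.99, 20),
    "very likely":          (0.90, 120),
    "likely":               (0.66, 160),
    "more likely than not": (0.50, 60),
}

total = sum(n for _, n in findings.values())
expected_misses = sum(n * (1.0 - p) for p, n in findings.values())

print(f"total findings: {total}")
print(f"expected not to verify: {expected_misses:.0f} "
      f"({100 * expected_misses / total:.0f}%)")
```

Because the stated likelihoods are lower bounds, this is really an upper bound on the expected number of misses, which is part of what the Annan and Pielke exchange discussed below turns on.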
Congratulations, Dr Judith. I recall that there was a big discussion here about how many examples of IPCC uncertainties to put in the paper to illustrate your points. I reckon it’s about right. Section 2 does not seem too muddled to me.
I agree and respect Professor Curry for patiently pointing out uncertainties that were “glossed over” in the UN’s IPCC reports.
The basic problem seems to be the IPCC’s backward approach to science:
Accept the validity of a conclusion that is popular with world leaders and then select experimental data and observations to support that particular conclusion.
Good luck, Professor Curry! Arthur D. Little, Inc. reportedly made “a silk purse out of a sow’s ear” in 1921, but they had better starting material than the UN’s IPCC Reports.
http://libraries.mit.edu/archives/exhibits/purse/
With kind regards,
Oliver K. Manuel
Former NASA Principal
Investigator for Apollo
The National Academy of Sciences was established by an Act of Congress . . . to “investigate, examine, experiment, and report upon any subject of science or art” whenever called upon to do so by any department of the government.
http://www.nasonline.org/site/PageServer?pagename=ABOUT_main_page
Can you help develop this list of unavoidable questions so Congress can obtain information from NAS on today’s AGW controversy?
1. The influence of CO2 on Earth’s changing climate is:
a.) Large b.) Small c.) I don’t know
2. The Sun’s influence on Earth’s changing climate is:
a.) Large b.) Small c.) I don’t know
3. The interior of the Sun is mostly:
a.) Hydrogen (H) b.) Iron (Fe) c.) I don’t know
4. The Sun is powered principally by
a.) H-fusion b.) Neutron repulsion c.) Electricity d.) I don’t know
5. Which publication (and references) is more scientifically credible?
a) IPCC reports b.) APEIRON report c.) I don’t know
a.) IPCC reports: http://www.ipcc.ch/publications_and_data/publications_and_data_reports.shtml
b.) APEIRON report: http://arxiv.org/pdf/1102.1499v1
With your help, endless discussion can cease and integrity can be restored to science and government!
Sincerely,
Oliver K. Manuel
Former NASA Principal
Investigator for Apollo
http://myprofile.cos.com/manuelo09
Not to mention, of course, that the IPCC’s characterization of the various “probability” levels as likely, very likely, etc. is disingenuous and highly misleading. The standards set are pathetically low, and would be/are risible by comparison with those required of any hard science papers or hypotheses (in physics, etc.)
The clear implication is that climate science is a very “soft” science, if it is one at all, whose conclusions are not to be relied on and which can be counted on to contradict itself 11 times before breakfast.
From the Bradley paper:
Laplace was operating in a purely Newtonian world, without quantum effects, without relativistic worries. In this paper, I will do the same. On the whole, climate scientists make the assumption that it is safe to model the climate system as a Newtonian, deterministic system.
An interesting assumption to make given the immediately preceding paragraph (parenthetical annotations are mine):
Laplace attributed three important capacities to his demon. First it must know “all the forces that set nature in motion”: it must be using the same equations as Nature is (safe to say this condition is not yet satisfied). Second, it must have perfect knowledge of the initial conditions: “all positions of which nature is composed”. Laplace makes reference only to positions, but let’s grant that he was aware his demon would need to know various other particulars of the initial conditions — momentum, charge… — let’s pretend Laplace was talking about position in state space (another unfulfilled condition). The third capacity that Laplace grants his demon is that it be “vast enough to submit these data to analysis”. This translates roughly as infinite computational power. I will have less to say about this third capacity, at least directly. (Even if it existed, it would be irrelevant in the absence of the first two.)
Immediately following he uses a map metaphor to argue that an exact representation of the climate would be nearly useless, but this is dependent on the unsupported assumption that such a model could not be time compressed to facilitate prediction.
The compression issue is an interesting way to look at the problem. If one uses lossy compression such as jpeg, one can later expand the compressed image to closely represent the original. In effect that is what all climate models seek to achieve.
However, if one applies the same technique to an encrypted version of the picture, and later expands the encrypted image, the image will have no resemblance to the original after decryption.
So, the question then becomes, from the point of view of information theory, does the complexity of the climate system constitute a form of encryption that prevents modellers from compressing the problem space?
The answer to this may well be yes, in that the complexity of the climate system serves as a nearly infinite randomizer, which mimics the effects of encryption. Any loss during the compression will make recovery of the original image impossible by de-compression. Since all climate models involve loss, they cannot recreate the climate image reliably.
ferd,
sorry if I caused some confusion…essentially my point was that he seemed to assume that a more exact simulation would run in real time (thus being useless for prediction). That’s not exactly a given. Execution time would depend on multiple variables.
What you are describing is the maximum entropy principle. Assume the minimal information known about the process, such as a mean, and then use the probability distribution that arises from maximum entropy. This approach is studiously avoided by lots of people. For example, it is not mentioned in Curry’s paper even though that is the basis of properly applying uncertainty propagation, starting from the large spread in CO2 residence times. Most of these writers are neophytes when it comes to handling stochastic phenomena properly.
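For readers unfamiliar with the principle being invoked here: if the only constraint on a positive quantity is its mean, the maximum-entropy distribution is the exponential with that mean. A minimal sketch (the 10-year mean is a placeholder for illustration, not a claim about actual CO2 residence times):

```python
import numpy as np

# Maximum-entropy illustration: for a positive quantity about which we
# know only the mean, the max-entropy distribution on (0, inf) is the
# exponential with that mean.  The mean below is purely hypothetical.
mean_residence = 10.0                      # placeholder mean, years
rng = np.random.default_rng(0)
samples = rng.exponential(mean_residence, size=100_000)

print(f"sample mean: {samples.mean():.2f} years")
print(f"fraction still resident after 50 years: {(samples > 50).mean():.4f}")
# Analytically P(T > 50) = exp(-50/10) ~ 0.0067: a mean constraint alone
# already implies a long tail of much larger values.
```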
From the Robustness chapter of the Bradley paper (emphasis mine):
If different models with different idealisations make the same prediction, that suggests that the prediction is not an artefact of this or that idealisation (Muldoon 2007). What we have here is evidence that the prediction is due to the shared structure of the models which we hope is structure they also share with the world.
This seems to be the Easterbrook method (validate the model against the theory rather than observations) in action.
The discussion between Annan and Pielke Jr is not to be missed:
http://rogerpielkejr.blogspot.com/2011/08/how-many-findings-of-ipcc-ar4-wg-i-are.html
See also James Annan’s post at
http://julesandjames.blogspot.com/2011/08/how-many-of-rogers-findings-about.html
I find Annan’s new version of ‘the dog ate my homework’ to be very novel.
hunter,
Annan’s objections are entirely valid – Pielke is spouting nonsense. He makes an assumption that the IPCC has correctly judged the probability of certain outcomes and uses this to “prove” that a large proportion of them are incorrect.
Time will tell to what extent the IPCC’s projections will turn out to be accurate, but Pielke’s paper has absolutely nothing to say about it.
So you think that having a bird eat your dice is a good excuse for a prediction not working?
I think it is hilarious.
As far as who is ‘spouting nonsense’, I think the Pielkes have a strong record of spouting very little of that.
The IPCC is increasingly looking like the emperor’s new cape.
hunter,
So you think that having a bird eat your dice is a good excuse for a prediction not working?
Read the exchange properly – it was a perfectly reasonable response to Pielke’s rather silly argument.
The point is that if you say there is a 83.33% chance (or that it’s “very likely”) that you will roll a number between 1 and 5 and then you roll a 6 that doesn’t mean you were wrong.
And actually the Pielkes have a very good record of spouting nonsense. Jr does speak sense on occasion though, and I certainly agree with this (as quoted by Eli) –
Public has at least for 20 years been strongly behind climate science and the idea that action needs to be taken. What we have seen is a big partisan divide….It’s become part of the culture wars of the United States….an assumption that many scientists and experts carry with them that if only the public understood the science as they understand the science, the public would come to share their values….As a political scientist I look at issues like the debt ceiling or the war in Iraq or the TARP program and when you look at what public opinion was when action was taken on these controversial topics you find out that the strength of public opinion on climate change is at or exceeding the levels for which action was taken for the other issues. So I would say the evidence suggests pretty strongly that public opinion is not a limiting factor in taking effective action on climate change.
andrew adams
Well, time has already told us that the following projection turned out NOT to be accurate:
The emissions happened, but the warming did not (in fact it has cooled slightly, according to HadCRUT3).
Max
I thought the exchange between Annan and Pielke Jr was amusing, because Pielke’s headline – 28% of the WG1 findings are wrong – was tongue in cheek. He doesn’t really believe that (as his text itself reveals), but was just being provocative. In a broader sense, the argument reminds us that “probability” is not an inherent property of a system but a measurement of our ignorance about the system. In the case of climate, as we learn more about particular phenomena, the probability of their occurrence will change, even though our learning process hasn’t had any impact on how the natural world behaves. A future event that is 72% probable now may become 99% probable in the future (or 1% probable) even if we have done nothing to change its behavior.
(A weather prediction is a familiar example. The forecast may predict rain two days from now as 75% probable. Tomorrow it may become 40% probable although the forecaster hasn’t changed the weather but only the forecast.)
Fred,
You may well be right and Pielke was just being provocative. I didn’t read the actual paper, just the piece on his blog. Maybe the paper is more nuanced – it’s hardly remarkable to point out that even if (when) it turns out that the IPCC is broadly right they won’t be correct on every detail.
One big flaw in Pielke’s calculation of his 28% figure is that it assumes that the various projections are independent, when that is obviously not true. For example, take arguably the two most important claims, about the extent to which the recent warming is due to human CO2 emissions and the level of climate sensitivity: if one is true then they almost certainly both are, and much else will follow from that. If either is false then that makes many other estimates less probable.
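A quick Monte Carlo, with made-up numbers, on what dependence between findings does and does not change: by linearity of expectation the expected number of misses is the same whether the findings are independent or not, but the spread of the realized number is very different.

```python
import numpy as np

rng = np.random.default_rng(1)
n_findings, p_true, n_trials = 360, 0.72, 20_000   # illustrative numbers

# Case 1: findings verify independently of one another.
indep = rng.random((n_trials, n_findings)) < p_true
misses_indep = n_findings - indep.sum(axis=1)

# Case 2: findings are perfectly dependent -- all stand or fall together.
all_or_nothing = rng.random(n_trials) < p_true
misses_dep = np.where(all_or_nothing, 0, n_findings)

for label, misses in (("independent", misses_indep),
                      ("fully dependent", misses_dep)):
    print(f"{label:16s} mean misses = {misses.mean():6.1f}, "
          f"std = {misses.std():6.1f}")
# Both means come out near 100; the standard deviations differ by a factor
# of roughly twenty.
```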
Fred, Well said!!
Andrew Adams: “I didn’t read the actual paper” There is a lot of this going around I’m learning …
I have noticed that many on both sides of the climate wars are wont to ask their detractors to read the papers they are trashing before they do the trashing – good advice.
As I understand it (I hope Roger will correct me if I’m wrong) the main thrust of the paper is that by their own criteria, if their estimates are correct, about 1 in 4 of the projected outcomes could be expected to be wrong (the assumptions and caveats to this conclusion are also stated).
Can not the IPCC use this information to “self correct” the way science is supposed to?
Is not one of the purposes of publishing papers to expose people to new ways of looking at data and doesn’t this paper do just that?
Perhaps not being quite so close to it makes it easier to focus more on what we can learn from this paper to make a better IPCC AR – and who doesn’t want that?
Actually, as I read it,
the thrust of the paper is a mathematical exercise in calculating the expectation. Nothing more – nothing less.
The paper essentially says that the mathematical expectation is that 100 out of 360 predictions would be wrong.
The problem seems to be what others are inferring into the paper.
68.9% of stats on the internet are made up right on the spot. 74.5% in conversations.
That is completely wrong in the larger context. Probability can be an inherent property of a system. Consider Boltzmann statistics or Fermi-Dirac statistics, which are both measures of probability. Fermi-Dirac statistics in particular give rise to the way transistors work and therefore to our extreme confidence in the outcome of any executing software program.
Such aleatory uncertainty in the ensemble behavior of holes and electrons in a semiconductor device is not about ignorance but about our realization that this probability cannot be further reduced. So we take advantage of our knowledge of the stochastic behavior to create analog and digital devices.
When all epistemic uncertainties are removed, we are still left with the aleatory and therefore physical system effects. The distribution of CO2 residence times is likely completely explained by aleatory uncertainty, representative of a stochastic physical process that everyone should be extremely interested in. Alas, that seems to be missing from the discussion.
Hi WHT – I believe I must stick with my statement that probability is not an inherent property of a system but rather a measure of our knowledge or ignorance about it. None of your examples refutes that principle. That the behavior of a variety of ensembles must be treated probabilistically can’t be disputed, but that is because of the limitation in our ability to know the behavior of individual elements. In theory, if we knew the properties (position, momentum, etc.) of every element in a system, we could predict its future behavior with close to certainty. In certain systems, the extent to which we can do that is significantly limited by quantum considerations, but in many cases our probability calculations could in theory be brought much closer to a value of 1 or zero with additional knowledge that is theoretically achievable although often impractical to achieve. Even in systems where achieving that very high level of certainty is impossible theoretically, our probability values could be modified by additional knowledge. I’m unaware of any system in which probability is a fixed property unresponsive to further information.
Fred, If you want to philosophically think that way, fine. I just know that my way of thinking is conducive to practical problem solving. This is also evidenced by the way that physics is taught and the way that new theories are developed.
So if you want to track individual particles, knock yourself out, ha ha.
There are two things pretty much missing. First, the IPCC estimates are stated as > X%; e.g., a statement that the estimate of something happening is greater than 66% does NOT mean that the estimate of it NOT happening is 33%; it means only that it is less than 34%.
Second, the discussion (JA has something on this) on the estimate for a single event is inherently misleading in the context of the IPCC report (or weather reports). One looks at multiple instances (e.g. rain predictions for say 1000 days made 3 days previously), and finds the proportion that were correct in order to evaluate the accuracy of the prediction. IPCC predictions have to be evaluated in the same way.
For example, if the rainfall model says that 3-days-out predictions will be correct 66% of the time, and the predictions verify on 658 of the 1000 days, that validates the model and its uncertainty estimate.
Third (who’s counting), the IPCC is not providing probabilities but expert estimates. This is very non-frequentist territory.
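The frequency-based evaluation described above can be written down directly: issue many forecasts at a stated probability and ask whether the observed hit rate is statistically consistent with it. A minimal sketch using the 1000-day numbers from the comment (scipy’s binomial distribution does the bookkeeping):

```python
from scipy.stats import binom

# Evaluating a probabilistic forecast over many instances: compare the
# observed hit rate with the stated probability.
n_days, stated_p, observed_hits = 1000, 0.66, 658

expected_hits = n_days * stated_p
two_sided_p = min(1.0, 2 * min(binom.cdf(observed_hits, n_days, stated_p),
                               binom.sf(observed_hits - 1, n_days, stated_p)))

print(f"expected hits: {expected_hits:.0f}, observed: {observed_hits}")
print(f"two-sided binomial p-value: {two_sided_p:.2f}")
# A p-value nowhere near zero: 658 hits out of 1000 is entirely consistent
# with a claimed 66% hit rate, which is the sense in which it "validates"
# the model's uncertainty estimate.
```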
To me the conclusions of the Jonassen and Pielke Jr. paper have two meanings.
One is that their presentation makes clear what the value is of results that are true with a likelihood of 70%. They emphasize correctly that erring 30% of the time is really too much. By that I don’t mean that the WG1 report would be wrong, but rather that it’s presenting results that are almost irrelevant, or rather statements that we don’t know well enough.
The other point is that it’s indeed possible to look at results which are either turning from estimates about the future into history or at least becoming much better known. In this way one can gradually start to judge whether the stated likelihoods were scaled correctly.
The discussion between Annan and Pielke is irrelevant in my way of looking at the conclusions.
Pekka – It seems to me that even some evidence is better than none if it tells us that an outcome is more or less likely than we would otherwise have guessed. The IPCC can’t manufacture certainty out of uncertainty, but doesn’t it still have a good reason to evaluate the current state of our level of uncertainty? If I were in the insurance business, I would appreciate knowing the odds for a particular expense I might incur in the future even if those odds differed only slightly from a 50/50 outcome I would have been forced to assume in the absence of evidence, and even if I were insuring against an event that had never happened previously in the particular region where I operated.
Some evidence is better than none, but there is always a limit, where presenting the conclusion is more misleading than useful. Whether 70% is on the right or wrong side of that limit may depend on the issue in question. For some issues, it’s definitely so low that presenting that is not justified.
Telling that something is even 50% likely may be significant information, if the prior estimate would be 5% likely. Thus it’s difficult to formulate generic rules.
I haven’t read much detailed econometric analysis recently, but my recollection from times past is that an r² under .95 was disregarded, and that .99 was considered necessary to validate a tested relationship; .95 was indicative. .70 would not be regarded as adding any useful knowledge. Why should it be any less in climate science/policy?
Policy should always be based on best judgment. When we know something with 99.9% certainty, that’s excellent, but decisions must also be made when we have much poorer knowledge. How they should be made when the knowledge is poor is a complicated matter. Whether something is 30% likely or 70% likely may make a difference, but often it does not.
The requirements for the level of certainty depend hugely on the settings of the practical decisions that are being made and on the role of that particular piece of information in that.
There are no fixed rules for that.
Not “fixed”, but rational. Weighted EROI estimates, for starters. Since the “damage” from low probability warming disasters is of the same or lesser magnitude than the high probability depopulating fuel starvation disasters following decarbonization/mitigation strategies, the conclusion is clear. Doing nothing, or awaiting developments while building general resources and flexibility, is by far the most intelligent option.
For me, this is a pretty easy “decision”:
This is basically what our host here has advised a US congressional committee last fall.
Without going into the statistical semantics of “very likely” versus “more likely than not”, plus the major uncertainties related to unintended consequences of any fear-based actions we might suggest, this seems the practical decision today, inasmuch as we do not know enough yet to make a sound decision on mitigation/adaptation action.
Max
It would be better if such things were decidable. They could be stated in a way that would make things clear.
If they have statements about which they wish to convey their strength of conviction they could divide them into strength categories, as they have done, and then make a clear statement concerning how many of them they anticipate will be false.
So for instance if they have a marginal category they could say that out of all of those statements (say 110) they insist that no fewer than 25 and no more than 45 will evaluate as false.
Being “correct” or not would turn on whether the number that were indeed false fell in that range. If all the predictions did in fact come to pass then they would be judged to have failed (been incorrect) in their assessment, as no more than 85 of the statements should have evaluated as true.
Alex
RPJ: “Probabilistic statements are made to add information to decision making, not to subtract from accountability.”
He imparted some wisdom with that statement. Got a laugh out of that one.
Annan was trying to assert that actual results don’t really matter in these probability predictions. A preliminary knee-jerk defense against what everyone senses as poor modeling results. Nonsense.
A single bad result does not equate to the model being “wrong”, but it does matter. And an accumulation of poor results matters a great deal. A single correct result does not prove the model “right” either. In fact the worst thing that can happen here is that an incorrect model gives correct results for a few decades; that would set back the modelling process a very long time.
Annan’s dice analogy doesn’t work here. For the climate predictions, nobody even knows how many sides the dice really have, so it is entirely possible the number six will never actually show up. The assumption that the climate PDFs are known and correct and that the only variable in the system is random chance is a fallacy. The correctness of the PDFs is what is really being tested, and how to validate these is a pretty hard problem.
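On “how to validate these is a pretty hard problem”: one standard tool is a proper scoring rule applied to an accumulation of probabilistic forecasts rather than to any single outcome. A minimal sketch using the Brier score, with all numbers invented:

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return float(np.mean((probs - outcomes) ** 2))

rng = np.random.default_rng(4)
true_p = rng.uniform(0.1, 0.9, size=500)      # how the world "really" behaves
outcomes = rng.random(500) < true_p           # what actually happened

calibrated = brier_score(true_p, outcomes)
overconfident = brier_score(np.where(true_p > 0.5, 0.99, 0.01), outcomes)

print(f"Brier score, calibrated forecasts   : {calibrated:.3f}")
print(f"Brier score, overconfident forecasts: {overconfident:.3f}")
# Lower is better: given enough forecasts, the score separates honest
# probability statements from overconfident ones, even though no single
# outcome can falsify either.
```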
From the Past Success chapter of the Bradley paper:
For something like weather forecasting, we have plenty of evidence that our models are doing pretty well at predicting short term weather. This gives us confidence that the model is accurately representing some aspect of the weather system. Parker (2010a) discusses this issue. Past success at predicting a particular weather phenomenon, together with the assumption that the causal structure hasn’t changed overmuch does allow a sort of inductive argument to the future success of that model in predicting that phenomenon. Past success suggests that problems with implementation, hardware, and missing physics are not causing the models to go wrong.
What is missing here is a realization that the divergence between the weather model and reality that occurs over time is an indication of problems (whether implementation, hardware or missing physics) over the longer time scale. If the model is sufficiently complete, given his assumption that the system is deterministic, then the divergence should not appear.
On the contrary, in weather it is generally stated that the rapid divergence is due to deterministic chaos, specifically extreme sensitivity to initial conditions. Lacking perfect knowledge, which is impossible, the demon cannot reliably predict the weather two weeks out, even though the model is complete and perfect.
Although he did claim perfect knowledge (both in the mechanisms and the state) for the demon.
In the real world, however, how easy is it to differentiate between divergence due to incomplete knowledge of the initial conditions versus an imperfect simulation?
I did claim perfect knowledge for the demon: but the point was to show how our actual position differs from the demon’s. And one way is exactly in not having perfect information.
Laplace did not know about deterministic chaos, as it was only discovered by Poincare about 100 years ago. It makes the difference between perfect and imperfect knowledge much more fundamental. Absent chaos, accuracy improves with knowledge. But when chaos is dominant it does not.
Gene, this is one of the biggest questions facing science today, namely identifying the chaotic limits to predictability. (Many people cannot even accept that these limits exist.) However there are both mathematical and experimental methods for exploring these limits, in principle at least. Most climate data, like most weather data, exhibits the so-called footprint of chaos.
To the extent that these limits exist they are a fundamental uncertainty, not just a lack of knowledge. But whether they exist, and how big they are, is a present lack of knowledge, hence a research question. Thus there are two very different uncertainties at play here.
Gene, see the uncertainty monster thread, this is the difference between epistemic uncertainty (lack of knowledge) and ontic (aleatory) uncertainty which is irreducible uncertainty
Dr. Curry,
Thanks for the pointer. I found an exchange between John Whitman and Tomas Milanovic that touches on my questions regarding this (currently irreducible vs permanently irreducible).
Wikipedia’s description of Lorenz’s discovery seems to suggest that the sources of chaotic behavior, while not obvious, are ultimately knowable:
To his surprise the weather that the machine began to predict was completely different from the weather calculated before. Lorenz tracked this down to the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to a 3-digit number, so a value like 0.506127 was printed as 0.506. This difference is tiny and the consensus at the time would have been that it should have had practically no effect. However Lorenz had discovered that small changes in initial conditions produced large changes in the long-term outcome.
Based on David’s comment above and the exchange above, I think the answer to my question is that the jury’s still out.
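For anyone who wants to see the rounding effect described in the Wikipedia passage quoted above, here is a minimal sketch using the three-variable Lorenz-63 system (a smaller model than the one in the anecdote, but the same phenomenon). The initial values are arbitrary apart from reusing the 0.506127 versus 0.506 truncation:

```python
import numpy as np

# Sensitivity to initial conditions: two runs of the Lorenz-63 system
# started from initial conditions that differ only in the fourth decimal.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])   # simple Euler step

a = np.array([0.506127, 1.0, 1.05])   # "full precision" initial condition
b = np.array([0.506,    1.0, 1.05])   # same, rounded to three digits

for step in range(3001):
    if step % 1000 == 0:
        print(f"t = {step * 0.01:5.1f}   separation = {np.linalg.norm(a - b):.4f}")
    a, b = lorenz_step(a), lorenz_step(b)
# The two trajectories track each other at first, then diverge until the
# separation is as large as the attractor itself.
```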
Gene, deterministic chaos is a mathematical property. Specifically it is a property of certain equations, typically equations with nonlinear negative terms, which when found in nature may represent negative feedbacks. Poincare discovered this class of equations around 1910. Lorenz discovered the first physical system to probably be chaotic, in the sense that chaotic equations described it, around 1960. Since then the search for chaos has spread to all the sciences. If a chaotic equation fits a system then that aspect of the system behavior is intrinsically unpredictable, due to extreme sensitivity to initial conditions. Past behavior is likewise unexplainable, in the sense that explanation is often prediction after the fact. (This is greatly oversimplified.)
I should also mention that a chaotic system can oscillate due simply to a constant forcing. The climate equations have the right nonlinear negative feedbacks to produce chaos. This leads to what I call the chaotic climate hypothesis. In short the climate changes we are struggling to explain may simply be due to constant solar input. No changes in forcing are required to produce long term changes in temp, precip, etc. It is an intriguing possibility that is little studied.
I wouldn’t lump everything under deterministic chaos. Many phenomena occur simply as a result of stochastic disorder. Nonlinear effects don’t necessarily dominate; the important point is how the state space fills up. An ergodic process that fills up the state space can’t always be differentiated from one that fills up the space by non-linear processes such as bifurcation, limit cycles, etc.
Maybe that is why you suggest that your explanation is “greatly oversimplified”?
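A quick illustration of how hard that differentiation can be, using a standard toy system rather than anything from the discussion above: the fully chaotic logistic map has essentially zero lag-1 autocorrelation, just like white noise, even though it is purely deterministic.

```python
import numpy as np

def lag1_autocorr(x):
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(2)
n = 10_000

# Deterministic chaos: logistic map at r = 4.
chaotic = np.empty(n)
chaotic[0] = 0.3
for i in range(1, n):
    chaotic[i] = 4.0 * chaotic[i - 1] * (1.0 - chaotic[i - 1])

# Pure stochastic disorder: uniform white noise.
noise = rng.random(n)

print(f"lag-1 autocorrelation, logistic map: {lag1_autocorr(chaotic):+.3f}")
print(f"lag-1 autocorrelation, white noise : {lag1_autocorr(noise):+.3f}")
# Both are near zero: this simple linear statistic cannot tell the
# deterministic trajectory from random numbers.
```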
Thanks, David. If nothing else, I’ve at least managed to improve my conceptual understanding of chaos theory.
Web, I am not lumping anything, merely pointing out the unexplored hypothesis that might be fundamental. When I discuss this with climate modelers their attitude is that they will find the predictable part, and chaos is whatever is left, if that. But the way chaos studies are done, one builds the model, including the nonlinearities, then pushes it to find the chaotic regime. The scientific question is then how much can chaos explain, given what we know? I don’t see anyone doing this with climate. It is also how one distinguishes chaos from stochastic noise, to the extent one can do this.
But then people are also still selling seasonal weather predictions, simply ignoring the known chaos. There is no money in unpredictability, so no one is looking for it.
David,
Right. So there are essentially three modes to consider. The conventional mean-value approach, the stochastic model which assumes potentially wide variations, and the chaotic regimes which can open up all sorts of interesting possibilities.
Yes Web, that sounds about right. But it is not how the research program looks. If they would just give you and me the billions we could straighten it out, ha ha. Actually I would settle for a modest grant to explore the chaos, but I know of no RFP along those lines.
David, It doesn’t take a research grant to think. Lots of data is available on the internet and it only takes some motivation to dive into it.
Judith,
You provide a great benefit by putting these papers in one location and thus allowing a venue for comparative evaluation.
I deeply appreciate it.
Now, I will only be able to respond by next Monday if I devote 50% of my time, from now till then, reading carefully all the papers. :^)
John
What will it take to make the following a true statement?
We can just barely ‘predict’ *current* weather but on average we will be far more accurate predicting long term weather—i.e., climate change.
How is anything more than simple-minded wishful thinking to bet that we can most certainly predict climate trends decades into the future based on all of the unknowable factors?
Does anyone really believe that we can predict a monotonic increase in CO2 and that is all we need to know about global warming? Is it really true that we do not need to know about undersea volcanoes, recurring solar activity and ENSO events on decadal, centennial and millennial climate cycles, cosmic radiation as our solar system skitters through the spiraling arms of the Milky Way at the edge of the galaxy, the interaction between the Earth and the big planets Jupiter and Saturn affecting the Earth’s rotation, axis, magnetosphere, etc.?
Global warming alarmists ignore reality by first feeling free to ignore history. “[O]ver the past 12,000 years, there were many icy intervals like the Little Ice Age [that] alternated with warm phases, of which the most recent were the Medieval Warm Period (roughly AD 900-1300) and the Modern Warm Period (since 1900).” ~Henrik Svensmark: “over the last 12,000 years virtually every centennial time-scale increase in drift ice documented in our North Atlantic records was tied to a solar minimum.”
Wagathon,
The assertion that we can predict long term weather better than short term weather is a laughable conceit.
That they call the weather ‘climate’ and predict it out 100 years only makes it odder still.
Very good point. Why does no one on this blog want to address the huge probability spread in CO2 residence times? This by itself is an excellent case for working a propagation of uncertainty exercise starting from different scenarios (in other words, pathways) for fossil fuel emissions.
Do the convolution of the emission forcing with the residence time and you can start to understand the monotonic increase in CO2 concentration. (It is not strictly monotonic, as it shows the oscillations from daily and seasonal influences, but this comes out in the wash when you actually do the math.)
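The convolution mentioned here can be sketched in a few lines. Everything below is invented for illustration: a made-up emissions pathway and a single exponential decay standing in for what is really a spread of residence times; the point is only the mechanics of convolving an emissions history with an impulse-response kernel.

```python
import numpy as np

years = np.arange(1900, 2011)

# Hypothetical emissions pathway (arbitrary units), growing 2% per year.
emissions = 1.0 * 1.02 ** (years - years[0])

# Hypothetical impulse response: a single exponential with a 100-year
# time constant standing in for the spread of residence times.
tau = 100.0
kernel = np.exp(-np.arange(len(years)) / tau)

# Concentration anomaly = convolution of emissions with the kernel.
concentration = np.convolve(emissions, kernel)[:len(years)]

for y in (1900, 1950, 2000, 2010):
    i = y - years[0]
    print(f"{y}: emissions {emissions[i]:6.2f}, "
          f"concentration anomaly {concentration[i]:8.2f}")
# Even this toy version shows how a growing emissions series convolved
# with a slowly decaying kernel yields a smoothly rising concentration.
```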
I certainly agree with Jonassen and Pielke, Jr that the USGCRP reports are worse than the IPCC reports when it comes to expressing (hence not understanding) uncertainty. This is important because the folks writing the USGCRP reports are the ones spending the $2 billion per year on US climate research. This is roughly half the global budget for climate research. The IPCC is merely an advocacy organization but the USGCRP is the US climate research program, and it is seriously misguided.
It’s great to see that my paper has already led to some interesting debate! Thanks for linking to it Judith.
I should start with the caveat that I am a philosopher by training, not a climate scientist. I am also writing for a non-technical audience. These two factors together mean that the paper might not strike the right tone with experts on the climate. I may have been careless in the way I expressed what are obviously very difficult and complex issues.
First: Laplace and his assumptions. Gene is absolutely right to point out that in fact none of Laplace’s three assumptions is strictly speaking satisfied. But that’s the point: how are we not living up to these ideals and how does that failure impact our ability to predict? If we really were in the position Laplace assumed for his demon, then much of my paper would be irrelevant!
Next the assumption that a perfect (at all scales) model couldn’t be run in less than real time. (Gene’s point again). This is a fair comment, but I’m not sure that an exact replica of the climate system could run in less than real time. This would have to be a perfect model of the system at all resolutions. To take the compression analogy: if the compression were at all lossy then this could introduce errors. Those errors would be small, but they would be compounded by the non-linear behaviour of the climate system.
Weather forecasting and divergence (Gene again!). Divergence of medium term weather predictions from the actual course of the weather is of course an indicator that the model isn’t perfect. This emphasises the point that we have to be careful about what sort of predictions we take seriously: at what sort of lead times are our models good predictors?
Thanks for the comments!
Hi Seamus, thanks for stopping by, I very much appreciate the good job that you did explaining all this to a nontechnical audience.
Thank you for laying this out so clearly.
Seamus,
Thanks for both the clarifications and the gracious manner in which they were provided.
I think you are pushing the lossy/lossless distinction a bit too far. A “perfect” lossless model would represent the precise quantum motions of every subatomic particle in the system. Such an ideal is almost meaningless. Some standard of “effectively lossless” needs to be established, perhaps, but the AGW GCMs are light-years from anything like that. In fact, they are so lossy that it’s dubious whether any usable information survives.
Right. So what sort of scale do we have to model at to produce “effectively perfect” predictions? Can we do it in less than real time? Probably not. So we need to do research into which of our predictions we can be confident of (and to what extent we can be confident of them).
Heh. And a prediction not made in less than real time is no prediction at all. Anent which, the GCMs can’t manage even the much longer than real time standard without pre-plugging the result with “tuned” parameters. A different set and menu for every model, tweaked differently for every hindcasting run.
Risible.
re: the perfect at all scales model
This would have to contain – thanks to the old butterfly effect – a working model of the computer on which the model was being run, which, in order to be perfect would need to be running a copy of the model running a copy of the model, which, in order to be perfect would need to be running a copy of the model running a copy of the model running a copy of the model etc…
A serious problem for every Turing machine …
Seamus,
I thank you for replicating Borges and I must wonder if you intended to share a joke with me, perhaps no more than through its commencement by ellipsis, for that would be a sufficient qualification.
The facet that you highlight was previously lost to me, that his conceit can be applied literally but so much of Borges is lost on me.
We give credit when it is due and I am thankful that in this case its due date is 1999 which is to credit Borges with less than I did greedily wish for him. Yet this is the work of Andrew Hurley as well as Suarez Miranda.
I do not know and perhaps cannot know if Borges contemplated whether a perfect model could not but run in real time and if not which was then the real and which the simulation. I choose to imagine that he did for it spared me from all desire to read Baudrillard, at least while he lived.
I write this cheerfully and honestly give my thanks in any respect and if the joke was intended this is my echo, my delinquent copy, my Borges.
Alex
The Borges story is actually quoted in its entirety: Borges starts his story with that ellipsis. I didn’t want to add or remove anything from it. All the spurious capitals are his too.
Compliments Judith on your uncertainty publication and raising these issues.
Uncertainty of economic projections
The IPCC CO2 projections depend strongly on economic projections (and fossil to renewable technology and transitions)- but economic projections are notoriously poor. Economic downturns are much steeper than economic growth. Most optimistic projections fail on downturns.
e.g. in If You Loved it at $100, How Do You Feel at $2? Bill Tatro observes:
Evaluations of economic forecasting ability show poor performance: “While there is statistically significant evidence that forecasting ability is persistent, the top third of forecasters in a prior period are just 4 per cent more accurate in a subsequent period than the bottom third of forecasters. This low level of persistent forecasting ability means that prior forecasting ability has no association with analysts’ ability to identify mispriced securities in a subsequent period. Furthermore, regardless of forecasting ability, analysts are pre-disposed to recommend stocks with low book-to-market ratios and positive price momentum. We suggest that this bias outweighs analysts’ objectivity, thereby offsetting any ability to make accurate forecasts and profitable recommendations.”
Forecast accuracy and stock recommendations, Jason L. Hall and Paul B. Tacon
The IPCC is updating its projections: See
Representative Concentration Pathways
Has the IPCC addressed systemic economic risk? Do the IPCC projections rest on economic projections that could vary by an order of magnitude?
In Steep Oil Decline or Slow Oil Decline – Expanded thoughts, Gail “the Actuary” Tverberg warns that we are transitioning from an era of rapid growth built on cheap, rapidly growing fuel to an era of expensive, constrained transport fuel (based on existing technology and resources). High fuel costs undergird the recent economic crises and high unemployment.
The underlying Energy Return On Energy Invested (EROEI) for oil has declined from > 100 to < 20. EROEI for ethanol (at ~ 1) is less than the essential minimum EROEI of about 3.
This is likely to cause major constraints to economic growth, thus making IPCC's projections over optimistic (until we see major game changer technology breakthroughs).
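The practical meaning of those EROEI figures is easiest to see as the share of gross energy output left over after paying the energy cost of production, i.e. (EROEI - 1)/EROEI. A short illustration using the values quoted above:

```python
# Net energy left for society after the energy cost of production.
for eroei in (100, 20, 3, 1):
    net_fraction = (eroei - 1) / eroei
    print(f"EROEI {eroei:3d}: {net_fraction:6.1%} of gross output is net energy")
# 100 -> 99.0%, 20 -> 95.0%, 3 -> 66.7%, 1 -> 0.0%: the decline only starts
# to bite hard once EROEI falls into the low single digits.
```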
I think that forecasting is by far the weakest element of economics, in large part because model forecasts necessarily assume a continuation of existing relationships and can not predict unforeseeable events which de-rail those relationships. But the IPCC national accounts econometric modelling which underpins all scenarios is even more fallible as it was done by non-experts in the field who made basic errors, e.g. in not using the internationally-agreed PPP measure of national income.
Where economic modelling is useful is in estimating the impact of different “shocks” to the system; but this would usually be done for a ten-year period, not the IPCC’s century.
It is simpler than that. Economic theory is wrapped up in game theory, which is human behavior and all the psychology, reverse psychology, double-reverse psychology, ad nauseam that entails. Nature does not do game theory; the closest thing to that is black swans and dragon kings, the latter of which come out of fat-tail statistics. These are the low probability, high impact events that can cause massive change.
An example of this is the discovery of a new super-giant oil reservoir. The theory and observation predicts these exist, as dragon-kings, but whether we will see them again is based on probability models and our careful bookkeeping of areas that we have already searched. The forecasting in this case is extremely interesting and something few people try to work out.
‘We emphasize the importance of understanding dragon-kings as being often associated with a neighborhood of what can be called equivalently a phase transition, a bifurcation, a catastrophe (in the sense of Rene Thom), or a tipping point. The presence of a phase transition is crucial to learn how to diagnose in advance the symptoms associated with a coming dragon-king.’ http://arxiv.org/abs/0907.4290
Economics and climate have much in common – both complex and dynamic systems. Predictability is limited to considerations of autocorrelation.
‘Here, we analyze eight ancient abrupt climate shifts and show that they were all preceded by a characteristic slowing down of the fluctuations starting well before the actual shift. Such slowing down, measured as increased autocorrelation, can be mathematically shown to be a hallmark of tipping points. Therefore, our results imply independent empirical evidence for the idea that past abrupt shifts were associated with the passing of critical thresholds. Because the mechanism causing slowing down is fundamentally inherent to tipping points, it follows that our way to detect slowing down might be used as a universal early warning signal for upcoming catastrophic change. Because tipping points in ecosystems and other complex systems are notoriously hard to predict in other ways, this is a promising perspective.’ http://www.pnas.org/content/105/38/14308.full
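The “slowing down, measured as increased autocorrelation” in that quote can be illustrated with a toy model that has nothing to do with the paper itself: an AR(1)-like system whose restoring force weakens as it approaches a threshold, with rolling lag-1 autocorrelation as the early-warning statistic.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000
# The restoring coefficient drifts from 0.2 toward 0.99: the system
# "slows down" as it nears the loss of stability at 1.0.
phi = np.linspace(0.2, 0.99, n)

x = np.zeros(n)
for i in range(1, n):
    x[i] = phi[i] * x[i - 1] + rng.normal()

def lag1(w):
    w = w - w.mean()
    return float(np.dot(w[:-1], w[1:]) / np.dot(w, w))

window = 500
for start in range(0, n - window + 1, 1000):
    seg = x[start:start + window]
    print(f"window starting at step {start:4d}: lag-1 autocorr = {lag1(seg):.2f}")
# The rolling autocorrelation climbs toward 1 as the threshold nears --
# the "critical slowing down" signature used as an early-warning signal.
```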
The discovery of a large oil field is not a dragon-king at all – i.e. an extreme outlier associated with chaotic bifurcation. ‘One of the most remarkable emergent properties of natural and social sciences is that they are punctuated by rare large events, which often dominate their organization and lead to huge losses. This statement is usually quantified by heavy-tailed distributions of event sizes. Here, we present evidence that there is “life” beyond power laws: we introduce the concept of dragon-kings to refer to the existence of transient organization into extreme events that are statistically and mechanistically different from the rest of their smaller siblings.’
The point is that these emerge from the self organising properties of complex and dynamic systems rather than as merely chance occurrences of events such as the size of new oil field discoveries.
I don’t think they necessarily do. My thought is that the self-organizing theory is the approach that you would take if you want to win a Nobel Prize in some field. The simpler approach is to apply maximum entropy to a range in accumulation rates, accumulation volumes and accumulation times. This will give the distribution of oil reservoirs, and also lake sizes, and lots of other physical phenomena.
Research physicists don’t like my approach because it is practical and won’t find new physical states or behaviors. It only involves meat and potatoes calculations of disorder. This won’t win any science prizes but it will tell us what we need to know. That is really the big failing of the research that people like Sornette and before him, Per Bak, have attempted to do. These guys are all going for the glory and don’t really care about the rote stuff.
I guess that you mean by maximum entropy essentially searching for the right measure of the phase space and using a uniform prior in that measure, when the approach is defined in Bayesian language.
Self organising is a term for a system with multiple interacting positive and negative feedbacks with control variables. It shifts abruptly into different states when pushed past a threshold. You don’t think climate is chaotic – as in chaos theory? Whatever.
Since ‘the climate system is complex, occasionally chaotic, dominated by abrupt changes and driven by competing feedbacks with largely unknown thresholds, climate prediction is difficult, if not impracticable.’ http://www.biology.duke.edu/upe302/pdf%20files/jfr_nonlinear.pdf
Mitigation uncertainty
IPCC “mitigation” projections have major systemic feasibility/implementation uncertainty from projected political assumptions. e.g., see:
Prediction: In an era of high government debt economic risk, sparks will fly over such unilateral biased taxation!
Uncertainty Guidelines
It would help to refer to formal international guidelines for evaluating uncertainty. See NIST’s Uncertainty Bibliography, especially the Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results, B.N. Taylor and C.E. Kuyatt, NIST TN 1297 and
Guide to the Expression of Uncertainty in Measurement. (GUM) International Organization for Standardization (ISO)
See: NIST Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results html, and NIST TN 1297 pdf.
Again, while it’s progress for scientific uncertainty to be listed and “bias” discussed as a word, it remains sadly lacking.
Why and what are the IPCC and the “Consensus” biased for or against? Should bias be viewed as abstract disputes between academics? Marsh vs. Cope? Deeper science-driven outlooks that the public can’t perceive?
This is nonsense; the divide and the bias are based on political cultures: statism and eco-extremist sympathy vs. libertarian outlooks. Where is the peer-reviewed study of the IPCC summary committees’ political inclinations in general? None exists; is it rude to mention?
If you aim at the nail you might as well hit it on the head rather than take the sideways approach Dr. Curry seems committed to. Are we going to drift for years more of endless discussion as to “if” there is bias and uncertainty? The larger question is what people are biased toward in the first place. When you get there, the IPCC and the phony consensus melt away in credibility. The driving force of AGW is traditional eco-left self-absorption wrapped in U.N. and government regulatory mysticism. There is obviously an academic enclave with similar politics concentrated at the heart of the “science” as well. When Dr. Curry speaks on this directly without whitewashing she will deserve full credit from skeptics. She might never be pal (I mean peer) reviewed again, but she would have respect that should be far greater in value.
Dr. Bradley,
Thank you very much for this paper. It has enhanced my understanding of climate modelling considerably, being pitched at a level a non-scientist like me can begin to grasp. Personally, I’d like to see it permanently linked to here as recommended reading for the layman.
I do have one question, and it concerns figure 4. Maybe I have missed something obvious, but what does the height of the columns on each of the grid cells represent?
The heights of the columns seem to agree reasonably well with the average altitude of the area covered by the cell.
I wondered about that, Pekka, but it made the depth of the Atlantic look surprisingly shallow (unless it is arbitrarily cut off).
‘The global coupled atmosphere–ocean– land–cryosphere system exhibits a wide range of physical and dynamical phenomena with associated physical, biological, and chemical feedbacks that collectively result in a continuum of temporal and spatial variability. The traditional boundaries between weather and climate are, therefore, somewhat artificial.’ http://www.gfdl.noaa.gov/bibliography/related_files/Hurrell_2009BAMS2752.pdf
Chaotic systems are theoretically determinant – and so would give the demon little trouble. For us on the other hand ‘since the climate system is complex, occasionally chaotic, dominated by abrupt changes and driven by competing feedbacks with largely unknown thresholds, climate prediction is difficult, if not impracticable.’ http://www.biology.duke.edu/upe302/pdf%20files/jfr_nonlinear.pdf
The ‘difference between epistemic uncertainty (lack of knowledge) and ontic (aleatory) uncertainty which is irreducible uncertainty’ is as well somewhat artificial. If knowledge were perfect – the apparently stochastic element would reduce to the certainty of the cosmic watchmaker. The initial conditions along with their trajectories, the thresholds of abrupt and non-linear change and the feedbacks would all be known. The surprises – nonlinearities, abrupt change, ‘slowing down’ (an increase in autocorrelation as the system approaches a threshold) or ‘dragon-kings’ (defined as extreme variability ‘associated with a neighbourhood of what can be called equivalently a phase transition, a bifurcation, a catastrophe (in the sense of Rene Thom), or a tipping point’ – http://arxiv.org/abs/0907.4290 – would still exist but would no longer be surprises.
In the absence of perfect knowledge – the chaotic systems of weather and climate diverge exponentially with time from the chaotic systems of the climate models. ‘Sensitive dependence (arising from limits to knowledge of initial conditions) and structural instability (which occurs as there is a range of plausible values for boundary conditions) are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable. They echo other famous limitations on scientist’s expectations, namely the undecidability of some propositions within axiomatic mathematical systems (Gödel’s theorem) and the uncomputability of some algorithms due to excessive size of the calculation.’ http://www.pnas.org/content/104/21/8709.full
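As an aside for readers who have not seen ‘sensitive dependence’ in action: it is easy to reproduce in a toy system. A minimal sketch – the standard Lorenz-63 equations with crude forward-Euler stepping and illustrative parameter values, nothing to do with any actual climate model – shows two trajectories that start a billionth apart becoming completely decorrelated within a few dozen time units:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (illustrative only)."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# Two trajectories differing by one part in a billion in the initial x value.
a = np.array([1.0, 1.0, 1.0])
b = np.array([1.0 + 1e-9, 1.0, 1.0])

for step in range(1, 4001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 500 == 0:
        print(f"t = {step * 0.01:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
```

The separation grows roughly exponentially until it saturates at the size of the attractor – which is the practical content of ‘limits to knowledge of initial conditions’.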
The problem of knowledge is not that there might be a linear change that is known within some limits of error – but that nonlinear change occurs that is intrinsically unknowable in sign or extent. CO2 is theoretically a threshold phenomenon – a small change in initial conditions that could push climate into a phase change with unpredictable consequences.
McWilliams (2007) provides insight into how atmospheric and oceanic simulations (AOS) are evaluated.
‘AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior. Plausibility criteria are qualitative and loosely quantitative, because there are many relevant measures of plausibility that cannot all be specified or fit precisely. Results that are clearly discrepant with measurements or between different models provide a valid basis for model rejection or modification, but moderate levels of mismatch or misfit usually cannot disqualify a model. Often, a particular misfit can be tuned away by adjusting some model parameter, but this should not be viewed as certification of model correctness.’
‘We think we have most of the mechanisms and parameters and the answer is in the ballpark of the expectations of us and our peers?’ Oh well.
Chief Hydrologist,
You appear to be a believer in applying deterministic chaos to forecast behavior. These arguments get a bit tiresome because we all know that is a dead end computationally. For example, I have done a lot of work with dispersive transport and breakthrough curve measurements (something you might be familiar with based on your title). As it turns out, a lot of the inherent chaos in transport pathways is easily modeled via maximum entropy principles. This gets applied in particular to the disorder in physical mechanisms (diffusion and drift parameters in particular). Why everyone does not do this, I have no idea, I guess it is not buzz-word compliant.
When a system is chaotic in the sense that a very small change in initial conditions leads rapidly to unpredictability, assuming no stochastic disturbances, adding stochasticity may make it behave more predictably. This may be the case because the stochastic disturbances may force it to follow largely the average behavior rather than all the complexities of deterministic individual trajectories.
Have you observed effects of that kind in the applications you are familiar with?
I think a related phenomenon is that of stochastic resonance. In this situation, certain paths can resonate with the environmental conditions and show a kind of dominating behavior. The randomness is needed to get these to kick in.
I have run across a case (not necessarily stochastic resonance) with ice crystal size distributions in clouds. The size of ice particles is definitely a disordered phenomenon, as the nucleating sites can be widely distributed according to the seed materials available, and they can all grow at varying rates, leading to a steady-state distribution. You can sometimes see determinism as obvious bumps in the PDFs. The link below is to a study of high-altitude cloud composition which shows a bimodal distribution; most of the curve is dispersive in sizes, but what I think is a clear bump appears that allows some rare but large ice crystals to appear.
http://mobjectivist.blogspot.com/2010/04/dispersive-and-non-dispersive-growth-in.html
I can’t quite grasp your meaning.
But the systematically designed model families of McWilliams are intended only to explore irreducible imprecision in AOS. As this hasn’t been done – or can’t be – there is little to be said in favour of the solutions presented.
Predicting weather and climate is not so simple – odd little notions of maximum entropy notwithstanding.
‘Prediction of weather and climate are necessarily uncertain: our observations of weather and climate are uncertain, the models into which we assimilate this data and predict the future are uncertain, and external effects such as volcanoes and anthropogenic greenhouse emissions are also uncertain. Fundamentally, therefore, we should think of weather and climate predictions in terms of equations whose basic prognostic variables are probability densities ρ(X,t), where X denotes some climatic variable and t denotes time. In this way, ρ(X,t)dV represents the probability that, at time t, the true value of X lies in some small volume dV of state space. Prognostic equations for ρ, the Liouville and Fokker–Planck equations, are described by Ehrendorfer (this volume). In practice these equations are solved by ensemble techniques, as described in Buizza (this volume).’ (Predicting Weather and Climate – Palmer and Hagedorn eds – 2006)
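The ‘ensemble techniques’ mentioned in the quote amount, in practice, to running the model many times from perturbed initial states and reading off a probability rather than a single number. A minimal sketch of the idea, using a chaotic logistic map as a stand-in for a forecast model (the observation error and parameter values are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic(x, r=3.9):
    """Chaotic logistic map, standing in for a forecast model."""
    return r * x * (1.0 - x)

# Perturb an uncertain observation to build an initial-condition ensemble.
observation, obs_error = 0.3, 0.01
ensemble = np.clip(observation + obs_error * rng.standard_normal(1000), 0.0, 1.0)

for _ in range(50):                      # advance every member 50 steps
    ensemble = logistic(ensemble)

# Report an estimate of the probability density rho(X, t), not a point forecast.
print(f"P(x > 0.5 at t = 50) ~ {np.mean(ensemble > 0.5):.2f}")
print(f"ensemble mean = {ensemble.mean():.3f}, spread = {ensemble.std():.3f}")
```

Individual members are worthless as point forecasts by t = 50, but the histogram of the ensemble is a usable estimate of ρ(X,t).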
‘CO2 is theoretically a threshold phenomenon – a small change in initial conditions that could push climate into a phase change with unpredictable consequences.’
You could say the same for the beating of a butterfly’s wings so that really is not very helpful. In fact, you are giving wings to the fearmongers.
Studies have shown that CO2 follows global warming; it does not precede it. And even having some appreciation for the holistic processes that are involved in global warming does not mean we will ever be able to effectively model climate change, except perhaps in the abstract, such as by way of the mathematics of chaos.
The shifting crusts and volcanic eruptions, oscillations of solar activity on multi-decadal to centennial and millennial time scales with variations in gamma radiation and the role of the big planets, Saturn and Jupiter – and a changing North Pole and variations in the magnetosphere – all are part of a holistic process that is the Earth’s climate.
EPA government science authoritarians do not control average global temperatures. But we know what does: nominally, it is the sun, stupid.
And we do have converging explanations that better help us understand the complexity and interconnectedness of natural phenomena comprising global warming. Key elements to a better understanding of climate change are the concepts of a ‘torque’ and the natural power of ‘swirling vortices.’
These phenomena relate to the role of the atmosphere, the oceans, the Earth’s ‘molten outer core,’ and also the formation of Earth’s magnetic field. Adriano Mazzarella (2008) criticized the GCM modelers’ reductionist approach because it failed to account for so many of the factors that can only be reckoned with using a holistic approach to global warming.
One factor is itself just a part – but an important part – of a larger process. Unlike thinking of climate as the result of CO2 as a single unit, even in the simplest terms we cannot think of a ‘single unit’ any more simply than as the ‘Earth’s rotation/sea temperature.’
Holistically, we see that included in this single unit are changes in ‘atmospheric circulation’ and ‘like a torque,’ variations in atmospheric circulation can in and of themselves cause ‘the Earth’s rotation to decelerate which in turn causes a decrease in sea temperature.’
It may soon be possible to model this effect mathematically. A study conducted by UCSB researchers (results to be published in the journal Physical Review Letters) involved filling cylinders with water and then heating the water from below and cooling it from above. This was done to better understand the dynamics of atmospheric circulation and hopefully enable researchers to mathematically model a phenomenon observed in nature known as swirling vortices.
As applied to Earth science, perhaps it won’t be long before, for example, it can be conclusively shown that Trenberth is never going to find the global warming that he is looking for in the deep recesses of the ocean, for the simple reason that it simply is not there. It can’t be there. It’s not there because, no matter how much AGW True Believers may wish otherwise, global cooling is not proof of global warming.
Hopefully we can use the mathematics of the UCSB researchers to reveal the obvious–what everyone really knows already–that, given differences in ocean temperature in the real world, cold surface water sinks to the bottom.
In our water world, with the Earth rotating on its axis and warm water at the bottom of the ocean, why will cold water on the top sink? It is the difference in temperature itself from top to bottom that is a ‘causal factor’ driving the flow. I think everyone understands the process, i.e., it’s the process we call convection. Let’s hope that the mathematics of ‘swirling natural phenomena’ will help contribute to a global politic in which government science authoritarians of a secular, socialist Education Industrial Complex will be prevented from acting like the persecutors of Galileo, and in which anti-capitalist government bureaucracies like the EPA will be prevented from destroying the economy based on superstition and dogma.
Giving comfort to the alarmists is not really the point and the correct policy response is a separate issue – http://thebreakthrough.org/blog/Climate_Pragmatism_web.pdf – what we are talking about is essential uncertainty in how we understand climate.
‘Recent scientific evidence shows that major and widespread climate changes have occurred with startling speed. For example, roughly half the north Atlantic warming since the last ice age was achieved in only a decade, and it was accompanied by significant climatic changes across most of the globe. Similar events, including local warmings as large as 16°C, occurred repeatedly during the slide into and climb out of the last ice age. Human civilizations arose after those extreme, global ice-age climate jumps. Severe droughts and other regional climate events during the current warm period have shown similar tendencies of abrupt onset and great persistence, often with adverse effects on societies.
Abrupt climate changes were especially common when the climate system was being forced to change most rapidly. Thus, greenhouse warming and other human alterations of the earth system may increase the possibility of large, abrupt, and unwelcome regional or global climatic events. The abrupt changes of the past are not fully explained yet, and climate models typically underestimate the size, speed, and extent of those changes. Hence, future abrupt changes cannot be predicted with confidence, and climate surprises are to be expected.’ http://www.nap.edu/openbook.php?record_id=10136&page=1
‘The new framework now emerging will succeed to the degree to which it prioritizes agreements that promise near-term economic, geopolitical, and environmental benefits to political economies around the world, while simultaneously reducing climate forcings, developing clean and affordable energy technologies, and improving societal resilience to climate impacts. This new approach recognizes that continually deadlocked international negotiations and failed domestic policy proposals bring no climate benefit at all. It accepts that only sustained effort to build momentum through politically feasible forms of action will lead to accelerated decarbonization.’
This is a no-brainer.
“You could say the same for the beating of a butterfly’s wings so that really is not very helpful.”
Something caused all those climate swings. It sure wasn’t butterfly wings.
I don’t think you grasp how significant rising greenhouse gases are. A doubling of CO2, a feat man will easily succeed in, produces something like a 3.7 W/m2 forcing. That’s massive considering that a 1% increase in solar output (not gonna happen) only produces about a 2.4 W/m2 forcing.
That you compare such an impact on climate with the impact of butterfly wings is ignorance at its finest.
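For what it’s worth, the ~2.4 W/m2 figure for a 1% change in solar output can be recovered from back-of-envelope numbers (taking S ≈ 1366 W/m2 and a planetary albedo of about 0.3 – round values, not a claim about the latest calibration):

$$\Delta F_{\mathrm{solar}} \approx 0.01 \times \frac{(1-\alpha)\,S}{4} \approx 0.01 \times \frac{0.7 \times 1366}{4} \approx 2.4\ \mathrm{W\,m^{-2}}$$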
“Studies have shown that CO2 follows global warming, it does not precede it.”
Your claim is stupid.
CO2 is a greenhouse gas. As levels of it rise it will cause warming. There are no studies showing anything to the contrary.
“The shifting crusts and volcanic eruptions, oscillations of solar activity on multi-Decadel to Centennial and Millennial time scales with variations in gamma radiation and the role of the big planets, Saturn and Jupiter– and a changing North Pole and variations in the magnetosphere– all are a part of a holistic process that is the Earth’s climate.”
I just find it rather revealing how you factor in effects of Saturn and Jupiter, but dismiss any role for greenhouse gases out of hand. Look you can’t just project some fantasy world on the real world. Just because you don’t like the inconvenience of CO2 causing warming doesn’t mean you can pretend it doesn’t.
“It’s not there because no matter how much AGW True Believers may wish otherwise global cooling is not proof of global warming.”
The globe isn’t cooling. Even after a PDO switch and a low solar minimum. Remember all those cooling predictions concerning those two events? How many failed denier blog predictions does it take until you guys wake up to the fact you are being deceived by idiots?
History is going to look back and laugh at you guys for ignoring experts like Trenberth and instead siding with fruitcakes.
Lolwot,
‘That you compare such an impact on climate with the impact of butterfly wings is ignorance at its finest.’
The climate shifts at least metaphorically are driven by butterfly wings – the expression derives from the shape of the ‘strange attractors’ in the chaotic Lorenz system. http://en.wikipedia.org/wiki/File:Lorenz_attractor_yb.svg
The metaphor was invented by a journalist but to not understand chaos theory is to not understand the fundamental nature of weather and climate.
‘CO2 is a greenhouse gas. As levels of it rise it will cause warming. There are no studies showing anything to the contrary.’
There are biological functions in play – although on geological timescales there is a balance between CO2 emissions from volcanic activity and the long-term sequestration that is rate-limited by silicate weathering – there are immense organic stores of carbon that wax and wane. Silicate weathering increases with temperature – so sinks increase. But biological activity also increases. The interactions are complex, but it is known that biological activity – and therefore CO2 flux from respiring organisms – increases with temperature. It can be seen in the year-to-year fluctuations in atmospheric carbon – high, for instance, in 1998 and low in 2000. Something of the sort is seen in the ice core record – a delay of some 800 years that I don’t understand.
‘I just find it rather revealing how you factor in effects of Saturn and Jupiter, but dismiss any role for greenhouse gases out of hand. Look you can’t just project some fantasy world on the real world. Just because you don’t like the inconvenience of CO2 causing warming doesn’t mean you can pretend it doesn’t.’
We believe that the orbits of the large outer planets especially influence the rotation of the solar barycentre and therefore the solar magneto, sunspots, UV and TSI. UV especially is implicated in ozone interactions in the stratosphere, influencing the Northern Annular Mode, the North Atlantic Oscillation, the Southern Annular Mode and ENSO. Try this, but there are many others – http://iopscience.iop.org/1748-9326/5/3/034008/fulltext
‘The globe isn’t cooling. Even after a PDO switch and a low solar minimum. Remember all those cooling predictions concerning those two events? How many failed denier blog predictions does it take until you guys wake up to the fact you are being deceived by idiots?’
The ‘climate shift’ after 1998 is an example of a chaotic bifurcation in climate – a shift to a new ‘strange attractor’. It is characterized primarily by a cool PDO and more intense and frequent La Niña – together these are known as the Pacific Decadal Variation. The PDV lasts for 20 to 40 years in the proxy records – so another 10 to 30 years in this cooler trend. The change in radiative forcing as a result of PDV cloud feedback is much larger than the contemporaneous change in greenhouse gas forcing – and will lead to cooling of oceans and atmosphere as the PDV intensifies.
http://www.realclimate.org/index.php/archives/2009/07/warminginterrupted-much-ado-about-natural-variability/
‘History is going to look back and laugh at you guys for ignoring experts like Trenberth and instead siding with fruitcakes.’
Currently, science is deciphering much more about the nature and extent of natural climate variability. Simple explanations that minimize uncertainty do no one any service at all.
“We believe that the orbits of the large outer planets especially influence the rotation of the solar barycentre and therefore the solar magneto, sunspots, UV and TSI.”
We *know* that rising CO2 has a warming effect. The radiative imbalance from a doubling of CO2 has even been quantified.
Speculative solar forcing effects don’t even have a mechanism behind them, let alone quantified figures that would dwarf the massive forcing calculated for a doubling of CO2.
“The ‘climate shift’ after 1998 is an example of a chaotic bifurcation in climate – a shift to a new ‘strange attractor’. It is characterized primarily by a cool PDO and more intense and frequent La Niña – together these are known as the Pacific Decadal Variation. The PDV lasts for 20 to 40 years in the proxy records – so another 10 to 30 years in this cooler trend.”
The PDO is already cool, and La Niñas can hardly get any more intense and frequent than they already have been in the last 5 years. So there can’t be 10 to 30 years of a cooler trend. Most of the cooling from more intense and frequent La Niñas has already happened, and yet it wasn’t enough to reverse the warming – even though it also coincided with a low solar minimum.
“The change in radiative forcing as a result of PDV cloud feedback is much larger than the contemporaneous change in greenhouse gas forcing”
How much larger is it? Give the global figure in W/m2.
The data on SW and LW radiative flux shown on the ISCCP-FD site linked to show about a 2 W/m2 net flux change between the mid 80s and the end of the 1990s – mostly in the SW as a result of cloud changes – as in the NASA quote. These are mostly ENSO radiative feedbacks – as it mostly happens in the tropics and subtropics.
‘During the descent into the recent exceptionally low solar minimum, observations have revealed a larger change in solar UV emissions than seen at the same phase of previous solar cycles. This is particularly true at wavelengths responsible for stratospheric ozone production and heating. This implies that ‘top-down’ solar modulation could be a larger factor in long-term tropospheric change than previously believed, many climate models allowing only for the ‘bottom-up’ effect of the less-variable visible and infrared solar emissions. We present evidence for long-term drift in solar UV irradiance, which is not found in its commonly used proxies.’ Lockwood – linked to already
Judith Lean (2008) commented that ‘ongoing studies are beginning to decipher the empirical Sun-climate connections as a combination of responses to direct solar heating of the surface and lower atmosphere, and indirect heating via solar UV irradiance impacts on the ozone layer and middle atmosphere, with subsequent communication to the surface and climate. The associated physical pathways appear to involve the modulation of existing dynamical and circulation atmosphere-ocean couplings, including the ENSO and the Quasi-Biennial Oscillation. Comparisons of the empirical results with model simulations suggest that models are deficient in accounting for these pathways.’
The PDV is in a cool mode – the PDV involves a cool PDO and an increase in the frequency and intensity of La Niña over 20 to 40 years. You can see this clearly in Klaus Wolter’s MEI – La Niña (blue) dominant to 1976, El Niño (red) dominant to 1998 and La Niña dominant since. The cloud feedbacks change the global radiative dynamic.
If you are not going to bother looking at things I post – and simply respond with a narrative – I am finished.
http://www.esrl.noaa.gov/psd/enso/mei/
Looking at this thread that started with the statement:
The residence time behavior of CO2 is not the same kind of effect as a butterfly wing, even though it may have a similarity of outcome. Atmospheric concentration of CO2 has fat-tail statistical properties and is not something caused by a chaotic event. The point is that we have to understand how CO2 builds up to get to a potential breakdown point, and then go from there. The point is that this is not a chaotic forcing function but a gross-level driver produced over the course of time.
You had said this to provoke lolwot’s comment:
I would phrase this as some combination of CO2 with the surrounding system context may turn into a threshold phenomenon. CO2 by itself does not show threshold properties, as it is governed by entropy and the ensemble properties of reaction kinetics and the statistical mechanics of the gas in the volume it resides.
I bring this up because this aspect always seems to be missing from the discussion, as in counting your chickens before they are hatched.
CO2 is both a control variable in a system with strong negative and positive feedbacks – and a feedback, as CO2 in the atmosphere increases with warmer temperatures.
We have no idea at what level – if any – increasing CO2 passes a threshold and triggers a chaotic bifurcation.
Exactly. It’s a meaningless bogeyman, since it can by definition never be specified by place or time or consequence. In effect, it counsels a state of non-quivering stasis lest something bad happen.
“Your claim is stupid.
CO2 is a greenhouse gas. As levels of it rise it will cause warming. There are no studies showing anything to the contrary.”
Any math or physics to back this claim up, lolwot? I didn’t think so, but it’s just like “gravity”, right?
What next? Large objects fall faster than small ones? Is this how science is proved to you? It sounds good, so it must be true.
Billions spent, thousands of really big brains involved, and nothing is settled – or the blog would be empty. “History is going”… like you have a clue about what has already happened? Not likely.
It’s settled science cwon1. Greenhouse gases warm the earth.
Are you a ‘birther’ as well?
Those who believe global warming is the cause of many worldly ills are only admitting one thing: facts simply do not matter to them anymore.
If facts don’t matter, why is it that I am defending the fact that the greenhouse effect exists when all the skeptics around me seem to be trying so hard to pretend it’s at best an open question?
The earth is in a cooling trend. There is no global warming. There is only climate change now. But there is always good climate change. That is the fact. Everything else is dogma.
lolwot
The GHE is real.
CO2 is a GHG.
Humans emit CO2.
That much is corroborated. (Yawn!)
What is NOT corroborated, however, is the premise that AGW, caused principally by human CO2 emissions, has been the primary cause of 20th century warming and that it, therefore, represents a serious threat to humanity and our environment.
There is just too much uncertainty surrounding that hypothesis and no empirical evidence (based on actual physical observations or reproducible experimentation, according to the scientific method) to support it.
And, lolwot, that is what is being debated.
Max
There is something unsatisfactory about the Bradley paper if it is used to speak to the issues being discussed in the other two and by most commentators on this thread.
It goes back to the dichotomy introduced by the quotes from Laplace and Borges. As the desideratum, Laplace sets up an idealised model that is indistinguishable from reality (apart from the addition of an observer), whereas Borges is much more prosaic and worries about the value of the model (and by implication introduces the notion of utility as a criterion to be used in assessing models).
Utility and purpose are intrinsic in modelling, and the analysis of uncertainty in modelling is intimately involved with those concepts. At its simplest, uncertainty is likely to be a dimension of utility.
By focusing on the Laplacian demon and its effort to describe the complete climate as a system, Bradley implicitly draws us into a labyrinth of GCM complexity that might be relevant to the grand scientific endeavour of a universal understanding of our climate (or forecasting tomorrow’s weather). But this is very likely irrelevant to the purpose others here have to hand: “What is the impact of GHGs likely to be on earth in 30–50 years’ time?” (if only because of the time that will be required to reduce the uncertainties in these grand models to a useful level).
By tacitly ignoring purposes much more limited than the grand theory, we are never forced to ask whether techniques such as greater use of stochastic modelling, or modelling much more limited domains, might provide more useful models for the purpose at hand.
And the important uncertainties would most certainly be different if they did.
I wouldn’t take my Laplacean demon talk too seriously. Perhaps I should have said more about the goals of modelling, and how they also impact on what counts as a good model and so on. The demon talk was supposed to act as a contrast to actual science, but I guess by seeing things too much on that dimension, you lose the pragmatic aspect of actual climate modelling. This is a good point. Thanks!
Here is a list of Nobel prize winners whose works support the AGW theory.
Svante Arrhenius
Jacobus Henricus van’t Hoff
Johannes Diderik van der Waals
Wilhelm Wien
Max Planck
Niels Bohr
Arthur Compton
Peter Debye
Max Born
Werner Heisenberg
Erwin Schrödinger
You have a beef with AGW, take it up with them.
But that’s an appeal to authority! All skeptics know authority is meaningless. That’s why we never bring it up.
30,000 scientists agree with me.
Edward Lorenz ‘was a professor at the Massachusetts Institute of Technology when he came up with the scientific concept that small effects lead to big changes, something that was explained in a simple example known as the “butterfly effect.” He explained how something as minuscule as a butterfly flapping its wings in Brazil changes the constantly moving atmosphere in ways that could later trigger tornadoes in Texas.
His discovery of “deterministic chaos” brought about “one of the most dramatic changes in mankind’s view of nature since Sir Isaac Newton,” said the committee that awarded Lorenz the 1991 Kyoto Prize for basic sciences. It was one of many scientific awards that Lorenz won. There is no Nobel Prize for his specific field of expertise, meteorology.’
Abrupt change in complex and dynamic Earth systems is the new paradigm – it displaces simple linear systems thinking that has prevailed. Abrupt change rather than global warming.
Lorenz did not originate the idea of extreme sensitivity to initial conditions. Poincare discovered it mathematically around 1910. Lorenz was unusual for a meteorologist in having a deep math background, so when his simple computer model of atmospheric circulation exhibited sensitivity he recognized chaos, where others would have seen a glitch or some such. Thus Lorenz married the math to the physics, creating a new branch of dynamics in the process.
The history of science is full of cases where new math leads the way. The development (or discovery) of algebra, analytical geometry, calculus, non-Euclidean geometry, and tensors are major examples. Chaos is another.
Poincare in the 3 body problem discovered the first deterministically chaotic system.
Didn’t he also show that chaotic behavior is often predictable?
That’s why they call it “deterministic chaos”. Anything that is deterministic is by definition predictable. The opposite of deterministic is stochastic, which means that we can define at most a probability as to which path a behavior might take. The key thing to realize about stochastic behavior is that it is often modulated by billions of individual agents that have the same stochastic properties. This leads to fairly predictable outcomes and is the basis of the science known as statistical mechanics and of the laws of thermodynamics.
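A minimal sketch of that last point – individually unpredictable agents, collectively predictable statistics – using plain random walkers rather than gas molecules (the counts here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Each "agent" takes 1000 random +/-1 steps; any single agent is unpredictable.
n_agents, n_steps = 10_000, 1_000
steps = rng.choice(np.array([-1, 1], dtype=np.int8), size=(n_agents, n_steps))
final_positions = steps.sum(axis=1, dtype=np.int64)

# The aggregate statistics are highly predictable: mean ~ 0, spread ~ sqrt(n_steps).
print(f"one agent ended at {final_positions[0]:+d} (could have been almost anywhere)")
print(f"ensemble mean = {final_positions.mean():+.2f}  (theory: 0)")
print(f"ensemble std  = {final_positions.std():.1f}  (theory: {np.sqrt(n_steps):.1f})")
```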
There is a confusion here between two senses of ‘predictable’. Deterministic means predictable only in the sense that requires perfect knowledge. The predictability we are interested in is predictability by humans, where perfect knowledge of the state of a physical system is impossible. So in the sense that counts, chaotic behavior is intrinsically, or perfectly, unpredictable.
This is what is exciting about chaos: that it is deterministic but completely unpredictable (by humans). Intrinsic unpredictability changes the nature of science but people have a hard time seeing the world that way. Most still don’t.
Web, stochastic is not the opposite of deterministic. The underlying phenomena in statistical mechanics and thermodynamics are all deterministic (unless you throw in quantum indeterminacy). Stochastic refers to an approach that makes no attempt to use the determinism because it is too complex and unobservable.
You are the second person now to bring up a philosophical point that really has zero practical implications.
Statistical mechanics incorporates the ideas of probability distributions such as Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein. Look it up and you will find stochastic is simply an alternate definition of probabilistic (in two fewer syllables).
So electrons, the velocity of gas molecules and a Bose-Einstein condensate (?) have climate applications?
Stochasticity – probability – whatever – are explanations for things not understood and have little relevance in climate other than as stochastic generation of cloud, rainfall or other parameters. Marginal in other words.
Except for the obnoxiously and eccentrically opinionated.
I thought entropy was based on something other than an opinion.
I would say that the distinction between chaos and stochastic has actually huge practical implications, and that statistical thermodynamics is a proof of that.
It’s indeed true that the subject of classical statistical mechanics is fully deterministic and chaotic in the extreme. At the same time, statistical methods produce very accurate results. Thus this example proves that we may have a strongly chaotic system but still obtain unique and accurate results – and that is actually related to the fact that it’s stochastic. The real world will never enter any of the strange paths that a deterministic chaotic system may go through; the stochasticity effectively converts one single system into a canonical ensemble, solving one of the philosophico-mathematical problems of statistical mechanics. It also suggests that similar phenomena may occur in other places where chaotic dynamics is used as an argument against even limited predictability, most notably in climate science.
The fact that there are lots of chaotic details in the Earth system does not by itself tell much about the predictability of climate. The observation of chaotic features leaves the answer open – both ways.
Hmmmm… well I guess it’s settled then. Nothing left to discuss.
‘Recent scientific evidence shows that major and widespread climate changes have occurred with startling speed. For example, roughly half the north Atlantic warming since the last ice age was achieved in only a decade, and it was accompanied by significant climatic changes across most of the globe. Similar events, including local warmings as large as 16°C, occurred repeatedly during the slide into and climb out of the last ice age. Human civilizations arose after those extreme, global ice-age climate jumps. Severe droughts and other regional climate events during the current warm period have shown similar tendencies of abrupt onset and great persistence, often with adverse effects on societies.
Abrupt climate changes were especially common when the climate system was being forced to change most rapidly. Thus, greenhouse warming and other human alterations of the earth system may increase the possibility of large, abrupt, and unwelcome regional or global climatic events. The abrupt changes of the past are not fully explained yet, and climate models typically underestimate the size, speed, and extent of those changes. Hence, future abrupt changes cannot be predicted with confidence, and climate surprises are to be expected.
The new paradigm of an abruptly changing climatic system has been well established by research over the last decade, but this new thinking is little known and scarcely appreciated in the wider community of natural and social scientists and policy-makers.’ http://www.nap.edu/openbook.php?record_id=10136&page=R1
It is of course barely begun – but I fail to see any point to your comment at all.
My reply was to the OP on his appeal to Nobel authority. Very thinly veiled sarcasm.
That’s OK then.
It’s their works that need to be shown to be in error in order to show the greenhouse effect is wrong – quite a row to hoe. Your sarcasm won’t do.
Very few people are trying to show that CO2 is transparent to IR radiative flux. Just that there is a bigger picture.
Pusillanimous nonsense. Introduction of CO2, or any other “tweak” of the system, is as likely to oppose and delay an “abrupt change” as stimulate it. E.g., if AGW really is valid it will help us resist the onslaught of the ice sheets. Fat chance in reality, but in your hyper-sensitive universe it’s got a shot.
pu·sil·lan·i·mous [pyoo-suh-lan-uh-muhs], adjective: 1. lacking courage or resolution; cowardly; faint-hearted; timid. 2. proceeding from or indicating a cowardly spirit.
I guess I’m saying that ‘examples lead to an inevitable conclusion: since the climate system is complex, occasionally chaotic, dominated by abrupt changes and driven by competing feedbacks with largely unknown thresholds, climate prediction is difficult, if not impracticable.’ http://www.biology.duke.edu/upe302/pdf%20files/jfr_nonlinear.pdf
I am saying that AGW theory fundamentally mistakes the nature of climate – and indeed of climate models – which is deterministic chaos rather than linearity – but you have no comprehension of this at all.
I think you are full of supercilious nonsense – delivering mere rhetoric ‘superficially in the culturally potent language of science’ – but not showing any depth of knowledge or understanding of the spirit of science.
‘More fundamentally than in the realm of politics, over-stating confidence about what is known is much more likely to lead us astray in basic research than admitting ignorance…This dynamic tension has always been the motor force in scientific revolutions.’ LSE 2010 Hartwell Paper
How does your ‘reasoning’ not lead to the ‘stand-still’ strategy? Avoiding all effects on the ‘climate system’ of any nature? Deterministic chaos, poorly characterized in even its crudest reactions to input, is not a basis for ANY intelligent decision making. It certainly does not justify avoiding a swing in CO2 levels dwarfed by those known to have occurred naturally.
Or have you stopped advocating the Precautionary Principle?
‘The new framework now emerging will succeed to the degree to which it prioritizes agreements that promise near-term economic, geopolitical, and environmental benefits to political economies around the world, while simultaneously reducing climate forcings, developing clean and affordable energy technologies, and improving societal resilience to climate impacts. This new approach recognizes that continually deadlocked international negotiations and failed domestic policy proposals bring no climate benefit at all. It accepts that only sustained effort to build momentum through politically feasible forms of action will lead to accelerated decarbonization.’
http://thebreakthrough.org/blog/Climate_Pragmatism_web.pdf
Deterministic chaos implies that there is a risk of serious adverse consequences – what that is and even if it will happen is of course impossible to tell. We come to a core of uncertainty – to argue that it won’t happen requires more knowledge than I possess. It suggests to me that a creative and flexible response is required – as in the Breakthrough Institute policy document recently discussed. It is worth reading.
It has a political purpose as well – it enables classic liberals to capture the policy high ground and bend the policy to the essential goals of global economic growth and development, and at the same time achieve practical ecological conservation goals. Merely resisting is self-defeating – particularly where it is not soundly based but rests instead on an argumentum ad ignorantiam.
There is nothing wrong with being cautious – it is how that is translated into real world politics that is the key to progress.
Chief: There is nothing wrong with being cautious if that means not acting precipitously. If being cautious means taking significant action then there is a lot wrong with it. There is nothing in chaos theory to support your view. The butterfly effect refers to infinitesimal differences. There is nothing to suggest that specific actions increase risks.
What action is suggested is that of the recent Breakthrough Institute paper.
The climate system is characterised by strong negative and positive feedbacks with little-known thresholds. But you are right – there is nothing other than a presumption that an effect might occur. But to leap to the conclusion that an effect won’t occur is also a presumption.
Whatever the “risk” of a destabilization of the climate, what is CERTAIN is that the disingenuously described “achieve practical ecological goals” mitigations involve reliance on ruinously regressive and expensive “mitigations”. Unless you have some other kinds in mind. If you do, they can amount to nothing other than preparing for and coping with all eventualities. Which de facto requires maximizing both wealth and flexibility.
Mitigation strategies are fatal to both those requirements. And, incidentally, to the billions of humans they render “surplus” by means of fiat fuel poverty. (That is the real world effect of “decarbonization”.)
Oops!
In the real world ecologies are just that – I had in mind things like the restoration and conservation of viable ecosystems. Goods for a rich world. Economic growth and development speak for themselves.
You keep trying to push me into some category in your imagination based on a false dichotomy.
This thread advances from the ludicrous to the laughable. Listing a series of Nobelists, all dead before the development of AGW theory, is a red herring. The issue is not whether CO2 gives rise to warming. It does. Let us even stipulate that lolwot is correct when saying “A doubling of CO2, a feat man will easily succeed in, produces something like a 3.7 W/m2 forcing”. The fact is that this level of forcing will lead to a rise in temperature on the order of 1.2-1.5 degrees centigrade. Hardly catastrophic, as even the IPCC concedes. If this were the total expected impact from AGW, the IPCC would have folded its tent long ago. That is not the issue AT ALL. The issue at hand is whether or not there is destined to be positive feedback from this warming, and the extent, if any, of these feedbacks. On this subject, none of the Nobelists cited opined, and I am not aware of anything in their science that directly supports the positive feedbacks that are relied upon by the IPCC. THIS is the source of the uncertainty.
These feedbacks are not even necessarily generated by the models: a significant portion of the feedbacks are inputs to the models. Absent these feedbacks, all the drama surrounding AGW and CAGW disappears. The reality is that we do not yet have proof of these feedbacks, while recently we appear to have some evidence of their absence. This gap in the models’ performance is, most recently, ascribed variously to Chinese coal burning and/or ultra-deep storage of heat in the oceans (which we cannot yet measure). All this may be correct, but it certainly does lead to an increasing degree of uncertainty.
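One standard way of formalising the point about feedbacks – a textbook gain relation with illustrative numbers, not figures taken from any particular study – is to write the no-feedback response ΔT0 amplified by a net feedback factor f:

$$\Delta T = \frac{\Delta T_0}{1 - f}, \qquad \text{e.g. } \Delta T_0 \approx 1.2\,^{\circ}\mathrm{C},\ f \approx 0.6 \;\Rightarrow\; \Delta T \approx 3\,^{\circ}\mathrm{C}$$

The whole gap between the roughly 1 °C no-feedback figure and the IPCC’s central estimate of about 3 °C sits in the assumed value of f, which is exactly where the uncertainty argued over in this thread lives.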
Larry, you write: “Let us even stipulate that lolwot is correct when saying ‘A doubling of CO2, a feat man will easily succeed in, produces something like a 3.7 W/m2 forcing’. The fact is that this level of forcing will lead to a rise in temperature on the order of 1.2-1.5 degrees centigrade.”
It would be interesting to see any response from lolwot. However, I am one of the skeptics who maintain that the physics of how you convert a change in radiative forcing into a change in surface temperature does not exist. The value could be 1.2–1.5 C for a change of 3.7 W/m2, or it could just as easily be 0.12–0.15 C. We just don’t know.
Jim Cripwell,
If you’d read a non-scientific subject at uni I can understand that you might just throw up your hands and say you “just don’t know”. But I thought you were supposed to be a physicist?
So how about looking up the Total Solar Irradiance?
http://earthobservatory.nasa.gov/Features/VariableSun/variable2.php
It’s about 1367 W/m^2. So dividing by 4 for the ratio of the Earth’s cross-sectional area to its surface area gives 342 W/m^2. Right?
Average Earth’s temp is about 290K.
So that works out at 0.85 deg K / (W/m^2)
That means a change in forcing of 3.7 W/m^2 will produce a warming of 0.85 x 3.7 = 3.1 deg K
Maybe that Physics degree was just a long time ago ?
You fall down on three counts there:
1) The surface temperature is a function of net flux (incoming – outgoing). You’re only considering incoming in your calculation
2) you don’t take the 0.3 albedo into consideration
3) Before you can use that 3.7W forcing for heating, you first have to subtract latent and sensible heat.
The argument of tempterrain is on the effective radiative temperature of the Earth and on the imbalance at TOA before the temperature adjustment has had time to develop.
Very roughly the change in the effective radiative temperature affects the whole system uniformly. Thus 0.85 C in the effective radiative temperature corresponds to 0.85 C everywhere below the original tropopause (the altitude of the tropopause rises, and the changes in that range are different).
More accurate calculations tell that the temperature change at the surface is slightly larger even without factors that are usually included in the feedbacks. The details of flux at the surface belong to these additional calculations. All components are included in that, both up and down.
Albedo is usually assumed to remain unchanged in the basic calculation and its changes are included in the feedbacks. The constant value of albedo is taken into account.
The calculation that leads to the conclusion that surface warms a bit more than the effective radiative temperature includes convection and some effects related to latent heat, but most of that is handled as feedback.
‘Earth’s global albedo, or reflectance, is a critical component of the global climate as this parameter, together with the solar constant, determines the amount of energy coming to Earth. Probably because of the lack of reliable data, traditionally the Earth’s albedo has been considered to be roughly constant, or studied theoretically as a feedback mechanism in response to a change in climate. Recently, however, several studies have shown large decadal variability in the Earth’s reflectance. Variations in terrestrial reflectance derive primarily from changes in cloud amount, thickness and location, all of which seem to have changed over decadal and longer scales.’ http://www.bbso.njit.edu/Research/EarthShine/
Peter137,
1) So what are you saying? I should use double the flux, or subtract one from the other and use zero?
2) The factor of 0.3 is the same for the 342W/m^2 as it is for 3.7 W/m^2. It doesn’t affect the calculation.
3) Subtract latent and sensible heat? You mean subtract all heat? If so why?
You sure you know what you’re talking about?
Sorry, I could have worded that a bit better. So I’ll put it a bit differently.
At the TOA, an extra forcing of 3.7W/m2 means that an extra 3.7W/m2 needs to be emitted to restore radiative equilibrium. This extra 3.7W/m2 is made up by the (effective) radiating surface being around 1C warmer. This is a far cry from the 3.1C you calculated.
At the surface things are a bit more complicated, as 3.7W/m2 is insufficient to heat the surface by 1C, because of latent and convection losses.
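The ‘around 1C’ figure can be checked by linearising the Stefan–Boltzmann law at the effective emission temperature – a back-of-envelope estimate, not a substitute for a full radiative-transfer calculation:

$$F = \sigma T_e^{4} \;\Rightarrow\; \Delta T_e \approx \frac{\Delta F}{4\sigma T_e^{3}} = \frac{3.7}{4 \times 5.67\times 10^{-8} \times 255^{3}} \approx 1\ \mathrm{K}$$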
Your calculations are impeccable. However, the 3.7 W/m2 we are talking about does not disappear from the earth’s surface. It is radiated out into space from somewhere else in the atmosphere. Your calculations apply to wherever this is, NOT to the earth’s surface.
So you’re saying that the adiabatic lapse rate ( the falling of temperature with increased elevation) will decrease? In other words the upper layers of atmosphere will become warmer but the surface temperature will remain the same or similar?
http://en.wikipedia.org/wiki/Lapse_rate
If so, your reasons are…. ?
Jim,
I should have also pointed out that the 3.7W/m^2 is an equivalent forcing. Adding GH gases to the atmosphere doesn’t mean that the Earth actually receives any extra radiation from the sun or radiates any extra energy itself.
If an observer on the moon were to measure the temperature of the Earth using its IR emissions he would measure approximately 255 degK. About 33 deg K lower than the surface temperature, regardless of the GH gas concentrations in the atmosphere.
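And the ~255 K figure itself follows from balancing absorbed sunlight against blackbody emission (again back-of-envelope, with S ≈ 1366 W/m2 and an albedo of 0.3):

$$T_e = \left[\frac{(1-\alpha)\,S}{4\sigma}\right]^{1/4} = \left[\frac{0.7 \times 1366}{4 \times 5.67\times 10^{-8}}\right]^{1/4} \approx 255\ \mathrm{K}$$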
I am not going to enter into a discussion on this. You can go straight to Wikipedia and argue with these sources:
Rahmstorf, Stefan (2008). “Anthropogenic Climate Change: Revisiting the Facts”. In Zedillo, E. (PDF). Global Warming: Looking Beyond Kyoto. Brookings Institution Press. pp. 34–53.
“Without any feedbacks, a doubling of CO2 (which amounts to a forcing of 3.7 W/m2) would result in 1°C global warming, which is easy to calculate and is undisputed. The remaining uncertainty is due entirely to feedbacks in the system, namely, the water vapor feedback, the ice-albedo feedback, the cloud feedback, and the lapse rate feedback”; addition of these feedbacks leads to a value of approximately 3°C ± 1.5°C.
Larry,
Well if it’s “easy to calculate”, why can’t you show us?
tempterrain
IPCC has already done that for us.
IPCC (Myhre et al.) tells us that the radiative forcing for 2xCO2 = 3.7 W/m^2 and that this results in a warming of just under 1°C.
IPCC then gives us (AR4 WG1 Ch.8, p.630) the model-based feedbacks in W/m^2 per °C, with a note that cloud feedbacks remain the largest source of uncertainty.
On this basis (p.633), the IPCC models have estimated that, on average, net cloud feedback is positive, increasing the climate sensitivity by around 1.3°C to 3.2°C, but there is large uncertainty in this estimate.
Fortunately, there have been studies since AR4, which clear up some of this ”largest source of uncertainty” on cloud feedbacks, notably Spencer et al. 2007, which shows, based on CERES observations over the tropics, that net cloud feedback is negative, rather than positive, as estimated by the IPCC GCMs.
http://blog.acton.org/uploads/Spencer_07GRL.pdf
In addition, a model study was made by Wyant et al. using superparameterization for clouds to better estimate their behavior than was possible by the GCMs cited by IPCC
ftp://eos.atmos.washington.edu/pub/breth/papers/2006/SPGRL.pdf
Based on these newer data, we can clear up some of the uncertainty in the climate sensitivity estimates including all feedbacks accordingly, arriving at a revised figure of around 1°C or less.
Max
Note Willis’ current posting on feedbacklessness:
http://wattsupwiththat.com/2011/08/14/its-not-about-feedback
He says sensitivity is a variable, part of the “governor” in a homeostatic system. Specifically, the warmer it gets, the lower the sensitivity. And it’s very local, and “flips” between several states in the tropics in the course of a normal day.
In any case, the one thing clouds are NOT is positive feedback agents. In their thunderstorm form, they are so negative they carry on negating way below the start point! Heh.
You have a limited grasp of the history of AGW theory; a large portion of it predates the establishment of the Nobel prizes.
And about the feedbacks, are you blissfully unaware of the trend in specific humidity in the atmosphere?
“I am not aware” is an argument from ignorance – surely you are aware of that.
Arrant nonsense. The “AGW theory”, of which I am very well aware, that predates the Nobel prize consists of Arrhenius’ ideas. (He is the same guy who helped foster the ‘science’ of eugenics in Sweden, by the way.) Even Arrhenius acknowledged that a doubling of CO2 would be unlikely to raise temperatures above 1.6 degrees C. As to the trend of specific humidity in the atmosphere, that has been claimed to be negative just as you claim it to be positive in support of your espoused position. Why can’t we all be honest and face the reality of uncertainty? All this brave certainty is not doing the science any favors.
Honest, how about checking facts?
http://www.ncdc.noaa.gov/bams-state-of-the-climate/2009-time-series/humidity
Got a cite for the claimed negative trend in specific humidity?
The support of eugenics by Arrhenius is a non sequitur.
And his number for doubling CO2 was 4 not 1.6. See page 53.
http://books.google.com/books?id=1t45AAAAMAAJ&printsec=frontcover#v=onepage&q&f=false
And I think you meant Arrogant Nonsense!
From Wikipedia (my bold): “Including the water vapor feedback: 2.1 C”
That’s beside the point, and you know it. You accused Larry of “Arrogant Nonsense” for quoting the 1.6C figure. But then, when you got caught out on that, you changed your tune.
It’s interesting that Arrhenius did arrive at figures which look to be in the right ballpark as long ago as he did.
I haven’t checked his work, but my guess would be that there was a large element of luck involved in his getting as close as he did, probably with one error cancelling out another, and it is quite irrelevant whether his first figure was higher than his second. This is not meant as a criticism of Arrhenius, but rather just an observation that he couldn’t possibly have had the necessary knowledge at his disposal at the time.
You have an old TSI value – it has been adjusted down to ‘about’ 1361 W/m^2. You have the wrong formula. It should be:
(1 – A) x TSI/4
where A is the albedo. Albedo is not constant – it is an ontological absurdity to suggest that it might be. And it is still the wrong formula. There is a differentiation of the Stefan–Boltzmann equation that would give you something like that – but it uses a constant calculated elsewhere.
Use this instead – http://geoflop.uchicago.edu/forecast/docs/Projects/modtran.html
Use an altitude of 100km looking down – first run with a CO2 concentration of 280 ppm then double CO2 – then increase the surface temperature to compensate until IR up is the same as the first run which gives you a line by line estimate – all other things being equal – of the temperature increase.
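The procedure described – double CO2, then raise the surface temperature until the outgoing IR matches the baseline run – is just a one-dimensional root-finding exercise. A sketch of the logic, where `toa_flux_up` is a hypothetical placeholder for however you obtain the outgoing flux (the MODTRAN page itself is driven by hand through its web form):

```python
def compensating_warming(toa_flux_up, co2_before=280.0, co2_after=560.0,
                         lo=0.0, hi=10.0, tol=1e-3):
    """Bisect for the surface-temperature offset (K) that restores the baseline
    outgoing IR after CO2 is doubled.  `toa_flux_up(co2_ppm, dT)` stands in for
    a radiative-transfer run; it must increase monotonically with dT."""
    target = toa_flux_up(co2_before, 0.0)         # IR up in the baseline run
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if toa_flux_up(co2_after, mid) < target:  # still emitting too little,
            lo = mid                              # so more surface warming is needed
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

All other things held equal, as the comment says – which is precisely the assumption the rest of this thread is arguing about.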
All other things are far from equal – you are an amateur and should learn to crawl before you walk, and not be such an obnoxiously insulting dickhe@d.
I am surprised at Pekka for encouraging such simplistic nonsense.
bob droege
Yeah. It’s a NOAA site, based on data going back to 1948.
http://www.esrl.noaa.gov/psd/cgi-bin/data/timeseries/timeseries.pl?ntype=1&var=Specific+Humidity+(up+to+300mb+only)&level=300&lat1=90&lat2=-90&lon1=180&lon2=-180&iseas=1&mon1=0&mon2=11&iarea=1&typeout=1&Submit=Create+Timeseries%20
I plotted it here together with the HadCRUT3 temperature record.
http://farm4.static.flickr.com/3343/3606945645_3450dc4e6f_b.jpg
Hope this helps.
Max
PS Arrhenius later corrected his earlier estimate downward, for what it’s worth.
By the same token, you could also say that most of the scientists on your list supported nuclear war.
Once again…
I am appalled that there is so much written on the limitations and confidence of the models and so little actual analysis of real results (I’m speaking about “Scientific uncertainty: a user’s guide” here).
Vagueness, generalizations. None of these apologists has the courage to state the obvious: these models are woefully inadequate at this time to base policy judgments on.
Sure, we cannot state they are completely broken, but we sure can state they are trending very poorly and are “very likely” broken in quite significant ways – clouds and the handling of El Niño, for example.
I wish these guys would address the whole truth, and stop treating these things with kid gloves. The science would get better faster.
In my defense, I am not qualified to properly analyse the “real results”. Philosophers deal with generalisations. That’s how we work.
I don’t think it’s my place to be telling scientists how to do their job.
That said, I think you’re right that more analysis of the limits on predictability would be worthwhile. (Although I’d frame it more as “working out how much of the predictions we can be confident of”)
Models can’t be considered broken with respect to clouds if they were never intended to model individual cloud behaviour, due to specific constraints concerning processing speeds, cell dimensions, etc.
bob droege
Correct.
That is why the model estimate cited by IPCC for net cloud feedback included such a “large source of uncertainty”.
The studies I cited to tempterrain cleared up some of this “uncertainty”
In particular, the Wyant et al. model studies using super-parameterization for clouds helped to resolve the problem you cite with the IPCC models.
Max
Models can’t be considered broken with respect to clouds if they were never intended to model individual cloud behaviour…
That depends on perspective: there’s nothing wrong with a model that fails to do something it wasn’t designed to do, sure. But if you want to use the models to make predictions, and making good predictions requires that you model clouds properly, then the models are broken with respect to this goal.
‘Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.’
http://www.pnas.org/content/104/21/8709.full
The problem is that there are no ‘systematically designed model families’ – so the level of ‘irreducible imprecision’ is unknown.
‘The global coupled atmosphere–ocean–land–cryosphere system exhibits a wide range of physical and dynamical phenomena with associated physical, biological, and chemical feedbacks that collectively result in a continuum of temporal and spatial variability. The traditional boundaries between weather and climate are, therefore, somewhat artificial. The large-scale climate, for instance, determines the environment for microscale (1 km or less) and mesoscale (from several kilometers to several hundred kilometers) processes that govern weather and local climate, and these small-scale processes likely have significant impacts on the evolution of the large-scale circulation (Fig. 1; derived from Meehl et al. 2001). The accurate representation of this continuum of variability in numerical models is, consequently, a challenging but essential goal. Fundamental barriers to advancing weather and climate prediction on time scales from days to years, as well as longstanding systematic errors in weather and climate models, are partly attributable to our limited understanding of and capability for simulating the complex, multiscale interactions intrinsic to atmospheric, oceanic, and cryospheric fluid motions.’ http://www.gfdl.noaa.gov/bibliography/related_files/Hurrell_2009BAMS2752.pdf
‘AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior. Plausibility criteria are qualitative and loosely quantitative, because there are many relevant measures of plausibility that cannot all be specified or fit precisely. Results that are clearly discrepant with measurements or between different models provide a valid basis for model rejection or modification, but moderate levels of mismatch or misfit usually cannot disqualify a model. Often, a particular misfit can be tuned away by adjusting some model parameter, but this should not be viewed as certification of model correctness.’
http://www.pnas.org/content/104/21/8709.full
The models are marginally ‘plausible’ – in the sense of representing Earth system physics – with better models requiring 1000 times more computing power. The bounds of ‘irreducible imprecision’ are not known. A non-unique solution (within bounds of plausibility for initial and boundary conditions) is arrived at on the basis of ‘a posteriori solution behavior’.
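A minimal sketch (mine, not from the McWilliams paper, with all numbers invented) of what ‘systematically designed model families’ would let you estimate: compare the spread of a projected quantity within a family (parameter and tuning choices) to the spread across families (structural choices), the latter being a rough proxy for the irreducible imprecision described above.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical projected warming (K) from three structurally different model
# families, each run five times with perturbed parameters/tunings.
families = {
    "family_A": rng.normal(3.0, 0.3, size=5),
    "family_B": rng.normal(2.4, 0.2, size=5),
    "family_C": rng.normal(3.6, 0.4, size=5),
}

family_means = np.array([runs.mean() for runs in families.values()])
within_spread = np.mean([runs.std(ddof=1) for runs in families.values()])  # tuning spread
across_spread = family_means.std(ddof=1)                                   # structural spread

print(f"mean within-family spread: {within_spread:.2f} K")
print(f"across-family spread:      {across_spread:.2f} K (proxy for structural imprecision)")

Without such a designed set of families, only the within-family spread is available, which is why the bounds of the structural imprecision stay unknown.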
At RC I have asked Chris Colose the question, “[do] GCMs adequately model the succession of day and night on a 24-hour cycle[?]”. Awaiting his reply.
Judith,
The problem with uncertainty is that it can be used as a catch-all phrase for the MANY mistakes currently being made in science.
Creating a like-minded closed system has turned age-old theories into “facts” that are traditionally passed down through education.
Models will never be correct until the mechanics of the forces on this planet are understood. This is brushed off as fiction by current scientists.
I have now read “Reasoning about climate uncertainty” more carefully.
The issue for me is that only at the end do you get to talk about who the users of the IPCC reports might be and what they need from the reports. Following on from my earlier comment on this thread I can’t help but think that starting there might bring the issue of uncertainty into clearer focus.
One obvious comment: I’m sure this would suggest that materiality to the matter in hand is an important dimension to be explicit about, including in discussions of uncertainty in all its various guises.
Interesting distinctions, there. The official “users” are the governments that comprise the “Intergovernmental” part of IPCC. They mandated, it seems, exploring methods for dealing with, as a given, human-induced climate change.
Which begs the question for any user not prepared to accept that “given”. “Givens” don’t have uncertainties, and this one has the material effect of exponentially magnifying the reach and resources of the official users to deal with it, once accepted.
Pachauri’s recent claim that conflict-of-interest concerns simply don’t exist within the IPCC is telling.
I had been taking a rather purist view of what the official users might want in advice – advice that reflects what a professional public service might give.
Of course, the IPCC is more like an “independent” expert body set up to deal with a complex and controversial issue. These abound at the national level and occasionally surprise when they exhibit true independence.
The problem for the IPCC in this situation is that it is set up at an international level, where it has no guarantee that the administration that tasked it will be there for it when it comes to implementation. Individual national governments have a range of views.
So to the extent that the IPCC moved from simply reporting “Just the Facts, Ma’am” (whatever they may be) to taking a partisan view in anticipation of what their UN clients wanted, they presented an opportunity for partisan debate at the national level rather than helping build an informed consensus.
The rest, as they say, is history.
They do want advice. As in, “What’s the best justification for pulling off this huge fast one?”
And Choo-Choo, Chuff-Chuff, specifically denies that the IPCC is a UN body, saying it’s an independent creature of the 194 “member” governments, and not answerable to the UN.
(ref: http://climateaudit.org/2011/06/18/pachauri-no-conflict-of-interest-policy-for-ar5/)
Actually, there was another interview I had misinterpreted. The IPCC is a UN sub-body.
But somehow it is not supposed to fix past conflicts:
“Of course if you look at conflict of interest with respect to authors who are there in the 5th Assessment Report we’ve already selected them and therefore it wouldn’t be fair to impose anything that sort of applies retrospectively.”
If data corrections are not random but linear and progressive, then are the error bars balanced +/- as they are on the GISTemp graphs? If UHIE corrections, in particular, are arguably not large enough systemically, does this not mean that the temperature of urban areas as given is at the high end, not the middle, of the range?
If an enlarged area sampling of the world temperature generates an increase in world temperatures, does this not suggest that the temperature data collected is an upper limit due to data selection biases?
If errors in the assumptions of CO2 primacy, in the radiative power of CO2, of feedback, of residence time of CO2, of thermal transfer of heat to oceanic depths, and of other assumptions/fundamentals of IPCC climate science – if errors in these assumptions uniformly reduce the global temperature projections, should not the IPCC projections all be at the top end of their error bars?
The error bars/uncertainty I see and hear seem to me to be middle-of-the-road, based on an assumption of random effects, but the probable errors I see would result in cooling, not warming. So the display, if anything, is wrong: it shows “it could be worse” when I see it as “this is as bad as it could be”.
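A toy numerical sketch of this asymmetry argument (mine, with all numbers invented, not drawn from GISTemp): if an adjustment such as a UHI correction is systematically too small by some residual amount, a symmetric error bar built from the random scatter alone will be centred above the true value, so the truth sits near one edge of the bar rather than in its middle.

import numpy as np

rng = np.random.default_rng(1)

true_anomaly = 0.60    # hypothetical true anomaly (deg C)
random_sigma = 0.10    # genuinely random measurement error (deg C)
residual_bias = 0.15   # hypothetical systematic under-correction left in the record (deg C)

# Reported values carry both the random scatter and the uncorrected bias.
reported = true_anomaly + residual_bias + rng.normal(0.0, random_sigma, 10000)

print(f"reported mean:         {reported.mean():.2f} C")
print(f"symmetric 2-sigma bar: +/- {2 * random_sigma:.2f} C around the reported mean")
print(f"true value:            {true_anomaly:.2f} C (near the lower edge of that bar)")

In this toy case the bar is honest about the random scatter but says nothing about the one-sided bias, which is the point being made here about non-random corrections.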