by Judith Curry
Since people are clamoring for a new thread, let's talk about this article in the Australian Quadrant entitled "Science without method," subtitled "Global warming research: whatever happened to the scientific method?" To review previous Climate Etc. posts on the Scientific Method, click here.
In the first two paragraphs, the article cuts straight to the heart of the matter:
Most of these papers are, of course, based upon the output from speculative and largely experimental, atmospheric models representing exercises in virtual reality, rather than observed, real-world, measurements and phenomena. Which leads to the question “What scientific methodology is in operation here?”
Though much has been written concerning the scientific method, and the ill defined question as to what constitutes a correct scientific approach to a complex problem, comparatively little comment has been made about the strange mix of empirical and virtual reality reasoning that characterises contemporary climate change research. It is obvious that the many different disciplines described as being scientific, rather than social, economic, or of the arts, may apply somewhat different criteria to determine what fundamental processes should define the “scientific method” as applied for each discipline. Dismayingly, for many years now there has been a growing tendency for many formerly “pure” scientific disciplines to embody characteristics of many others, and in some cases that includes the adoption of research attitudes and methods that are more appropriately applied in the arts and social sciences. “Post-modernism”, if you like, has proved to be a contagious disease in academia generally.
FYI, postmodern science (differentiated from postnormal science) is described here, summed up by the following quote: "If science is carried out with an amoral attitude, the world will ultimately respond to science in a destructive way. Postmodern science must therefore overcome the separation between truth and virtue, value and fact, ethics and practical necessity."
I’m ok with this so far. Read the article for some history of physics and discussion of the scientific method in this context. Then the article comes back to climate change and the enhanced greenhouse effect:
Out of this cut and paste “history” of physics, comes the strongest criticism of the mainstream climate science research as it is carried on today. The understanding of the climate may appear simple compared to quantum theory, since the computer models that lie at the heart of the IPCC’s warming alarmism don’t need to go beyond Newtonian Mechanics. However, the uncertainty in Quantum Mechanics which Einstein was uncomfortable with, was about 40 orders of magnitude (i.e. 10^40) smaller than the known errors inherent in modern climate “theory”.
The following two sentences are incorrect. Newton's second law, the ideal gas law, the first and second laws of thermodynamics, Planck's law, etc., are all well-established "assumptions" that provide the foundation for reasoning about the Earth's energy budget and the transport of heat therein.
Yet in contemporary research on matters to do with climate change, and despite enormous expenditure, not one serious attempt has been made to check the veracity of the numerous assumptions involved in greenhouse theory by actual experimentation. The one modern, definitive experiment, the search for the signature of the green house effect has failed totally.
Oops. The signature of the Earth’s greenhouse effect is well known and demonstrated by the infrared spectra at the surface and measured by satellites (see this previous thread). The “enhanced” greenhouse effect and the associated feedbacks are at issue. I suspect the author meant to say “enhanced greenhouse effect” in the above sentence. Further, you can’t conduct an experiment on a natural system such as the Earth’s climate system in the same way you can conduct a controlled experiment in a physics or chemistry lab.
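As an aside, the magnitude of the basic (unenhanced) greenhouse effect can be recovered from a back-of-envelope radiative balance. Here is a minimal sketch using textbook values for the solar constant and planetary albedo (the script and its numbers are illustrative, mine rather than the article's):

```python
# Back-of-envelope estimate of the (unenhanced) greenhouse effect:
# compare Earth's radiative-equilibrium temperature with the observed surface mean.
S = 1361.0        # solar constant, W/m^2 (textbook value)
albedo = 0.3      # planetary albedo (textbook value)
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

absorbed = S * (1.0 - albedo) / 4.0   # absorbed sunlight averaged over the sphere
T_eff = (absorbed / sigma) ** 0.25    # equilibrium emission temperature, K (~255 K)

T_obs = 288.0                         # approximate observed global mean surface temperature, K
print(f"Effective temperature: {T_eff:.0f} K")
print(f"Greenhouse effect:     {T_obs - T_eff:.0f} K")   # ~33 K
```

The ~33 K gap is the well-measured greenhouse effect; the scientific argument is over the increment from added CO2 (the "enhanced" effect) and its feedbacks, not over this number.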
Projected confidently by the models, this “signature” was expected to be represented by an exceptional warming in the upper troposphere above the tropics. The experiments, carried out during twenty years of research supported by The Australian Green House Office as well as by many other well funded Atmospheric Science groups around the world, show that this signature does not exist. Where is the Enhanced Green House Effect? No one knows.
The issue of what is going on with temperatures in the tropical upper troposphere (both in terms of the observations and the relevant processes in climate models) is an active area of research and needs to be sorted out. More on this topic in a future post. Assuming the veracity of the relevant data sets (which disagree with each other in any event) and this interpretation of them, the discrepancy between climate models and observations in the upper tropical troposphere could be a result of known inadequacies in convective parameterizations, subgrid entrainment processes, ice cloud microphysics, etc. Such potential model deficiencies accumulating in the upper troposphere do not invalidate the enhanced greenhouse effect; rather, the discrepancy is motivating increased analysis and improvement of the relevant model parameterizations.
In addition, the data representing the earth’s effective temperature over the past 150 years, show that a global human contribution to this temperature can not be distinguished or isolated at a measurable level above that induced by clearly observed and understood, natural effects, such as the partially cyclical, redistribution of surface energy in the El Nino. Variations in solar energy, exotic charged particles in the solar wind, cosmic ray fluxes, orbital and rotational characteristics of the planet’s motion together provide a rich combination of electrical and mechanical forces which disturb the atmosphere individually and in combination. Of course, that doesn’t mean that carbon dioxide is not a “greenhouse gas”, so defined as one which absorbs radiation in the infra red region of the spectrum. However, the “human signal”, the effect of the relatively small additional gas that human activity provides annually to the atmosphere, is completely lost, being far below the level of noise produced by natural climate variation.
The final concluding sentence of the above paragraph is an overconfident dismissal of the human signal on climate. Separating natural (forced and unforced) and human caused climate variations is not at all straightforward in a system characterized by spatiotemporal chaos. While I have often stated that the IPCC’s “very likely” attribution statement reflects overconfidence, skeptics that dismiss a human impact are guilty of equal overconfidence.
So how do our IPCC scientists deal with this? Do they revise the theory to suit the experimental result, for example by reducing the climate sensitivity assumed in their GCMs? Do they carry out different experiments (i.e., collect new and different datasets) which might give more or better information? Do they go back to basics in preparing a new model altogether, or considering statistical models more carefully? Do they look at possible solar influences instead of carbon dioxide? Do they allow the likelihood that papers by persons like Svensmark, Spencer, Lindzen, Soon, Shaviv, Scafetta and McLean (to name just a few of the well-credentialed scientists who are currently searching for alternatives to the moribund IPCC global warming hypothesis) might be providing new insights into the causes of contemporary climate change?
Of course not. That would be silly. For there is a scientific consensus about the matter, and that should be that.
Well, the IPCC deserves the above criticism to a large extent. Yes, much research is ongoing to understand discrepancies in climate models and to improve them. However, in the AR4, their experimental designs and conclusions are dismissive of explanations that involve natural variability.
JC's conclusion. I found it pretty surprising to find an article on such a sophisticated topic in the Quadrant, but the more I read the Quadrant, the more it seems that they have some pretty good articles. This article raises an important issue related to reasoning about complex systems. From the previous thread on frames and narratives:
The second monster in my narrative is the complexity monster. Whereas the uncertainty monster causes the greatest confusion and discomfort at the science-policy interface, the complexity monster bedevils the scientists themselves. Attuned to reductionist approaches in science and statistical reasoning, scientists with a heritage in physics, chemistry and biology often resist the idea that such approaches can be inadequate for understanding a complex system. Complexity and a systems approach is becoming a necessary way of understanding natural systems. A complex system exhibits behavior not obvious from the properties of its individual components, whereby larger scales of organization influence smaller ones and structure at all scales is influenced by feedback loops among the structures. Complex systems are studied using information theory and computer simulation models. The epistemology of computer simulations of complex systems is a new and active area of research among scientists, philosophers, and the artificial intelligence community. How to reason about the complex climate system and its computer simulations is not simple or obvious.
So while this article is very interesting and raises important issues, in the end it oversimplifies the issue.
Judith,
It would be extremely hard to generate a model based on a few hundred years of data out of 4.5 billion years of planet history.
Most of the past is theory, as there is very little physical evidence that current science can find, and some of the science has been tainted by bad conclusions.
The path I am following makes a great deal of sense of the evidence that is left behind. It is totally measurable, with the rotational loss of water at a rate of 0.00025 mm per year on average, or 2.5 mm per 10,000 years.
It will make many readers cringe, Joe, but . . .
. . . modern climatology, funded by Western governments,
is no more scientific than conclusions of the Flat Earth Society.
World leaders, Al Gore, and the UN’s IPCC tricked an army of “Nobel Prize winning climatologists” into thinking Earth’s climate and long-range weather are independent of the stormy Sun that heats the Earth, completely engulfs this planet Earth, and sustains our very lives [1-5].
Political leaders want us to think that they have the power to control Earth’s climate, but they and their followers appear to understand the principles of science no better than members of the Flat Earth Society.
To understand how the gigantic, powerful Sun controls the tiny ball of dirt on which we live, see 1-5.
1. “Super-fluidity in the solar interior: Implications for solar eruptions and climate”, Journal of Fusion Energy 21, 193-198 (2002).
http://arxiv.org/pdf/astro-ph/0501441
2. “The Sun Kings: The unexpected tragedy of Richard Carrington and the tale of how modern astronomy began” by Stuart Clark (Princeton University Press, 2007).
The solar eruption that completely surrounded the Earth in 1859 is described in the front flap.
http://www.amazon.com/Sun-Kings-Unexpected-Carrington-Astronomy/dp/0691126607
3. “Heaven and Earth: Global Warming, the Missing Science” by Ian Plimer (Conner Court Publishing Pty Ltd., 2009)
http://www.amazon.com/Heaven-Earth-Ian-Plimer/dp/0704371669
4. “Earth’s Heat Source – The Sun”, Energy and Environment 20, 131-144 (2009): http://arxiv.org/pdf/0905.0704
5. “WeatherAction” and the long range weather and climate predictions of astrophysicist Piers Corbyn.
http://www.weatheraction.com/
With kind regards,
Oliver K. Manuel
Former NASA Principal Investigator for Apollo
” I found it pretty surprising to find such a sophisticated analysis in the Quadrant”
By what measure is it a "sophisticated analysis"? Its main conclusion is wrong, and the support it invokes is likely wrong and wouldn't support the conclusion even if it were right. It then further concludes nefarious motives on the part of scientists who don't support the wrong conclusion (which includes yourself, apparently). It spends a lot of time on an irrelevant discussion of quantum physics.
To put it another way, John Nicol can now claim his article was described by Judith Curry, climatologist and chair of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology, as "a sophisticated analysis" which "raises an important issue related to reasoning about complex systems."
An article which concludes the greenhouse effect doesn’t exist and that scientists covered it up.
ok, i am going to change the wording to “an article on such a sophisticated topic”
Professor Curry,
The analysis in the Quadrant seems far more sophisticated than reports on global warming in Nature, Science, PNAS, etc.
I would encourage the editors of these once-respected journals to study and comment on the first sentence: “Most of these papers (on global warming) are, of course, based upon the output from speculative and largely experimental, atmospheric models representing exercises in virtual reality, rather than observed, real-world, measurements and phenomena.”
With kind regards,
Oliver K. Manuel
Former NASA Principal Investigator for Apollo
Despite all the flaws uncovered, climate science itself is still amenable to the standard scientific method.
See today’s news story by Dr. David Whitehouse:
“Is It The Sun Wot Done It?”
http://thegwpf.org/the-observatory/2864-is-it-the-sun-wot-done-it.html
They make an EMPIRICAL claim that MOST (meaning more than 1/2) of papers are based on models. They offer no evidence for their claim so one cannot accept it on faith as you have. But you like the sentiment expressed, so you agree.
Some scientific attitude you have.
Based on some of the other mistakes they made in the article I would doubt their claim.
Judith, note that comments on the previous topic include a link to a Quadrant article by Bob Carter, David Evans, Stewart Franks, Bill Kininmonth & Des Moore, who collectively have reasonable credentials to assess climate science and policy. It’s by no means unknown for Quadrant (not “the Quadrant”) to carry good articles.
A bit of context from an inhabitant of the land of Oz might help. Quadrant is one of a plethora of "small" magazines originally set up under the auspices of the Congress for Cultural Freedom, sharing a sociopolitical niche with mags like Norman Podhoretz's "Commentary" in the US, which now cater to a neoconservative perspective (this is an observation, not a criticism). While sometimes able to come up with some thought-provoking journalism, it suffers from a small pool of writers who tend to ruminate on contrarian themes (this is a criticism as well as an observation).
Its current editor, Keith Windschuttle, is a onetime Maoist turned neoconservative whose most recent contributions to the Australian intellectual scene have been to question the "consensus" around the contribution of European settlement to the catastrophic decline in our indigenous population, which is often likened to genocide. His methodology as a historian is rather questionable, though I actually believe the Australian track record in our dealings with our indigenous folk is far better than, say, that of the US in its dealings with Native Americans (where a policy tantamount to active genocide seems to have been pursued).
With regard to AGW, Quadrant tends to lionise Monckton and Plimer. While I haven't delved into Monckton's presentations, Plimer's opus is greatly flawed by dubious referencing (see Monbiot's debate with Plimer on the Australian ABC, in which Plimer dodges Monbiot's admittedly aggressive questioning, turning the "debate" into a sterile exchange in which two angry men talk past one another).
Windschuttle’s judgment when it comes to editorial oversight of science is as dubious as his approach to history. Quadrant’s occasional forays into my own field – psychiatry – have left me wincing.
I’m a politically conservative soul inclined towards the sceptical/lukewarming end of the spectrum of the AGW debate who dislikes alarmist oversimplification. However, I’m a stickler for accuracy and feel considerable irritation when I see complexities papered over from partisan motives on either side of the debate. Quadrant, I fear, tends to do this all too often even while arguing a perspective towards which I feel a natural sympathy.
+1
We see a lot of oversimplified narratives on both sides of the aisle in climate science. Historically, when this happens, it is a good indicator that reality is much closer to the mean.
Chris, your characterization of Keith Windschuttle's methods as questionable is ridiculous. Windschuttle is a classical historian, basing his assessments on what is documented and demonstrable, not on speculative agenda. Hence his demolition of Henry Reynolds and Lyndall Ryan. If you are a stickler for accuracy, as you claim, then you need to exercise some.
Keith Windschuttle, when read closely, is one of the most extraordinary cherrypickers I have come across. While his critique of Reynolds and Ryan is not without substance, he often cites the very historians he criticises elsewhere when it suits his purpose to bolster his argument. He dismisses oral history but chooses to give great weight to documentary evidence from the time, such as records from courts and enquiries, as if the latter told the whole story or at least the only story we can safely believe. Anyone with a modicum of experience with courts of law or public enquiries would instantly know how extensively "edited" such sources are and how much information they exclude.
His work, IMHO, lacks internal consistency, which I thought a great pity, as I began reading his work from a position fundamentally sympathetic to his thesis but found myself increasingly disconcerted by seeming non sequiturs (it's too long since I read his work to give specific examples, which would take us way off topic). I would add that I'm still not fond of the so-called "black armband" perspective on Australian history, which Windschuttle critiques. I just don't like the way he did it, displaying much the same flaws, mutatis mutandis, as the historians he sets out to demolish.
His approach seems similar to writing a history of the “Climate Wars” and Climategate restricting our evidence to the findings of the Muir Russell and other official enquiries into the emails, the peer reviewed literature and the IPCC, while excluding the output of the blogosphere on either side.
Judith,
For some people it is a big and cherished victory to score even the smallest point, even if it is just a spelling mistake, a slightly wrong choice of wording, etc. We find them in all open forums on the web.
It is obvious to 99% of the readers and participants here that whether the article is “sophisticated” or not compared to other articles from the same source, or compared to your own expectations of the source, is irrelevant to the issues brought up.
But great response. Give the trolls their little crumb, or morsel of pleasure, and keep focusing on the issues.
That’s what we are here for anyway. Not for endless nitpicking.
BAZZZZINGA! :-)
“An article which concludes the greenhouse effect doesn’t exist and that scientists covered it up.”
Not the same take home message I got. Sounds to me like you already decided what the article was going to say before you finished reading it. This seems to be the prevalent POV for those who want to close their ears to other possible descriptions of how the climate might work…. and then attribute nonsensical statements to the authors of anything that questions the climate establishment.
Your post further drives home the type of mentality that John Nicol is pointing to.
“Not the same take home message I got.”
Of course and naturally anyone “taking home” a different message than yours is wrong or better yet
” drives home the type of mentality that John Nicol is pointing to.”
” This seems to be the prevalent POV for those who want to close their ears to other possible descriptions of how the climate might work…. “
If he brings the evidence I’m happy to listen. As things currently stand the “No greenhouse effect” idea is in the kook bin.
“If he brings the evidence I’m happy to listen.”
I doubt it.
So you want evidence of natural forcings? What cave do you live in? I’ll send some over.
Cheers,
Big Dave
shaperoo,
Psssst….Did ya read it?
Didn’t think so.
Ah yes, the "contagious disease" of post-modernism, which any self-respecting scientist should despise. It's not at all clear to me that the definition you provided of "post-modern science" is what the author had in mind, though. "Post-modernism" in the article seems to be a way of referring to the mixing of interdisciplinary methods (namely the dilution of formerly pure sciences by the arts and social sciences) and a disregard for the scientific method. This is bad form in my view, but I'm fairly used to seeing post-modernism used as a punching bag.
Formally called “fiction”.
or maybe Science Fiction?
Pointman
"If science is carried out with an amoral attitude, the world will ultimately respond to science in a destructive way. Postmodern science must therefore overcome the separation between truth and virtue, value and fact, ethics and practical necessity." This is nonsense. The detached observation of reality required to understand natural processes is not dependent on the moral stance you take. However, it can lead you to truth and understanding, which is the basis of any sustainable moral code. The adoption of a particular morality as a prism through which scientific study is undertaken would appear to be a barrier to a disinterested search for truth.
I have no idea what this means:
Maybe I just don’t speak postmodern.
ChE –
It means that if the scientists screw up too badly they could find a crowd with torches and pitchforks on their front lawn.
Although IMO that crowd would do better to find the politicians' houses first.
While I am not familiar with this particular author, the argument appears to be that science must be conducted on a moral foundation or the consequences for the world will not be good. Something like – science without morality is inherently harmful. I could see such a view being called postmodern, but that doesn’t mean all “postmodernists” would agree with it.
I'm not aware of any historical link between "morality" and science in the past. Immoral science (e.g., Mengele) is a problem, but amoral science is the way it always was.
This has a creepy medieval church ring to it. Your science will fit into our framework.
Climategate is an example of removing moral values of truth and integrity from science. The consequence is we are left with “anything goes” as long as it favors getting the next grant.
No. They stepped over the line between amorality and immorality. They actively circumvented normal ethics.
Yes, but in their own minds, they crossed the line between amorality and morality, which I assume is what is meant by "postmodern science" in the article: once you start thinking that science must work for the good of society, you might end up on a slippery slope, because everyone does not have the same concept of "good".
Obviously some scientists thought that catastrophic climate change was imminent, so science should become a tool to stop the disaster.
In this way, even bad science can be justified as long as it is for the “common good”.
Bingo. We’re all aware of and sensitive to the dangers of immoral science. What’s less obvious is the perniciousness of moralistic science. That’s what gets them into ethical trouble. If there’s a “greater good” that can be rationalized, hang on to your wallet, because their ethics will go out the window.
Let me ask you a question: How is the idea you're articulating different from a moral belief system in which the dispassionate search for the truth is the central moral value? It may be that I'm simply misunderstanding your point. I suppose another way to get at my question is to ask what the distinction between "amoral" and "immoral" might be.
Then let me ask another question. Which was Mengele, and how do you know?
The problem is of course noble cause corruption.
This has permeated every other social movement that has embraced science as its rationale.
AGW is chock full of this problem.
The Road to Hell is paved with Good Intentions. Good Intentions have their roots in morality. Authority is Morality in action.
Folks, the AGW debate will NEVER have a conclusion, because the fanatics will never see the attempted falsification of AGW as an amoral natural function of science; they will continue to see it as an Immoral act perpetrated by Immoral people. The AGW moderates will simply go along with their 'leaders'. Observations back this up immensely.
AGW is a Religion, the only sane course of action is to prevent the fanatics from tithing us and negatively influencing our lives and futures. This is tricky because the fanatics have convinced the AGW moderates that everyone is going to Hell if the fanatics don’t get their way. They are going to tithe us and make our lives more difficult whether we like it or not.
Resistance is the key. Stop going along with them. Stop your State Govts from funding this religion. Including, no, make that especially, learning institutions. They have enough followers to make their priests wealthy without forced government altruism and indoctrination.
Scuse me for interrupting and all, but isn't the whole reason for doing science – rather than praying for enlightenment from the gods or studying the entrails of dead animals or any of the other daft things people get up to – that it is a method that doesn't involve 'morality'?
Like Gil Grissom says, 'it's all about the evidence'. Not about whether you like or dislike the conclusions. Or whether you work your burette with the love of Yahweh or the worship of Gaia or any other idea in your heart.
When did they stop teaching this in junior science classes?
"When did they stop teaching this in junior science classes?"
After they stopped teaching this at colleges and universities, methinks.
But then again, so many of the cAGW proponents don’t seem to have heard of Lysenkoism either – or if they have, they see nothing wrong with that and even seem to try and emulate it.
Wow, that is a very weird idea you have there.
No, I don't think "the whole reason," or even a very small part of the reason, for doing science is to avoid things involving morality. It seems fairly clear to me that the main reason for doing science (aside from the fact that it's fun) is because it predicts the future better than the dead-animal type activities you mention.
Morality remains every bit as important as it ever was. It can’t be replaced by science, because it serves a very different purpose. Either one without the other is a formula for human misery.
Perhaps I misphrased what I meant.
Attempts to understand the world prior to science were based upon ‘morality’ in the sense of ‘it says so in the Bible’, ‘the Gods predict it’, ‘Aristotle said so’, ‘Allah has decreed it so’ etc etc.
Science frees us from such false considerations. It is thus morally neutral….it does not depend on what you or I or Fred Bloggs or Jenny Sixpack believe because of their faith and what they’ve been told to believe.
We agree, I think, that science should score a null on morality. Those who claim to be doing something with a moral dimension aren't doing science. They are doing morality. Different stuff.
Ah, well–that does make a lot more sense. But it’s really, really wrong.
Firstly, it’s not really true that “before science” people understood the (physical) world in terms of morality, or, at least, not universally so. The Persian king, Xerxes, did famously have the seas flogged when they refused to obey his command, delaying his invasion of Greece. But the Greeks certainly didn’t think of the physical world that way. Not that they thought about it with the modern scientific method, either, of course. But they understood the difference between objective and subjective reality. Indeed, I believe the Greeks are credited with the first formal articulation of that distinction, with the Ancient Greek aphorism: “Fire burns here and in Persia, but the laws differ.” So I think you’re mistakenly conflating “moral reasoning” with “everything that’s not reductionist analysis,” or “not rational,” or something to that effect.
I'm not sure, but it seems to me that you're also conflating "morality" with "belief." Of course, the classic "fact-value distinction" teaches that there is no objective reality to be found within moral questions, so it seems like you're speaking from within that paradigm. This particular bit of belief, like the others I mentioned, can be traced to a particular place in our history. It happened when the initial euphoria of the enlightenment philosophers died off. They believed that reductionist analysis was going to finally answer those thorny moral questions with the certainty of a mathematical proof, but it eventually became apparent that, while reductionist analysis would tell you how to build an internal combustion engine that you could predict with great certainty would work, when applied to moral questions you got much less satisfactory results. Of course, the only logically justifiable conclusion from that data is that reductionist analysis doesn't answer moral questions, but the one our society drew was that moral questions can't be answered (and so it's pretty much dumb to take them seriously).
So I believe we do very much agree that moral reasoning and science are two very different things, but I fear we have different opinions about the value of moral reasoning.
Up until a moment ago I wasn’t aware that I had an opinion at all about the value of moral reasoning :-). Nor that I was ‘speaking from within a paradigm’. Us IT/Chemistry guys leave all the high-faluting stuff to the philosophers. We just get on with making things work and stuff. Too much introspection about values and things gets in the way of getting the job done.
But thanks for devoting so much of your time to my pretty throwaway remarks. I hope it was worth it.
Heh…You’re being ironic, but believe it or not it is quite possible to have an opinion about something and not know it. In fact, it’s pretty ordinary to do so.
In fact, I think your comment here pretty much illustrates my point about how our culture has relegated moral reasoning to the back of the intellectual bus. Not to take away from your scientific accomplishments. But like I said, either one without the other is a formula for human misery.
You might consider what, exactly, are the things you’re “trying to get done.”
Thanks for your advice to
‘You might consider what, exactly, are the things you’re “trying to get done.”’.
Since I’ve temporarily forgotten how to spell ‘patronising bastard’, I’ll have to answer another way:
Well y’know like ‘stuff’ and all.
Maybe like building those computers and all that the clever people use to tell us what to be scared about and how evil we all are because we want to keep our houses warm in winter.
Or providing the Internet so that academics can have philosophical discussions of no practical value whatsoever about morals from their nice centrally heated/air-conned offices while the rest of us try to keep public transport going in the cities (while still being the aforementioned evil amoral bastards making mother Gaia weep with sadness about our lack of care for her).
Just earning a living and making things happen sort of stuff. Nothing like as important as auto navel examination.
What was your conclusion when you indulged in such introspection? Values shape up OK? Top notch on the old morality front? Ticketyboo on the piles? Or does sitting on so much moral and intellectual superiority cause you pain?
Latimer –
So far, so good. Now how’d you like to take on “amoral science”? :-)
As a freelance, I’ll take on any commission for a suitable consideration….
But as a proof of concept, I think there needs to be a distinction between the science and the morality (or amorality or immorality). The science is morally neutral. How that knowledge is applied may have moral attributes. But the two are distinct and different. Climatologists seem to not only confuse them, but get the order the wrong way round, viz:
Moral issue: Save the world from my perception of evil (i.e. humans I do not agree with). Strategy: scare them witless so I can prevent them doing whatever I don't like. Tactic: Produce climate scare stories. Work product: 'science' that 'proves' the climate scare.
Ref: Sir John Houghton – 'we must announce disasters or nobody will listen'
Interesting snippet: Houghton and Gavin Schmidt were both at Jesus College, Oxford, at the same time… It isn't a very big place AFAIK – 200 undergraduates?
"If science is carried out with an amoral attitude, the world will ultimately respond to science in a destructive way."
How does this follow? It is nonsense. The author appears to be trying to slip one by, saying amoral = immoral. World of difference.
The IAC’s review of the IPCC clearly identifies “biased treatment of genuinely contentious issues” as one among several deficiencies in the IPCC’s modus operandi. It is the alternative hypotheses (rather than GHGs) which have been ‘swept under the carpet’ in the interests of supporting the agendas of sponsor governments. The vast majority of the 194 nations participating in IPCC expect to receive vast sums of money as the main outcome of this process. In fact huge sums of money have already been pledged and in part committed to the UN by several nations.
The IPCC’s focus on CO2 seems to rely largely on assigning a large value to climate sensitivity together with large positive feedbacks (directly/indirectly to CO2) in their computer models.
For example Shaviv has demonstrated a model which involves solar magnetic variability modulating cosmic ray flux, low altitude troposphere ionisation, cloud nucleation, cloud cover and albedo. When tested against historical data, the residuals are HALF those of the IPCC models. And he does this without needing to assign large values for climate sensitivity and feedbacks. Increased CRF during passage of the solar system through the spiral arms of the Milky Way (with star formation and supernovae) would account for glaciations. Did IPCC give any serious consideration to this in AR4?
This is the problem. EVERY area needs serious attention until proven ABSOLUTELY incorrect.
The change in tone is self evident to readers of the full IPCC reports, from the parts written by scientists (which sound sensible) to the parts designed for the politicians (overloaded with spin).
The sad part is that even the scientists within the IPCC who acknowledge scientific uncertainty http://environmentalresearchweb.org/cws/article/opinion/35820 have not been very vocal about their view, being somehow beholden to a higher authority – probably a non-disclosure agreement signed by IPCC participants as is the norm with government bodies.
Beyond that, somehow the public face of climate scientists outside IPCC aligns more with the IPCC view than the uncertain view.
I see the article as having more credibility than the Dr Curry commentary on it. (Still waiting to see a clear, testable statement of the core physical phenomenon thought to underlie atmospheric warming.)
An interesting question raised in the article is of course “What scientific methodology is in operation here?” concerning the climate models.
Can the results validate the method?
If you do 100 runs of a climate model and check for correlation after 50 years, and one of the runs seems to fit pretty well – does it prove that both method and the assumptions fed into that particular run were correct?
Of course not. This is science done exactly backwards – instead of testing a hypothesis through exposure to real life (lab experiments, observations), climate modelers are testing real life through exposure to multiple hypotheses: so if the one run (among hundreds) with x aerosol forcing and y solar forcing, etc., fits observations best, then two things are proven:
1. The model run (hypothesis) in question, its methodology and the particular inputs of forcings etc were proven right.
2. The climate system is proven to have characteristics resembling the forcing scenario of the model run (hypothesis).
So they seem to validate each other. But in fact nothing is validated. The dynamics of the climate system during the period of the experiment might as well have been caused by an ensemble of other factors, unknown at the time, or even the same factors in a different mix.
And this will be the case for the foreseeable future.
So where is the value in this exercise?
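The worry can be made concrete with a toy numerical experiment. The sketch below is a deliberate caricature of my own construction (not any actual GCM protocol): synthetic "observations" are generated from a known forcing plus noise, an ensemble of runs spans a range of assumed forcings, and the best-fitting run is selected. The selected forcing depends on the noise realization, so the good fit, by itself, validates neither the run nor the forcing:

```python
import numpy as np

years = np.arange(50)

def toy_run(forcing, seed):
    """Caricature 'model run': a trend set by the forcing parameter plus internal noise."""
    noise = np.random.default_rng(seed).normal(0.0, 0.15, len(years))
    return 0.01 * forcing * years + noise

# Toy 'observations': true forcing = 2.0 plus unpredictable internal variability.
obs = 0.01 * 2.0 * years + np.random.default_rng(0).normal(0.0, 0.15, len(years))

# An ensemble of 100 runs whose only difference is the assumed forcing.
forcings = np.linspace(0.5, 3.5, 100)
errors = [np.mean((toy_run(f, seed) - obs) ** 2) for seed, f in enumerate(forcings)]
best = forcings[int(np.argmin(errors))]

print(f"best-fitting forcing: {best:.2f} (true value: 2.00)")
# Change the observation seed and the 'best' forcing moves: the selection
# rewards agreement with one noise realization, not correctness of the forcing.
```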
This planet has produced a few anomalies that take centuries to show a difference in order to get a correct reading.
Fast-tracking science ends up generating bad science by not covering EVERY avenue that interacts or is viable.
Well, fast-tracking certainly seems to be going on. After all, that's what the IPCC is all about: constructing a "certainty through consensus" in a couple of decades, where a normal scientific process would take a century to produce anything at all "certain".
And we keep hearing how climate models have predictive skill. Just a few days ago Michael Mann held a lecture where he stated that Hansen’s 1988 model basically got it right, seemingly proving both that climate models are reliable, and that the forcing scenario of CO2 employed by the model was pretty close to reality.
And this was climate modeling in its very infancy. Quite a visionary, Mr Hansen, cramming the global climate system into his IBM in 1988, pressing the button and hey presto!
Extremely well put. The value is that millions of people are coerced into believing that which they will never understand and have confidence in this belief because they feel it is moral. This has never been a scientific debate, it has always been one of morality. That is why it will never end until it is only the Believers who are funding it.
This idea of Postmodern Science is a crock and extremely dangerous. Science has not been nor will it ever be moral or immoral. Nature knows nothing of right or wrong behavior, it only dictates whether those behaviors are possible.
Once you introduce ‘morality’ you automatically introduce immorality which must be addressed by those who claim to be ‘moral’. Morality requires an ‘authority’ to coerce moral behavior. We’ve seen the outcome of numerous dangers in ‘Scientific Authority’. Morality has everything to do with how Humans interact with each other, but NOTHING to do with how the Universe behaves.
One reason why I think the AGW Believers like this Postmodern crap is that ‘moral’ has another definition tied to certainty. ‘Moral’ also means PROBABLE. A moral certainty is an intuitive probability. One that requires no conscious rational process or thought. Sounds about right for AGW.
These charlatans have already bet their careers, they must continue their bluff and ‘Postmodern’ charades to continue to ‘deserve’ their incomes. They sure as heck are not earning them.
Science has not been nor will it ever be moral or immoral.
Your post is thought provoking. The (conventional) scientists I know generally think that postmodernism is one of the tools being used against them (that any perspective or view by non-experts has equal value to the peer-reviewed research they produce) and are also generally hostile to scientists who act as advocates or campaigners while still conducting research full time, because they think it suggests a conflict of interests. From what I can gather, most climate scientists think they are doing normal science in the sense of puzzle solving and incremental discovery.
I think that climate change activists might like the element of ethics in relation to the need to choose between an emphasis on some research rather than spend money on other things (e.g., research in breast cancer). The right, particularly the religious right, might think that some elements of science conflict with other knowledge, and producing abortion pills, or animals with human DNA, might be considered immoral science. I think many people would consider experiments on human subjects or attempts to clone humans as immoral science, but this is probably not what you mean here.
http://mitigatingapathy.blogspot.com/
The (conventional) scientists I know generally think that postmodernism is one of the tools being used against them
Not surprising since morality has been a tool of force since recorded history.
(that any perspective or view by non-experts has equal value to the peer-reviewed research they produce)
The value of peer review is bastardized when the function of the review is anything other than rigorously testing methods and conclusions. Expert is a relative term. If a non-expert properly fails to falsify someone’s work, or does so with ease, the value comes with that procedure being able to be repeated by experts and non-experts alike. Is it not? Is the ability to falsify not the mission of a review?
and are also generally hostile to scientists who act as advocates or campaigners while still conducting research full time, because they think it suggests a conflict of interests.
Makes sense if there is any correlation between the advocacy and a desired result.
From what I can gather, most climate scientists think they are doing normal science in the sense of puzzle solving and incremental discovery.
From what I gather, most climate scientists, and their advocates (politicians and activists), are engaged in postmodernism and disdain 'conventional' science since it restricts their behavior and funding. There is a 'moral certainty', and the authority that is inherent with morality, that drives the entire Climate Science Industry. It is an industry that must keep up the AGW charades (desired results) in order to keep relevancy. This is no longer, if it ever was, a search for truth; it is a crusade.
I think that climate change activists might like the element of ethics in relation to the need to choose between an emphasis on some research rather than spend money on other things (e.g., research in breast cancer)
I have no issue with activists being ethical, with their own money. The US needs a moratorium on Federal grants for science during which period random and exhaustive audits are performed to find the value to all 300 million citizens. We’re all in this together after all, we should all benefit from our collective, though highly distributed, efforts.
The right, particularly the religious right, might think that some elements of science conflict with other knowledge, and producing abortion pills, or animals with human DNA, might be considered immoral science
Who gives a damn what the right, whether religious or not, thinks about the amoral results of a repeatable test? 2+2=4; if anyone disagrees, then they are free to, and if they do so while in a contract, then a judge can be used to determine if the complaint requires easement or some other remedy.
I think many people would consider experiments on human subjects or attempts to clone humans as immoral science, but this is probably not what you mean here.
Involuntary experiments are an infraction against one's personal property, namely oneself. Voluntary ones are common and often necessary to that individual. For me cloning comes down to funding: if you don't want to participate in the funding of human cloning, then you should be free from it.
No.
You got it wrong. Postmodernism was originally a movement in the arts (painting, literature), where "modernism" had been a break away from the classic, confined classifications and genres of the trade.
From there it made its way into the social sciences, particularly sociology, anthropology, where it made its impact through such concepts as cultural relativism, where each culture should not be judged through the glasses of an outside observer, but rather in the context of the culture in question. And where the fight against racism and discrimination between people became an ingrained principle of the science itself.
Women's studies / cultural feminism is another science area deeply affected by postmodernism, where it is frequently claimed that gender is only a construct of upbringing and the pressures of society, and that biology has nothing to do with the personae we become.
No one with real life experience believes it, but go to a University near you, and you will find a well-funded professor claiming that you like cars instead of makeup because society expects it of you.
This is an extremely rough guide to post modern science, I know, but I think it is enough to make my point:
The real (formerly hard) sciences have to some degree followed in the footsteps of the social sciences and the arts towards post modern science in looking for a “mission”, and upon finding this “mission” (saving the world from catastrophic climate change) they abandoned all concept of scientific truth-seeking, and went for all-out advocacy.
(The fact that this new “truth” also gave them unlimited funds, tripled incomes, media attention, fame, TV-time, radio time, and made headlines in local newspapers all over the world, probably didn’t reduce their enthusiasm for this new-found way of doing science.)
What could be more reductionist and simplistic than the formula equating radiation to temperature:
ΔTs = λ·RF, where ΔTs is the surface temperature response, RF the radiative forcing, and λ the climate sensitivity parameter.
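For scale, here is a minimal worked example with commonly cited numbers (the logarithmic forcing expression is the standard Myhre et al. approximation; λ ≈ 0.8 K/(W/m²) is an assumed value corresponding to the oft-quoted ~3°C sensitivity, not a number from the formula above):

RF(2×CO2) ≈ 5.35 × ln(2) ≈ 3.7 W/m², so ΔTs = λ·RF ≈ 0.8 × 3.7 ≈ 3.0 K.

One assumed parameter and one multiplication reproduce the headline number; whether everything else can really be folded into λ is exactly the question.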
But we have complexity to burn in the various GCMs, which are all designed to model what we don’t understand fully, which run away unless their inputs are carefully massaged, which don’t model cloud effects properly.
I like Tomas Milanovic’s understanding of complexity http://judithcurry.com/2010/11/12/the-denizens-of-climate-etc/#comment-11807
Is the system predictable or not? If not, can I go outside and play now? My homework’s done.
While the author of the article highlighted likely meant for the reader to presume the answer to each of the questions in the last italicized paragraph from said article to be ‘no’, the actual answer to each and every question is ‘yes’.
What's more astounding is that most of the questions posed are totally irrelevant. Lean, as she is brought up in the text, has been published and has sat on committee after committee looking into the solar influence, both by total irradiance and spectral irradiance, over and over again. It's not like she's on the outside looking in. She has a paper cited almost 700 times.
What’s the citation record for a climate change paper?
She is also a proponent that human forced warming is a major contributor to the temperature record, in accord with the IPCC. So I don’t know why she’s brought up at all.
I don’t get it. I thought the article was about the ‘scientific method’. Yet, no such method was used in producing the article. Irony at its best I guess.
Here is a prime example of the sort of outrageous behaviour (by academics) which brings climate science into disrepute! http://joannenova.com.au/
Just follow the money trail. Also, note the posting below the one about Art Robinson and his 3 now disenfranchised, ex-PhD student offspring.
(Apologies if this seems off topic but the money trail is IMO inextricably enmeshed with the bias in climate science.)
Judith wrote: “The final concluding sentence of the above paragraph is an overconfident dismissal of the human signal on climate. ”
Not what he said. He said any human signal is small and lost in the noise of the natural variability. The point is that, using the scientific method, you can’t find it. Could it be there? Maybe. He doesn’t say there isn’t any human influence.
I concur stan.
The ratio of natural carbon to human produced carbon in the carbon cycle is evidence of this.
The following requires repeating!
The irrefutable hallmark of real science is that it involves testable and potentially falsifiable hypotheses. It is now relevant to ask: "What is testable and potentially falsifiable about the mainstream hypothesis of AGW being the dominant cause of global warming?"
The answer seems to be something like “We’ll adjust our model to conform to the known data” thereby ‘moving the goal posts’.
Has the scientific method metamorphosed?
Well, it should be obvious that what is testable and falsifiable about AGW being the dominant cause of global warming is whether, if we keep putting more GHGs into the atmosphere, global temperature increases steadily beyond anything known in the last few hundred thousand years.
The problem is you don’t have a control planet to live on if the null hypothesis of “no AGW” is rejected.
In the real world (that is, where science is done), it’s been obvious for decades that there are bezillions of reasonable scientific hypotheses that can’t be tested directly with known technology. Real science therefore looks for indirect ways of testing these hypotheses (think of giving huge doses of chemicals to rats, instead of minute doses of chemicals to millions of humans).
It is true, as implied in this thread, that the epistemology of computer simulations is a novel and problematic area. There are plenty of actual experts working on this topic. But it is false that the only evidence for AGW (or a climate sensitivity of around 3) comes from GCMs.
‘But it is false that the only evidence for AGW (or a climate sensitivity of around 3) comes from GCMs’
Care to list some of the rest of the evidence you assert? Measurements of what actually happens and stuff? Experiments?
Well, response to volcanic eruptions is one; pattern matching with non-GCM climate models is another. You might consider reading the relevant IPCC chapters and some of the literature therein. I can send you some papers too if you don’t have access to them.
Thanks for your offer Paul.
I’d be delighted to learn from any method that doesn’t rely on first creating a model which uses a sensitivity (fudge) factor. And then using real world measurements to match the model output by adjusting the fudge (sensitivity) factor.
Because this method of reasoning puts the cart before the horse by essentially assuming that the model is correct in all other respects and so 'the sensitivity must be… x'. I tried to do this in my MSc thesis until my very wise research supervisor forcefully pointed out the error of my ways.
Please also send any links to useful papers. Outside academe and university/industry libraries, it is not possible to shell out twenty quid each time to read an irrelevant paper. There aren't that many twenty quids to go around.
Well, I’m not sure we’ll agree about what counts as useful, but I would start with
Knutti, R., & Hegerl, G. C. (2008). The equilibrium sensitivity of the Earth’s temperature to radiation changes. Nature Geoscience, 1(11), 735-743. doi: 10.1038/ngeo337.
You might also look at Annan, J. D., & Hargreaves, J. C. (2009). On the generation and interpretation of probabilistic estimates of climate sensitivity. Climatic Change, 104(3-4), 423-436. doi: 10.1007/s10584-009-9715-y.
This latter paper argues that very high estimates of climate sensitivity (e.g., over 4ºC) are very unlikely but supports the ~3ºC best estimate, based on a Bayesian methodology.
I can send you a copy of either or both if you send me your email (I’m paul.baer@gatech.edu).
Paul,
Shaviv has demonstrated that the IPCC's favoured value for climate sensitivity is a gross overestimate. The IPCC's model predicted a temperature drop of 0.3 degrees following a volcanic eruption, whereas the actual, observed drop in temperature was 0.1 degree.
Likewise recent work has revealed that the water vapour positive feedback is actually less than the value ASSUMED in the IPCC models which also renders false the assumption of constant relative humidity with rise in temperature.
OK, I’ll have to learn about Shaviv’s paper. Do you have the full citation?
And I’ll have to bone up on water vapor too, I guess!
Paul,
This may be of interest
I strongly recommend these presentations. Here are the links viz.
Shaviv
http://www.youtube.com/watch?v=L1n2oq-XIxI&feature=related
Courtillot
http://www.youtube.com/watch?v=IG_7zK8ODGA&feature=player_embedded
Shaviv's presentation illustrates the nexus between intrinsic solar magnetic variability, solar wind, cosmic ray flux, ionisation in the lower troposphere, cloud nucleation and changes in cloud characteristics (total water content, extent of cloud cover and albedo). While it is acknowledged that correlation may not necessarily reflect causation, two pieces of independent evidence are provided, viz:
(1) Long term variations – over the past 550 million years, climate proxies (O18:O16 ratios in brachiopods in ocean sediments) correlate with cosmic ray flux (CRF) as measured in iron meteorites, which are subject to higher CRF during the passage of the solar system through the spiral arms of the Milky Way [where we experience a higher cosmic ray flux due to Type II (core collapse) supernovae associated with star formation regions].
(2) Short term variations – Forbush decreases – several-day-duration decreases in CRF due to solar flares and coronal mass ejections correlate with changes in cloud nucleation and cloud parameters – total water content, extent of cloud cover, decreases in albedo and short term (days) increases in the rates of sea level rise due to thermal expansion.
Shaviv's model accomplishes all this without needing to invoke any net positive feedback (from CO2/H2O) or a high value for climate sensitivity (which is treated as a free parameter). The variations attributable to this chain of events are of the order of 1 +/- 0.35 W/m2 (cf. the IPCC GCMs use the variation in Total Solar Irradiance of 0.17 W/m2, which is insufficient to account for the changes in the energy balance of the oceans). That is, the cosmic ray flux, modulated by the solar wind, is the dominant mechanism of ionisation in the lower troposphere, thus influencing cloud nucleation and amplifying the climate variations by varying the cloud characteristics. Because of the small particle size, the surface area:mass ratio is increased by the cosmic rays, thereby increasing the albedo of the clouds formed.
These variables (solar magnetic variability, solar wind, cosmic ray flux, cloud variations) have not been included in the IPCC's GCMs and, compared with the latter, the residuals (the difference between Shaviv's model and empirical observations) are half, i.e. his model provides a better fit to the empirical (observational) data.
This is supported by the work done by Svensmark et al and provides a rational explanation/mechanism for the glaciations associated with passage of the solar system through the spiral arms of the Milky Way – a galactic context nicely complementing the Milankovitch effect. Incidentally, Shaviv also demonstrates that the IPCC GCMs overestimate (several fold) the drop in temperature following volcanic eruptions, i.e. they grossly overestimate the climate sensitivity.
Courtillot demonstrates a tight correlation between solar magnetic variability and regional surface temperatures for Europe and the USA (also illustrating the futility of the concept of 'global temperature').
Note: An earlier scientific paper on that very same subject by Nicola Scafetta accepted for publication in the Journal of Atmospheric & Solar-Terrestrial Physics on 12th April 2010 is also well worth a read – find it here at
http://arxiv.org/PS_cache/arxiv/pdf/1005/1005.4639v1.pdf
Gyptis – I’ve started looking into the work of Shaviv and Courtillot, and – as I’m sure you know – both are outside the mainstream of climate science. I’ll read their papers, but I’m curious what reason you personally have to believe that their work should be considered better than “mainstream” work.
see the knutti paper I linked to. models are but one of the ways sensitivity is estimated.
the fact that you ask for experimental data shows you have no understanding.
A controlled experiment would look like this.
Take the earth. Double the CO2 instantaneously. Hold all other forcings equal: no volcanoes, no change in the sun, no change in aerosols, no change in other GHGs. Wait a thousand years. Measure the response. Then repeat the experiment.
So, we are left with estimating the parameter from a variety of sources. Historical sources.
1. Paleo records
2. modern observations
And we can confirm those by building a model of the climate and doing “what if” analysis.
steven,
Is there any evidence that CO2 levels have ever doubled instantly?
Do you think using that as a baseline is valid, and if so how come?
The point isn’t whether or not CO2 has ever doubled.
The point is estimating the response IF it doubles.
It’s perfectly acceptable as a baseline. You are simply calibrating the system response.
Take your car on a dyno. Set the pedal to idle: x mL of gas per sec. Wait for the speed to stabilize. Record that.
Then double the flow. Wait for the speed to stabilize. Measure that. What’s the response to doubling? How much faster does the car go?
You are characterizing system response. Nothing more.
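A toy numerical version of that dyno calibration (purely illustrative; the gain and time constant are invented numbers):

    # First-order toy system: speed relaxes toward gain*input with time constant tau
    gain, tau, dt = 2.0, 5.0, 0.1

    def settle(u, y, steps=2000):
        # Integrate dy/dt = (gain*u - y)/tau until the speed stabilizes
        for _ in range(steps):
            y += dt * (gain * u - y) / tau
        return y

    y_idle = settle(u=1.0, y=0.0)        # record the idle steady state
    y_double = settle(u=2.0, y=y_idle)   # double the flow, measure again
    print(y_double - y_idle)             # recovers the gain: 2.0

Whether the input ever actually doubled in the system’s history is irrelevant to the calibration; the step change is just a probe of the response.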
steve,
But you cannot instantaneously get the rpms to double.
The engine can’t do that.
And on a dyno you are measuring one thing.
CO2 acts in a complex of dimensions that all vary with or without CO2 doing anything, as well as any influence the ghg effect of CO2 adds.
My point is that the explanation I just outlined happens to fit reality:
The impact of CO2 is indistinguishable from the typical variability of climate and climate manifestations.
IOW I see no relevance in the CO2-based predictions of doom compared to reality.
Perhaps part of it is due to making linear, deterministic and instantaneous assumptions about CO2’s impact?
hunter,
so, in summary, the answer was that real world doesn’t matter.
Paul
It was my training that the Null Hypothesis doesn’t depend on the field; it has no default value. The Null Hypothesis depends on the claim.
In John Nicol’s case, he makes quite a few claims by assertion, such as in the phrase “should global warming resume..”
Is Nicol’s claim that global warming is real and proven?
This is a logical necessity for the phrase to be meaningful at all.
Is John Nicol’s claim that global warming has stopped?
There are statistical tests for accepting or rejecting each of these claims, corresponding sets with rather opposite Nulls, no?
Do we choose as our null the first, implicitly assumed hypothesis – “There is no ‘euphemistic climate change’” – or its opposite, “Global warming is real”?
(*I find the expression, ‘euphemistic climate change’ unsatisfying. What unpleasant or embarrassing category is ‘climate change’ meant to be a euphemism for? Death? Lewdness? Drug use? Disease? A criminal past?
If John Nicol’s saying people who study climate are lewd sick felonious addicts not long for this world, he’s picked just the right word.
Perhaps Nicol may have meant ‘synonymous’ or ‘equivalent’ or ‘more broad’ if he didn’t mean to imply shameful associations?)
Failing to reject a null hypothesis does not prove it true. We’d generally want to choose our null hypothesis to reduce the costs of errors of specification and decision; however, as those costs (a) are largely a matter of policy and (b) have never been competently quantified in any way a skeptical analysis would deem accurate, which null to elect depends on facts not in evidence.
In such cases, it’s typical for scientists and engineers, policy analysts and well-meaning amateurs to resort to conservative principles, such as the precautionary principle: which errors are most likely to leave us in a good position to revisit the bad decisions we might make at some future time when we have better information?
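To see how the choice of null plays out numerically, here is a sketch with invented data (the point being that a short, noisy record can easily fail to reject both of the opposing nulls at once):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    years = np.arange(2000, 2021)
    # Invented anomalies: a 0.1 C/decade trend buried in 0.15 C noise
    temps = 0.01 * (years - 2000) + rng.normal(0, 0.15, years.size)

    res = stats.linregress(years, temps)
    t95 = stats.t.ppf(0.975, years.size - 2)
    lo, hi = res.slope - t95 * res.stderr, res.slope + t95 * res.stderr

    print("95% CI on trend (C/yr):", lo, hi)
    print("reject 'no warming' (0)?      ", not (lo <= 0.00 <= hi))
    print("reject '0.2 C/decade' (0.02)? ", not (lo <= 0.02 <= hi))

With two decades of noisy data, both nulls typically survive; which one we privilege is then a matter of decision costs, not statistics.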
“..authors’ prime expertise is often found to be not.. as one might have anticipated..”
I guess he really did mean euphemism, and to imply everyone who disagrees with his narrow view of climate is an embarrassing or unpleasant wretch, often with insufficient expertise.
“provides for abundant research funding, from which they feed, more easily than other areas of research of greater interest and practical use”
Ayup. John Nicol clearly views those whose research gives answers he does not approve of as criminal lower life forms.
Why would anyone repeat such drek?
I’ve been described as having a nasty manner for the things I say about people who I think do bad science, but I hope if I ever so tar a whole profession so damningly, that I’ll have armed myself with footnotes and citations, references and source material, or sound logic or good reason, or anything more than propaganda, innuendo, scaremongering and smarm.
Sorry to have bothered you with the point I was about to make, as it seems I’m basing it on a trashy bit of invective not really worthy of discussing.
So, are you following the hockey play-offs?
Paul Baer
You wrote:
Maybe you feel this is “obvious”, but it really hasn’t been demonstrated based on empirical data, Paul.
(And “a few hundred thousand years” is an awful long time.)
But I think we already have a fairly good test of whether the alarming AGW premise, as being promoted by IPCC is validated or falsified by the observed data.
The premise: AGW, caused principally by human CO2 emissions, has been the primary cause of 20th century warming and, hence, represents a serious potential threat for humanity and our environment.
The test:
The GMTA record used by IPCC (HadCRUT).
The ARGO record of upper ocean temperature.
The record of atmospheric CO2 as measured at Mauna Loa.
The premise: The IPCC forecast of warming caused by AGW of 0.2C per decade for the first decades of the 21st century.
(This happens to be equivalent to the calculated theoretical “equilibrium” GH warming one would expect from the measured CO2 increase at Mauna Loa from 2001 to 2010, using IPCC’s model-based 2xCO2 climate sensitivity of 3C.)
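That arithmetic checks out (the Mauna Loa annual means below are approximate, illustrative values):

    import math

    c2001, c2010 = 371.0, 390.0   # approx. Mauna Loa annual means, ppm
    dT = 3.0 * math.log(c2010 / c2001) / math.log(2.0)
    print(round(dT, 2))           # ~0.21 C, i.e. roughly 0.2 C over the decade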
The observations show that there was no actual warming, either in the atmosphere or the upper ocean. Latent heat (net melting ice, net evaporating water) is too small to make much difference and there is no real evidence that the deep ocean has warmed. IOW our planet lost energy over this period, while atmospheric CO2 increased to record levels. (Trenberth referred to this “unexplained lack of warming” as a “travesty” and suggested in an interview that it might be explained by energy radiated “out to space” with clouds acting as a “natural thermostat”.) This sounds perfectly reasonable to me.
I agree fully that CO2 is a GHG, that GHGs trap outgoing LW radiation, which should result in warming (all other things being equal), but the observed data seem to show that “all other things” were NOT equal and, therefore, don’t look too good for the “alarming AGW” premise as promoted by IPCC.
But maybe we should wait another 10 years and see if it looks any better then.
What do you think?
If we still have essentially no warming after 20 years would this falsify the IPCC premise of alarming AGW?
Or do you think it would take 30 years?
Max
Well, there are several claims there that I quite justifiably ought to have good answers to. I’m not optimistic that we can come to an agreement, but I’m willing to give it a shot.
I will say this up front: if there were no balance of evidence of AGW demonstrable between 2000 and 2020, I would think a good case had been made that the fears of “alarmists” (like myself) had been shown to be overstated. I’m not prepared to say now anything much more specific than that, because it would depend on the various kinds of evidence (which I expect to continue to be fragmentary and even contradictory) in fact accumulated.
However, we are not likely to agree easily on what counts as evidence. I will say categorically that “temperature in 2020 being less than temperature in 2000” would not itself count as disconfirming evidence. As many people with different perspectives have pointed out here and elsewhere, there is a great deal of natural variability in the system; I would consider a question like “what is the difference in the five year mean at 2000, 2010, and 2020?” to be a necessary adjunct.
However, this (flat temperatures) plainly could not “falsify” AGW in any complete sense. As someone has demonstrated here, if it’s true that there is a 60 year cycle imposed on a rising trend, with the down trend beginning in 2000, you might well expect a 30 year period of near level temperatures, followed by a dramatic increase in the next 30 years. (Clearly we’ll need a new name, as it will no longer resemble a hockey stick :^) Indeed, without any compelling explanation for the trend of the century from 1900-2000, a reasonable prediction is that this trend will continue, and that indeed temperatures will begin to shoot up dramatically in 2030.
It is the trend that worries me. It does in fact seem to me – and I’ve studied the problem pretty closely by now – that anthropogenic GHG emissions are the most likely explanation of that trend. I don’t need to argue it’s “very likely” to be very worried. (This, of course, is because I also consider the possible consequences of even another degree (C) of warming to be quite, well, alarming.)
To finish up for the moment: I do not know the details of the ARGO ocean measurements. It’s plainly incumbent on me to know more about it if others think it is a strong piece of evidence about AGW.
But Paul, you’re not a scientist, so why are you a self-confessed “alarmist”? Do you just accept the “scientific” consensus? And what is wrong with a degree of warming? Warming and CO2 are both good for plant and animal life.
Well, I am partly a scientist. And I’ve been reading papers about climate change for over 15 years, especially about carbon cycling and climate sensitivity. But, quite frankly, it’s substantially because I have been a participant in academia for my whole adult life, and I actually have substantial trust in the basic quality control within “establishment” science. Certainly more than I do in the quality control of the variety of “contrarians” who are producing non-mainstream research results.
It is also the case that I think that the impacts of a degree or two of global warming could be – not will be, but could be, with substantial probability – very harmful. I am concerned that the melting of land glaciers, the arctic ice cap and permafrost could already be locked in with current levels of GHGs, and that the consequences could be very bad. Unfortunately, again, these kinds of hypotheses are not subject to what so many people here refer to as “the scientific method” – there’s only one experiment you can do, and if the null hypothesis of “no AGW” is wrong, you don’t get a second chance.
To put it yet a third way, I’m much more averse to the risks of climate change than I am to the risks of strict carbon mitigation. This raises a different set of questions (and goes back to the economics posting that brought me back into this blog). About which, more another time.
But the fact that a lot of it does have to do with trust raises the questions that were beginning to be addressed in the polyclimate thread: who, if anyone, could be trusted by “both sides” (a gross oversimplification, admittedly) to be an honest juror?
@paul
You say
‘I have been a participant in academia for my whole adult life, and I actually have substantial trust in the basic quality control within “establishment” science’
Do you not understand that one of the primary reasons why there is such a preponderance of scepticism among educated scientists and engineers outside the academic world is that we look at the ‘quality control standards’ of academe and see them as ludicrously trivial and superficial compared with the standards pertaining in industry and elsewhere in the outside world? That we’ve spent our professional lives working within really tight, evidence-led projects – with serious actual consequences for thousands or millions of people if we get it wrong. Where data is kept and archived by law, and where failure to comply means jail time and career suicide.
And then we see a bunch of lab-based dilettantes with no QC systems worth the name manage to get a non-validated run of a model to work and then declare that they fully understand something as complex as the climate. By design (or possibly by accident) they throw away the supporting data and then have the effrontery to claim that we are far too ignorant to understand their magnificent craft… because they are Climate Scientists… and we are not.
And you academics can’t understand why we’re sceptical.
Do your work to the same standards as the outside world – without moaning and groaning that you’re all far too important to bother with such trivia. Once you’ve done it for ten years and got some consistent results that have been externally audited and reproduced – just like we have to – and once you’ve built up a portfolio of work so derived, then maybe we’ll start to think that you’ve learnt a bit about QC.
Up until then your processes are a joke.
‘
Latimer: suppose I accept your premise that QC in academia is inadequate. What if you accepted my premise that the prima facie evidence is that we have already put an unreasonably large amount of GHGs into the atmosphere? If we had to do a QC process in a hurry that would provide enough confidence to justify a mitigate/don’t-mitigate decision, what would you do? How many people would you assign? How much money would you spend? Keep in mind that the reason academic QC is limited is partly that resources are scarce.
@paul
First, I don’t accept your argument that academic QC is inadequate because of a lack of resources. That is the reflex squeal of grant-funded people everywhere.
I think it is far more likely that QC is inadequate because you institutionally don’t care about whether you are right or wrong. You care about getting a paper published, getting it mentioned by others and getting the next grant. And so you see adequate QC as an unnecessary burden and don’t even pay lip service to it. The paper’s the thing! Not the truth. Once the paper’s published you have the box ticked and can move on.
That is not a personal criticism, it is a direct consequence of the way the academic reward structures are designed. But it is a f….g awful way to design those structures to get to the bottom of a possibly difficult problem like climate (if problem it really is).
In other fields outside academe, reward structures are different, and scientists and engineers are judged on being consistently right (the bridge was built to the design and didn’t collapse, the medicine performed in real life as it did in the tests, the aeroplane flew and performed as designed, the implementation of the new IT system had the predicted effects on finance and customer satisfaction, etc.). You do not get the brownie points just for writing a headline-catching paper. They don’t come until your predictions have been validated and verified and shown to be true.
And just to be sure, you might get audited by seriously unpleasant people to make sure that you followed the rules and didn’t cut corners. That’s just part of life… to be tolerated if not embraced. Anyone outside academe seen trying to resist external audit is effectively raising a big white flag saying ‘look at me – I’ve got something to hide!!!’
Your other point is the classic call to arms: ‘I don’t care what you do, just do something’ or ‘Do it quick, but don’t bother doing it right’.
I’ve seen absolutely no evidence that ‘climate change’ is an urgent problem that needs fixing, despite thirty years of people writing a rag-tag of increasingly hysterical academic papers about it. Sea level rise hasn’t inundated even the most low-lying lands. Polar bears have not been wiped out. Al Gore’s beachside apartments are still on the dry side of the beach.
But if it were to be so, the first thing I’d do is review the organisational design for the effort needed to understand and solve the problem. I think it very unlikely that the correct way to do so would be to have a host of disparate teams beavering away, uncoordinated, at whatever bit of the cake they happened to find interesting. And certainly not to reward them just for writing papers and running away from the consequences. It’s also extremely unlikely that such a design would have an IPCC-like political coordination function where each group jockeys for influence and power, independent of the need to actually understand the problem.
But these things are so discredited in climatology that they will probably die a death anyway. I don’t imagine that anyone will bother to read IPCC5, let alone take any notice of its conclusions. It is already a debased currency.
Whatever better structure is designed, it’ll have reproducibility, responsibility and ‘ownership’ at its heart. If you become the worldwide expert on topic A, the price you pay for that status and influence is that you become its ‘champion’. You have to go out and persuade others that you are right – not just those who agree with you. You have to be able to justify every last item of data and every little bit of reasoning.
And I’d have a QA team composed of the nastiest, toughest, most devious and seriously unpleasant bastards I could find to keep the experts honest.
Many of today’s practitioners are likely hopeless cases and will fall by the wayside. The mental change from paper publishing in the closeted world of academe to the tough and nasty world of proving your case and standing by it will be beyond them. They are set in their ways.
Those who can’t pass an audit will find other pastures. Those who confuse science with advocacy or running their own personal propaganda websites will be required to shape up or ship out. But good people doing good solid work will thrive. We’d have proper training for climatologists, including a deep understanding of correct scientific processes.
Once bedded in (5 years?), we can start to do some proper work on climate – with reliable data and sound processes, and hence draw good conclusions. In another 5 years we should begin to put our feet firmly on the floor with some preliminary conclusions, and the next generation of scientists can take over.
It’d be a tough job to make those changes happen. But the current structures just aren’t fit for the supposed importance of the task at hand. Maybe good enough for understanding the lifecycle of the lesser undulating Mexican fly toad, where nobody in the real world gives a s..t about the outcome.
But ludicrously inadequate for studying ‘the most important problem humanity has ever faced’.
Paul Baer
Thanks for your response.
As I understood you, the absence of a “visible” warming trend over a 20-year period of observation would apparently not yet give you reason to stop being “very worried” about AGW.
IMO this would mean that your “worry” (a reaction linked to the emotion of “fear”) is rather one of “faith” or “belief” in a hypothesis, even if this hypothesis is not corroborated by actual physical observations.
Do you see it this way, or do you still believe that your “worry” would, in that case, be based on “scientific reality”?
Max
As I said, a flat trend for twenty years would lead me to think it more likely that the climate sensitivity is low, but what impact it would have on my overall views would depend on what other evidence had accumulated in the meantime, concerning ocean heat measurements, satellite measurements of clouds, etc.
Again, as I noted in my comment, quite a few people have pointed out that there is evidence of a substantial 60 year cycle imposed on a steady rising trend. And – absent any other compelling explanation of the rising trend – I would think it a reasonable conclusion that you would expect warming to pick up dramatically again in 2030, and for the century level trend to continue. Wouldn’t you? If not, why not?
Paul, Latimer writes “The paper’s the thing!” – indeed, as Michael Tobis, who seems to be typical of climate academe, has said here: finding in favour of the Null Hypothesis does nothing for your cite count.
A fine example of institutional and organisational bias.
If nobody is going to get brownie points (papers, citations, kudos) for finding out that there really isn’t anything there, then – surprise, surprise – nobody is going to examine that case too hard. But they will spend a lot of time and effort looking for the slightest sign of a positive signal. Because they can publish a paper about it… and that is the sign of admission to the Climatologists’ Club.
People act in the ways that their institutions expect and get rewarded for (in the widest sense). Academic work has an inbuilt bias to publish only positive results and to ignore/suppress anything neutral or negative.
And guess what..surprise, surprise…that’s what people do.
Which is another reason why this particular organisational/reward model is unfit for discovering the truth about ‘climate change’.
Paul,
In the rational world, no evidence means no crisis.
Why pick 2000 – 2020?
Why not pick 1995 – 2015?
Why not pick the last 170 years?
2020 is conveniently far away, keeping the focus on the sales pitch of apocalypse and not on the boring reality of today and yesterday.
You have missed the point entirely. Even IPCC scientists are now admitting that they cannot distinguish or quantify warming from natural causes as opposed to anthropogenic warming.
What and who exactly are you referring to by “IPCC scientists are now admitting”? The IPCC has never claimed to quantify this precisely; even statements like “it is very likely that the majority of 20th C warming is from anthropogenic GHG emissions” are pretty darn vague (and forgive me, I’m not looking up the exact quote.)
Paul,
Citation: Solomon, Amy, and Coauthors, 2011: Distinguishing the Roles of Natural and Anthropogenically Forced Decadal Climate Variability. Bull. Amer. Meteor. Soc., 92, 141–156. doi: 10.1175/2010BAMS2962.1
http://judithcurry.com/2011/04/07/separating-natural-and-anthropogenically-forced-decadal-climate-variability/
Amy Solomon’s name is prominent on AR4
Thanks, that looks like an interesting paper. It’s not obvious that it contradicts the IPCC, but it’s not obvious that it doesn’t. I look forward to reading it.
However, I do think that you’re mistaking Amy Solomon for Susan Solomon, who was the co-chair of WGI for AR4. Amy Solomon’s name does not appear among the contributors.
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/annexessannex-ii.html
We have control planets. Mars and Venus have high-CO2 atmospheres. Adjusted for distance from the sun, their atmospheres should be warmer than Earth’s at similar pressures if the GHG theory is correct.
They are not. At similar pressures, the atmospheric temperatures of Venus, Earth and Mars vary with their distance from the sun, not their composition. The surface temperatures are driven by the pressure of the atmosphere at the surface and the distance from the sun.
ferd,
‘We have control planets. Mars and Venus have high CO2 atmospheres.’
You’re not very familiar with scientific controls, are you?
The single most important terrestrial process in climate (average weather) is the Coriolis force. It accounts for the vast majority of the atmospheric dynamics we observe daily, which are then captured in climate via the averaging process.
The Coriolis force is tied to the rotation of the earth about its axis. To compare Earth and Venus, we’d have to ask, then: how do their rotation rates compare? There are 365 Earth days in one solar orbit. There are about 2 Venus days in one Venusian orbit – indeed, Venus rotates so slowly that its day owes as much to its motion along its orbit as to its own spin.
Therefore the Coriolis force is MUCH smaller on Venus than on Earth. If the Coriolis force is driving most of what we observe on Earth, then we can’t really compare Earth’s climate to Venus’s climate. There is no way to account for this difference.
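The size of that difference is easy to sketch (243 Earth days is the usual figure for Venus’s sidereal rotation; the latitude is arbitrary):

    import math

    def coriolis_f(rotation_period_days, lat_deg=45.0):
        # Coriolis parameter f = 2 * Omega * sin(latitude)
        omega = 2 * math.pi / (rotation_period_days * 86400.0)
        return 2 * omega * math.sin(math.radians(lat_deg))

    f_earth = coriolis_f(1.0)      # ~1.0e-4 per second at 45 degrees
    f_venus = coriolis_f(243.0)    # ~4.2e-7 per second
    print(f_earth / f_venus)       # roughly 240x weaker on Venus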
Mars is an interesting case. It’s much smaller than Earth, with a gravitational field less than half as strong. But its rotation rate is close to Earth’s, so I think the Coriolis force should be fairly strong.
But there’s no water. There is less water in the atmosphere on Mars than CO2 in the atmosphere on Earth. So there is no hydrological cycle. Without that, there is no way to compare the atmospheric dynamics.
Nice try though.
In the context of a discussion of what is testable/falsifiable, it should be pointed out that the phrase “climate sensitivity” references the change in the equilibrium average global temperature at Earth’s surface. As the equilibrium temperature is not observable, claims regarding the magnitude of the climate sensitivity are not testable/falsifiable.
Terry,
‘As the equilibrium temperature is not observable, claims regarding the magnitude of the climate sensitivity are not testable/falsifiable.’
I think that comment should result in your banishment from commenting ever again on this blog.
The temperature of a gas, like the atmosphere, is a proxy for the average kinetic energy of the particles that make up that gas. In the case of the atmosphere, those particles are mostly molecules. The kinetic energy is related to the translational motion of those molecules.
Those motions can be measured at any time we want. The bulk status of thermal equilibrium plays no role in whether or not those motions are ‘observable’. The molecules are moving around at some rate and we can observe that rate via a thermometer.
Therefore, temperature is ALWAYS observable. I didn’t think that this thread would get to the point where I would have to write that statement, yet here we are.
maxwell
We agree that the temperature is observable. However, it is the equilibrium temperature that figures in the notion of the climate sensitivity (aka equilibrium climate sensitivity) and the equilibrium temperature is not observable.
Perhaps you are unfamiliar with the term “equilibrium temperature” as used in climatology. It is unrelated to the notion of “equilibrium” in thermodynamics and is synonymous with the term “steady state” as used in engineering heat transfer. Equilibrium temperatures result from holding all of the various forcings constant and waiting for an unbounded amount of time. Though we can think about these temperatures, we can’t observe them.
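A single-slab toy model makes the distinction concrete (all numbers are illustrative; the real system, with its deep ocean, approaches equilibrium far more slowly than this):

    import math

    F, lam = 3.7, 1.23   # 2xCO2 forcing (W/m^2) and feedback parameter (W/m^2/K)
    C = 4.2e8            # heat capacity of ~100 m ocean mixed layer (J/m^2/K)
    T_eq = F / lam       # the equilibrium response, ~3 C: an asymptote, never observed

    for yr in (10, 30, 100):
        T = T_eq * (1 - math.exp(-lam * yr * 3.15e7 / C))   # analytic solution
        print(yr, "yr:", round(T, 2), "of", round(T_eq, 2), "C")

The observed temperature at any finite time sits below T_eq, which is why the equilibrium value itself can only be inferred, never measured directly.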
Terry,
I see. Sorry for the confusion and jumping on you.
With all of the confusion on this thread, I began to see the real possibility someone would challenge basic thermodynamics.
Cheers.
While the article certainly misses several points, it does show what happens when you try to move from science to policy. Every Tom, Dick and Harry is going to dust off their calculators or slide rules to try and figure out why they have to pay through the nose for something that doesn’t seem to be happening in the catastrophic way it is presented. Especially when the proposed solutions are little more than Jane Fonda regurgitated wishful thinking, and legitimate errors are found in the iconic published science.
MT seems to think that just making a poor choice of statistical method is no reason to question the abilities of a scientist. Well, a screw-up is a screw-up; don’t be trying to change the world if you are a screw-up. Climate scientists would do well to distance themselves from the screw-ups. Which might just get Tom, Dick and Harry to go back to 12-oz curls and football.
There is a method?
‘Global climate model simulations of the 20th century are usually compared in terms of their ability to reproduce the 20th century temperature record. This is now almost an established test for global climate models. One curious aspect of this result is that it is also well known that the same models that agree in simulating the 20th century temperature record differ significantly in their climate sensitivity. The question therefore remains: If climate models differ in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy?
The answer to this question is discussed by Kiehl (2007). While there exist established data sets for the 20th century evolution of well-mixed greenhouse gases, this is not the case for ozone, aerosols or different natural forcing factors. The only way that the different models (with respect to their sensitivity to changes in greenhouse gasses) all can reproduce the 20th century temperature record is by assuming different 20th century data series for the unknown factors. In essence, the unknown factors in the 20th century used to drive the IPCC climate simulations were chosen to fit the observed temperature trend. This is a classical example of curve fitting or tuning.
It has long been known that it will always be possible to fit a model containing 5 or more adjustable parameters to any known data set. But even when a good fit has been obtained, this does not guarantee that the model will perform well when forecasting just one year ahead into the future. This disappointing fact has been demonstrated many times by economical and other types of numerical models (Pilkey and Pilkey-Jarvis 2007).’
http://www.climate4you.com
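The five-or-more-parameters point is easy to demonstrate with synthetic data (every number below is invented; degree 4 gives exactly 5 adjustable coefficients):

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(40.0)
    y = 0.01 * t + rng.normal(0, 0.1, t.size)    # weak trend plus noise

    coeffs = np.polyfit(t[:30], y[:30], deg=4)   # 5 adjustable parameters
    err_in = np.abs(np.polyval(coeffs, t[:30]) - y[:30]).mean()
    err_out = np.abs(np.polyval(coeffs, t[30:]) - y[30:]).mean()
    print(err_in, err_out)   # the hindcast fits nicely; the forecast typically blows up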
And this is before chaotic instability in the models is considered.
‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable.’ http://www.pnas.org/content/104/21/8709.full
It has all been said many times before. A warning, however, for Dallas and others: dynamical complexity implies the possibility of catastrophic change, abrupt change, tipping points, non-linear change, etc. Trifling with the Dragon Kings is hardly a prudent course. http://web.sg.ethz.ch/wps/pdf/CCSS-09-005.pdf
“Global climate model simulations of the 20th century are usually compared in terms of their ability to reproduce the 20th century temperature record. This is now almost an established test for global climate models.” Yet, as I understand it, the very large majority of them don’t “reproduce the 20th century temperature record”, they more or less reproduce the 20th century temperature anomaly record. Most of the models run several degrees hot or cold so far as absolute temperature is concerned: see http://rankexploits.com/musings/2009/fact-6a-model-simulations-dont-match-average-surface-temperature-of-the-earth/.
Kiehl 2007 is available here.
w.
Willis,
I have extended Kiehl’s analysis with a simple two-hemisphere analytical model. This was done because I have long been concerned that many GCMs use large aerosol cooling effects to offset high model sensitivities: the concern is that aerosol effects are concentrated in the NH, but observations show the NH to be warming faster than the SH, where aerosol effects are much smaller. A usual excuse is that the greater expected SH warming is being offset by heat absorption into the larger SH ocean. Recent Argo observations show that this is unlikely to be valid. The upshot is that climate sensitivity is most likely lower than 2 degrees C, possibly much lower (my best guess 1 to 1.5). I do some Monte Carlo analyses to show how observational uncertainties influence these conclusions. I’d be happy to furnish a copy of a paper on this to you or others who might be interested if we can figure out how to make a connection. By the way, I’ve much enjoyed many of your pithy and to-the-point observations on this whole circus.
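For readers unfamiliar with the approach, a generic energy-budget Monte Carlo looks something like the sketch below. This is not the commenter’s actual model; every central value and uncertainty is an invented stand-in for an observational estimate:

    import numpy as np

    rng = np.random.default_rng(2)
    N = 100_000
    F_2x = 3.7                      # W/m^2 per CO2 doubling
    dT = rng.normal(0.8, 0.1, N)    # observed warming, K (illustrative)
    dF = rng.normal(1.6, 0.4, N)    # net forcing, W/m^2 (illustrative)
    dQ = rng.normal(0.5, 0.2, N)    # ocean heat uptake, W/m^2 (illustrative)

    S = F_2x * dT / (dF - dQ)       # energy-budget sensitivity estimate
    S = S[(S > 0) & (S < 10)]       # drop unphysical draws
    print(np.percentile(S, [5, 50, 95]))

The skew of the resulting distribution shows how uncertainty in the forcing denominator, rather than in the temperature record, dominates the spread in sensitivity estimates.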
No doubt Dragon Kings are not to be trifled with. It is a bit difficult to make good decisions if you only focus on the unthinkable, though. You would never get out of bed for dread. I am a little concerned that more people don’t support actions that hedge their bets, on both sides.
I am shocked at the poor statistical choices, especially in paleo work and recently in Antarctica’s modern-era temperature reconstruction. The number of not-all-that-accurate peer-reviewed papers is a touch troubling. I understand it is a new science, but with all the clamor to change the world NOW, I would think a little more robust peer review is in order. If that means throwing a few mathematically challenged “experts” under the bus, so be it. What is the saying – lead, follow or get out of the way?
Give me ten good reasons why I should get out of bed.
The uses of science are so appalling on both sides of the climate wars. Each side mouths half-understood concepts in an idiomatic scientific jargon. I won’t say I understand it any better – but taking it less seriously is a prerequisite for intellectual growth. Hell of a lot more fun, too.
The policy issue is separate from the science. I decided long ago that changing the composition of the atmosphere might not, ipso facto, be the most prudent course. Since we have not the wit to determine the outcome of the great atmospheric experiment caution suggests that it be limited to the extent feasible. Dragon Kings in 1976/1977 and 1998/2001 merely reinforce my native caution.
The policy responses are many and varied – multiple paths and multiple objectives. I can’t see what the problem is? Reduce black carbon and tropospheric ozone for health and agricultural benefits – as well as making major progress with anthropogenic forcings. Conserve and restore ecosystems. Conserve and restore agricultural soils by adding carbon. Provide health, education, safe water and sanitation to stabilise population. Halve at least the incidence of malaria and HIV. Encourage free trade and good economic governance.
The climate wars are a distraction at best. That I personally am called a sceptic by one side and an alarmist by the other is just amusing. That in Australia we have wasted a generation of opportunities for biological conservation while quibbling about inconsequentials – more and more species going to the wall, mostly because of feral invasions – is a crying shame. The lost opportunity for humanity globally is a tragedy of staggering proportions.
After spending a couple of weeks reading this blog, it’s become increasingly clear to me that the fundamental disagreement here is about burden of proof. A large majority of the posters believe that the data so far does not falsify the null hypothesis of “no AGW”, and they probably won’t be satisfied that there is AGW unless global temperatures rise several more degrees. The much smaller number of posters who support the mainstream consensus believe that there is enough evidence to support a null hypothesis of “CO2 doubling will cause dangerous climate change”, but as a practical matter, given the noise in the signal, there is little hope of falsifying this in a conventional statistical manner either in a short period of time. The logical experiment from this perspective is to stop GHG emissions, but this turns out not to be a cheap experiment.
Restate that in fuzzy logic, and you’re more-or-less correct.
http://en.wikipedia.org/wiki/Fuzzy_logic
Paul –
The logical experiment from this perspective is to stop GHG emissions, but this turns out not to be a cheap experiment.
That’s likely the understatement of the century.
Paul,
Having tuned in to this blog for more than a few months now, I mostly agree with your very concise summary. However, as far as the logical experiment goes, it raises the question “are there other logical experiments?”, with geoengineering springing to mind. What if it is cheaper, even factoring in all externalities (supposing we could do that), to geoengineer via stratospheric sulfates or space mirrors? Shouldn’t we do that then?
The arguments I see against the latter are often, but not always, applications of the precautionary principle. However, I feel like the precautionary principle can be applied in reverse to the GHG problem from the skeptical point of view (first, do no [economic] harm) or words to that effect. In this context, the burden of proof can and will continue to be debated and I don’t see an easy answer on the horizon. Distributional equity issues are on the table as well.
I’d appreciate your additional thoughts or Dr. Curry’s.
Bill
No, Paul.
The “logical experiment” is NOT “to stop GHG emissions” and see what does (or doesn’t) happen.
We already have the “experiment” under way (see my above post).
Max
I think you missed my point: depending on your null hypothesis, there are two different “logical” experiments. If you think the null hypothesis is “no AGW”, it’s logical to just keep on emitting. No steady warming, no falsification. But – if you’re wrong – there’s no control planet left.
If you think the null hypothesis is “AGW”, you stop emissions. If temperatures keep rising, it probably wasn’t AGW. Then you worry about what to do about rising temperatures (which is one reason lots of people are talking about adaptation as a “no regrets” alternative).
The economic consequences of reducing emissions rapidly are of course a major concern, and people vary in their beliefs about what would happen. My personal view is that this is a much preferable experiment because the economic impacts of mitigation are subject to human choice, whereas the climate response to emissions is not.
Paul –
If you think the null hypothesis is “no AGW”, it’s logical to just keep on emitting. No steady warming, no falsification.
No – it’s NOT logical to just keep on emitting – but it’s what will happen anyway. Logical would be to convert to nuclear for electricity, convert cars to electric for local use but keep diesel/gasoline for longer distance transportation (this would require a heavy development program) AND decentralization of the grid, using solar and wind for local power generation where practical. Who ever told you humans were logical?
Note – this is NOT necessarily the alarmist/green position because it envisions a different “mix”/philosophy.
But – if you’re wrong – there’s no control planet left.
There’s neither logic nor evidence that that would be true. Only fear.
If you think the null hypothesis is “AGW”, you stop emissions. If temperatures keep rising, it probably wasn’t AGW.
In which case you “might” have a planet – or not. Depending on other factors.
But you would have no viable human civilization.
Preference for the planet over humans is no different than preference for a political philosophy over humans. Witness Stalinist USSR among others – but that would be only a preview of the stop emissions scenario.
Humans are not always logical. But they are not always illogical, either. Deliberate policy design is a constant of modern life.
And I don’t think that there needs to be only one alarmist/green position. Do you spend any time advocating for the energy transition you think desirable, or do you think it is hopeless or somebody else’s problem?
To be clear, when I say “there’s no control planet left” I’m not implying that in fact the planet or the people on it will be destroyed, only that you will be left with the consequences, and they may be quite unfriendly. I do, frankly, fear what could happen. There is evidence of various kinds that provides reason for such fear.
The claim that there is no viable human civilization without constant or rising GHG emissions seems to contradict your earlier idea of a largely nuclear plus renewables transition.
The Stalinist Russia metaphor seems to me a bit stretched. Personally I think that reducing the risk of AGW dramatically is the pro-people thing to do. It might not be “pro personal freedom to pollute”, however, and making it “pro-poor” would take a bit of redistributive policy. But that’s doable if we choose to.
Paul –
To be clear, when I say “there’s no control planet left” I’m not implying that in fact the planet or the people on it will be destroyed, only that you will be left with the consequences, and they may be quite unfriendly.
That will eventually happen regardless of anything you or I or the entire human race does. There are two parts to that statement.
First – You would have us “stop emissions” . Wonderful. How? When?
Specifically, how are you going to stop the emissions of the Chinese, Indians and other Third World countries? Do you think they’re going to abandon their people and cultures to the poverty they’ve already experienced for the last several thousand years when the vision of what the developed nations have been (and are yet – if not in the future) stands before them and has been already internalized?
Do you have any understanding of the depth of commitment of the Chinese to developing a technological society? How many nuclear reactors, coal plants, wind farms, automobiles and roadways – along with the supporting infrastructure – they are building? How many new modern cities? Do you understand that the Indians are right behind them? And that between those two alone, US emissions will soon be a minor blip on the CO2 scene? We are already #2 – and slipping down the list very quickly. That’s not to say that we shouldn’t upgrade our energy network with as much “non-carbon” energy as practical, but it does mean that your “stop emissions” scenario is simply not practical for the foreseeable future. And THAT, my friend, is reality. Try reading Christian Gerondeau’s book, “Climate: The Great Delusion”.
The second part to that first statement is this: sooner or later, “warming” will be something you or your descendants will wish fervently for. How long has humanity been in the present interglacial? How long have interglacial periods lasted in the past? Do you think this interglacial will go on forever? Do you understand that converting the world’s economies to handle “only” a warm world, with minimum- or no-carbon energy sources as you propose, would be a death sentence for far more of the world’s population than any likely degree of warming would be? I presume you know that the excess death rate from cold far exceeds that from heat? The best-case scenario would be to “adapt” to whatever conditions prevail in the future. One does not do that by throwing away one’s options.
I do, frankly, fear what could happen. There is evidence of various kinds that provides reason for such fear.
The only “evidence” so far comes from models. I’m an aerospace (read: spacecraft) systems engineer; I’m more than familiar with models – with what they are and what they aren’t. And I don’t believe those that are telling you how terrible the future will be. Of course, YMMV.
I will not live my life nor abandon my grandchildren’s lives to fear. I’ve seen on this blog what state the “science” is in – and that short look has only confirmed what I already knew before coming here. If the science ever grows up, it may be worth listening to. But at present, I have no reason to draw the same conclusions as you. And far more reason to draw other conclusions.
The claim that there is no viable human civilization without constant or rising GHG emissions seems to contradict your earlier idea of a largely nuclear plus renewables transition.
Not at all. The largely nuclear-plus-renewables transition is only logical – in time. Which is why the Chinese and Indians are pursuing it. And why the rest of the world will eventually follow them – AFTER they realize that fossil fuels alone will not sustain 9 billion humans forever. But they will get there first – our “head start” civilization is our greatest handicap. More: my largely nuclear-plus-renewables transition (and that of the Chinese, etc.) does NOT mean the elimination of all fossil fuel usage.
Note that at present the immediate cessation of fossil fuel usage (as some on your side of the dance floor have demanded) would bring the economies of the world to a dead stop, and shortly after would cause the deaths, by various means, of a large part of the world’s population. Without fossil energy there is not, nor will there be, sufficient alternate power to keep them alive or to keep the economies running at even a minimal level for at least the next 20 years or more. I’ve written some minor comments on this blog about some of the challenges involved in the transition.
I don’t expect that you advocate immediate cessation, but there are those who do. It’s an extreme position that deserves no respect.
Of course, at this point in time, the US is heading down that road due to present energy policies. So we may get a taste of that future sooner than expected.
Keep in mind that it was exactly that rising GHG emissions scenario that vaulted this country into its present place of prominence in the world. It wasn’t the armed forces or war as some seem to believe – it was commerce, driven by a fossil fuel/CO2 economy. Do you believe this country can survive the coming “fall from grace” ?
Long ago, a very wise man said these words –
It sounds very pessimistic to talk about western civilization with a sense of retreat. I have been so optimistic about the ascent of man; am I going to give up at this moment? Of course not. The ascent of man will go on. But do not assume that it will go on carried by western civilization as we know it. We are being weighed in the balance at this moment. If we give up, the next step will be taken – but not by us. We have not been given any guarantee that Assyria and Egypt and Rome were not given. We are waiting to be somebody’s past too, and not necessarily that of our future.
The Ascent of Man – final chapter, Jacob Bronowski
The descent into the nether regions of History that he speaks of would be fueled and sped by fear coupled with the failure to face and overcome that fear. China is facing and overcoming its fear by preparing its country and people for a future of adaptation. Are we smart enough to do the same? If not, then our fate will become that of Assyria and Egypt and Rome.
The Stalinist Russia metaphor seems to me a bit stretched. Personally I think that reducing the risk of AGW dramatically is the pro-people thing to do. It might not be “pro personal freedom to pollute”, however, and making it “pro-poor” would take a bit of redistributive policy. But that’s doable if we choose to.
Reducing the risk of AGW dramatically means what? In the face of increasing CO2 emissions by every nation that can manage it, just what do you expect to do – reduce our emissions in order to make a meaningless contribution to a barely measurable reduction in future temps, while at the same time reducing our capability to survive the future catastrophes of “climate change” that, if real, will come regardless of what we do? What kind of “climate change”? What kind of catastrophes? Until you can answer those questions, there is no possible long-term policy that would not carry greater risk than “business as usual”.
Do you understand that the reduction you look for would reduce our survival potential – as a nation – and as a race – until such time as we can replace present energy sources AND provide equivalent levels for future population? Think about it – Haiti/Japan – which one survived better, which one will recover faster? Why?
As for the “Stalinist” thing – he killed (how many?) millions of people? How many more millions would die if the carbon reduction you desire is too soon, too fast or badly handled? Do you expect either the UN or the US government to handle it right? Given the present disaster that passes for energy policy in the US? Much less the UN disasters perpetrated in Iraq, Rwanda, the Congo and several dozen other places around the world?
Do you REALLY trust them to do it right? Then why aren’t they doing THIS right?
http://www.jpands.org/vol16no1/goklany.pdf
Because if they do it wrong, it’ll make Stalin look like an amateur.
I’m not happy with this because it’s not nearly detailed and precise enough, but it’ll have to do for now because I’m out of time.
Paul –
I would also add that you’ve fallen into the “binary solution” trap. Which is also what I did in answering.
One of the more obscure truths in this life is that there are ALWAYS AT LEAST three solutions for every problem. IF…OR… is a form of false logic when applied to human problems.
The null hypothesis is natural climate change (Roy Spencer)
Paul, I’m not sure that the best experiment isn’t already being provided for us by mother nature free of charge. We will have a weak solar cycle, a negative PDO, and an AMO that will soon be going negative. All we have to do now is collect the data from the experiment.
Yes, except it doesn’t begin to address the question of what caused the warming trend in the 20th century.
For which the best explanation I know of still seems to be anthropogenic GHG emissions.
What are the other plausible explanations for the trend? Or is it just assumed to be essentially irrelevant?
The first question is – what caused the greater warming at the beginning of the 20th C?
The second question is – what caused the greater warming at the end of the 20th C? And what evidence other than correlation is there that they have different causes? Correlation is not causation.
AGW is only an explanation if it’s demonstrable that the cause of the former is not the cause of the latter.
I didn’t assume anything that I am aware of, other than that when the known or presumed natural variabilities change, it should be of value in attributing their effects. This is self-evident, is it not?
Paul Baer
Are you referring to the “warming trend” of the late 20th century (1971-2000) or the statistically indistinguishable one of the early 20th century (1911-1940), which occurred before there was much human CO2?
Studies by several solar scientists attribute around half of the total 20th century warming to the unusually high level of solar activity (highest in several thousand years), with most of this occurring in the first half of the century.
The GMTA record shows a cyclical warming pattern, with multi-decadal warming cycles of ~30 years followed by multi-decadal cycles of slight cooling, also of ~30 years, while atmospheric CO2 shows a steady increase at a CAGR of around 0.4% per year, since measurements started in 1958, with a somewhat slower rate prior to this, as estimated from ice core data.
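That growth rate checks out from the endpoints alone (the Mauna Loa annual means below are approximate, illustrative values):

    c1959, c2010 = 316.0, 390.0                   # ppm, approximate
    cagr = (c2010 / c1959) ** (1 / (2010 - 1959)) - 1
    print(round(100 * cagr, 2), "% per year")     # ~0.41%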
There is no robust statistical correlation between atmospheric CO2 concentration and GMTA. Statistical analyses show the GMTA to be more of a “random walk”. Where there is no statistical correlation, the case for causation is weak.
Cycles in ocean current oscillations (PDO, ENSO, etc.) seem to show a better correlation with GMTA than CO2 does, and there have been studies showing a link between these and solar activity, but the mechanism has not been established.
Observations have also shown that cloud cover decreased 1985-2000, thereby reducing the planet’s albedo and increasing the amount of incoming solar radiation reaching the surface, with a reversal of this trend after 2000. This correlates with the warming seen from 1985 to 2000 as well as the lack of warming after 2000, but no clear mechanism has yet been proposed.
So, you see, there are many “best explanations”, none of which is really much “better” than any other.
Max
A large majority of the posters believe that the data so far does not falsify the null hypothesis of “no AGW”, and they probably won’t be satisfied that there is AGW unless global temperatures rise several more degrees.
This is the Smart People Who Get It vs the Poorly Informed Hoi Polloi argument redux. Gee, we’ve never heard this one before.
And… you’re wrong. Just like the 10,000 other alarmists who make variations of this argument.
Most of the posters here are familiar with the radiative transfer argument and are quite willing to accept that man influences his climate to some (pun!) degree.
The actual disagreement revolves around “how much influence” which then informs “what needs to be done.” The non-alarmists want to see an acceptable argument that shows, once and for all, an actual quantifiable and repeatable Q (figure of merit) regarding historical human influence vs natural signal. Once human influence can be reliably demonstrated, THEN is when they will discuss “what if anything needs to be done.”
Alarmists are either politically or ideologically willing to ascribe more human influence than can be actually proven (demonstrated) and naturally gravitate straight to “oh there’s a problem let’s fix it.” They bypass the step where contention needs to be shown as true. Due to ideology or politics they happily accept an inflated or imagined 51% chance of being right and then claim skeptics can’t do math.
The actual argument is skeptics vs alarmists who don’t get Q.
Like the Princess and the Pea. The problem with you troglodytes is that you’re just not refined enough to feel the pea.
No, I’m not going to follow that with the obvious pun.
I agree with Random’s summary and further suggest that many AGW “believers” are also pushing for highly costly actions by an individual nation (the US) that would have virtually no measurable impact on the world climate. They push this agenda in the “hope” that it will inspire the rest of the world to follow suit.
In summary:
1. We have a basic theory that increased GHG’s will lead to increases in worldwide temperatures
2. We have no reliable data to demonstrate that a warmer world is actually worse for humanity overall in the long term
3. We do not yet understand if/how much the increase in GHG’s will actually impact temperatures in the real world due to multiple other factors impacting the basic science that are not fully understood
4. There is no realistic, implementable method to stop the rise in GHG’s for decades to come, yet costly actions are being implemented that will have no climate impact
5. Virtually all the potential “concerns” relative to a warmer climate can be easily managed with proper infrastructure construction and management. This can only be accomplished by individual nations.
6. We are likely decades away from having reliable models that can forecast climate at a regional level more than a short period into the future
7. Supporters of AGW being a pending disaster for humanity do not seem to like to discuss these points and often revert to name-calling when they are not able to convince others of their position logically
Rob:
1. Yes, we agree that we have this theory. We disagree about the probability distribution of the climate sensitivity (as a proxy for the increase of temperature/degree of climate change as a function of GHGs).
2. Yes, there can be no “reliable data” about unobservable future states. I think there is good reason to be worried that plausible changes could be very harmful, and the evidence for this, while not conclusive, is substantial. We can talk about the details.
3. Yes. See 1. But there’s no obvious reason to think that the uncertainties will resolve in favor of less change rather than more (and there are reasons – like carbon and methane feedbacks – to fear the worst).
4. This is a statement about political economy. Views on this differ. What is “realistic” is not fixed. I will admit that what I think it would take to make actual reductions is not politically feasible today, but it could be in a small number of years.
5. This seems not to be at all obvious to me, but it is subject to concrete discussion – what potential impacts, what infrastructure investments, what costs. A case can be made that this works better as long as the warming is kept modest (which is itself an argument for mitigation, but it suggests delay in mitigation is more acceptable).
6. Yes, but “reliable” is a relative term. You can plan on the basis of probabilistic forecasts.
7. Yes, for some, but the reverse is also true – inasmuch as there are reasonable cases for the alarmist “side” of each of these propositions, there is close-mindedness and name calling by (what is your preferred term for non-CAGW-supporters?)
Obviously my goal here is to represent the way I think the AGW arguments go. I was going to say “the arguments that have persuaded me” but I’m willing to admit that it’s not merely logical arguments that have influenced my opinions.
The problem of course is simply that in a system this complex, “proof” is not easy to come by. You are right that the “alarmists” are willing to take potentially expensive actions based on probable, not certain, estimates of the risks of continued increasing GHG concentrations. As I said, the question of who should bear the burden of proof is not one which has a scientific answer; it does depend on your views about the risks associated with various courses of action.
Paul
When you write about “probable risks”, could you expand upon your position? What are the risk(s) to the United States, for example, and what do you believe the “probability” is of these risks being realized if no action, or only economically advisable actions, were implemented?
It depends (imo) on the risks to individual nations and the costs to those individual nations and the results that the proposed actions will accomplish. IMO, most of the actions being discussed will accomplish very little and will cost a great deal. That would seem to be ill-advised.
As I said, the question of who should bear the burden of proof is not one which has a scientific answer; it does depend on your views about the risks associated with various courses of action.
The notion of burden of proof is where you go wrong. Demonstration of understanding is sufficient even with no proof either way, and alarmists have yet to demonstrate any understanding at all, much less “clear” understanding. To this day we have yet to see what creationists call “macro” evolution, but man has sufficient understanding to know it happens. It has not been proved in the sense of a court of law, yet we have a demonstration of understanding that is perfectly reasonable. Climate alarmists have yet to reach anything even close to this plateau.
Meanwhile the view of “risk” is laughable. It’s purely imagined. If you have no clear understanding you have zero concept of risk in any direction. Alarmists however claim understanding they don’t have and then seek to imagine various risk factors accordingly.
Climate change is natural. It must be, else there would never have been ice ages. Man influences climate. This is demonstrable via UHI, where it is no mean stretch to extrapolate influence on mesoscale climate from multiple UHI sources.
Jumping straight to “risk” from noting the obvious (man can also influence the climate) is simply politics. Without a figure of merit the notion of risk has no meaning whatsoever.
Paul Baer
I think if you discuss this with a scientist, such as Judith Curry, you will see that “proof” is not part of the scientific method (as it is, for example, in law).
But let’s walk through the logic.
Another argument, which is invalid in science, is the “argument from authority”. This has been used in climate science, as follows: “90% of climate scientists believe that AGW is potentially dangerous, so it must be true”, or “the NAS or RS have endorsed the premise that AGW is potentially dangerous, so it must be correct”.
This is a logical fallacy, as Wiki tells us:
A second invalid argument, which has been used in climate science, as well, is the “argument from ignorance”. This has been used by IPCC to argue for the anthropogenic forcing (for example, AR4 WG1 Ch.9, p. 685):
and (p. 686)
This goes in the direction of “our models can only explain the warming if we include anthropogenic forcing”
Again, Wiki tells us that an “argument from ignorance” is an informal logical fallacy which asserts that a proposition is necessarily true because it has not been proven false. This excludes a third option: there has been insufficient investigation, and therefore there is insufficient information, to “prove” the proposition either true or false.
In this case, the “insufficient information” is the complete knowledge of all natural climate forcing factors and their impact on our climate (a point that Dr. Curry has also made).
So we have identified two logical fallacies sometimes used in the climate debate to support the so-called “mainstream consensus” position as supported by IPCC.
But how about the scientific method?
A key part of this method, and of science in general, is “empirical evidence”.
An essay “An Introduction to Science” discusses the application of the “scientific method” as follows:
http://www.freeinquiry.com/intro-to-sci.html
The scientific method involves four steps geared towards finding truth (with the role of models an important part of steps 2 and 3 below):
1. Observation and description of a phenomenon or group of phenomena.
2. Formulation of a hypothesis to explain the phenomena – usually in the form of a causal mechanism or a mathematical relation.
3. Use of the hypothesis to quantitatively predict the results of new observations (or the existence of other related phenomena).
4. Gathering of empirical evidence and/or performance of experimental tests of the predictions by several independent experimenters and properly performed experiments, in order to validate the hypothesis, including seeking out data to falsify the hypothesis and scientifically refuting all falsification attempts.
How has this process been followed for AGW?
√ Step 1 – Warming and other symptoms have been observed.
√ Step 2 – CO2 has been hypothesized to explain this warming.
√ Step 3 – Models have been created based on the hypothesis and model simulations have estimated strongly positive feedbacks leading to forecasts of major future warming
X Step 4 – The validation step has not yet been performed. In fact, recently observed empirical data have suggested (1) that the net overall feedbacks are likely to be neutral to negative, and (2) that our planet has not warmed recently despite increasing levels of atmospheric CO2, thereby tending to falsify the hypothesis that AGW is a major driver of our climate and thus represents a serious future threat. Furthermore, these falsifications have not yet been refuted scientifically.
Until the validation step is successfully concluded and the hypothesis has successfully withstood scientific falsification attempts, the “dangerous AGW” premise remains an “uncorroborated hypothesis” in the scientific sense. If the above-mentioned recently observed falsifications cannot be scientifically refuted, it may even become a “falsified hypothesis”.
So the flaw of the “dangerous AGW” hypothesis is not that several scientific organizations have rejected it, or that it is not supported by model simulations, but simply that it has not yet been confirmed by empirical evidence from actual physical observation or experimentation, i.e. it has not been validated following the “scientific method”.
And this is a “fatal flaw” (and IMO there is no sound scientific basis for wrecking the global economy with draconian carbon taxes and caps as long as this “fatal flaw” has not been resolved using the scientific method).
Max
The link to the cited paper on the scientific method has changed to:
http://www.indiana.edu/~educy520/readings/schafersman94.pdf
Max
No one should say “90% of scientists believe, so it must be true.” I would not defend such a statement. Rather I would say, if the people who know the most about it say it is probably true, you might want to act as if it is. If both of your doctors say “there’s a good chance you have X, and we suggest treating it with Y”, the fact that they both admit they may be wrong is relevant but usually not decisive.
Similarly with your argument against the argument from ignorance: No one asserts that the argument “CO2 has caused 20th century warming” is true because it has not been proven false. Rather the argument goes like this: “we have reason to expect that CO2 would warm the climate beyond natural variability. The climate seems to have warmed beyond natural variability (I do realize that this is also contentious). Therefore we might want to act as if it is the CO2 causing the warming.” If you have one plausible cause for a symptom (I return to the medical metaphor), and no more compelling alternative hypothesis, you might want to address the plausible cause as a precautionary measure.
Again, my real point here is that there are not logical fallacies involved on either side, rather different judgments about the weight of evidence, the costs of action and the costs of inaction.
Where I think there is room for discussion is on relatively particular pieces of evidence. I personally think that the claim that “our planet has not warmed recently” is false in the sense that matters; that is, what’s relevant is not whether this year’s temperature is higher than some year in the recent past, but whether (I’m simplifying here) the rolling average is higher than it was, say, 10 years ago or so. This is a discussion that has been going on in a variety of venues recently, though I can’t point to a particular one.
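To make the rolling-average comparison concrete, here is a minimal sketch (Python, with invented anomaly values – not real temperature data) of comparing a trailing 10-year mean to the same mean a decade earlier:

```python
# Minimal sketch: compare the latest trailing 10-year mean of annual
# temperature anomalies with the 10-year mean a decade earlier.
# The anomaly values below are invented, for illustration only.
def rolling_mean(values, window):
    """Trailing mean over `window` items, for each position where it fits."""
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

anomalies = [0.12, 0.18, 0.10, 0.22, 0.25, 0.19, 0.28, 0.31, 0.24, 0.30,
             0.33, 0.29, 0.35, 0.38, 0.32, 0.40, 0.37, 0.41, 0.39, 0.44]

means = rolling_mean(anomalies, 10)
print(f"10-yr mean, a decade ago: {means[0]:.3f} C")
print(f"10-yr mean, now:          {means[-1]:.3f} C")
```

On this toy series the rolling mean is higher now than a decade earlier even though individual years bounce around – which is the sense of “warmed recently” I am arguing for.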
As far as the claim that “net overall feedbacks are likely to be neutral to negative”, I’m behind on the research on this question. What is the evidence for this that you find most compelling?
And, of course, there’s always this debate about “wrecking the global economy with draconian carbon taxes.” I admit that the impacts of carbon taxes or caps are uncertain and may be costly, but the economy seems to bounce back from pretty severe shocks in a couple of years. I’m not confident that the climate system is that resilient.
“if the people who know the most about it say it is probably true, you might want to act as if it is.”
Paul,
The problem is those people are mostly bureaucrat scientists (anti-scientists) and they know the least about it.
If they say something is true, then you might want to act as if it is not.
Paul Baer
Your “argument from authority” regarding “90% of the scientists, etc.” is simply a rewording of the other one. It remains an “argument from authority” and, hence, a logical fallacy.
“Our planet has not warmed recently” is a statement of fact rather than conjecture (based on physically observed data, assuming these are factual). Its relevance in the “overall scheme of things” is a nice hypothetical theme of discussion, but does not change the actual physical observations.
You ask about recent observations on climate sensitivity: Since AR4 was issued, there have been studies based on CERES and ERBE satellite observations (Spencer & Braswell 2006, Lindzen & Choi 2009/2011, Spencer 2011) which have shown that net cloud feedback is negative, rather than strongly positive as IPCC model simulations had assumed, and that the overall 2xCO2 climate sensitivity is likely to be around 0.6C, rather than 3C as estimated (on average) by the IPCC model simulations. These studies all came out after AR4, so were obviously not mentioned there; let’s hope they do get mentioned and considered in a future AR5 report (if such a report really ever gets published).
We can get into discussions about the “pros and cons” of “wrecking the global economy” to fight an “imaginary hobgoblin” (Mencken), but I think it probably makes more sense to concentrate on the “science” behind this “hobgoblin” first.
And, as I have shown you, it is not very robust.
In the scientific process, it is still an “uncorroborated hypothesis” today, as I pointed out; if the recent cooling despite CO2 increase to record levels continues for another few years, it will become a “falsified hypothesis” (and IPCC can fold up).
Max
Re: “argument from authority”
Equally fallacious: “A statement is incorrect because the statement is made by a person or source that is commonly regarded as authoritative.”
The statement “X% of climate scientists say Y” proves nothing, but at what point do you think somebody would be entitled to give some weight to such a statement?
Judith
I am not being overconfident when I dismiss human impact because that is what the data says:
There was a five-fold increase in human fossil fuel use, from about 30 to 170M-ton of carbon, in the recent warming phase from 1970 to 2000 compared to the previous one from 1910 to 1940. However, the global warming rates of about 0.15 deg C per decade are nearly identical, as shown in the following graph.
http://bit.ly/eUXTX2
In the intermediate period between the two global warming phases, from 1940 to 1970, there was global cooling despite an increase in fossil fuel use of about 70M-ton, as shown in the following graph.
http://bit.ly/g2Z3NV
And since about 2000, there has been little increase in the global temperature despite a further increase in fossil fuel use of about 70M-ton, as shown in the following chart.
http://bit.ly/h86k1W
Either change the data or dismiss AGW!
“Separating natural (forced and unforced) and human caused climate variations is not at all straightforward in a system characterized by spatiotemporal chaos. While I have often stated that the IPCC’s “very likely” attribution statement reflects overconfidence, skeptics that dismiss a human impact are guilty of equal overconfidence.”
In your last sentence, Dr. C., you’re guilty of an asymmetry. The IPCC and their cohorts routinely and uniformly exaggerate their case for AGW. Most knowledgeable skeptics, that is skeptics with a science background, do not dismiss a small human impact. Sure, the skeptics who do entirely dismiss AGW do so overconfidently, but that’s pretty much by definition… hence it’s not a particularly meaningful observation.
Mercifully, that is one of the shortest things I’ve read by John Nicol.
Since he denies multiple lines of evidentiary research – e.g. satellite data, empirical observations, ice cores, modelling – and rejects the summary of the IPCC and its references to thousands of papers, I suppose it’s not entirely surprising that you wish to see him as a rational skeptic. Judith Curry, at least, thinks he raises important questions. He can usually be found claiming that people “underappreciate” the benefits of an increase in carbon dioxide, what with it being the basis for plant growth, and all that. He excoriates the public for reasonable and informed concerns about agriculture, clean water, increased poverty, and other almost certain effects of dangerous climate change in many regions. It’s hard to imagine what sort of disdain one has to have for one’s fellow human beings and for the natural world to be in John’s state of willful denial. But no matter…
Judith Curry wants to engage in a more serious critique by identifying the most obvious nonsense in Nicol’s understanding of physics and climate science. On honest science sites, people tend to wonder how someone like John, with a PhD in physics, who claims to speak with superior knowledge of climate science, could make such elementary errors.
Maybe it’s great that Judith wants to show the gentle reader why it is best to be open to the idea that someone like John, demonstrating what can only be described by any objective observer as astonishing gaps in his knowledge of the basics (never mind current knowledge of fast moving climate science), could be a person who accurately articulates both the important problems with climate science and the evolving strength and consensus of the science. But it just seems so unlikely, almost to the point of an error in reasoning, to place any confidence in such a source.
Oh… and I wonder if it matters, even in just some small way, that he is associated with a think tank expressly funded to campaign against regulating emissions.
Any funding of people opposing mainstream climate science is massively outweighed by government funding of the latter. http://www.faststartfinance.org/content/contributing-countries
The IAC review of the IPCC has revealed irrefutable evidence of political interference, lack of transparency, bias, failure to respond to critical review comments, vague statements not supported by evidence, and lack of any policy to preclude conflict of interest. Anyone (outside climate science) with any engagement with science can recognise AR4 as a ‘snow job’. The IAC report is available at http://reviewipcc.interacademycouncil.net/report.html
This, coupled with the revelations of ‘climategate’, the hockey stick and ‘hide the decline’, undermines any confidence in the findings of the IPCC.
Dear Martha,
I agree wholeheartedly with the need to limit the great atmospheric experiment. It seems only prudent. But what if much of the science is wrong? The models reveal nothing but the expectations of modellers.
http://judithcurry.com/2011/04/25/science-without-method/#comment-64982
Earth systems are themselves dynamically complex and resist deterministic and correlative methods. The satellites, with their God’s eye view of global energy dynamics, say something entirely different about the causes of recent warming, and the results are dismissed with minimal rationale – other than when they say the right things about spectral absorption (Harries 2000) or are useful for analysing cloud feedback (Dessler 2010). What is a poor hydrologist and environmental scientist to make of this?
We know that we have decadal variability in rainfall – have known since the 1980s at least. This is linked in large part to patterns of standing waves in the Pacific, which are in turn linked globally in chaotic spatiotemporal Earth systems. There is no one cause of any of this – but the blocking patterns in the atmosphere seem driven in part by ‘top down’ solar UV forcing of Earth systems, especially at the poles. A very new idea but one with considerable potential – http://iopscience.iop.org/1748-9326/5/3/034008.
The associated decadal sea surface temperature (SST) patterns are very useful because they are persistent. I can say that Australia will be flood-prone for another decade or three, the Sahel will continue to green, India’s monsoons will intensify and the Americas will be dry, especially in the west. There will be fewer cyclones in the Gulf of Mexico and more in Australia and Indonesia. These are just simple rules evolved over 100 years of observation. Because La Nina is involved, we are looking at more low-level cloud (associated with cooler SSTs) in the Pacific and global cooling for a decade or three as well. You can surely see for yourself what this will do for the politics of decarbonisation of the global economy – unless the narrative is radically changed, and soon. Do you want to continue to rely on a ‘science’ that is certainly wrong on the dynamics of Earth systems and on the energy budget?
There is an effective approach to decarbonising the global economy that has multiple paths and multiple objectives. Health, education, safe water, sanitation, good corporate governance and free trade to stabilise population. Conservation and ecological restoration. Reducing black carbon and tropospheric ozone. Investing in energy research. There are many opportunities and many reasons to do this – primarily the emergence of humanity into a brighter future. It is happening, is obvious, is inevitable, is relentlessly pragmatic and is probably where we should focus rather than on the nonsense from both sides that is the climate wars.
Robert
Chief Hydrologist
…where we should focus rather than on the nonsense from both sides that is the climate wars.
Unfortunately the IPCC’s AGW advocacy is being used as the pretext for politicians to impose ‘economic remedies’ such as carbon taxes and ETS without any proof that these will have any measurable impact on temperatures or climate. Such ‘remedies’ will not only be ineffective but will also be economically damaging. Has anyone compared the economic performance indicators (GDP, inflation, unemployment, national debt, foreign exchange reserves) for European countries with ETS cf. those of countries without ETS? I suspect such comparisons may be odious/toxic to AGW proponents.
Yes – concentrating on taxes is counterproductive, ineffective and not likely to be implemented by anyone outside of the benighted west – the latter being a good thing for the world’s suffering poor. Taxes and carbon trading are the things I didn’t mention. But limiting the great atmospheric experiment is not automatically about taxes – there is a false science/policy nexus here that should be more openly debated.
I have tried twice now to engage Martha in a discussion – I fear she is more concerned with dropping by and flaming.
This post of Martha’s is unhelpful, disconnected, gormless and incoherent. I’d like my time back.
Cheers,
Big Dave
I added her to the “scroll past” list of commenters long ago.
Pointman
Yeah. After that thing about Dr. C in leathers.
Doh! Now that comment I might have been interested in.
Pointman
Big Dave,
IOW this post of Martha’s is precisely like her other posts.
I think her dependable consistency is a marvelous attribute.
Am I right in deducing that you don’t like the guy very much?
Coz though you talk about ‘obvious nonsense’, you don’t actually come up with any examples.
Looks like it’s personal and emotional to me.
Well, there is this:
“However, the uncertainty in Quantum Mechanics which Einstein was uncomfortable with, was about 40 orders of magnitude (i.e. 10^40) smaller than the known errors inherent in modern climate “theory”.
??
My helpful remarks were addressed to Martha. The relevance of your quote from JC escapes me.
The quote is not from JC, it is from Nicol. If I understand Martha’s point, it is Nicol’s “obvious nonsense” she was criticizing.
Perhaps you’ll have to explain why it is ‘obvious nonsense’. I think I understand a little about what Einstein meant, remember a bit of Heisenberg and Planck, and Nicol’s remarks don’t seem obviously wrong-headed to me.
Please elucidate – so that those of us a long time away from our study of QM can be reminded of why Nicol should be so ridiculed by climatologists.
Latimer,
the quote is comparing the minimum uncertainty dictated by quantum mechanics – in knowing both the position and momentum of a particular particle under investigation – to the error in a computer simulation of a model of macroscopic physics. The difference in scales makes this comparison ‘trying’, to say the least.
More than that, the author is clearly picking the minimum uncertainty as though that is some kind of meaningful marker in any real sense of quantum mechanics. In most experiments testing the fundamental implications of quantum mechanics, the error involved is never anywhere near the minimum uncertainty because the wavefunctions involved do not resemble the minimum uncertainty wavepackets and the equipment used in such experiments does not have resolution down to one part in 10^34.
On top of that, Einstein’s role in the quote is simply to bolster the author’s position and has absolutely no relevance in the context of what is being discussed.
So, first, it’s nonsense to compare the minimum quantum uncertainty to a macroscopic description of a physics system. Second, it’s nonsense to assume that any real experiment/simulation will attain minimum uncertainty, even for quantum systems. Third, it’s nonsense to bring up that Einstein was or was not comfortable with quantum mechanics when nonsensically comparing quantum mechanics to a climate model.
Is that a bit clearer?
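For reference, the 10^-34 scale here is just the Heisenberg bound in SI units – a standard textbook statement, not anything specific to Nicol’s essay:

$$\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}, \qquad \hbar \approx 1.05 \times 10^{-34}\ \mathrm{J\,s}$$

Only a minimum-uncertainty wavepacket saturates the inequality; real experimental states and instruments sit far above this floor, which is the point about resolution made above.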
If you want a more meaningful comparison, then one should compare the ability of numerical methods (like density functional theory or other ab initio techniques) to calculate and predict the quantum mechanical parameters of atoms, molecules and solids. There are still substantial disagreements between theory and experiment. I guess by the author’s standards, that means quantum mechanics isn’t ‘science’ either…
Maxwell said it very well, I think.
@maxwell
No need to instruct me on the uncertainties of quantum mechanics. It was the failure of the real world to act in accordance with theory that caused us to chuck the theory and me to decide on a career outside academe.
But I think you are doing no more than arguing about whether the relevant exponent in Nicol’s essay is 45 or 40 or 35. His rhetorical point is a good one and you are doing no more than reinforcing it by arguing about the detail.
Climatology gets nowhere near – not even by many orders of magnitude – the level of proof and predictability expected in many truly scientific fields. Whether it is 10^25 times or 10^45 times worse isn’t really the point. It is a stupendously large number.
And yet climatologists continually overplay their hand with nonsenses about ‘the science is settled’, ‘there is no debate’, ‘why should I show you my data – you’ll only try to find something wrong with it’…
And each time a lay person, like I once was, decides to take a look for themselves, they see that the claims are vastly overblown, the physical evidence does not match the theories, and that the climatologists are blithely adopting some very dubious positions. More like spivs than scientists.
I like Nicol’s analogy. If it’ll make you happy I’ll move ten thousand million times in your direction, and agree that the relevant exponent is 30, not 40. You can’t say fairer than that, can you? It’s still a very, very big number.
Latimer,
you’ve demonstrated a clear lack of reading comprehension in this conversation.
Nowhere do I make the point that the number of orders of magnitude set by the minimum-uncertainty value matters in Nicol’s argument.
I was pointing out that it’s a stupid comparison because one is a quantum system (ie very small) and the other a very large macroscopic system. From a purely physical perspective, the two situations are not comparable.
More than that, in a realistic comparison of the error from numerical calculations of quantum mechanical parameters of material systems to the climate models, we find that the errors are very similar! So not only is it a stupid analogy and comparison, it’s wrong too!
‘Climatology gets nowhere near – not even by many orders of magnitude – to the level of proof and predictability expected in many truly scientific fields.’
Maybe you could give us an example or two of currently researched calculations that have many orders of magnitude less noise, or better agreement with experiment, than climate models. As someone who routinely attends lectures and seminars on such calculations in the context of the quantum mechanics of molecules, I have a hard time believing that any exist, but am ready to be proven wrong.
This will be a great test of the scientific process. Latimer’s hypothesis: other currently researched fields have much more accuracy in determining experimental results from theory. Null hypothesis: other currently researched fields have about the same amount of error in such calculations, let’s say within an order of magnitude. Let’s find out if he can sufficiently reject the null.
But if you don’t have concrete examples of current research (which is what would be comparable to current climate research), then your argument is just as meaningless as Nicol’s.
It would be par for the course though.
@maxwell
Thanks for your helpful remarks about my comprehension ability.
And thanks also for reinforcing my point yet again that by arguing about the detail of Nicol’s rhetoric you are missing the big picture.
No point in continuing this sterile discussion any further. Apart from noting that in QM it is normal to make predictions and then test them against reality. It is this last step that builds credibility in its answers – however bizarre they may seem.
But in climatology the sequence seems to be: write model. Get it to compile. Run it once. Issue press release about dreadful consequences. Ensure that there are no backups, then modify it so that the original cannot be rerun. Run version 2. Issue press release (citing version 1 as evidence) that things are even worse than expected… and so on and so on.
I don’t see anywhere in that sequence the step called ‘check against reality’, which is the bit you need in order to test the ability (error) of the model. But then… it’s climatology. I don’t expect much. Lots of self-congratulation, but no actual science.
I seem to remember that even Newtonian mechanics can do things to six-plus significant figures and be shown to be right. How does that compare to your climate model?
Latimer,
this is how uninformed on these processes you are.
‘I seem to remember that even Newtonian mechanics can do things to six plus significant figures and be show to be right. How does that compare to your climate model?’
ANY MODEL CAN GIVE ANY NUMBER OF SIG FIGS UP TO 16! The number of significant digits to which a given physical model can calculate a parameter depends on a computer’s ability to represent any given number. Currently, the standard highest precision to which a number can be calculated on a typical processor is 16 significant digits. That’s the same no matter what kind of mechanics is being used.
In terms of agreement to a specific number of significant digits, the bar is set on the experimental side. Classical mechanics is known to such high precision because the experimental equipment is very simple and, therefore, can be characterised to several significant digits. The same is not true of satellite-based measurements of climate.
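A quick way to verify that 16-digit ceiling is to probe standard IEEE-754 double precision directly – a minimal Python check:

```python
import sys

# Double precision carries a 52-bit mantissa, i.e. roughly 15-16
# significant decimal digits, regardless of the physics being modelled.
print(sys.float_info.epsilon)  # ~2.22e-16: smallest eps with 1.0 + eps != 1.0
print(sys.float_info.dig)      # 15: decimal digits guaranteed representable
print(0.1 + 0.2 == 0.3)        # False: rounding shows up at the 16th digit
```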
What’s funnier to me, and to anyone else who actually knows what he/she is talking about, is the fact that you think that climate models are different from Newtonian mechanics.
CLIMATE MODELS ARE BASED ON NEWTONIAN MECHANICS! Pick up an actual book or actually investigate ANY of the claims you’re making. For someone claiming to know what is and what is not science, you certainly play an informed person quite well.
@maxwell
Sorry Maxwell… you must be doing ‘post-normal science’ or ‘pre-menstrual tension’ or something.
The bit that you seem to have missed (but then you are a climatologist, after all, so not much is really expected of your ability to understand science) is the ‘and be shown to be right’.
You may not have heard of these things called ‘experiments’. They are where the theory (e.g. models) is compared with actual measurements. And we have a funny old convention that if the theory and the experiment don’t agree, we change the theory!!!
Not like in climatology, where you just say ‘the data is consistent with our theory’ or ignore it altogether a la Hockey Stick. I know you will find this concept difficult to grasp… but then new and exciting things often are.
As to climate models being based on Newtonian mechanics, well, this doesn’t come as a shock to me. What would have been a bolt from the blue would have been if you had said ‘and we regularly evaluate them against reality and our reproducible error is 10% or 20% or 100% or 10^-34%’.
Just so long as your model gives you the results you wanted before you started… enough to get a pal-reviewed paper, a mensh in the citation index and the next grant… that seems to be all the checking that is done.
And having done 30 years in IT – including writing some early high-atmosphere reaction kinetic models – I’m tolerably familiar with the numerical accuracies of computing equipment.
But just doing the sums ain’t making a good model… it actually has to reflect reality as well. A shocking concept for you… but you will, with time and a break from academe, come to accept it.
Perhaps quantum mechanics has more in common with dynamically complex spatio-temporal chaotic Earth systems than imaginable at first glance.
One is the ontological evolution of a probability density function in the Schrödinger wave equation. The other is predictable only as a probability density function of Earth systems – I am refusing any more to use the term climate as it has been so debased – inhabiting a finite phase space in the ontological evolution of planetary standing waves.
Much better than the tomfoolery of reductionist and correlative alchemy – the latter as a description of Earth systems has real meaning and substance.
Chief,
I think that was the point of Tomas’ post from several weeks back. I think he was very much ‘inspired’ by the formalism of quantum mechanics.
One question: what is the physical relevance of ‘planetary standing waves’ in terms of the system? Are they ‘real’, so to speak, as in heat waves or ocean waves? Or are they more like a physical abstraction that would be more akin to a wavefunction in QM?
It’s interesting nonetheless. Maybe ‘climate’ has its own multivariable wave equation. Is it homogeneous or inhomogeneous? Weird.
Hi Max
I was joking about quantum mechanical similitude.
Standing waves in the coupled ocean/atmosphere system are another thing. These are long standing patterns – the AO, SAM, PNA, PDO, ENSO etc – that are persistent and change state occasionally.
Cheers
someone please clarify.
is the morality/immorality referred to here general good character, such as the foundations of Judeo-Christian morality? Or is it rather the expectation of advocacy in promoting a political agenda through ‘science’ – such as “gotta save the planet” BS?
Yeah. That’s the problem, right there.
Depends on who’s talking about it.
And which part they’re talking about.
And how little they know about the subject.
And a whole lot of IFs.
And a whole lot of assumptions.
IOW – nobody knows. I’ve been through that mess more than once and I’m not gonna do it again here.
There does seem to be a low signal-to-noise ratio in one segment of the published literature. Not in the basic science, in which I would include people publishing work in their sphere of expertise, but in the free-for-all areas such as climate sensitivity and its sidekick, attribution. Which I guess is the bulk of what feeds the internut debate.
The criticism over the blend of real and simulated data is probably justified in many such papers; it does make the arguments difficult to follow and sometimes leaves one wondering what, if anything, has been demonstrated.
On the subject of the models, I do wonder if the scientific community is overly polite about some of the groups and whether this tends to discredit the better groups by association. Put more brutally, I think that, if asked, those who know about these things would rank the models (and hence the groups) and be very rude indeed about those at the bottom of the pile.
Criticisms that I have heard are ones such as that models that don’t demonstrate blocking ought to be ignored for climate research. Also, models that don’t provide information on how real-world data has been used to characterise them should not be allowed into ensembles that are used to assess model skill. Perhaps the oldest complaint, relating to flux adjustments, has finally led to such models being ruled out of order.
Regarding these weak models, I am not implying that those modellers who say their models are not heavily characterised by long-term real-world measurement data are being untruthful. We only ever hear from very few of the modelling groups, but that does not mean that all groups behave in the same way. Now maybe I am wrong, but if that is the case perhaps someone would vouch for the other modelling groups. What little I do hear is that some of the models are more or less useless, but even then the models are not named – though I guess those in the business would know which ones they are.
I do understand that climate modelling is to some degree a matter of national prestige and that, due to the way it is constituted, the IPCC system is in a bit of a bind if it snubs some nations’ efforts. I also know that the CMIP people have been a bit rude about some aspects of model performance even at quite a basic level, e.g. getting the global mean temperature within even a few degrees of what is the case (in some of the CMIP3 models we were toast in pre-industrial times). I do not think that the due and valid criticism that the models receive from the modelling community itself is sufficiently communicated outside what is a relatively small group of scientists.
As far as the scientific method goes, I would welcome a more critical eye being cast on the models but cast in a detailed fashion designed to aid progress and understanding.
As I have often stated I am very concerned about what the next round of models will have to say about regional impacts, which in some cases may be national impacts. If the scientific method is not addressing itself to the awkward task of separating the modelling wheat from the chaff then it is doing all of us a disservice.
Alex
While the data sets do not agree on the actual measurements, none of them support a tropical upper-troposphere warming at over twice the rate of the surface. The idea that there is even some support for the models, and that the models have some reality, is simply wrong. The continued carrying of this corpse simply shows how some will not allow ANYTHING to get in the way of their narrative.
The scientists, if there are any still allowed to work in the climate arena, need to start over with their modeling using ALL the most recent data on the atmosphere and influences. Until they do, this is a total waste of time and money.
Have you noticed?
The magnitude of the global cooling rate since 2002 (http://bit.ly/dRdMYP), about 0.08 deg C per decade, is greater than that for the period from 1940 to 1970 (http://bit.ly/g2Z3NV), about 0.03 deg C per decade!
Will this trend continue?
The rate of cooling was faster over the last decade than from 1940 to 1970.
I don’t know, will this trend continue?
http://www.woodfortrees.org/plot/gistemp/from:2002/trend
Looks like choosing a different data set provides exactly the opposite conclusion.
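A toy illustration of the point – the series below is synthetic, invented for this sketch, not any real dataset – shows how the start year alone can flip the sign of a least-squares trend:

```python
# Synthetic series, for illustration only: the same data yields
# "warming" or "cooling" depending on the start year chosen.
years = list(range(1990, 2011))
temps = [0.00, 0.05, 0.02, 0.08, 0.10, 0.07, 0.15, 0.30, 0.12, 0.18,
         0.22, 0.28, 0.35, 0.30, 0.28, 0.26, 0.27, 0.25, 0.24, 0.26, 0.23]

def trend_per_decade(ys, ts):
    """Ordinary least-squares slope, converted to deg C per decade."""
    n = len(ys)
    my, mt = sum(ys) / n, sum(ts) / n
    num = sum((y - my) * (t - mt) for y, t in zip(ys, ts))
    den = sum((y - my) ** 2 for y in ys)
    return 10 * num / den

for start in (1990, 2002):
    i = years.index(start)
    print(f"from {start}: {trend_per_decade(years[i:], temps[i:]):+.2f} C/decade")
# from 1990: positive trend; from 2002: negative trend -- same data.
```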
‘Looks like choosing a different data set provides exactly the opposite conclusion’
Well, that’ll be a first in climatology :-) Never happened before.
Every other piece of data is entirely unequivocal. Even the errors all point in the same direction. Which is what led to the undeniable conclusion that the ‘Science is Settled’. There is no room for doubt. If any, it’s just that the facts are worse than we thought. Only a few short years ago we had Fifty Days to Save the Planet.
/sarc
I was cherry picking to show that cherry picking is wrong, anybody pick up on that?
I’m looking forward to your post on the troposphere hotspot.
How substantial an issue is it? Frequently I hear the lack of a hotspot dismissed as not important, just one part of the model outcome. Is this the case, or is this a major discrepancy that calls a whole model into question? If a model is an enhanced weather model being run for a longer time frame, can the model be considered reliable for anything, and if so, why?
It is my understanding that the “enhanced” greenhouse effect has to do with positive feedbacks on the increased temperatures from carbon dioxide warming, particularly the positive feedback of increased water vapor. From AR4:
From Roger Pielke Sr.’s blog today:
He was also referring to a post by Marcel Crok where he said:
If you have a theory which cannot be measured, can you
a. input unmeasurable data into a computer model and expect robust results?
b. in fact claim to have a theory at all, if the criteria by which the theory is to be proven cannot even be measured?
Just asking.
http://jer-skepticscorner.blogspot.com/2011/04/case-of-vapors.html
Thank you Jerry. That is the most compelling information I have seen so far!
Pielke Sr. has truly ‘hit the nail on the head’. No wonder the IAC declined to comment on IPCC’s conclusions.
Since he has a hammer
He hammers in the morning,
He hammers in the evening
He hammers all the livelong day.
Can’t ya hear the whistle blowing
For a dollar and a dime a day.
==============
Theories about anything are only useful if they can make reliable predictions about what *will* happen. That they can explain what *did* happen is nice, and may be interesting in its own right. But without being able to tell us something about the future, they are merely curiosities.
In the real world, outside the hallowed halls of academe (with all its quaint shibboleths and rituals and traditions and deference that make the Court of Queen Victoria look like an informal rumpus room), the people who make their living by being proved right or wrong each day are the professional horse racing tipsters. They lay down their predictions in black and white each day… 1st Fast Thoroughbred, 2nd Great Runner, 3rd Pretty Good Horse… and by next evening their success or failure is visible to all. No excuses, nowhere to hide… either the horses came in as predicted or they didn’t. If they did, great; if not, the punters follow a different guy’s wisdom.
Seems to me that climatology is no more than tipstering writ large and on a bigger scale. But with one crucial difference… its predictions are deliberately couched in such terms as to be non-testable. Or at least with plenty of wiggle room and ‘getoutofitability’. Other fortune tellers do the same. Gypsy Rose Lee predicts meeting a tall dark stranger, unless they should be short and blond. The old Scottish weather forecast – ‘if you can see the mountain it’s going to rain… if you can’t, it is already raining’ – is another.
Which is all a long way to say that I agree with Jerry’s point B.
‘can you in fact claim to have a theory at all if the criteria by which the theory is to be proven can not even be measured?’
And it leads my nasty suspicious mind to wonder why the climatologists persist in trying to pull this trick of ensuring that their predictions are never testable. You couldn’t get away with such sleight of hand in the saloon bar of the Dog & Duck for more than ten minutes before even the most befuddled drinker would cotton on to the fact that he was being played. Perhaps they should spend less time in the academic world and more on the racecourse… or even in the D&D.
Or even – though I fear I will never live long enough to see the day – make some testable climate predictions ahead of time and stand by their results. Doing so – and being consistently right – would rebuild their credibility considerably… even with a hardened sceptic like me. And might even lead to a new career in racing journalism :-)
The very fact that they cannot prove the basics of the theory, i.e. increased water vapor or the tropical troposphere signature etc., is the very reason that they make such absurd pronouncements as regards other events being “consistent” with their theory.
This is also why, and obviously so to people who pay attention, the climate science community is riddled with corruption. The very fact that people who claim to be “honest brokers”, such as the esteemed Richard Muller, buy into the theory while simultaneously condemning the methods and persons responsible for the “promotion” of the theory is not only hypocritical, it is knowingly continuing the promotion of a scientific fraud upon society. If the scientists behind the theory are using unscientific methods to prove their theory, why would anyone with an ounce of common sense, or integrity for that matter, buy into the theory that these corrupt scientists using corrupt methods are so determined to defend? $$$$$$$$$$$$$$$$
That is an interesting point.
It may be that climatology is not riddled with corruption, and that we outsiders are misled into thinking it is because the field gives every appearance of being largely run by immature and childish individuals acting in ways designed to give rise to exactly those suspicions. It may not be so.
But Jeez, they don’t go out of their way to demonstrate that their field is whiter than white. Everywhere you turn there are examples of sharp practice… from pal-review to FOI evasion. From hockey sticks to imaginary Chinese data. From rigged ‘independent’ whitewashes to ‘grey literature’ in the IPCC report. The latest scandal from UEA is that they won’t comply with FOI because if they showed how their CRU guys worked, nobody would ever publish their work again (see WUWT and CA today).
All this stuff just stinks to high heaven. As the man says, if it walks like a bunch of shysters, talks like a bunch of shysters and quacks like a bunch of shysters…………….
There is a so called greenhouse effect. The ‘enhanced’ refers to adding gases and water vapour. The models ‘calculate’ water vapour on the basis of constant relative humidity. There is no shortage of water vapour and warm air holds more water vapour than colder air.
Minschwaner 2004 suggests that relative humidity might not be quite constant: ‘Their work verified water vapor is increasing in the atmosphere as the surface warms. They found the increases in water vapor were not as high as many climate-forecasting computer models have assumed. “Our study confirms the existence of a positive water vapor feedback in the atmosphere, but it may be weaker than we expected,” Minschwaner said.’ http://www.nasa.gov/vision/earth/lookingatearth/warmer_humidity.html
Cherries for everyone.
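To put a number on “warmer air holds more water vapour”: if relative humidity is held fixed, vapour pressure scales with the saturation curve, which rises roughly 6–7% per kelvin near surface temperatures. A minimal sketch using the Magnus approximation (an empirical fit, chosen here purely for convenience):

```python
import math

def e_sat(t_c):
    """Saturation vapour pressure (hPa), Magnus approximation, t_c in deg C."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

RH = 0.5  # hold relative humidity fixed at 50% as the air warms
for t in (14.0, 15.0, 16.0):
    print(f"{t:4.1f} C: vapour pressure {RH * e_sat(t):5.2f} hPa")
# Each +1 C step raises vapour pressure by roughly 6-7% -- this is the
# constant-RH scaling that the models are said to assume.
```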
This is the biggest problem for me. The assumption that average water vapour concentration is *only* a function of temperature (which, in turn, is *only* a function of GHG concs). The reality is, of course, quite different, as water vapour concs depend on a whole host of other variables, with variability extending from decadal to centennial to millennial scales.
The secondary problem is diagnosing a closed-loop system. For my sins as a researcher into remote sensing systems, I have had to debug linear closed-loop systems in the past – that is, correctly designed closed-loop systems with an unknown problem (e.g. a software error) causing them to break. Observing the parameters of the system is just not a good way to diagnose these systems; a single software error will cause all parameters to drift together. It is almost always impossible to identify causality this way in a closed-loop system. Debugging is typically achieved by breaking the loop and running the system in an open-loop form.
For the atmosphere, let’s say people do find an increase in water vapour to match recent warming. How do you work out whether the increase in water vapour was caused by the warming, or caused the warming?
My experience here refers to systems which are linear (by design). Add on the complication that the atmosphere is complex and non-linear – including the discussions on this blog on chaos – and the difficulties of diagnosis become orders of magnitude greater, perhaps even intractable. Yet climate scientists lack any kind of humility when faced with these problems; they are just dismissed, and advocates place them on a pedestal not to be questioned. And they wonder why they are failing to convince so many “Joes with science degrees”.
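To see this in even a trivial toy example (purely illustrative Python; arbitrary coefficients, in no sense a model of the atmosphere): inject a small bias into either block of a loop and both observed signals settle away from zero, so the drifted values alone do not localise the fault.

```python
# Toy linear closed loop. A bias injected into EITHER block moves BOTH
# observed signals off zero; observing y and u alone cannot say which
# block is at fault -- you would have to break the loop to find out.
def run(plant_bias=0.0, ctrl_bias=0.0, steps=500):
    y = u = 0.0
    for _ in range(steps):
        y = 0.9 * y + 0.5 * u + plant_bias  # "plant" block
        u = -0.4 * y + ctrl_bias            # "controller" block
    return round(y, 4), round(u, 4)

print(run())                   # healthy loop:   (0.0, 0.0)
print(run(plant_bias=0.01))    # fault in plant: both signals offset
print(run(ctrl_bias=0.01))     # fault in ctrl:  both signals offset
```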
Hi Spence
I am not one to argue against nonlinearity and Dragon Kings. The more people like ourselves who see through the farce that is attribution and prediction – the better.
My point was exactly that water vapour is a complex thing and a view should not be based on one presentation by a journalist. That said – other things being equal – warmer air does hold more water vapour.
Cheers
Hi Chief
I fully agree with your comments here.
I would note though, whilst warmer air has the capacity to carry more water vapour, this is not a constraint. (Conversely, it can be a constraint that cold air must carry less). I know you realise this but I am a sucker for stating the obvious :)
It is also interesting to note that the driest places on earth are found in the coldest and hottest regions – e.g. the deserts of Antarctica at one end, Death Valley at the other. You can’t escape from those nonlinearities.
Spence,
Your comment articulates quite nicely what I have been referring to as ‘diagnosis by exclusion’, i.e. “we cannot account for the temperature data unless we invoke large positive feedbacks and high climate sensitivity”. What is going under the radar, of course, is that this approach assumes that ALL the relevant variables (including positive and negative feedbacks) have been identified, quantified and accurately represented in the computer model, on a spatiotemporal scale which is meaningful when compared to the real world. Given the track record of climate science to date (and to plagiarise their terminology) I consider this ‘extremely unlikely’.
Absolutely! There is much reason to be sceptical. :)
Chief
The M&D 2004 report, which you cite, even produced a nice chart showing how water vapor content (specific humidity) increased with temperature in their satellite observations.
A closer look at this chart shows that the IPCC assumption of “constant RH with warming following Clausius-Clapeyron” gives greatly exaggerated values of specific humidity, and that even the M&D model showed much higher SH increase with warming than the actual observations (I have extended the graph to show the full 1K impact).
http://farm4.static.flickr.com/3347/3610454667_9ac0b7773f_b.jpg
Max
Thanks – I hadn’t realised the difference was so great. Another pillar of climate modelling bites the dust?
The use of the IPCC “forcings and feedbacks” formalism as a vehicle of argument continues. It is a method of calculation, not a physical theory. It simply carries the previously well known law of conservation of energy into climatology. No one can object to that.
But to create a science of climatology, laws of nature specific to climatology are needed. Global-scale laws that say something specifically about the energy transport process of the earth.
Why is it that, for the year-round global average, the radiation emitted by the solid earth is twice the atmospheric radiation emitted to space? The Kiehl and Trenberth 1997 figures are 390 W m^-2 and 195 W m^-2. Balfour Stewart’s steel shell model has this property too. Is this a meaningless coincidence? Can you prove it to be so by some systematic argument? Pray tell.
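For reference, the factor of two in the shell model comes from a one-line balance – the standard single-layer textbook argument, stated here only to frame the question. An opaque shell at temperature $T_a$ radiates equally upward and downward, so to shed what it absorbs from the surface it must satisfy

$$\sigma T_s^4 \;=\; 2\,\sigma T_a^4 ,$$

and with $\sigma T_a^4 \approx 195\ \mathrm{W\,m^{-2}}$ to space, the surface emission is $\approx 390\ \mathrm{W\,m^{-2}}$. Whether the real atmosphere’s near-agreement with this idealisation is meaningful or coincidental is exactly the question being posed.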
Talk of “feedback” is talk about the mathematical structure of a method of calculation, not talk about laws of physics. The relevant law of physics seems to be that the global water vapour content of the atmosphere tracks its global average temperature rather well. Can you explain why these two variables seem locked together? Can you give a precise physical reason why the tracking is at a relative humidity of about 50%?
A hard one is to explain, in general physical terms, why cloud cover is at about 50 or 60%. Does it also track temperature? Why?
When would the current interglacial period drift into a glacial period, supposing that there were no man-made greenhouse gas emissions? Why?
These are questions about physics. In general, to answer a question about the precise magnitude of the effect of a small perturbation to a physical system, one needs to know something specific about the dynamical structure of that system.
Christopher Game
“What scientific methodology is in operation here?”
Maybe it would be easier to say what is not the scientific method when it comes to climate science. From what I have read since beginning to frequent climate blogs, when engaging in climate science:
You do not have to posit a null hypothesis;
You do not have to formulate your theory in a way that is falsifiable;
You do not have to include adverse data in your graphs;
You do not have to work with statisticians when creating new statistical methods in doing your research;
You do not have to validate your climate models;
You do not have to be accurate in your predictions;
You do not have to understand the forcing effect of water vapor in constructing your model;
You do not have to make public your data and code after publishing (at least not to those who might disagree with you);
You do not have to comply with FOIA;
You do not have to conduct experiments to prove elements of your theory;
You do not have to properly site your weather stations;
You do not have to adjust (much) for UHI;
You do not have to actually measure temperature over vast swaths of the land, oceans and atmosphere to calculate global average temperature (or to publish dramatic graphs thereof);
You do not have to adjust your theory when your predictions are wrong or other adverse data becomes available;
You do not have to notice, let alone criticize, dishonesty by your fellow climate scientists (at least not the ones you agree with);
You do not have to engage in fair, open and reasoned dialogue with those who disagree with you;
You don’t even have to read material written by those who disagree with you;
and best of all
You don’t have to admit you are wrong about anything…ever.
@Gary
Please don’t call it ‘climate science’. Whatever these guys do has few connections with science.
‘Climatology’ will do. Like astrology or UFOlogy.
Latimer,
Your comment reminds me of when I read Steve McIntyre’s quoting of Esper, stating that the ability to select samples (i.e., cherrypicking) was an advantage “unique to dendroclimatology”. My immediate thought was that it is not unique to dendroclimatology – it is also used by astrologers, water diviners, and all manner of other pseudosciences and quackery.
A Quote from Esper et al 2003
Gary M,
Ouch.
While GaryM’s list may seem to be an attempt at humor, I think what he has highlighted is a growing perception of climate science and climate scientists among technical people who work in other sciences and in fields like engineering, particularly outside of academia – the people Judith once referred to as tier 2, if I remember correctly. I would be worried about this perception if I were in the climate science field – it’s very real and growing.
Humour???
I thought it was the draft syllabus for ‘Climatology 101’ as drawn from the experiences of the ‘leaders’ of the field. And, for once, I thought they’d made a good job of it. Very true to life.
Just “an attempt at humor”? …well that hurts…
But more seriously, except for the last one, I have read comments making each of those arguments in various formulations within the last 6 months. Each of them alone can sound innocuous enough. When you list them all together (and there are others) it does not paint a pretty picture.
I just meant that alarmists will probably see it as nothing more than someone trying to be funny, when in fact I believe it is a very real and growing perception – accurate too.
Has anyone ever come across an alarmist who demonstrates a sense of humour? I thought that the two were mutually exclusive. Surely they are obliged to see everything in the world through the prism of imminent thermageddon. Far too serious a matter for joking about.
I imagine the IPCC review meetings are a real bundle of fun and helpless hilarity…….especially the final wrap up with Chucklechops Pachauri laughing all the way to his bank.
Q: How many alarmists does it take to screw in a light bulb?
A: That’s not funny!
I’ve been told that alarmists don’t use light bulbs because they’re reducing their carbon footprint. :-)
Alarmists don’t do screwing.
They are too busy saving the world and berating the rest of humanity for its sins and lack of love for gaia to have any spare time for recreation/procreation.
Latimer – have you actually ever met an alarmist? They’re really pretty much like normal human beings (which is to say, they vary quite widely in their appetites and character traits).
@paul
Yes. I have met many alarmists. The mildly green-tinted ones are quite tolerable. If they want to worry about their carbon footprint, that’s fine with me.
It is the single-minded committed earthsavers that I lampoon. I have the misfortune to know one very well. In olden times he’d have been a hell and brimstone ‘you are all evil sinners. Change your ways or you will die’ sort of preacher… full of hatred and bile.
Come to think of it, he is a hell and brimstone ‘you are all evil sinners. Change your ways or you will die’ sort of preacher… full of hatred and bile.
Nothing changes… just the god of their imagination. It is the hatred and bile that is constant through the years…
That is actually funny. We have a saying in my country – ‘the only reason angels can fly is that they take themselves so lightly.’
I can see both sides. My core position is that changing the composition of the atmosphere is, ipso facto, probably not the most prudent use of human ingenuity. On the other hand, carbon taxes and the like just seem so hideously malformed. The Hunchback of Notre Dame of economics. Ludicrously ringing the bell but succeeding in nothing else.
Have you read the Hartwell post – http://judithcurry.com/2011/04/10/hartwell-paper-game-changer/ There are approaches that involve multiple paths and multiple objectives – that I call the MDG’s plus approach.
Paul –
On reflection, how did those get into that light bulb to do whatever it was that they were doing? :-)
A: One, to stick his finger in the socket and test whether or not the power is on, and two, to carry him to the hospital when he finds out it was.
I too used to think that warmism and wit could not coexist, until the Richard Curtis splatter video thing – I haven’t yet quite recovered.
GaryM
They do admit their wrongs and doubts. Unfortunately, they do it in private:
I believe that the recent warmth was probably matched about 1000 years ago. I do not believe that global mean annual temperatures have simply cooled progressively over thousands of years as Mike appears to and I contend that that there is strong evidence for major changes in climate over the Holocene (not Milankovich) that require explanation and that could represent part of the current or future background variability of our climate.
http://bit.ly/hviRVE
The verification period, the biggest “miss” was an apparently very warm year in the late 19th century that we did not get right at all. This makes criticisms of the “antis” difficult to respond to (they have not yet risen to this level of sophistication, but they are “on the scent”).
http://bit.ly/ggpyM1
GaryM
“You don’t have to admit you are wrong about anything…ever”.
Have you been speaking to my wife?
Your wife must be in secret contact with my g/f.
Unless you live in London, this could be the only actual evidence for the otherwise bonkers theory of teleconnections.
Latimer,
Hopefully, your g/f either
1. Doesn’t read these posts
2. Has a wonderful sense of humor, or
3. Does read these posts and is not your only g/f :)
My sweetie and I just had our 41st anniversary, so by now she could care less about what I write :)
@Kent
At least one of the above is true :-)
Happy Anniversary
The question of constant relative humidity in the tropospheric column is crucial to the cAGW debate and has not, in my opinion, received adequate treatment. Quoting from the abstract of Paltridge et al. (2009), an NCEP reanalysis data study:
The upper-level negative trends in q are inconsistent with climate-model calculations and are largely (but not completely) inconsistent with satellite data. Water vapor feedback in climate models is positive mainly because of their roughly constant relative humidity (i.e., increasing q) in the mid-to-upper troposphere as the planet warms. Negative trends in q as found in the NCEP data would imply that long-term water vapor feedback is negative – that it would reduce rather than amplify the response of the climate system to external forcing such as that from increasing atmospheric CO2.
Except this study found that specific humidity is increasing, confirming a positive water vapor feedback, although it also found that relative humidity was decreasing.
http://www.nasa.gov/vision/earth/lookingatearth/warmer_humidity.html
“The increases in water vapor with warmer temperatures are not large enough to maintain a constant relative humidity.”
If relative humidity goes down, then the amount of cloud goes down, which further reinforces the positive water vapor feedback.
Of course there are inconsistencies between the radiosonde measurements and satellites which need clearing up, the following comment from your reference is pertinent:
Using the UARS data to actually quantify both specific humidity and relative humidity, the researchers found, while water vapor does increase with temperature in the upper troposphere, the feedback effect is not as strong as models have predicted. “The increases in water vapor with warmer temperatures are not large enough to maintain a constant relative humidity,” Minschwaner said. These new findings will be useful for testing and improving global climate models.
Something worth exploring, no?
True. If the models assume that relative humidity stays flat as specific humidity increases, while the data show that relative humidity decreases as specific humidity increases, then future models could be improved by taking that into account.
Martha, isn’t ‘fast moving settled climate science’ an oxymoron?
Rob – it is certainly a poorly constructed sentence (of which I am frequently guilty) but if it means “dynamic and universally accepted” it need not be an oxymoron. I’ll give Martha the benefit of the doubt, but just this time!
Another mistake you should flag:
“So how do our IPCC scientists deal with this? Do they revise the theory to suit the experimental result, for example by reducing the climate sensitivity assumed in their GCMs?”
Sensitivity is not assumed.
Steven Mosher
You are stepping onto a slippery slope with your statement that “sensitivity is not assumed”.
The IPCC estimate of 2xCO2 climate sensitivity is a result of GCM simulations, which are based on many inputs, which themselves are based on hypothetical deliberations and interpretations of selected paleo-climate data, all of which entails a lot of “assuming” along the way.
And we all know how to spell ASS-U-ME.
Max
Actually, sensitivity is not “assumed” in global numerical climate models; sensitivity is a net result of the integration of the model equations. Of course you can “tune” the model parameterizations and end up with a different sensitivity (e.g. change the cloud microphysical parameterizations). My previous post, “What can we learn from climate models,” describes how/why climate models are tuned:
http://judithcurry.com/2010/10/03/what-can-we-learn-from-climate-models/
You’re assuming a complete understanding of the feedback mechanism though, which practically amounts to the same thing.
Actually, no. Climate models are a black box when it comes to producing the sensitivity, since it is impossible to trace things through owing to nonlinearities and complexity. If you tweak the microphysics parameterization, say, it will probably change the sensitivity, but you cannot predict in advance even the sign of the change.
And what exactly do you mean by a “black box” in this context?
And in what way do you think that climate models “produce sensitivity”?
Please explain.
Let me make it simple: a two-variable model.
Inputs:
1. TSI forcing
2. CO2 ppm
You spin the model up and run it until it achieves equilibrium.
Temperature = 14C
Doubling test:
Inputs:
1. TSI
2. 2x CO2
You run the model until it achieves equilibrium.
Output: temp = 17C
What is the equilibrium sensitivity response to doubling CO2?
17 - 14 = 3C. 3C for doubling. It’s an output, not an assumption.
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-6.html
Climate sensitivity is a metric used to characterise the response of the global climate system to a given forcing. It is broadly defined as the equilibrium global mean surface temperature change following a doubling of atmospheric CO2 concentration (see Box 10.2). Spread in model climate sensitivity is a major factor contributing to the range in projections of future climate changes (see Chapter 10) along with uncertainties in future emission scenarios and rates of oceanic heat uptake.
Definition
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-6-2.html
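To make that doubling test concrete, here is a toy zero-dimensional energy-balance model in Python. It is a sketch only – the constants are illustrative and the relaxation is crude, nothing like a real GCM – but it shows the point: nowhere is a sensitivity typed in, and the number only appears when you difference two runs.

```python
# Toy zero-dimensional energy-balance model (illustrative constants only,
# not any real GCM). Equilibrium temperature emerges from integrating
# dT/dt; sensitivity is computed afterwards by differencing two runs.

SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # total solar irradiance, W m^-2
ALBEDO = 0.3
EPSILON = 0.61     # effective emissivity: a crude greenhouse stand-in

def equilibrium_temp(co2_forcing, dt=0.01, steps=50_000, heat_cap=1.0):
    """Integrate C*dT/dt = absorbed solar + CO2 forcing - emitted IR."""
    T = 250.0  # arbitrary cold start; "spin up" to equilibrium
    for _ in range(steps):
        absorbed = S0 * (1 - ALBEDO) / 4 + co2_forcing
        emitted = EPSILON * SIGMA * T**4
        T += dt * (absorbed - emitted) / heat_cap
    return T

T1 = equilibrium_temp(co2_forcing=0.0)  # control run: ~288 K
T2 = equilibrium_temp(co2_forcing=3.7)  # ~3.7 W m^-2, canonical 2xCO2 forcing
print(f"equilibrium sensitivity (an OUTPUT): {T2 - T1:.2f} K")
```

With no feedbacks, this toy lands near the ~1.1 K Planck response; in a full model, water vapor, lapse-rate, cloud and albedo responses then push that number up or down – which is exactly why the output cannot be known in advance.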
So climate sensitivity is not assumed, because it is a result of the integration of equations?
Excuse me, but:
The forcing for each forcing variable has to be assumed, as its magnitude and its interactions are not known.
Are you saying that these are known? Or that they can be distilled from the real-life climate using different sorts of equations?
Please be clear.
Forcing is not feedback. Forcing is something that is applied externally in the models (e.g. CO2 concentration, incoming solar flux). The model and its equations respond to these forcings, and the model response to these forcing may be amplified or damped by feedbacks from internal processes in the models.
Applied externally to what? This is where the confusion begins. For example, the CO2 is inside the atmosphere. The ocean is external. Chaotic systems oscillate under constant forcing solely due to feedbacks, so no external forcing causes the changes. And so it goes, the distinction does not work.
It’s about how you define the system. The sun is external to the earth and its atmosphere, but internal if you define the system as the solar system. CO2 is external to the system if the system does not include chemistry; once you start including interactive biogeochemical cycles, CO2 is no longer external. Same for aerosols. So the distinction does work, provided that you are clear about how you are defining your system.
I am aware that forcings are not the same as feedback.
But your point was that climate sensitivity is not assumed in the models.
Both forcings and feedback are part of what we call climate sensitivity.
So as long as the different models and the different model runs have different values for each forcing component and each feedback component, how can you claim that “sensitivity is not assumed” when it is composed of two variables, both assumed?
Climate sensitivity is a measure of how responsive the temperature of the climate system is to a change in the radiative forcing. So if the forcing doubles then the temperature response doubles, but the sensitivity remains the same. So by changing the forcing, you do not change the sensitivity but rather the response.
8.6.2 Interpreting the Range of Climate Sensitivity Estimates Among General Circulation Models
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-6-2.html
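A one-line arithmetic check of that distinction (the numbers below are illustrative only, not taken from any model): if sensitivity is treated as a proportionality constant, then response = sensitivity × forcing, and changing the forcing changes the response, not the sensitivity.

```python
# Sensitivity as a proportionality constant (illustrative numbers only):
# response = lam * forcing. Changing the forcing scales the response;
# lam itself is untouched.
lam = 0.81        # K per W m^-2, an illustrative sensitivity parameter
F_2xCO2 = 3.7     # W m^-2, canonical forcing for doubled CO2

print(lam * F_2xCO2)      # ~3.0 K response to one doubling
print(lam * 2 * F_2xCO2)  # double the forcing -> double the response,
                          # but lam (the sensitivity) has not changed
```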
I guess in scientific jargon an assumption is called a “useful concept” :)
To make it simpler:
How can “climate sensitivity” not be an assumption of the model?
As long as “climate sensitivity” consists only of two factors:
1. Climate forcings.
2. Climate feedback.
If these two factors are assumptions of the model, how can climate sensitivity not be an assumption?
Of course you could call it something else than an assumption, but that would only be wordplay, not worthy of the last hours of the Easter holiday.
The fact is that climate sensitivity is an assumption, since it is a direct product of other assumptions.
Nope. If you want to force a global climate model to give you a climate sensitivity of, say, 3C, you will need to run many experiments tweaking a number of parameters until you land on 3C. So sensitivity is not an assumption in climate models.
“So how do our IPCC scientists deal with this? Do they revise the theory to suit the experimental result, for example by reducing the climate sensitivity assumed in their GCMs? Do they carry out different experiments (i.e., collect new and different datasets) which might give more or better information? Do they go back to basics in preparing a new model altogether, or considering statistical models more carefully? Do they look at possible solar influences instead of carbon dioxide? Do they allow the likelihood that papers by persons like Svensmark, Spencer, Lindzen, Soon, Shaviv, Scafetta and McLean (to name just a few of the well-credentialed scientists who are currently searching for alternatives to the moribund IPCC global warming hypothesis) might be providing new insights into the causes of contemporary climate change?
Of course not. That would be silly. For there is a scientific consensus about the matter, and that should be that.”
##################
I’m sorry, Judith, but this last paragraph contains a lot of silliness.
Sentence by sentence:
1. Do they revise the theory to suit the experimental result, for example by reducing the climate sensitivity assumed in their GCMs?
This wrongly assumes that sensitivity is an assumption put into models. Further, the history of GCM development shows that the models (the theory) ARE revised to better suit the results. For example, it’s recognized that the models underpredicted the ice melt; nobody looks the other way on these matters.
2. Do they carry out different experiments (i.e., collect new and different datasets) which might give more or better information?
Yes they do. Didn’t we just lose a satellite on launch that was going to collect aerosol information? Haven’t people been working on creating a harmonized land use dataset (HYDE)?
3. Do they go back to basics in preparing a new model altogether, or considering statistical models more carefully?
This question clearly shows a lack of understanding of complex model development. Does going back to basics include rediscovering basic physical laws? Rewriting the orbital mechanics? As for statistical models, the biggest issue is that statistical models are not comprehensive. A statistical model may by chance do a better job of predicting ice melt, but you cannot replace a comprehensive model that predicts temperature, pressure, winds, currents, surface sea salt, ice cover, snow cover, and precipitation (the climate) with a narrow statistical model that “explains” a single feature. That is not the scientific method.
4. Do they look at possible solar influences instead of carbon dioxide?
False dilemma. It is not either/or; it is not solar INSTEAD OF CO2. The right question is: do they look at possible solar influences IN ADDITION TO CO2? The answer is yes.
5. Do they allow the likelihood that papers by persons like Svensmark, Spencer, Lindzen, Soon, Shaviv, Scafetta and McLean (to name just a few of the well-credentialed scientists who are currently searching for alternatives to the moribund IPCC global warming hypothesis) might be providing new insights into the causes of contemporary climate change?
Yes, they allow the likelihood. It’s a LOW likelihood, primarily because these authors (for the most part) are not really offering ways forward to improve our understanding. Take Scafetta. What he would need to do is provide a physical theory that one could incorporate into a GCM. The inputs to Scafetta’s model are positions; the output is the average temperature of the globe. The problem? The average temperature of the globe isn’t a GCM input.
There is a problem with ending a piece with questions that you think are rhetorical, when they are not.
steven,
If I am reading you correctly, are you stating that you believe the ‘consensus’ is properly recognizing the sensitivity?
There are, roughly, three methods for estimating sensitivity:
1. Paleo studies (see Hansen).
2. Empirical studies (see Schwartz).
3. OUTPUTS from GCMs.
Each of those methods gives a different range of values, with substantial overlap. The point I am making is that sensitivity is NOT an assumption of the models. In fact, the models are probably the least certain evidence of the range, as Hansen notes. For a cartoon version of how sensitivity is an output of the GCM: you set your forcings (TSI, aerosols, GHGs) to some initial values. You DOUBLE CO2. You let the climate model respond (takes decades) and it reaches an equilibrium at some point.
You note the temp when you started: X. You note the temp when the system reaches equilibrium: X + 2.7C. That’s the sensitivity to doubling CO2. Not an INPUT, not an assumption. It’s an output. ModelE from GISS gives 2.7.
As Hansen notes, the sensitivity output by models is the most uncertain measure, because we know the models don’t get all the processes represented.
“1. Do they revise the theory to suit the experimental result, for example by reducing the climate sensitivity assumed in their GCMs?
This wrongly assumes that sensitivity is an assumption put into models.”
Well, as long as the specific climate sensitivity to different forcing factors is not known, and the interactions of these forcings are not known, how else are these to be incorporated into climate models, if not through assumptions about their forcing capability?
Are you suggesting that the forcing capability of all climate forcings is already known, and therefore treated as constants in the models rather than as assumptions?
“Well, as long as the specific climate sensitivity to different forcing factors is not known, and the interactions of these forcings are not known, how else are these to be incorporated into climate models, if not through assumptions about their forcing capability?
Are you suggesting that the forcing capability of all climate forcings is already known, and therefore treated as constants in the models rather than as assumptions?”
This is wrong. Let’s take a simple forcing: TSI. Total solar irradiance is a forcing. It is expressed in watts, and those watts drive the physics models, which are also in watts. The assumption is not the sensitivity, but rather this:
1. We know past TSI figures.
2. The effect of the sun is limited to this.
3. We can predict future forcing.
With other forcings, say CO2, the issue is modelling how changes in CO2 change the transmission of radiation. With aerosols, for example, you have a bunch of problems: what were the levels? Do they reflect or absorb? How much? Etc.
The forcings are not constants; you can see what the forcing inputs look like by searching for them (GIYF).
In general, skeptics don’t get that the models are not heavily relied on as the best evidence of sensitivity. But they are useful for executing what-if scenarios. Since the sensitivities of the models overlap with other, non-model-based estimates of sensitivity, their projections can be used as a guide for policy.
Think of it this way. The best evidence for sensitivity is the paleo record and empirical studies of observation records. Those two give you a wide range. Then you model the climate. You check its output and calculate a sensitivity. The sensitivity of the models is consistent with the other methods; therefore, it’s reasonable to use it in what-if studies.
If you don’t like models, then use Hansen’s response function – a much cruder tool, since it only outputs temperature.
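For the flavor of such a response-function approach, here is a minimal sketch. The exponential kernel below is made up for illustration – it is not Hansen’s published function – but the structure is the point: temperature is just past forcing increments convolved with a response kernel, and temperature is all it outputs.

```python
import math

# Sketch of a response-function temperature estimate: convolve forcing
# increments with a response kernel. The kernel here is a made-up
# exponential relaxation, NOT Hansen's actual published function.

def response_kernel(years, sensitivity=0.8, tau=15.0):
    """Fraction of the equilibrium response realized `years` after a step."""
    return sensitivity * (1.0 - math.exp(-years / tau))

def temperature_response(forcing_increments):
    """forcing_increments[i] = extra forcing (W m^-2) added in year i."""
    n = len(forcing_increments)
    return [sum(forcing_increments[j] * response_kernel(i - j)
                for j in range(i + 1))
            for i in range(n)]

ramp = [0.04] * 50  # a steady ramp: +0.04 W m^-2 per year for 50 years
print(f"warming after 50 years: {temperature_response(ramp)[-1]:.2f} K")
```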
Thanks, well put.
I think the heart of this matter (in the article) is that Big Science often undermines the scientific method (the LHC, NASA, etc.).
The Wikipedia entry on ‘Big Science’ is interesting.
here
Also, mathematicians, systems engineers and computer scientists know that some problems just have too many variables to solve accurately. Just look at weather prediction: a 30% chance of rain is a statistic, not a prediction; but the weather scientist who says the temperature will reach 15 degrees C the next day, when it only reaches 5 degrees, is common in my area – and that is a prediction.
Judith Curry
I liked your comments on the “Science without Method” article from the Australian Quadrant.
The discussion of “real” science versus the “strange mix of empirical and virtual reality reasoning that characterises contemporary climate change research” is interesting.
But the quote below by Bohm on “post-modern” science bothers me.
This introduces a “moral” component to science. Morality is, by definition, subjective rather than absolute. Depending on whom you ask, it can be based on religious dogma, philosophical deliberations, socio-political ideology or what-have-you.
“Medical science” is cited as an example where the principles of “post-modern” science could apply.
This is based on an intrinsic fallacy. Medicine, as practiced, is not a “science”; it is an “art”. It starts with the Hippocratic oath, which is itself a moralistic credo (to protect human life above all). It uses “science” (as do many non-scientific fields of endeavor, such as civil engineering, politics, sociology, economics, environmentalism, etc.), but it is not a science per se.
IMO we do not need a bunch of philosophers, sociologists, moralists or anyone else redefining the very straightforward scientific method. Science should be kept pure (as William Newman stated on the “Polyclimate” thread).
Worst of all, we certainly do not need self-appointed moral guardians warning us that it is sinful to emit greenhouse gases because their climate models tell them this could lead to the destruction of humanity and our environment – and that, even if their models might be wrong, we should still stop emitting GHGs for moral reasons “just in case”, because the result would be so horribly disastrous if their models happened to be right.
The article’s historical treatise on the development of physics was nice, but I’ll certainly not be foolhardy enough to argue with you about the significance of “Newton’s second law, the ideal gas law, the first and second laws of thermodynamics, Planck’s law, etc.” all of which “provide the foundation for reasoning about the Earth’s energy budget and the transport of heat therein”.
As to the differentiation between the “natural” and the “enhanced” greenhouse effect, I would agree with you that the author got that wrong. (Since the “enhanced” GHE is what everyone is talking about, I think he can probably be excused.)
And the sentence:
Is incorrect (as you stated) and should probably have been reworded to say:
OK. The “missing hotspot” argument is a valid observation even if it may not be conclusive as far as the validity of the GH theory versus the actual physical observations is concerned. (I would put this into the “unexplained” category, like the observation that the tropospheric warming rate as measured by satellites has been slower than the warming as measured at the surface, where the GH theory tells us the opposite should have been true.)
On the point about the “human signal” or GH warming effect in the GMTA record either being “lost in the noise” or being “very likely”, I’d agree with you that this is a wash in actual fact (i.e. one statement is as questionable as the other).
However, I’d say that there is one major difference here: the “very likely” claim comes from the global “gold-standard” authority on climate science today, while the other comes from an op-ed in an Australian journal.
Just saying that they are both probably wrong does not help the credibility of climate science as embodied by IPCC. It simply confirms (what you and others have already stated) that the IPCC “very likely” claim is based on false overconfidence.
But, all in all, I think the Quadrant op-ed was interesting and your comments were spot on.
Thanks for sharing this with us.
Max
“This introduces a “moral” component to science. Morality is, by definition, subjective rather than absolute.”
The second sentence should read “Morality is, by ONE definition, subjective rather than absolute.”
Modern society attempts to define morality as each person’s subjective judgment. That concept is what gives rise to multi-culturalism, legal realism and other progressive dogma. Which is where many of our current societal problems come from.
The better definition of morality is: “a doctrine or system of moral conduct.” Whether originating from a divine source, or a code developing over time through the experience of trial and error, an objective system of moral conduct is essential to any human undertaking. See for instance: “We hold these truths to be self evident, that all men are created equal.”
“IMO we do not need a bunch of philosophers, sociologists, moralists or anyone else redefining the very straightforward scientific method.” No one I know of claims morality “redefines the straightforward scientific method.” What objective morality defines is what may and may not be done within such a method. At the extremes, Joseph Mengele may not use humans as guinea pigs in his experiments, and the U.S. government may not let syphilis go untreated in black males so it can gather data on the course of the disease.
At a more mundane (but also important) level, what good is science of any kind if it is not done in accord with the objective moral requirement of honesty? How about humility? What good is science if you can’t trust the results? Imagine how today’s debate might be different if these archaic notions of morality had actually been practiced throughout the scientific community. Whenever anyone says morality is irrelevant, what they really mean is that other people’s concept of morality is irrelevant.
Science and morality are not only not antithetical, they are symbiotic. It might do some well to remember that the first Western universities, the University of Bologna and the University of Paris, were founded by the Catholic Church. Many others, including Oxford, grew up around monastic communities. There are apparently 244 universities and colleges in the U.S. founded by the Catholic Church (a few of them are even still Catholic), and 219 Christian colleges or universities. There are some 7,000 Catholic elementary and high schools in the U.S. as well. In many cities, they are the best chance some children have of being competently educated in anything, let alone building the foundations for a study of science.
The attempts to divorce science from morality, and vice versa in some religions, are nothing new. But they are a fool’s errand. Religion uninformed by science is naive and ignorant of the real world. Science uninformed by morality is at best unreliable, and at worst tragic.
Gary
You brought up an example of “morality” at work in “science” with Mengele.
The “morality” was defined by the political movement of the time in the country of question.
Morality is not absolute, unfortunately, Gary. It is human, by definition. It is very subjective. And it can become very political.
Would “bending” the scientific results a bit “for a good cause” be more “morally” acceptable than doing so “for a selfish motive”? If so, who would define what “a good cause” would be?
Honesty is paramount in science. But this is nothing new. It is a given in the scientific method and does not need to be introduced by way of “morality”.
Sorry, Gary. No sale.
Max
Or is it better to stick with the “scientific method”
As usual, we seem to be getting tangled up in semantics. I agree that talking about moral science (or, more specifically, talking disparagingly about amoral science) is dangerous territory. But I think some people are hearing “ethics” when they read “morality”. Ethics is something else. We need ethics. Morality is too slippery and, as you say, can lead to the “end justifies the means” reasoning that results in the abandonment of ethics. I submit that that’s one of the primary causes of the climate science train wreck.
Science needs to be performed with ethics. But the pursuit of the facts has no moral dimension. The facts are what they are.
Now this is pure semantics. Ethics and morality are not somehow different concepts.
ethic, noun, plural but sing or plural in constr : the discipline dealing with what is good and bad and with moral duty and obligation
http://www.merriam-webster.com/dictionary/ethics
ethics: –plural noun 1. (used with a singular or plural verb) a system of moral principles
http://dictionary.reference.com/browse/ethics
I know it is so uncool to believe in “morality,” with its embarrassing past associations with religion and God, but let’s leave the language as it is please.
“Right and wrong” is not a scientific concept. The concept urged by Manacker and ChE here is actually what is referred to as situational ethics. And it has not served mankind well.
situational ethics: a system of ethics in which moral judgments are thought to depend on the context in which they are to be made, rather than on general moral principles
I can assure you Michael Mann does not consider his hockey stick to have been dishonest. He defines “honesty” as portraying the data in the way most likely to lead its viewers to reach a correct result. Like Dan Rather, who assured his viewers that his reports based on falsified documents were accurate, even though those documents were admittedly forged. Or the ABC reporter who attached pyrotechnics to the bottom of an automobile to show how dangerous its gas tank was.
Those who reject the very notion of an objective morality have no rational reason to judge the behaviors of others as “wrong.” Yet they all do. Which is why in my original comment, I wrote “Whenever anyone says morality is irrelevant, what they really mean is that other people’s concept of morality is irrelevant.”
“Honesty is paramount in science… It is a given in the scientific method.” Why? Sez who? If there is no objective morality, each scientist can decide for himself whether or not to include adverse data in his graphs. Who are you to judge? I can rationally criticize “hiding the decline” as being objectively dishonest. But I have made a value (i.e., moral) judgment in doing so. Yet I have not read you as defending, or being neutral on, dishonesty in science. How do you explain it?
Mengele did indeed represent the morality “defined by the political movement of the time in the country of question.” But those who practiced that politically defined morality and were caught were tried as war criminals and hanged. It seems the rest of us did not agree.
Gary
You are still missing the point here.
No one is for “war criminals” who misuse science.
No one is for “immoral” science.
No one is for “dishonest” science.
Scientists “fudging” the data (for whatever justification) is “dishonest” (and could, therefore, be described as “immoral”).
Scientists withholding data from FoI requests is illegal (and possibly “immoral”, as well).
But the “scientific method” itself does not have any room for dishonesty or morality. It is absolute.
Introducing “morality” opens the door for introducing “religion” or “politics” or (in the worst case) introducing the warped morality of a Mengele.
Keep it out.
Max
“Morality is not absolute, unfortunately, Gary. It is human, by definition. It is very subjective.”
If morality is not absolute, ie. objective, then you have no rational basis for judging the behavior of others.
You are simply wrong when you claim that “No one is for ‘war criminals’ who misuse science. No one is for ‘immoral’ science. No one is for ‘dishonest’ science.” My examples were of those who were “for” just such science. If you really believe there is no such thing as objective morality, then “immoral science” is an oxymoron, and that part of your comment a non sequitur.
How can you say Mengele’s morality is “warped” if there is no objective standard against which to judge it? Mengele is a classic example of keeping objective morality out of “science.”
And this is not a semantic point. I can assure you that your belief that morality and ethics have no place in science is shared by many of the very people you criticize for “dishonesty.”
To be precise, you make an objective moral judgment when you write that “the ‘scientific method’ itself does not have any room for dishonesty or morality. It is absolute.”
You still haven’t answered my question. Who says the scientific method does not have room for dishonesty? Where does this objective standard come from, let alone such an “absolute” standard?
Gary M
You ask:
We are drifting dangerously into abstract philosophy.
That there can be “dishonesty” in the practical application of “science” became painfully obvious with the Climategate revelations, as well as the many exposés of IPCC exaggerations and outright fabrications in a (so-called) “scientific” summary report.
That this is deplorable and should be discouraged is also obvious to me.
Inasmuch as the guilty parties were misusing public funding or violating FoI requests, there should be some corrective action taken (if nothing more, simply cut off this funding).
Does that make “climate science” per se “immoral”?
No.
According to Webster’s New Collegiate Dictionary, the definition of science is
The steps of the “scientific method” have been listed as follows:
IMO there is no place for “morality” as such in this process. Doing it honestly without “bending” the results to “fit” the hypothesis must be a given or “condicio sine qua non”.
It is like counting ballots.
Either they are counted honestly or there is voter fraud. Skewing the count in order to favor the “better qualified” or “nobler” candidate is just as fraudulent as skewing them for the other candidate.
The concept of “post-modern” science introduces “morality” into the scientific process itself, as a separate consideration. IOW, it allows for the “bending” of the scientific principles if the basis for doing so is “morally correct”.
This would make it perfectly OK for climate scientists to “bend” the results of their studies in order to stop humanity from self-destruction through GHG emissions.
I think this would be a dangerous bastardization and politicization of science and the scientific method.
But let’s wait until Judith gets a separate thread started on this to get some other viewpoints.
Max
I have not said science is immoral. You have claimed that it is amoral, while still maintaining that “Doing it honestly without ‘bending’ the results to ‘fit’ the hypothesis must be a given or ‘condicio sine qua non’.” My point from the start was that the scientific process is not moral or immoral in and of itself, but must be practiced informed by morality.
Yet again you do not address WHY honesty is a sine qua non of science, if science is amoral. You claim that it is obvious that the dishonesty involved in climategate “is deplorable and should be discouraged.” A sentiment with which I agree, and which confirms my point.
My original comment was not that “science must be moral”; it was as follows: “No one I know of claims morality ‘redefines the straightforward scientific method.’ What objective morality defines is what may and may not be done within such a method.” How do you “deplore” anything without making a moral judgment?
Your description of post modern science reflects your desire to view morality as merely subjective. The “bending” you fear is an aspect of progressivism’s “the end justifies the means” situational ethic. It is not a product of objective morality. What you fear is in fact immoral behavior conducted under the guise of morality. But that is no logical basis for claiming that science should be done without regard for morality.
We aren’t “drifting” into abstract philosophy, we plunged right in when you wrote “Morality is, by definition, subjective rather than absolute.” My criticism of that still stands, and the rest of my comments have just been meant to show that in practice you hold scientists to an objective moral standard, honesty, yourself. You, not I, read the moral imperative of honesty into the scientific method itself, which seems a fairly objective standard to me.
Gary M
We are going around in circles.
Following the “scientific method” rigorously and honestly in science is a “condicio sine qua non” , which should not even need to be mentioned.
Fabricating, eliminating, “cherry-picking” or ignoring data points would all be violations of the “rules of the game”, as would hiding data sources, stealing others’ work without citing references, etc.
If you call this a “moral” constraint on how the process is physically applied, so be it. I would just call it the established “rules of the game”.
I just do not believe that “morality” should be injected into the process itself (as is being suggested for “post-modern” science) for the reasons I pointed out above.
But we have, indeed, beaten this dog to death and should wait until Judith opens a specific thread on “morality as a part of the scientific process”.
Max
Ah, I see you’re an “I need to have the last word” kinda guy. Very well, I will let your “established rules of the game” stand in place of an “objective morality”, though I am at a loss to see a difference. Oops, sorry, shoulda waited til the next thread to write that, as you have twice suggested (at the end of rather long comments)…
I don’t know how anyone can think genuine science is conducted when it is impossible to audit or replicate any work. Steve Mc is being stonewalled again. Not a surprise. The surprise is that some who pretend to be scientists aren’t bothered by the stonewall. Genuine scientists would be appalled.
In reality, Stanley, real scientists are appalled by the popularity of amateur, industry-driven drivel from Stephen McIntyre. :-(
Wishful thinking, Martha :)
Does it ever occur to ‘real scientists’ to turn their supposed Giant Brains to wonder why Steve McI’s work is so popular?
Could it be because he lays out his working for all to see? That he is pretty polite but never a patsy? That he likes to really shake a problem until he understands it? That he doesn’t give up? Or just that he gets right up the noses of Real Climatologists, and frightens the s… out of them that he will find out all their little shortcuts and dodgy dealings – and then publish them for all to see?
Maybe it’s because he threatens their insulated, closed, little self-congratulatory world of pal review and patronage, of deference and nepotism?
Or maybe they just don’t think that anyone who has grubbied their hands outside academe to make a living should be allowed to touch the hems of their robes……..’He was in trade my dears’ ……said the appalled Vice Chancellor before fainting dead away into the vintage port. Only a swift application of another round of caviar could revive him before the reading of the sacred IPCC texts……..
Stephen McIntyre is probably the most all-round respected climate scientist in the world today.
He is not a climate scientist, you say?
Why bother to go through the process of gaining a PhD, when he knows he already outperforms anyone who could ever grant him such a degree?
Don’t get hung up on degrees. They are only an “ok” from someone.
It is fully possible to learn as much as any PhD on your own, and more, without any “ok” from anyone, or from any institution.
Even though they would have you think otherwise…
Which rather begs the question of what qualifications a Real 100% You Bet Your Ass Yes Sirree Bob Accept No Substitute Qualified Climate Scientist has. And how does she get them?
Medical doctors are easy – they have degrees in medicine. Engineers are easy – they are Chartered Engineers and such.
But climatologists are harder. No university anywhere seems to offer a degree in it. There isn’t enough real content for it to be anything more than a sideshow…
So if I had the great misfortune to be accosted in the street by somebody claiming to be a Climate Scientist, how would I know whether he was the real McCoy? Should I slip him a quid and send him on his way with a cheery admonition not to spend it all on hockey sticks? Or cut the impostor dead with the withering phrase ‘you are not a real climatologist because you do not… xxxxxxxx’?
What is xxxxxxxxx in this context?
Speaking of science without method, I just got this ad for easy & simple climate simulation: “Real-time Climate simulation Made Easy” including free Webinars.
http://hosted.verticalresponse.com/715701/bf6b7ba42e/TEST/TEST/
Perhaps this is the science behind international climate policy.
Thanks for the interesting link, David.
This suggests that the AGW crowd is now getting seriously worried.
If the Climategate iceberg fully melts – and exposes the decades of misappropriated public funds beneath – the consequences for leaders of the Western political and scientific communities may be greater than the recent Japanese tsunami.
RE: Preliminary Analysis of Temperature Data from China Lake, CA Falsifies the Enhanced AGW Hypothesis.
Here are the results of the analysis of temperature data for June 21 for the sample range of 1950-2009:
Mean Tmax +/- AD = 38 +/- 1 deg C
Mean Tmin +/- AD = 20 +/- 0 deg C
where AD = average deviation and the resolution of the thermometer = 1 deg C
Temperature Data Source: The Weather Underground.
For this analysis, sunlight is constant over the sample interval of one day. Specific humidity is generally low and the sky is mostly cloud free. We do not know the actual change in concentration of CO2 over the sample range at this site. We can estimate that it increased by ca. 25%, based on data from Mauna Loa – which is valid only for highly purified, bone-dry air.
I concluded from this preliminary analysis of the temperature data that the increasing concentration of atmospheric CO2 causes no “warming” at this site.
Note that N = 2 for this test of the enhanced AGW hypothesis. To complete this preliminary analysis I shall do the analysis for March 21, June 21 and December 21. A complete analysis would use every day of the temperature record for the sample range of 1950–2009.
A major criticism of this method of analysis is that the resolution of the thermometer must be finer (e.g., 0.1 deg C) to detect any “warming” of the air at this site. However, such data are not usually available from US weather stations, especially for the early part of the record.
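For anyone who wants to check the arithmetic, the mean ± average-deviation computation is trivial to reproduce (the station values below are placeholders, not the actual China Lake record):

```python
# Mean +/- average deviation for same-calendar-day Tmax readings.
# The values below are PLACEHOLDERS, not the actual China Lake record.

def mean_and_avg_dev(temps):
    mean = sum(temps) / len(temps)
    avg_dev = sum(abs(t - mean) for t in temps) / len(temps)
    return mean, avg_dev

tmax_june21 = [37, 39, 38, 40, 36, 38, 37, 39]  # deg C, hypothetical
mean, ad = mean_and_avg_dev(tmax_june21)
print(f"Mean Tmax +/- AD = {round(mean)} +/- {round(ad)} deg C")
```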
Comments? Take your best shots at “Old Weird Harold”!
BTW Does anybody remember “Old Weird Harold”?
Dr. C – it seems like this subtopic of morality, immorality, amorality, and ethics in science stirred up a lot of discussion. Perhaps this could be rolled into another thread in the future. Some excerpts from Feynman’s famous “cargo cult” lecture [ http://www.lhup.edu/~DSIMANEK/cargocul.htm ] would tie in nicely. He used the term “integrity”, but in this context, it’s fairly close to ethics, particularly this bit:
ChE
I would second your suggestion that a separate thread on ethics in science would be a good thing.
It would be important to me to separate out the concept of introducing “morality” into science as suggested by “post-modern” science (to me, a dangerous aberration of the scientific method) from that of following the established scientific method rigorously and honestly (which, to my way of thinking, should be a “given”).
Max
I agree this would be a good topic; I will put it on my list and keep on the lookout for some good material.
The concept of introducing “morality” into science, as suggested in “post-modern” science, is like introducing morality into algebra so that a mother could divide two loaves of bread among her three hungry children, one loaf each (making 2 divided by 3 equal to 1).
In short, post modern science is corruption of science. Science deals only with the objective truth.
Here is what Feynman said about science:
The principle of science, the definition, almost is the following: The test of knowledge is experiment. Experiment is the sole judge of scientific truth.
From Six Easy Pieces by Richard P. Feynman
Judith, you said:
“Nope. If you want to force a global climate model to give you a climate sensitivity of, say, 3C, you will need to run many experiments tweaking a number of parameters until you land on 3C. So sensitivity is not an assumption in climate models.”
Ok. GCMs are supposed to be good representations of the global climate system.
To represent this system in a model, one would have to assume different values for forcings and feedbacks, because these are not known. There might even be forcings and feedbacks of which we are not aware at the present moment.
But you seem to treat sensitivity as an outcome of the model, as if there are inputs available beyond forcings and feedbacks which would allow the model to produce something called “sensitivity”.
So where would this sensitivity come from, unless it is a direct product of the forcings and feedbacks already fed into the model?
Which other factors are important enough, besides forcings and feedbacks, to produce something called “sensitivity”, far removed from any petty assumptions?
A mathematical equation? Hm…..
Bathes – if you are trying to get someone to defend the current state of GCMs, it will be very difficult to find anyone who will defend them as currently being reliable for more than VERY short periods. Certainly Judith does not seem to see them as more than works in progress at this time.
Of course.
I am only trying to establish one thing:
Climate models rely on inputs, and for the most part these inputs are assumptions of the behavior of the global climate system.
These assumptions can be classified as forcings and feedbacks – forcings being outside influences (sun, cosmic rays, orbital shifts, etc.), and feedbacks being sulphates, cloud cover, etc.
But to my surprise, I discover that there is a belief that somehow, mysteriously, global climate models can add to this understanding through mathematics.
Now mathematics always does what it is told. It never adds anything where it is not told to add.
So how can it be claimed that climate sensitivity is not an assumption of the model, as long as all inputs are assumptions?
In effect: what knowledge of the climate system does the model add, which was not already put in?
Bathes
You wrote: “But to my surprise, I discover that there is a belief that somehow, mysteriously, global climate models can add to this understanding through mathematics.”
I am not sure who has such a belief, but there is no “special mathematics” for climate models. As with other computer modeling, it is a case of understanding the processes of the variables that impact the climate, and the weights to apply to each of those variables under different circumstances. In regard to GCMs, it appears we may not yet understand all the variables or the impact of each. We are a long way from reliable climate models at a regional level for more than short-term use.
Exactly.
As you say, it is a case of understanding the processes of the variables that impact the climate.
This process involves the input of assumptions such as climate forcings and feedbacks. Since none of these are known, quantitatively or qualitatively, one has to make assumptions as to how each of them impacts the climate system on its own and through interaction with the other forcings and feedbacks.
These are the assumptions of a climate model, but these are also all the components of what we normally call climate sensitivity.
So you feed all these assumptions into a model, and out comes what? Climate sensitivity?
But that was exactly what was fed in!
No. Climate sensitivity is what is fed in. This is the assumption. Not the outcome.
Are climate models invertible functions?
Ole Humlum has a section on climate models here – http://www.climate4you.com/
Well worth several visits.
The models are trained to match the observed temperatures, but each results in a different climate sensitivity for doubling of CO2. A paradox? The individual models are themselves chaotic, and what we get is a single run from amongst many plausible formulations, arbitrarily selected as being about right and graphed together by the IPCC as an ‘ensemble’.
That this is almost universally accepted to be the case – Fred is a sorrowful exception – and that this exercise is still promoted as a meaningful exercise says something about something. Fair enough – explore the physics but don’t try to sell me the Sydney Harbour Bridge.
feedbacks are not inputs either.
You would do well to spend some time looking through GCM code.
I will suggest modelE, because the source is online and Gavin has done some nice work in the past few years cleaning it up for public viewing.
It’s not that much code, but give yourself a few months. It will help if you know Fortran. Or you can look at the MIT GCM; they have nice documentation.
A simple way to think about it is this. Imagine you build a very simple model: land, sea, ice. Each of those has a different albedo and a different area. You have one input, TSI, in watts. The ice is represented by a model that takes in watts and calculates the change in ice area. Area changes the albedo due to ice; albedo changes the watts in. The “feedback” – more watts in = less ice = lower albedo due to ice area = more watts in – is not an input or an assumption. The feedback is observed as a consequence of the physical models (sketched below).
The assumptions in the model:
1. The historical forcings are correct.
2. The physics models capture all relevant processes.
Sanity check for the model?
1. Does it hindcast OK?
2. Does its output for sensitivity match other, more reliable estimates?
Yes. Yes.
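A minimal sketch of that land/sea/ice cartoon (toy numbers, not modelE): no “feedback” parameter is entered anywhere; ice area responds to absorbed watts, albedo responds to ice area, and the amplification falls out of iterating the coupled pieces.

```python
# Toy ice-albedo loop (illustrative numbers, not any real GCM). Nothing
# labeled "feedback" is entered: ice responds to absorbed energy, albedo
# responds to ice, and the amplification emerges at equilibrium.

ALBEDO_ICE = 0.6
ALBEDO_OCEAN = 0.1

def equilibrium_absorbed(tsi, ice_frac=0.3, n_iter=1000):
    absorbed = 0.0
    for _ in range(n_iter):
        albedo = ice_frac * ALBEDO_ICE + (1 - ice_frac) * ALBEDO_OCEAN
        absorbed = tsi * (1 - albedo) / 4
        # toy ice model: more absorbed watts -> less ice (clamped to [0, 1])
        ice_frac = min(1.0, max(0.0, 0.3 - 0.005 * (absorbed - 255.0)))
    return absorbed, ice_frac

for tsi in (1361.0, 1375.0):
    absorbed, ice = equilibrium_absorbed(tsi)
    print(f"TSI={tsi}: absorbed={absorbed:.1f} W m^-2, ice fraction={ice:.2f}")
```

With the albedo held fixed, that 14 W m^-2 TSI step would add only about 2.6 W m^-2 of absorbed power; with the loop active it adds roughly 19 – an amplification that was never typed in anywhere.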
>If you want to force a global climate model to give you a climate sensitivity of, say, 3C, you will need to run many experiments tweaking a number of parameters until you land on 3C.
Wait a minute. Tamino told me the models aren’t curve-fitting, they represent everything we know about the physical world.
“Tweaking a number of parameters” seems to me to come pretty close to “entering a number of assumptions”, but maybe I am missing something (not being a climate modeler).
Calling a computer model run an “experiment” in the scientific sense of “experiment” also seems a bit of a stretch.
To me a computer is nothing more than a very expensive and very fast version of the old slide-rule. What comes out depends entirely on what went in.
So IPCC’s model-derived 2xCO2 climate sensitivity of 3C (on average) is the result of many “assumptions” and is, thus, itself an “assumption” rather than the result of a scientific “experiment”.
Max
BTW, I have had this discussion with Gavin, who disagrees completely with me that the 2xCO2 CS of 3C (on average) is a model-derived “assumption”; he prefers the word “result”.
But he agrees with the remark (I think from Judy) that it may take a few “tweaks” to get there.
Max
The point is that you cannot prima facie make an assumption of 3C and enter that into the model. You can get that particular result from the model by varying a number of parameters, so this is not an assumption that goes into the model. But the model can be tuned to give that result. Further, note that very few models give exactly 3C: the 3C is more or less the average of a number of different models.
I think you are being rather naive here. Anybody who works with any sort of model for long enough will get to know quite a bit about how it will react to input data – especially the guys who wrote it.
And if it is something that is supposed to enhance our understanding of climate, rather than just an unknowable magical black box, they will have played around with it a great deal. It wouldn’t take an Einstein to quickly work out the settings of your input data that produce the value for sensitivity that you want.
‘No!’ you may cry. ‘That would be cooking the books,’ you may assert, ‘and no climatologist worth his salt would ever do such a thing. The professional reputation of anybody involved would be in tatters at even the merest hint of impropriety, and they would never work again.’
And I might point to the flying pigs passing overhead and the Pope’s public conversion to Hinduism.
So yes – explicitly, sensitivity is not entered directly as a parameter. But implicitly is another matter. I guess that the models can be tweaked to give the right answer. As long as it’s two or more. Otherwise we all just yawn and go home. Bad for the career…
Sorry, in a complex nonlinear model, the model does not have simple linear responses to tweaking parameters. The issue here is to understand how complex nonlinear models work. If you didn’t catch my earlier posts the first time around, for starters see
http://judithcurry.com/2010/10/03/what-can-we-learn-from-climate-models/
http://judithcurry.com/2011/02/03/nonlinearities-feedbacks-and-critical-thresholds/
Dear Dr.Curry,
Sorry, but nobody, including every single scientist in the climate science field, understands how complex non-linear models work in real time, especially in areas like climate, with its high complexity. Hence my question is a fundamental one: what are climate scientists trying to do here, dictating so-called settled science and even asking for policies to be made on that basis, when they have no practical clue what they are doing in the first place?
Judy – The links you cite are very much worth revisiting for a better understanding of model construction.
Some of the commentary above yours echoes one of the enduring Internet myths about global climate models – that they are “retuned” to make their output conform to observed trends after initial runs show a divergence. An alternative version of the same misconception is that they are tuned to yield a specified climate sensitivity, such as 3 C. Whether, as a combined effort, they could be made both to fit observations and to exhibit a climate sensitivity of 3 C is an interesting thought, since it would tell us something about the validity of the 3 C figure, but modellers will explain that no matter how much they would like to do that, they can’t. The best modellers can do is to create new models with parameters that cause the models to remain faithful to the observed starting climate (before applying a forcing such as rising CO2), and which incorporate data that appear to better simulate reality. An example would be attempts to model anthropogenic aerosols more realistically, based, for example, on the observational data on aerosol “dimming” reported by Martin Wild and others for the 1950s through 1980s. Whether the models have succeeded in getting aerosols right is still contentious, but they are constrained by the available data and cannot insert arbitrary values in order to yield desired trend projections.
Because so many misconceptions persist, I wonder whether it wouldn’t be worthwhile to invite a modeller to post here for the narrow purpose of answering questions about global climate models. Gavin Schmidt would be one candidate, but you and he have not been on particularly cordial terms. Another possibility would be Andy Lacis, who has participated here in the past, and who is qualified to discuss model construction, and the virtues and deficiencies of current models.
Here is an excellent reference for how aerosols are modeled.
http://www.climatescience.gov/Library/sap/sap2-3/public-review-draft/sap2-3-prd-ch3.pdf
To add to my earlier comment, there is a difference between the use of GCMs for hindcasting or forecasting projections based on a starting climate within recent decades and their use with paleoclimatologic data for estimating climate sensitivity. In the former case, once models are parametrized to a starting climate and then forced with an agent of interest (e.g., increasing CO2), the results must stand; the modeller does not reparametrize the model and discard the projections because they diverge from observations, but must accept them as indicative of model skill – there is no curve fitting involved. If additional data later yield a better match, they must be reported separately, with an explanation as to the greater accuracy of the new data.
In contrast, paleoclimatologic applications often involve the testing of multiple different parametrizations to determine which best fit the recorded paleoclimatologic evidence. The climate sensitivity of that model is then used as one estimate of the true value of climate sensitivity, with a range of values ultimately derived from the application of this principle to multiple different models and paleoclimatologic datasets. Examples are found in AR4, Chapter 9. Other climate sensitivity estimates are based on more direct calculations rather than attempts to match sensitivity estimates to observed data, and are based on estimated forcings, feedbacks, and temperature change constrained by observational data (see e.g., AR4 Chapter 8).
For aerosol forcing (see Steven’s cited link), some of the studies have resembled the paleoclimatologic applications in testing a multitude of different aerosol forcings to determine which created the best match with recorded temperature trends (but constrained by available aerosol data). However, they differ from the paleoclimatologic applications in that what was tested were not the model parameters but rather the forcings that were applied to an unchanged model.
Section 3.4 discusses briefly the use of the inverse approach.
I’ll suggest Dr. Held. He has a blog and may be open to it.
The myths of tuning
Funny – at AGU, the guys who ran models noted the two biggest knobs: slab ocean on/off, sulfur cycle on/off. Also, some AR4 models ran without volcanic effects.
The difficulty IMO is 1) semantics and 2) the implication or inference that varying assumptions to force the models to better match observations is done to produce a desired output. The latter is curve fitting, though “curve fitting”, while often meant as a pejorative, isn’t necessarily nefarious.
I support other posts in disagreeing (with hesitation…) with Dr. Curry about the “assumed climate sensitivity.” She says it is not assumed, which is literally true. But it is a semantic no-op, since almost all of the physics and mathematics that produce the sensitivity in the models are themselves assumptive to some degree or another. For example, the non-feedback forcing from CO2 concentration, which feeds the sensitivity, is based on an assumption: projecting what seems to be the past observable forcing (which has varied somewhat over the years as we get more observations) into future environments. Is the forcing from doubling CO2 the same at the 1000 ppm to 2000 ppm level – which has never been observed – as at the 100 to 200 ppm level? Such projection may be quite reasonable and based on learned and considered physics, but it remains an assumption nonetheless. Then they have to add their best guess (another learned assumption piled on top) for a number of feedbacks – clouds, aerosols, etc.
Is the sensitivity output a result of assumptions? No doubt. Does that make sensitivity an assumption? For all practical purposes, yes, though strictly probably not. Is it purposely made misleading to match a belief? No. Though the more they get questioned, the more the modelers and physicists dig in their heels.
Correct. Instead of a big knob, there’s a row of trim pots. Distinction without a whole lot of difference.
The problem is that you simply don’t have enough time to adjust the various trim pots. That’s because model runs take an enormous amount of time. You have a parameter space that consists of over 39 variables, so exploring the parameter space (tuning the knobs) is practically impossible. Some advances are being made in statistically emulating a GCM – that is, running a few points in the parameter space and then filling in the responses with a statistical model. But tuning? Not practical. Except with aerosols, which are not known that well; turning that knob is standard what-if analysis.
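The emulation idea, in cartoon form (a sketch under obvious simplifications: the “GCM” below is a cheap stand-in function over a one-dimensional parameter space rather than the ~39-dimensional real thing, and a least-squares polynomial stands in for the Gaussian-process emulators used in practice):

```python
import numpy as np

# Cartoon of statistical emulation: run the expensive model at a few
# design points, fit a cheap surrogate, then query the surrogate at
# untried settings. expensive_gcm() is a stand-in function, and
# np.polyfit is a stand-in for real Gaussian-process machinery.

def expensive_gcm(aerosol_param):
    """Stand-in for a full model run: one 'sensitivity' per setting."""
    return 3.0 - 1.2 * aerosol_param + 0.4 * aerosol_param**2

design_points = np.array([0.0, 0.5, 1.0, 1.5, 2.0])  # the runs you can afford
runs = np.array([expensive_gcm(p) for p in design_points])

emulator = np.poly1d(np.polyfit(design_points, runs, deg=2))

print(emulator(0.75))       # instant estimate at an untried setting
print(expensive_gcm(0.75))  # what a real run would have given
```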
On the sensitivity issue it works like this.
From empirical studies of the response to volcanic forcing, the sensitivity can be constrained to 1.5 to 6, for example.
The models currently output sensitivities of 2.1 to 4.4.
So the idea is that the models produce sensitivities that comport with other estimates. See the paper I linked for all the various ranges that are estimated independently of the models.
Jeffrey T. Kiehl (2007), Twentieth century climate model response and climate sensitivity, Geophysical Research Letters, 34, L22710, doi:10.1029/2007GL031383.
Tuning happens on big loops and small loops. Small loops are people twiddling dials and trim pots. I might accept there is limited tuning on this scale. Big loops include stuff like publication bias. That’s one hellofa big loop which is still a tuning loop. People don’t always see it, but it’s there. Kiehl’s results pretty much confirm it.
Rod – Dr. Curry is correct that no assumptions, either explicit or implicit, are involved in the climate sensitivity that emerges from current GCMs. There is no curve fitting, and no retuning of a model to bring its climate sensitivity closer to the modeller’s wishes, nor does the modeller know what sensitivity value will emerge from the model as constructed.
It is also untrue that the core of the models involves “assumptions” that can be conveniently chosen in the hope of making climate sensitivity “come out right”. The core is based on irrefutable physics principles including those involving fluid flow, and the conservation of mass, energy, and momentum. These are not assumptions. Added input includes physical parameters such as those involved in radiative transfer (e.g., spectroscopic absorption coefficients), as well as observationally derived data on wind, temperature, and so on. For processes occurring on scales too small to be modeled within the large grid cells, parametrizations based on known principles and observations are used to produce the most accurate approximation possible, but these allow little leeway for adjustments that significantly alter sensitivity. Other variables are addressed in the links cited by Dr. Curry, but the point is that assumptions needed to create a particular range of climate sensitivity outputs are not available to the models.
The concept of forcing has been misconstrued by a number of commenters above. One can apply different forcings to a model, either based on observational data or as part of an attempt to determine forcing values that best represent a particular climate. However, changing the forcing does not change the climate sensitivity but merely the temperature response. Therefore, accuracy of forcings is irrelevant to a model’s climate sensitivity. Perhaps a good analogy would be a sales tax – say a 5% tax on the purchase price of a product. The 5% is the climate sensitivity, the product price is the forcing, and the tax owed is the temperature response. If the sales price is raised from $100 to $200, the tax (the temperature change) rises from $5 to $10, but the tax rate of 5% (the climate sensitivity) hasn’t changed.
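The analogy reduces to three lines of arithmetic. The numbers below are purely illustrative (0.8 K per W/m^2 is merely in the commonly quoted range; nothing here depends on it):

```python
# The "tax rate" (sensitivity) is fixed; only the "price" (forcing) varies.
sensitivity = 0.8                 # K per (W/m^2), illustrative value
for forcing in (3.7, 7.4):        # W/m^2; doubling the forcing...
    response = sensitivity * forcing
    print(f"forcing {forcing} W/m^2 -> response {response:.2f} K")
# ...doubles the temperature response, while the sensitivity never changes.
```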
Finally, just as climate sensitivity is not an assumed input to models, neither are feedbacks (clouds, water vapor, ice/albedo, etc.). These too are emergent properties from the dynamics described above. It is true that they are affected by parametrizations, but the latter are not assumed either – they are based on the best combination available of principles and observations, and they are not readjusted in the models so as to make the climate sensitivity conform to a desired value – in fact, their ability to influence model climate sensitivity is marginal.
If assumptions and retuning were tools available to modellers, the models would match observations better than they do, and match each other’s climate sensitivity far better than they do.
The claim that climate sensitivity is in even the most indirect sense based on assumptions is unequivocally wrong. Again, the links cited earlier by Dr. Curry should help provide a more accurate picture of how models are constructed and used.
Earlier I asked: “A hard one is to explain, in general physical terms, why the clouds sit at about 50 or 60%. Do they also track temperature? Why?” No one answered.
The so-called ‘climate sensitivity’ depends most importantly on the cloud response. It is a “parametrized” element of the AOGCMs. “Parametrized”, though it may refer to something constructed “based on known principles and observations … to produce the most accurate approximation possible”, actually means guessed: no more and no less. The ‘climate sensitivity’ is no more and no less than a guess. Christopher Game
Thanks Fred, all well put.
That’s an assumption; i.e. that the feedback response is linear, i.e. the gain is constant with respect to temperature. Hansen’s “tipping points” imply that that’s not true. To get a tipping point, you have to have a gain that increases with temperature. Otherwise, you just get a fixed multiplier.
I suspect you’re right and Hansen’s wrong, but you can’t both be right.
ChE – To the best of my knowledge, climate sensitivity is fixed in each of the current GCMs, so that applying a different forcing will alter temperature but not sensitivity. In this sense, the models don’t anticipate tipping points, but there is no reason a model could not be designed to do that. For example, it might anticipate a large increase in feedback from a massive methane release induced by rising temperatures. Given the challenges models face in simulating even “untipped” climate change, this will probably not be the next item on a modeller’s agenda, but I don’t think modellers would see an inconsistency between designing models with the goal of simulating a smoothly changing climate and the theoretical possibility that a future climate could exhibit an abrupt change in response to a small perturbation.
The potential tipping points that Hansen and others have mentioned generally involve large shifts in the same direction as a smoothly changing climate. A possible exception might be a dramatic slowing of the meridional overturning circulation, but this is considered highly unlikely for any foreseeable scenario over the rest of this century.
It might help to define a GCM as there seem to be many misconceptions floating about.
‘Global climate models (GCMs) are comprised of fundamental concepts (laws) and parameterisations of physical, biological, and chemical components of the climate system. These concepts and parameterisations are expressed as mathematical equations, averaged over time and grid volumes. The equations describe the evolution of many variables (e.g. temperature, wind speed, humidity and pressure) and together define the state of the atmosphere. These equations are then converted to a programming language, defining among other things their possible interacting with other formulations, so that they can be solved on a computer and integrated forward in discrete time steps.’
Professor Ole Humlum
Climate and models are initial value problems and respond non-linearly to small changes in initial conditions. This is quite simply understood in that the models use the same partial differential equations of fluid motion that Edward Lorenz used in his early convection model, through which he discovered chaos theory. Models also suffer from something called structural instability. This refers to the range of plausible values for aerosols, ozone or natural forcing factors that can tip the model into a different phase space. Together these create the potential for chaotic bifurcation in models: that is, points at which the solution space shifts abruptly as a result of non-linear interactions of component parts.
‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable.’ (McWilliams 2007) The extent of instability in climate models is yet to be systematically explored.
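Sensitive dependence is easy to demonstrate with the very Lorenz system mentioned above. The sketch below (crude Euler stepping and standard Lorenz-63 parameters; step size and perturbation chosen only for illustration) perturbs one initial value by one part in a billion and lets the trajectories decorrelate:

```python
import numpy as np

# Lorenz's 1963 convection equations with the standard parameter values.
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(s, dt=0.001, steps=30000):    # ~30 Lorenz time units
    for _ in range(steps):                  # simple Euler stepping; fine for a demo
        s = s + dt * lorenz(s)
    return s

a = integrate(np.array([1.0, 1.0, 1.0]))
b = integrate(np.array([1.0, 1.0, 1.0 + 1e-9]))  # perturb z0 by one part in 10^9
print(a)
print(b)  # by now the two trajectories bear no resemblance to each other
```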
Ideally with models there is sufficient data on which to calibrate the model – and calibration is a necessity with all models – and to validate it. With climate models this is typically the temperature data to 2000 for calibration and the data since then for validation. Although there may be a problem emerging with the latter.
‘Over the past decade one such test is our ability to simulate the global anomaly in surface air temperature for the 20th century… Climate model simulations of the 20th century can be compared in terms of their ability to reproduce this temperature record.’ (Kiehl 2007)
Sensitivity is determined by the models – although it remains a problematical concept. The models are run to determine a delta T for a doubling of CO2 and this by definition is the climate sensitivity. There is a subjectivity involved in this – a potential solution from an almost infinite solution space of plausible formulations is sent to the IPCC where it is uncritically graphed alongside other similar ‘solutions’.
‘Global climate model simulations of the 20th century are usually compared in terms of their ability to reproduce the 20th century temperature record. This is now almost an established test for global climate models. One curious aspect of this result is that it is also well known that the same models that agree in simulating the 20th century temperature record differ significantly in their climate sensitivity. The question therefore remains: If climate models differ in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy?
The answer to this question is discussed by Kiehl (2007). While there exist established data sets for the 20th century evolution of well-mixed greenhouse gases, this is not the case for ozone, aerosols or different natural forcing factors. The only way that the different models (with respect to their sensitivity to changes in greenhouse gasses) all can reproduce the 20th century temperature record is by assuming different 20th century data series for the unknown factors. In essence, the unknown factors in the 20th century used to drive the IPCC climate simulations were chosen to fit the observed temperature trend. This is a classical example of curve fitting or tuning.
It has long been known that it will always be possible to fit a model containing 5 or more adjustable parameters to any known data set. But even when a good fit has been obtained, this does not guarantee that the model will perform well when forecasting just one year ahead into the future. This disappointing fact has been demonstrated many times by economical and other types of numerical models (Pilkey and Pilkey-Jarvis 2007). ‘
‘The world’s perhaps most cited climatologist, Reid Bryson, stated as early as in 1933 that a model is “nothing more than a formal statement of how the modeller believes that the part of the world of his concern actually works”. Global climate models are often defended by stating that they are based on well established laws of physics. There is, however, much more to the models than just the laws of physics. Otherwise they would all produce the same output for the future climate, which they do not. Climate models are, in effect, nothing more than mathematical ways for experts to express their best opinion about how the real world functions.’
The failure to understand the appropriate uses for models is one of the key problems in climate science. ‘Using climate models in an experimental manner to improve our understanding of how the climate system works is a highly valuable research application. More often, however, climate models are used to predict the future state of the global climate system.’ The limitations of models in ‘predicting’ climate are well known and this is very poorly communicated in the community more generally. The models may not be fraudulent but the uses of models by activists most certainly are – either deliberately or naively.
Fred Moolten said:
I think this is yet another example of the over-simplification that characterizes some of the difficulties associated with open and free discussions of the issues. In my opinion the statement is almost at the same level as those that proclaim that all glaciers everywhere are melting and that this is concrete proof that the global-average, near-surface temperature of the Earth has increased.
I think a more nearly accurate description is as follows.
At the continuous-equation level, the core is constructed with models that are approximations to irrefutable physics principles including those involving fluid flow, and the conservation of mass, energy, and momentum. Assumptions leading to simplifications of the fundamental principles are invoked in order to obtain a tractable problem.
This is not to say that the resulting model equations are necessarily less than useful. I think it is simply a more nearly accurate description.
Additionally, the statement completely neglects to mention that the numbers produced by the models are not solutions for the continuous equations. They are instead approximate solutions to the discrete approximations of the continuous equations. It is well known that convergence of the approximate solutions of the discrete approximations to the solutions of the continuous equations has not yet been demonstrated for any GCM. Papers appear in the peer-reviewed climate science literature even to this day in which the dependency of the approximate solutions on the size of the discrete spatial and temporal increments is the subject.
Actually, it is also well established that converged numerical solutions of the Lorenz models-of-the-models temporal chaotic ODEs have not yet been attained, and very likely will remain out of reach for quite some time. Convergence has been demonstrated beyond a few tens of Lorenz time units only when extremely high-order discrete approximations and extremely large finite-digit representations of real numbers are both employed at the same time.
Finally, in process models of complex physical phenomena (the GCMs are process models, not computational-physics representations), the parameterizations are the absolutely critical link in obtaining a somewhat correct response from the models. The large macro-scale aspects of atmospheric circulation are set by the planet’s rotation, its almost spherical shape, and the period of its rotation. Macro-scale currents in the oceans are a strong function of the same processes, with influences set by the stationary continents. The approximate continuous equations will reproduce these motions. The phenomena and processes occurring within these macro-scale motions are the key to getting somewhat realistic representations. And these critically important processes and phenomena occur at scales smaller than the spatial and temporal resolutions used in the GCMs.
The parameterizations are seldom, if ever, mentioned whenever statements like the one at the top of this comment are made. The parameterizations are where the focus of such statements should be. The parameterizations and the lack of demonstrated convergence. Then, maybe, discussions might move forward with the focus on what is important. While the parameterizations are somewhat heuristic, and some even ad hoc, descriptions of the irrefutable physics principles of mass, momentum, and energy are again the objective; but the parameterizations are further removed from those principles than are the approximate models of the macro-scale motions.
Again, all this does not say that the GCMs are less than useful. Fruitful discussions require an accurate specification of the topic.
Dan Hughes – I don’t disagree with you, but my comments had to be summaries rather than long explications. The issue of discretization and numerical solutions to continuous differential equations is addressed in the links Judith Curry provided. Model designers are of course well aware of these challenges and understand that these approximations make it impossible to simulate reality with 100 percent accuracy even in theory, much less in current practice. They are also cognizant of the value of designing imperfect models that are “useful”. I think readers should visit the linked sites for a detailed description of these issues.
Fred, this is false. I have worked with some climate models. MIT’s earlier EPPA model had, for each run, explicit input values for clouds, aerosols, and ocean sensitivity. Properly set, you could get the climate sensitivity down to 1C of warming by 2100.
“But it is a semantic no-op since almost all of the physics and mathematics that produce the sensitivity in the models are themselves assumptive to some degree or another. ”
All physics is assumptive. The task you have in front of you is to find the physics where the assumptions are WEAKEST.
But the IPCC already tells you that: clouds, aerosols, and small-scale sub-grid processes.
The notion that assumptions are avoidable is misguided. The real question is what the sensitivity of the final answer is TO our assumptions. We always, for example, assume that the same physical laws that hold today will hold tomorrow. Big assumption, because if it turns out wrong, then nothing you think today will be right. Yet we don’t question that assumption, for good reason.
Sorry Latimer, you are mistaken.
I had the joy of sitting through some presentations about GCMs at AGU. The parameter space is so large, and the time to compute so huge, that tuning the knobs is practically impossible.
You should be aware that there are several ways to estimate the sensitivity. There is no way to calculate it directly (much like the situation with finding “gains” that work in flight control systems).
I’ve listed the various ways in several comments. Using models to estimate the metric is only one way. In reality the models are accepted in part BECAUSE they line up with other estimates. So, one can estimate the sensitivity by looking at the observation record or the paleo record. When the output of the models matches those other estimates, that is +1 for the models. If they did not match the other methods, that would indicate they are not usable. But they match, and are useful.
Before you talk about sensitivity I’ll suggest this
http://www.iac.ethz.ch/people/knuttir/papers/knutti08natgeo.pdf
It’s an easy read with nice charts.
steven mosher | April 27, 2011 at 4:36 pm
…. with nice charts.
… with very comic nice charts !
Devastating critique. Look, there are real issues in the question of sensitivity. Those issues need to be addressed in an informed manner, not an uninformed manner. The best way to inform yourself about the state of the debate and the state of the science is to read what has been written.
Steven
That is a big scare-mongering article.
The globe is cooling at the rate of 0.8 deg C per century as shown in the following graph!
http://bit.ly/jo1AH4
The climate sensitivity should be about 0.75, not from 2 to 4.5 deg C.
remind me never to take stock advice from you.
And you can’t calculate sensitivity that way from observation.
Read up on how it’s done. Start with the references in the Knutti paper.
steve mosher
“How it’s done” is a silly concept, as is “you can’t calculate sensitivity that way from observation”.
Sensitivity, shmensitivity…
Girma has simply shown us physically observed data (for what they’re worth), which demonstrate that our planet’s globally and annually averaged land and sea surface temperature has cooled over the past decade.
Another set of physically observed data (for what these are worth) tell us that the average annual atmospheric CO2 content at Mauna Loa, Hawaii has increased over this same period.
The logical conclusion to be drawn from these two sets of data (ALL OTHER THINGS BEING EQUAL – which they obviously are NOT) is
a) globally and annually averaged land and sea surface temperature is not a representative indicator of our planet’s energy balance and should therefore be replaced with a more representative metric, or
b) increased atmospheric CO2 levels lead to lower global average temperatures, or
c) atmospheric CO2 is not the principal driver of our planet’s energy balance as measured by global average surface temperature, which would imply that
d) something else, more important than CO2 in determining our planet’s energy balance and/or the average global surface temperature, is at play.
There are many who would agree with conclusion a) (although IPCC has not been one of these).
Hardly anyone would agree to conclusion b).
Conclusions c) and d) go hand in hand. These may be the most logical conclusions to be drawn from Girma’s data, but the IPCC does not agree with them either.
So we are left with a dilemma.
But there has been no shortage on this thread of rationalizations, hypothetical explanations, beautifully-worded side-tracks, etc. on why Girma’s data do not support the IPCC “mainstream consensus” suggestion that CO2 “should be” driving surface temperature and that we “should have” seen warming of 0.2C over the first decade of the 21st century as a result.
But the fact of the matter is, it did NOT happen.
And that, Steve, is Girma’s point in a nutshell.
Max
I don’t need to read theories full of assumptions.
Observation is the final arbiter.
http://bit.ly/jo1AH4
The observation says the global mean temperature is not just a plateau, but actually is cooling as shown in my previous post.
Phil said FIVE YEARS AGO the following:
The scientific community would come down on me in no uncertain terms if I said the world had cooled from 1998. OK it has but it is only 7 years of data and it isn’t statistically significant.
http://bit.ly/6qYf9a
Here is the comparison of the global temperature trend when Phil made the above statement in 2005 and now. The graph clearly shows that the global warming rate has dropped further, from 0.06 deg C per decade in 2005 to zero now.
http://bit.ly/l3qZq7
Why is the “scientific community” coming down on Phil and us for telling the truth?
steve,
If the entire concept of sensitivity, as used currently, does not seem to match observations, how can that be dismissed?
Girma and hunter, what you don’t realize is that there are two system characteristics that matter.
I slam my accelerator from 0 to full.
I measure the response a nanosecond later and 5 minutes later.
You see that the response to the same forcing will differ.
Sensitivity is defined as the RESPONSE at EQUILIBRIUM.
Not a nanosecond after the forcing. Not 10 years, not a century. AFTER the full force of the forcing is felt by the system. Inertia.
So, you cannot simply look at a few years of data to calculate the EQUILIBRIUM RESPONSE. CANNOT.
That would be the TRANSIENT response.
Get it?
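The distinction fits in one equation. Take the simplest possible energy balance, C dT/dt = F - lam*T; the numbers below are illustrative only, chosen so the toy system has roughly an 80-year time constant:

```python
import math

# Zero-dimensional energy balance: C dT/dt = F - lam * T, with T(0) = 0.
C, lam, F = 100.0, 1.23, 3.7  # heat capacity, feedback (W/m^2/K), step forcing (W/m^2)

for t in (1, 10, 20, 100, 1000):
    T = (F / lam) * (1.0 - math.exp(-lam * t / C))  # analytic step response
    print(f"after {t:4d} yr: transient response = {T:.2f} K")

print(f"equilibrium response = {F / lam:.2f} K  <- the 'sensitivity'")
# Reading T after a year or a decade gives the transient, not the equilibrium.
```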
steve,
But you do not, in fact, have any way to know how the system will respond after 100 years and if it will respond the same.
But we do, after ~150 years of steadily increasing CO2, know that the response to date is not distinguishable from a low-CO2 environment.
So, to use your automobile on a dyno test, we know that we have been allegedly pushing the accelerator but we are now noticing that the results on the dyno are not changing.
Skeptics are pointing this out, alarmists say the engine is going to blow, you are saying something is happening but not much.
At most not much is happening.
Certainly not a dramatic change in the dyno reading.
Here’s your problem, though. You knew ahead of time that 5 minutes was more than enough time for the engine to speed up. If you didn’t know that ahead of time, the 5-minute number wouldn’t be meaningful. You can infer the asymptote if you know in advance what the dynamics are (first order, second order, etc.), by backing out the time constant(s) from the response, but you need to have a very good idea ahead of time what the order is.
You have too many variables, and with nonlinearity, no way to be sure you’ve pinned down the correct ones, no matter how much data you collect.
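ChE’s point about backing out time constants, in code: if (and only if) you assume the right model order, a short noisy record does pin down the asymptote. All data below are synthetic, and the first-order form is the assumption doing the work:

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, T_eq, tau):            # assumed dynamics: single time constant
    return T_eq * (1.0 - np.exp(-t / tau))

t_obs = np.linspace(0.0, 50.0, 51)        # a short observational window
T_true = first_order(t_obs, 3.0, 80.0)    # true asymptote 3.0 K, tau 80 yr
T_obs = T_true + np.random.default_rng(1).normal(0.0, 0.02, t_obs.size)

(T_eq_fit, tau_fit), _ = curve_fit(first_order, t_obs, T_obs, p0=(1.0, 10.0))
print(f"inferred asymptote {T_eq_fit:.2f} K, time constant {tau_fit:.0f} yr")
# Assume the wrong order (say, a second slow mode exists) and the same fit
# converges just as happily to the wrong asymptote -- which is ChE's caveat.
```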
ChE.
With the paleo data (see Hansen) you have millions of years to estimate the system response. That estimate is roughly 3C.
With observational estimates you get other boundaries (see Knutti). It’s easy to bound in on the low end. One thing to look at for the lower bound is the snowball earth stuff. The high end is where you get a long thick tail. With a model it’s rather easy to tell that you are at an asymptote; moreover you don’t have to run until you get there. If you run the model for 1000 years and the system response is still increasing, all you’re saying is that within our decision horizon the system response is 3C. Maybe higher after that, but it’s certainly not lower, and if you think it might be lower that’s really beside the point of the decision in front of you.
The lack of certainty about a system response is not a bar to action. If you slam the pedal down and after 1 minute your speed is 30 mph, and after 2 minutes it’s 60, 3 minutes it’s 70, 4 minutes it’s 75, you have information to make a decision.
The speed limit is 15. It’s a school zone. A cop is sitting 2 minutes up the road. Do you slam the accelerator? Or argue that the simulation of your car doesn’t run to equilibrium? Do you complain that it doesn’t model drag exactly right, or the changing air pressure of heating tires, or wind gusts?
OR do you say, “Given our limited knowledge of how my car will respond to full accelerator forcing, I better not do that”?
We always have less than certain knowledge, and we are always acting.
With your car’s acceleration, will it be 200 mph in x time, or will it top out and just use more fuel to get less and less response on the dyno?
The cop and school zone raise the question: what cop and what school zone?
So you just bought Pascal’s Wager. What if Mr. Bean is right?
Actually I am not using Pascal’s wager.
Let’s take the car experiment. I slam the accelerator; eventually I run out of gas and it drops to zero. You have a variety of metrics to characterize the system: the instantaneous response, the transient response, the steady state response, and the lifetime response. Since it’s a resource-limited system, the lifetime response of course is 0; the steady state response is 200 mph. Operationally, we can say:
I’m interested in the response assuming an infinite amount of gas. and then the lifetime response is the same as the steady state.
I can also say, I’m interested in the response of the earth within 100 years, what’s that?
None of this is a problem for the definition of sensitivity. The only issue is looking at too short of a time frame.
Hunter: the cop is simple. The cop is the time you choose to observe. Simply put, you might know that the steady state response will take 1000 years, with the last 900 years providing a very small delta over the first 100. The school zone? Also simple. Think about it.
In short, ChE tried to argue that we never know about the asymptote. Not really a strong objection, because if we can show that some portion of the response is dangerous then the full response is not important. So, it’s 3C after 100 years, 3.5C after 1000, and 3.65C after 10000. Not really a devastating point, to observe that 10K might not be the end of the warming. AND there is no evidence that it turns down after 100 years, AND even IF there were evidence of that, it would be beside the point, the practical point, that one has to get through the 3C knot hole.
A MORE effective argument for you to learn is what can and should we do, if anything, about 3C. That is a way smarter argument than the one you are stumbling to make. So, you don’t have to lose this argument; just make the better argument from now on.
“A MORE effective argument for you to learn is what can and should we do, if anything, about 3C. ”
Learn to adapt… because the likelihood is we will see 2xCO2.
steve,
It is not clear if we are pushing ‘the’ accelerator. In fact, as I keep pointing out, it is pretty clear that the CO2 lever is not much of an accelerator, and is certainly not ‘the’ accelerator.
The cop implies breaking something, so I did not take your reference as a timed observation.
But if we take the timed observation, what do we observe?
Not much at all.
Nothing out of the historical level of climate change, and nothing out of the normal for extreme or dangerous weather.
The school zone is a zone of safety that should not be violated.
So my question stands:
Are we in a ‘zone’, are we doing something that will violate that ‘zone’, and is CO2 the lever to manipulate to comply with that ‘zone’?
More and more, what emerges is the idea that we are in a climate system with many forcings and feedbacks, all moving in complex relationships of direct influence, feedbacks positive and negative, and some seemingly randomly contrary or deceptively related.
The discovery of an Indian Ocean-sourced current moving into the Atlantic, which was poorly accounted for, comes to mind as a nice example of complexity and of the climate science community’s non-comprehensive grasp of all significant factors.
Steve- Would you agree that the concept of “equilibrium” when it comes to earth’s climate is relatively silly?
Regarding CO2 sensitivity, it does seem fair to say that models that indicated the earth’s temperature would be sensitive by 3 C for a doubling of CO2 appear incorrect. It now appears that there are other variables that impact temperature changes and mitigate the response to CO2 changes to a greater extent than previously believed.
No, it’s not silly. Of course it’s an idealization. In the modelling world the models are spun up for thousands of years with inputs held constant until the drift is less than 2%. So technically it’s not at equilibrium, but PRACTICALLY, given the time horizon of our decision process, it is. Like I said above, if you pulse a system and see it respond by increasing temps 2C within 100 years, 3C within a thousand years, and 5C within 10000 years, you have enough information to make recommendations about the advisability of forcing the system that way.
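A sketch of what “spun up until the drift is small” means in practice (the toy energy balance again; the stopping rule and every constant are illustrative, not any actual modelling centre’s procedure):

```python
# Toy spin-up: integrate with constant inputs until century-scale drift is small.
C, lam, F0 = 100.0, 1.23, 1.0   # illustrative constants; forcing held fixed
T, dt, drift, last = 0.0, 0.1, 1.0, 0.0

while drift > 0.02:             # stop when drift over a century falls below 2%
    for _ in range(1000):       # advance one century (1000 steps of 0.1 yr)
        T += dt * (F0 - lam * T) / C
    drift = abs(T - last) / max(abs(T), 1e-12)
    last = T

print(f"spun-up state: T = {T:.3f} K")  # ~F0/lam once the drift criterion is met
```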
As for the other factors that may mitigate CO2: that is BESIDE the point of the metric in question. If I push my accelerator to the floor I can project that I will go 60 mph in 5 seconds, all other things being equal. If you want to know what happens if you tap the brakes while doing this, that is also an answerable question. BUT that doesn’t change the sensitivity to pushing the gas pedal. It’s a metric of the response due to the ISOLATED impact of that forcing. Want to know what happens if you combine forcings? Well, run that experiment: double CO2 and halve the sun. Not an interesting experiment. And you would still be measuring the sensitivity to doubling CO2. Understand that it’s just a metric.
A diagnostic metric, one we use all the time. Nothing strange or objectionable in it. What’s the response to full aft stick in an F-16?
All things being equal, it’s X degrees per second. But what if there is a massive downburst? Well, that’s a different question. It doesn’t change the answer to the first, because it’s a different question. What if the plane is carrying bombs? Different question; doesn’t change the first question or its answer.
Sensitivity isn’t a mystery. It’s an answer to a question: what happens when you double CO2 and hold everything else constant? The answer to this question can be bounded. It’s just that the range is fairly wide. The best argument for skeptics is to look at the evidence at the lower end of the spectrum. Attacking the notion of sensitivity is just uninformed and tedious. So, as a friendly suggestion, I’d say go focus on the arguments of Spencer and Lindzen. That way you can join the debate. And there IS a debate over the estimate of sensitivity; there’s no informed debate about the meaning or the usefulness of the definition.
In short, in debate there are generally a few tactics. The first and WEAKEST is a definitional attack. Better and stronger is the tactic where you work with the definition and attack on other grounds. Otherwise you get disinvited to the conversation.
steven,
That is well and good, but it implies you know what the accelerator is and how it moves.
After ~150 years of steadily increasing what is considered to be the accelerator, all we have are failed predictions of the accelerator’s influence on the system.
Steve- When you write-
“In the modelling world the models are spun up for thousands of years with inputs held constant until the drift is less than 2%. So technically it’s not at equilibrium, but PRACTICALLY, given the time horizon of our decision process, it is.”
That statement is only as meaningful as the parameters being used to determine the validity of the model (if we are using the same terms). To write that there are climate models that can hindcast over 1k years with only a 2% error rate from observed actuals would be inaccurate.
Your example of pushing the accelerator on the car is interesting. When related to climate it assumes a strong/direct cause and effect, and assumes that other mitigating influences would not normally be expected to mitigate the effect of pushing the gas (CO2) in the system (climate). It will be interesting to see over the next decade or so what the data shows. The last decade certainly did not seem to meet modelers’ expectations.
I’ve got to catch a plane to East Asia.
The basic non-feedback forcing equation is 5.35 ln(C/C0), though in most of the 90s it was 6.3 ln(C/C0). 5.35 is assumed to be the current best factor, derived from curve fitting the historical atmospheric record over concentration variances between about 250ppm and 380ppm, with most concentration measurements made (assumed??) from limited proxies. Some tightly constrained lab tests supplemented by a collection of various physics inputs played a supporting role. Part of the assessment involved trying various assumptions to see what the result would be. Fred accuses me of implying that this is tantamount to fixing the model to get the desired results, despite my clear statement that I do not believe that. This process is simply another way to try to scientifically assess their assumption: that’s spelled a-s-s-u-m-p-t-i-o-n!! Fred’s words that “[there is] no retuning of a model to bring its climate sensitivity closer to modeler’s wishes…” are true IMO, but his other words in the same paragraph, that “no assumptions, either explicit or implicit, are involved in the climate sensitivity that emerges from current GCMs,” are, well, pure hogwash.
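To put numbers on that formula (one line of arithmetic; the 1000-to-2000ppm case is exactly the untested extrapolation I mean):

```python
import math

def co2_forcing(c, c0, alpha=5.35):
    """Simplified forcing expression discussed above, in W/m^2."""
    return alpha * math.log(c / c0)

print(co2_forcing(560, 280))     # ~3.71 W/m^2 for a doubling from 280 ppm
print(co2_forcing(2000, 1000))   # same ratio, same answer -- IF the log fit
                                 # still holds out there, which is the question
```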
Steven also suggests the modelers don’t have the time to twiddle the knobs. In fact the modelers do do some twiddling (a la Monte Carlo) but, as I said above, not to nefariously force a result but simply (and properly) to test the model(s). Steven also missed my contention about the marginal forcing, which is that while we can get fairly close to estimating forcing factors in today’s 250-350ppm range, we have no assurances that this holds accurately for the 1000-2000ppm range, other than some decent clues in a limited lab environment. (As a personal sidebar, I got beat up rather thoroughly at RealClimate over this point.)
Steven’s point that all physics is assumptive is true but adds nothing to the equation. The concern that the biggest uncertainties stem from the weakest (I would use “loosest”) assumptions is also true and also valid. Though blindly letting the IPCC tell us which are and which are not strikes me as very risky.
Fred also says,
Some of those are pretty ironclad “irrefutable” physics principles, but some ain’t, and require a lot of filling in the blanks with conjecture, aka assumptions. If you get underneath the assumptions, there are a pile of uncertainties in the details of atmospheric radiative transfer, hundreds of texts notwithstanding (in fact most textbooks agree). Heisenberg said that even God didn’t know all about your irrefutable fluid flow.
Fred also says, “…changing the forcing does not change the climate sensitivity but merely the temperature response…” HUH?!? “Climate sensitivity” is temperature response, albeit on a per unit basis, isn’t it??
Steven’s link is a good one; thanks. A cherry-picked intro statement, “The quest to determine climate sensitivity has now been going on for decades, with disturbingly little progress in narrowing the large uncertainty range,” might offer some insight. On the other side of the coin, it also said, “However, in the process, fascinating new insights into the climate system and into policy aspects regarding mitigation have been gained.” Narrowing down the assumptions? ;-)
Sorry if my embedded formatting bombed.
Rod – You are free to believe whatever you wish, but as previously stated, changing a forcing applied to a GCM does not change the climate sensitivity of that GCM. That is not an assumption, but a verifiable characteristic of the models. Your other claims about assumptions have also been addressed previously. There are now good resources available for you to learn about this topic. Some have been cited above, and you might want to visit them.
Sensitivity is defined as the change in temperature with a doubling of CO2 equivalent gases in the atmosphere. If you change the rate of ‘forcing’ – the delta T is achieved sooner or later but the delta T (sensitivity) may not be the same. This derives from the intrinsic non-linear dynamical complexity of the models. This can be easily shown by reference to the convection model of Edward Lorenz in the 1960’s. There – a small change in inputs surprisingly resulted in the well known ‘butterfly’ topology of the solution space.
Your response is the trivial notional solution based on an inadequate linear conceptualisation of the problem.
I believe you may have misunderstood the definition of forcing. Delta T can be achieved for a specified forcing, but as long as there is a “rate of forcing” (i.e., a changing forcing), delta T is unlikely ever to be achieved because it has no defined value.
‘Climate sensitivity is a measure of how responsive the temperature of the climate system is to a change in the radiative forcing. It is usually expressed as the temperature change associated with a doubling of the concentration of carbon dioxide in Earth’s atmosphere.’ http://en.wikipedia.org/wiki/Climate_sensitivity
The forcing is specified – i.e. a doubling. The sensitivity is calculated by GCM as a result of the doubling of forcing. If the rate of forcing is changed – that is the doubling occurs sooner or later than otherwise – the calculated sensitivity is not necessarily the same due to the intrinsic chaotic nature of the models.
You know very well that I have not misunderstood the simple concept of radiative forcing. What I am saying is that your linear conceptualisation is not adequate to the task of understanding how climate models work.
Robert – I think you still misunderstand. Climate sensitivity is specifically defined in terms of an instantaneous CO2 doubling, not a doubling reached over time.
Fred,
The assumption of an instantaneous doubling sort of makes the rest of the claim to accuracy impossible.
First time I’ve ever heard that. Since it’s an equilibrium concept, time shouldn’t matter.
One can calculate a temperature response for a change in radiative forcing over time – assuming all else stays the same. This does not imply instantaneous doubling. The GCMs use data for well-mixed greenhouse gases that evolve over time – and this is how sensitivity is defined in AR4. It was the models we were discussing, after all – not linear and simplified equations.
This is a little like clutching at straws – and an evasion of the main point. The problem was one involving the intrinsic instability of GCMs, which you have failed to grasp.
I have quoted McWilliams (2007) before.
‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable.’
The paper itself is very hard going – but provides a very useful summary of the problems and potential solutions. I recommend you read it before making further comment on climate models.
ChE.
Time does not matter.
In experiment #1 you hold CO2 constant. You see that the temp goes to 14C. It takes 1000 years to reach equilibrium.
In experiment #2, you INSTANTANEOUSLY double CO2. Wham. The system responds slowly, decades, centuries… and finally the system reaches equilibrium. The temp is 17C.
Sensitivity is a METRIC of system response: 3C per doubling.
If the system reacted quickly and reached 17C… sensitivity is STILL 3C.
It’s the overall system response to the forcing. It includes both slow and fast feedbacks.
It’s just a metric, a system characteristic. Not an input, not an assumption. It RELIES ON assumptions: assumptions that the physics works, that all processes are captured, that forcings are accurate. But it’s just a metric.
The models require time to integrate all the processes in play – clouds, water vapour, etc. The link below from the TAR uses a 1%/year increase in CO2 in the atmosphere. Stabilisation is shown at 2 and 4 times CO2.
http://www.grida.no/publications/other/ipcc_tar/?src=/climate/ipcc_tar/wg1/345.htm
AR4 states that other methods of sensitivity estimation are unable to constrain sensitivity to less than the range of 1.5 to 4.5 degrees C obtained from models.
The essential problems of models remain – that of solution instability within the range of plausible formulations.
Let’s call a spade a spade:
The “2xCO2 climate sensitivity of 3C” is a model result, based on assumptions fed into the model (how many models or how many parameters or assumptions is not important).
It is NOT a value that has been determined by repeated reproducible experimentation or from empirical data derived from actual physical observations.
All the verbiage and rationalization in the world is not going to change this basic fact, folks.
Max
ChE – Time doesn’t matter for the final temperature change, which is approached asymptotically over many centuries. It does, however, affect the extent of change observed at early intervals, which is why equilibrium climate sensitivity is defined with reference to an instantaneous doubling, whereas other metrics are used for progressively changing CO2 concentrations.
ChE,
But CO2 increases in a dynamic multi-dimensional environment. IOW, getting CO2 to double is not occurring without other things changing along with it.
The biosphere, the oceans, the geology and atmospheric dynamics are all moving along with the CO2.
Simply doubling it instantly makes me wonder how the dynamic responses to CO2 can possibly be properly accounted for.
This is not some nice cracking unit under tight control.
It is a little difficult to come to terms with even the simplest of concepts when they are constantly being reinterpreted. My original statement was in the context of the models’ environment, which is non-linear. I was told I didn’t understand forcing for some reason and, subsequently, that the CO2 increase in the model sensitivity runs was instantaneous. Now, well, time is immaterial because it takes centuries to approach temperature equilibrium.
But I note that Fred is continuing to insist on an instantaneous increase for some reason, despite the link provided to UNEP which specifies a 1%/year increase in CO2 and shows a time to doubling of about 70 years. What is wrong with this picture?
It is far from the main game however. The ‘irreducible imprecision’ of James McWilliams is an emergent property due to ‘sensitive dependence’ and ‘structural instability’ in the models. They are fundamental properties of the models in accordance with theoretical physics of dynamically complex systems.
Until that fact is grasped – we are talking in different languages.
Rod – The exchanges about model climate sensitivity and assumptions have grown overlong, perhaps because some of the commenters feel impelled in new comments to defend their earlier ones. It might be best for me to summarize here my perspective – one I believe to be shared by Dr. Curry, Steve Mosher, and others – and then accommodate what I believe is a legitimate point you have made. I would state that:
1. Climate sensitivity and feedbacks are not assumed values fed into model design but emergent properties of the model.
2. It would be extremely difficult to tweak multiple model parameters to yield a desired sensitivity value. The modellers don’t attempt this. Changing multiple parameters by itself would change model climate sensitivity rather little, because that value is also inherent in model structure.
3. The data from which model climate sensitivity is computed are not assumptions, except in the broad sense that the data are assumed to be accurate. They involve basic principles of physics and observed physical properties of the climate system (although as Dan Hughes points out, the differential equations can’t be solved exactly and so approximate solutions are required).
4. Each GCM has a specified climate sensitivity emerging from its design. If a forcing is applied to that model, the magnitude of the forcing will affect the temperature response, but not the climate sensitivity.
Item 4, however, is a point where I believe you and I have been talking past each other to some extent – I realized this when you cited the logarithmic relationship. Current models typically (perhaps universally) utilize the forcing equation from the 1988 paper by Myhre et al that estimates a no-feedback response to CO2 doubling of 3.7 W/m^2. That estimate is not, I would argue, an “assumption” in the usual sense, because it is derived from very solid data – specifically the Hitran-based spectroscopic properties of CO2 at the appropriate wavelengths, the Schwarzschild radiative transfer equations, measured CO2 mixing ratios, and observationally constrained (but still approximated) data for lapse rates, for clear-sky/all-sky ratios, and for latitudinal and seasonal variations. It is not a measured quantity per se, of course, and is therefore subject to error, but it appears to be a good estimate. Where I would agree with you is that if CO2 doubling does not generate a forcing of 3.7 W/m^2 but some other forcing, then the temperature response to the doubling (i.e., the climate sensitivity) would differ from the values in the models. If the forcing you had in mind is that one, part of model design rather than an input fed into existing models, then we have no disagreement.
My original intent in jumping into the discussion was to reinforce Dr. Curry’s point about climate sensitivity as an emergent property of the models rather than an assumption fed into model design. Some of the participants here may still misunderstand this, and I think they should visit the references in the links Dr. Curry cited.
It’s Myhre et al 1998, not 1988.
‘1. Climate sensitivity and feedbacks are not assumed values fed into model design but emergent properties of the model.’
This at least is correct and would that he had stopped there.
‘2. It would be extremely difficult to tweak multiple model parameters to yield a desired sensitivity value. The modellers don’t attempt this. Changing multiple parameters by itself would change model climate sensitivity rather little, because that value is also inherent in model structure.’
Models are calibrated to observed temperature using a range of values for ozone, aerosols and for natural forcing. The range of sensitivities obtained is a result – at one level – of the range of values assumed for these relatively unknown factors.
‘3. The data from which model climate sensitivity is computed are not assumptions, except in the broad sense that the data are assumed to be accurate. They involve basic principles of physics and observed physical properties of the climate system (although as Dan Hughes points out, the differential equations can’t be solved exactly and so approximate solutions are required).’
The numerical solutions of differential equations can be made arbitrarily accurate, to a degree consistent with the accuracy of the inputs. Part of my Honours thesis involved comparison of numerical solutions of a differential equation with a special case where an analytical solution was available. High-order solutions do not faze a supercomputer; my XT clone, however, was a different story.
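That kind of check is easy to reproduce: integrate an equation with a known analytic solution and watch the error shrink at the method’s theoretical order as the step is halved. A minimal example (Euler on dy/dt = -y; the equation and step sizes are my choice, purely for illustration):

```python
import math

def euler(dt, t_end=5.0):
    y = 1.0
    for _ in range(round(t_end / dt)):  # forward Euler steps of size dt
        y += dt * (-y)
    return y

exact = math.exp(-5.0)                  # analytic solution of dy/dt = -y, y(0) = 1
for dt in (0.1, 0.05, 0.025):
    print(f"dt = {dt:5.3f}: error = {abs(euler(dt) - exact):.2e}")
# The error roughly halves with each halving of dt (first-order convergence):
# accuracy is limited by step size and word length, not by the method in principle.
```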
The major sources of instability in the models relates to sensitive dependence and structural instability. Plausible changes in initial or boundary conditions can cause the solution to bifurcate into a different phase space. These instabilities have not been exhaustively explored in systematically designed model suites.
‘4. Each GCM has a specified climate sensitivity emerging from its design. If a forcing is applied to that model, the magnitude of the forcing will affect the temperature response, but not the climate sensitivity.’
Sensitivity typically relates to a doubling carbon dioxide equivalent in the atmosphere. The rest of the statement conflates divergent issues. It should be noted that the sensitivity calculated is not a unique solution within the scope of plausible model formulations. This emerges from dynamical complexity intrinsic to the models.
‘Item 4 … Current models typically (perhaps universally) utilize the forcing equation from the 1988 paper by Myhre et al that estimates a no-feedback response to CO2 doubling of 3.7 W/m^2….’
Mostly irrelevant – to paraphrase the Hitchhiker’s Guide.
Fred, et al
Exactly how does a GCM take as an input CO2 concentration (for example), spin it round and round, and come out with forcing or, in turn, produce the new temperature, a la climate sensitivity? If the GCM doesn’t need our input and inherently knows how to do it, why do we waste our time studying radiative transfer physics? Why not just ask HAL? ;-)
To quickly address your points in #65436:
#1 — see above
#2 — I have never said parameters are tweaked to obtain desired results.
#3 & 4 — we evidently just differ over the robustness and precision of climate/atmospheric physics. All of those parameters you mention, including a significant part of HITRAN, are derived using simplifying assumptions of some degree or another. As Pierrehumbert says, “Sometimes, a simple scheme which is easy to understand is better than an accurate scheme which defies comprehension.” (Not an exact analogy, but insightful.)
Standard CO2 forcing is 5.35 ln(C/C0); 3.7 watts/m^2 = 5.35 × ln(2). You’re right, we’re talking past each other. I just can’t get my arms around the relevant distinction between “climate sensitivity” (as commonly defined and described by Chief H) and “temperature increase” from added CO2.
Maybe a quote from the Myhre reference can help: “…This work presents new calculations of radiative forcing… using a consistent set of models and assumptions.” (emphasis mine)
Rod – Maybe I misunderstand you, but I didn’t find anything new in your latest comment. GCMs utilize concentration/forcing relationships such as the one from Myhre et al to convert CO2 concentration to forcing. They utilize the climate sensitivity parameters they have generated to convert forcing to temperature change. The point about assumptions has been beaten to death by now. As Steve Mosher pointed out, even the smallest measurement involves assumptions (e.g., your thermometer is recording actual temperature), and many climate computations require some degree of approximation (including numerical solutions to differential equations), which assumes that the approximation is reasonably accurate. The original claim (far above) was that climate sensitivity values were assumptions fed into GCMs, or were based on arbitrarily assumed values for their calculation that were fed into the GCMs. That is incorrect, but the notion that assumptions in the broad sense are part of every calculation that involves areas of uncertainty is of course correct. It is not the same as claiming that a particular value is an “assumption”, and if that distinction is understood, I have no problem with invoking the word assumption.
Incidentally, who at RC insisted that the Myhre-based logarithmic formula breaks down at CO2 concentrations of 1000 ppm? It may be true, but I haven’t seen that claim, and I would not necessarily be convinced unless I saw the data or knew that someone with accurate knowledge of the area (e.g., Raypierre) was the one who said it.
Fred,
the ‘logarithmic formula’ comes from the Beer-Lambert law for linear absorption. The only way it ‘breaks down’ is from a non-negligible contribution from nonlinear absorption of light. That means the higher-order material response of the molecule begins to be accessed, i.e., two-photon absorption, etc.
This is an intensity dependent process as the induced polarization in a gas of molecules can be expanded out in orders of the incident electric field strength. For higher incident field intensities, more of the higher order contributions of said expansion contribute to the overall macroscopic polarization of the gas.
At the field intensities, gas pressures and temperatures normal in the atmosphere, however, no such processes should be occurring to the level that makes accounting for them necessary.
My guess is that whoever claimed the Beer-Lambert law ‘breaks down’ was doing so on the premise that at higher concentrations there is a smaller change in the light absorbed for any change in the concentration than at lower concentrations. As far as I understand such a statement, it’s basically rephrasing the definition of ‘logarithmic dependence’.
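That saturation is simple to exhibit. A toy Beer-Lambert calculation follows (single effective absorption coefficient, invented constant; real CO2 forcing involves band structure, so this is only a caricature of the diminishing-returns behaviour):

```python
import math

k = 0.005  # illustrative effective absorption coefficient (arbitrary units)
for C in (100, 200, 400, 800, 1600):
    absorbed = 1.0 - math.exp(-k * C)   # Beer-Lambert: transmitted = exp(-k*C)
    print(f"C = {C:5d}: fraction absorbed = {absorbed:.4f}")
# Each doubling adds less absorption than the one before; 'logarithmic
# dependence' is shorthand for exactly this diminishing-returns behaviour.
```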
Fred, I may be wrong, but I think GCMs actually use a band model for the RTE. You can see some of the intercomparison work done between GCMs and LBL codes, checking that the answers given in the GCM comport with those given by LBL.
I have no clue what Rod is mumbling about with the 1000-2000ppm numbers. Even if what he said were true it would have no bearing on a discussion of doubling. Same thing with the silly comments about the accuracy of a formula (the log formula) that doesn’t get used, at zero.
Tedious distractions from the main debate.
The most annoying thing to me in this conversation is that there IS a debate, a real debate over the range of sensitivity. Skeptics could join that debate and have the debate they want. Instead, they confuse the terms of the debate, deny the usefulness of the best tools we have, and then wonder why nobody wants to talk to them. And they make silly arguments about the meaning and importance of assumptions.
Steven – Yes, I believe the GCMs generally use band rather than LBL models because the latter are computationally impractical for extended intervals. Myhre et al also compared LBL with two band models and found adequate agreement.
Fred, it sounds like we’ve almost gotten past the semantics. Climate sensitivity is whatever the GCMs put out after massaging a whole series of inputs. My point was that (many of) the inputs are based on assumptions, so therefore is the output. I think you agree, though you put a different character on “assumptions.” It seems you would assess each assumption or approximation at each level, one on top of another, decide they were all very accurate, and continue the judgment that the science (model) is still irrefutable. I would agree that most are very good, but a few are based on very good conjecture; the whole may rise to “very good” but nowhere near irrefutable.
My first simple example was the use of 5.35 ln(C/C0) to determine marginal forcing going from (say) 500 to 1000ppm: does the log function hold? The 5.35 factor? It’s a good bet that they do (at least reasonably closely), but nobody knows for sure. Only some highly constrained lab experiments have looked at this. No one has ever observed it within our climate system.
I have more to add (hold the applause ;-) ) but am on a borrowed computer and have to vacate.
Later.
Dr. Curry,
How can time not matter in determining sensitivity to anything in a dynamic system that has many variables?
The assumption that it does not fails to pass the smell test.
Hunter – technically time does not matter. What matters is that the variables do constantly change over time. The earth may have been sensitive by 3 C ten years ago and only sensitive by 0.3 C today (only as an example).
Rob,
Technically why?
Hunter, you are misunderstanding the meaning of the word sensitive.
I will make it simple.
Replace the word sensitive with the word DELTA.
If you double CO2 and ONLY change that, what is the DELTA temp?
If you slam your accelerator to full, what is the instantaneous response? Zero; your tires spin. Then your car starts to speed up, accelerating. These are TRANSIENT responses. When your car hits a top speed, say 130 mph, and stays there, you say the FULL RESPONSE to that forcing is 130 mph.
Now, it does not matter if it takes you 1 minute or 1 hour to get to 130 mph. That’s the full response to the forcing.
Before that time you have time-sensitive responses.
Finally, will the time it takes you to get the FULL RESPONSE matter? Why yes, it will. It will prevent you from doing stupid things like looking at short time windows and concluding anything about the full response.
Steve, if we embrace your car analogy and apply it to 20th century climate, is it not the case that the “accelerator” has been depressed at all times during that period (anthro-CO2 was emitted in increasing quantities), yet the car’s speed has risen, then fallen slightly, then risen again, and is now (with the “accelerator” further to the boards than ever before) falling once more? Doesn’t this cast doubt on the precise identity and function of the pedal you are pressing? To extend your analogy, if I took a car out for a test and it behaved in this way, I would confidently say that the pedal I had been pressing, while it may have controlled the addition of some performance-enhancing additive to the fuel mix, was definitely NOT the primary means of increasing the speed of the car.
steven,
Actually I do understand the term.
The more I read the explanations of it by those who defend it, the more useless I think it is.
To keep abusing the automobile metaphor: you made a huge assumption in the dyno test, namely that the transmission is automatic.
Let us revisit the concept using a seven speed manual transmission with say a split-tail three speed differential and manually selected four wheel drive.
The accelerator would be only one part of getting to a given speed, and the speed/torque question would be massively more complex.
I think this is a better way to look at how climate would work.
Punching the gas leaves open the question of what gear and what rear-end are chosen, and whether you are in 4x4 or 2-wheel mode.
Hell yes – let’s try it out on a 700 horsepower, turbo charged, 8 litre V12.
Chief,
It is disappointing that so few people see what I am saying.
“Steven also suggests the modelers don’t have the time to twiddle the knobs. In fact the modelers do do some twiddling (ala Monte Carlo) but, as I said above, not to nefariously force a result but to simply (and properly) test the model(s). Steven also missed my contention about the marginal forcing, which is that while we can get fairly close to estimating forcing factors in today’s 250-350ppm range we have no assurances that this holds accurately for the 1000-2000ppm range, other than some decent clues in a limited lab environment. (As a personal sidebar, I got beat up rather thoroughly at RealClimate over this point.)”
Monte Carlo? You are mistaken. In AR4 there were over 20 models run for a grand total of slightly over 50 runs. A good number of models were run only ONCE; some, like ModelE, were run 5 times. So there is no Monte Carlo in the sense that I know. At the cutting edge now some people are looking at stochastic physics for parameterizations. You can see the Newton Institute's seminars on these types of approaches.
With regard to marginal forcing it was appropriate for RC to beat you up. I won’t pile on.
The postdicting in figure 4 is amazingly accurate except for the 1930-to-1940 period. Why are all the red line squiggles missing after 2000?
Surely the purpose (if any) of reasoning depends on the species of Reason.
Predicate logic seems well-suited to resolving immediate problems, mainly predictive. It also provides an economic advantage of efficiency due to its reliable reproducibility. “Fruit out of reach. Climbing tree hard work. Stick is here. Stick is long. Hit fruit with stick; make fruit fall; collect fruit from ground. Eat.” Logic appears to be all about stomachs.
The Principle of Contagion appears to be about avoiding injury: Fire hot. Rocks that touch fire get hot. Don’t touch rocks that touch fire.
Motion at a Distance appears to have evolutionary roots in threat avoidance: Wave arms and jump up and down here, lioness jump out of tall grass there. Don’t wave arms and jump up and down near trees.
The Homeopathic Principle may have its roots in early social reasoning, as it has primitive connections to concepts of family and ownership, territory and inheritance.
The Law of Similarity… no clue where that one's from. Maybe it's a primitive to other forms of reasoning, or purely organic.
However, as a group, the forms of Reason have this in common: they are pleasurable; they are universal to some degree in all human cultures, independent of education or upbringing; they are obsessive pastimes often pursued for no apparent purpose or gain; they are closely tied to language; and perceived feats of reasoning impress onlookers and bring respect, so Reasoning is important in human hierarchical systems of power. Odd then that so many politicians can't reason their way out of a paper bag.
It doesn’t take being right to be considered wise. Argument long ago ceased to literally bear fruit.
If it had a purpose, it’s largely lost it, replaced by the good it can do the participants in the argument for the social power it imparts, or the pleasure they take in flapping their lips. So long as some few retain the ability to make productive argument and use Reason appropriately often enough for the imitators to be able to mimic them, argument will continue to have in large measure the purpose of enriching the speaker.
Dr. Curry,
You said:
“Separating natural (forced and unforced) and human caused climate variations is not at all straightforward in a system characterized by spatiotemporal chaos. While I have often stated that the IPCC’s “very likely” attribution statement reflects overconfidence, skeptics that dismiss a human impact are guilty of equal overconfidence.”
I think in the absence of strong evidence for or against AGW, the skeptics have the advantage over the warmists. Why? Because in science, the burden of proof falls on those who make a positive claim. That humans are warming the planet is a rather extraordinary claim. That nature is warming (or cooling) the planet is a perfectly ordinary claim since human civilization is only 5,000 yrs old but the climate has been changing for billions of yrs. It can only be due to nature.
As Carl Sagan said, extraordinary claims require extraordinary evidence. You cannot say there must be life on Mars because nobody has ever proven there was none. The scientific method says it is perfectly normal to assume there is no life on Mars unless you find extraordinary evidence. I say the same is true of AGW.
Honestly – we are engaged in a great atmospheric experiment for which we have not the wit to determine the outcome. Seriously – the last time CO2 levels were this high was 10 to 15 million years ago and this is the result of anthropogenic emissions. Ipso facto – this is probably not our finest accomplishment. Why the heck is it an extraordinary claim that this is potentially a problem and at the very least we should do practical things to limit the experiment?
It is extraordinary that you have the complacency to think that this is somehow in the ordinary course of events. It is OK for people to change the atmosphere and it needs to be proven that this is a bad thing?
I think the world is mad.
For most of earth’s 600 million yrs of geologic history, atmospheric CO2 was above 600 ppm without the help of man. Now CO2 is 390 ppm and you’re certain it’s due to humans. That’s an extraordinary claim. The complacency is that you accept it without extraordinary evidence.
For most of the Quaternary the levels have been 280 to 300 ppm – and we are rapidly heading for 600ppm. That the rise commenced post industrial revolution is an extraordinary coincidence. That there are isotope analyses that suggest a fossil fuel origin seems reasonable.
No reasonable person disputes that the source is anthropogenic or that greenhouse gases are – well – greenhouse gases. It is a nonsense to suggest otherwise – although I am aware that there are some fringe views on both counts. I consider such views to be certifiably insane.
In the last 600,000 yrs, atmospheric CO2 fluctuated from 200 to 300 ppm. The last spike from 200 to 300 ppm began 1,000 yrs ago long before the industrial revolution. Man could have contributed above 300 ppm but that is not a certainty.
The isotope analyses that suggest anthropogenic CO2 need critical re-analysis. Roy Spencer provided that in this link.
http://wattsupwiththat.com/2008/01/28/spencer-pt2-more-co2-peculiarities-the-c13c12-isotope-ratio/
Even granting CO2 is man-made, it doesn’t follow that global warming is man-made. There are other possible natural causes. No reasonable person views natural causes as certifiably insane. That is a sure sign of fanaticism not rationalism.
Chief,
You keep on that point about the great experiment, but you have a few assumptions that may be worth checking.
– were things in the environment dangerous to life in the days of 600 ppm CO2?
– is there any evidence that the climate was more dangerous in a 600 ppm atmosphere than the present era’s?
I would love to see us using power sources that emit less CO2, all things being equal.
But so far, not one policy at all promoted by the AGW community has done anything to lower CO2 increases.
But the policies and procedures the AGW community pushes have the net effect of increasing money to the AGW community and increasing costs to tax payers and consumers for many things including food and power.
I can understand the application of the Hippocratic Oath implicit in your concern- first do no harm- but I would suggest that if one objectively looks, it is the AGW community that is violating the principle at this time.
Hunter,
It is worse than useless – AGW has been a monumental distraction for a generation when we could step up useful actions – the multiple paths and multiple goals of the recent Hartwell thread.
I have 2 arms to my argument. The first is that we are changing the atmosphere. That seems abundantly evident despite the quibblings of Spencer and Dr Strangelove – or how I learnt to stop worrying and love 9 billion metric tonnes/year of CO2 emissions.
As for global warming – if he had been at all interested he might have realised that I think it quite likely that the planet is cooling for a decade or 3 more at least. C'est la vie. This indeed involves the 2nd arm of my argument: that we have not the wit to understand the outcome. Are you suggesting that more CO2 necessarily equals a warmer planet? Tsk tsk – I expect better of you. That is linear thinking in a non-linear world.
Cheers
Chief,
If, IF, all things were unchanged, and only CO2 was increasing, then the chances are there would be warming.
But all things are not unchanged.
The biosphere increases with CO2, the ocean buffers CO2, etc.
If the CO2 is produced by something sooty, then carbon black settles out all over the place, including ice, increasing melting rates.
and so forth.
Now the lovely Mrs. hunter needs my attention, so I will get back to this repast later.
But think of how trivial 9 Gt per year really is in a system the size of Earth.
One of the simpler solutions involves reducing black carbon and tropospheric ozone – with tremendous health, environmental and agricultural benefits.
I am not overly concerned with warming, but sensitive dependence on small initial changes in dynamically complex Earth systems implies that there is an appreciable risk from greenhouse gases. One that I can't quantify. The risks theoretically include the potential for tens of degrees of cooling in places within months. Where do you live?
I don't want to overplay the risk but rather the opportunities, of which I have spoken ad nauseam. They include biological conservation and restoration. Improvements in agricultural soils by adding carbon. Ensuring that people have access to health, education, safe water and sanitation, and that they have opportunities for development through free trade and good corporate governance. The latter would do more than anything else to stabilise population. Simple and relentlessly pragmatic environmental and humanitarian objectives.
There are lots of people with another agenda and these need to be ruthlessly crushed. As I say a monumental distraction by people who think government can solve things through legislation – but don’t tell Bart I said that. All I am advocating is common risk avoidance involving humanitarian and environmental progress.
What can be wrong with that?
Chief,
Actually you and I are very much on the same page.
Too bad the climatocracy and their followers are not.
Dr. Pielke Sr. made some interesting statements on climate sensitivity:
” The use of the terminology “climate sensitivity” indicates an importance of the climate system to this temperature range that does not exist. The range of temperatures of “1.5 C and 4.5 C for a doubling of carbon dioxide” refers to a global annual average surface temperature anomaly that is not even directly measurable, and its interpretation is even unclear…”
” This view of a surface temperature anomaly expressed by “climate sensitivity” is grossly misleading the public and policymakers as to what are the actual climate metrics that matter to society and the environment. A global annual average surface temperature anomaly is almost irrelevant for any climatic feature of importance.”
All climate models are built on assumptions fed into them. So they can't be called "experiments" and their output proclaimed as "evidence".
Issues like water vapour feedback and cloud behaviour are very poorly understood.
The post below by Dr. Pielke Sr., on a paper about water vapour feedback behaviour, is worth reading to see how poor the understanding of this aspect is:
http://pielkeclimatesci.wordpress.com/2011/04/25/water-vapor-feedback-still-uncertain-by-marcel-crok/
Venter
Having read Pielke but not being a climate scientist or computer modeler, I can only comment as an observer, rather than as an “expert”.
But I’d say you’ve described it pretty well.
I think one could summarize (possibly in an oversimplified way):
The GMTA as reported by HadCRUT, etc. is not a representative indicator of our planet’s net energy balance to start off with.
The climate sensitivity range cited by IPCC is essentially an estimate based on model simulations with many assumed inputs, and the process by which it was derived is quite tortuous.
But the important point is that it is not based on empirical data from physical observations or reproducible experimentation, but rather on theoretical deliberations backed by some interpretations of selected paleo-climate data from long ago.
And Steve McIntyre’s challenge holds: until we have an “engineering quality” confirmation of the IPCC claim that the 2xCO2 climate sensitivity is 3C (on average), we do not know what we are talking about (and certainly should not base any “policy decisions” on this IPCC claim).
All the talk about “post-normal”, or “post-modern” science, “precautionary principle”, etc. is just diversionary smoke to avoid the main uncertainty here. Let’s settle the basics first.
Max
Max – I believe you will find that there are currently no climate models that accurately forecast the last 10 years. The fact that CO2 has risen at very close to the predicted rate over that period, but temperatures did not, would seem to indicate that there are variables not yet understood. As I understand it, the models of 10+ years ago did "hindcasting" pretty well, but were terrible at actually predicting the future.
Max – I believe you will find that there are currently no climate models that accurately forecast the last 10 years.
#######
That is actually a testable claim, and it's false.
There were models that had trends which fit the observed trends.
The way to test this is to pick a start date that matches the date when the runs were started. Then look at all the runs of all the models, and compare the trends in the runs versus the actual earth trends:
http://rankexploits.com/musings/wp-content/uploads/2011/04/Histogram.jpg
As you can see, MOST model runs were high, some were low and some were very close. You also need to define ACCURATE; that's not a scientific question, as pure science doesn't have an answer to how much confidence one needs: 90%? 95%? 99%? 99.99%? That is a choice based on tradition, not logic. What we see is that on average "models" overestimate the trend by a little bit. Not wrong.
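The test can be sketched in a few lines. The numbers here are synthetic stand-ins, not the actual runs behind the linked histogram; only the mechanics (a least-squares trend per run, compared against the observed trend) are the point:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2001, 2011).astype(float)   # pick a start date matching the runs

# Hypothetical stand-ins: one "observed" series, fifty model runs.
obs = 0.010 * (years - years[0]) + rng.normal(0.0, 0.08, years.size)
runs = [0.020 * (years - years[0]) + rng.normal(0.0, 0.08, years.size)
        for _ in range(50)]

obs_trend = np.polyfit(years, obs, 1)[0]
run_trends = np.array([np.polyfit(years, r, 1)[0] for r in runs])

print(f"observed trend: {obs_trend:+.4f} C/yr")
print(f"run trends:     {run_trends.mean():+.4f} C/yr on average; "
      f"{(run_trends > obs_trend).mean():.0%} of runs above observed")
```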
“The fact that CO2 has risen at very close to the predicted rate over that period, but temperatures did not, would seem to indicate that there are variables not yet understood. As I understand it, the models of 10+ years ago did ‘hindcasting’ pretty well, but were terrible at actually predicting the future.”
Huh? There are couple things.
1. CO2 response is lagged. You slam the pedal down and it takes decades to see the response.
2. The models used a solar forcing (TSI) that ended up being high.
At the time the models ran, most of them used a straight-line forecast for the future based on the 2000 figure. Well, TSI went down.
I'm also pretty sure they missed the methane forecast and were high on the CO2 forecast.
Again, unless you define what you mean by forecast skill, words like accurate and terrible are meaningless.
Venter.
“All climate models are built on assumptions fed into them. So they can't be called ‘experiments’ and their output proclaimed as ‘evidence’.”
This is largely a misunderstanding. Let’s take a simple engineering example.
Before I build a plane I have to model its flight control system. To do that I rely on a vast array of computer models. These models represent laws of physics, sometimes in a simple way, sometimes in a more complex way. These models never ever match a real-world plane. EVER. Anyway, so I build a model of the flight control system. It's a model of actuators and surfaces. For the surfaces I have models of forces: deflect the flap 10 degrees and the model predicts a certain amount of force. It's never ever right. There are things I cannot model, nasty stuff like vortices. I can model them in gross ways, but not in super fine detail. Still we model how an ideal system would respond. We do computer experiments. We feed in assumptions. We play around until we find a range of numbers for the gains in the system. It's bunches of assumptions: assume the surface models are correct, assume a standard-day atmosphere, assume the engine responds as it should, tons of assumptions. It's assumptions all the way down to the laws of physics, which are assumptions themselves.
And there is uncertainty. But these experiments give us the best understanding we have about how a REAL plane will respond. Then we build the thing we modelled and we fly it. And we see how good or bad our modelling is. The modelling is EVIDENCE. It's not physical evidence; it represents our best understanding of how the system will respond. No one who works with models would suggest that it's physical evidence. No one would fly a test plane if the model said it would crash. No one would put passengers in the plane without testing it first, in the real world.
But with climate we have the following. We have models. Limited, but the best we have today. Those models tell us that IF we double CO2 the plane will crash. Your answer is to question the model and load the passengers.
Not good engineering. Of course it's even more complicated because of the cost of not loading passengers. But that is a different question.
Sorry to just jump in here, but I’ve been following this for a couple of days and I have some questions if anyone would be willing to answer.
Would we all agree that climate sensitivity as represented in the models is just an estimate?
Yes, they are just estimates. Planck's constant is also an estimate, and it has a standard uncertainty. 2+2=4 is not an estimate.
There is no epistemic difference between estimates of sensitivity and estimates of other science claims, EXCEPT the range of uncertainty. When that range is really tiny we call it a fact, but it's not really. So it's rather silly to make a huge issue about the fact that sensitivity is an estimate. All science is estimates; some estimates are just super narrow.
The issue is this. At one end of the uncertainty we have a manageable problem; at the other end we have a disaster.
Averting the disaster is costly; mitigating the manageable problem is less disruptive.
Arguing about whether sensitivity is an estimate, when ALL science is an estimate, is silly, especially when the important question is "how good is the estimate?"
That is the real question, the important question, the question where skeptics could actually join the debate. Pointing out that sensitivity is an estimate is trivially true; all science is. Now move on to the real question: what's the uncertainty and how was it calculated?
Steven
Sensitive, aren't we? You are the one making a big issue about estimates, it seems; I just asked a question. If you want to answer my next question, fine; if not, fine.
What is the range of the estimates used for climate sensitivity in models?
Jerry – Climate Sensitivity values are estimates, but with a confidence level attached to them. Typically, the estimated range is 2 – 4.5 C per CO2 doubling as the 90 or 95 percent confidence interval, signifying that based on available data from a very large number of studies, a conclusion that the true value is between 2 and 4.5 C is 90 or 95 percent likely to be correct. We don’t know the true value, and the confidence interval may change with new data, but even if the true value is outside that interval, we have not been “wrong” because the interval includes that possibility at the 5 to 10 percent level.
For a reasonable discussion of climate sensitivity, see the 2008 review by Knutti and Hegerl, as well as AR4 WG1 chapters 8 and 9 plus references. More recent reports have appeared since then, including those by Clement et al, Lindzen/Choi, Spencer/Braswell, Dessler, and I believe by Lear et al. Do they expand, contract, or shift the 2 – 4.5 C range? I’ve read them, and I don’t see any reason to believe that they do – in fact, some are rather irrelevant to multidecadal temperature responses to persistent CO2 forcing, because they address very short term transient changes due mainly to ENSO variations (Dessler, Lindzen/Choi, Spencer/Braswell).
We clearly would like to estimate climate sensitivity more precisely, but this can't be achieved in the blogosphere by selecting ("cherry-picking") references that support a particular value. That has not been common in this blog but has been a standard feature in some others. In the meantime, it is reasonable to proceed based on the canonical 2 – 4.5 C range, with any policy decisions derived from both the available scientific evidence and value judgments outside the realm of science.
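As an illustration of what the 2 – 4.5 C, 90 percent statement pins down: if the uncertainty is treated as roughly lognormal (an assumption made for this sketch only, not something asserted by the IPCC or the studies above), the two quantiles determine the whole distribution in closed form:

```python
import math

low, high = 2.0, 4.5   # stated 90% interval: 5th and 95th percentiles
z = 1.645              # standard normal 95th percentile

# Assume ln(S) ~ Normal(mu, sigma) and solve the two quantile equations.
mu = 0.5 * (math.log(low) + math.log(high))
sigma = (math.log(high) - math.log(low)) / (2.0 * z)

print(f"implied median sensitivity: {math.exp(mu):.2f} C")  # 3.00 C
print(f"implied log-space sigma:    {sigma:.3f}")
```

Under that (assumed) shape, the familiar 3 C central value falls out as the median of the stated interval.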
Fred
Thanks for your answer. I would point out that we do not elect scientists to make policy decisions, though they seem to have taken that role upon themselves, which is not only arrogant but rather frightening, and is obviously shading their objectivity.
Would I be wrong in saying that climate models are basically an experiment? Or is there a better term that a believer would use to describe climate modelling?
The guys who wrote and ran the models are real confident that they handled the input and algorithms accurately — or damn close. Is that how it works?
RodB –
The guys who wrote and ran the models are real confident that they handled the input and algorithms accurately — or damn close. Is that how it works?
According to a previous discussion on this forum, that’s how it happens in climate science. But then they’re “professionals”, whatever that’s supposed to mean.
I’d have been fired and escorted out the door by an armed guard if I’d tried to sell that stuff where I worked. And rightfully so.
Excellent! How good is the estimate of sensitivity?
If you consider that 1999 to present may be a legitimate trend (yes, I know it is not yet a trend, but when making predictions one might need to assume that what appears to be a trend is a climate-shift-induced trend, and that what is obviously lower than the models expected could be a trend), that would indicate less than 3 C sensitivity. Based on the paper you linked earlier, the hindcast was good except for the period 1930 to 1945. That may possibly indicate an unexpected unforced variation. Following the alleged climate shift circa 1999, that model also seems to overestimate sensitivity, due to the not-yet-a-trend trend?
I believe that less than 10% is a common model estimate of unforced variability. If that weird not-yet-a-trend turns into a "real" trend, how good is the estimate?
Dallas,
The AGW community claims weather events as *proof* of their claims.
The AGW promoters had no trouble claiming the 1980 – 1998 period as *proof* of CO2 doom.
We should have no compunction at all in claiming a >10 year trend as a…..trend.
The fun thing for years now has been to observe the AGW believers' weather watch, claiming an amazing litany of warm, cool, cold, hot, wet, dry, calm, stormy, snowy, warming, cooling as *proof* of their great enlightenment.
Hunter,
Yeah, it is pretty funny. Then they scream for action, but only seem to shoot down what is reasonable under the circumstances. Lots of calls for action but few realistic solutions offered.
hunter –
Some years ago (~2003?) I did some calculations based on a 5-year trend, which is what RC was using. But when I went into RC to check some numbers, I found that they'd changed to an 8-year trend. Since then, of course, the requirement has been increased to 10 and then 15 years. And now they don't want to talk about anything less than 20- or 30-year trends. Isn't it amazing how standards change when the data doesn't fit one's preconceptions?
Jim Owen,
Exactly.
Both camps are moving, but the believer camp moves away from reality while the skeptic camp moves towards reality.
5-8-15-20-infinity and beyond for the believers.
Skeptics embrace CO2 as a ghg and warming over time, yet point out that AGW policies are failures and that the predictions are not working out so well from an accuracy point of view.
Fortune telling has always been a “chancy” business. And the practitioners have rarely, if ever, gotten the respect they think they deserve.
A ten-year trend is a ten-year trend. It has a confidence bound that is large. There is not very much you can tell from that except in extreme conditions.
No argument there, but it wasn’t me that started with a 5 year trend when it was convenient for their argument and then moved the goalposts when it wasn’t.
Yes Jim, there is much silliness about short trends on all sides. I try to be in the middle and avoid making silly comments about trends. Plus, they are not really talking about trends; they are talking about least-squares fits, which are not exactly the same thing as a trend. But that's an ugly can of worms we need not open.
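Here is a quick way to see why the confidence bound on a short trend is large: synthetic data, ordinary least squares, and no autocorrelation correction (a real temperature series would need one, which widens the bounds further):

```python
import numpy as np

rng = np.random.default_rng(1)

def trend_and_stderr(n_years, true_slope=0.015, noise=0.1):
    """OLS slope and its standard error for a noisy linear series."""
    t = np.arange(n_years, dtype=float)
    y = true_slope * t + rng.normal(0.0, noise, n_years)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = np.sqrt(resid.var(ddof=2) / ((t - t.mean()) ** 2).sum())
    return slope, se

for n in (5, 10, 30):
    s, se = trend_and_stderr(n)
    print(f"{n:2d} years: {s:+.4f} +/- {1.96 * se:.4f} C/yr (95% bound)")
```

With 5 or 10 years the bound typically swamps the slope itself; only the 30-year window starts to pin it down.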
Steven,
I have referenced McWilliams (2007) before – frequently. The models are non-linear. They have ‘irreducible imprecision’ due to ‘sensitive dependence’ and ‘structural instability’. But unless you comprehend the underlying theoretical physics these are just words without a scientific meaning.
‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable.’
Irreducible imprecision in atmospheric and oceanic simulations
James C. McWilliams*
Department of Atmospheric and Oceanic Sciences and Institute of Geophysics and Planetary Physics, University of California
http://www.pnas.org/content/104/21/8709.full.pdf
The answer to your question is that we do not know what the uncertainty is. We have opportunistic ensembles but lack the methodically designed model suites needed to systematically explore the extent of irreducible imprecision.
Tim Palmer – head of the European climate computing centre – is of the opinion that these types of estimates are not theoretically justifiable at all. The best we can hope for are estimates as probability density functions.
‘Prediction of weather and climate are necessarily uncertain: our observations of weather and climate are uncertain, the models into which we assimilate this data and predict the future are uncertain, and external effects such as volcanoes and anthropogenic greenhouse emissions are also uncertain. Fundamentally, therefore, we should think of weather and climate predictions in terms of equations whose basic prognostic variables are probability densities ρ(X,t), where X denotes some climatic variable and t denotes time. In this way, ρ(X,t)dV represents the probability that, at time t, the true value of X lies in some small volume dV of state space.’ (Predicting Weather and Climate – Palmer and Hagedorn eds – 2006)
In fact I would define dynamically complex systems – examples being weather, climate and models – as being epistemologically different from the linear systems we are comfortable in thinking about.
Indeed – in a dynamically complex environment – sensitivity is one of the concepts that has no essential justification.
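Palmer's ρ(X,t) framing can be illustrated on the classic Lorenz-63 system. A minimal sketch, using the standard textbook parameters and an arbitrary ensemble size and perturbation scale, nothing tuned to climate: run an ensemble of almost-identical initial states forward and treat the spread as the forecast density.

```python
import numpy as np

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations (crude but adequate here)."""
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

rng = np.random.default_rng(2)
# 200 ensemble members differing by ~0.001 in their initial state.
ensemble = np.array([10.0, 10.0, 25.0]) + rng.normal(0.0, 1e-3, (200, 3))

for _ in range(2000):   # integrate every member out to t = 20
    ensemble = np.array([lorenz_step(s) for s in ensemble])

x = ensemble[:, 0]
print(f"x at t = 20: mean {x.mean():+.2f}, ensemble spread (std) {x.std():.2f}")
# Tiny initial differences produce a broad distribution; the honest forecast is
# the density rho(X, t), not any single trajectory.
```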
Chief
I need your brain here.
Randomly doodling about with that horribly addictive WoodforTrees thing, I plotted detrended CO2 levels by parts based on ENSO half years, and found correlation between temperature and CO2 levels.
Temperature trends led CO2 trends ever so slightly and with fair correlation when CO2 was detrended to obtain best fit to the temperature line.
Please disabuse me of the illusion that I’ve stumbled on anything other than an optical illusion and confirmation bias.
Or at least assure me what I’m saying is meaningless mumbo jumbo.
Thanks
I preferred you with a moustache and a cigar. ‘A child of five would understand this. Send someone to fetch a child of five.’
Anthropogenic carbon fluxes are what – 3% of the total. Natural fluxes are biological in origin primarily and these increase with higher temperature. There is a biological law with a name and a formula and everything. I just forget what it is right now. It was in the great thick book I had to buy for ecol901 – which I actually read and it stuck somewhat. Amazing. I’ve still got the book at home.
So I’m not surprised that temperature led CO2 – if what you say is true. I would hesitate to conclude that 9 billion metric tonnes of anthropogenic CO2 wasn’t in the mix somewhere.
Hope that sets your mind at rest. Take 2 aspirin and see me in the morning.
Cheers
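For what it is worth, the lead-lag check Bart describes can be made explicit. This sketch uses synthetic monthly series with a 6-month lag deliberately built in; real data would come from Wood for Trees exports, and nothing here settles causality either way:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 600                                            # 50 years of synthetic monthly data

temp = np.cumsum(rng.normal(0.0, 0.05, n))         # stand-in temperature anomaly
co2 = np.roll(temp, 6) + rng.normal(0.0, 0.02, n)  # built so CO2 lags temp by 6 months
co2[:6] = co2[6]                                   # patch the wrap-around from np.roll

def lag_corr(a, b, k):
    """Correlation of a(t) with b(t+k); positive k means b lags a by k steps."""
    if k < 0:
        return lag_corr(b, a, -k)
    return np.corrcoef(a[:n - k], b[k:])[0, 1]

best = max(range(-24, 25), key=lambda k: lag_corr(temp, co2, k))
print(f"best-fit lag: {best} months (positive = temperature leads CO2)")
```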
Tim Palmer – head of the European climate computing centre – is of the opinion that these types of estimates are not theoretically justifiable at all. The best we can hope for are estimates as probability density functions.
######
I would agree with Dr. P. He is also a pleasant fellow. The difficulty is that to make sense of the system metric for a lay audience you almost always have to resort to a linear-type system, just to show the basics: that sensitivity isn't assumed, that you can estimate it, that it's different from the instantaneous response, the transient response, etc.
So folks who want to discuss the Palmer issue first have to struggle with people who don't even get the basics. Those are usually definitional fights, or fights over "we know nothing", rather than the REAL debate, which is where Palmer lives.
I've told people before: if you want good arguments against models, read or watch Palmer. But these arguments are not anti-model nonsense; they are real, practical, state-of-the-science debates. Basically the best arguments about the models are WITHIN the science, not outside it.
In short, the "models suck" meme is a cheap, shabby, weak attack.
I note that you don't make that argument.
It is not that the models suck; it is that the models are not able to predict the future, something the people who believe in them just cannot seem to grasp. It is tremendously sad really, but not nearly as sad as the harm they are inflicting on mankind.
I also quoted Professor Ole Humlum on appropriate use of models as tools to explore the physics of the system and the inappropriate use as predictions of future climate. At the latter – they do suck. Or at least – there can be no confidence placed in the outcomes for the reasons I cited earlier.
Can I assume by your approval of Tim Palmer that you have some grounding in dynamical complexity in climate? It is implied in the use of the term phase state of a finite volume dV – and of course elsewhere in Palmer’s work.
As McWilliams says – it places limits on what is theoretically knowable and these limits are well understood on this site. Apart from by Fred I might note.
The essential policy problem of anthropogenic carbon emissions is otherwise. As is known here – my native prudence suggests limiting the great atmospheric experiment for which we have not the wit to know the outcome. On the other hand, the planet is likely cooling for the next decade or 3 at least – as keeps popping up in the peer reviewed literature.
There is a quandary here – and a solution. I highly recommend having a quick look at the multiple paths and multiple objectives in the recent Hartwell thread.
Cheers
If I completely overlook the obvious, I know I’ll always be able to turn to Chief.
The model of treating ENSO like a reset on an overclocked system is visually compelling but so overwhelmingly complex with the tools I have at hand – it appears to say that with each La Nina or El Nino, rising CO2 levels remain a driver of sea surface temperature rise, but slightly less so with each ENSO, while SST remains a driver of CO2 levels at a pretty persistent level.
While you +/-AGW-types expect some logarithmic relationship between CO2 rise and SST (that’s how I stumbled on this, trying to detrend by parts to simulate the log trend, but the ENSO’s were such powerful attractors it seemed worth pursuing), this appears to be a more than logarithmic trend, so may reflect feedbacks or other negative ENSO-tied step trends.
Do successive ENSO’s increase particulates more? Decrease humidity? Storm intensity rise? Probably that last one.
If we have another longish ENSO-less period, will SST’s shoot up again?
Can we still have ENSO-less periods?
Then again, too imprecise a tool, not worth chasing down. This is likely all only artifact of the graph.
Steven, you left out that little step called wind tunnel, real world validation of where the model was screwed up — something we can’t do with climate.
The wind tunnel won't help you very much in flight control design, especially with handling qualities and a whole host of other interesting things like high-angle-of-attack handling, etc. But then you knew that. The point is what I said: models provide evidence. It's not physical evidence. And to let you know, a wind tunnel IS A MODEL. So you basically missed the point again. A wind tunnel and the model in it give you an idealized glimpse of what a real plane will do; it's not the real thing. So to your point: you argued that models don't give evidence.
In fact they do. Computer models do, wind tunnel models do.
That evidence is used to guide decisions. It's not the best evidence, but it's evidence. Here: I have a stupid model that says a human can walk 4 miles an hour, on average. Let's build a model of how far a human can walk: D = 4*t. Really simple model. Note it doesn't have terrain in it, so it would give the wrong answer for uphill or downhill walking. Now, using that model, can you answer the question: can I walk across the United States in an hour? Of course; the model, crude as it is, can give you evidence about a claim. Is that evidence physical?
Nope. Do you trust what the model says? Do you think terrain could make a difference? Simply, you made an imprecise statement about what models can and cannot do. Sharpen your language and move on.
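The walking model above, verbatim, as code. Trivial by design; that is the point (the coast-to-coast figure is a rough number for illustration only):

```python
def distance_walked(hours, mph=4.0):
    """Crude model: D = 4 * t. No terrain, no rest stops; assumptions all the way down."""
    return mph * hours

US_WIDTH_MILES = 2800   # rough coast-to-coast distance, for illustration only

miles = distance_walked(1)
verdict = "plausible" if miles >= US_WIDTH_MILES else "no"
print(f"model: {miles:.0f} miles in an hour; the claim needs {US_WIDTH_MILES}. Verdict: {verdict}")
```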
Now you're groping. There's a world of difference between a computer model and a wind tunnel. The wind tunnel can be made as accurate as one wishes through use of dynamic similarity. The computer model is still just an approximation of unknown accuracy, depending on exactly what you're trying to get out of it. Like Lucia said, the computer model does a pretty good job on the airplane flying forward and level. It does a much worse job on an airplane flying sideways. The wind tunnel doesn't care. It's a scalable analog, if you know how to do the scaling.
steven mosher, I agree with the other posts that your equating physical lab models with computer simulation models is at best defensive semantic hand waving and at worst flat wrong and misleading. Try to convince test pilots of that. Designers still breadboard electronic circuits despite SPICE runs, whose physics and mathematics are magnitudes more accurate and certain than GCMs'.
Nonetheless, I do agree with the thrust of your post. GCMs are indeed helpful evidence, and I didn't mean to imply otherwise. What they are not is ironclad, irrefutable, near-iconic evidence, as is pushed by many in climate science circles.
I’ve deduced that “steven mosher” and “steven” are different folk; is this correct?
Mosher,
The fundamental assumption you make is wrong. The model of a plane and its flight characteristics is an engineering model which has undergone engineering audit and extremely precise, repeatable levels of testing. And it has withstood practical tests, from wind tunnel tests to actual flight tests, and performs as predicted. And the technology and models constantly evolve and develop, based on one rigid criterion: sound testable hypotheses backed by empirical performance. And the airline and aircraft industry are willing to admit mistakes, admit they were wrong when they went wrong, take corrective steps, improve their product and deliver results.
Please don't even compare climate models with the aircraft industry, as it is a joke. The climate models are not even kids' toys compared to aircraft industry models, engineering-level audits and performance.
The aircraft industry knows what it is doing. It is honest and follows the scientific method.
An aircraft built with science at the level displayed by your models wouldn't even take off.
What I get from this argument so far is:
The climate sensitivity in models is determined by a series of equations based on laws, theories and hypotheses. This leads to the equation:
Law x Theory x Hypothesis = Hypothesis
The models are not adjusted to change the climate sensitivity, rather they are adjusted to explain/eliminate differences between the hypothetical values derived and the observations. The typical inputs adjusted to explain these differences are things not well understood, such as aerosols, since these allow a wide range of adjustments while still remaining within their wide range of derived values.
These adjustments make perfect sense if the climate sensitivity as derived by the model is accurate. If that value is not accurate, these adjustments make an inaccurate value appear accurate. Am I keeping up so far?
“The models are not adjusted to change the climate sensitivity, rather they are adjusted to explain/eliminate differences between the hypothetical values derived and the observations.”
Steven – I think that’s correct to an extent, but within limits, with aerosol forcing the best example. Models generate their own individual climate sensitivity values. Before being forced with a changing condition (e.g., rising CO2), they must be calibrated to the starting climate, which involves some tuning of model parameters so that the model will reproduce the existing climate. The model is then run with the imposed forcing, and whatever the result, the modeller must live with it – he or she cannot go back, change the parameters, rerun the model, and then publish only the “corrected” results. If modellers could do that, their data would match real world observations better than they do.
On the other hand, temperature responses to forcing require knowing both the climate sensitivity parameter (an unchanged attribute of individual models) and the forcing, which the parameter translates into a temperature change. Some forcings are known to reasonable accuracy (e.g., the 3.7 W/m^2 forcing from doubled CO2 – estimates of this value have changed over the years, but as the comparisons between observations and the estimates have multiplied, it seems unlikely future revisions will result in dramatic further changes). Others, however, are less well known, and so disparities between model outputs and observations legitimately raise questions as to whether the forcings utilized in the models were accurate.

Aerosol cooling has been one of the more vexing challenges in this regard. If the forcing attributed to it is wrong, the model output will be wrong, and so modellers have tested a variety of forcings to determine which yields the most accurate model output, and have reported both the better and poorer matches. What they have also done, however, is attempt to use observational data on aerosol optical depth, as well as model-based calculations of indirect aerosol effects on cloud formation and persistence, to ensure that their applied forcings are not wildly inconsistent with reality.

We know from extensive data on solar transmittance to the surface during the 1950s through 1980s that aerosols exert substantial cooling effects, and so a particular choice of forcings does not represent the invention of a fictional cooling effect to make things "come out right" but rather an effort to assign a quantitative value to a real cooling. Considerable uncertainty is involved. On the other hand, the magnitude of the observed "dimming" appears sufficient to exclude a null or warming role for aerosols. This in turn will tend to exclude very low values for climate sensitivity to CO2, although it leaves a wide range of values as remaining possibilities, including the 2 – 4.5 C canonical range estimated from a large multiplicity of studies – a range that has changed little in recent years despite occasional papers reporting values outside it in either direction.
If we can acquire a more accurate picture of aerosol forcing, the range of compatible climate sensitivities will be reduced. Unfortunately, the recent loss of the Glory satellite will delay that objective.
Fred, I don’t really see where you are disagreeing with me regarding aerosols. The fact is the best we know leaves a lot of room for adjustments. It was a shame about Glory.
Yes, in fact if you read my other comments to people concerned about tuning, I refer them to chapter 9 of AR4 (the supplemental information, if I recall; it's been a couple of years) to look at how aerosol tuning is done. The aerosol issue is one of the reasons lukewarmers exist.
Law x Theory x Hypothesis = Hypothesis
############
No. All laws are themselves contingent hypotheses. Every last bit of science is contingent: not true by definition, not mathematically true; contingent. It all could be wrong. 2+2=4 is a mathematical truth; you cannot imagine it could be otherwise. F=MA is not a mathematical truth. It is taken as true to DO THINGS. It is useful for some things and less useful for others. I can imagine a world where F does not equal MA. To be sure, we are always taking certain parts of science and holding them to be more certain than others. That is, we have certain statements in science which we choose to accept in order to see what follows from them. If I choose to accept that F=MA, then I can make a prediction: take a mass, accelerate it, then the force should be MA, plus or minus some error. After a while, people stop thinking of F=MA as a contingent truth. They call it a law. That means one thing: they won't waste time trying to disprove it. But it's still contingent, and someday somebody might suggest that it's not really true and revamp a bunch of mechanics.
“The models are not adjusted to change the climate sensitivity, rather they are adjusted to explain/eliminate differences between the hypothetical values derived and the observations. The typical inputs adjusted to explain these differences are things not well understood, such as aerosols, since these allow a wide range of adjustments while still remaining within their wide range of derived values.”
Read chapter 9 of AR4 for a better explanation of tuning aerosols for attribution studies.
“These adjustments make perfect sense if the climate sensitivity as derived by the model is accurate. If that value is not accurate, these adjustments make an inaccurate value appear accurate. Am I keeping up so far?”
Not really. The climate sensitivity exhibited by the models is from 2.1 to 4.4.
Other estimates from observations and paleo are much broader, say from 1.5 to 6 or 10. One could (Hansen does) argue that the models are conservative.
Here's an example: Hansen's ModelE, his GCM, estimates the sensitivity at 2.7 C; Hansen's paleo work estimates it at 3 C. Hansen thinks the paleo work is more solid than the model estimate.
Put another way: if you criticize the models for having too high a sensitivity, you still have the problem that the other, stronger forms of evidence suggest higher values. You're attacking the weakest evidence. Bad debating approach.
The determination of law vs theory vs hypothesis designates a level of confidence in their output. In this way they are definitely different.
I posted a very good reference on aerosol modeling earlier. I think if you examine it you will find it more complete than AR4.
It depends on what the purpose of the debate is. If it is to show that the best that can be done is only as good as the weakest link then the weakest link is what should be pointed out. If there is some other purpose to the debate then it would depend on the purpose as to what the best debating technique is.
As far as climate sensitivity goes, I see no reason to assume it is higher than current measurements indicate, as per "why hasn't earth warmed as much as expected". Of course I didn't mention climate sensitivity in my question, so this was an answer to a question left unasked, but I am perfectly willing to share my views on this topic also.
I should say I didn’t mention climate sensitivity in terms of declaring it too high or too low as modeled.
The problem is you cannot use current observations to estimate sensitivity. Why? It's very simple.
Sensitivity is defined as delta T AFTER sufficient time has passed for the system to respond to the forcing.
Again, with my car experiment: you want to know the DELTA mph of applying full throttle. You stomp on the accelerator. INITIALLY, for a nanosecond, the car doesn't move. You don't measure delta mph then.
The car speeds up. You can measure, but you don't see the FULL EFFECT in a short time period. Then the car hits top speed, 100 mph.
And it stays there. That's your steady-state response.
So current observations don't really help you that much; you have to let the full effect play out. MORE IMPORTANTLY, in current observations OTHER VARIABLES are not constant.
BY DEFINITION sensitivity to doubling means everything else stays the same. So, that is why you cannot SIMPLY use observations. You need lots of assumptions (uncertainty) to reduce the observations to a form that can be used for sensitivity questions.
steven,
With all due respect that is an example of circular reasoning.
No model, however sophisticated, can predict climate change, for a number of very simple reasons:
-CO2 is only a very minor factor of forcing.
-Solar activity oscillations leave an imprint but not commensurate with major changes such as the MWP, LIA, 1950-80 decline, 1980-2000 rise.
– Natural forcing is not supported by cyclical regularity; AMO, PDO, ENSO etc. have no reliable periodicity.
We just have to accept the fact that long-term climate changes are not predictable!
So, uh, you wouldn’t expect another glacial period in tune with the Milankovic cycles?
No, it is not what I meant. There are certain natural causes affecting climate to which science is unable to assign ‘timeline of future occurrence’, hence no viable predictability.
Milankovic cycles, like the other well-known orbital parameters, are forward-calculable.
They certainly are predictable. The question is HOW ACCURATE and how useful the prediction is.
I predict that by 2100 the average temp will be between 0 C and 28 C.
Right now it's 14 C. I just made a prediction; therefore it is possible to make one.
The question is how accurate and useful that prediction is.
That question is not answered or addressed by anything you have ever said.
Mr. Mosher
-Since it happens to be a science discussion, I referred to the possibility of a prediction based on scientifically recognised facts (not the crystal-ball kind), one which is likely to materialise.
That question is not answered or addressed by anything you have ever said.
-I am flattered that you keep a record and review of ‘everything I ever said’, but of course you are entitled to your views and opinions, however wrong they may be.
Crude methods of indoctrination work with many, subtle ones with more, but a free-thinking mind is impermeable to either.
I wonder how the models of 10 years ago did on estimates of climate sensitivity.
We have a lot of data on actual forcings over the last 10 years.
The amount of black carbon.
The amount of TSI.
The amount of methane.
The increase of CO2.
etc.
Can we take all the measured forcings over the last 10 years and plug them into the 22 climate models (as they existed 10 years ago, not as of today), let them run out 10 years (and to equilibrium), and see what the sensitivity is?
I know we have only increased a fraction of a doubling of CO2 in the last 10 years – but it seems we should be able to evaluate the models of 10 years ago doing this – is that correct? Or has it already been done?
Would that provide any useful information or check on the models? Would this provide any insight into the projected climate sensitivity?
10 years would not get you a steady-state response. Putting in actual forcings for the past 10 years would not give you the answer to the question.
The question is: if you double CO2 and hold all other variables at their present values, what is the DELTA TEMPERATURE you see when enough time has passed that the model reaches stability again?
Run the model for a thousand years until T doesn't change. Then double CO2, and only CO2. Run the model for another 1000 years. Observe DELTA T.
The change in T due to the doubling of CO2 is the "sensitivity".
Put in 10 years of real observations of forcings, some going up, some going down, and run the model for 10 years? You get an answer, but that's a TRANSIENT state.
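That protocol maps directly onto a zero-dimensional energy balance model. A minimal sketch: the feedback parameter and heat capacity below are assumed round numbers, not values from any GCM, and note that the sensitivity comes out as an output, not an input:

```python
LAMBDA = 1.23     # net feedback parameter, W/m^2/K; assumed so 3.7/1.23 ~ 3 C
C_HEAT = 8.4e8    # effective heat capacity, J/m^2 (~200 m ocean mixed layer)
F_2X = 3.7        # forcing from doubled CO2, W/m^2
DT = 365.25 * 86400.0   # one-year time step, in seconds

def run_to_equilibrium(forcing, t_anom=0.0, years=1000):
    """Integrate C dT/dt = F - lambda * T until T stops changing."""
    for _ in range(years):
        t_anom += DT * (forcing - LAMBDA * t_anom) / C_HEAT
    return t_anom

t_control = run_to_equilibrium(0.0)    # spin-up: no added forcing
t_doubled = run_to_equilibrium(F_2X)   # double CO2, and ONLY CO2
print(f"equilibrium sensitivity (an output, not an input): {t_doubled - t_control:.2f} C")
```

Stop the doubled run after 10 of its 1000 years and you read off a transient value well short of the equilibrium DELTA T, which is the whole point about short windows.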
steven,
But neither CO2 nor any other variable exists in a non-dynamic state.
That leaves the results with very little, if any, value.
On the contrary. Of course CO2 is "dynamic". In a controlled experiment I tell you that flooring your car will lead to a top speed of 100 mph.
You complain that real accelerators vibrate and it will vary between 99% and 100% floored. You complain that sometimes the wind is in your face, that the road might be bumpy. All true.
The question is this: is it wise to floor the car when the speed limit is 25? Well, no. You can't count on unknown forces to slow you down.
So, you cannot say the result CATEGORICALLY has little value without first understanding the kind of decision being made.
This type of diagnostic exercise is absolutely science at its best.
Yes – that’s the sensitivity to CO2 – only.
But it’s not the “climate sensitivity” which is the quantity in question. And for that, one needs more than just variations in CO2. You know the list – ALL those forcings and feedbacks that have been ignored, neglected or just blown off as insignificant. But some of them may be proving to be not as insignificant as has been previously assumed.
Do you remember the breakdown on that word? ASS-U-ME. I learned that while designing nuclear power plants – it was reinforced by the Green Machine – and then hammered home by a group of atmospheric scientists and a really hardcore spacecraft systems engineer. Some of today’s “climatologists” would do well to learn it.
But it’s not the “climate sensitivity” which is the quantity in question.
#####
That is ALL we have been talking about since my first post, which was a correction to the huge error in the article that claimed scientists INPUT a sensitivity. They DON'T. Any sensitivity is an output METRIC.
You should ALSO NOTE that I said the real uncertainty was two things:
1. FORCINGS ASSUMED
2. MODEL COMPLETENESS.
To repeat: DELTA T (sensitivity) is not assumed. It is not an input to the models; it is an output METRIC. It is dependent, as ALL OUTPUT IS, on certain assumptions. All physics, all experiments, all observation is RIFE with assumptions. Assumptions come at every level and in many forms, from assumptions about physical quantities to assumptions about laws of physics holding. But DELTA temperature is just that: temperature 1 minus temperature 2. Not an input, but an output metric. Not assumed, but rather the result of assumptions.
Let me put it a different way. If Hansen ran his GCM and the result was 14 C, and then he doubled CO2 and the result was 15 C, very few people on this thread would have ANY issue saying the sensitivity was 1 C per doubling. But they think that since he says it's 3 C, there must be something wrong with this simple concept.
Put ANOTHER WAY: Roy Spencer has calculated rather low sensitivities. You do not see skeptics attacking the notion of sensitivity when he uses it. Why? Because they like the magnitude of the answer, plain and simple.
steven,
No, I like Spencer’s estimate because it does not invoke apocalypse and calamity, which means it is much more likely to be true.
It is the likelihood of it being useful that is attractive.
The question you raise is why so many believers glom onto the historically rare apocalypse?
hunter,
‘…I like Spencer’s estimate because it does not invoke apocalypse and calamity, which means it is much more likely to be true.’
That's a pretty amazing assumption about the relationship between those two ideas. Are there any physical reasons why we should believe that the level of ‘calamity’ involved in a particular person's interpretation of any situation correlates with the likelihood of that interpretation being true? It would seem like a rather novel finding, to say the least.
Or is that just a convenient conclusion to draw in this context?
Pielke the Elder produced a paper a few years ago tying La Nina events to family outbreaks of tornadoes, while many others have made more conservative connections between those events. I guess Roger’s conclusion is more likely to be wrong then, huh?
maxwell,
To answer your question, “Are there any physical reasons why we should believe that the level of ‘calamity’ involved in particular person’s interpretation of any situation correlates with the likelihood of that interpretation being true?” The answer would be ‘Yes.”
The Earth is big, robust and has been around a long, long time. If it were prone to apocalyptic calamities, we would not be here now. This is the dog not barking.
Additionally, history is full of apocalyptic predictions of global destruction due to man’s wickedness.
The track record for them is 100% wrong.
These tornados, as tragic as they are locally, are not apocalyptic in scope. They are examples of how historically dangerous weather actually is.
It is the paranoid/religious/magical-thinking perspective that sees events like this as some sort of proof of punishment for some evil, whether it be sin or CO2.
hunter,
‘The Earth is big, robust and has been around a long, long time.’
So because the earth, as a rock floating through space, has been around a long time, we should believe people’s less calamitous interpretations of physical reality concerning our own behavior? Even with respect to our lives, which are significantly more fragile than the just the physical persistence of the earth?
Moreover,
‘If it (the earth) was prone to apocalyptic calamities, we would not be here now.’
is not right. We are here precisely IN SPITE of apocalyptic calamities. There have been substantial extinction events in which fossil records seem to show large portions of the entire animal population were wiped out, never to be seen again. It's actually a recurring theme on earth, in fact.
All have been ‘natural’ calamities in the sense that they have not been caused by humans. But they have been calamities nonetheless.
I like this one though,
‘Additionally, history is full of apocalyptic predictions of global destruction due to man’s wickedness. The track record for them is 100% wrong.’
Again, this conclusion in the context of global warming necessitates a real connection between those past predictions and any current predictions about humans’ behavior, ie burning stuff. You are merely assuming that such a connection exists, despite the fact that basically none of the predictions to which I think you are referring are based on modern physical science. The vast majority of the predictions for worldwide collapse I am aware at least are based on religious belief. That is, little more than superstition and often as a way for power to be gathered by a smaller and smaller group of people.
So you are comparing predictions based on power-grabbing and superstition to predictions made using the same knowledge that makes your computer work. That doesn’t seem like a robust manner of making this connections.
I agree with your general skepticism toward many of the more extreme predictions with respect to the climate response to an ever increasing greenhouse effect. But the answer to those extreme predictions, whose foundations are very much debatable, is not to make equally undefended statements that are cast as truths despite the fact that for which there is little to no evidence to support.
If the whole idea of this post is to hold practitioners of science, both professional and amateur, to higher standards when expressing concerns over the ‘threat of climate change’, then again, we (skeptics) ought to be held to a higher standard as well.
So far, much of this thread is failing on that front.
maxwell,
You make some good points; however, I would point out:
We are here because of everything that has happened in the past: the apparent collision of the Mars-sized planetoid that led to Luna, tides and probably active plate tectonics, etc.
But we are talking about the history of Earth, and over its multi-billion-year history, most epochs have been mostly boring. It has been ~60+ million years since the Yucatan-area strike apparently ended the dinosaurs and let small furry critters have a go at it, for example.
We are here as a result of these events, and that is a significant difference from ‘in spite of’.
The era of H. sapiens has been notable in its lack of global apocalypse, and I think I will continue to place my marker on the bet that this continues for the foreseeable future.
As to quality of predictions, I would point out that each era that makes them finds them made by the elites of that day. Noah and the other flood stories were not the stories of the ignorant, but of the ruling classes.
The apocalyptic literature of 2000 years ago was written by literate, educated people.
I see little difference between AGW, Lysenkoism, eugenics and the older failed apocalyptic stories. They have more in common than they have in difference.
I think this thread is doing just fine.
hunter,
I have little doubt that you think there is a great similarity between extreme predictions of climate catastrophe and Noah’s Ark. I’m simply pointing out that there is no rigorous evidence that such a comparison is worthwhile or informative as a predictive measure.
That is, your point is that because past predictions of catastrophe have been wrong, for you and yours at least, future predictions of similar catastrophe are also wrong. Yet the only connection between these predictions is the, likely highly disputable, fact that ‘intellectuals’ wrote the predictions down. Even if we assume this connection to be true, it’s still a very poor standard for correlation.
On the flip side, why would intellectuals’ predictions, like those of Dr. Spencer, be correct? What is the operational difference between an ‘intellectual’ who makes a catastrophic prediction and the ‘intellectual’ who does not?
You have provided nothing in terms of evidence that shows any distinctive difference in reasoning between those two scenarios. You simply assume that your conclusion is true, then present it as truth. It’s exactly the same logic employed by ‘alarmists’ who are willing to believe their own chosen predictions.
In a thread about the scientific method, such a methodology just does not cut it.
Moreover, how is eugenics an example of a catastrophic prediction?
I am not sure whether this thread is supposed to be about climate sensitivity, but the fact is that we have absolutely no idea what the numeric value of climate sensitivity is. We cannot estimate no-feedback climate sensitivity (delta T = delta F / lambda is nonsense). When Fred talks about climate sensitivity as if it has some sort of meaning in physics, I want to shout, “Where was no-feedback climate sensitivity ever measured?” Since the answer is never, climate sensitivity is a meaningless, hypothetical concept.
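As context for the formula being disputed, here is a back-of-envelope sketch of the no-feedback calculation, approximating lambda by the blackbody Planck response. The 255 K effective emission temperature and the 3.7 W/m^2 doubling forcing are standard textbook values, not measurements; this is an illustration, not a defense of the concept.

# No-feedback sensitivity sketch: delta T = delta F / lambda,
# with lambda approximated by the blackbody Planck response 4*sigma*T^3.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
T_EFF = 255.0     # Earth's effective emission temperature, K (textbook value)
DF_2X = 3.7       # canonical radiative forcing for doubled CO2, W/m^2

lam = 4 * SIGMA * T_EFF ** 3   # ~3.76 W/m^2/K
print(DF_2X / lam)             # ~0.98 K of no-feedback warming

Whether that idealized number corresponds to anything measurable is, of course, exactly what is being disputed here.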
we certainly do have an idea of what the numeric value is.
If it was too low, you’d never get out of the snowball earth.
Since we had a snowball earth and since we got out of it we can in fact use that fact to help bound the problem
http://astroblogger.blogspot.com/2010/02/snowballs-snowjobs-and-lambert-monckton.html
For a list of papers and the estimates you can view this page
http://www.skepticalscience.com/climate-sensitivity-intermediate.htm
here is a nice one that uses ice core data
http://www.agu.org/pubs/crossref/2008/2007GL032759.shtml
1.3 to 2.6
this uses observations to estimate.
http://www.sciencemag.org/content/295/5552/113.abstract
so saying we have no idea what it is is wrong. the range of estimates is large, but look, I’m pretty certain you are between 3 feet and 8 feet tall.
That’s useful knowledge. not perfect knowledge, not useful for every decision, but useful for some.
The question is: what decisions do we have to make if the figure is low versus what decisions do we have to make if it is high? that’s a practical question
Steve Mosher says: “Since we had a snowball earth and since we got out of it we can in fact use that fact to help bound the problem”.
A number of Canadian geologists do not agree that the deep-time “snowball earth” has been established. They observe that the deep-time geology interpreted as evidence of glaciation can be more plausibly interpreted as turbidites.
Cites please?
Look for articles by Nick Eyles of the University of Toronto or by E Arnaud of U of Guelph. I don’t recall which ones are most apt for your purposes offhand.
I note in passing that Richard Alley, familiar from climate change debate, has been a very strong advocate of snowball earth.
“I note in passing that Richard Alley, familiar from climate change debate, has been a very strong advocate of snowball earth.”
I note in passing the subtle insinuation of a relation of cause and effect. Note also the term “advocate”, by contrast with those (the two Canadian geologists) who “observe” and “plausibly interpret”.
Another question I have is related to the 0.8C of warming we have had since 1850.
Is there a journal article which tries to estimate what portion of this is UHI, what portion is black carbon, what portion is methane, what portion is CO2, etc.?
When I read Steven Mosher’s explanation of determining climate sensitivity I get a little confused about holding all other forcings except CO2 constant.
What use is determining a climate sensitivity number for a doubling of CO2, assuming all other forcings stay constant, when in reality all of the other forcings will change over whatever period of time it takes to double CO2?
Wouldn’t it be more useful to provide estimates of each forcing for the next 30 years (or 10 or 50 or whatever interval is desired), and then look at what the climate models say the increase in temperature will be for 30 years?
At a minimum, if the estimates of the forcings were correct, and after 10 years the climate model results, compared to actual observations of temperature, turned out to be dead on – or really off – either way this would be useful information.
Maybe this makes no sense – but this was the thinking which led to my last question about what would happen if we plugged the known forcings over the last 10 years into the existing climate models of 10 years ago.
“When I read Steven Mosher’s explanation of determining climate sensitivity I get a little confused about holding all other forcings except CO2 constant”
the doubling experiments are designed to isolate the effect of CO2, to provide diagnostics of system performance.
For forecasting, you build a dataset of all forcings. That’s a projection based on an emission scenario. that’s different than the doubling experiments.
Take a look at the design of experiments for Ar4 or Ar5
DOUBLING experiments are DIAGNOSTIC. what happens if we slam the pedal down. It’s idealized. it simulates a controlled experiment. It’s the kind of basic test you do with ANY model. They could also do a doubling-TSI experiment, but that would be boring.
Steven,
In the UNEP page I linked to earlier – CO2 was increased at 1%/year – and stabilisation shown at double and quadruple CO2. It is done this way so that feedbacks in clouds and water vapour can be included in the output.
Cheers
Thanks for that – I neglected to mention all the various experiments.
Steven,
You answered a similar question I was going to ask about holding all other forcings constant while doubling CO2, and you also said…
“Take a look at the design of experiments for Ar4 or Ar5”
Can you point me in the right direction with a link…. you seem to have a lot at your fingertips and I would be very interested in seeing the DOE(s) employed.
GIYF
Ar5 experimental design GCM
or start here
http://cmip-pcmdi.llnl.gov/cmip5/
http://cmip-pcmdi.llnl.gov/cmip5/experiment_design.html?submenuheader=1
best
http://cmip-pcmdi.llnl.gov/cmip5/docs/Taylor_CMIP5_design.pdf
there are many experiments, including the core diagnostic runs:
“For the diagnostic core experiments (in the lower hemisphere), there are the calibration-type runs with 1% per year CO2 increase to diagnose transient climate response (TCR), an abrupt 4XCO2 increase experiment to diagnose equilibrium climate sensitivity and to estimate both the forcing and some of the important feedbacks, and there are fixed SST experiments to refine the estimates of forcing and help interpret differences in model response.”
But read the whole document.
look at modelE results for a variety of experiments.
here’s a small example
http://data.giss.nasa.gov/efficacy/#table1
If you need help in figuring it out, I gave instructions some time ago on Climate Audit. GIYF: steven mosher modelE climateaudit.org
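A quick sanity check on the 1%-per-year design quoted above: compounding at 1% per year reaches a doubling near year 70, which is when the TCR is read off. A one-liner, assuming nothing beyond the stated growth rate:

import math
# CO2 doubles when 1.01**n = 2, i.e. n = ln(2)/ln(1.01)
print(math.log(2) / math.log(1.01))   # ~69.7 years, so TCR is diagnosed near year 70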
Steven,
Thanks for the links… will keep me busy awhile…
Steven, yes I am aware of the concepts of transient and equilibrium sensitivities and I understand their meaning. When I see an attribution that doesn’t attribute all the recent warming to the transient response, and instead attributes some portion of it to forcings from earlier in the century, then I will take the argument seriously. Until such time as I see that happen I can only assume one of three things:
1. The effect is so small the warming of the past makes no difference to the present, and therefore the warming of the present will make no difference to the future.
2. This is a poorly thought-out hypothesis intended to explain why the amount of warming disagrees with that expected.
3. The world was created 30 years ago.
You can look at Lucia’s lumped parameter model to get a sense of that. It’s really quite simple. If you didn’t follow that discussion some time ago, then GIYF.
Steven, it may seem quite simple to you; it is not so simple to me. If you look at the summary for policy makers from AR4 you will see the models using only natural forcings going down at about 1950, correlating closely with solar cycle strength. This would indicate a short lag, with that portion of the forcing having already achieved equilibrium. To make matters even worse, they claim this equilibrium was achieved during the same time period they are claiming aerosols were cooling the earth. Now you can make assumptions about aerosols and you can make assumptions about lags, and I don’t have the time, knowledge or inclination to learn enough to argue the technical details with you. But what I can argue is that when you use contradicting arguments you need to be able to explain why. So my question is: why doesn’t natural forcing have a lag equal to CO2 lags, and why don’t aerosols prevent natural forcings from achieving equilibrium?
The meaningfulness of “climate sensitivity” has been questioned here, which is legitimate, because it is also questioned within the climatology arena. It is not the only metric available to assess the temperature response to a change in CO2; for example, some observers prefer to evaluate the “transient climate response” involving a 1 percent annual CO2 increase.
Furthermore, it is generally not feasible to perform the appropriate climate sensitivity experiment – instantaneously double CO2, make sure nothing else changes for the next one thousand years (keep the sun constant, for example), and then measure how much global temperature has risen. Nevertheless, the concept of climate sensitivity is one useful metric among others for evaluating climate behavior.
One reason is that it is a convenient means of permitting different groups to compare notes on important climate variables. If we instantaneously double CO2, the calculated forcing is about 3.7 W/m^2. This means that if there was previously little or no flux imbalance at the tropopause, an imbalance of 3.7 W/m^2 will now exist (after a brief interval for the stratosphere to adjust), and the climate response can be estimated. This is important because, by definition, a forcing describes the imbalance that is created before anything else has had a chance to change. In particular, it is the imbalance that exists before surface temperature changes, and since the feedbacks critical to the full climate response to a perturbation are responses to temperature change (e.g., water vapor, clouds, ice, etc.), we can ask how temperature and feedbacks will evolve starting from a 3.7 W/m^2 imbalance. Note that a gradual doubling of CO2 would also generate a 3.7 W/m^2 forcing, but because the climate will already be responding while that is happening, the imbalance will be undergoing reduction throughout and will be less than 3.7 W/m^2, and perhaps difficult to estimate.
If different groups all start with a standardized 3.7 W/m^2 imbalance, they can attempt to model climate responses, including the pace of temperature and feedback evolution. This can then be scaled to the magnitude of imbalances that are actually likely to be observed within reasonable timeframes from a variety of forcings (not necessarily restricted to CO2), thereby allowing the modeled estimates to be compared with observations. This will be particularly useful for assessing such phenomena as ocean heat uptake and the differences in timing among different feedbacks. In that sense, climate sensitivity is a metric of convenience. It is not indispensable, but it provides a useful tool for assessing climate responses and describing them in a way that is easy to visualize, even if impossible to measure in its own right.
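For readers wondering where the 3.7 W/m^2 figure comes from: it is commonly traced to the simplified expression of Myhre et al. (1998), itself a fit to detailed radiative-transfer calculations rather than a first-principles law. A minimal sketch of that expression:

import math

def co2_forcing(c, c0):
    # Simplified CO2 forcing fit (Myhre et al. 1998), W/m^2
    return 5.35 * math.log(c / c0)

print(co2_forcing(560.0, 280.0))   # doubling: 5.35 * ln(2) ~ 3.71 W/m^2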
Fred Moolten writes: “If we instantaneously double CO2, the calculated forcing is about 3.7 W/m^2”
First a minor correction. The word should be “estimated”, not “calculated”. There is no way to either calculate or measure the change in radiative forcing.
But it never ceases to amaze me the way people like Fred treat these hypothetical numbers used in estimating climate sensitivity as if they actually mean something. They are completely meaningless. They never have been measured, and they never will be measured. We have no idea what their numeric value is, and we never will know.
This is like believing that if you repeat a lie often enough, people will believe it. We are never going to be able to estimate climate sensitivity. In the end, we will get data as to how much global temperatures rise as a result of adding CO2 to the atmosphere. Until then, any numbers quoted are not worth the paper they are written on.
Jim Cripwell, I’m right on board with the thrust of your post, but, if I may, you go a bit overboard IMHO. As I have repeatedly said, the forcing calculations have tremendous looseness and ignored variability, but they are not useless or meaningless.
‘In particular, the global mean temperature change which occurs at the time of CO2 doubling for the specific case of a 1%/yr increase of CO2 is termed the “transient climate response” (TCR) of the system.’
‘The “equilibrium climate sensitivity” (IPCC 1990, 1996) is defined as the change in global mean temperature, T2x, that results when the climate system, or a climate model, attains a new equilibrium with the forcing change F2x resulting from a doubling of the atmospheric CO2 concentration.’
http://www.grida.no/publications/other/ipcc_tar/?src=/climate/ipcc_tar/wg1/345.htm
See figure 9.1 – both of these emerge from the same models. One is seen at the time of CO2 doubling and the other occurs some hundreds of years later.
Both are problematical because the ‘irreducible imprecision’ of models emerges from properties of ‘sensitive dependence’ and ‘structural instability’. These are fundamental properties of complex dynamical systems – and of these models in particular – in theoretical physics.
The problem of modelling hundreds of years into the future emerges from the dynamical complexity of climate itself. Climate shifts in 1976/1977 and 1998/2001 show that even short-term modelling is fraught with uncertainties. In particular, the potential for cooling for another decade or three – as a result of an oceanic influence from both the Pacific and Atlantic Oceans – is an emerging theme in the scientific literature.
This study uses the PDO as a proxy to model impacts of Pacific influences on Earth systems. ‘Decadal-scale climate variations over the Pacific Ocean and its surroundings are strongly related to the so-called Pacific decadal oscillation (PDO) which is coherent with wintertime climate over North America and Asian monsoon, and have important impacts on marine ecosystems and fisheries. In a near-term climate prediction covering the period up to 2030, we require knowledge of the future state of internal variations in the climate system such as the PDO as well as the global warming signal. ‘
‘A negative tendency of the predicted PDO phase in the coming decade will enhance the rising trend in surface air-temperature (SAT) over east Asia and over the KOE region, and suppress it along the west coasts of North and South America and over the equatorial Pacific. This suppression will contribute to a slowing down of the global-mean SAT rise.’
http://www.pnas.org/content/107/5/1833.short
While these authors carefully suggest only a slowing down of surface atmospheric temperature rise, the intensification of the cool La Nina phase of the Interdecadal Pacific Oscillation may be one of those surprises anticipated in the 2002 NAS publication ‘Abrupt climate change: inevitable surprises’. It should be remembered that oceanographers and hydrologists have been studying these phenomena for 100 years. We shall follow developments with amusement.
However, I can only see this as a problem. It may be understood that I am neither one nor the other when it comes to the climate wars. There is a risk in anthropogenic greenhouse gases that emerges from sensitive dependence in chaotic Earth systems. But the insistence that the planet continues to warm when it quite obviously is not can only be counterproductive. Many people could use a new narrative, and a bit of humility and grovelling would not go astray.
Chief –
But the insistence that the planet continues to warm when it quite obviously is not can only be counterproductive. Many people could use a new narrative, and a bit of humility and grovelling would not go astray.
So what’s the probability of that happening in our lifetime?
if we can separate certain issues, then we can move onto the real issue, which is the one you raise. The red herrings – that sensitivity is an input to models, that it is an assumption rather than the result of assumptions – are all issues I’d like to put to bed, so that the conversation can turn to the issues you raise, which are recognized INSIDE the science. that’s my whole point.
steven mosher and Jim Owen
Let’s not spend too much time beating around the bush.
The range of climate sensitivity estimates claimed by IPCC is the result of model simulations, based largely on theoretical deliberations and assumed inputs.
As such, they are assumptions. They bear very little resemblance to real-life physical observations.
Theoretical physics is wonderful, but actual physical observations are better.
And these do not support the high estimates of 2xCO2 climate sensitivity claimed by IPCC.
That is the crux of the problem we are all discussing here.
Max
“The range of climate sensitivity estimates claimed by IPCC is the result of model simulations, based largely on theoretical deliberations and assumed inputs.”
WRONG.
I provided a link to knutti’s papers. models are but ONE piece of the puzzle and not a very important one for all the reasons Hansen says.
The evidence from paleo work and from work on observational datasets is the primary evidence. That work gives you ranges from ~1.5 to over 6. If the models did not agree with this observationally based range, the models would have to be fixed.
Let me repeat. Take studies on volcanoes, for example. They constrain the value to lie between 1.5 and 6. when the models produce a range from 2.1 to 4.4, that indicates that the models are consistent with the range established by observation. Get it? If you like observationally constrained estimates, guess what? You get higher averages.
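For what it is worth, the consistency claim above can be stated mechanically: the model-derived range should sit inside (or at least overlap) the observationally constrained range. A trivial sketch using the round numbers quoted in the comment:

# Ranges as quoted above, in degrees C per CO2 doubling
obs_low, obs_high = 1.5, 6.0   # observational constraint (volcano/paleo studies)
mod_low, mod_high = 2.1, 4.4   # range produced by the models

print(obs_low <= mod_low and mod_high <= obs_high)   # True: the model range falls inside the bound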
steven mosher, HUH?!? These thousands of KLOCs that took hundreds of PhDs years to develop pale in the face of solid, near-irrefutable paleoclimate data and evidence? Really? Those half-dozen trees and dozen ice cores are top notch, I’m sure, but significantly better than GCMs? Of course, if so, there’s still that little problem of the poor and reverse correlation between paleo CO2 and temperatures. Boggles the mind.
Steven Mosher
I have very much enjoyed your comments here and elsewhere, as well as your book, so don’t get me wrong here.
You have cited Knutti et al. as a source of information supporting the validity of the IPCC model-based assumption of a 2xCO2 climate sensitivity of 3C on average.
I have gone through this and am quite disappointed that it does not move out of the strictly hypothetical into the more practical or actually observed.
Let’s look at one of the first paragraphs:
OK. One at a time.
CCF is a purely hypothetical “red herring”; it is good that Knutti et al. did not attempt to “quantify” it; they should not even have mentioned it at all.
This is a statement of “faith”, as there are no empirical data to support it. The Hansen et al. paper that introduces the concept of energy “hidden in the pipeline” is based on model projections, circular logic and questionable arithmetic.
TCR is a hypothetical value.
The “ocean heat uptake” as postulated by Hansen et al. has been falsified in real life by the ARGO measurements since 2003.
The “model simulation in which CO2 increases at 1% per year” is unrealistic to start off with: CO2 has increased at a CAGR of around 0.4% per year since Mauna Loa started, and at around this same rate over the most recent period as well (a quick arithmetic check of this rate appears after this comment). It is foolish to load a model with a rate of CO2 increase that is unrealistically exaggerated.
This is another statement of “faith”, relating two purely hypothetical concepts based on theoretical deliberations.
The rest of this report is all about model simulation results. Interesting but not conclusive of anything in real life.
It appears that these modelers really start believing their models, as if they were some sort of modern oracles instead of simply multi-million dollar versions of the old slide rule.
Knutti et al. is interesting reading, but does not reflect real life in any way.
We need empirical data based on actual physical observations or reproducible experimentation, rather than simply model simulations based on theoretical deliberations.
As Steve McIntyre has stated very clearly: we need an “engineering quality” estimate of 2xCO2 climate sensitivity, which shows that this is 3C (and that AGW is, therefore, a potential problem).
This does not exist to date, despite at least 30 years of searching.
That is the inherent weakness of the IPCC premise that AGW, caused principally by human CO2 emissions, has been the primary cause of 20th century warming and, therefore, represents a serious potential threat to humanity and our environment.
Max
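As flagged above, the ~0.4% per year growth figure is easy to check against the published Mauna Loa annual means; the endpoint concentrations below are approximate round numbers, not exact station values:

c_1959, c_2010 = 316.0, 390.0            # approximate Mauna Loa annual means, ppm
cagr = (c_2010 / c_1959) ** (1.0 / 51) - 1
print(cagr)   # ~0.0041, i.e. roughly 0.4% per year, as claimed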
This discussion is motivating me to do a new post on sensitivity (not sure when I will have time tho). In case you missed it the first time around, here is my previous post on sensitivity http://judithcurry.com/2011/01/24/probabilistic-estimates-of-climate-sensitivity/
Thanks Judy.
One thing that might be helpful is for folks to read Knutti.
here is my main concern. There are many misunderstandings of
1. the definitions of sensitivity
2. how it is derived
3. the various lines of evidence.
4. that it’s not an input to models.
People have learned some stupid pet tricks to get along in web debates. they need to understand that these stupid pet tricks prevent them from having a real debate. and there is a real debate over sensitivity. They can join that debate. But they cannot join that debate by arguing that sensitivity is either
1. input into models
2. derived SOLELY from models
Both of those are demonstrably wrong. THE debate is not over, but arguing about those two points is not a debate. If they want to argue that, I suggest they go discuss Obama’s birth certificate or Bush’s plan to destroy the twin towers.
Sensitivity is an input to the model if the human model builders are in any way using the results of the sensitivity analysis to adjust the model parameters.
It may not be an automated input, but as soon as you start making manual adjustments to the model input parameters because the model is predicting CO2 sensitivity too high or too low for what you believe to be correct, then CO2 sensitivity is being fed back into the model as an input.
Remember “Clever Hans” the horse that could do mathematics? The classic example of unrecognized feedback during training affecting the final results.
ferd,
‘Sensitivity is an input to the model if the human model builders are in any way using the results of the sensitivity analysis to adjust the model parameters.’
You’re saying that if the modelers input different observational data that becomes available to constrain their results, then climate sensitivity is an input? Or that if they realize they can no longer neglect a term in a particular differential equation for the pertinent forces, then by adding that term into the model they are making climate sensitivity an input?
The model itself is just equations defining the physics involved. Those can be set by boundary or initial conditions, or both, in the same way that any differential equation is solved. With so many equations involved, unless you showed me the exact path from changing ANY parameter in those equations to making sensitivity an input, I am highly skeptical that your comment has any meaning in the context of actual models, model performance or model optimization.
Yet again, we have a specific scientific claim (any optimization leads to climate sensitivity as an input) about climate models being presented on a thread about the scientific method. Let’s see if the scientific method can validate or nullify the claims being made. My money is on nullify on this one, although I don’t think anyone will take this bet.
What is often unrecognized by people outside of computer science is that any computer model that uses learning techniques is subject to the same sorts of contamination as can occur in animal training experiments.
see:
http://en.wikipedia.org/wiki/Observer-expectancy_effect
The observer-expectancy effect (also called the experimenter-expectancy effect, observer effect, or experimenter effect) is a form of reactivity, in which a researcher’s cognitive bias causes them to unconsciously influence the participants of an experiment. It is a significant threat to a study’s internal validity, and is therefore typically controlled using a double-blind experimental design.
Models are not purely physical, because many physical effects are at best estimates. For example, cloud albedo.
When you adjust the parameters and/or their weighting as is done during model backcasting, then you are training the model.
The model has no way to distinguish between those effects resulting from physics as compared to those resulting from unconscious cognitive bias.
Like “Clever Hans” the model may deliver the answer it does because that is the answer the modeller expects, not because the model calculated the correct answer.
“Sensitivity is an input to the model if the human model builders are in any way using the results of the sensitivity analysis to adjust the model parameters.”
the only documentation we have about “tuning” or adjusting models is in regard to hindcasting tests, not diagnostic runs.
So, until you can document that the results of diagnostic runs are used to adjust inputs, your point is speculation.
As I noted, the only kind of adjusting that people talk about is in chapter 9 of Ar4.
Adjustments to forcings, like aerosols, will not change the effect due to doubling CO2, because forcings, as has been shown by diagnostic testing, combine linearly.
Simply: turn off aerosols. double CO2. DELTA T ~2.7
turn on aerosols. double CO2. DELTA T ~2.7
Aerosols are adjusted for hindcast purposes, so that argument is an attribution concern, not so much a doubling-CO2 concern. Forcings combine linearly.
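The linearity point can be illustrated with a toy in which equilibrium warming is simply net forcing divided by a feedback parameter. The lambda and aerosol values below are assumptions picked only to reproduce the ~2.7 figure above, not outputs of any GCM:

LAM = 1.37      # assumed feedback parameter, W/m^2/K (picked so 3.7/LAM ~ 2.7)
F_CO2 = 3.7     # forcing from doubled CO2, W/m^2
F_AER = -1.0    # assumed aerosol forcing, W/m^2

dT_without_aerosols = F_CO2 / LAM
dT_with_aerosols = (F_CO2 + F_AER) / LAM - F_AER / LAM
print(dT_without_aerosols, dT_with_aerosols)   # identical ~2.7 K either way

In a strictly linear system the aerosol term cancels out of the doubling diagnostic, which is the sense in which “forcings combine linearly” makes the two runs agree.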
ferd,
‘…any computer model that uses learning techniques…’
There is a distinction you are assuming about the numerical methodology used to converge on a specific solution to the differential equations in the climate model itself. It is not using past solutions in any kind of feedback loop that allows the researcher access to those solutions for tampering. It is just the digital computation of a finite difference method, which is a stable algorithm for solving differential equations.
So all the points you’re making about experimenter bias are totally off base, especially from a computer science perspective. The climate models are solved numerically and the solutions are then tested against known historical data. Based on the agreement with that data, parts of the differential equations are ‘tuned’ to match that data more accurately. Since the climate sensitivity is NOT a piece of observational data, there is no way that researchers can input that into the model to begin with, but also no way for them to ‘tune’ it in later.
So again, there is no learning optimization involved in solving the differential equations, based on well-known physics, that are the basis of the climate model. It is simply the application of the well-understood and stable finite-difference technique used to numerically solve those differential equations.
Your point is just out of context.
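For readers who want to see what “finite difference” means in the simplest possible case, here is a forward-Euler integration of a zero-dimensional energy-balance equation, C dT/dt = F - lambda*T. This toy is obviously not a GCM; the heat capacity and lambda are assumed illustrative values:

# Forward-Euler finite-difference solution of C * dT/dt = F - LAM * T
C = 8.4e8       # assumed heat capacity of ~200 m of ocean, J/m^2/K
LAM = 1.37      # assumed feedback parameter, W/m^2/K
F = 3.7         # step forcing from doubled CO2, W/m^2
STEP = 86400.0  # time step: one day, in seconds

T = 0.0
for day in range(200 * 365):        # integrate for 200 years
    T += STEP * (F - LAM * T) / C   # the finite-difference update
print(T)   # approaches the equilibrium F/LAM ~ 2.7 K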
“the only documentation we have about “tuning” or adjusting models is in regard to hindcasting tests, not diagnostic runs.”
If you check my previous comment you will find I am talking about hindcasting. (I called it backcasting.)
As soon as you hindcast a model to improve the fit, you are talking about training. This imparts learning to the models and they suffer from the same problems as animals and humans during training.
Like Clever Hans, they learn to provide the answer the trainer expects. Like the horse, the model doesn’t care how it got the answer. The trainer may be convinced the horse is performing arithmetic (forecasting climate), while in reality it is doing something quite different (analysing the trainer).
Say, for example, you use 30 for input X (e.g. cloud albedo), with a 5% weighting. You backcast and find that you can equally improve the hindcast by adjusting input X to 32, or by adjusting the weighting to 6%. Which do you do?
Is 32 correct? Or a 6% weighting? Or maybe the historical data you are trying to fit is noisy and inaccurate. Each of the little decisions you make in selecting the input adjustments is subject to unconscious cognitive bias.
These types of decisions are magnified many times over as the number of input parameters increases. You don’t know if the hindcast fit errors are a result of incorrect input parameters, inappropriate weightings, or historical data errors.
So in the end, you unconsciously adjust the input parameters to deliver a result that is in line with your expectations. In other words, you have trained the model to act like Clever Hans.
Like Hans, the model satisfies the expectations of the trainer, so we assume it is predicting the climate (a horse performing arithmetic).
This is what you get with CO2 sensitivity. No matter how much we might wish otherwise, cognitive bias is unconsciously affecting the models to deliver the answer the model builder expects. If you expect CO2 sensitivity to be > 1, then that is likely the answer you will get, unless you purposely adjust the model otherwise.
Like the horse, unless and until the model is removed from the bias of the trainer (and audience), you cannot determine if the horse is performing arithmetic (or the model predicting climate).
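The “32 or 6%?” dilemma above is what statisticians call non-identifiability, and a contrived toy shows why the hindcast alone cannot resolve it: if the output depends only on the product of a parameter and its weighting, different tunings fit identically. The function and numbers below are invented purely for illustration:

def toy_model(param, weight, forcing=3.7):
    # contrived model whose output depends only on param * weight
    return forcing * (1.0 - param * weight / 1000.0)

print(toy_model(param=30.0, weight=6.0))   # tune the weighting up...
print(toy_model(param=36.0, weight=5.0))   # ...or the parameter up: identical output,
                                           # hence identical hindcast skill either way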
maxwell: “The climate models are solved numerically and the solutions are then tested against known historical data. Based on the agreement with that data, parts of the differential equations are ‘tuned’ to match that data more accurately.”
The question that arises is: how many equations can be ‘tuned’? IOW, how many degrees of freedom are there? And, in turn, how much of this (presumably) multi-dimensional space have we explored? If each parameter (‘equation’) can be tuned to 1000 values, and there are only 3 that can be tuned, that’s a billion runs. I don’t know, but I presume that there are many more than 3 such tunable parameters. Therefore, given the time required to generate a “run” of the model, we clearly cannot do a complete analysis – we need to prune the number of runs to something more reasonable. Undoubtedly, this has been done in (at least) two ways:
1) empirical and/or heuristic limitation – e.g., we don’t do runs where a forcing is outside reasonable constraints of what we have measured or reconstructed.
2) vary one at a time to see what looks reasonable and get a “feel” for how it reacts.
While 1 is reasonable IMO, 2 is not – especially if these parameters are likely to interact, and most especially if those interactions are non-linear (tipping points). For example, if temperature affects evaporation, evaporation affects precipitation and precipitation affects temperature (none of which would be unreasonable to assume IMO), and furthermore there are tunable parameters for more than one of these effects, it is extremely likely that more than one set of parameters exists that can hindcast to the same accuracy and precision – but we would never know if we restricted ourselves to varying only one at a time.
And all this before we even consider any issues relating to resolution (scale) – as far as I am aware, there has been very little work in climate models on the effects of grid size (spatial or temporal) on results, and on what scales need to be resolved to prevent numerical noise from rounding and resolution limits overwhelming the physics calculations. Clearly then, we are on very shaky ground if we are using the models as “evidence”, as I’ve said before. Perhaps we have been clever or lucky enough to get it right, but would you bet your future and life on it? If so, you are a braver man than I.
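The combinatorics behind the “billion runs” remark above: an exhaustive grid over k tunable parameters at r values each costs r**k model runs, which is why complete exploration of the tuning space is hopeless.

r = 1000                  # values per tunable parameter
for k in (3, 5, 10):      # number of tunable parameters
    print(k, r ** k)      # 3 -> 1e9 (the billion), 5 -> 1e15, 10 -> 1e30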
Judith
Thanks for link.
Yes. A new thread dedicated to Climate Sensitivity would be a good thing, as this still appears to be the big unresolved issue here.
At the time I followed your previous thread on this as a lurker, but have now gone back to it again.
There were many interesting comments.
The Schwartz 2010 paper was very interesting in analyzing why the model warming estimates did not materialize in real life.
We read that current model-derived estimates of climate sensitivity have thrown in the human aerosol “wild card” to justify a high 2xCO2 climate sensitivity.
At the time of AR4 human aerosols plus a minor negative forcing from land use changes were assumed to essentially cancel out the forcing of other minor GHGs, etc., except for CO2, so that the net forcing (by 2005) from all anthropogenic forcings was equal to that of CO2 alone.
It now appears from the Schwartz paper that the assumed aerosol forcing might be even greater and could theoretically account for much of the observed discrepancy between the model forecasts and the actual observations.
This is all interesting but still very hypothetical:
– Our models assume a certain range of climate sensitivity.
– Physical observations have shown a much lower rate of warming.
– We hypothesize a long lag in reaching equilibrium, with energy “hidden in the pipeline”, to account for the difference between the model assumption and the observed value (Hansen et al. 2005)
– When the atmosphere plus upper ocean appear not to be warming as our models say they should, we hypothesize that a larger forcing from human aerosols may be hiding some of the assumed warming from CO2
I hope a new thread can get to the bottom of some of these strange rationalizations and give us some more solid comparisons with actually observed empirical data, rather than simply model simulations (or “assumptions”, as I have called them – to which you and a few others have objected).
Max
“Our models assume a certain range of climate sensitivity”.
I can picture Steve Mosher literally pulling his hair out
“Our models assume a certain range of climate sensitivity.”
This made me laugh out loud as I pictured Steve Mosher banging his head on his keyboard in frustration.
Oops – oh well, worth saying twice
Dear god. The amount of time certain people waste on the WRONG argument is astounding. What they fail to realize is that the better argument comes through by accepting the simple truth: Delta T (sensitivity) is a result. If they saw that then they could FOCUS energy, attention, words and time on the real debate – like assumptions about aerosols, assumptions about model completeness.
But no. somewhere they learned a stupid pet trick in the echo chambers of contrarian land. these stupid pet tricks keep skeptics from engaging in the real debate. these stupid pet tricks minimize their influence and power.
Dear god
An appeal to a higher authority?
http://www.worldscibooks.com/physics/0862.html
I suspect not; however, the implications for Faustian distributions are unclear, as Goethe did say that nature is coauthored by the devil.
http://i255.photobucket.com/albums/hh133/mataraka/randomwalk-n_37231.gif