by Judith Curry
Curry and Webster (2011) discuss the important topic of uncertainty in climate research. While we agree that it is very important that uncertainty is estimated and communicated appropriately, their discussion of the treatment of uncertainty in the IPCC assessment reports regarding attribution is inaccurate in a number of important respects.
IPCC has placed high priority on communicating uncertainty (Moss and Schneider, 2000; Mastrandrea et al., 2010, 2011). Since detection of climate change and attribution of causes deals with distinguishing ‘signals’ or ‘fingerprints’ of climate change from climate variability, an approach requiring substantial use of statistics (see Hegerl et al., 2007), this area of research has always placed high priority on estimating uncertainties appropriately. Hence the chapter on attributing causes to climate change of IPCC AR4 (Hegerl et al., 2007) discusses the uncertainty in its findings in detail, including in an overview table where remaining uncertainties are explicitly listed for each finding. In this brief comment we will limit our focus to the four key errors and misunderstandings in Curry and Webster (2011) regarding the treatment of uncertainty in the detection and attribution chapter of IPCC AR4:
1) The authors claim that ‘The 20th century aerosol forcing used in most of the AR4 model simulations (Section 9.2.1.2) relies on inverse calculations of optical properties to match climate model simulations with observations’ and thus claim ‘apparent circular reasoning’. This is incorrect. The inverse estimates of aerosol forcing given in Section 9.2.1.2 are derived from observationally based analyses of temperature and are compared in Chapter 9 with “forward” estimates calculated directly from understanding of the emissions, in order to determine whether the two are consistent. It is critical to understand that such inverse estimates are an output of attribution analyses, not an input, and thus the claim of ‘circular reasoning’ is wrong. The aerosol forcing used in the 20C3M climate model simulations (see http://www-pcmdi.llnl.gov/projects/cmip/ann_20c3m.php) was based on forward calculations using emission data (Boucher and Pham, 2002; references in Randall et al., 2007). Further, detection and attribution methods determine whether model-simulated temporal and spatial patterns of change (referred to as ‘fingerprints’) that are expected in response to changes in external forcing are present in observations. For example, the aerosol fingerprint shows a spatial and temporal pattern of near-surface temperature changes that varies between hemispheres and over time (see Hegerl et al., 2007, Section 9.2.2). The solar fingerprint shows a vertical pattern of free-atmosphere temperature changes with warming throughout the atmosphere, unlike the observed pattern of warming in the troposphere and cooling in the stratosphere, and it also has a distinct temporal pattern, particularly on longer timescales. These patterns make the response to solar and aerosol forcing distinguishable (with uncertainties) from that due to greenhouse gas forcing. The amplitude of those fingerprint patterns is estimated from observations.
Therefore, attribution of the dominant role of greenhouse gases in the warming of the past half-century is not sensitive to the uncertainties in the magnitude of aerosol forcing, or of other forcings, such as solar forcing. If the observed response were (at a given significance level) consistent with a smaller aerosol signal, balanced by a smaller greenhouse gas signal than that used in the models, then the results from fingerprint studies would include these possibilities within their statistical uncertainty ranges. Thus, attribution studies sample the range of possible forcings and responses much more completely than climate models do (Kiehl, 2007). Also, the IPCC AR4 assessment carefully explores other possible explanations, such as solar forcing alone, and finds that ‘it is very likely that greenhouse gases caused more global warming over the last 50 years than changes in solar irradiance’, based on studies exploring a range of solar forcing estimates and using a range of data (section 9.4.1, Hegerl et al., 2007). Such studies also attribute the warming in the first half of the 20th century to a combination of external natural and anthropogenic forcing and internal climate variability (table 9.4). Thus, Curry and Webster misrepresent the role of forcing magnitude uncertainties in attribution, and do not appreciate the level of rigour with which physically plausible alternative explanations of the recent climate change are explored.
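The fingerprint reasoning above can be illustrated with a toy calculation. This is a deliberately simplified sketch, not the generalized-least-squares methods actually used in AR4 detection and attribution: the "fingerprints", "observations", and scaling factors below are synthetic, chosen only to show how amplitudes and their uncertainty ranges are estimated from data.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(50)  # 50 years

# Synthetic "fingerprints": distinct temporal shapes for each forcing.
ghg = 0.02 * t                             # steady warming trend
aerosol = -0.01 * t * (t < 30)             # cooling that levels off
solar = 0.1 * np.sin(2 * np.pi * t / 11)   # ~11-year cycle

# Synthetic "observations": true scaling factors 1.0, 1.0, 0.5 plus noise
# standing in for internal climate variability.
y = 1.0 * ghg + 1.0 * aerosol + 0.5 * solar + rng.normal(0, 0.05, t.size)

# Estimate the amplitude (scaling factor) of each fingerprint from the
# observations by ordinary least squares.
X = np.column_stack([ghg, aerosol, solar])
beta, res, rank, _ = np.linalg.lstsq(X, y, rcond=None)

# 1-sigma uncertainties from the residual variance.  The attribution
# question is whether the GHG scaling factor is significantly positive and
# large, which does not require the assumed forcing magnitudes to be exact:
# a wrong amplitude in the model pattern simply rescales beta.
n, p = X.shape
sigma2 = res[0] / (n - p)
cov = sigma2 * np.linalg.inv(X.T @ X)
err = np.sqrt(np.diag(cov))
for name, b, e in zip(["GHG", "aerosol", "solar"], beta, err):
    print(f"{name}: scaling factor {b:.2f} +/- {e:.2f}")
```

Because the estimated scaling factors carry their own uncertainty ranges, a combination of a smaller aerosol signal and a smaller greenhouse gas signal remains inside the sampled possibilities, which is the point made above.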
2) ‘.. no traceable account is given in the AR4 of how the likelihood assessment in the attribution statement was reached’: Expert open reviews are designed to ensure that the steps taken during the AR4 were clear to attribution experts. An explanation of how the assessment was obtained is given in the introduction to the chapter, and includes a description of how the overall expert assessment is based on technical results and an assessment of their robustness, downgraded to account for remaining uncertainties (section 9.1.2, second-to-last paragraph). The detailed assessment of the causes of a variety of observed climate changes, including the results from published studies, the remaining uncertainties, and the overall assessment, is given in table 9.4, which extends over more than three printed pages. However, improving the communication of such material to the broader audience of scientists who are not directly involved in attribution studies is also an important goal, and this exchange shows this can be improved.
3) ‘The high likelihood of the imprecise ‘most’ seems rather meaningless’: We disagree. The likelihood describes the assessed probability that ‘most’, i.e. more than 50%, of the warming is due to the increase in greenhouse gases. This statement has a clear meaning and an associated uncertainty, although explicitly listing ‘>50%’ in the text to ensure that no misunderstandings are possible could be helpful in future work.
4) The authors claim that ‘Figure 9.4 of the IPCC AR4 shows that all models underestimate the amplitude of variability of periods of 40-70 years’. This is an incorrect conclusion because Curry and Webster do not appear to have considered the uncertainties that were presented in the chapter. The figure (figure 9.7, not figure 9.4 of the assessment, see figure) clearly shows that the simulated variability of annual global mean temperature on time scales of 40-70 years is consistent with the variability estimated from observations, given uncertainty in spectral estimates. Detection and attribution methods account for the contribution by internal climate variability to observed climate changes. Since the estimates of climate variability that are used for this purpose are generally obtained from climate model data, the chapter also contains a detailed discussion of the reliability of climate model variability for detection and attribution. The chapter states that detection and attribution methods yield an estimate of the internally generated climate variability in observations and palaeoclimatic reconstructions (see section 9.3.4) that is not explained by forcing. This ‘residual’ is comparable to the variability generated by climate models, and the patterns of variability in models reproduce modes of climate variability that are observed (see chapter 8). The remaining uncertainty in our estimates of internal climate variability is discussed as one of the reasons the overall assessment has larger uncertainty than individual studies (see, e.g. table 9.4).
We would like to thank the authors of the Comment (Hegerl et al. 2011), all of whom played leadership roles in the IPCC AR4, for their interest in our paper (Curry and Webster 2011). The authors are correct that since the Third Assessment Report, the IPCC has placed a high priority on communicating their conclusions about uncertainty. Our paper raises the issue of how the IPCC nonetheless, in the AR4, again fell short in this priority as well as in investigating and judging uncertainty. Hegerl et al. focus on the section in our paper on “Uncertainty in attribution of climate change,” which addresses the IPCC AR4 conclusion regarding attribution: “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”
We are encouraged that Hegerl et al. acknowledge the importance of improving traceability—a recommendation made by the InterAcademy Council (IAC 2010) as well. We believe an independent person or group—and not just members of the small community of attribution experts—should be able to understand how the result came to be and to walk through the decision process and achieve the same result. The IPCC should consult with the larger scientific and engineering community experienced in traceability standards to determine what is meant by IPCC’s traceability guidelines, and what kind of traceability is actually suitable for the IPCC assessments. Beyond the quote we provided in our article, the IAC Review provides a starting point for a description of what is suitable: “… it is unclear whose judgments are reflected in the ratings that appear in the Fourth Assessment Report or how the judgments were determined. How exactly a consensus was reached regarding subjective probability distributions needs to be documented.”
Some fields (e.g. medical science, computer science, engineering) have stringent traceability requirements, particularly for products and processes that are mission critical or have life-and-death implications. We expect the level and type of traceability required of IPCC will be related to the complexity of the subject matter and the criticality of the final product. Increasing traceability in its assessment reports will enhance both accountability and openness of IPCC.
Hegerl et al. state: “The remaining uncertainty in our estimates of internal climate variability is discussed as one of the reasons the overall assessment has larger uncertainty than individual studies.” Translating this uncertainty in internal climate variability (among the many other sources of uncertainty) into a “very likely” likelihood assessment is exactly what was not transparent or traceable in the AR4 attribution statement. We most definitely “do not appreciate the level of rigour with which physically plausible non-greenhouse gas explanations of the recent climate change are explored,” for reasons that were presented in our paper. In our judgment, the types of analyses referred to and the design of the CMIP3 climate model experiments that contributed to the AR4 do not support a high level of confidence in the attribution. Hegerl et al. take issue with our statement “The high likelihood of the imprecise ‘most’ seems rather meaningless.” Hegerl et al.’s proposal to add “>50%” to the attribution statement might have improved communication of uncertainty on this point. Nonetheless, this small change would still fall short of addressing the problems our article described (and quoted from assessment users) about the fundamental difference between 51% and 99% attribution.
Hegerl et al. object to our statement in the original manuscript: “Figure 9.7 of the IPCC AR4 shows that all models underestimate the amplitude of variability of periods of 40-70 years,” on the basis that we do not consider the uncertainties presented in the chapter. Figure 9.7 is presented on a log-log scale, and the magnitudes of the uncertainties for both the model simulations and the observations are approximately a decade (a factor of 10). Considering uncertainty, a more accurate statement of our contention would have been: The large uncertainties in both the observations and model simulations of the spectral amplitude of natural variability preclude a confident detection of anthropogenically forced climate change against the background of natural internal climate variability.
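The width of the uncertainty ranges both sides refer to follows from standard spectral statistics: a power spectral density estimate with ν degrees of freedom is distributed as a scaled chi-squared variable, so its 95% confidence interval is multiplicative. The sketch below uses illustrative degrees-of-freedom values, not numbers taken from figure 9.7, to show how wide that interval is for the few effective degrees of freedom available at 40-70 year periods in a roughly century-long record:

```python
from scipy import stats

# A spectral density estimate S_hat with nu degrees of freedom satisfies
# S_hat ~ S_true * chi2(nu) / nu, so the 95% confidence range for S_true
# is [nu / chi2.ppf(0.975, nu), nu / chi2.ppf(0.025, nu)] times S_hat.
for nu in (2, 4, 10, 30):
    lo = nu / stats.chi2.ppf(0.975, nu)
    hi = nu / stats.chi2.ppf(0.025, nu)
    print(f"dof={nu:2d}: 95% range spans a factor of {hi / lo:.1f}")
```

With only a handful of degrees of freedom, the confidence range spans well over a factor of 10, which is why comparisons of simulated and observed low-frequency variability on a log-log plot can look consistent despite apparent offsets.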
Acknowledgements. Comments from the Denizens of Climate Etc. at judithcurry.com are greatly appreciated. Particular thanks to Steve Mosher, John Carpenter and Pekka Perila on traceability.
I am disappointed that you would stoop to using your colleagues’ stolen emails to bolster your case.
Why? The golden rule of correspondence is not to send anything that would embarrass you should you see it later published in the newspaper. What is abundantly clear from the emails (are you sure that they have been ‘stolen’?) is that what some climate scientists say privately is not at all what they say publicly. Much of the fuss over AGW in the context of the IPCC reports is the apparently high levels of confidence that people express in public that they do not possess in private. My reading of the Hegerl et al paper is that it conforms to the principle that the IPCC position must be supported at all costs, even if those concerned say otherwise in private.
I know what I am disappointed in.
It is a matter of elementary decency. Would you have rifled through your colleagues’ mail for material for your arguments?
I doubt that the crew that was happy to “hide the decline”, use proxies upside down, ignore FOI requests and vow to corrupt the peer review process if necessary to support “the cause” are people Dr Curry would consider to be “colleagues”.
Which of these people did that?
Nick, in the context of Climate Change and Society’s response to it, this is a ridiculous position to take. What matters is that the science is robust enough to make extremely important decisions on.
Furthermore, it was not Judith who rifled through anyone’s private emails, but once they were made public they substantiate the well considered case she made in her original paper. Since these were challenged or rejected by Hegerl et al, rather than accepted or acknowledged as having some validity, I would say that she is more than entitled to point out how these assessments were made in private.
No, Nick is right. The fact is that this was private correspondence which has been released without the consent of the individuals involved or the institutions they worked for, or as a result of any request through official channels. People are justifying their release on the grounds that they exposed supposed wrongdoing by certain individuals, but that doesn’t apply to any of the people in the emails Judith quoted.
I accept that the emails are out there in the public domain and so they will be quoted and discussed in forums like this, but that is different from being cited as evidence in a discussion between professionals on a scientific question – I’m surprised that people can’t see that distinction and I can’t believe Judith thought it appropriate. It is, frankly, a cheap shot and disrespectful to Gabi Hegerl. The way to answer the criticisms made by Hegerl et al. is to provide citations to the IPCC reports under discussions or the wider scientific literature which support the arguments made by Curry and Webster, the fact that Judith instead resorts to citing the emails gives the impression that she can’t provide such citations and actually undermines her case.
No, it was not private correspondence by any definition of the word, except one of convenience by the AGW believers.
Nothing about their personal lives is discussed. It is all work related. It was done on employer owned servers and computers.
Frankly, that you would repeat this untruth two years after the fact and pretend it is true speaks more to your poor choices and untenable position on this than it does to anything else.
If their personal lives were being discussed that would make it personal correspondence not private correspondence.
This was private correspondence in that the contents were not intended to be viewed by anyone other than the sender and the recipient (and possibly certain individuals within their employers), as is the case with pretty much all work related correspondence. I don’t have the right to see your work related emails, you don’t have the right to see mine – that makes them private.
If the individuals had made their comments in a forum such as this or chosen to make the emails available in any public forum then that would make it public correspondence.
It’s a pretty simple distinction.
Andrew Adams — If their personal lives were being discussed that would make it personal correspondence not private correspondence.
This is your opinion of what you think ought to be, and that’s all it is.
It’s common knowledge that correspondence on an employer provided email belongs to the employer, not the employee. There is no expectation of privacy of any sort. Your correspondence can be examined at any time by your employer. This is all part of the basic paperwork all employees sign regarding the use of electronic correspondence systems.
At any workplace if you desire to communicate with privacy then you take your personal laptop and use a private account.
It would be one thing for Dr Curry to illegally “rifle through” stolen materials. It’s quite another for her, AFTER A PAPER IS WRITTEN, to look at public domain material that supported her contention and then blog later saying, “oh, interesting, have a look at this, it supports what I said in the first place.”
Frankly, Stokes, the notion of common decency includes not making hyperbolic allegations (“rifling through”), thus you’re hardly in any position to comment.
To Andrew Adams: Correspondence regarding publicly supported work and exchanged from or between publicly owned computers, and archived in publicly supported servers, is NOT private. In fact, it is subject to FOIA, which means that the public is entitled to see it. Of course in this case the mails were not obtained through a FOIA request (such requests have been fiercely contested in every instance, in fact, but the FOI Act applies nonetheless as seen in several cases). But that does not turn the mails into private correspondence. Especially if it does not concern personal issues but strictly technical-scientific issues closely related to the work done during work hours by scientists on a public payroll and concerning publicly supported research.
Actually the law in the UK does give employees at least some right to privacy even if they are sending personal emails from employers’ servers. But we are not talking about that kind of correspondence anyway.
“Private” to me means that the contents are confidential and not for public consumption – this can apply to emails which are sent for either work or personal purposes. Yes, our work emails are ultimately the property of our employers, so they are private just like any other proprietary information which belongs to the employer.
Emails sent by employees of public bodies may be subject to FoI requests. There are valid exemptions, and in the UK it is ultimately down to the ICO or Information Tribunal to rule on individual cases.
The mechanism to obtain such information is through a FoI request. When the information is released it is then public information, until then it is not and the fact that someone believes information which has not been so released should be subject to FoI doesn’t give them the right to treat it as public information.
Elementary decency would also lead people not to submit a comment that privately they do not support.
Hypocrisy anyone? Thank God for whistleblowers – these people are beyond the pale.
Andrew Adams — Yes, our work emails are ultimately the property of our employers, so they are private just like any other proprietary information which belongs to the employer.
They are decidedly *not* private when the employer who is employing you is doing so via public money. At that point everything belongs to the public via FOIA. You seem to be failing to grasp (purposefully, in my view) that *private* companies and those operating via *public* funding aren’t the same thing conceptually OR legally.
Well, if they claimed one thing in one set of public documents and I had other public documents that contradicted them, yes.
Mails sent as part of your job are not private, even if they are sent from a private account.
I encourage everyone to remain calm as world order and the AGW scam crumble, trusting that the Universe is in good hands:
Remember Gandhi defeated the British Empire, . . .
a.) Not by having more funds or weapons, but . . .
b.) By doing what is right, and
c.) Turning the results over to Fate!
World leaders and leaders of the scientific community are as scared today as leaders of the British Empire were when they met an opponent that would not accept their arrogant, false view of reality. That is where we are today.
If the Sun sneezes, farts or belches, world leaders are all toast too. Today NASA is helping to get out that message:
Today all is well
Oliver K Manuel
They are decidedly *not* private when the employer who is employing you is doing so via public money. At that point everything belongs to the public via FOIA.
I can only repeat what I said to Hector – not all emails sent by public employees are subject to FoI; there are exemptions. The information belongs to the public at the point at which it is released, either voluntarily or via a FoI request. The public doesn’t have the right to just help itself to information it believes it is entitled to.
By the way, there is something which is puzzling me. None of the individuals in the email chain quoted by Judith work for UEA/CRU, so how did they become part of the “climategate” file?
Everyone knows the ethical thing to do is to huff in a superior fashion that it’s unethical to read the e-mails and proudly pronounce “I have not read the e-mails because it would be unethical”, and then huff in an equally superior fashion that “the e-mails show nothing.”
The release of the emails is a good thing. Now figger that one out.
“Elementary decency” means keeping the broader world fully and accurately informed on matters with huge policy implications which will affect the well-being of billions of people. If Judith or anyone else finds from any source material which conflicts with or undermines the publicly adopted position of IPCC etc authors, then the moral imperative is to bring this to public attention. It is not “a matter of elementary decency” to conceal material which reflects badly on IPCC authors’ trustworthiness and reports just because of the manner in which it leaked into the public domain.
As for the e-mails being “stolen”, the US DoJ and Norfolk police have presumably not yet reached such a conclusion, given the recent raid on tallbloke and removal of his computers. The question is still open.
“…that what some climate scientists say privately is not at all what they say publicly.” This very much reminds me of what some dot.com stock analysts were doing during the internet bubble. When they got caught saying in emails that certain stocks were dogs while publicly proclaiming what great buys they were, they were eventually found out and lost their jobs. The same should happen here.
You are right, Dennis. Uncertainty makes Big Brother mad!
1. Today Big Brother is running scared, as pointed out here:
2. Dangerous: May seize police control:
3. Out of control: But “Fear not, the Universe is in good hands!”
Oliver K. Manuel
Lying in private and conspiring to deceive is not, in the real ethical world, considered to be an activity immune from review.
Nothing produced on employer owned equipment, or on employer time, is subject to “privacy”.
Especially when it involves corrupting, distorting, undermining, etc. the work itself. Which is exactly what is revealed and why you so cynically seek to hijack the issue.
You have a rather odd view on what constitutes private. Discussing private matters while at work, using your work computer does not automatically make them private. Any private sector employer has the right to screen or audit emails created on its system by employees, whether it is during normal working hours or not. This applies to the public sector as well. If I work for the Dept of Health & Safety and am exchanging revealing photos with my girlfriend from work, those emails containing said photos are not private, no matter how much I might wish them so.
As far as who has the right – you are wrong. If I am your employer, I have the right. If you work for a public agency, I, as a citizen, have a right. If you work in private employment, I may have the right, should I have enough cause to get a court order.
As I mentioned in a reply to another commenter I use the term “private” to refer to information which is not in the public domain, and “personal” to specifically refer to correspondence of a, well, personal nature. So I think you answering a slightly different point to the one I was making, as we are discussing work related not personal emails.
But as you mention it, although an employer has the right to monitor emails to identify abuse of the system, they don’t in the UK have a blanket right to read your personal emails even if they are sent to or from a work email address, due to the right to privacy laid down by the HRA. Employees have actually won court actions in this respect.
I can’t speak to what rules apply in GB. What you state is true there, but it is not so in the US.
Still, the point – or at least what I thought was the point – is that these are not private or personal emails. If I send out an email discussing the options available for replacement of a transmission pole and installation of wireless communications equipment, that is subject to review, particularly if something goes wrong. If in another email I mention to the recipient that a certain party we are working with is an idiot, while personal in nature, that too would be subject to review.
And when it comes to the Climategate emails, other than Phil Jones talking about his occasional wine drinking, there isn’t much personal content. It is about work. Although I guess you could call some of the one’s by Mann and a few others “personal” when they are trashing other researchers.
Judith recently said she wanted to keep an email exchange between herself and David Rose of the UK’s Daily Mail confidential.
She expects confidentiality of her own correspondence but she doesn’t respect the confidentiality of others.
I think he’s just upset because the emails expose what was really going on behind the scenes. Had the emails shown these guys in a good light and shown them being forthright and that their research clearly showed these trends (rather than showing scientists frantically searching for various “bodges” in attempts to make the data fit the desired conclusion), I don’t think he would be upset one bit if the “stolen” emails were in the public domain.
But in any case, the emails are of no consequence. The interactions noted at the start of the article stand on their own merit. The email simply exposes that these questions were well-known to these people at the start and aren’t anything new to them.
The emails are out there. To pretend not to know what is in them is to perpetrate a farce. They just are out there and seeing them brought in to discussions such as these will be the norm going forward as they are now part of our history. They clearly show what was in the minds of people at the time. But more importantly, nobody’s personal email account was accessed. I don’t know about the UK, but in the US you have no expectation of privacy when using your employer’s email services. Furthermore, if your employer is publicly funded, you have no expectation of privacy from the public.
These were not personal private communications, these were communications that impact nearly every nation on this planet. To say that these people somehow are above scrutiny from a world who is being asked to pay dearly as a result of their findings seems unfair to the people of the planet, wouldn’t you say?
Good try, Nick.
Your assumption is wrong, and your reaction is a great confession that the e-mails are exactly the correct thing to be using and that Dr. Curry’s point is spot on.
It also shows you are a typical cowardly true believer, looking for reasons to ignore that which you do not like.
Put them in here now, Nick…
Political scientists will learn from their mistakes over time too.
We all will.
“It’s something so radical that it would have been considered crazy had it been pushed by the Bush administration,” said Tom Malinowski of Human Rights Watch.
Gotta admire the unintentional candor of an NGO every once in awhile.
The information is now out in the public domain. Ignoring it won’t make it go away no matter how it emerged in the first place. What would you propose? Pretend it isn’t there?
That’s a very common line of defence … called denial
Judith is just trying to “build bridges.” Don’t you think that using those emails is the best way to build bridges?
I mean, it’s not like she’s engaging the debate from a partisan orientation.
Or anything like that.
I thought it was just camps.
“The personnel on site to be covered by these services will depend on the size and scope of the recovery effort, but for estimating purposes the camp will range in size from 301 to 2,000 persons for up to 30 days in length.”
The camps are for the ‘recovery’ teams? That does not sound good.
Joshua, the point is rather simple.
If any of you have ever done a traceability study you would understand that these are the very types of communications that one would want to examine: what decisions were made, who made them, what process did they use, and what information did they rely on.
So if you worked in an environment where traceability was required, you would know that you should, for example, avoid verbal orders. Why? Because they are not traceable.
you would also know that any and all communication about the process should be part of the record. Anything less than full disclosure is unacceptable. That discipline drives participants in the process to higher levels of accountability. It’s a proven method and one that an outside group of experts with no discernible “motivated reasoning” recommended for the IPCC.
You are free to take pot shots at Dr Curry, but on the substance of the post, exactly where is it she has it wrong with regard to the ability to track the review and decision making process within the IPCC?
Not knowing your background, I have no idea if you have any experience with traceability and chain of possession. They are practices as automatic as breathing to someone like myself who has spent time on subs and in the commercial nuclear industry. If climate change is the gravest threat facing mankind, don’t you think it demands the same level of quality control and accountability as sending a man to the moon, building a modern submarine, commercial airliner or nuclear power plant?
Science has its own procedures. As attractive as it sounds to embrace the practices that led to Chernobyl and Fukushima, or brought on the Challenger disaster and the Taurus XL crashing into the sea, you of course need to make the case for that with some clear evidence of benefit.
But the argument “If this is so important, you should do it like (other people do other things)” is, as it stands, a fallacy.
There is no evidence that importing this or that practice from the world of engineering would help catch the actual plagiarism, corruption, data falsification and CV padding that have been identified in the past several years (exclusively among “skeptics,” so far.)
The theft of these emails has completely failed to undermine or alter scientific conclusions in any way. You haven’t shown any ability to root out the Soons, Wegmans, or the Fred Singers of the world. Rooting around in stolen private correspondence is sleazy, pointless, and undermines the discourse of ideas between scientists, in which they are free to exchange ideas and critiques that are not up to the standards of a public statement or a scientific publication. If you have never had an idea that needed to be worked through with a smart colleague before you knew if it was worth airing in public, then I’m sorry for you. You must be in a very boring business.
You’ve confirmed you don’t have any experience. And for extra credit you show off your complete lack of understanding of events.
Let’s see: Chernobyl and Fukushima – other than both being nuclear power plants, they have nothing in common. And as I recall from last March, there was something about an earthquake and tsunami. And as the Japanese conduct a review of what happened at Fukushima, they will be able to look in great detail at every aspect, from construction and maintenance to operational procedures, to determine exactly what occurred, when it occurred, and what the root causes of the failure events were.
If you want to believe it is just fine for scientists to bring their own rules out of the lab and into the real world, that is your prerogative. Fortunately others understand the difference between research guidelines and regulatory guidelines.
I’m not even going to try to follow you into that off-topic little rant about CVs and such, as it has nothing to do with what I was addressing. Not sure it has to do with anything at all.
As for the business I’m in, boring is all relative. Compared to riding broncos, I’d have to say it most certainly is. I’ll also say it is the sort of business where you often have to put your opinions, justifications and decisions right out in front of everyone. Can’t be afraid of the light of day, even if it involves a little heat.
That is what traceability allows one to do.
And so the suspense builds: will timg56 deal with the substantive criticism offered, or will he decide he has nothing and his only option is to seek refuge in ad hominem?
Whoops. Apparently he’s got no actual argument. But thanks for playing, timg56!
I think that part of what’s going on here is a question of definitions. You and others are clearly referring to a very specific sequence of requirements when you speak of traceability, but you should understand that others may view the concept of traceability differently. In order to clarify the issue, it would make sense to work towards a common definition rather than to use rhetorical devices to imply hypocrisy in others WRT traceability when you might be using different working definitions.
That said, I think that in general the notion of traceability is a good one – although I think that there needs to be an open discussion as to whether the specific process of traceability that is being suggested is necessarily appropriate for the context of academic research and the objectives of the IPCC. I think it’s a worthwhile discussion – but I also think such a discussion is not well-served when partisans are using the concept of traceability to pursue a partisan agenda. I’m not making a blanket accusation there, but rather observing that it seems that for some involved the notion of traceability is just one more side battle in the 7th grade cafeteria food fight melee.
You could be right that it is an issue of definition. But I also think it would be a mistake to get too bogged down parsing the definition of traceability. Both transparency and traceability can be boiled down to:
– who said what and when did they say it
– who was making the decisions and what information did they have to act on in making those decisions
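The two bullet points above amount to a minimal audit-log record. As a purely illustrative sketch (the class, field names, and example entry are all hypothetical, not drawn from any actual IPCC process), such a record might look like:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical minimal traceability record capturing the two questions above:
# who said/decided what and when, and what information they acted on.
@dataclass(frozen=True)          # frozen: entries cannot be altered after the fact
class TraceRecord:
    who: str                     # who said it / who made the decision
    what: str                    # the statement or decision itself
    when: datetime               # when it was said / decided
    basis: tuple = ()            # information relied on (document ids, references)

log = []                         # append-only in spirit: records are immutable

def record(who, what, basis=()):
    entry = TraceRecord(who, what, datetime.now(timezone.utc), tuple(basis))
    log.append(entry)
    return entry

# Hypothetical example entry:
record("WG1 lead author", "adopt forward-calculated aerosol forcing",
       basis=["Boucher and Pham 2002"])
```

The point of freezing the records is the same as the point of avoiding verbal orders: once a decision is in the log, it can be reviewed later exactly as it was made.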
Is there an argument for this information not being in general circulation when the process is in motion? Certainly. But once a report is issued, there is no good argument for not opening the process up for review. This takes us back to the issue of the emails. I haven’t been paying close attention to the dates, but my recollection is many, if not all, are from 2007 and earlier.
I’ll leave the part about partisan agendas alone. There are plenty of agendas at play on both sides of this argument. Too easy to start pointing fingers, with nothing positive resulting.
Without parsing you are left to be advocating for what seems to me to be a pretty tall order. What you described above could be viewed as a requirement that any communication whatsoever that was even remotely a part of the process of writing any aspect of any of the reports be completely recorded, collected, supplied, and extensively cross-tabbed with anything that got into print.
Would that be appropriate? Would it have, to some degree, a stifling impact that would result in a big mess of CYA? Would it mean that no one would be willing to speculate about potentially very threatening scenarios that are highly plausible and scientifically supported unless they can reach an inherently unreasonable bar of absolute proof?
I don’t know the precise answers, but I think that you can’t avoid the questions — and I think that further, to pass judgement on the actions of others without having a shared, agreed-upon definition of terms ahead of time becomes essentially a projection of biases. How can you incriminate someone for a lack of traceability if you haven’t agreed to a definition of the term very specific to the context?
It’s not entirely clear where Judith falls out, but it is abundantly clear that many participants in the debate – some of them people with considerable influence – view the IPCC authors as deliberately avoiding “traceability” so as to perpetuate fraud, practice bad science, achieve political goals, cover their a$$es, etc. In other words, maybe they define the terms on their own after-the-fact and apply their definition retroactively in order to prove their conclusions. Confirmation bias? That’s hard to say, exactly. Maybe they didn’t have conclusions already solidified before they began their investigation. Or then again, maybe they did. The problem is that when you don’t specify the conditions of the examination ahead of time, there’s no way to judge the validity of the assessment. That problem is exacerbated when such motivated interest in “traceability” is potentially indistinguishably mixed with legitimate interest in furthering accountability and precisely quantifying uncertainty.
So the point is to move forward with the knowledge of previous disagreement rather than waste more time slinging Sloppy Joe and Jello Mold around the lunchroom. The difficulty of reaching an agreed-upon procedure for going forward seems to me to be rooted in partisanship, resentments, and juvenile bickering from both camps.
(Don’t know how to do italics, so I’ll have to use quotation marks.)
” What you described above could be viewed as a requirement that any communication whatsoever that was even remotely a part of the process of writing any aspect of any of the reports be completely recorded, collected, supplied, and extensively cross-tabbed with anything that got into print.”
This is not an accurate description of what would be required. Documenting and record keeping are common to almost every field or industry I’ve ever been associated with. It is common practice to keep files. No “cross referencing”, extensive or otherwise, is mandated. I’d note the airline industry has managed to operate with very stringent record keeping and traceability requirements. And with regard to emails, one doesn’t even have to try that hard to keep track of them. IT guys have all sorts of ways to go back and find them.
“Would that be appropriate? Would it have, to some degree, a stifling impact that would result in a big mess of CYA? Would it mean that no one would be willing to speculate about potentially very threatening scenarios that are highly plausible and scientifically supported unless they can reach an inherently unreasonable bar of absolute proof?”
I believe you are again overstating the case. How do transparency and traceability represent a stifling impact? This is similar to the claim that FOIA requests to UVA have a “chilling effect” on academic freedom, or some of the recent claims that the seizure of Roger “Tallbloke’s” computers has the same “chilling effect” on freedom of speech. The argument for transparency and traceability has a very strong foundation. To reject it requires something a bit better than just claiming a chilling or stifling effect.
As for speculation, people are free to speculate all they want. Nothing about transparency and traceability interferes with speculation. What it does do is make it easier to identify what is speculation and what is statement of fact.
(I’m headed down to Portland for the weekend, so unfortunately won’t be able to continue this discussion. Sorry.)
My point was that clearly described (and agreed-upon) specifications should be used (certainly before judgement is passed) – not to say that the hypotheticals I described were an inevitability. I took you as saying that such specificity wasn’t necessary and took issue with that perspective.
Kudos for your referencing the situation with Tallbloke as well as the FOIA’s to show balance in how these issues need to be addressed.
Nick – purloining the emails is unethical. Once they are in the public domain, they are public information. And should be used. They are part of the body of knowledge now. You cannot re-cork a bottle and call it uncorked.
Your outrage, Nick, reeks of situational ethics. When you reveal your “disappointment” with Mann, Jones, Santer, and Schmidt, call me.
Have you considered that if the IPCC people had put out scientifically valid statements supported (as the journals require) by data and other SI, that nobody would have bothered to call them on the poor quality of their work?
Then, when challenged, they respond with ‘unhelpful’ statements that do little to improve the veracity of the claims made.
Under those circumstances, I would have thought it entirely appropriate that Judith uses information available in the public domain that demonstrates that her critiques of the IPCC are valid and that their responses are not adequate. Especially considering that those emails were produced by employees financed (largely) by taxpayers, using computers owned by their employers financed (largely) by taxpayers, and which would have been in the public domain had FOI procedures been followed in accordance with the intent of the relevant legislation. Further, it is highly likely that each of the parties whose emails were revealed was ON NOTICE that their email communications were funded by taxpayers, and thus potentially exposed to FOI.
Your comment reminds me of the “leaked” Pentagon Papers during the Nixon Administration that disclosed all types of subversive activities by the CIA, etc. These papers were used to help clean up such activities. Or should society not have “stooped” (as you put it) and instead ignored these illegally released papers? Whereas I do not condone the illegal acquisition of documents, both the Pentagon Papers and the leaked climate emails provided insights into procedures of which we would otherwise be ignorant. In my mind, national security and environmental policy are far too important to ignore any evidence that comes one’s way.
Peter Webster, Frank Snepp said at some point in his book ‘Decent Interval’, that when he opened the CIA file on Vietnam Mission, it was empty.
When my time came to leave that night, with the last CIA contingent in the Embassy, we had to push Vietnamese out of the way in the halls to get to the chopper on the roof. I couldn’t look into their eyes.
Retreat is the most difficult of all military operations. But as a matter of honor you do not leave friends on the battlefield. In the evacuation of Saigon over half of the Vietnamese who finally got out escaped on their own with no help from us until they were far at sea.
The last CIA message from the Embassy declared: Let’s hope we do not repeat history. This is Saigon station signing off.
Victory NOW Iraq
This is not the Pentagon speaking. It is not even the IPCC. It is an exchange of comments in a scientific journal between colleagues supposedly on an equal footing.
People work on stuff, toss ideas around, and when they think they have got it right, they publish. It is those published views that should form the basis of scientific discussion. It is not only unfair for one side of a scientific discussion to jump on the part-formed, privately discussed ideas of the other side; it is misleading. You only have a small part of the thought process that led to the final version.
If my work was being challenged the way Climategate is challenging the team’s work, and I had the context with which to vindicate myself, I would do things very, very differently from the way the team is acting.
You are defending the indefensible, and I think you know it.
1. The mails were not stolen. They were copied.
2. This is EXACTLY the kind of information that would be presented in a traceability study.
This is what happened to the chap who copied Sarah Palin’s emails, after guessing her password.
The discussion in this post is not about a traceability study. It is about an exchange of comments between colleagues in a scientific journal.
You are assuming that the individual who copied the mail did not have authorization to access the computer. The issue is EXACTLY related to traceability. All communications related to decisions taken in AR5 would be produced in a traceability study. I take it you have no relevant experience with these processes. Nothing is private. Not even mail sent from your private account.
Nick, the following shows what this is about:
“From your comments, I assembled a word file with our suggestions on the 5AR run proposal, but I am not sure I caught it all completely. Also, I had a chat with Jerry yesterday, and he said getting suggestions of what should be stored will be useful at this point. My plan is to communicate this with Jerry when we are done with it, and then propose it at the WGCM meeting.”
To provide traceability on AR5, what you would need to produce is the documentation described above: what suggestions were made, who made them, were they all captured, which were accepted, which were not, what was the rationale, would reasonable people make the same decisions, etc. It’s not at all about the science or a science journal.
Also, you see why we have AVO (“Avoid Verbal Orders”) forms: he said, she said, who said what? In a traceable process where decisions are made, verbal orders are to be avoided. In fact, if you make a verbal request in a traceable process you are more often than not going to receive an AVO form.
Steven, these are two groups of scientists exchanging comments in a scientific journal. If traceability is so important, what do we have from Curry/Webster? Or any of the other thousands of such exchanges in journals.
You are assuming that the individual who copied the mail did not have authorization to access the computer.
You have evidence they did? Of course if this was the case it means they may not be guilty of the offence of illegally accessing a computer system, but it doesn’t mean it was legal for them to copy and release the emails.
Leaving aside the question of hacked vs leaked – there is still the little matter of these emails not being “private”. Any email I send out from my work computer is not private. Any email sent from a work computer belonging to an institution or agency funded by tax dollars is most certainly not private. The authors of the released emails were working on projects funded by public dollars and being paid by those same dollars while they were sending the emails. At what point is it you see “private” coming into the picture?
Employers respect privacy. Even FOI etc have exemptions. But these emails were not released by the employers, nor by an FOI process.
“Employers respect privacy” and the tooth fairy is real :0
The truth is that employers respect privacy unless they have reason to suspect employees. It is a CYA world and security is a booming business.
Now that the US department of justice has stepped in, it might make it a little tougher for UVA to justify the legal expense.
NO, they don’t. Employees don’t have email privacy. Are you reading what folks are saying here?
Yes, they do. It is granted to them by their employers, especially universities, who seek to observe it strictly. Employers actually need to do that, so employees can do their jobs properly.
But again, this was not a case of employers using their ownership rights. It was a hack.
Mr. Stokes is purposely missing the point and nothing anyone can say will change his mind. Every FOIA law designates which records are presumed to be public records, and which records are presumed to be private or confidential. The fact that no one requests a particular “public” record be disclosed does not change the character of the “public” record to private or confidential.
Stokes has been told multiple times what the reality is and is currently engaged in making stuff up as suits him, arguing “what things should be like” rather than “what is.” For purposes of discussion here, further reply to Stokes on this matter is pointless.
Now you are rewriting history. (fibbing)
The UEA unlawfully stonewalled a legit FOIA request.
The first file, at least, was comprised of what was collected but not delivered.
Your persistence in fibbing about climategate, and your non-responsiveness to the historic use of leaks by every media and government group in the world makes you look rather shabby.
And we all know that if the shoe was on the other foot, and climategate showed brave heroic scientists working hard and honestly, you would not hesitate at all to quote from those letters.
But you know it does not show this, and you know you are fibbing and we all know you know it.
“Stokes has been told multiple times what the reality is”
You have been told, not by me, but by numerous inquiries, what the reality is. That has had no effect.
“The UEA unlawfully stonewalled a legit FOIA request.”
It did not. Under FOI, the responsibility for the initial decision rests with the institution. They lawfully made their decision – you didn’t like it. The remedy for that is an appeal, not a break-in.
Why were the team conspiring to hide and delete emails, if they believed their work related communications to be private? Is it not true that CRU only escaped prosecution for violating FOIA, on a time bar technicality? Are you even aware of what we are talking about here, Nicky?
I suppose you’re also disappointed when someone stealing a dealer’s drugs leads to the dealer’s arrest? There is no legitimate claim to privilege for those committing the indiscretions.
Were those posts sufficiently lofty no stooping would have been required. She could have plucked them from the clouds. As it is they were found in the gutter. You have spinach in your tooth.
Totally agree. I wonder if Judith reads the emails of all her colleagues at Georgia Tech? The point is that you discuss half-formed ideas in emails to colleagues when preparing a joint paper.
Tom, the emails I cited are discussions among a large number of people in IPCC leadership positions about the AR5, establishing guidelines for the CMIP5 simulations in response to perceived problems with the CMIP3 simulations (the same issues raised in the uncertainty monster paper). It is the perceived problems with the CMIP3 simulations used in the AR4 that are of interest to me in this context. The context for the emails I cited is not half-formed ideas in preparing a joint paper.
Nick: IMO, a more appropriate ethical standard would be whether or not these particular emails could have been obtained via an FOI request if Judith had chosen to make one. Evidence obtained by the police during an improper search, for example, can be used during a trial if the prosecutor can show that the evidence would have been uncovered during a normal investigation. The release of these emails was certainly never authorized by CRU, the owner of these emails, making this evidence analogous to evidence obtained improperly by the police. Judith is researching/investigating how the IPCC has assessed uncertainty when making attribution statements and these emails are clearly relevant to this subject. The authors are usually working at public institutions supported by publicly-funded grants that were made because their research and work for the IPCC could be critical to the future of the planet. Although Judith may never have chosen to request this information from her fellow scientists via FOI, I think these emails could have been obtained legally.
So, when are we ethically “stooping” by using our colleagues’ stolen emails? I think one is on safe ground bringing up emails where Phil Jones mentions hiding the decline in a graph being prepared for the WMO, or deleting emails apparently subject to a current FOIA request – those emails certainly should be discoverable by FOIA. I think one would clearly be stooping by citing the email where one pro-AGW scientist discusses what he would like to do to a prominent skeptical scientist in a dark alley. Since the authors of the improperly-obtained Climategate emails were not afforded the much-abused protections of the FOIA process, individual scientists have the burden of deciding for themselves which emails can ethically be used.
Many others are disappointed that this publicly funded work was hidden from the public in the first place – the motivation being to mislead the public.
And further disappointed that this secrecy for the purposes of deception is defended by others, like Nick Stokes.
Defended exclamatorily by Bart Verheggen.
“(thank you, hacker/whistleblower).”
Just as the courageous Deep Throat of the Nixon era was eventually revealed, someday this courageous Climategate Hacker/Whistleblower will be revealed, hopefully, so that Truth Seekers around the Globe can give their salute to this fine Comrade(s).
Deep Throat in PJ’s?
Clinton said: High Noon, was his favorite movie…
John Wayne said that the movie ‘High Noon’ was Anti-American
Let’s all ask Hollywood what the truth is again, since they have no agenda. Nixon will be erased in a hundred years anyway.
Where is the beef?
We will all be dead, except for Kissinger it seems.
What Bart said.
Thank God that not all climate scientists are like you.
You boys will get her for that. Right, Bart?
This is what happens when you persecute people, Bart. They fight back.
Great, someone else who can’t tell the difference between Steve McIntyre and Rosa Parks.
little andy adams,
I didn’t mention McIntyre, or Rosa Parks. Why bring them up? If you are trying to make some stupid point about degrees of persecution, you should have gone full-tilt left-wing looney and dragged the 6 million Jews who were killed in the Holocaust into the discussion. It’s in the climate scare playbook, andy.
I know you didn’t mention McIntyre in particular, your comment just put me in mind of a similar exchange I had elsewhere where his name was mentioned.
So who precisely do you think is being persecuted? Really, if you skeptics are going to persist with this victim mentality don’t complain if you get mocked from time to time.
The trouble with collecting these kind of ironies, aa, is that they don’t take a shine very well, and I’ve containers full of them in the railyards out back.
Recall how the early apologists for ‘the team’ said it was just boys being boys?
It would appear that at least some of your team are doing at least that, but in a sexist sort of way – can’t let the girl be too uppity, and all that.
Yes, you may be excused. We’ve had enough.
Well said, Don Aitkin. Hegerl & co are serially economical with the truth here and everywhere at ALL times.
Judith: “Models are too imprecise to attribute warming”.
Gabrielle: “Attribution is independent of this uncertainty. You misunderstood how forcings are computed (thereby making a false accusation of circular reasoning), you misunderstood the uncertainty figure, and you misunderstood how attribution is done. Oh, and it’s not our fault if you don’t understand that “most” means “>50%”.”
Judith: “See, I was right, models are too imprecise to attribute warming. BTW, look at this totally unrelated email that has the words “models” and “attribution” in it!”
Precision is not the issue, tuning is the issue. If you do not understand the relevance of the emails then you do not understand the issue.
I often wonder how many people recognize the difference between precision and accuracy.
To use an analogy my nephew (two tours so far in Afghanistan) would concur with: we have the technology to deliver ordnance to a very specific place, with CEPs measured in a few feet. That is precision. But if the Forward Observer or Fire Control Director provides incorrect data, that ordnance will not be on target. That is called accuracy.
You can have the most precise model in the world and its utility may matter little if it isn’t accurate.
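The precision/accuracy distinction above can be sketched numerically. This is a toy illustration with entirely made-up measurements (nothing here comes from any climate model): one set is tightly clustered but biased off-target (precise, inaccurate), the other is centred on the truth but widely scattered (accurate, imprecise).

```python
import statistics

true_value = 100.0  # the "target" in the ordnance analogy

# Precise but inaccurate: tightly clustered, yet biased off-target
# (good delivery fed bad targeting data).
precise_biased = [109.8, 110.1, 109.9, 110.2, 110.0]

# Accurate but imprecise: centred on the truth, widely scattered.
accurate_noisy = [92.0, 108.0, 95.0, 105.0, 100.0]

def bias(xs):
    # Accuracy: how far the sample mean sits from the truth.
    return statistics.mean(xs) - true_value

def spread(xs):
    # Precision: scatter about the sample's own mean.
    return statistics.stdev(xs)

assert spread(precise_biased) < spread(accurate_noisy)        # more precise
assert abs(bias(accurate_noisy)) < abs(bias(precise_biased))  # more accurate
```

The biased set has a standard deviation well under 1 but sits about 10 units off the target; the noisy set averages exactly on target but scatters by several units. A model can score well on one measure and badly on the other.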
Tallbloke’s computers are already confiscated (see latest WUWT) .
It is disappointing that you have to resort to using your colleagues’ leaked emails to bolster your case.
Pretty similar to what Nick said, has a different ring to it though.
You think that Judith has to use her colleagues’ emails to bolster her case? I would have thought that you’d think that she could make her arguments stand up through direct debate about the topic at hand.
A clear case of more AGW string theory from, of all places, Joshua.
Joshua, No I don’t think she HAS to use the emails to bolster her case. It is getting to the point that she has to use the email to counter the BS attempts to trivialize her case.
There is a difference. The emails are just verification of all the BS that should be obvious, and has been obvious, to many people not in the system. Many of the more overly confident climate scientists are blinded by their own enlightenment. With a few more energy models and some realistic non-linear analysis it will become more obvious. Keep the bacon.
If Judith claimed publicly that she didn’t believe in ID, and you had in your possession an email sent from her university computer in which she claimed that she did believe in ID, what would you think?
You know her public statement is at odds with the statement she made in email. How do you counter her public claim without referring to the email.
In this case there are two accounts of how things were done. They appear to be at odds.
I wouldn’t release her private e-mail to me …
Has doghoser had a scruples transplant? Must be an impostor. Don’t send him any emails, Steven. He’s wearing a wire.
The believers are basically arguing that evidence of the scam should not be reviewed because the believers do not like how the evidence gathering has gone.
Their position is untenable in an ethical world.
Nick Stokes, your mock “decency” play, is to be frank, sickening. Defend the science. The information that is out there is there, so deal with it and stop holding your hanky over your mouth.
The other side of this is that Tallbloke has had his PCs taken away by Norfolk Police today. I assume, Nick Stokes, that you will take your hanky away from your mouth long enough to state your outrage at such an affront to civil liberty.
Further to my last: Hegerl et al., like Kaufmann et al. (PNAS 2011), simply have no grasp of numerical magnitudes. The latter solemnly claimed that SO2 aerosols accounted for what they called the “hiatus” in global temperature rises since around 2000, oblivious to the fact that their own figure for that was 65 million tonnes p.a. as against emissions of 35 GtCO2. And note the huge decrease in SO2 emissions by US power stations:
While coal’s CO2 emissions increased by nearly 200 billion tons over that period, SO2 emissions by the coal-fired power industry fell from 14.28 million tons to 5.5 million tons, and similarly for NOx (7.1 million tons to 2.4).
But then for Hegerl et al, facts count for nothing, as their models show SO2 and other aerosols exceed CO2 by zillions.
Other than the fact that SO2 has an amplifying effect on cloud nucleation, so a one-to-one comparison with CO2 emissions is not necessarily useful, care to mention what the increase in SO2 emissions from China, India and SE Asia was? And oh yes, why have you not considered emissions from other sources besides coal-fired plants?
Your facts have negative value
Eli, Sulphate emissions have been rising sharply in this region since the 1970s. Is it reasonable to compare this with the regional temperature figures? There has been no obvious or uniform decrease.
Definitely shows that scientists still have no clue how this planet works and trying to use temperatures was a VERY BAD idea.
A fascinating exchange. I think it goes a long way toward confirming the dual conclusions that the IPCC attribution claims are both unsupportable and untraceable. These really are two very different issues, which need to be untangled; yet it is hard to question the specifics of a conclusion when you can’t see how it was reached.
I take it the convention is to use climate variability to refer to natural change and climate change to refer to AGW, or something like that. But I think the IPCC defines climate change as including natural change. Bit of a confusion here, and rather serious.
No confusion for the IPCC who are willing to forgo any correctness of the science to generate more control and funding.
I doubt the confusion is deliberate.
You doubt the confusion is deliberate? As in Mann et al. “undeliberately” splicing the Tiljander series into the Hockey Stick upside down? Or the CG emails telling us explicitly that the Team “undeliberately” went to great lengths to corrupt the peer review system?
There is nothing “undeliberate” about anything that comes from the IPCC groups.
I don’t share your cynicism. These people really believe what they say. That is the big problem.
I tend to agree with you.
Which is one reason I think talk about being driven by the desire to maintain funding is a distraction from the central arguments.
The decency argument aside, I’m confused on the detail. The email concerns AR5 while Curry and Webster (2011) appear to be talking about AR4. How does one support the other?
The email shows that the modelers DO tune their models to the historical data, the very circular reasoning that Judith points out.
The email relates to AR5 and suggests such tuning will be a change leading to a loss of previous ability. It also seems clear that Hegerl does not support this change and would prefer it not happen.
Curry and Webster 2011 relates to AR4.
So again, how does an email about a change in AR5 support an argument about AR4? From the email we don’t even know what approach was eventually picked for AR5, only that the issue was debated!
Curry thinks she can infer what was done with AR4 based on the discussions of what not to do in AR5. It’s a shaky inference.
Ever discuss a project with your coworkers, only to find that everything you hashed out on your last project had to be discussed all over again, and old discarded ideas were presented as fresh suggestions?
Typical of Nick Stokes, to come up water carrying with mealy mouthed excuses and pseudo moral indignation. Shameless, as usual.
Thank you for posting this. Read in the context of the on going misuse of the legal system to intimidate bloggers, it is particularly damning.
How much longer will the conflicted arguments that consist of claiming the science is right but the boys are just behaving badly hold up?
I found this (from Karl Taylor’s email) troubling:
“Likewise, suppose two candidate models were identical in most respects, but one could accurately simulate the climate of the 20th century (when all forcings were included), whereas the second had a very low global sensitivity and produced too little warming. The developer would again want to choose the model that reproduced the observed trends. In fact this model would probably produce a better estimate when forced by future emissions scenarios too (because, presumably, its sensitivity is closer to the truth).”
Maybe when Taylor says the models are “identical in most respects” he implicitly means that they have the same number of “tuned” (estimated, calibrated) parameters. But if not, this is a troubling statement. Does anyone here know how the climate model folks sort out the in- and out-of-sample differences? My own experience with models based on finite samples is that even when you are using AIC or BIC penalties, these don’t really give good guidance relative to an out-of-sample prediction criterion. But this particular quotation isn’t very inspiring.
Here are my non-expert comments:
Hegerl et al. start off with a claim that C+W’s “discussion of uncertainty in the IPCC assessment reports regarding attribution is inaccurate”
They make a second statement that IPCC places “high priority on communicating uncertainty”, citing references that verbalize this claim (one, curiously, even by the late Stephen Schneider!)
To the specific points:
1. assumptions on aerosol forcing: “circular reasoning” or not?
I re-read the Hegerl et al. rebuttal several times; to me it sounds like double-talk rationalization. The IPCC conclusion that “it is very likely that greenhouse gases caused more global warming than changes in solar irradiance” is a silly statement in itself, since it assumes (by inference) that “solar irradiance” is the only impact of the sun (an assumption that is based on an “argument from ignorance” and circular logic). Hegerl et al. refer to the “level of rigour with which physically plausible alternatives of the recent climate change are explored”, but limiting the solar impact to that from direct solar irradiance alone does not display such rigour IMO.
2. lack of traceability: I read Hegerl et al. as more double talk with no hard data, saying in effect, “gosh, we did our best and listed it all in tables”. Sorry, no cookie. Traceability means traceability.
3. what does “most” mean? Here Hegerl et al. agree that “greater than 50% with an associated uncertainty range” would have been better
4. do models underestimate the amplitude of observed 40-70 year cycles? Hegerl et al. provide more double-talk but do not specifically address the uncertainty of a ~50 year estimate (1950 to today) in light of the observed amplitude of these cycles.
It appears to me that Hegerl et al. have “circled the wagons” with a quick salvo of double-talk without responding with hard facts to the specific points raised by C+W.
But this is just my opinion. Maybe Fred Moolten has another.
They do not understand the concept of traceability. They think that the IPCC authors “best efforts” and “rigor” constitutes adequate work, but that is not traceable.
I disagree, Craig. The concept of traceability is very well understood.
This is precisely why Stokes and others are making so much hullabaloo about CG1/2 and “ethics” … distraction. Remember that the name of the PR game is capturing the hearts and minds of the generally uninformed public, so distracting from the contents of CG1/2 with “ethics” is a useful ploy
Max, I reckon you’ve got a good handle on the Hegerl response, and Craig, that’s it in a nutshell.
1 is assertion that they were rigorous, not demonstration of C+W error
2 is deflection, not a demonstration of any error by C+W
3 is acknowledgement that C+W are correct about communication of that issue, not a demonstration of error
4 is like 2
What did these giant minds of IPCC climate science pitch this response as? They pitched it as addressing 4 errors by C+W. They didn’t manage 1.
Judith, I believe you have made the mistake of thinking that the Hegerl et al. criticisms are scientific. They are not; they are political. What you must realize is that, following Climategate, the reports of WG2 and WG3 have been thoroughly trashed as being mere opinion pieces, with no science to support them. But to a large extent the WG1 reports have survived relatively intact; this despite the fact that in vital parts, e.g. climate sensitivity, they are sheer scientific garbage. But in the spirit of Ben Franklin’s “We must all hang together, or most assuredly we shall all hang separately”, any attack on the TAR or AR4 of WG1 must be repulsed.
However, this is not about the TAR or AR4; it is all about the AR5 of WG1. As long as we have the same corrupt group of people, the “Team”, attempting to write the AR5, they will have enormous difficulties. We deniers are waiting with bated breath, salivating, and with knives and pencils sharpened.
The problem is how does the Team exclude documents that do not support the Cause? One way is to prevent awkward documents from ever being published in the peer reviewed literature. Another is to include such documents, but have a “refute” document all ready to show why they are incorrect.
What is happening to you is the same thing that happened to Spencer (Dessler) and Ludecke (Trenberth), and many others. You are victims of the “refute” game. Incidentally, congratulations on using the emails from Climategate 2. That is precisely how they should be used.
I have to wonder what part of the IPCC Cross-working group plan for AR5 to address uncertainties you did not read or understand.
This use of emails borders on the personally pathetic: they do not say what you want them to say, Judith.
On the other hand, the content of these emails in context actually suggests/anticipates discussions of pluralistic modeling approaches that use the divergence, rather than convergence, of model assumptions and results to address uncertainty more satisfactorily, now as well as in future forecasts of the climate system. The size of the dispersion in results informs the measurement of uncertainty. Along with an emphasis on falsification and modular structure in model construction, this is the most current modeling and, together with methodological pluralism, forms the approach from the climate science community for AR5.
Since the reality of this interpretation of these emails is demonstrated in the science, including the scientific contributions of those involved, I suggest you may instead wish to consider how to be more productive, e.g. fewer cocktails and more time spent integrating current knowledge into your own understanding, so that you can constructively facilitate a public blog and really contribute at least in some small way (since I believe that is what you want to do).
Pretty withering. Or is that petty weathering?
Dunno. Didn’t bother reading to the end. Lots of convoluted sentence structure and long words, but no actual point.
Therefore definitely an academic and probably a climatologist.
If what we learnt from the CG1 and CG2 emails had applied to politics, the politician[s] involved would have been crucified [ref Pentagon papers]. If what we learnt from those emails had applied to a private sector corporation, the individuals involved would be doing serious time by now [ref Enron].
But since the emails only expose the workings of the core of the IPCC machine, all regular norms, expectations and rules no longer apply.
As for the fewer cocktails comment, I suggest you go find yourself a mirror and take a hard look.
Each time you try to do the opposite, you just dig it deeper and deeper. The science is slipping away from your side. Just admit it.
You also appear not to understand the concepts of traceability and chain of possession.
Maybe things have changed, but when I was in grad school my understanding of published papers and the review process was that one of the primary purposes of the paper was to thoroughly document the procedures used and how the data was then analyzed, so that someone else could conduct the same experiments or incorporate the data into their own analysis. Peer review was to ensure that this was done without any obvious errors or mistakes. The conclusions of a paper were an afterthought to the reviewers.
I don’t get that impression when following discussions of climate science. I suspect Dr Curry is trying to return the field to the level of professionalism normally associated with scientific study.
With the discussions here regarding Nick’s comments over “privacy” of e-mails perhaps a look at the attached will make him think again ?
Climate models operate on “forcing” and are completely incapable of relating increased CO2 to global temperature without a CO2 forcing parameter and a climate sensitivity factor, both of which have zero certainty as to their validity. There is no proper derivation for either of these parameters, which are based entirely on the false assumption that increased atmospheric CO2 concentration will enhance the greenhouse effect by limiting outgoing longwave radiation, when 31 years of satellite measurements show no detectable decrease in OLR, completely disproving this “greenhouse gas theory” conjecture.
Until these two parameters are properly validated to scientific standard any comments about uncertainty mean nothing.
Norm, you write “There is no proper derivation for either of these parameters”
I have been trying to bring people’s attention to this for some time. What you have written, in its entirety, is precisely what I have been trying to say, and to bring people’s attention to, without any success. I wonder why it is that none of the proponents of CAGW seem to want to discuss this issue.
The satellite measurements actually back up the GHG theory. Odd how that works out. Here is just one paper of many which confirms the results:
Norm, I hope you don’t mind, but I have copied and kept your comments. I am sure the subject of climate sensitivity will come up again, and I will then comment about how wrong the proponents of CAGW are to pretend the numbers are the equivalent of being written on tablets of stone. Then when someone like Fred Moolten is patronizing to me, and tries to paint me as some sort of outlier who does not understand the science, I will quote you to show that I am not alone.
if you object to this, please let me know.
Miraculously model #4 in Hansen et al 1981 projects 2.78°C for a doubling of CO2 from 300 to 600 ppmv.
A CO2 forcing parameter of 5.35 ln(2) = 3.71 W/m^2
A climate sensitivity of 0.75°C per W/m^2 times 3.71 = 2.78°C, so apparently Hansen knew about the Myhre (1998) CO2 forcing back in 1981!!
The CO2 forcing parameter is supposedly based on a 100 ppmv increase in CO2 from 280 to 380 ppmv that caused a 0.6°C increase in global temperature over the hundred years from 1880 to 1980. (A short digression: the fact that the natural warming since the Little Ice Age occurred at 0.5°C/century demonstrates that only 0.1°C should be attributable to CO2 instead of the 0.6°C that was used in the derivation of the CO2 forcing parameter.)
Staying with the 0.6°C the CO2 forcing parameter should be
5.35ln(380/280) = 1.6338W/m^2.
If this is responsible for 0.6°C of warming then the climate sensitivity factor should be 0.6°C/1.6338W/m^2 = 0.3672°C/W/m^2.
Therefore a doubling of CO2 using 5.35ln(2) = 3.71W/m^2 should produce 3.71 x 0.3672 = 1.362°C and not the 2.78°C from Model # 4.
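The arithmetic in the comment above is easy to check in a few lines of Python. Note that the physical premises (the 5.35 ln(C/C0) forcing form, and attributing all 0.6°C of 1880-1980 warming to CO2) are the commenter’s assumptions, not established values:

```python
import math

# Commenter's assumptions, not consensus values:
# CO2 forcing F = 5.35 * ln(C/C0) W/m^2, and all 0.6 C of the
# 1880-1980 warming attributed to the 280 -> 380 ppmv CO2 rise.
forcing_2x = 5.35 * math.log(2)             # ~3.71 W/m^2 for a doubling
forcing_100ppm = 5.35 * math.log(380 / 280) # ~1.6338 W/m^2 for 280->380

sensitivity = 0.6 / forcing_100ppm          # ~0.3672 C per W/m^2
warming_2x = sensitivity * forcing_2x       # ~1.36 C per doubling

print(round(forcing_2x, 2), round(forcing_100ppm, 4),
      round(sensitivity, 4), round(warming_2x, 2))
# prints: 3.71 1.6338 0.3672 1.36
```

So the numbers in the comment do follow from its premises; whether the premises hold is a separate question.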
The current climate models converge on forcing around 3.5W/m^2 so in order to project 2°C to 5°C of warming from a doubling of CO2 the climate sensitivity must be somewhere between 0.57°C/W/m^2 and 1.42°C/W/m^2.
The Trenberth 1997 energy balance shows OLR at 235 W/m^2 and the 2008 revision based on 2004 data shows OLR at 238.5 W/m^2.
The global temperature increased by approximately 0.1°C with this 3.5 W/m^2 increase in OLR.
Between 1997 and 2004 CO2 increased from 363.71ppmv to 377.49ppmv.
since according to Trenberth OLR increased by 3.5W/m^2 there is now a negative CO2 forcing parameter of -94.12ln(377.49/363.71) = -3.5W/m^2
Using Trenberth’s numbers, since this -3.5W/m^2 of downward forcing produced a temperature increase of 0.1°C the climate sensitivity is now negative with each W/m^2 of forcing from CO2 producing a reduction in global temperature of 0.1°C/-3.5W/m^2 = -0.0286°C/W/m^2 as a climate sensitivity factor.
Between Hansen 1988, which predicted between 2°C and 5°C for a doubling of CO2, and Trenberth 2008, which shows a 3.5 W/m^2 change in OLR from 1997 to 2004, we have a range of climate sensitivity from 1.362°C/W/m^2 to -0.0286°C/W/m^2 and a range of CO2 forcing parameter constants from 5.35 to -94.12, and in my book this is not exactly what one would call scientifically verifiable parameters.
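The negative numbers above can also be reproduced, again strictly under the comment’s own premises (attributing the entire stated OLR change and 0.1°C warming to CO2 alone, which is where the strange coefficients come from):

```python
import math

# Commenter's premises: OLR rose 3.5 W/m^2 (235 -> 238.5) between 1997
# and 2004, CO2 rose 363.71 -> 377.49 ppmv, and temperature rose ~0.1 C,
# with everything attributed to CO2. Solving F = k * ln(C/C0) for k:
implied_coeff = -3.5 / math.log(377.49 / 363.71)  # ~ -94.1 (vs. +5.35)
implied_sensitivity = 0.1 / -3.5                  # ~ -0.0286 C per W/m^2

print(round(implied_coeff, 2), round(implied_sensitivity, 4))
# prints: -94.12 -0.0286
```

The arithmetic checks out, which mainly shows how sensitive these back-of-envelope coefficients are to the attribution assumption.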
It is no secret that Mrs. Hegerl et al twist, bend and fiddle with their models….we know this already a long time…..they want to get more certainty in and some percentages of uncertainty out……
I told her various times: this is all a waste of time as long as one substantial input variable (the Earth’s orbit libration) has been completely excluded….
Models have to be complete and all major variables must be in them…because if one major variable is missing, then predictions/forecasts must be simply wrong….plain to understand…
This has been additionally proven by the flat temp plateau since AR3 (2001); the temp plateau since 2001 is missing in their models….no matter how they try to twist…..
In the unrelenting focus on averages, the clear message from time rate of equator-pole spatial gradient change has fallen off the mainstream radar.
Climate models that cannot reproduce EOP (Earth Orientation Parameters) do not merit consideration.
The problem is multidisciplinary and help needs to be accepted from other fields.
A critical error in the CAGW model is the assumption that the accumulation of CO2 in the atmosphere is 100% attributable to anthropogenic emissions. I calculate less than 10% with a high degree of statistical significance. http://www.retiredresearcher.wordpress.com.
Slight error there Mr. Haynie. The 10% should be subtracted from 100% and you will be more on track.
Your blog makes no sense in that you can’t seem to explain 390 PPM yet you make projections based on this number increasing.
The basic problem is that you don’t understand the distinction between residence time and adjustment time.
You say that “The mean residence time of CO2 in the atmosphere as a gas is likely a matter of days.” That statement is so far-fetched that I question whether you have any intuitive feel for physics or chemistry. So we are supposed to believe that an average airborne CO2 molecule that will quickly go aloft and get swept up into the atmosphere will condense within a matter of days? No way that this will happen, and the only way that I can rationalize this is that you must be confusing a CO2 molecule for a particle of dust. Hint: They aren’t the same thing.
You don’t seem to understand either the science or the statistics. My analysis shows that more than 90% of the rise in atmospheric concentration of CO2 is from natural sources and at least 75% of that is from inorganic sources.
CO2 doesn’t condense. It is absorbed by condensed moisture (clouds) and returned to the surface in rain. Rain in equilibrium with atmospheric CO2 has a ph of around 5. Those CO2 molecules only have to travel as far as the nearest cloud or any body of cold water or the biosphere.
You don’t seem to understand the statistics of dispersion and diffusion. There are vast regions of the planet that are without clouds for long spans of time. Even if you tried to incorporate the idea of condensing into clouds, the dispersion in uptake rates would extend the residence times well beyond a few days.
You apparently have this little model world created in your mind that will be immune to criticism. I doubt you will ever retract your assertion of a residence time of only a matter of days because if you do then your entire argument will likely fall apart.
The current thinking in textbooks on atmospheric physics and chemistry is that the CO2 residence time is on the order of years and the CO2 adjustment time is hundreds of years with no statistical moment defined (which is typical of a diffusional process).
Fred, your theories about CO2 are nonsense.
I suggest you refine your argument and submit it for publication (if accurate, of course, you’d win a Nobel for your revolutionary discovery.) The experience of submission and hopefully peer review could be a very educational one, regardless.
Because of the sheer volume of pseudoscientific nonsense in the world, you will find many people inclined to wait for a peer-reviewed publication before investing a lot of time on your — unique — perspective.
How very interesting, Fred. I had not heard of this hypothesis before.
Robert, journals do not usually publish hypotheses. They publish experimental results. This is one of the AGW dodges, just as the models exclude many skeptical hypotheses, when playing with hypotheses is what models are supposed to do in science.
Having done atmospheric research at EPA for over twenty years and having worked with plume dispersion models and published over 60 peer reviewed papers, including an award winning paper on the use of statistics in research, I think I have a pretty good understanding of the science and statistics. As to mean residence time, the matter of years in the environment depends on the average length of cycles between air and surfaces. I calculate it to be around ten years. The biosphere cycle is most likely controlling. Plants grow more with higher concentrations, producing more decaying matter that produce more emissions of 13C depleted CO2 around 10 years later. As to publishing, you are free to improve on my analysis and publish. I’ve been comfortably retired for over twenty years and no longer maintain membership in professional societies. I’m just using my knowledge and experience trying to get to the truth. Try replicating my analysis on any set of CO2 data and see what you get.
So why then did you say that it was a “matter of days”?
Science is a process of being precise with your arguments and the fact that you give numbers that contradict each other by a factor of 100 to 1000 (depending on what a “matter of days” means) does not bode well for your own argument.
OK, here is a thought experiment. Say the CO2 does get taken up by cloud droplets and then it rains on a patch of desert. The puddles of water will dry up in a matter of hours and the CO2 is back in the atmosphere. Thus the CO2 is still in the environment and not permanently sequestered.
It seems that you might want to define three time constants for CO2:
1. A fast residence time for assisted condensation/evaporation.
2. A slower residence time associated with the carbon cycle.
3. A diffusion-limited adjustment time associated with permanent sequestering.
For AGW, the only time constant that matters is the last one.
This violates a law of mass balance. If there is a growing biomass, then the amount of CO2 is close to the same in the biomass as in the atmosphere. Unless you are saying that the CO2 is oscillating between growth and die-off, yet I see no evidence of this.
Once you have the hypothesis, then all you do is devise a test of that hypothesis, and before long, you have an experimental result.
This is science 101. It’s isn’t designed to discriminate against crazy ideas or philosophy majors. It’s just a simple process that has proven to be effective.
Then you can have absolutely no excuse for not writing up your revolutionary thoughts about CO2 residence times and submitting it for peer review.
Please drop me a link of the pdf when you publish.
Robert, you skipped a couple of steps, that I as a working scientist am painfully aware of, but that you might not be. It is all about funding that skeptical experiment.
First you have to find an RFP that is looking to fund such things. If there is none then you are simply out of luck. All you can do is lobby the funders to try to get them to offer such, which may take many years, if it works at all. This is why very little skeptical research gets done.
Second, if there is such an RFP you have to submit a proposal, which still may not get funded, as most don’t. There is a great deal more science proposed than is ever done. This is important.
Right now the AGW folks control the funds, the RFPs, and the funding decisions. You only have to read the USGCRP reports to see this. These reports are much more militantly CAGW than anything the IPCC has ever produced.
That is a bunk assertion, meant to scare someone off the trail. If you truly have found something original through analysis, which one can do with very little overhead, then conventional funding becomes irrelevant. With open-access journals, you can publish just about anything and the peer-review will work its way out naturally.
As you can see, I am willing to give Haynie some peer-review right now, and the comments I make will last at least a few years, if not longer. If someone wants to argue my criticism, they are free to do so as well.
Given that you seem to consider yourself to be so much of an expert at science communication and argumentation, I am surprised that you don’t understand this here citizen-based science thing.
Bottom line is that Robert and I are offering criticism, and if some skeptics want to defend Haynie privately funded work, they are free to as well.
“. . . I as a working scientist . . .”
That’s just not true, David, and we’ve discussed it before, at length. Your degree is in philosophy. You have never done any science.
“It is all about funding that skeptical experiment.”
Nonsense. You can fund it yourself, or use a publicly available dataset, or seek funding from any of hundreds of sources, including the fossil fuel interest who have given generously to fund people like Willie Soon.
“Right now the AGW folks control the funds, the RFPs, and the funding decisions. You only have to read the USGCRP reports to see this. These reports are much more militantly CAGW than anything the IPCC has ever produced.”
You’ve boasted of writing “hundreds of articles” denouncing “the great green menace.” That is militancy (not science). The USGCRP reports reflect the facts of the physical world, as does the IPCC: that’s science (not militancy.)
Robert, once again (read my lips) cognitive science is science. http://www.osti.gov/innovation/research/diffusion/PopModeling.pdf
The mean residence time of CO2 as a gas in the atmosphere is likely a matter of days. The “measured” mean environmental residence time is an average of the cycle lengths between air and absorbing surfaces. Mass is conserved between them but the ratio is not constant. The natural seasonal variation in CO2 tells us that. Natural rates of emission and absorption are always changing, and not at the same rates. They have to be considered when making your input-output atmospheric mass balance model. The natural rates are an order of magnitude greater than anthropogenic emission rates.
OK, let’s try another thought experiment. Argon represents about 1% of the atmosphere’s composition and although an inert gas, it has a solubility in water similar to oxygen and nitrogen. By your reasoning, Argon should have a residence time in the atmosphere that is relatively short since the argon can get dissolved in cloud droplets. Yet everyone knows that argon is inert and won’t easily condense out of the atmosphere as it doesn’t have anything to react with. As the water evaporates that argon is dissolved in, it returns to the atmosphere. In that sense, the practical residence time is much closer to that of helium, another inert gas, than that of methane or sulfur dioxide, which will decompose readily. So the residence time of argon equals the adjustment time and it is likely measured in the thousands of years (or more).
Here is another thought experiment. If H2O vapor constitutes between 0 and 4% of the atmosphere’s concentration (depending on the local humidity), then given a solubility of CO2 of 3 g/kg of water at 3C, then the amount of CO2 captured in the atmosphere by water has to be no greater than 4% * 3/1000 or 0.012% which is 3 times less than the current atmospheric concentration of 390 PPM of CO2. And that is on the high humidity side, not reflective of the average. So, CO2 is essentially saturated in whatever water droplets exist — it could go back and forth between a dissolved gas and a free gas, but this is really meaningless in the larger scheme of things.
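The cloud-water capacity argument in that thought experiment can be checked numerically. All numbers here are the comment’s own round-figure assumptions (water vapour at most ~4% of air, CO2 solubility ~3 g per kg of water at ~3 C), not a real atmospheric budget:

```python
# Upper bound on how much CO2 the atmosphere's water could hold dissolved,
# using the comment's hypothetical round numbers.
water_fraction = 0.04               # upper-bound mass fraction of vapour
solubility = 3.0 / 1000.0           # kg CO2 dissolvable per kg water (~3 C)
dissolved_cap = water_fraction * solubility   # ~0.012% of air by mass

co2_in_air = 390e-6                 # ~390 ppm, treated loosely as a mass
                                    # fraction here, as in the comment

print(round(dissolved_cap * 100, 3))          # prints 0.012 (percent)
print(round(co2_in_air / dissolved_cap, 2))   # prints 3.25
```

So on these assumptions the dissolved-in-droplets reservoir is roughly a third of the airborne CO2, supporting the point that exchange with cloud water cannot by itself hold much of the total.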
CO2 is one of those almost inert molecules that will hang around until it gets photosynthesized (temporarily only) or it finds a deep sequestering site by diffusion.
Exactly! CO2 is not an inert gas and should not be treated as one. Dissolved in water, it is a weak acid and readily reacts with basic surfaces such as concrete, limestone, zinc, etc. besides its reaction as a gas with the great biosphere. You must consider changing natural rates when making an atmospheric mass balance that includes the rate of anthropogenic emissions. Think about it.
This is actually turning into a good discussion. I do sense that you have something to add when it comes down to understanding reaction kinetics.
What are the rates of dissociation for a very weak carbonic acid? Everyone understands that a can of Coke can dissolve all sorts of materials, but I think we are talking very slow kinetics at that pH level. That’s why I think the sequestering is diffusion limited in that the CO2 has to randomly walk to find dissociation sites before it gets permanently sequestered. This then gives the 1/sqrt(t) impulse response time constant that is characteristic of a diffusion-limited process
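Whatever one makes of the kinetics argument, the qualitative difference between an exponential decay and a diffusion-limited (~1/sqrt(t)) impulse response is easy to see numerically. This is an illustration only; tau here is an arbitrary time constant, not a fitted CO2 parameter:

```python
import math

tau = 10.0  # arbitrary illustrative time constant (years)
for t in (10.0, 100.0, 1000.0):
    exp_tail = math.exp(-t / tau)                # exponential decay
    diff_tail = 1.0 / math.sqrt(1.0 + t / tau)   # diffusion-like fat tail
    print(t, exp_tail, diff_tail)
# at long times the diffusion-like tail is many orders of magnitude
# larger than the exponential one
```

This is why a diffusion-limited sequestering process has no single well-defined decay time: a sizable fraction of the impulse persists far longer than any exponential fit would suggest.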
Fred said, “Exactly! CO2 is not an inert gas and should not be treated as one.” CO2 is definitely an interesting gas.
The residence time seems to be pretty variable. Snow cover changes the rate of biological release and absorption. Rainfall changes the rate of lower-atmospheric residence. Low temperatures change the thermal properties and reduce its chance of combining in atmospheric solution. At very low temperatures, near -90 C, it seems to do some strange things as well. It seems to be a fairly complex molecule.
Of course it is. That is why I derived a master continuity equation expression with a diffusion coefficient that varied to a maximum extent. Lo and behold, the same general shape for the impulse response holds. This has more to do with the mathematical characteristics of diffusion than anything else, in the sense that the shape is invariant to further disorder.
So this natural variability, dispersion, and disorder does not impact the general behavior, and if anything the math comes out cleaner if one assumes maximum entropy on the disorder.
Actually it is closer to 5% because most of the increase is from outgassing of CO2 from oceans warmed by the sun (not by CO2) and to some unknown extent by changes in geothermal heat transfer.
Most of the CO2 comes from the untold thousands of subsea volcanoes and fissures, and as the ocean slowly warms more and more of this is expelled as the ocean temperature sets new saturation points.
The Carbon 12 and Carbon 14 ratios match those of CO2 emissions from fossil fuels because both are sourced from within the Earth.
If you check the CO2 emissions from 1979 to 1983 you will see a three-year, year-on-year drop in CO2 emissions without any detectable change in the rate of increase in atmospheric CO2 concentration. If you were correct in stating that 90% of the increase is human-sourced, this would not be happening and there would be a discernible response to the three years of reduced emissions.
Mr Haynie was off by a factor of two; you were off by a factor of nine! You have no evidence for your conjecture.
Norm, there are a lot of factors that impact CO2 levels. http://onlinelibrary.wiley.com/doi/10.1111/j.1600-0889.1983.tb00023.x/pdf
The 1979 to 1983 period you mention had some pretty major dust storms. Under the right conditions, the dust can react with carbonic acid. This actually makes a pretty good CO2 scrubbing system if you have enough salt water and dust.
I welcome informed comments on my analysis by anyone. It gives me an opportunity to explain it. As to your thinking that rates are diffusion limited, the data does not agree with you. In a forest on a calm night, the concentration of CO2 can change significantly. Most of the dispersion of CO2 is by rapid turbulent diffusion and wind. It gets very fast in jet streams.
“Rapid turbulent diffusion” is a misnomer. Diffusion is a random walk process. Any upward turbulent convection will certainly cause the CO2 to get carried along with it.
The real random walk is with the kinetics of the individual CO2 molecules searching for sequestering sites in which they can precipitate out into a stable state.
If you can’t follow this, think about how all the fossil fuel hydrocarbons became sequestered in the first place. It took millions of years of sedimentation and tectonic activity for the decaying biota to get properly buried so that CO2 could no longer interact with the natural carbon cycle. That is the adjustment time we are talking about and it cannot be accelerated through natural means. Diffusion in fact is a glacially slow process at these distances. Weathering is also slow at these pH levels.
So now we have accelerated the reverse process by releasing a significant fraction of this buried carbon within a one hundred year time span. This excess CO2 really and truly has nowhere to go, because the natural carbon cycle has established a steady-state over this time, and has no room for anything extra. So the CO2 randomly walks around until it finds a sequestering site.
I guess I really don’t understand why you don’t refute this conventional explanation, and choose instead to work it from a magic beans perspective. The magic beans are the 100 PPM rise that your theory doesn’t address.
Yes, I am refuting the “conventional wisdom” of CAGW modelers using actual data and a statistical curve fitting technique that is much less “magic beans” than what you use to average out rapid processes assuming some kind of steady state that is being shifted by anthropogenic emissions. My analysis shows that the 100 ppm rise is at least 90% from natural sources.
Natural sources generate the CO2 that gets exchanged with anthropogenic CO2 during the mean residence cycle time. This is a basic statistical mechanical property of distinguishable particles and which definitely supports the AGW theory.
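The distinguishable-particles point above can be sketched with a toy two-box model. This is my own illustration with made-up fluxes, not anyone’s published numbers: a short residence time for individual molecules is compatible with a long adjustment time for the excess mass.

```python
# Toy two-box sketch of residence time vs. adjustment time (all numbers
# hypothetical, chosen only for illustration). A large gross flux swaps
# CO2 between atmosphere and surface every year, so individual
# "anthropogenic-tagged" molecules leave the air quickly, while the
# excess mass itself shrinks only via a much smaller net sink.
atm_total = 850.0    # GtC in the atmosphere (held fixed in this toy)
tagged = 100.0       # GtC of tagged (fossil-origin) molecules airborne
excess = 100.0       # GtC of excess CO2 above the pre-industrial level
gross_flux = 200.0   # GtC/yr exchanged in each direction
net_sink = 2.0       # GtC/yr of the excess permanently sequestered

for year in range(30):
    tagged -= gross_flux * tagged / atm_total  # fast dilution of the tag
    excess -= net_sink                         # slow removal of excess

print(round(tagged, 1), round(excess, 1))      # prints: 0.0 40.0
```

After 30 years essentially no tagged molecules remain airborne, yet 40% of the excess mass is still there, which is the residence-time versus adjustment-time distinction in miniature.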
Try again when you have a finding that is significant, as this seems to be a rehash of Salby’s debunked ideas.
You are caught up in circular reasoning that does not address how to do a proper mass balance on a complex input-output system with constantly changing rates. You must assume that natural changes in input and output rates have no effect on the accumulation in order to declare that the accumulation is 100% anthropogenic. My statistical technique uses the 13CO2 and CO2 data with anthropogenic emissions to identify a unique signature for the anthropogenic contribution and quantify its share of the accumulation. I’ve asked you to do the math and see if you can debunk what I have done. It’s not enough to say that CAGW modelers are right and I am wrong.
You have it backwards. I have done a complete accounting via a mathematical model of carbon emissions against a CO2 impulse response function. This is documented in book form here:
http://TheOilConundrum.com (click for the PDF)
and in recent updated bloggery here:
I ask that you carefully go through my math since I did the same for you. The difference you will find is that I only use conventional physics principles and standard math to arrive at a fit to an anthropogenic source of excess CO2 that perfectly aligns with the historical data.
It is up to you to debunk this much more parsimonious explanation that I have come up with. You are blown out of the water when you realize I only need a single parameter to generate a model agreement to the data. This single parameter is the adjustment time parameter for CO2 based on maximum entropy diffusion. As a side effect, I can also derive the pre-industrial baseline CO2 level.
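For reference, a single-parameter convolution model of the kind described above can be sketched in a few lines. The impulse-response shape and every number below are my own assumptions for illustration, not the actual fit from the linked book:

```python
import numpy as np

def impulse_response(t, tau):
    # Hypothetical fat-tailed (diffusive) response: the fraction of a CO2
    # pulse still airborne after t years; tau is the lone free parameter,
    # playing the role of an "adjustment time".
    return 1.0 / (1.0 + np.sqrt(t / tau))

def excess_co2(emissions, tau, dt=1.0):
    # Discrete convolution of the emissions series with the response:
    # each year's pulse decays according to how long ago it was emitted.
    n = len(emissions)
    r = impulse_response(np.arange(n) * dt, tau)
    return np.convolve(emissions, r)[:n] * dt

years = np.arange(1900, 2011)
emissions = 0.5 * np.exp(0.025 * (years - 1900))      # toy GtC/yr series
ppm_excess = 0.47 * excess_co2(emissions, tau=50.0)   # ~0.47 ppm per GtC
```

The point of the sketch is only the structure: one kernel parameter plus the emissions record determines the entire excess-CO2 curve, which can then be compared against the observed record.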
Both you and Salby are using questionable premises as to what the isotope ratios of the CO2 signify. I can only add that you are probably not alone on this point, in that this has been a red herring for a long time, and that has only been rectified when climate scientists started to realize what the greater implications of the adjustment time was. In other words, the residence time argument was a canard that allowed Salby and yourself to create unnecessary FUD.
It will be interesting to see if and when Salby ever publishes his work. You will see the claws come out then because anthropogenic CO2 is probably the most solid part of the AGW model. It is a noise free signal with lots of dynamic range so the aleatory and epistemic uncertainty levels don’t play a part.
I’ve looked at your website and your “standard math” and conclude that your “random walk” still depends on the basic assumptions that CAGW models use to come up with the 100% accumulation caused by anthropogenic emissions. It is revealing that you think that you can determine multiple functions with a single parameter you call an “adjustment factor”. Your site is more revealing in that it shows that your agenda is to control the use of fossil fuels and the 100% caused by anthropogenics is vital to that agenda. There isn’t any objectivity in that kind of science.
Do you have an agenda?
This comment might suggest that you do:
CAGW “climate sciences” was birthed by politics and continues to be sustained by politics. Al Gore jumped on the environmental movement feed trough long before he became vice president in charge of environmental activities. That’s about when NOAA, NASA, EPA, and other agencies started doing subjective research.
Have you met Oliver?
If I have an agenda, it is to get the politics out of research and return to objective science. One reason I took early retirement from EPA over twenty years ago was because politics was starting to influence research. That was about the time Gore was promoting himself as an “environmentalist” and the IPCC was being set up. An example of the political influence at that time was that Executive Summaries to Criteria Documents had to be carefully worded to pass OMB inspection.
Yeah – that seems consistent with the paragraph I excerpted. Well, except for the “if” part. You clearly have an agenda, and I would suggest that you are not above having your agenda influence how you interpret the science.
So let me ask you, is the only political influence that you have seen on the science coming from those lined up at the environmental movement feed trough?
Hey Mr. Haynie, this ain’t Green Acres and I ain’t no rube.
The value is not an “adjustment factor” but rather it comes directly out of the average value of a diffusion coefficient, which is standard nomenclature when solving the Fokker-Planck master equation. Why they refer to this number as an “adjustment time” I am not so sure, but I am simply trying to use the same lingo that the climate scientists use. In general, what I am trying to do is ground the theory in conventional statistical mechanics that has been used for years.
Bottom-line is that your analysis is really dumpster-quality and time for you to try some other angle. Now I think that you have absolutely no grounding for whatever you are talking about, and it is naive curve-fitting and guesswork on your part to come up with correlations.
The isotope fractions are easily explainable from compositional intermixing via diffusion. Salby also went on this wild goose chase, thinking that he has stumbled on some revelation, but it turns out the carbon cycle smears out the isotope fraction effect so as to be a meaningless indicator. The residence cycle time is the great mixer in all this and it is confusing you and Salby both.
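The two-source bookkeeping behind those isotope claims fits in a few lines; the delta values and amounts below are standard ballpark figures used purely for illustration, not data from either analysis:

```python
def delta13c_mix(c_bg, d_bg, c_add, d_add):
    # Two-component isotopic mass balance: the delta13C of a mixture is
    # the concentration-weighted mean of the components' delta values.
    return (c_bg * d_bg + c_add * d_add) / (c_bg + c_add)

# Naive mixing: ~280 ppm background near -6.5 permil plus ~100 ppm of
# fossil carbon near -28 permil (ballpark illustrative values).
naive = delta13c_mix(280.0, -6.5, 100.0, -28.0)   # roughly -12 permil
```

Naive mixing predicts a mixture near -12 permil, yet the measured atmosphere sits closer to -8 permil; that gap is exactly the smearing-out by exchange with ocean and biosphere reservoirs that the residence-cycle mixing argument above is pointing at.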
And I also wonder why you aren’t referencing Salby. Did you come up with this idea before he did?
Nick is typical of AGW zealots. Create a red herring and steer the discussion off topic. When you can’t defend your position just change the subject. Weak.
“How exactly a consensus was reached regarding subjective probability distributions needs to be documented.”
It’s the false assumptions being made unconsciously that are causing the most serious conception problems. Data misinterpretation has led to severe misrepresentation of nature.
Not 1 but 2 double solar helices split dominant multivariate terrestrial temporal modes. Under carefully chosen transformations, the pattern is linear. In places the helices are nonrandomly snipped and rotated a half-cycle on the cylinders of dominant modes. Nowhere in the climate literature nor in online climate discussions have I seen mention of metrics that can detect & summarize such patterns.
It’s not inspiring following ‘leaders’ who are naively & innocently ignorant. This is no safer than following leaders who are maliciously deceptive.
More careful data exploration is needed to understand nature. More appreciation of nature might be helpful in getting people to look more carefully.
The use of the emails in this post is somewhat troubling to me. More troubling is the 100% split between warmers and skeptics regarding the appropriateness of using them. I am not sure. On the one hand, the emails are in the public record, it is not proven whether they were hacked or leaked, and some of them (not so much the ones quoted here) are pretty damning toward some actions of the authors. On the other hand, my gut reaction is that introducing them in a blog post below an exchange of published letters is not best practices (sorry Judith). On the third hand, I think the emails more or less show what Judith says they show. The arguments above simply escalate. People are calling Nick pathetic, Martha accuses Judith of being a drunk? Really? Paging Keith Kloor…
They acknowledge that greater traceability is desirable.
They disagreed with the following statement from Judith:
When Judith argues this about the emails:
It doesn’t prove her case that “no traceable account” is given.
And at any rate, the use of the emails is yet more indication that Judith is not merely interested in “building bridges.”
The jr. high school cafeteria food fight is ongoing, a big dollop of chocolate pudding is flying through the air, and Judith has an empty chocolate pudding bowl in her hand.
I have to say that’s pretty good :) It’s not very civil, but then Dr C is a grown-up and climatology is indeed a food fight
No traceable account is given.
The fact that you do not understand what a traceable account looks like does not change the fact that no traceable account is given.
It is easy to prove that no traceable account is given.
when did they meet?
who said what?
what information did they look at?
what information did they not look at?
can you reconstruct their decision given all the information in the text?
Don’t go confusing him with all that complicated management science stuff.
Yes, Now you see Joshua actually does have the intelligence to understand traceability ( unlike the science ), but rather than engaging on the actual issue, he tries to divert the discussion to his favored turf.
This is known as bad faith. Joshua is much happier attacking mommy, or calling mommy on her unfairness ( perhaps he was a neglected child ) than he is engaging people on the real question.
If AR5 is to be traceable, these mails would actually be a part of the record. The concerns raised in them would be addressed and answered. Everything would be open and transparent so that we could see who did what. The main issue here is to create a process that is “free” of motivated reasoning. Of course that is an ideal, but full disclosure goes a long way toward minimizing “motivated” reasoning. The actual reasoning process is exposed when you have traceability. When you have traceability somebody with different “motivations” can walk through the steps you took. Conflicts between people with different motivations become instantly apparent and the factual issues that lead to these conflicts come to the foreground.
Without traceability you are left with distrust.
While your account of the post seems off base, your analogy doesn’t seem half bad.
And admit it, who doesn’t love a food fight.
“It doesn’t prove her case that “no traceable account” is given.”
Her case was proven when she was unable to show, as a third party, how the results were arrived at. A functional test for a “traceable account” is the ability of a knowledgeable third party (not necessarily an expert) to recreate the results. Once Judith showed she could not do so using the information provided, it failed the test.
It seems that all would benefit from an agreed-upon determination of what a “traceable account” is or isn’t.
The following excerpt doesn’t bode well in that regard:
Wow, that’s an odd appearance of an emoticon – wasn’t my doing:
There’s also this:
“It seems that all would benefit from an agreed-upon determination of what a “traceable account” is or isn’t.”
Both definitions and verification tests. Otherwise, it becomes an exercise in semantics.
BillC, be a Mensch.
You are, in defending the sad effort of the believers, defending the indefensible.
The believers know it, but they are not backing down now. They sense the disaster ahead as more and more people link the e-mails to actual events and full context.
The only defense left to the believers is to deny (ironically) the validity of the evidence.
Having had some prior work experience in law enforcement, I don’t have any issues with the use of the emails (since the consensus seems to be that they are genuine). To use a different example, if your car is stolen and when recovered, stolen goods with your fingerprints on them are found in the trunk, you can expect to be charged and have them used as evidence. Evidence is only excluded due to illegal actions by the police (or someone acting as their agent). Beyond that, the issue is whether the evidence is linked to the suspect, not how it came to light.
As to the other part of your comment, I agree. Using personal slurs instead of rational argument is counter-productive and childish.
There’s more going on here than simply a discussion of legal parameters.
When Judith uses the emails to “bolster” an academic disagreement, she is making a decision to partake in the food fight, directly. I’m fairly agnostic WRT the debate about the morality and ethics surrounding the emails; I see people on both sides equivocating morally to advance their own ideological perspective. But without a doubt, by using the emails as a part of this post, Judith is engaging in tribalism as opposed to bridge-building. She is taking herself out of the scientific and academic debate, and placing herself in the jr. high school melee.
It’s her prerogative. But then she shouldn’t pretend that she isn’t fully engaged in a tribal battle, and her assessments of the symmetry in the tribalism need to be viewed in that full context.
The stolen, leaked, hacked, whatever emails… at some point, they’re just out there. It becomes evidence. It can’t just be set aside and ignored, even at the academic level– and there are many academics across the spectrum that fully integrate these emails into their discussions (RC included). It is not up to you to be the arbiter of who gets to assess the emails for explanatory analytical value, or to award/rescind merit badges– these things are out there now. In as much as the academic community wants the perpetrator put to justice, my sense is most of them aren’t saying “pay no attention to the man behind the curtain” or somehow deriding appeals to the emails. Instead, while repudiating the act of hacking, they’re fully engaging every part of every email (which is respectable). Judith should feel free to engage the material contained within the emails as well.
It’s like you want to be on the OJ jury, congratulating yourself on the refusal to examine all relevant evidence, dismissing as much as possible. I for one, am at least grateful for the other scientists in the IPCC community who are willing to engage these emails without the superciliousness that is (frankly) quite weak.
I never suggested that it should be. My point was that I see what I think are weak arguments on both sides WRT the emails: decontextualized point-scoring, rationalization for bad behavior, etc.
In my view, her use of the emails in this case did not effectively add to the weight of her analytical argument, and in my view, it isn’t likely to increase the transparency and traceability of the science, which I agree should be the shared goals – because when people on one side of the debate engage in openly tribal behavior, it begets tribalism on the other side.
There are different ways to “engage” with the emails. Of course, some at one more extreme end will dismiss any engagement with the emails out of hand, and some at the other end will see the emails as proof of a vast conspiracy to destroy the world as we know it and to install a climate scientist cabal at the head of a one-world government. The question is how to use the emails in a way that betters the science and advances the debate. I do not see Judith’s use of the emails in this situation as furthering those goals.
Specific statements from the emails that bolster my arguments and seem to refute the arguments made by Hegerl et al.:
>> We’re having a lively debate in the Hadley Centre about whether climate
>> change experiments should be run as part of the model development
>> process, ie whether model developers should test their model against
>> climate change as they are developing their model. I think it might be
>> worthwhile us developing and expressing a view on this as we don’t want
>> to risk getting into a position where attribution results in AR5 are
>> undermined by the development and model tuning procedure adopted by
>> modelling centres.
(JC comment: with the implication that this is what was done for the AR4)
> Likewise, suppose two candidate models were identical in most
> respects, but one could accurately simulate the climate of the 20th
> century (when all forcings were included), whereas the second had a
> very low global sensitivity and produced too little warming. The
> developer would again want to choose the model that reproduced the
> observed trends. In fact this model would probably produce a better
> estimate when forced by future emissions scenarios too (because,
> presumably, its sensitivity is closer to the truth).
> It would be hard to argue that information about 20th century trends
> shouldn’t be used in model development.
> I agree that this may rule out attribution studies (following the
> established approaches), but wouldn’t we have to argue that
> attribution studies are more important that model projections to
> convince the groups not to consider trends in the model development
(JC comment: this is very germane to my circular reasoning argument. Again, there is the implication that this was done in AR4, and they are trying to improve things for AR5)
So using the 20th c for tuning is just doing what some people have long
suspected us of doing…and what the nonpublished diagram from NCAR showing correlation between aerosol forcing and sensitivity also suggested. Slippery slope… I suspect Karl is right and our clout is not enough to prevent the modellers from doing this if they can. We do loose the ability, though, to use the tuning variable
for attribution studies.
Should we ask to admit in their submission what variables were considered when tuning, and if any climate change data were considered and at what temporal and spatial representation (global mean trend?), and advise that we will not be able to use those models for any future attribution diagrams? That would at least lay it in the open…
(JC comment: this supports my assertion re inverse modeling. Again, there is the implication that this was done in AR4, and they are trying to improve things for AR5)
Their disagreement is whether or not your specific assertions about insufficient transparency, attribution (quantification of uncertainty), traceability (that there was none) are accurate. The fact that they acknowledge in the emails that improvements should be made – which they also acknowledged in their official response to you – does not confirm the accuracy of your assertions.
Joshua, reread my uncertainty monster paper. The reason they are discussing improvements for the AR5 is to fix problems with the AR4 simulations, basically the same problems that I describe in my uncertainty monster paper.
Strange, I don’t get the same info out of the emails as Judy does. The email exchange shows the attributors all arguing to do all they can to avoid circularity. The first excerpt says that they don’t want to “risk getting into a position”, which on its face implies that they are not already in that position with AR4. The second excerpt says that using 20thC info “may rule out attribution studies (using established approaches)”, implying that, since previous attribution studies had in fact been conducted rather than ruled out, they were not based on circular reasoning. The third excerpt seems clearest: “using the 20thC for tuning is just what people have long suspected us of doing” implies that the 20thC had not been used in the past, only suspected, and they wouldn’t want to start doing it now.
this seems pretty unambiguous to me:
So using the 20th c for tuning is just doing what some people have long
suspected us of doing…and what the nonpublished diagram from NCAR showing correlation between aerosol forcing and sensitivity also suggested. Slippery slope… I suspect Karl is right and our clout is not enough to prevent the modellers from doing this if they can.
The scary thing is that not even the lead authors of the IPCC AR4 seemed to have known what went on in the CMIP3 models in terms of tuning and selection of runs to actually submit.
‘Strange, I don’t get the same info out of the emails as Judy does. The email exchange shows the attributors all arguing to do all they can to avoid circularity. ”
John, now perhaps you understand the importance of traceability.
In a traceable process you would not have a document that is ambiguous.
Ambiguities are identified and closed. The process is constructed to avoid misunderstanding. Requirements are set out, understanding is checked. Everyone signs off on the same documents.
Ok, Mosher, now you done it. You’re intolerant of ambiguity. That makes you intellectually inferior. The smart people even said so. Well sorta.
Years ago @ Andy Revkin’s DotKim(2008) I wrote that the modelers were trying to keep their toys on circular tracks on the ceiling.
Maybe the reason the email appears ambiguous is that it is being used for a purpose other than that which was intended. It’s an opinion expressed in the context of a particular discussion which was going on at the time – it was not intended to resolve the current argument that Judith is having with Gabi Hegerl, so it doesn’t. The fact that Judith has had to resort to quoting it would indicate to me that she can’t find any actual authoritative sources to back up her argument.
No, you have it wrong. I made an argument in the Uncertainty Monster paper. Hegerl et al. criticized that argument. My reply to Hegerl et al. argues that their criticism does not hold up. The emails support my argument that their criticism does not hold up. IMO, our original argument in the Uncertainty Monster paper stands (which was well documented in the paper), as a result of this exchange.
I know that’s your argument, but some of us just consider the email to be rather flimsy evidence, both in terms of its content and by its very nature.
“My reply to Hegerl et al. argues that their criticism does not hold up.”
Oh, that’s a pretty bold statement considering you didn’t even try to address issue #1 from their comment.
I agree with Judith. It is natural for modelers to use all available data to tune their models. This is true in all fields where complex simulations of questionable accuracy are used. I believe there is also some independent evidence in the literature cited by Lindzen that in fact modelers do modify aerosol forcings to get better historical matches to the data. I can look this up if people are interested. You will note that the NCAR model run for AR5 with historical best estimates for the forcings, initialized for pre-industrial conditions, overestimated warming by 50%. Are there such mismatches in the AR4 runs? I seem to recall that there were not.
I see people on both sides equivocating morally to advance their own ideological perspective.
That’s absurd. The **only** ideological perspective presented herein is that of those wanting to claim that the emails were “stolen” private property thus somehow wrong.
The non-ideological argument is that these emails were never private and subject to FOIA, period. How you can impute “ideology” into that is no mystery — it’s simple partisan spin.
Anyone who has followed the climate change issue quickly realizes it is mostly one big food fight. What I like about Dr Curry’s site is that such fights are tolerated, unlike some others where the moderators provide free food to the side they like and kick the people with the best arms out of the cafeteria.
The Crusted Law of Food Fights is not to start them until everyone is full.
I’m outarmed and there are bigger bodies out there, but the trick is to wait for post-prandial stupor and then tickle with fingertips.
Keep your fork, there’s pie.
With all due respect, you might consider re-reading those excerpts, in context. I think it is more reasonable to interpret them the way Dr. Curry has.
If the IPCC process and assessment were transparent, the emails would be completely uninteresting. In the absence of appropriate transparency (I believe I have made the case for this both in the main paper and in my reply), the emails provide critical glimpses into what actually went into their assessment.
At this point, I am not interested in building bridges with the IPCC, but rather in holding their feet to the fire re transparency, traceability, etc. Recent efforts by the IPCC to make its deliberations immune from national FOI laws reinforces the importance of the emails.
“Recent efforts by the IPCC to make its deliberations immune from national FOI laws reinforces the importance of the emails.”
As a constant critic of your self limitations of criticism and counter productive nuancing of obvious motives of participants I do commend you on this point.
If only you would cross the Rubicon with the others.
You, a critic of Dr Curry? Come on. Say it ain’t so.
I agree with most everthin’ cwon14 says except his opinions about Judy. Her curiosity is like the carved wooden figurehead at the prow of the questing craft. This finely wrought specimen has sensory perception and processing ability and guides like grey eyed Minerva.
I tend to agree with a fair amount of what he says as well. Not all, but there is more to agree with than not.
Which is why I tease when his “hard-on” for Dr Curry comes out.
Are you able to definitively confirm if the IPCC have managed to make themselves immune to FOIA or if they will still have to cough up the information if requested? Or perhaps it’s still in the melting pot?
IPCC is not immune to foia–they just assert it. Since almost all the work is done by people still employed by governments (mostly) who are subject to foia, the IPCC ipso facto is not immune, but would like to be.
The IPCC as a part of the UN is immune to national FOIA laws. The issue is whether the work products of national/local government employees working on IPCC reports are immune. If it ever got to court, it would probably come down to whether there was a formal seconding of the employee. EINAL (fortunately)
Very few US climate scientists are employed by government. They are employed by universities (Mann for example) or contractor operated national labs (Santer for example). As such the federal FOIA law does not apply to them. It does apply to the federal program managers who fund them.
While Eli does not have a direct count there are probably more climate scientists working for NASA/NOAA/DOE than for universities. In any case the national labs are owned by the government although operated by others. FOIA applies.
Tony it is immunity (exemption actually) by assertion as Craig suggests. I can make an FOI application today, just little me, to any of the Australian folks who are acting as lead authors, coordinating authors, reviewers etc etc, the actual guts of the IPCC workings, all of whom are employed by other institutions and organisations. There is no exemption in Oz FOI law for IPCC deliberations and until there is (not gonna happen IMHO) the IPCC’s assertions are pure blather. I am sure Pachauri and pals are trying to get specific protection in place for themselves and the secretariat who are actually employed in their UN roles (labyrinthine is how I’d describe the way the UNFCCC and IPCC etc are set up) but they’re whistlin’ in the wind for the rest. Heck they even want to say Zero Order draft can’t be seen for AR5 – chapters are already leaked. They really really don’t like it up ’em.
Excellent points. The UNFCCC has permitted the IPCC to turn into an out of control unreliable organization.
Their work is no longer credible.
BTW, for some reason, I am showing up as ‘lurker’ in my posts. This is not an attempt to become more than one poster or be deceptive in any way. I will try and find what setting I messed up and fix it.
Hunter/lurker, the UNFCCC organization has no control over the IPCC, which in fact predates the UNFCCC. The IPCC is jointly owned by UNEP and WMO. UNEP is political and ideological so it is the WMO who should be most concerned about the IPCC’s integrity issues. That is where the pressure needs to be applied.
I’m not sure why you bring “bridge-building” into this. This is a post about, as you term it, “an academic disagreement”. I would agree that it’s rare for such disagreements to feature unconventional references like these emails, but if there are inconsistencies in what’s said there vs in publications, I don’t see it as an ethical issue when Dr. Curry calls it out.
Regarding “tribalism”, perhaps we have different definitions. My definition would encompass things like the Sonja B-C episode from ClimateGate I and the red-baiting some of the players here like to engage in. Use of that email may be embarrassing to Dr. Hegerl, but it’s not an ad hom attack.
Maybe this will explain why I bring up “bridge-building.”
I’m not saying that it is an “ethical issue.” From what I’ve seen, the “ethical” arguments are mostly characterized by over-reach on both sides.
I’m questioning whether this form of debate is useful in advancing Judith’s putative goals.
And, again, Judith is using the emails, rhetorically, to suggest that the authors are hypocritical WRT the goals of better quantification of uncertainty and increased transparency and traceability. That seems to me to be a personalized argument, and not an academic argument. Disagreement about the accuracy of Judith’s claims – for example of “no traceability” – does not confirm an intent on the part of the email correspondents to avoid all traceability.
Look at the comments in this thread. You will find comment after comment claiming that the emails confirm an intent to avoid all traceability. Judith is not stupid enough to think that her post wouldn’t elicit tons of comments exactly like that. She’s fanning the flames of the conspiracy theorists here – IMO to the detriment of the debate (as it will only further justify extremism on both ends).
Maybe this will explain why I bring up “bridge-building.”
As I noted above, I think we’re operating off of differing versions of what constitutes “tribalism”. As to the utility of using the emails to impeach Dr. Hegerl’s position, that is a valid question. It is, however, one that Dr. Curry must answer: when does advancing her position become more important than angering Dr. Hegerl? I don’t see that as a black and white issue. People get emotionally invested in their positions; sometimes it’s worth winning, sometimes you’re better off walking away.
I’m not saying that it is an “ethical issue.” From what I’ve seen, the “ethical” arguments are mostly characterized by over-reach on both sides.
Sorry if I mischaracterized your position. Others above, have framed it that way, and I agree, there’s over-reach from both camps.
And, again, Judith is using the emails, rhetorically, to suggest that the authors are hypocritical WRT the goals of better quantification of uncertainty and increased transparency and traceability. That seems to me to be a personalized argument, and not an academic argument.
I don’t see a suggestion that the authors are avoiding transparency. I do see that the confident assertion of adequate traceability in the paper isn’t reflected in the email. Dr. Curry didn’t speculate as to a motive.
She’s fanning the flames of the conspiracy theorists here – IMO to the detriment of the debate (as it will only further justify extremism on both ends).
Sorry, those making the comments bear the responsibility, not Dr. Curry. Self-censorship to avoid someone going overboard leads to a world of silence. Some will overreact one way, some will overreact the other. In the end, she can only be responsible for what she states.
I don’t know that they’ve asserted adequate traceability. That is, inherently, a subjective standard. But let’s look at the specifics of the debate.
Again, here’s what Judith asserted:
The authors disagreed with that statement and provided specifics. They went further to say:
And I don’t see any inherent inconsistency between their official statement and what they said in the emails. I think that Judith is using the emails as a rhetorical device, and in doing so, distracts from the academic debate.
Is advocating for a careful and controlled and specific and precise rhetoric – in opposition to rhetoric that is easily used by both sides to justify extremism – a call for “self-censorship?” Not as I see it. There is a middle ground between silence and rhetoric that will fan the flames of extremism in the climate debate. Sometimes fanning extremism is the price one has to pay for an honest appraisal of a situation. I don’t see this as being one of those cases. The merits of Judith’s academic disagreement with the authors should be able to stand on its own independently of what the email correspondents wrote to each other. If she wants to, responsibly, address the implications of what is written in the emails, I think that there’s room for that – but mixing the two here seems to me to be tribalistic in nature.
Finally, I think that this post needs to be viewed in the full context of the debate. That isn’t a call for self-censorship.
The authors disagreed with that statement and provided specifics.
I saw that. The first item referred to was more of a policy statement on how the assessment should be conducted. I briefly looked at table 9.4, but I’m really not qualified to say that it definitely is not a traceable account. For what it’s worth, it would have been more helpful had Hegerl et al detailed how it satisfied the requirements of traceability rather than just asserting that it was.
I think that Judith is using the emails as a rhetorical device, and in doing so, distracts from the academic debate.
As such, it’s nothing more than your opinion of her motives.
Is advocating for a careful and controlled and specific and precise rhetoric – in opposition to rhetoric that is easily used by both sides to justify extremism – a call for “self-censorship?”
That assumes that “rhetoric that is easily used by both sides to justify extremism” has been employed and employed as such. Beyond your opinion of Dr. Curry’s motives, what evidence do you have that she is using the email for emotional impact rather than to show inconsistency between the two fora?
Traceability is not subjective. It is quite testable.
1. Is the procedure required to produce the answer defined in such a way that an independent person can reproduce the answer?
Go ahead and try.
2. Is the process described in such a way that you can reconstruct it from start to finish and identify all the decision points, and further who contributed what?
You cannot just make things up concerning an area you don't understand.
Steve, that’s right up there with McIntyre’s engineering quality report. When challenged he went all around Robin Hood’s barn trying to avoid defining what he was saying
Good morning Eli. Traceability and engineering quality controls both come in many flavors and degrees, but each degree has well defined exemplars. Consider the information requirements of IRS tax audits, FDA drug testing, NASA software development, etc., as examples. There is no mystery here.
You are mixing apples and oranges. Each of your examples has very different rules.
Another point re the emails. I suspect that few climate scientists in the “community” (outside of those whose emails were published) have actually read the emails. Why haven’t the “community” scientists read the emails? Some hypotheses. Many just can’t be bothered, and are secretly relieved that their emails weren’t made public. It is ungentlemanly or otherwise unseemly for a scientist to read emails that were intended to be private (Jerry North said something like this). Gavin’s blogospheric interpretation and defense, along with public statements from UEA, was sufficient, no need to delve in.
Given my engagement with the skeptical climate blogosphere, as soon as this hit, I knew it would be a big deal in that community. So I paid attention. With my growing concerns re IPCC transparency and traceability, I see a wealth of important information in these emails, regarding the IPCC process and also uncertainties about topics for which the IPCC provided confident assessments.
I was puzzled and disappointed by the reactions of even hard-nosed people like Dr. N-G in dismissing and ignoring the e-mails. There seems to have been, at least until now, a galvanic reaction against learning what was going on in the revealing discussions that Climategate exposes.
A very good science journalist I know pretty well has refused to dig into the e-mails first hand, which is against his typical excellent standards. He has literally accepted the explanations offered by those who were outed, and even talks about how uncomfortable he is looking at the e-mails. Even as his newspaper regularly and without hesitation uses leaked documents in any other area of interest.
It is as if the believers know the truth represented in climategate is going to unwind their movement, but their love of the movement overcomes their desire for truth.
Now with the police acting as they are, I suspect we are closer to finding out what is behind door number 3 much sooner than we were a week or two ago. The released data will very likely contain much more context of how the AGW movement/IPCC has misled and manipulated the process. The real test of a religious faith is when the truth comes out that shows many faith-based claims were incorrect: does the faith survive? Refusing to look at evidence is a good way to keep faith alive.
In America, when the group “Anonymous” hacked into the Chamber of Commerce (and security subcontractors of the lawfirms they had on retainer), there was much less outrage when their emails and proprietary materials were funneled to the ThinkProgress blog (and other outfits). The material immediately became ‘evidence’ and was discussed as such, resulting in the Chamber of Commerce deep-sixing the lawfirm who contracted the security outfit, etc.
To my knowledge, at no point did anyone declare the pilfered correspondence inadmissible and refuse to look at it — simply put, the Genie doesn’t go back into the bottle.
An unfortunate effect of this, is that the scientists may be driven even further from transparency, only communicating via personal emails and all that. It’s not ‘correct’ to do that under the IPCC guidelines, but at some point, it’s just unenforceable to police.
The MSM gleefully use illegally leaked information on a daily basis. Out-of-context cherry-picking is par for the course. Wikileaks is a very obvious recent example, but it really is an endless cycle of schlock-horror
Yet, most of the MSM in Aus dabbled only very briefly and very belatedly in CG1 reporting (again, no context) and have not acknowledged CG2 at all. Nothing will be allowed to disturb ‘the cause’
I too am wondering whether the release of the password to the CG2 7Zip file will be hastened by the confiscation of Tallboy’s computers. I cannot see how the Tallboy episode can delay said release
“Mommy, mommy, they do it tooouuuu.”
Never seen that before.
That time were you at least sounding clever to yourself?
Dr. Curry –
It seems to me that most commenters – on both sides – are missing the central fact that there are two entirely different issues regarding the e-mails: their release/hacking/discovery and their subsequent use.
Their release/hacking/discovery is obviously a legal matter, and if the offended party demands investigation it should be investigated. If laws were broken (hacking), the perpetrators should be prosecuted. If it was whistleblowing, UK law would seem to let the releaser off the hook. So be it.
The subsequent use is another, separate, matter. The e-mails are now in the public domain and are going to be used regardless of considerations of ‘gentlemanliness’ or ‘decency’. There is a long history of public use of illegally, immorally or even inhumanely obtained information and data. In the end, the source and method of an information release are of no relevance to its use.
So yes, somebody possibly did something illegal to get the e-mails to the public. The e-mails are still going to be used against their authors when warranted.
“Everybody is going to do it” is a slightly altered version of “everybody does it,” and neither one is a compelling defense of unethical behavior.
“Public domain” is a concept that applies to copyrighted works, not stolen personal correspondence.
The golden rule is a good guide here; if someone rooting through your trash found hundreds of personal letters, and left them in a public place, would you consider them belonging to everybody, or would you still feel that those that read, copied, and circulated your private correspondence violated your privacy?
“…if someone rooting through your trash found hundreds of personal letters, and left them in a public place, would you consider them belonging to everybody, or would you still feel that those that read, copied, and circulated your private correspondence violated your privacy?”
I would assume that the worms were out of the can and couldn’t be put back in. If the letters contained information of use to other individuals it would be the height of folly to ask or expect everyone to please ignore that information. My personal feelings would not enter into the question.
On the other hand, if I felt the letters were obtained illegally, or the information was used for an illegal purpose, I would have no qualms about going to the police or courts.
As for ‘public domain’, I was using it in the sense of “the state of belonging or being available to the public as a whole”. I think you know what I meant, but if you don’t like that phrase, I’d welcome another suggestion.
Robert if climategate showed your side in good light they would have been published into a book, excerpted at length worldwide. You, especially, would be quoting them fervently as evidence against skeptical wickedness.
Your phony sanctimony is not even convincing you, I would bet.
Certainly no one who is looking at this from a reasonably informed point of view believes a thing you claim on this (well on nothing you say, but that is another topic).
“The golden rule is a good guide here; if someone rooting through your trash found hundreds of personal letters, and left them in a public place, would you consider them belonging to everybody, or would you still feel that those that read, copied, and circulated your private correspondence violated your privacy?”
That you think this is a good analogy is depressing. Are you really that dense? Or naive?
A closer analogy is this: if someone discovered in the trash hundreds of letters wherein you described the state of your company as bordering on bankruptcy, and that person knew you were publicly touting the strong performance of the company to potential investors in an attempt to pump up the share price, would you expect that person to not release the letters to the public? In fact, doesn’t that person have a responsibility to release the letters?
Think Enron booby. The emails are from/to “.gov”, “.edu” which last time I checked were not personal email servers. No private emails released here.
Might try visiting your local Office Depot. They carry something known as a paper shredder. Lacking that, you can try the fireplace. In a pinch, a metal trash can in the back yard or even an old fashioned burn pile can be put to use.
In other words, if you are dumb enough to place items, personal or otherwise, into your trash can which you don’t want anyone to see, then you are subject to your own stupidity. Of course this would only be significant to the issue if we were talking about personal mail. Which we are not.
Speaking of rules to guide by, here is one for your consideration:
“Better to remain silent and be thought a fool, than to speak and confirm that you are.”
Robert, you have to get over this fascination with an artificial ethical concern. In fact, given the gravity of the consequences of climate science conclusions, it would be UNethical to not use the information in the emails. The behaviour exposed in the emails is unethical, and it’s disconcerting that people who are climate scientists stonewall the issue and refuse to deal with the emails. It’s just another manifestation of how politicized the field has become. The leading climate scientists (those with the most clout in the field) must take responsibility for this. If they can’t separate science from politics, then no one can and they are the cause of the problem. You know, it’s a little like the President of the US arguing that his opponents are unethical. Even if true, that complaint is not leadership. It’s childish partisanship. Leadership implies unique responsibility. Unto him to whom much is given, much is expected.
Why haven’t the “community” scientists read the emails?
Why do you think they should necessarily wish to do so?
It appears there’s a whole slew of people out there that wish to be the final arbiters of what constitutes “bettering the science” and “advancing the debate”. It is sort of ironic to come to the home of someone else’s blog (a blog about what they think makes the science better and advances the debate, no less) and make these declarations. Perhaps posting it on your own blog makes sense..? Or, instead, engaging on the merits – which is a parallel road you have taken. I’d concentrate on that one.
I have many opinions (that’s a surprise, eh?), and I voice them. That doesn’t mean that I wish to be the “final arbiter” of what constitutes bettering the science and advancing the debate.
You’re arguing with a straw man that exists in your imagination.
OMG … Somehow my email must have been hacked.
how did you know my name was ‘dude’ ;)
Joshua writes with his customary charm: “Nick – Judith is just trying to ‘build bridges.’ Don’t you think that using those emails is the best way to build bridges? I mean, it’s not like she’s engaging the debate from a partisan orientation. Or anything like that.”
That’s right Josh, that’s just how to win friends and influence people. A typically scurrilous, shameful, gratuitous slam that says a great deal about you, and nothing at all about J.C. By the way, didn’t you just write a teary-eyed, self-pitying letter of farewell? I knew it was too good to be true.
I’d like to comment on several points.
First, the citation of the emails is legitimate, but the comment “thank you, hacker/whistleblower” is petty and would be better off deleted. It diminishes rather than enhances any intended points.
Second, the emails reveal little to me about known practices that I was unaware of based on public knowledge, but did show a proper concern about a potential conflict between what some modelers might be inclined to do and the use of models for attribution. It is commendable that this concern was expressed, but it clearly needs to be followed up.
Third, I think that Dr. Hegerl’s defense of the IPCC attribution for post-1950 warming as mostly due to GHGs was not entirely convincing, because none of the atmospheric profiles perfectly matches a GHG response. At the same time, the Curry/Webster claim challenging the high likelihood that most (i.e., more than half) of the warming was due to GHGs is one I have found untenable, for reasons I’ve described in some detail in at least three or four past threads.* In fact, if one tries to quantify the relative contributions of all plausible positive warming influences during that interval (ghgs, black carbon, solar, and internal climate modes – aerosols were a slight net negative and their modeling uncertainty was therefore largely irrelevant for this particular attribution), the IPCC attribution was conservative, and a good case can be made that at least 70% of the warming was attributable to anthropogenic ghgs. Given that there have been other IPCC assertions that are less solidly based, I believe that a continued focus on this particular attribution will not be fruitful and will weaken the case for other criticisms.
*It would take me a while to find and link to those previous comments. However, the main points made previously were that solar forcing (even with some unspecified enhancement) was small, black carbon forcing was also small at the level of the surface temperature (described by Ramanathan et al), and internal climate modes for that particular interval could have contributed only minimally based on ocean heat content data, as discussed by Isaac Held in his blog. All of these were substantially outweighed by GHG forcing.
Petty is saying one thing in private and then saying something else in public to mislead people, which is what the e-mails show.
It is shaping up as being more and more likely we will all owe a debt to the leaker/hacker of the same type owed to Ellsberg and Deep Throat.
Dr. Curry, I would submit, is an early adopter of this possibility and its implications.
What is petty as well is the continued refusal of journalists to bother themselves to look at what the Climategate leaks are actually illuminating.
Looks like Judith is tired of being dumped on by the consensus mob. She has moved on from naively attempting to build bridges, to burning bridges. She is putting distance between herself and the dishonest lot that you reflexively defend. Girl got some guts.
One does not want to burn bridges in this situation.
One should charge across the bridge, get into the opponents rear and destroy them. You only destroy bridges when you are in retreat.
I agree with you about the emails. Just referring to them wouldn’t have irked me, but the ‘thank you’ comment is counter-productive. At least that was my reaction. If you like, I think it was a misjudgement.
Funnily enough I absolutely agree with you on the much more substantive point – that focusing on the >50% attribution is misguided. That sense has been brewing in me for quite a while. When I think of the ‘consensus’ and where I sit with my understanding and beliefs, I always pass by the ‘greater than 50%’ as if that is obvious and very conservative.
If AR4 had said more than 75% I’d have to go over everything again and do some tortuous thinking, because I couldn’t be sure if I’d say yea or nay to such a claim. But 50%? Not a problem. There are many claims issuing from the orthodoxy that strike me as unsupportable and speculative – particularly concerning the imagined negatives attributable to small amounts of warming. But attributing maybe only half of barely half a degree? How could that be contentious?
I think there are legitimate concerns about how you get to the numbers you end up with, but in this case I’d overlook it because the case is so strong for a higher proportion. Perhaps I think the conservative estimate partially circumvents an intractable problem.
Finally, some people chime in who didn’t follow my “warmers mad, skeptics defensive” theme (Fred and Anteros).
Anteros, Mosher, Judith et al,
Has it been substantiated that “most” was intended to mean >50%, or have all such pronouncements been made after the report was out?
I think they intended to mean
and could not bring themselves to write all those zeros.
Clearly, to make a comment that the figure is greater than 50% suggests that an ACTUAL CALCULATION was performed.
1. what was this calculation
2. what data was used
3. who performed it
Maybe it meant a filibuster-proof supermajority.
Don’t you think it more likely that they realised there was no way they could convincingly justify any quantification at all and therefore plumped a little ‘low’ just to head off some of the criticism?
Of course I only say that cos it’s just what I’d have done :)
I doubt it. See this for a description of how uncertainty was determined in one case. Strange, but from a participant in the process.
My mind is truly boggled. I was under the impression this was the work of 2000 of the world’s finest minds tearing at nature to unlock her secrets… seems a bit more like pin-the-tail-on-the-donkey.
I have to admit that my jaw bounced off my desk when I first read Dr. Tol’s comments.
In a way, though it makes some sense. Not long after ClimateGate I broke, Dr. Curry mentioned on another site (Collide-a-Scape or Climate Audit, don’t remember which) that the players involved were used to doing academic science rather than regulatory science. As such, stringent data management, etc. was a foreign concept.
Similarly, a nice collegial way of determining uncertainty kind of makes sense when the only stakes are ego. In a regulatory environment, not so much.
“Most” was not defined in the IPCC report; subsequently it was said to mean >50%. This seems to be admitted by Hegerl et al.
Then wouldn’t that add to the whole uncertainty question? What does it matter if the calcs say 50% or 75% if you have uncertainty in the confidence of your answers?
For someone like me, seeing a statement that says “We believe it represents approximately 60% of the rise, plus or minus 20%.” is a more useful method of presenting their conclusion. It allows me at least some idea of their best guess (and when there is uncertainty an educated guess is better than a WAG) and indicates to some degree the uncertainty factor. As the recipient, I have at least some room to reach my own conclusions, rather than have the report’s authors decide for me – i.e. “mostly”.
Good point. My first reaction is I like it, but I’d like to choose my own answer as to how much uncertainty to add to the attribution – perhaps with the error bar slightly asymmetric. A bit selfish, I know, but there you go!
60% +/- 20% seems reasonable, but would the +/- have to be something of the order of a 95% confidence level? The quantification problem still looms…
What about ‘a fair bit +/- a small-to-medium sized bit’. Policy makers would love that.
“Don’t you think it more likely that they realised there was no way they could convincingly justify any quantification at all and therefore plumped a little ‘low’ just to head off some of the criticism?”
Absent any traceable procedure I would have to conclude that they pulled a number out of their.. err, out of thin air.
Well at least they have made progress. http://www.realclimate.org/index.php/archives/2004/12/co2-in-ice-cores/
“In other words, CO2 does not initiate the warmings, but acts as an amplifier once they are underway. From model estimates, CO2 (along with other greenhouse gases CH4 and N2O) causes about half of the full glacial-to-interglacial warming.”
They have moved from about half to most, maybe.
One would assume that whatever “most” means, when related to 95% certainty it would refer to a lower bound on the estimated range of human influence. So for example, 60% +/- 10% at 90% confidence. Hey, I’m playing catchup here.
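To make that concrete, here is a minimal sketch (my own illustration, not anything from AR4 – the numbers and the normality assumption are purely hypothetical) of how a “60% +/- 10% at 90% confidence” statement relates to the probability that “most” (>50%) of the warming is anthropogenic:

```python
# Hypothetical illustration: treat the anthropogenic fraction of warming
# as normally distributed, with a best estimate of 60% and a symmetric
# 90% confidence interval of +/-10 percentage points.
from statistics import NormalDist

best_estimate = 0.60              # assumed central estimate
half_width = 0.10                 # assumed half-width of the 90% CI
z90 = NormalDist().inv_cdf(0.95)  # ~1.645: z-score bounding a two-sided 90% CI
sigma = half_width / z90          # implied standard deviation

# Probability that the fraction exceeds 50%, i.e. that "most" is anthropogenic
p_most = 1 - NormalDist(best_estimate, sigma).cdf(0.50)
print(f"P(fraction > 50%) = {p_most:.3f}")
```

By construction this comes out at 0.95: the lower bound of a symmetric 90% interval leaves 5% of the probability in the lower tail, so “60% +/- 10% at 90% confidence” would indeed put the “most of the warming” claim above the “very likely (>90%)” threshold – at least under these assumed numbers.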
I kind of agree with Anteros and Fred that the attribution statement is not that huge a deal. I still don’t know how it is possible to rule out other positive forcings conclusively, because we don’t understand the solar contributions. The indirect cloud effects could be a positive forcing if they are decreasing over time. The aerosol forcings mainly affect sensitivity estimates. I read elsewhere on this blog that in fact sulphur emissions have gone down dramatically over the last 20 years. If in fact aerosol negative forcings have been decreasing, it means that they could be a NET positive forcing over time, which might impact attribution. I guess the argument boils down to “if it wasn’t the CO2, what else could it be?” This is a weak argument, because it requires that you rule out everything else, clearly a task that is almost bound to be incomplete. My puzzle is what caused the large swings in temperature in the past? Solar? Aerosols? Internal variability? I haven’t heard convincing evidence on this. Particularly the Little Ice Age. It is relatively recent, so we should have a pretty good idea.
Let’s play one of my favorite games, “shoe on the other foot.” We’re in another universe where the Climategate I and II emails were hacked/stolen but reveal that, lo and behold, all climate scientists have the purest of intentions to follow the scientific method to the very best of their abilities, not overstate certainties, let truth flourish wherever it surfaces, etc., political viewpoints and self-interest be damned. Meanwhile, the AGW argument is not going over well in the world at large. Is it possible that Joshua (and others) would cite passages from those emails? Or would they eschew that course of action, because the emails were stolen/hacked?
Or perhaps they believe that Climategate I and II did indeed show that these scientists have the purest of intentions.
The emails refer to concerns that model development processes might compromise attribution studies:
“… we don’t want to risk getting into a position where attribution results in AR5 are undermined by the development and model tuning procedure adopted by modelling centres.”
Dr. Curry states that the emails “…seem to refute the arguments made by Hegerl et al.”, in that they imply that this (the compromising of attribution) is what occurred in AR4.
I suppose I would have asked Dr. Hegerl (privately) if this was the case, before using the emails to draw such conclusions (publicly).
Regarding standards for traceability, one might look at the rules for US federal advisory committees, under FACA, the 1972 Federal Advisory Committee Act. See http://en.wikipedia.org/wiki/Federal_Advisory_Committee_Act and http://www.gsa.gov/portal/content/104514
FACA requires that all meetings be open to the public, that there be a publicly available docket including all correspondence, etc. The procedures are well established. Given that the IPCC is basically an advisory committee it would seem that FACA is a perfect fit.
Judith makes the point that she suspects that few climate scientists in the “community” have actually read the emails. I would suggest that the same could be applied to members of the hoi polloi community in spades, though for different reasons. What they do know, most likely, is that the climate science “community” has a different message for us than for those within the community. It doesn’t even have to be destructive of their cause: the very fact that they are saying something different to what they are communicating via the various bodies etc. increases the disconnect between the scientific community and the hoi polloi and decreases trust.
Then add the defense that they are private or illegally obtained to the mix. That they should not be read by and promulgated by Judith for example.
Who cares how they were obtained, they exist. But then I am an Australian of the hoi polloi class.
I would love to know the nationality and class of the illegal, private, bad, should not be considered proponents. And I think Hunter/Lurker nails it with his
“I would do things very, very different from the way the team is acting.”
And kch says
“If it was whistleblowing, UK law would seem to let the releaser off the hook”.
It seems to me that this argument is informed by the legal precepts and beliefs of different societies in an important way.
US: 4th Amendment, exclusionary rule, and hence “Under the exclusionary rule, when police violate the Fourth Amendment and obtain evidence as a result, the victim of the violation can suppress the evidence at his or her criminal trial.” Canada: was inclusionary, now exclusionary (following Canada’s adoption of its Charter of Rights and Freedoms in 1982). England: followed an inclusionary practice whereby relevant and reliable evidence was admissible, regardless of its source, though this has been blunted somewhat by the Police and Criminal Evidence Act 1984 (“PACE”). Australia: judicial discretion.
Underlying the various approaches is the need to deter improper police behaviour. However there is this,
“Deterring police misconduct is the sole rationale of the United States Exclusionary Rule and a significant, if underacknowledged component of the discretionary rules in Canada, England and Australia, yet there is almost no evidence that it is effective. In fact since Mapp, the prevalence of suppression hearings suggests that illegal searches have increased rather than decreased” Debra Osborn
So even if the emails were illegally or unfairly obtained and hence arguments that they should not be mentioned prevailed, I would suggest the same would happen in this case as with the police, more material would surface rather than less.
Back to Hunter’s play it differently, it is argued that the US exclusionary principle has had a very deleterious effect on respect for the law and legal system in the US.
“excluding rather than admitting reliable illegally obtained evidence is more likely to bring the administration into disrepute if viewed from a community perspective.”
“Perhaps of all the rationales used to justify exclusion, the judicial integrity/repute rationale is the most misguided. It relies on public support to justify its existence yet that support is almost non-existent outside the legal profession. As United States Solicitor-General Lee said ‘for the overwhelming majority of people in America, the thief, the rapist, and the kidnapper pose a significantly greater threat than the policeman and the jailor’.” – Debra Osborn
For most of us the case of AGW poses a far greater threat to us than the “stealers” of emails.
And to quote Jeremy Bentham
“Now and then, it is true, one error may be driven out, for a time, by an opposite error: one piece of nonsense by another piece of nonsense: but for barring the door effectually and forever against all error and all nonsense, there is nothing like the simple truth.”
So as a member of the Australian hoi polloi class I think that reliable evidence however obtained should be included. And as this is a global issue with severe ramifications for all of us I have no patience with the sniffy or outraged objections of those who are devoted to exclusionary positions (e.g. US players) or belong to the inner crowd. As Hunter advises, play the game differently, if the case for AGW and CAGW in particular is valid, to argue that it isn’t cricket to consider the evidence of the emails does damage to the cause of AGW with the community at large, and in the end it is our decision what we believe and what we think should be done.
Thank you for your thoughtful analysis.
Another group who could play it differently are the e-mail authors.
In my job we get accused from time to time of making mistakes by clients.
When this happens, we freeze all docs, and produce them under proper process in full, no redaction, let the chips fall where they may.
If I was an author of those e-mails and I knew I was doing nothing wrong, I would be document dumping in full, no redaction warts and all.
That these highly educated, extremely successful academics have apparently not even considered this at all in over two years of Climategate, and resisted years of perfectly legitimate FOIA requests, speaks to either an incomprehensible naivete (which the Climategate e-mails show is not at all likely) or strongly implies that ‘the team’ knows that a full and proper good-faith disclosure would not serve their interests at all.
It really boils down to this.
And it really looks like FOIA is going to have to do some soul searching, because the encrypted files show either one thing or the other.
It is, I would respectfully submit, time to wind this up.
I wish FOIA, whoever he or she is, good luck. It must be a difficult decision.
You keep referring to the swinish multitude as “the hoi polloi”; hoi means “the”, in ancient Greek, and polloi means “many”, so “the hoi polloi” means “the the many”.
Here is the definition of “hoi polloi” in general use:
You are being a wee bit overly picky on this one.
You are quite right, and your criticism of my point as being “a wee bit overly picky” nicely encapsulates the whole problem with the pseudo-scientists pushing this anthropogenic global warming lark and my failure to accept it all credulously.
It is “a wee bit overly picky”, I own, to ask that awarmists prove their bizarre conjecture: it is “a wee bit overly picky” to ask them to explain why, for instance, the Mediæval Warm Period was warmer than now even though the level of carbon dioxide was 38% lower than today; it is “a wee bit overly picky” to complain that awarmists and their lackeys are stupidly imprecise when they use “carbon” and “pollution” to refer to “carbon dioxide”; it’s equally picky, I imagine, to mock their silly but willfully deceptive use of the phrase “climate change” as meaning “amazingly catastrophic though hardly noticeable anthropogenic climate change”; it is “a wee bit overly picky” to ask alleged scientists to provide data whereon they claim to establish the need to grab billions of taxpayers’ dollars; and it probably is “a wee bit overly picky” to ask people who claim that they can accurately foretell what temperatures will be a century hence to prove that they can understand basic statistical methods, just as it might be “a wee bit overly picky” to ask people, who demand that I pay much more for my electricity, and that my entire country should destroy its industrial economy, to make their points reasonably and to prove that their reasoning is valid.
I apologise, then, for being a wee bit overly picky.
The connection would seem to be that, both in grammar and in climate science, you don’t know what you’re talking about, but you don’t let that stop you from revealing your ignorance with ill-thought-out quibbling.
I thank you, Robert, for your concise assertion instead of argument. You correctly admonish my quibbling. In future, if Mann or Jones or Briffa or you insist that 2 + 2 = 5, or that the chocolate ration has increased, I shall demur without unseemly quibbling.
We could look at the AR5 draft for models
I guess FOIA didn’t hear about the DOJ investigation.
Thanks for those
So far, we seem to have Chapters 4,5,9,10
Visit Jeff’s for a few more. missing chapters 1, 6 and 7
Still no password though.
And Chapters 2,3,8 now
Off topic: http://sciencedude.ocregister.com/2011/12/15/nasa-warming-will-transform-natural-world/165681/
This will never end. A drawn game is the only outcome.
Not really a drawn game. Two sides both convinced that they won. It doesn’t matter what evidence accumulates. Reality is bifurcating.
I guess I agree, but I spell it a little different :)
PE, opinion is bifurcating, not reality. But bifurcation (or polarization) is a draw in politics, which is all I want, because a draw is a loss for those advocating radical action. Climate change is heading in the direction of gun control, a drawn political game where nothing much ever happens. Fine by me.
David Wojick writes “Climate change is heading in the direction of gun control,”
It is waaay OT, but gun control is a red flag issue for me. It varies from country to country, and here in Canada, from province to province. I have very high hopes indeed that Quebec will introduce a provincial long gun register in the near future. And that this will set the stage for other provinces to follow suit.
As Curry & Webster report, a key finding of the IPCC AR4 report with which
97% of climate experts agree is this statement: “Most of the observed increase in global average temperatures since the mid-20th century is very likely [>90%] due to the observed increase in anthropogenic greenhouse gas concentrations.” There are several things I have to say about this pronouncement, but first and foremost is their choice of start time. Had they chosen 1940 instead of 1950 as their starting point they would have had to explain why the temperature dropped by 0.2 degrees at the start of their curve and did not regain the initial temperature for the next forty years. That is because 1940 is part of the temperature hump that all ground-based temperature curves show for the World War II period. They insist that World War II was fought in a heat wave while quite the opposite is true: after a thirty-year period of steady warming, temperature dropped severely in the winter of 1939/40, just in time for the Finnish Winter War, which was fought at minus forty Celsius. This was not a temporary cooling but something that put a definite end to the 1910-to-1940 warming that preceded it. The cold stayed around long enough to contribute to Hitler’s defeat in Russia. Even in 1947 there was still a blizzard in New York City that paralyzed it for weeks. The temperature did eventually ameliorate and from 1950 on showed even a slight warming of 0.01 degrees Celsius per decade. This brings us to 1976, the year of the Great Pacific climate change. There are several reports that estimate it to be a step warming of about 0.2 degrees Celsius. I have looked at a number of temperature curves and find it difficult to pin down because of the surrounding El Nino peaks. But comparison with satellite temperature records that begin with December 1978 makes it quite feasible to believe that the step warming of 1976 was real.
That is because the average temperature of the eighties and nineties was constant, a horizontal straight line that is compatible with, and even demands, a step down just before the start of the satellite record. At this point it is important to note a major discrepancy between the satellite and ground-based temperature records for the eighties and nineties. Ground-based curves show this period as a steady warming known as the late twentieth century warming. According to satellite data this warming does not exist. It is faked, and I show how it was done in figures 24, 27, and 29 of “What Warming?”, available from Amazon. And this is the same warming that Hansen spoke of in 1988 when he told us that warming had started. The satellite record by now comprises a continuous 31-year stretch of observations. There was only one short period of warming during this entire time. It started with the super El Nino of 1998, in four years raised global temperature by a third of a degree, and then stopped. It was oceanic, not greenhouse, in nature, and there was no warming before or after it. It should be clear by now that this multifarious temperature history can in no way be said to be “…very likely,
greater than 90 percent, due to the observed increase in anthropogenic greenhouse gas concentrations.” The only anthropogenic greenhouse gas they observe is carbon dioxide as measured by the Mauna Loa Observatory in Hawaii. Hence the sentence should end with the singular ‘concentration’, not the plural. Secondly, carbon dioxide as measured by Mauna Loa has been increasing essentially linearly throughout this period. If you want to show that it is warming the world you should demonstrate that the temperature increase was likewise linear. This they obviously cannot do. They know that, but they still spout that falsehood about the second half of the century being warmed by greenhouse gas. They picked the starting point to avoid having to explain cooling. Had they gone to the early part of the century they would have had to explain why the warming that had such a good start in 1910 did not persist. There is more abstruse science proving that the greenhouse effect is impossible, but from a simple analysis of warming curves we can already say that we are being told a lie. And like the folks viewing the emperor’s new clothes, no climate scientist dares to say that the warming does not exist.
Enjoyed your post and the very good point concerning IPCC cherry-picking 1950 as the start date for the claim of “most” of the warming “very likely” caused by AGW.
I had never analyzed the data before, but it is pretty obvious once one does so.
This sort of confirms the C+W critique that using a 50-year time sequence in a record with known, observed 40 to 70 year natural cycles, in order to demonstrate non-natural causes for observed changes, is not a good idea, scientifically speaking.
andrew adams | December 15, 2011 at 3:55 pm
Allow me to put you out of your puzzlement. The “Hi All” at the beginning of the E-mail from Hegerl to Karl very strongly suggests that there were probably a good number of cc’s. And if you take the trouble to look at the source E-mail (#5066), amazingly enough you’ll find in the cc list one Phil Jones with an E-mail address that includes “@uea.ac.uk”.
A pertinent test would be to run these models without aerosols, which certainly simplifies things and removes one area of uncertainty. I am sure this would lead to warming greater than observed. This can be shown to be largely due to GHG increases by running another test without the GHG forcing which won’t warm at all. This would then be an attribution of over 100% to GHGs, a large fraction being CO2 itself, and only an offset from aerosols would account for the actual increase being less.
That assumes all the “forcings” are known
In that case, what are the causes of Nino and Nina cycles, please
You have to remember that these are natural unforced variability which is self-canceling in the long term. The major forcings are known.
But they can come up with some BS story about missing heat hiding down in the ocean depths as one of their highly implausible explanations/excuses for the “warming hiatus”. We are not talking about the long term. The alleged unnatural warming is over less than a two decade period, period. Catch up, jimmy.
How much is the temp rising without the El Ninos, jimmy? We are not concerned about the La Ninas, are we jimmy? We only fear the balminess.
Don, you are not sure what an El Nino is, are you? Is it some kind of infinite heat source to you, or a redistribution of ocean heat content along the Pacific equatorial surface due to an anomalous current like the oceanographers think?
I know what El Nino is jimmy. I know that we could easily have a period of two or three decades with an unusual number of strong El Ninos, and it would look just like the period that you clowns are squealing about. Shouldn’t the inexorable escalation of manmade Co2 forcing produce bigger and bigger and more frequent El Ninos, and shouldn’t cool La Ninas disappear, as the winter snows have disappeared? Poor kids. I hope someone thought to save video.
You have to remember that these are natural unforced variability which is self-canceling in the long term. The major forcings are known.
Indeed the “unforced variability” for the MCA and Little Ice Age is problematic, as the major forcings are known. We cannot blame singularities such as volcanics, as they were aperiodic, or a decrease in solar forcing (a factor of ten too small, e.g. Svalgaard).
This implies either observations or theory are uncertain. There is also another, more interesting problem: a run of bad luck, e.g. William Feller:
The results concerning fluctuations in coin tossing show that widely held beliefs about the law of large numbers are fallacious. These results are so amazing and so at variance with common intuition that even sophisticated colleagues doubted that coins actually misbehave as theory predicts. The record of a simulated experiment is therefore included.
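Feller's "simulated experiment" is easy to reproduce. Here is a minimal sketch of my own (the function names, toss counts, and thresholds are illustrative choices, not Feller's originals): a symmetric coin-tossing walk spends far longer stretches on one side of zero than intuition suggests, and the distribution of "time in the lead" is U-shaped rather than peaked at 50%.

```python
# Sketch (my illustration, not from the thread): Feller's point that a
# symmetric +/-1 random walk spends surprisingly long stretches on one
# side of zero, contrary to naive "law of averages" intuition.
import random

random.seed(1)

def fraction_in_lead(n_tosses):
    """Fraction of steps the walk spends at or above zero."""
    pos, above = 0, 0
    for _ in range(n_tosses):
        pos += random.choice((1, -1))
        if pos >= 0:
            above += 1
    return above / n_tosses

# Repeat the experiment: the lead fractions follow the arcsine law,
# so heavily one-sided runs are MORE common than balanced ones.
fracs = [fraction_in_lead(10_000) for _ in range(200)]
extreme = sum(1 for f in fracs if f < 0.1 or f > 0.9)
balanced = sum(1 for f in fracs if 0.4 <= f <= 0.6)
print(f"runs mostly one-sided (<10% or >90% in lead): {extreme}")
print(f"runs roughly balanced (40-60% in lead):       {balanced}")
```

The arcsine law predicts roughly 41% of runs land in the one-sided bins versus about 13% in the balanced bin, which is the "misbehaviour" Feller refers to.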
This problem was posed by Ghil (2001), e.g.:
The GCM simulations used so far do not, however, exhibit the observed interdecadal regularities described at the end of Sect. 3.3. They might, therewith, miss some important physical mechanisms of climate variability and are, therefore, not entirely conclusive.
As northern hemisphere temperatures were falling in the 1960s and early 1970s, the aerosol effect was the one that caused the greatest concern. As shown in Sect. 2.2, this concern was bolstered by the possibility of a huge, highly nonlinear temperature drop if the climate system reached the upper-left bifurcation point of Fig. 1.
The global temperature increase through the 1990s is certainly rather unusual in terms of the instrumental record of the last 150 years or so. It does not correspond, however, to a rapidly accelerating increase in greenhouse-gas emissions or a substantial drop in aerosol emissions. How statistically significant is, therefore, this temperature rise, if the null hypothesis is not a random coincidence of small, stochastic excursions of global temperatures with all, or nearly all, the same sign?
The presence of internally arising regularities in the climate system with periods of years and decades suggests the need for a different null hypothesis. Essentially, one needs to show that the behaviour of the climatic signal is distinct from that generated by natural climate variability in the past, when human effects were negligible, at least on the global scale. As discussed in Sects. 2.1 and 3.3, this natural variability includes interannual and interdecadal cycles, as well as the broadband component. These cycles are far from being purely periodic. Still, they include much more persistent excursions of one sign, whether positive or negative in global or hemispheric temperatures, say, than does red noise.
The inherent problems then are almost obvious (by which we mean conducive to a theorem): both climate sensitivity and climate are irreducible (what is meant is that we cannot write a program of shorter length than the observables, i.e. Kolmogorov complexity). Hence negative proofs are almost surely likely, e.g. Matiyasevich’s theorem.
Jim D, you summarize one side of the debate nicely when you claim that “You have to remember that these are natural unforced variability which is self-canceling in the long term. The major forcings are known.”
There is no reason why unforced variability should be self cancelling. In fact the averages should oscillate with scale, such that there is no long run, just different runs, as it were. Nor is there any reason to believe that the major forcings are known. There are a number of major hypotheses regarding forcings that are unresolved.
Your two simple sentences exemplify the hubris of AGW very well.
Jim D said, “You have to remember that these are natural unforced variability which is self-canceling in the long term. The major forcings are known.”
The first sentence is about right. Natural variability would be self canceling over the full period of the oscillation. What are the combined periods of oscillation?
The second sentence may be right, but the response of the system to the forcing is not known if the combined period of the natural variability is not known. Dang known unknowns!
“Natural unforced variability which is self-canceling in the long term” is a convenient phrase, but who defines “long term”?
We have observed 60 to 70-year warming/cooling cycles in the temperature record since 1850 and IPCC is talking about a time sequence of 55 years (1950 to 2005) for its “most” warming…”very likely” caused by.. statement.
A bad combination, as C+W point out (and Hegerl et al. are unable to refute).
Besides, we have no notion how much of this may be caused by natural (solar) changes and how long these cycles might be.
A naive random walk process will have extremes that are unbounded, and this would model a natural process with the ill-defined range constraints that the skeptics cravenly crave.
However, a realistic natural process is modeled more accurately by an Ornstein–Uhlenbeck random walk, which has reversion to the mean properties. This naturally derives from a shallow potential energy well that the minimum free energy steady-state climate wanders around within.
Small disturbances and stimuli can shift the current climate around within this well, but it will not venture that far from the center, unless significant external forcings make this happen. The longer and further the excursions last, the more likely that a strong external forcing is responsible.
What is not precisely known is how high any of the secondary energy barriers are — and if this barrier is breached, then the temperature can conceivably free range until it reaches the next larger potential barrier. The Stefan-Boltzmann feedback is the primary concrete barrier, so it seems that a breach likely won’t happen and any cAGW scenarios would have to involve huge stable positive forcings to counteract the S-B feedback.
A steady forcing is likely how Venus got to the point where its atmosphere is now. The positive feedback started small, but eventually the mantle provided enough thermal energy that all the CO2 outgassed into an extreme greenhouse gas environment. The only scenario remotely close to that on Earth is if the peat bogs and methyl clathrates start to outgas. That would push us up the S-B energy barrier curve more than just the CO2 could provide.
The bottom line is that these ocean oscillations are considered spurious O-U random walk noise that, as Jim D says, will self-cancel in the long term, and the real concerns lie in the more significant, stable steady forcings.
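The contrast being drawn can be sketched in a few lines of code (my own illustration, not the commenter's; the reversion strength and noise amplitude are arbitrary assumed values): a naive random walk accumulates unbounded excursions, while an Ornstein-Uhlenbeck walk adds a drift term proportional to the distance from the baseline, which pulls it back.

```python
# Sketch (illustrative parameters assumed): naive random walk vs an
# Ornstein-Uhlenbeck (mean-reverting) walk over the same step count.
import random

random.seed(0)
THETA = 0.1   # assumed reversion strength (illustrative)
SIGMA = 0.1   # assumed noise amplitude (illustrative)

def simulate(n, mean_revert):
    """One sample path; the O-U case adds a pull back toward zero."""
    x, path = 0.0, []
    for _ in range(n):
        drift = -THETA * x if mean_revert else 0.0
        x += drift + random.gauss(0.0, SIGMA)
        path.append(x)
    return path

naive = simulate(50_000, mean_revert=False)
ou = simulate(50_000, mean_revert=True)

# The O-U path stays near its baseline; the naive walk wanders freely.
print(f"naive walk max |x|: {max(abs(v) for v in naive):.1f}")
print(f"O-U walk   max |x|: {max(abs(v) for v in ou):.1f}")
```

With these parameters the O-U path's stationary spread is roughly SIGMA / (2 * THETA) ** 0.5, a fraction of a unit, while the naive walk's typical excursion grows like the square root of the step count.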
Web, a chaotic oscillation is neither a naive random walk nor an O-U random walk. Its statistical properties are fundamentally different from both, so it is not usefully modeled by either. Chaotic oscillations do not occur about a mean, they occur within an attractor. The averages oscillate with the time scale. Different intervals have different averages, which is in fact what we see in most climate data.
My greatest hope is that one of these days the climate community will take nonlinear dynamics seriously. Your and Jim D’s comments are not reassuring in this regard. If it turns out that all the observed climate changes are simply oscillations due to constant forcing, future generations will look back at us and laugh. But then, we are in fact back in somebody’s Middle Ages, are we not?
David’s got you. Which mean is it reverting to:
That last chart proves my point. Bringing up chaotic attractors is simply a distraction.
The baseline temperature is around 288 kelvin. The absolute temperature scale is important because that’s what all the thermal perturbations are compared against. During the Holocene, you can see that the random walk is bouncing around this by what looks like less than a degree. The Vostok excursions turn out larger than a degree, but that is wandering back and forth between ice ages.
So a one degree shift out of 288 degrees is very slight and no one should ever do a deterministic analysis on that kind of small fluctuation level when a stochastic analysis will do. Look for correlations and I doubt you will find any in the Holocene Greenland data. Thus we model that kind of behavior with a random walk.
BTW, Ornstein-Uhlenbeck is nonlinear dynamics.
It’s funny the way that you people think in terms of gotcha’s, when the evidence you present totally contradicts your assertions.
Yes to David and IanL. Climate science needs to get beyond the naive idea that every little wiggle in the average global temperature can be “explained” as a direct result of some “forcing” via a simple linear relationship and a fixed “sensitivity”.
“You have to remember that these are natural unforced variability which is self-canceling in the long term.”
You need to worry about more than just ENSO. For self-canceling, the long term might be very long (~30,000 years).
http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/Emergence.climate.merge.25.10.11.pdf, figure 10.
Yeah, what causes the centennial- to millennial-scale temperature variations such as the Minoan, the Roman, and the Medieval Optimums and the intervening little ice ages?
Those need to be understood, whether they have one cause or more, before we can safely guess about the next century’s temperature course.
If this last decade’s flat temperature course is from ENSO or some oceanic cycle, then we can expect warming to start up again in a couple of decades. If this last decade’s pause has been caused by the sun’s peculiar spot behaviour, then we may cool for centuries, even possibly for millennia.
Kim, you are right, but the IPCC is hiding the cause: it is the orbital forcing. Not according to the overly simplified Milankovitch calculations (eccentricity of the Earth’s orbit), carried out using a two-body analysis (Sun, Earth), but the real Earth’s trajectory (a three-body system), taking the gravity of the outer planets into account. The NASA JPL calculation of ephemerides takes the three-body problem into account, adjusting the Earth’s orbit accordingly. The IPCC (AR4 WG1, chapters 2, 6, and 9) all insist on two-body Milankovitch, rejecting further detailed study, requesting silence from me on it, and threatening me if I “disseminate” the TSU reply on this matter.
Exactly, take out the El Ninos and what do you have left?
You can take out El Ninos and La Ninas with a ten-year average, and what you have left is a rising temperature.
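The running-mean idea is simple to demonstrate on synthetic numbers (my own illustration, not actual temperature data; the trend, amplitude, and cycle period are assumed values): a window spanning a whole number of oscillation periods cancels the cycle and leaves the underlying trend.

```python
# Sketch (synthetic data, assumed parameters): a 10-year running mean
# removes an ENSO-like 5-year cycle riding on a linear warming trend.
import math

MONTHS = 12 * 40                    # 40 years of monthly anomalies
TREND = 0.015 / 12                  # assumed 0.15 C/decade, per month
series = [TREND * t + 0.2 * math.sin(2 * math.pi * t / 60)  # 5-yr cycle
          for t in range(MONTHS)]

def running_mean(xs, window):
    """Simple boxcar average over a sliding window."""
    return [sum(xs[i:i + window]) / window
            for i in range(len(xs) - window + 1)]

smooth = running_mean(series, 120)  # 10-year (120-month) window

# A 120-month window spans exactly two 60-month cycles, so the sine
# averages out (to floating-point precision) and consecutive smoothed
# differences reduce to the underlying per-month trend.
diffs = [b - a for a, b in zip(smooth, smooth[1:])]
print(f"max |slope residual|: {max(abs(d - TREND) for d in diffs):.2e}")
```

The caveat raised in the replies below also shows up here: if the record contains a cycle longer than the window (say 60 years), the boxcar does not remove it, and part of it masquerades as trend.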
Jim D’s “all that is left is rising temperatures” after 10-year periodic Nino/Nina cancellations doesn’t work for the last 15 years (just flatlining)
Hence Santer’s recent “>17 years” fudge, and Jim D’s “major” forcings (ie. minor ones unknown, and consequently their effects unknown)
Chief Hydrologist had it much better than Jim D: chaotic, dynamic, non-linear with no empirical knowledge of the thresholds of any of the parameters or how they interact with each other in any order
Who says I’ve not been paying some attention ?
“You can take out El Ninos and La Ninas with a ten-year average, and what you have left is a rising temperature.”
Rising ENSO too. ENSO shows trends at multidecadal scales. For instance MEI:
I think taking out basic parts of the climate/weather system is sort of like saying in medical research, “take out the heart, and what do we have left”?
Take out the El Ninos (and all the other observed multi-decadal cycles) and you have a net underlying warming rate of 0.04 to 0.05C per decade, or 0.7C over the 160+ year record.
That’s it, folks.
Assuming this record is more or less accurate, the unanswered question is: “WHY has it warmed by 0.04 to 0.05C per decade over the past 160 years?”
And no one has the full answer to that question.
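The "C per decade" figure quoted above is just a least-squares slope over the record. A minimal sketch (my own synthetic numbers, with an assumed 0.045 C/decade underlying trend and an assumed 60-year cycle; not the actual instrumental record):

```python
# Sketch (synthetic 160-year record, assumed parameters): the decadal
# trend is the ordinary least-squares slope of annual values vs time.
import math

YEARS = 160
TRUE_DECADAL = 0.045  # assumed underlying trend, C per decade
temps = [TRUE_DECADAL / 10 * y + 0.1 * math.sin(2 * math.pi * y / 60)
         for y in range(YEARS)]

def ols_slope(ys):
    """Least-squares slope of ys regressed against 0..n-1."""
    n = len(ys)
    xbar = (n - 1) / 2
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(ys))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

slope = ols_slope(temps)  # C per year
print(f"fitted trend: {slope * 10:.3f} C/decade")
print(f"implied change over 160 years: {slope * 160:.2f} C")
```

Note the fit recovers nearly, but not exactly, the assumed trend: a 160-year record holds about 2.7 periods of a 60-year cycle, so the incomplete cycle leaks a small bias into the slope, which is the C+W objection to fitting trends over windows comparable to known cycle lengths.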
Wrong analogy. Blood pressure monitoring; don’t sweat the brief transitory spikes.
My wife just told me that Christopher Hitchens has died. I am very sad.
Yes, me too. Sad to see him go. However, I didn’t agree with him when he was a young revolutionary communist and I didn’t agree with him in his later life when he moved well to the right – supporting US interventions in the Middle East, for example.
He didn’t say much on the topic of this blog, AGW, but I did agree with him when he wrote:
“The argument about global warming is not whether there is any warming but whether or not and to what extent human activity is responsible for it. My line on that is that we should act as if it is, for this reason…….”
You cite a good line from Hitchens:
(Don’t agree with his proposed course of action though, since the “argument” he states does not support it…)
The irony of Hitchens invoking Pascal’s wager when one considers his other major interests in public discourse is sad.
Actually that’s not quite right. How much of it is anthropogenic is one issue, but at least as important is how catastrophic the consequences are. A devout Thermotarian would disagree that that matters.
Extensive interview with Hitchens:
The interview is wide-ranging and wonderful. Thank you, Joshua.
But, it *is* an hour long! FYI. ….Lady in Red
My pleasure, M’ lady. So sorry ’bout the length.
Hitchens was an Olympic-level shark jumper, and I disagreed with him on much, but he was an interesting guy.
It seems we all have disagreed with Christopher Hitchens at some point, and I must confess to some unkind thoughts towards him at times; but anyone who can describe Mother Theresa as a “fanatic, a fundamentalist, and a fraud” must be a sad loss to us all.
I’m just wondering what constitutes a scientific paper these days. Am I being a bit old fashioned in thinking they should include such things as graphs, equations, data etc?
What attributes does the “uncertainty monster” piece possess that an op-ed wouldn’t?
Agree fully with your inference that Hegerl et al. was NOT a “scientific paper” (as others have also noted here).
Accuracy is FAR from being a scientific strong point.
The single calculation misses this planet having a multitude of velocities, which means that every latitude would also have to have its own calculation number.
Scientists took the fast track and screwed up science.
Interesting how pretty pictures and cherry-picked data get your attention when there is a vastly greater degree to science.
Generating the single calculation takes the changing of an orb into a cylinder by averaging the whole planet.
This then leaves out planetary tilting, velocities, angle of solar radiation, pressure differences, precipitation, etc., etc, etc.
Following strictly temperatures is the monster that has to be created from many different activities.
With the corruption of process that AGW has led to, I am wondering the same thing.
I figure probably another year before you’ll be deprogrammed enough to realize temperatures were NOT the path science should have taken.
So uncertainty is properly covered by the IPCC. Well, if that’s the case, why are we presented with certain future scenarios? Which we now know are based on, mmmmm, now what’s that word for when someone tries to trick you out of your money? Of course: it’s called fraud.
Scientists NEVER did understand the planet first and just jumped into the temperature data. Far easier to generate an incorrect path by generation of the single calculation. This then leaves out ALL the factors that created temperatures, from motion to velocities to solar penetration angles.
Stacey, fraud is an ugly word. People can perpetuate a fraud without being guilty of fraud. Ponzi schemes are sold by the believers, not the instigators. Since I am not all that eloquent, dumb as a box of rocks, eat up with the dumbass, or the Peter principle would be more descriptive of the situation.
So it is hard to tell if people that ignore evidence are being fraudulent, or whichever other option they may choose.
There is an interesting post on masterresource on the topic of the AGU communications sessions,
including a flattering assessment of my AGU presentation. A comment states:
“I am reminded of what Dr. North’s colleague Tom Crowley said in a Climategate 2.0 email: “I am not convinced that the ‘truth’ is always worth reaching if it is at the cost of damaged personal relationships.””
It seems like this is why many scientists don’t want to read the emails or criticize publicly other scientists’ work, if there is a personal/professional relationship.
I don’t have any relationship at all with Hegerl, Solomon, Zwiers, Stott; i have met Solomon a few times but have never engaged in a one-on-one conversation. I personally don’t have any qualms about criticizing their work. And I see no reason that I should have consulted with them first before interpreting their emails (which seem pretty self explanatory to me).
I am trying to translate this to others that I would be more reluctant to criticize publicly. If I disagree with Peter Webster, we hash it out over the kitchen sink. If I disagree with one of the faculty members at Georgia Tech working under me, I would say nothing publicly but advise them that they might find themselves in hot water. Other than that, truth is the objective.
Now translating this to a junior scientist, whose future prospects depend at least in part on the good opinions of senior scientists in their field: this is a somewhat tricky one, especially for pre-tenure academics, and keeping quiet is understandable. But it is hard to argue, especially for tenured scientists, that anything other than the truth is the objective.
How about commenting on McKitrick for displaying overt tribal behavior, calling someone he’s (presumably) never met, on a public forum, a “groveling, terrified coward?”
I will say it. Ross McKitrick shouldn’t have said that. It was intemperate.
I still like to hear what he has to say on climate issues.
Didn’t mean to make you cry, Bill. I apologize.
And yes, Judith also said that McKitrick’s comment was “intemperate.” I stand corrected.
My point was that in contrast to what she described, Judith’s willingness/reluctance to criticize colleagues and influential participants in the climate debate seems highly selective.
Joshua, here is a clue. I criticize scientific arguments. I criticize attempts to hide what went into making a scientific conclusion. I don’t typically bother with criticizing how one scientist chooses to criticize another scientist.
As yours seems. Maybe you are both purposeful. Geez oh boy.
Me, on the other hand, I am entirely impartial ;)
the above @ Joshua.
Just for the record I think the comment by Ross was intemperate and unnecessary.
I agree, much like, if you leave out ‘intemperate’, most of Joshua’s
Thanks for reading, Gras.
It’s nice to know you care, especially in this holiday season!
Agree 100% with you.
Critique weak spots in the science and leave the ad hom insinuations to others.
Where the emails reveal uncertainty by even the promoters of the “consensus” view, this is a valid reason to suspect that things are more uncertain than the “most”…”very likely”… claim would have us believe.
As a result, I see absolutely nothing wrong with referring to these emails in stating in an op-ed why you have concluded that there is more uncertainty than the “consensus” view concedes.
All very logical and has nothing to do with ad homs. (as some posters here try to insinuate).
At this point, this discussion is old and offers no new contributions. That’s unfortunate.
I mean, for example, did all the participating scientists agree on the term ‘most’? No, and e.g. Singer, Akasofu and Curry are making as meaningful a claim as those who argue that it is not possible to show that Santa does not exist because you can’t prove a negative. More advanced and nuanced reasoning skills have been expected of Curry, and she just does not deliver.
Is some quantification greater than that which no one would agree means ‘most’ — no one would agree that less than fifty percent means ‘most’ – to be reasonably and meaningfully inferred? Yes. Further quantification is needed and it is worth understanding how and why this does not affect the core evidence, reasoning and meaning behind the main summarizing statement of the science constitutive of AR4.
Curry has described the IPCC statement (re ‘most’ and ‘very likely’) as “mostly” a negotiation. While not requiring quantification, her use of ‘most’ or ‘mostly’ often requires explanation of a sort that she never even attempts to provide. To be sure, Curry’s posts often have a kernel of accuracy; but they routinely take a wrong turn, in part due to a pattern of unself-critical thought.
Recognition of what is being satisfactorily addressed through what has been a self-conscious process of change for the IPCC and its engagement with scientists, the public and governments, both as a result of internal and external corrections and challenges, would really be more realistic and productive.
Papers don’t usually fund themselves, and I am more than a little surprised why anyone would pay for this one since objectively speaking it provides no new insight, criticism or assistance. :-(
Judith uses entirely unqualified terms frequently to describe, in scientific presentations, phenomena she believes she has observed – for example whether “most” skeptics doubt that the Earth has warmed, or whether there has been a public “crisis” of confidence in climate science as the result of climategate.
Perhaps her focus on quantifying uncertainty would be better received if she applied similar standards to herself that she feels should be applied to others?
You are just hand waving. Stop wasting your computer’s electricity.
Your Judith hating is tedious. Try to find yourself a girlfriend. We are really tired of your compulsive yammering.
Why anyone pays for climate science papers is really the question. It’s a modern ideology mixed with a dash of science. Martha’s posts are the most revealing, especially her last paragraph.
Surprised that JC and her Praetorian guard do not see the asymmetry in this case. In an academic argument between peers, one side has access to unofficial/tentative/ambiguous/spontaneous material of the opponent; the other one does not. Of course these mails should not have been cited in this context. Leaks will probably be part of the process from now on, but leave it to the “auditors” to bite into that material.
That is an interesting comment. It implies a code of ethics that should be adhered to by academia. Which I think Judith is in touch with.
In response to Menne et al taking Watts surface station data, not properly attributing the data to Watts and publishing copyright protected material, Gavin Schmidt informed me, “If he didn’t want the data used, he should not have put it online.”
So perhaps the code of ethics should be more clearly worded.
Since when is this an academic argument?
“Gabrielle Hegerl, Peter Stott, Susan Solomon, and Francis Zwiers have published a comment to our paper “Climate Science and the Uncertainty Monster.” Webster and Curry respond. “
Sounds academic to me.
If the Menne–Watts incident happened exactly as you describe, then GS was clearly insensitive to the code.
:) You can fact check on realclimate in reference to the Steig–O’Donnell dust-up over who should be allowed to be an anonymous reviewer.
It is a food fight.
11 Feb 2011 at 4:38 PM
[Response: Nobody has a ‘right’ to review any paper. Editors choose reviewers as they see fit. I’m surprised no-one has pointed that out to you. – gavin]
Even if intellectual property is used without prior approval? That is why Watts asked to be allowed to review the paper. Perhaps he should have exercised other rights?
[Response:No, he should have exercised some common sense. If you don’t want people to use data, don’t put it online. And as above, editors choose reviewers, reviewers do not choose themselves. – gavin]
To save you a Google
Spoken like a true Team member.
“Your Judith bating is so tedious.”
More than tedious, it’s disturbing. Why are you so angry Joshua? Seriously. Your side still appears to be winning (as long we don’t count the science and that small matter of no warming for the last 13 years.) The MSM and the science establishment continues to successfully marginalize skeptics as blogosphere nut cases and professional deniers (bought and paid for by the fossil fuel industry.) Life is good. Why do you have to be so, well, Joshua-like?
Joshua is angry because he has already lost and he knows it.
Since we are voting, my judgment is that Prof. Curry’s citation of the revealed emails was appropriate, and we all owe a debt of gratitude to whoever released them. Major climate scientists made confident claims in public while privately disputing those claims and the confidence with which even justifiable claims might be made. Without those disclosures, we would not know what the quoted scientists actually believed, which was discordant with their public assertions.
Paraphrasing what was said of Richard Nixon, they held the truth so dear that they kept it to themselves.
WebHubTelescope wrote this: What is not precisely known is how high any of the secondary energy barriers are — and if this barrier is breached, then the temperature can conceivably free range until it reaches the next larger potential barrier. The Stefan-Boltzmann feedback is the primary concrete barrier, so it seems that a breach likely won’t happen and any cAGW scenarios would have to involve huge stable positive forcings to counteract the S-B feedback.
That is a small subset of what is not precisely known. The wobble of the Earth’s rotation and the perturbations of the Earth’s asymmetric revolutions around the Sun, and the many other small perturbations of one kind and another, are all chaotic. How a chaotic stochastic system (a dissipative system with non-linear throughputs) responds to chaotic stochastic drivers (“forcings”) is not well known for any system that has been studied (partially, though, for mammalian heart beat.) What the clouds will do next if CO2 increases is another particular unknown. The list of well-documented imprecisions, approximations, and unknowns is quite long, including variations in the inaccuracy of the Stefan-Boltzmann law itself (these are cited in Pierrehumbert’s book, Principles of Planetary Climate.)
I second his assertion that the changes in temps should all be scaled by the baseline approximate spatio-temporal average of 288K. That is another approximation, of course, because the spatially weighted average varies as a function of Earth’s distance from the sun, and we don’t have thermometers placed where we can compute an unbiased estimate of the estimand anyway.
OK, I will bow to popular demand and succinctly sum this one up:
Judith published a paper that dissed the disingenuous defenders of the dogma. In response to the team’s customary attempted drive-by debunking, Dame Judith played her hole card to win the hand. The usual suspects, who hate Judith because she is a woman and has strayed from the consensus reservation, squealed about STOLEN! emails, as usual. 96.8% of the participants in the discussion, told the Judith haters to shut up. If I left anything out, it is unimportant.
Those who are complaining about Curry and Webster’s use of the purloined e-mails to expose the hypocrisy of their critics are saying, basically, that the authors of the e-mails are entitled to a free pass to say and do any two-faced thing because on technical grounds the e-mails were “stolen” and therefore shouldn’t be used against them. That “inadmissible evidence” argument might work in a court of law, but we’re not talking about legal discourse here. However the e-mails came to be released, they’re out now, and unless they are classified or there is some other reason to prohibit their being read or quoted, which is doubtful, I don’t see what the problem is.
“I don’t see what the problem is.”
Well, it wouldn’t be so bad if the contents of the emails were taken at face value. But no matter how many times the phrase “hide the decline” is explained, its use continues to be distorted!
Yes, the use of some phrases was unwise. The emails also show up a problem with the workings of FOI acts. Requests for information, systematically co-ordinated, poured in to CRU, the purpose of course being to take up as much time as possible and to induce a technical breach in the workings of UK law.
Unfortunately, the blogosphere is full of sad geriatrics who, instead of learning to cultivate roses, have taken it upon themselves to spend their retirement hours disputing scientific findings they don’t understand, and don’t even want to understand. It’s doubtful that the authors of the UK’s FOI act imagined such a possibility.
The authors of the emails in question have been largely cleared of any wrongdoing. Learn to live with it, get over it and move on.
The party line rears its ugly head again. It ain’t working for you all. Most (68.725%) of the informed public know that climate scientists have faked their story. See Durban. Sorry.
“It’s doubtful that the authors of the UK’s FOI act imagined such a possibility.”
One of them got a surprise!
Nick, any sensible FOI law has exceptions for classified information. But in terms of domestic political deliberations, what’s the problem? Perhaps Blair had some internal problems in his government that he is sensitive about. Journalists are probably in a more disreputable profession than even Congressmen. On the whole, however, disclosure laws in the US have been a good thing. One can at least see who the politicians are taking money from and draw one’s own conclusions. I think business and government are a lot cleaner than they were 50 years ago. The contracting process is a lot more open and honest, and politicians have to be careful about conflicts of interest. Certainly, you don’t want to return to the 19th century, when corruption was rampant, all protected by a total lack of transparency? The only question is what are the legitimate limits of the public’s right to know. Bottom line is always adhere to the highest standards of honesty in all your dealings. And if something questionable comes to light, take the heat and admit error.
Blair’s problem is not just with classified matters. He says:
“If you are trying to take a difficult decision and you’re weighing up the pros and cons, you have frank conversations… And if those conversations then are put out in a published form that afterwards are liable to be highlighted in particular ways, you are going to be very cautious. That’s why it’s not a sensible thing.”
I think that’s true, and it does hamper people in their jobs in all sorts of ways. Which doesn’t help anyone.
RE: Here they are discussing the Holland FOI appeal, and their ‘interpretation of whether we actually ‘hold’ this information.’
cc: “Mcgarvie Michael Mr (ACAD)”
date: Thu, 10 Jul 2008 15:46:04 +0100
from: “Palmer Dave Mr (LIB)”
subject: RE: FOI appeal [FOI_08-23]
to: “Briffa Keith Prof (ENV)” , “Osborn Timothy Dr (ENV)” , “Jones Philip Prof (ENV)”
I was about to email you…. I am in the process of assisting Jonathan
with his deliberations and have drafted a response which I will forward
to you shortly.
I have been in communication with the ICO to get guidance on raising
additional exemptions at this point (it’s ok), and also with the Met
Office who have shared some of their approaches.
A key question has arisen on the interpretation of whether we actually
‘hold’ this information. The full explanation will be in the note I
send to Jonathan, but essentially I need to know the corporate interest
that UEA has in the emails requested in regards IPCC participation.
Recent guidance from the ICO indicates that it could be argued that
where a public authority has no interest in information (ie. personal
emails), it is not considered to ‘hold’ that information. The Met
Office are claiming that Dr. Mitchell’s participation in IPCC was
essentially outside it’s organisational remit, and any storage of emails
was for Dr. Mitchell’s personal benefit and nothing to do with the
organisation itself. The emails were not created by the Met Office, nor
used by the Met Office for it’s own purposes. ICO Helpdesk staff
indicated that this interpretation, given that the facts were as stated,
would find some favour with the ICO.
So…. What interest does UEA/CRU have in the IPCC correspondence and
work that you are doing?
>From: Keith Briffa [REDACTEDREDACTED]
>Sent: Thursday, July 10, 2008 3:34 PM
>To: Osborn Timothy Dr (ENV); Jones Philip Prof (ENV)
>Cc: Jones Philip Prof (ENV); Palmer Dave Mr (LIB)
>Subject: Re: FOI appeal
>no have not heard , will forward this to Dave
>At 10:32 10/07/2008, Tim Osborn wrote:
>>Hi Keith and Phil,
>>if I remember right, then the decision on Holland’s appeal needs to be
>>returned to him in about a week’s time. Have you heard anything from
>>Jonathan Colam? I thought he might ask us about it before making his
Nick, sometimes you just have to take the bad with the good. FOIA is a result of lack of trust in government. Florida has a Sunshine Law: every political meeting has to be public. It slows progress down in the hopes of reducing corruption. Before FOIA, there had to be reasonable suspicion and court involvement before information could be legally obtained to fight corruption. FOIA is an effective regulation to slow progress and fight corruption.
Seems like over regulation to avoid corruption is not a good idea in your opinion? Is it unintended consequences?
Any sensible FOI law has exceptions for classified information
The US government is well aware of this!
The email files may be a bit more difficult to hack than at the CRU, but if your master hacker has nothing better to do……………
Note, Nick, that Blair is claiming that GOVERNMENTS might want to have “frank” conversations. Expanding his comments to cover science is putting words in his mouth.
“Expanding his comments to cover science is putting words in his mouth.”
No, actually it’s not. The BBC took them out of his mouth. The Guardian fills in those dots:
“If you are trying to take a difficult decision and you’re weighing up the pros and cons, you have frank conversations. Everybody knows this in their walk of life. Whether you are in business – or running a newspaper – there are conversations you want to have preliminary to taking a decision that are frank .
“And if those conversations then are put out in a published form that afterwards are liable to be highlighted in particular ways, you are going to be very cautious. That’s why it’s not a sensible thing.”
Their walk of life. Not a stretch to include science.
TT, I’m not buying it. Freedom of information laws are a good thing and they have helped to expose a lot of abuses. If only they were applied to Enron and Goldman Sachs and Freddie Mac, we might not be in our current predicament.
What is your explanation of “hide the decline?” It was clearly an attempt to hide the divergence problem and the inevitable conclusion that tree ring proxies were probably of little statistical significance. Muller’s reaction to this is the correct one, namely outrage. It is the duty of every scientist who is ethical to show all the data, even if it is adverse and to discuss the problems it raises openly.
I think you are wrong about the burden of complying with FOI requests. Usually, its quite straightforward. Any well run organization has easy to access archives of data that might be subject to such requests. I have more confidence in the public to distinguish context and judge these things.
You are also wrong about the expectation of privacy. On any large companies computers, there is a notice on the login screen stating that this computer is for the conduct of company business only. Any information is subject to review and is not private in any sense. They are owned by the employer, at least in the US. Some companies allow limited personal use of company computing resources, but everything is still subject to review.
The problem here is that public policy issues are at stake. In medicine you will find much more transparency. In nuclear regulation, aircraft regulation, automobile regulation, more transparency is demanded. If you think that Ford doesn’t know that any internal emails are subject to review upon subpoena, you are mistaken.
It is the duty of every scientist who is ethical to show all the data, even if it is adverse and to discuss the problems it raises openly.
No. To be considered science, all of the data is used, period. If *any* data is counter to the theory, then the theory isn’t correct. This business of hiding data, or deciding to use dataset A but not B for any reason other than physical data acquisition problems, isn’t science. I don’t know how anyone allows the use of e.g. bristlecone pine datasets A and B but not C solely on the notion that C doesn’t trend like A/B. Unless there was a real and identifiable physical core acquisition problem obtaining C, then C should be used. Any theory that can’t handle dataset C isn’t a theory. It might be damned interesting, but it’s not science.
It’s not just ethical to account for all data. It’s REQUIRED by definition.
Interesting. When it was passed opponents argued enemies of government could use the FOIA to paralyze government.
“hide the decline” means “fudge the data” and you can dance around that until the Arctic refreezes, but it is what it is.
“The team”, are, as they like to point out, the smartest guys in the room. They did not simply use that- and so many other- terms as a result of poor word choice,and they are not just ‘boys being boys’.
“Yes, the use of some phrases was unwise. The emails also show up a problem with the workings of FOI acts. Requests for information, systematically co-ordinated, poured in to CRU, the purpose of course being to take up as much time as possible and to induce a technical breach in the workings of UK law.”
Wrong. Since I coordinated the 50 or so requests, I can tell you exactly that I knew CRU could very well COMBINE all requests into one request. In fact, this is what they did, as the 2nd batch of mails details.
FURTHER, we knew full well that the request could be met in under 18 hours, the statutory limit. CRU had no problem denying requests that took over 18 hours. I tested that by giving them an FOIA that would take more than 18 hours and they rejected it on the grounds that it exceeded the 18 hour limit
CRU actually ANSWERED the 50+ FOIA we sent in.
The violation, nitwit, came WRT Holland’s FOIA.
AFAEKS, that was Curry w/o Webster. Peter can speak for himself.
Thanks for the correction.
There are times when I think that the following sums up the debates on climate change:
‘I played for Aberavon in 1898,’ said a stranger to Enoch Davies.
‘Liar,’ said Enoch Davies.
‘I can show you photos,’ said the stranger.
‘Forged,’ said Enoch Davies.
‘And I’ll show you my cap at home.’
‘I got friends to prove it,’ the stranger said in a fury.
‘Bribed,’ said Enoch Davies.
(Dylan Thomas, Quite Early One Morning)
Yes, that is why you must start by establishing the grounds for changing your belief up front.
Am I correct in assuming that you would accept the modification below to the AR4 text for AR5?
“Most [→ Some] of the observed increase in global temperature since the mid-20th century is very likely [→ more likely than not] due to the observed increase in anthropogenic greenhouse gas concentrations.”
This is an advance since the TAR’s [→ AR4’s] conclusion that “most of the observed increase in global temperature over the last 50 years [→ since the mid-20th century] is likely [→ very likely] due to the observed increase in greenhouse gas concentrations.”
This would appear to be a reasonable modification to put AR5 on a sounder scientific basis, in light of the ”insufficient traceability of the CMIP model simulations for the IPCC authors to conduct a confident attribution assessment“, as you put it in your comment to Hegerl et al.
Except that reference to “the observed increase in global temperature” hides the significant uncertainties in what the global temperature has actually been. For example, is this the surface temp or atmospheric? If the latter then the UAH record does not show a warming that is consistent with any GHG influence. If it is just the surface temp then there are other problems.
In short, the reference to temperature changes must itself be qualified, prior to attribution. We are not all that sure what the temperature has been, much less why.
Some of the observed increase in global average surface temperature since the mid-20th century is more likely than not due to the observed increase in anthropogenic greenhouse gas concentrations. The majority of this observed warming is noted in the Arctic region which should be subject to polar amplification. Much less warming is noted in the Antarctic region which deserves future investigation. Amplification of GHG response in the upper troposphere is inconsistent with previous estimates. This deserves further investigation.
Based on the current rate of warming relative to GHG emissions, future warming on the order of 1.5 to 2.5 C is predicted by the end of the 21st century.
“Hang on guys! Something isn’t happening the way it was supposed to.
Calling the output of the surface statistical models “observations” is just as misleading as calling GCM runs “experiments.” There is enormous uncertainty as to what the global temperature (and heat content) has actually been over the last 50 years. The community is trying to explain a diagram that has no underlying validity.
RE: ““Hang on guys! Something isn’t happening the way it was supposed to.”
And as far back as ’97 this is how it was dealt with:
” . . .Either the scale needs adjusting, or we need to fudge the figures…
I’ll let you know if I find anything else.”
Perhaps the ‘context’ would help here . . .#0723
date: Wed, 9 Jul 1997 16:40:37 +0100
from: Elaine Barrow
to: Mike Hulme
There is a slight problem with SPECTRE. If you select UKTR, IS92e and the max temp. change and then look at the graph for Santon Downham in the extremes section for maximum temp, you’ll see a warming which is off the scale.
I have checked the figures for this site and they are correct – the problem
is that it is in a land grid box and for August and September there appears to be an anomalous warming cf. the other months (3.2 and 3.8 deg C, resp. cf. approx. 1 for the other months).
Either the scale needs adjusting, or we need to fudge the figures…
I’ll let you know if I find anything else.
Climatic Research Unit
University of East Anglia
“Some of the postulated increase in the globally and annually averaged land and sea surface temperature anomaly is more likely than not…etc.?”
You may think so, but Judith couldn’t possibly comment. Her uncertainty monster has been delegated to answer these kinds of questions, and he’s just not sure of the answer right now. It’s unlikely he ever will be, IMO.
Some (>10%) is very likely (>90%) . . . I would be ok with very likely confidence level for >10%.
At the likely level, I would go >30% of the warming.
But as I wrote in the null hypothesis paper, this binary approach to the attribution statement is unsuited to the problem, whereby you pick a threshold (most or whatever), then put a confidence level on it. WHat you want is a likelihood distribution of the different combinations of anthro-natural warming, e.g. 90-10, 80-20, 70-30 . . . 10-90.
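A minimal sketch of what such a likelihood distribution over anthro–natural splits might look like. The weights here are invented purely for illustration and are not anyone’s actual assessment:

```python
# Illustrative likelihood distribution over anthropogenic/natural warming splits,
# of the kind proposed above. Weights are invented for the sketch only.
splits = [(90, 10), (80, 20), (70, 30), (60, 40), (50, 50),
          (40, 60), (30, 70), (20, 80), (10, 90)]   # (% anthro, % natural)
weights = [0.05, 0.10, 0.20, 0.25, 0.20, 0.10, 0.05, 0.03, 0.02]

total = sum(weights)
probs = [w / total for w in weights]   # normalize to a proper distribution

# From the full distribution, threshold statements like "P(anthro share > 30%)"
# fall out as sums, instead of being the sole committed-to claim.
p_more_than_30 = sum(p for (a, _), p in zip(splits, probs) if a > 30)
print(f"P(anthropogenic share > 30%) = {p_more_than_30:.2f}")
```

The point of the sketch is that the binary “threshold plus confidence level” statement becomes a derived quantity of the distribution rather than the whole result.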
Has anyone tried to put this into a fuzzy logic framework?
It seems like this is what people are struggling to do.
this is exactly what I proposed
“Has anyone tried to put this into a fuzzy logic framework?”
That seems to be where most of the resistance to her ideas lies. Makes perfectly good sense, though.
I’d be surprised if “fuzzy logic” came out with a different answer to that currently proposed by mainstream science ie GHG concentrations need to be controlled.
It would only make “perfectly good sense” to climate sceptics/deniers if it did and I turned out to be wrong in saying that. If not, they’ll have a jolly time poking fun at the concept, though.
“At the likely level, I would go >30% of the warming.”
You mean its likely that more than 30% of the recent warming is anthropogenically caused, right ?
How much more than 30%? And how much more warming would you say is still to come even if GHG concentrations stayed constant? How much cooling has been caused by particulates and how long will that last? What are the risks of allowing CO2 levels to rise uncontrollably? Are the risks of acting greater than the risks of inaction?
Your uncertainty monster says “pass” to all these questions, right? And that’s what you want the IPCC to say too?
You are hypothesizing what Judith’s statement “means”.
HadCRUT3 tells us that over the period 1950-2005 the “globally and annually averaged land and sea surface temperature” increased at a linear decadal rate of 0.108°C per decade or 0.6°C over the 55-year period.
Accepting JC estimates, we would now accept that it is
– more than 90% certain that more than 10% of this warming, or more than 0.06°C was caused by the increase in human greenhouse gases
– more than 66% certain that more than 30% of this warming, or more than 0.18°C was caused by the increase in human greenhouse gases
Are you still following me?
Atmospheric CO2 level was 308 ppmv in 1950 and 379 ppmv in 2005.
Taking the logarithmic relation into account, this means that
– there is a greater than 90% likelihood that the 2xCO2 temperature response will be greater than 0.2°C
– and a greater than 66% likelihood that it will be greater than 0.6°C
This sounds quite reasonable to me, tempterrain.
How about you?
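For what it’s worth, the arithmetic in the comment above can be sketched in a few lines. The inputs are the figures quoted in the comment (HadCRUT3 trend, CO2 levels), not independently verified here:

```python
import math

trend_per_decade = 0.108           # HadCRUT3 linear trend, degC/decade, 1950-2005
warming = trend_per_decade * 5.5   # ~0.6 degC over the 55-year period

co2_1950, co2_2005 = 308.0, 379.0  # ppmv, as quoted above
# Logarithmic forcing relation: scale the attributed warming up to a CO2 doubling
log_scale = math.log(2.0) / math.log(co2_2005 / co2_1950)

for frac, likelihood in [(0.10, ">90%"), (0.30, ">66%")]:
    attributed = frac * warming    # lower bound on GHG-attributed warming
    t2x = attributed * log_scale   # implied lower bound on the 2xCO2 response
    print(f"{likelihood} likely: >{attributed:.2f} degC -> 2xCO2 response >{t2x:.1f} degC")
```

Running this reproduces the 0.2°C and 0.6°C lower bounds on the doubling response given above.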
What might “sound quite reasonable” to you is neither here nor there.
There’s no hypothesising what Judith has said. You obviously didn’t notice that my post consisted of 8 questions. You can know that a sentence is a question when you see this punctuation mark “?” !
However, I may have been able to provoke Judith into answering your question, but not mine. Despite what she says about building bridges, coming down from the ivory tower etc, she isn’t interested in answering too many questions.
Judith can’t create an atmosphere of doubt and uncertainty by doing that.
Thanks for response.
more than 90% likelihood that more than 10% of warming is due to the observed increase in anthropogenic greenhouse gas concentrations
more than 66% likelihood that more than 30% of the warming is due to the observed increase in anthropogenic greenhouse gas concentrations
This would “box it in” a bit better than IPCC has done in AR4. Let’s see if IPCC revises AR5 to reflect the “uncertainty” a bit more realistically.
Judith saying that there it is very likely that more than 30% of the warming is due to anthropogenic GHG concentrations is rather like an economist saying that it is very likely that the US national debt is more than $3 trillion.
Its actually about $15 trillion, so, of course, that statement is true but it gives quite the wrong impression.
Judith is careful to say only things on this blog that give a certain impression without actually being untrue. She’s probably slipped up a few times in this regard, and when she does it no doubt causes her some embarrassment with her more straightforward colleagues, who are happy to walk on only one side of the street.
Another silly analogy from the Judith haters. We know what the national debt is, to the penny. Not hard to make a statement on that with a high level of confidence. Try something else.
Dr. Judith Curry has given her educated estimate on the likelihood of a certain %age of the warming since 1950 being due to human GHGs, giving two likelihoods for two different %ages.
Are you telling me that you have more knowledge in this area than she does and are therefore correcting her estimate?
Get serious, mate.
You ask “Are you telling me that you have more knowledge in this area than she does and are therefore correcting her estimate?” Goodness me – surely you’re not invoking an “argument from authority”, are you? I thought you guys didn’t agree with doing that?
However, as it happens, I’m not disagreeing with her estimate I’m saying she hasn’t given one. See next part.
Firstly, just because I, and many others, disagree with Judith doesn’t mean we hate her. I’m sure you’re smart enough to know that really.
Secondly, if you’d like another analogy, and there are plenty, it would be a tobacco company suggesting that the likelihood of a typical smoker suffering serious health damage was over 10%. The true figure might be over 50%. So the statement, strictly speaking, might be literally perfectly true but misleadingly gives the wrong impression. It’s just no sort of estimate at all.
Judith is smart enough to play this game on blog after blog. She uses lots of “?”s, even mid-sentence. She’ll say certain articles are interesting. She’ll avoid saying she agrees with them, though. She doesn’t want to do that – she just wants you to think she might. On the other hand, she doesn’t want her climate science colleagues to be able to challenge her. It all has to be deniable.
He’s really not. It’s kind of sad.
There you go again, confusing things.
An example of an “argument from authority” is citing the fact that the political leadership of the Royal Society, the NAS or any other august scientific body has endorsed the IPCC climate summary report, therefore everything in this report is, by definition, absolutely true, despite the fact that the RS or NAS leadership has no specific expertise in the scientific discipline of climatology.
When Dr. Curry, an acknowledged expert on climate, states her personal, scientific opinion that the likelihood of X is Y%, this is an expert opinion.
If Peter Martin, who is no expert on climate, rejects this opinion, without even presenting any scientific justification for his position other than citing the IPCC report, this is more like arm waving by a non-expert.
I hope you can think about this a bit, so you will be able to grasp the difference.
You write about Judith’s expert opinion. I notice that you yourself have asked her to clarify herself several times and even now just what she is saying isn’t easily intelligible. Clarity isn’t Judith’s message these days.
If she is saying that only 30% of the 20th century warming is of natural origin, that’s fine, providing she writes it up and publishes her conclusions in the proper manner. Saying, on an uncontrolled blog, she “would go >30% of the warming” isn’t at all the right way of going about it and she surely knows that.
Taking the forcing alone from the IPCC forcing histogram, and using the period from pre-industrial to 2005 as they do, the anthropogenic component (0.6-2.4 W/m2), allowing for its error bars, and the natural (solar) component (0.06-0.3 W/m2) give an anthropogenic fraction ranging from 66% to 97%, with the most likely value (1.6 and 0.12 W/m2) being 93%.
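As a quick check on those percentages, the fraction is just the anthropogenic forcing over the sum of anthropogenic and solar forcing, using the ranges quoted above (the code reproduces roughly the 66-97% range and the 93% central value, up to rounding):

```python
# Anthropogenic share of (anthropogenic + solar) forcing, using the W/m^2
# ranges quoted in the comment above (pre-industrial to 2005).
def anthro_fraction(anthro_wm2, solar_wm2):
    return anthro_wm2 / (anthro_wm2 + solar_wm2)

low  = anthro_fraction(0.6, 0.30)   # low anthro bound paired with high solar bound
high = anthro_fraction(2.4, 0.06)   # high anthro bound paired with low solar bound
best = anthro_fraction(1.6, 0.12)   # central estimates

print(f"range {low:.0%} to {high:.0%}, best estimate {best:.0%}")
```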
If we take the period since 1950, the solar-change contribution goes down, because the solar part hasn’t increased since then and may be now negative.
True. On the basis you propose, the solar impact after 1950 is lower than it was before 1950.
But there is a lot of different opinion out there on just how high the total long-term solar impact was over the entire record.
IPCC puts it at 7% (as you confirm), but several independent solar studies put it at an average of ~50%, as a result of the unusually high level of 20th century solar activity (highest in several thousand years).
These studies show a higher solar impact before 1970 than thereafter.
But you are correct, that by “cherry-picking” 1950 as the start date, IPCC have given the AGW hypothesis some “breathing space”.
Clearly, this is all guess work. We can’t even agree on the extent of the warming, or if it’s in any way unusual, much less dangerous. Moreover, we’re not including ocean temps. The supposed warming is limited to 30 percent of the planet. And not for nothing, but we’re only talking surface temps as the atmosphere doesn’t seem to be cooperating. So when you say Judith that you’d be comfortable with the statement that > 30 percent of the warming is likely cause by man, I can’t see that you’re saying anything particularly useful.
I think you hit the nail on the head, Pokerguy. The kind of uncertainty that is present in these statements arises from the immaturity of AGW as a scientific hypothesis, and is not of the type that lends itself to meaningful quantification. Not that the hypothesis is bad or good, it just hasn’t been around long enough to have been vetted and shaped through a process in which skepticism plays a vital role. And the science behind the hypothesis will remain stunted for as long as its stakeholders continue to short-circuit this process by stifling any voice that raises serious questions about them.
Curry, et al
Good job on finally attempting to add some rigor to the estimation of accuracy, error or confidence for all IPCC claims. And I especially appreciate the call out to fields like mine (Space Systems; NASA, Defense and Commercial) as a gold standard for how uncertainty and traceability are handled.
As a senior systems engineer I do a lot of mission and program reviews, including how well the system can answer the science questions and to what confidence. Safety is also driven by absolute understanding of uncertainty. Consider the quality of uncertainty required to KNOW how long you have from detecting a problem at launch to making the call to destroy a manned mission because it is flying off target. Think about the trade here (lives on the ground versus the lives of the crew).
In the Space arena we understand the power and deception of statistics. Painfully. Challenger and Columbia were lost because someone miscalculated the risks, tried to read too much into past patterns where destruction was randomly avoided, not safely avoided.
From knowing how data and statistics decay over time and distance (e.g., orbit propagation and determination) to knowing how to allocate error through the system of elements and drivers (thermal noise in the sensor vs pointing noise from the spacecraft to computation error in the ground calculations) we do know how to audit error and uncertainty.
Your proposal for folks like us to help IPCC trim up their current claims is a good one. A job we do internally on every mission through reviews, etc.
Good point about “Consider the quality of uncertainty required to KNOW how long you have from detecting a problem …….. Think about the trade here (lives on the ground versus the lives of the crew).”
Similarly, we should consider how long we have from detecting a problem with the atmosphere to being able to fix it. In a sense, we, and our descendants, are all the crew and lives are at stake. A good engineer will always give safety the highest priority.
Specifically, WHAT lives are at stake, tempterrain?
Specifically again, HOW is this the case?
Please try to be specific, citing clear references to support your claim rather than just arm waving.
Lives are at stake in several ways. Judith herself has stated in the scientific press that AGW is responsible for increased frequency of hurricanes, for example. Mind you she may say, or give the impression she’s saying, just the opposite on this blog!
IMO we, or most of us, could probably live with that. The choice is essentially an economic one:
You may wish to ask our host whether or not our she is currently of the scientifically qualified opinion that AGW has resulted in an increase in the frequency and magnitude of hurricanes.
I seem to recall that this postulation has been shown to be incorrect (or, at least, so uncertain that it cannot be claimed to be true).
I also seem to recall that our host has more or less confirmed that this is her current thinking.
But if you have other evidence, please bring it forth.
Your video clip featured a faculty member teaching international relations, talking about global warming.
PS If this dude wants “July temperatures in December”, all he has to do is move to your country.
No, I never said that. Our paper clearly stated that there was no change in frequency; we argued that there is an increase in the percentage of Category 4 and 5 hurricanes. Read my previous hurricane post.
You haven’t watched more than a couple of minutes have you? The guy you’ve seen is just doing the introduction to Frank Ackerman.
Yes OK. If you feel I have misrepresented your opinion, let’s all be crystal clear about what you have and haven’t said. I certainly wouldn’t wish to do that. This is the testimony you gave to the US Congress:
“the total number of hurricanes has not increased globally, but the
proportion of category 4 and 5 hurricanes had doubled, implying that the distribution of hurricane intensity has shifted towards more intense hurricanes. The following paragraphs summarize the arguments and data that support the link between increased hurricane activity and global warming, including uncertainties in the data and its interpretation.”
Yes, that is exactly what i said. I did not say the number of hurricanes has increased, which is what you said that I said. Intensity is increasing, not number
” increased frequency of hurricanes, ”
That is what we call wrong.
admit it. you got it wrong.
Steven and Judith,
Yes, I admit it. I got it wrong. I said ” increased frequency of hurricanes”, when I should have either said “increased intensity of hurricanes” or “increased frequency of high intensity hurricanes”.
Sorry about that!
Judith Curry and tempterrain
Maybe this is just a nitpick.
The testimony made in 2006 by our host before US Congress was apparently based on a comparison of the hurricane frequency and intensity for the period 1945-1955 and the “most recent period” 1995-2005. This showed a higher incidence of high category storms in the latter period.
Just looking at an update of the raw NOAA data which is cited, it appears that using today’s “most recent period” 2000-2010 instead of 1995-2005 would have given a different answer (i.e. no net increase in category 4 and 5 storms).
Am I interpreting the data correctly?
You say that Judith is just “nitpicking”. I’m not sure about that. It is important to distinguish between intensities and frequency of hurricane activity. According to Judith’s paper the intensities have increased generally and the frequency of the two most dangerous categories of hurricane has approximately doubled.
However, you are making the suggestion that if the data in the paper were brought up to date, different conclusions may be reached.
That sounds like a fair enough question to ask. I hope Judith will give you a clear answer on that.
This issue was addressed in my previous hurricane post, with updated data
“Similarly, we should consider how long we have from detecting a problem with the atmosphere to being able to fix it. In a sense, we, and our descendants, are all the crew and lives are at stake. A good engineer will always give safety the highest priority.”
Nobody has suggested neglecting that. The discussion is about how to do that with sound science, with accountability and traceability, and without Nature tricks, so that trillions of our money and our children's money are not wasted on foolishness.
“Nobody has suggested neglecting that.[Planetary saftey] ”
Really? You mean you’ve never seen any arguments suggesting that there are no grounds for concern on AGW and there really isn’t any need to control GH gas emissions? In the face of all the scientific arguments to the contrary that sounds a lot like neglect to me.
You keep making crap up. There are several billion people who believe that there is nothing to worry about, or that they can't do anything about it. Do you expect them to be alarmed and willing to give up trillions to solve a problem they don't believe exists, or that is not within their ability to solve? If you want them to do something urgent and drastic, you have to convince them. They have made it very clear that they are not just going to take your word for it. Nature tricks and appeals to dogmatic theory masquerading as settled science are not working for you. Smearing the nonbelievers is not working for you. Hating on Judith is not working for you. Your lot are feckless and impotent. Change your ways. Get smart. Get a backbone. Stop whining and get busy.
Hardcore climate deniers, overwhelmingly from the hysterical right-wing fringe, are a few percent of the US population, and less elsewhere, not “billions.”
You have failed to persuade scientists of your conspiracy-theory denials of basic physics and chemistry. You have failed in your crusade against scientists. Most of all, you have completely failed the test of reality; the world is warming, humans are the cause, and you’ve done nothing to support your assertions this radical change in the earth’s climate is safe.
Pro-science folks don’t need to convince the fanatic deniers. You need to convince scientists and the public that despite your nasty habits (lying, plagiarizing, falsifying data, threatening climate scientists with murder, threatening to rape children) your anti-science crusade should be embraced.
You’re losing that argument. Look at the polls. The Republican moderates are turning away from you and towards the science. Your denial is a sideshow, not a major issue.
As far as what “scientists” believe WRT CAGW, I’m sure you are aware that several hundred scientists have openly stated that they do not support the postulation of IPCC that AGW is a potential threat to humanity or to our environment.
Whether most of those contributing to the IPCC report are convinced of this premise is unknown – many are simply “doing their job” by contributing one or another piece of data and toeing the PC consensus line as closely as possible, in order to avoid problems.
Then there is the hardcore group, whose names we all know, who are known as the “Team”; these individuals dogmatically defend the mainstream paradigm and allow no admission of conflicting science.
That’s about how I see it stacking up today among “scientists”, despite phony “studies” by Oreskes, etc.
How do you see it, Robert? (Or have you not given it any real thought?)
…… several hundred scientists have openly stated that they do not support the postulation of IPCC that AGW is a potential threat to humanity or to our environment
You’re just playing the same game as the creationists. They too have their lists of those scientists who believe in the literal truth of the Biblical creation.
Opposition to aspects of biological science is based on fundamentalist religious motivation, just as opposition to aspects of climate science is based on right wing political motivation.
You are a pathetic whinging idiot, bobbie. Copenhagen, Cancun, Durban. The crowd of rabid warmista junketeers has dwindled more and more at each successive destination. By the time the increasingly irrelevant clowns get to Detroit in 2032, there will be about four of them left. And they will still be kicking the can down the road to the next meeting, in Addis Ababa, or wherever. Your lot are impotent, feckless failures.
Robert and Don
Whether you call them “rabid warmista junketeers” or not, polls have shown that the percentage of the population who believe that human-caused greenhouse warming is potentially catastrophic has dwindled since Climategate, the revelations of IPCC bamboozling, and the past decade's hiatus in global warming. That percentage reached a high point shortly after AR4, Al Gore's “AIT” film, the Oscar, and the Nobel Peace Prizes, all of which received extensive ballyhoo from the mainstream media.
The most significant “climate change” that has occurred recently is a change in the wind direction – it is no longer blowing in favor of IPCC and its CAGW paradigm.
As Bob Dylan wrote: “The times they are a’changin'”.
Yes Max, the climate scare done peaked a couple years ago. The handwriting is on the wall. That’s why, with the exception of the Chief Dude from Ethiopia and the head muckety-mucks from a few other countries nobody ever heard of, there were no “world leaders” at the sad Durban fiesta. I hope the punching bag clown Judith haters who haunt this board are invested in the carbon markets.
At the risk of oversimplifying something that is more complex than I have assumed, I have plotted your two likelihood points regarding various percentages of the observed 1950-2005 warming being attributed to human greenhouse gases versus the likelihood as stated in AR4 with “most” = more than 50%, as was agreed by Gabi Hegerl:
Does this give an approximate pictorial presentation of your estimate?
Thanks in advance for any clarification or correction.
Interesting way to plot this. I would extend the y-axis up to 100% (the IPCC “most” is arguably the range 50.1 to 95%).
Thanks. Makes sense.
One phrase that I am tired of seeing is “considerable uncertainty” without that uncertainty being quantified at the “executive summary” level of various things.
Under the “Precautionary Principle”, agreed to in Rio as part of the UN's “Sustainable Development” doctrine, scientific uncertainty is not to be considered a barrier to action. You don't have to take my word for it; simply look up the Wikipedia article on the “Precautionary Principle”, and the “Wingspread Statement” in particular.
Additionally, there is, according to the “Precautionary Principle” no need to establish cause/effect. Once you are aware of that context, what the IPCC does can be seen in a different light. It does not matter if climate change is happening or that it is damaging the environment in any way. What is important is that consensus is reached that CO2 emissions COULD impact climate and that the resulting change in climate COULD impact the environment in a negative way. It isn’t about saying that CO2 emissions ARE impacting the environment in a negative way.
Once consensus is reached that CO2 emissions COULD impact the environment and that the impacts COULD be detrimental, the UNFCCC has all that it needs to produce policy recommendations under the Precautionary Principle.
There is no requirement that the uncertainty be quantified, there is no threshold where the uncertainty becomes actionable, and there is no requirement for observations to show that it actually is a problem in any way to the environment. Simply reaching consensus that it could possibly be a problem is enough, scientific uncertainty notwithstanding. This should be renamed from the “Precautionary Principle” to the “It Could Happen” principle.
Once the IPCC produces its assessment that “it could happen”, that is all that is needed to enact various mitigatory policies under Agenda 21 “Sustainable Development”.
This is not about science. It is about “it could happen”.
What do you mean by “actionable uncertainty”? What action are you proposing?
If it is scientific investigation to reduce uncertainty, that is already happening.
If by “actionable” you actually mean “un-actionable” that’s the fallacy of treating an uncertain distribution and magnitude of harm as evidence that great harm is unlikely. If a car is heading towards you at 30mph, the consequences of the impact are very uncertain. It could kill you. It could merely break your leg. If it breaks your leg, it could be a simple long bone fracture or it could be something nasty, like an open fracture or a pilon fracture. You could do well in the hospital or get an infection . . .
Now, you might argue (if you were not very bright) that that uncertainty means that there’s no reason to avoid the car; you should instead focus all your intelligence and energy on adapting to the impact and surviving your hospital stay with minimal functional impairment.
Or you could get the hell out of the street.
“What do you mean by “actionable uncertainty”? What action are you proposing?”
There is no requirement that there be any particular level of certainty that something is going to occur before the UNFCCC proposes mitigating policy. There is no requirement that the events forecast are actually shown to happen over time, either.
All the IPCC needs to do is produce an assessment that it is possible that CO2 emissions “could” cause the climate to warm and that a warming climate “could” cause detrimental environmental impacts. There is no requirement that IPCC state with any degree of certainty that it believe CO2 emissions are going to warm the climate and that such warming will be enough to damage the environment.
Further, there is no requirement, after issuing mitigating policy recommendations, to monitor these scenarios and then retract the policy recommendations if it turns out that it never comes to pass.
CAGW is simply being used as the foundational enabling mechanism for Agenda 21 and “Sustainable Development”. Without that issue, they would have no global issue to justify their existence on a global basis. They need a global issue in order to hold these COP meetings and issue policy recommendations to the various global environmental bureaucracies.
It isn’t about if CAGW is actually happening or not, that is orthogonal to the issue. The principle states in writing that scientific uncertainty should not be a barrier to action.
So what if the chances are only 1 in 1,000,000? It doesn't matter. What if subsequent observations show that it is not happening at all? That doesn't matter either. It just does not matter. The only thing that could possibly make a difference would be for the IPCC to state that it is implausible that CO2 emissions can change climate, and that is highly unlikely, as at that point the UNFCCC would either cease to exist or have to create a new issue in order to justify its existence. It would also mean the end of “Sustainable Development” and Agenda 21 on a global basis, except for various local issues. So it isn't going to happen.
This is not about climate. This is about USING climate as an enabling mechanism for a global bureaucracy.
I know nobody asked…
You have constructed the question backwards. With what level of certainty can you assure me that radically changing the Earth’s climate is safe?
You need certainty, because you are proposing we change the climate in a radical and largely irreversible way. As the late Christopher Hitchens put it, quoting Jonathan Schell from another context: “We don’t have another planet on which to run the experiment.”
Uncertainty as to the scale and distribution of the harm does not mean safe. Back to the car accident: can you say with any certainty what your injuries will be?
Agenda 21, Fema recovery camps, science is everywhere…
helping themselves, for mankind. They don’t care about the price of gas, either.
Plausibility and the “it could happen” precautionary principle are saddled with the entire spectrum of risk: from absolutely zero possibility of an event happening to 100% certainty that it will happen. And therein lies the destructiveness of plausibility: paralysis, so that you do not act at all; going off and making a damn fool of yourself, squandering treasure for no useful purpose; or making a difference in the world for the good of humankind. History is replete with all three behaviors, but the one that is by far and away the most common is making a damn fool of oneself. If the paleoclimatologists' refrain to learn from the past is true, then it would be nice if they quit saying “…it could happen again”, because the most likely plausible scenario is that they will be…damn fools.
So at what point is Gabi going to notice that the stratosphere hasn't cooled since 1995, so there is no actual fingerprint there? Quite the opposite: in more enlightened times they would have concluded that the hypothesis is disproven. Neither is there any fingerprint in the troposphere, but then the good ship RealClimate told us not to expect that to be a fingerprint anyway, because it is also indicative of solar-induced warming. Maybe Gabi should know these things prior to attempting attribution?
The aerosol stuff is just completely made up. The truth about aerosol uncertainty is found on Stephen Schwartz’s site: Uncertainty is far too huge for any such rationalisation.
Using modelling studies to determine climate sensitivity presumes the models are capable of it but they palpably are not.
Finally, the fact that model outputs vary up and down is in no way proof that they have matched the ups and downs of nature. One model predicts a drought and another predicts a flood for the same region. So what? It is just another demonstration that they are unfit for the purpose of attribution studies. Hey, it's just injected randomness. We can all do that, but it is utterly meaningless! Predict something correctly, both spatially and temporally, and then you may have something useful. Until that point it is mere bloviating.
The emails demonstrate at least that they know all this (or nearly) but are more concerned with finding devious ways to sell the product. They sometimes worry about the truth being lost but they don’t worry too long or too deep it seems.
I would just like some of them to realise that they might be doing more harm than good. Of course maybe they do but they just don’t care.
Citation: Randel, W. J., et al. (2009), An update of observed stratospheric temperature trends, J. Geophys. Res., 114, D02107, doi:10.1029/2008JD010421.
After a spike following Mt. Pinatubo and a subsequent sharp drop, the lower stratosphere stopped cooling around 1995:
A picture is worth a thousand words
[No stratospheric cooling since 1995.]
What is interesting about the Mt. Pinatubo thing is that the hurricane ACE peaked just before. So Pinatubo may be happenstance or a trigger. It is hard for me to believe that the aerosol effect of Pinatubo alone is the cause of the change. That leaves a few interesting possibilities :)
So Robert, as you have now been shown, there has been no stratospheric cooling since 1995, so the facts say there is no actual fingerprint in the stratosphere for anthropogenic global warming. Now, I know there exist handwaving excuses for that (with, of course, zero foundation), but at the very least you can surely conclude that the IPCC authors misled the policymakers.
The thing is, if you are naturally pessimistic about man's effect on the planet, then you can read almost anything as being due to man. In the ’70s, fossil fuels were supposed to be the cause of the cooling; now, the warming. Maybe, just maybe, that extra 2 percent of CO2 per annum from man has only a 2 percent effect.
From IPCC AR4 Chapter 9, “Remaining Uncertainties”:
“…A further source of uncertainty derives from the estimates of internal variability that are required for all detection analyses. These estimates are generally model-based because of difficulties in obtaining reliable internal variability estimates from the observational record on the spatial and temporal scales considered in detection studies. However, models would need to underestimate variability by factors of over two in their standard deviation to nullify detection of greenhouse gases in near-surface temperature data (Tett et al., 2002), which appears unlikely given the quality of agreement between models and observations at global and continental scales (Figures 9.7 and 9.8) and agreement with inferences on temperature variability from NH temperature reconstructions of the last millennium. The detection of the effects of other forcings, including aerosols, is likely to be more sensitive (e.g., an increase of 40% in the estimate of internal variability is enough to nullify detection of aerosol and natural forcings in HadCM3; Tett et al., 2002)…”
The whole chapter avoids “internal variability” at all costs. The focus is always on forcing, even though there is little evidence that any of the forcings used can lead to a change in climate (i.e., a trend).
Volcanoes have an effect, but there is no trend. What else is there? It's quite clever, really: put CO2 forcing in a basket with other forcings that provide no competition.
And then they make a last ditch effort and appeal to the hockey stick-like data which only weakens the already flimsy attribution.
100 years is not significant for climate.
So it boils down to whether we even need a forcing on these time scales to explain the global warming trend, and the answer is no.
They avoid the fundamental questions of complex systems with long time scales because if they acknowledge that it’s possible that no forcing or cause is required, then it destroys their own argument.
It is extremely unlikely that you can attribute 100 years of climate change, because 100 years is not a statistically significant span when compared to known recent climate changes, which operate on 100,000-year time scales. It's like saying 8.76 hours is climatically important in one year.
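For what it's worth, the arithmetic behind that analogy checks out, taking the 100,000-year glacial-cycle figure above at face value:

```python
# The analogy above: 100 years out of a 100,000-year glacial cycle is the
# same fraction of the record as some number of hours out of one year.
fraction = 100 / 100_000           # 0.001
hours_per_year = 365 * 24          # 8760
hours_equivalent = fraction * hours_per_year
print(round(hours_equivalent, 2))  # 8.76 hours
```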
Not unless it’s a deep impact event!
A new Nature Geoscience paper by Markus Huber and Reto Knutti provides confirmatory evidence for the dominant role of anthropogenic greenhouse gases in mediating the warming observed since 1950, complementing the methodology described in AR4 WG1 with the use of energy balance models to arrive at a similar conclusion, but with greater confidence and greater attribution to the ghgs. After accounting for uncertainties, they estimate an extremely likely (95%) probability that at least 74% of the warming was due to radiative effects, of which almost all were anthropogenic, and almost all of those were due to ghgs.
Although I can’t evaluate the models used, I have a fondness for the energy balance methodology in preference to the approach used in AR4, because I see it as solidly grounded in principles simply relating the rate of Earth’s energy intake to the magnitude of forcings minus the rate of energy loss radiated to space; for these particular attribution purposes, there’s probably less that can go wrong with this approach in my view than with the simulations used in AR4 (although some of the modelers may disagree). My own earlier very rough back of the envelope calculations led me to very similar conclusions – at least 70% and probably more of the warming was due to the ghgs, with both solar forcing and internal climate modes playing a much smaller role. Recent results from transient climate response studies have led to similar conclusions, based on the same energy balance principles.
I doubt that it will be possible to become much more conclusive about post-1950 warming. A 100% certainty is never justifiable, but there seems to be little evidence on the horizon that would greatly alter the conclusions of all these studies. Even if the values for solar forcing, for example, were greatly magnified by presuming overlooked forms of solar energy input, the results would change little.
The estimates did contain one surprise for me. That was a continued negative aerosol forcing trend after 1980 despite observational data suggesting “global brightening” that has been attributed to reduced aerosol cooling. It may be, however, that although the forcing persisted, its net contribution to TOA flux diminished after that time, with a consequent reduction in the rate of aerosol cooling. This is something that would come out of the models but would be hard to estimate on an informal basis.
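To make the energy balance idea described above concrete, here is a minimal zero-dimensional sketch of the form C dT/dt = F − λT. The parameter values (λ = 1.2 W/m²/K, C = 8 W yr/m²/K, a 3.7 W/m² step forcing) are illustrative assumptions, not values from the Huber and Knutti paper:

```python
# Minimal zero-dimensional energy balance model: C * dT/dt = F(t) - lam * T
# All parameter values are illustrative assumptions, not the paper's.
def simulate(forcing, lam=1.2, C=8.0, dt=0.1):
    """Forward-Euler integration of C dT/dt = F - lam*T.
    forcing: one F value (W/m^2) per time step of dt years
    lam: feedback parameter (W/m^2/K); C: heat capacity (W yr/m^2/K)"""
    T, out = 0.0, []
    for F in forcing:
        T += dt * (F - lam * T) / C
        out.append(T)
    return out

# A sustained step forcing of 3.7 W/m^2 (roughly the canonical 2xCO2 value):
# temperature relaxes toward the equilibrium F / lam with time scale C / lam.
temps = simulate([3.7] * int(200 / 0.1))  # 200 years
print(round(temps[-1], 2))  # ≈ 3.7 / 1.2 ≈ 3.08 K
```

The attraction of this approach, as the comment notes, is that everything is anchored to the First Law: the warming is whatever the net forcing, minus the radiative restoring term, can pay for.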
Yeah, Fred, I looked at this paper and there are the usual questions. I agree that the assumption about aerosols seems to contradict the data concerning sulphates. And of course there are the uncertainties: much lower aerosol forcings don't affect the attribution, but they could impact the sensitivity estimate a great deal. Their central value (3.7 K) seems high compared to GISS or Schmittner. I'm still not sure I agree that the simple models are adequate, precisely because they ignore complex feedbacks that might be critical. And then there is uncertainty about indirect solar forcings.

Another small thing is the CH4 forcing, which seems to be almost constant even though I've seen data indicating a dramatic decline. Also, the decreases in CFCs seem pretty dramatic and have led to a decrease in annual additions to forcings; that doesn't seem to show up in the model.

Another thing I don't get is the 95% confidence concerning “natural variability”. This must, it seems to me, include things we don't understand, such as the causes of the Roman climate optimum, the Little Ice Age, etc. And yes, Fred, these can include unknown forcings and feedbacks. Our ability to simulate variability is not very good, so it seems to me difficult to rule it out.
I would argue that the fact that we can’t explain the significant climate variations of the last 3000 years means that the stuff that we must lump under “natural or unknown variability” must be large. We are still struggling to explain the ice ages and interglacials in detail. We know they correlate with various things that couldn’t have made much difference in total forcing and thus the simple models based on energy balance are incapable of predicting them.
Summarizing, it seems to be just the same old argument: If it wasn’t CO2, then what else could it have been? Certainly CO2 is contributing to warming. I remain skeptical about how much and how serious it will be. In general a warmer world will be good for life and humankind.
Hi David, ….hi to Fred as well:
We can see Fred is still behind in science, but David a few more
steps further ahead with his insight, quote:
…..”Summarizing, it seems to be just the same old argument: If it wasn’t CO2, then what else could it have been? Certainly CO2 is contributing to warming……..”, let me continue: But the figures won't add up; the present high-temp plateau was not predicted in the Millennium's AR3 and SRES graphs, and forecaster Henson sees “heat in the pipeline”
but in which place?
We have to concentrate on the “What Else”! This must be, and is, something completely underrated, forgotten, overlooked ENTIRELY, and which produces RF on a great scale, not only tinkering on the fringes…! It's already determined, calculated and published…..but
ignored by AGW, they shudder and back away from it as the Devil from the Holy Water….
Any climate forecast must be able to precisely forecast and determine the reason for and exact length of the present temp plateau…..
There is no additional heat waiting in the pipeline; the plateau will stay flat until 2039 and fall off thereafter…..
What I think this paper does, David, is quantify better the attribution of most post-1950 warming to anthropogenic ghgs, using an energy balance approach that is better grounded in the most relevant principle, the First Law of Thermodynamics. Even with the uncertainties regarding natural forced and unforced variability, it now appears extremely unlikely that the ghg contribution was less than 70%, which is less open-ended than the previous attribution using the word “most”. In fact, it’s the uncertainties that lead to that range as low as about 70%, because if they were narrower, the attribution would be higher. Figures 4 a and b suggest that if we only used a median value for unforced variability without an uncertainty range, the attribution would probably exceed 90%.
You refer to a “same old argument: if it wasn’t CO2, then what else could it have been?” Like you, I have always found such an argument very foolish, and it's not one I've ever seen in the climate science literature, although I've seen this type of argument in blogosphere commentary. The potent warming influence of CO2 is based on knowledge about CO2, and doesn't require one to attribute warming to CO2 because of the absence of alternatives. I think this is well understood within the science, but it sometimes gets lost in blog discussions. I've tended to refer to the concept as the “default hypothesis of AGW”, which holds that anthropogenic warming is what's left over after everything else is accounted for, and that if other things can have a range sufficient to account for everything, CO2 might have very little effect. This has never made any physical sense, and yet it seems to be a pervasive theme in some blog approaches to climate change that seek alternative explanations for most recent warming. I do agree, though, that because unimagined, unidentified mechanisms can never be completely excluded for complex phenomena, an attribution confidence of 100% is always unacceptable. In this case, 95% seems about right.
I'm reluctant to use the dreaded term “settled science”, because it's so commonly misrepresented to suggest that people are claiming that the entirety of climate science is settled, which of course it isn't. The attribution of most warming (or better, at least 70% of warming since 1950) to ghgs can properly be called settled science, and I expect it's seen that way among professionals who work in this area and have now had a chance to scrutinize all the evidence, uncertainties and all. That doesn't mean that it will be considered settled in the blogosphere, where nothing ever seems to be settled. Still, you're correct in identifying areas of the more distant past where our knowledge is more fragmentary. The one thing to be guarded against is the logical non sequitur that implies that if we don't know everything, we know nothing (or more precisely, that if we are highly uncertain about some things, we can't be more certain about others).
Finally, one aspect of the paper that bears relatively little on the attribution but is probably more uncertain than the attribution itself is the estimated climate sensitivity of 3.6. This is slightly higher than the most commonly cited mid range value of about 3 C. If it were lower, it would have very little effect on the apportionment of forcings, but would slightly shift the mean of the unforced distribution. However, the sensitivity range cited of 1.7 to 6.5 is already broad enough to explain the cited uncertainty range for unforced variability, and so a different mid point would not make the upper bound much higher. One of the reasons for the uncertainty in the sensitivity estimate is the uncertainty related to the curve describing ocean heat uptake. This involves a number of assumptions that are untestable for an equilibrium sensitivity value, which requires many centuries to be reached. It’s also one of the reasons I like some of the transient climate response approaches, which also use energy balance methods, but don’t depend on precise knowledge of ocean heat uptake rates to arrive at their estimates.
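To see why it is the uncertainties, rather than the central estimates, that set the lower bound of an attribution statement like “extremely likely at least X%”, here is a toy Monte Carlo sketch. Every number in it (the observed warming, the assumed ghg and non-ghg distributions, the acceptance window) is invented for illustration and is not from Huber and Knutti or AR4:

```python
import random

# Toy Monte Carlo: how uncertainty in the non-ghg terms sets the lower bound
# of the ghg attribution fraction. Every number here is invented for
# illustration (NOT values from Huber & Knutti 2012 or AR4).
random.seed(0)
observed = 0.65                        # assumed net observed warming, K
fracs = []
while len(fracs) < 20_000:
    ghg = random.gauss(0.85, 0.15)     # assumed ghg-induced warming, K
    other = random.gauss(-0.20, 0.15)  # assumed aerosol + natural terms, K
    if abs(ghg + other - observed) < 0.05:  # keep samples matching the data
        fracs.append(ghg / observed)
fracs.sort()
p5 = fracs[len(fracs) // 20]  # 5th percentile: the bound "extremely likely" exceeded
print(round(p5, 2))  # with these toy numbers the bound sits near (or above) 1.0,
                     # since the ghg term is partly offset by aerosol cooling
```

The wider the spread assumed for the “other” term, the lower the 5th percentile falls, even though the median attribution barely moves; that is the sense in which a narrower uncertainty range would push the quoted bound well above 70%.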
Fred, you didn't respond to my issues with the recent forcings. The aerosol forcings are critical to the high sensitivity estimates, and the paper seems to me to be clearly wrong on this, given the recent dramatic decline in CFC and methane forcings.
I just watched the video of the Michaels vs. Santer Congressional testimony; I believe Judith testified at the same hearing. I must say that Michaels clearly got the better of the argument. What about black carbon, Fred? Santer fell back on sulfate aerosols again while failing to acknowledge that the error bar is 100%. What about the sea surface temperature corrections for the mid-20th century? And indeed, this is something that I believe is not taken into account in this paper. Their aerosol forcing is monotonically increasing over the historical period, despite evidence of decreases over the last decade or two.
Fred, The more I see of this dispute, the more I believe that people like Santer just don’t respond to criticism very well. The thing I didn’t like was Santer’s condescending tone, which was not justified by his data and arguments.
David – My point is that there isn't any dispute about the conclusion that most post-1950 warming (almost certainly more than 70%) was due to anthropogenic greenhouse gases. That doesn't apply to blogs, however, because almost anything will be disputed there by someone. Your other comments indicate, to me at least, that this is an area of climate science that you appear not to understand – for example, the magnitude of the aerosol forcings is largely irrelevant to the attribution, as are the sea surface temperature corrections, although they are legitimate topics for discussion in their own right.
I have no comment on Santer and Michaels, because I haven’t watched this particular episode in detail. Again, though, the attribution doesn’t depend on Santer. Michaels, in presentations I’ve viewed, has made statements that are either blatantly dishonest or blatantly ignorant, or both, and has nothing to contribute on this issue, but I don’t want to digress into an extensive discussion of Michaels and his faults, because I don’t think he is taken seriously within the science.
Fred, I am disputing the almost certainly more than 70%. I think such a statement is unjustified by the evidence. I further think that strong conclusions about more than 50% are also unjustified. My disputation is now published (uncertainty monster paper). This does not mean that I have a more convincing explanation, it just means that the arguments that have been presented by the IPCC do not justify such strong conclusions, given the level of ignorance surrounding natural variability and feedbacks.
I swear, no matter how long I look at the curves I can’t find ANY evidence of CO2 effect. Sure, I believe there must be one, but why must my faith be so tested, that the effect should hide so well?
Given the scale and ferocity of the response of the natural world to the warming we have experienced, one could argue that a scenario in which 55% of the warming since 1950 is anthropogenic would be more concerning than one in which 95% of the warming is anthropogenic.
If 45% of the warming is natural variability, then unless something truly strange is happening, then that 45% has gone in and out of the climate system many times before.
Only now, however, are we seeing the emergence of 30,000-year-old carbon from the permafrost, the disappearance of 18,000-year-old glaciers, the discovery of human settlements buried in the ice for thousands of years.
If you posit a 45% natural variance in today’s warming, you are implying that the dramatic shifts in today’s climate came about due to a mere fraction of a degree of human-induced warming, meaning that the planet apparently responds much more dramatically to AGW than the 95% would imply.
Hmmm, ‘the discovery of human settlements buried in the ice for thousands of years’. I think you’ve got it Robert.
Now, what do you have?
Judy – I believe one of the virtues of an energy balance approach to attribution is that it requires both forced and unforced variability to conform to the First Law of Thermodynamics. This limits the range possible for unforced variability because ocean heat uptake depends on the forced/unforced ratio. Naturally forced warming is primarily solar, because volcanic activity post-1950 operated in a cooling direction. If one chooses to magnify solar forcing by positing amplifying factors not considered in the estimates, it would require very substantial magnification to change the attribution by more than a few percentage points. Similarly, feedbacks would only make a large difference to attribution percentages if climate sensitivity to solar forcing were much higher than to ghg forcing – perhaps more than 6 °C for the solar forcing equivalent of a CO2 doubling. I suppose it’s possible, but I know of neither evidence nor mechanism to support this.
The arguments you refer to presented by the IPCC probably don’t justify the 70% figure, as you say, but this figure comes from the new paper and not from the IPCC models. I’ve also read your Uncertainty Monster paper, and while I was impressed by many of the points, I was unconvinced that the evidence for either the IPCC attribution or this latest one was contradicted by anything in the paper. As one example, model uncertainties surrounding the magnitude of aerosol cooling have little direct relevance to the percent attribution of warming that should be assigned to ghgs, even though they signify that this is a point not yet well addressed in the models.
Fred, the disadvantage is that the simple model (and your argument) implicitly assumes all temperature change is forced (and it ignores multi-decadal and longer modes of unforced variability). And it ignores the second law of thermodynamics in thinking that heat mixed into the deep ocean can be recovered back to the surface.
Judith has written her opinion that the premise that human GHG emissions have “almost certainly” been the cause for “more than 70%” [of the observed 0.6°C global warming since 1950] is “unjustified by the evidence” and, further “that strong conclusions about more than 50% are also unjustified”.
In another post she stated that, in her opinion,
It is more than 90% certain that more than 10% of this warming, [or more than 0.06°C] was caused by the increase in human greenhouse gases
more than 66% certain that more than 30% of this warming, [or more than 0.18°C] was caused by the increase in human greenhouse gases.
This brackets her opinion on this matter in a bit more closely. She states the reasons in her “uncertainty” paper.
You seem to disagree with her position.
Let me comment on two points in your post to Judith.
This is a statement of faith IMO, which could be prefaced with, ”I believe that” and ended with ”and I am unaware of any other natural forcing mechanisms”
But, moving on to the solar impact on our climate, you write:
It is certainly true that the total effect of changes in direct solar irradiance is too small to have a great impact and, thus that ”it would require very substantial magnification to change the attribution by more than a few percentage points” based on this forcing component alone.
But there may well be totally other mechanisms of which we are not aware today. I would not refer to these as ”amplifying factors”, necessarily, but rather as ”other solar-caused climate forcing mechanisms”.
The cosmic ray/cloud mechanism might be one of these, for example, but there could well be others, of which we are simply not aware today.
Since we know a) that there were significant fluctuations in our planet’s climate before human GHG emissions played any role, b) that these historic fluctuations have correlated fairly well with swings in solar activity and c) since we are unable to explain these simply by differences in the solar irradiance, we should suspect that there may be something else at work here.
And, since the climate changes were as significant as (if not more so than) the changes we are now attributing to human factors, we know that their impact on today’s climate changes could also have been significant.
A final observation, which should move us into this direction in our deliberations, is the fact that the past decade has shown no warming despite atmospheric CO2 concentrations reaching record levels, and that this has occurred after the record high levels of solar activity in the 20th century (highest in several thousand years) have diminished significantly.
I would be less concerned with the impact of feedbacks, as these would operate no matter what underlying forcing factors are the root cause.
To me the basic problem is that we do not even know yet what these natural forcing factors were in the past and, by assuming that the solar impact is limited to that of direct solar irradiance alone and that all the other forcing impacts were anthropogenic, we are making a basic “assumption from ignorance”.
And, while putting it into more scientific words and backing it with specific arguments, I believe that this is the basic conclusion of Judith in her “uncertainty” paper.
The assumption from ignorance about solar is necessary because Fred is so certain of the effect of CO2. Ignorant in one makes it easy to be ignorant in the other.
Judy – I think you’ve misinterpreted the Huber/Knutti paper. Rather than assuming all warming is forced, an energy balance approach requires the total forced plus unforced warming to account for all the energy that goes into warming – that’s one of the great strengths of energy balance that adds to evidence provided earlier by the IPCC without the use of energy balance models. In their paper, Huber/Knutti allow for sufficient unforced variability to permit as much as 26% of the warming to be unforced (Figure 4), although a median value is close to zero, consistent with inspection of the recorded observational data for internal climate modes over the post-1950 interval. (This would not necessarily be true for other intervals, particularly shorter ones).
There is nothing in any of these estimates requiring deep ocean heat to resurface during warming episodes; the inertia involved is well recognized by these approaches. In fact, in general, energy balance approaches assume that the deep ocean is a nearly inexhaustible sink on timescales as short as a century or less, but in any case it isn’t a source. (It would only be a significant source during long term cooling intervals, when the upper ocean temperature has dropped below that of the deep ocean, so it’s irrelevant to the warming attribution).
The same is not true for the upper ocean. During internal climate modes that warm the surface, the upper ocean gives up heat. If internal modes had played a dominant role in the post-1950 interval, one would see a net loss of ocean heat. In fact, ocean heat uptake was strongly positive for the interval as a whole, with intermittent ups and downs, and this positive heat uptake excludes more than a small fractional role at most for internal unforced variability. Some of these concepts were recently addressed by Isaac Held in a more quantitative fashion, but they are intrinsic to energy balance analysis of forced vs unforced temperature change.
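The sign argument above reduces to a very small bookkeeping check. The code below is only a sketch of that logic, with a hypothetical function name and placeholder numbers:

```python
# Sketch of the ocean-heat-uptake sign test described above. Internal
# climate modes warm the surface by redistributing existing heat, so a
# surface-warming episode dominated by internal variability should be
# accompanied by a net loss of upper-ocean heat. Sustained surface
# warming together with positive ocean heat uptake instead points to
# external forcing. Inputs are placeholders, not observational values.

def internal_dominance_plausible(surface_trend, ocean_heat_uptake):
    """True only if surface warming coincides with net ocean heat loss."""
    return surface_trend > 0 and ocean_heat_uptake < 0

if __name__ == "__main__":
    # post-1950-like case: warming surface AND positive ocean heat uptake
    print(internal_dominance_plausible(0.6, 0.5))   # -> False
    # hypothetical internally dominated episode: surface warms, ocean drains
    print(internal_dominance_plausible(0.6, -0.5))  # -> True
```

The point of the sketch is only that the two observables must have opposite signs for internal dominance to be plausible; quantifying how much of a fractional role remains requires the uncertainty analysis discussed in the thread.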
Fred, they are very good at accounting for ENSO and maybe the 10-year modes of variability. They do not adequately account for the multi-decadal and longer modes. And setting a lid of 26% on natural internal variability leads you to certain conclusions. Held’s concluding statements:
“I am not aware of any study summarizing the strength of the global mean radiative restoring of low frequency variations in control simulations in the CMIP3/AR4 archive. It would be interesting to look at these if someone has not already done so. Supposing that we accept the model results for this radiative restoring of low frequency internal variability, what does this yield for the value of at which the heat uptake changes sign?
Like many others, I am watching with great interest and, I hope, an open mind, as the heat storage estimates from ARGO and the constraints imposed on steric sea level rise by the combination of altimeter and gravity measurements slowly emerge. And I would like to understand the effects of internal variability on heat uptake a lot better. But I see no plausible way of arguing for a small- picture. With a dominant internal component having the structure of the observed warming, and with radiative restoring strong enough to keep the forced component small, how can one keep the very strong radiative restoring from producing heat loss from the oceans totally inconsistent with any measures of changes in oceanic heat content?”
Define “deep ocean”.
Judy – The excerpt from Held you quoted, as well as the remainder of his post, makes the point I was trying to emphasize – that for the post-1950 interval, internal unforced variability could have contributed a minor fraction at most – i.e., most of the warming was forced. The Huber/Knutti paper quantifies this in more detail, along with the associated uncertainties, and then apportions the forced warming component (exceeding 74%) into ghgs and other factors, with ghgs accounting for almost all of it, again allowing for uncertainties. Held, Huber/Knutti (and others) agree on the general point about the relationship between ocean heat gain/loss and forced/unforced variability. I don’t see us arriving at much more precise figures in the near future, but the dominance of ghg-mediated warming is now supported by additional evidence not provided in the IPCC material. Given the weight of evidence that has accumulated, I suspect that this particular issue won’t be a subject of much more study in the literature, except to the extent that it’s indirectly addressed in studies aimed at something else.
Fred, the other problem I have with this argument is explaining the rapid warming from 1910-1940, then the dramatic cooling in the 1940s. You can’t explain these features using the same arguments.
Fred, My reply got misplaced a little further down thread. Another thing that I think is essential to getting this right is to understand what caused the warm and cold periods over the last 3000 years. Those must according to your logic have been forced by something. What is it?
Fred, I understood your point the first time. Repetition does not make it more believable. Your point is that negative forcings cannot affect attribution. However, they do affect sensitivity. And positive forcings affect attribution, and that may be a problem. I think the thrust of Michaels is that when you correct for black aerosols, correct the SST biases, and make a couple of other corrections, you end up with a warming trend that is less than half of that shown by the IPCC. Now this residual trend could contain other effects besides CO2. Or there could be complex combinations of forcings that end up reinforcing each other. This doctrine of simple linearly additive forcings seems to me to be naive, and Michaels’ analysis is also suspect for this reason. Santer’s response was to say that aerosols in fact would bring the CO2 temp change back up to the IPCC value – assuming, of course, that we know what they are. He seemed to imply that they amounted to -1.5 W/m² ± 1.0 W/m². Michaels did not have error bars on his data, and that was not good.
Some things that it seems to me we don’t know in detail are albedo changes, like people putting composite shingles on their house instead of cedar. The huge growth of cities has albedo effects. Another is indirect solar effects, which are currently not included. Yet another is black carbon. And, then there is the distribution of the forcings, which it seems to me is a much bigger effect than is being assumed in this paper.
I think, even though I need to check, Michaels’ criticism of Santer’s early paper was precisely an attribution issue. Basically, Santer was looking at a small region of the globe with models and claiming that data matched the models very well. The only problem was that he truncated the data. With all the data, the trend was virtually nil and didn’t match the model. As usual in this field, Santer said that Michaels was right but that it didn’t affect anything.
It does seem to me that attribution is a very difficult thing to prove. At one point, I thought that the so-called tropical hotspot was supposed to be the fingerprint. But the data does not show it. Now, they look at the outgoing radiation spectrum. It does seem to me that the hot spot is a big problem.
Summarizing: additional positive forcings or feedbacks for forcings other than CO2 do affect the attribution. I don’t think you understood what I am saying. Certainly, this is what Lindzen and Rapp say. To quote them verbatim: “If it wasn’t CO2, then what else could it have been?”
As a medical practitioner, you know as well as I do how hard it is to diagnose a complex problem like uveitis or rheumatoid arthritis. The diagnosis is based on a checklist of, I think, 10 things. In 50% of cases of uveitis, the cause is never determined; it’s called “idiopathic”, I believe. The planet’s climate is much more complex than even the human body. The paper relies on claims that we know not only all the forcings, but also their sign and magnitude.
I always thought that idiopathic meant the doctor was an idiot and the patient pathetic.
David – Like you, I don’t think repetition will serve much purpose, but I also tend to doubt that this issue will be revisited much by way of new data focused on exactly what percent attribution of post-1950 warming should be assigned to ghgs. The IPCC attribution was “most” (more than half), and that was already solidly supported prior to this paper. My reasons for seeing the current upgrading of that attribution to more than 70% as accurate are given above, and anyone interested is encouraged to read the paper with these figures in mind. To recapitulate only briefly, the paper estimates quite large uncertainty figures on variability, which is why an attribution as low as about 70% was reached; if only median values are used, the fraction exceeds 90% – there’s a large amount of “wiggle room”. Within that range, the emergent climate sensitivity values can also vary considerably, but this has very little effect on the attribution percentages, and is more a function of the strength of aerosol cooling. Regardless of how strong or weak the latter, the relative roles of warming influences change little, and the competitors to ghgs are so much weaker that small variations will mainly operate in the range of 70% to above 90%.
The competitors are shown in the figures, but they conform well to other data – for example, Ramanathan has extensively analyzed the black carbon forcing and shown it to exert a much smaller surface warming than the ghgs. Regarding Michaels, his testimony was discredited here and elsewhere many months ago – he has been consistently false (?dishonest), and if you revisit some of those threads, you’ll see examples.
In my view, this particular issue is settled in a general sense, but I acknowledge that precise figures will remain uncertain for the foreseeable future – whether it’s 70% ghgs (as this paper concludes), or simply more than half (the IPCC) will probably remain somewhat controversial. I doubt there will be much prospect that the more conservative estimate can be seriously challenged, but of course, surprises are always possible. My personal sense is that the 70% or larger figure is well justified by the data, and particularly reinforced by the use of energy balance estimates – something lacking in the earlier IPCC approach. Other readers who visit the paper (and earlier ones) can make their own judgments. That might make more sense than further rehashing of existing points.
Fred, I finally had an hour or two to read this paper in detail and it becomes clear why Judith and I are skeptical. It seems like every other statement is questionable and doesn’t have a reference. I do find your tone similar to Aquinas and to be very exegetical and reverential in nature. In the words of Walter Kaufmann, I believe, Aquinas knows everything and proves everything with equal certainty, from the most trivial statement to the most profound statement about the nature of God. You are either a direct descendant of Sir Francis Bacon or incredibly naive. You show none of the understanding and challenge typical of those who actually understand the theory and can make contributions. Repetition of the literature does not constitute science, but merely the mindset of the true believer.
My brother visited me the last 3 days and he agrees. He has never heard of you, incidentally. We reviewed the Vioxx fiasco. Merck gave the negative data to the FDA but buried it in the report. It was absent from the scientific literature. My brother followed the research done by Group Health into the FDA data that found the adverse data and made it a test case for evidence-based medicine in his HMO, preventing a number of deaths. Others with your attitude of course just lapped up the scientific literature citation and prescribed Vioxx widely. No doubt, your explication of the “scientific literature” would have been given with the same arrogance and dismissive attitude as your explication of Huber and Knutti.
So now for the details which Judith wisely didn’t want to take the valuable time to dig out.
1. The results apparently come from a large number of runs of the Bern 2.5D model. This model has spatial resolution but no vertical resolution, thus rendering it incapable of modeling such complex processes as convection and the moist adiabat.
2. The model is as usual tuned to the past record and then used to predict the future temperature and also the sensitivity. The parameters are tuned to one set of past observations, and the model is then assumed to predict the future. There are a host of questionable assumptions here.
3. They state that in optimal fingerprinting studies, the global energy budget is not necessarily conserved. To the extent that these studies rely on GCMs this is incorrect, as Lacis for example points out that GCMs do conserve energy.
4. “The energy balance model has no natural interannual variability but is able to reproduce the observed global trend of past temperature and ocean heat uptake.” Wow, and from a model that doesn’t include internal variability, we are supposed to be able to quantify the effects of this variability. Sounds like a leap of faith to me.
5. They assert that the effects of black and organic carbon are negative forcings, which is contradicted by the literature.
6. Stratospheric H2O is asserted to play a minor role, despite recent literature on this subject showing it can have a large effect on temperature trends.
7. “The model results for 1950-2004 … compare very well with recent observational estimates, partly as a result of calibrating the model to the observed total ocean and surface warming.” Wow, if I tune the model to agree with the data, it agrees with the data. In my book, that shows that the model can predict the past if the past is a known input. That gives me great confidence in its predictions of the future.
8. “The near constant ocean temperature over the past five years are not simulated by the model, and the causes remain unclear.” Oops, I guess even calibration isn’t perfect. Does this result from the model not including natural variability?
9. “We assume that all forcings have equal efficacy.” That is clearly badly wrong at least as concerns the strong effect of the distributions of different forcings.
10. “The contributions of stratospheric water vapor and ozone, volcanic eruptions, and organic and black carbon are small.” What’s the basis for this?
11. It appears that they use the WCRP model intercomparison program to calculate the effect of unforced variations. So, its the usual GCM models many of which cannot simulate these effects either. This is one of Judith’s problems with this type of analysis.
12. They make a strange statement about past temperatures that indicates that their model might be proven wrong if in fact the MCO and LIA were significant. “This is consistent with reconstructions over the last millennium indicating relatively small temperature variations.” So apparently, their model would be contradicted if Mannian reconstructions were significantly wrong, as is indicated by an increasing number of reconstructions.
13. “The ocean warming is similarly anomalous, but observations are more uncertain and the evaluation of model variability more difficult.” So, even the calibration period is not very well defined. But once again, despite this, the model can predict a century in advance.
14. “It is assumed that the feedbacks are constant over time and the forcing uncertainty can largely be captured by a time independent scaling factor.” What’s the basis for this? No reference is given.
15. If you dig into the methods section, it appears that even the simple 2.5D model is too costly and it is replaced by a neural network model for purposes of parameter estimation. At least that is what I infer from this section. Neural network models are of course questionable outside the range of the parameters for which they are trained. They are not “physics” based models.
Summarizing, the usual uncertainties apply to this paper’s results. Very simple models are used that ignore critical features of the forcings and vertical ocean and atmospheric physics. However, the model is tuned to match past global atmospheric and ocean temperatures. I would assume that global averages are used, but it’s not clear from the text. This would seem to me to be a gross simplification and to leave out almost all the critical physics. It is indeed strange that they claim to be able to rule out internal variability using a model that does not model these effects. I believe that my and Judith’s assertions that the paper uses weak methodology are clearly correct.
David – I believe you make a few points that are reasonable but which have relatively little influence on the final results. I’m thinking particularly of your points 5 and 14; the others are ones that I see as wrong or irrelevant for reasons I’ve described in past comments in this thread as well as in earlier threads. However, even with these concerns in mind, the Huber/Knutti paper encompasses rather large uncertainty limits that allow the role of ghgs to be as low as about 70%, but with a more likely attribution percentage closer to 90%. Even if they are off by some margin, the attribution is still going to be very high. More important, though, is that this paper draws conclusions that are derivable from many other sources, and which are largely independent of their particular models or of GCMs in general – they are not hostage to the imperfections of GCMs, a point I have tried to make on several previous occasions in considerable detail but perhaps not clearly enough, since many of your listed points seem to involve mainly model imperfections. One can arrive at similar estimates, for example, from Gregory and Forster, Padilla et al, Held, and from at least some of the data in AR4 WG1 Chapter 9, with particular emphasis on the treatment of uncertainty by the specific authors I mentioned. These are all topics I’ve addressed in previous threads. In fact, uncertainty estimates are a virtue in several of those descriptions. This allows some reasonable doubt about precise figures, but no reasonable doubt about the accuracy of the general range that is cited. The limits to a contribution by internal variability have now been addressed in some detail in many places, including several excellent posts by Isaac Held. To revisit this still one more time would be excessive in my view.
You may not wish to believe it’s a settled issue, and if it requires 100% certainty to be settled, then it isn’t. If it requires a rigorously objective and comprehensive understanding of all the evidence derived from multiple sources at a very high level of confidence, it can be considered settled in that sense, and I expect those most expert in this area had already arrived at that conclusion prior to this particular paper. Any reader still following this dialog would probably do better to revisit all the previous comments than to have us repeat their essence, which is what seems to be happening at this point.
I’m not sure whether to be flattered or annoyed by your obsession with analyzing my thought processes, but the relevance to ghg warming of Aquinas, Walter Kaufmann, Francis Bacon, God, the LIA, Vioxx, and the FDA is dubious.
Despite my current opinion, I will continue to examine any new points you or others make that might alter that opinion. The problem is that at this point, what you are saying strikes me as extremely repetitive, which is why you don’t find a point-by-point rebuttal to your long list. The larger problem is that in the long run, it’s not my opinion that will matter that much. If the professional experts in this field who know most about the subject, and approach it with great rigor, find the kind of challenges expressed here to be unconvincing, the credibility of those who mount those challenges will be weakened. In this regard, you and I have relatively little to lose, but Dr. Curry, who has raised legitimate issues about uncertainty and overconfidence in general, will find her ability to influence others seriously weakened by having chosen to attack a target that can easily withstand her assaults.
Fred and David, the biggest problem I have is that their model doesn’t include natural internal variability. They deduce the magnitude of natural internal variability from the CMIP3 simulations, which have a factor of 2-3 too little natural internal variability on timescales of 20-80 yrs, encompassing the AMO and PDO. So if natural internal variability is a factor of 2-3 larger than their analysis allows, how can they continue to say ~25% is natural internal variability? And then combine this with very large error bars on their calculated amount of energy into the system. IMO they have made absolutely no contribution to assessing the magnitude of natural vs anthropogenic variability.
Detection and Attribution studies have been done with the natural variability doubled. It would be an interesting study to turn the whole thing around and ask how large internal variability would have to be to make the various results non-significant.
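The inversion suggested here, asking how large internal variability would have to be before a detection result loses significance, can be framed as a one-line calculation. A minimal sketch, with a hypothetical function name and placeholder numbers:

```python
# Sketch of the "how large would internal variability have to be" exercise.
# In a simple detection setting the statistic is z = signal / sigma, and a
# result stops being significant once z falls below a threshold z_crit.
# The noise inflation factor that erases significance is therefore
# signal / (z_crit * sigma). All numbers are placeholders, not values
# from any published study.

def inflation_to_non_significance(signal, sigma, z_crit=2.0):
    """Factor by which sigma must grow before signal / sigma < z_crit."""
    return signal / (z_crit * sigma)

if __name__ == "__main__":
    # e.g. a fingerprint amplitude of 0.6 with estimated noise sigma 0.1
    factor = inflation_to_non_significance(signal=0.6, sigma=0.1)
    print(f"sigma would need to grow by a factor of {factor:.1f}")
```

On these placeholder inputs the required inflation is about a factor of 3, which is the kind of single number such a turned-around study would report: if control-run variability would have to be tripled to erase the result, the detection is robust to the doubling experiments already done.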
The limited potential contribution of internal unforced variability to post-1950 warming is found in the several posts by Isaac Held on the subject – particularly his post #16. The Huber/Knutti paper does evaluate this, and finds a most likely contribution close to zero for that particular interval, even though it might be larger for a different interval. Their 24% possible contribution therefore includes a large uncertainty range. To me, though, the way they arrived at that figure is less important than the principle that the contribution must have been quite limited, based on ocean heat uptake data. In that regard, any challenge to that conclusion would have to refute the cogent analyses by Held (and others elsewhere), which I suspect will be very difficult. If it can be done, the process should be described in detail. It hasn’t been, and if anyone reads Held and others, they might conclude that it probably can’t be, but I’ll keep an open mind.
26% not 24%
Happy new year, Fred. You haven’t contributed much recently but I expect that is more a function of the type of articles that Judith has posted recently. Like you, I am looking forward to reading more about the science and less about communication and politics in the coming weeks. Cheers to you and yours.
It does seem to me that the error bars on the forcings may be pretty large, and not just for the negative forcings. At least superficially, stratospheric water vapor might have a lot to do with the modulation of forcing and explain a portion of the recent temperature record. The issue of indirect solar forcings is an unknown that is currently under investigation at CERN and elsewhere. I also noted many times that these models can’t explain the recent past, such as 1910-1940, the Little Ice Age and the Medieval Climate Optimum. This is explicitly acknowledged in the paper. That’s OK, but it seems to me that these events argue for an unknown forcing or feedbacks that are not represented correctly in these models. If we lump this into the “natural variability” then the issue takes on a new dimension.
Fred, the test is how well the CMIP3/IPCC-AR4 models do in decadal simulations against independent data. We already know that the models have little scientific status with stratospheric ozone and its various perturbations, due to the non-archiving of forcing and hence an absence of reproducibility.
In other independent series, such as the actinometric station (surface solar radiation) data, they are very poor performers, with no clear indicator of why they perform badly, e.g. M. Wild, E. Schmucki: Assessment of global dimming and brightening, 2011.
The most reliable observational records from the Global Energy Balance Archive (GEBA) are used to evaluate the ability of the climate models participating in CMIP3/IPCC-AR4, as well as the ERA40 reanalysis, to reproduce these decadal variations. The results from 23 models and reanalysis are analyzed in five different climatic regions where strong decadal variations in surface solar radiation (SSR) have been observed. Only about half of the models are capable of reproducing the observed decadal variations in a qualitative way, and all models show much smaller amplitudes in these variations than seen in the observations. Largely differing tendencies between the models are not only found under all-sky conditions, but also in cloud-free conditions and in the representation of cloud effects.
The inability of climate models to simulate the full extent of decadal-scale variability is not just seen in SSR as documented in the present study, but also in other simulated climate elements such as the tropical top of atmosphere radiation budget (Wielicki et al. 2002), tropical precipitation (Allan and Soden 2007), the hydrological cycle in general (Wild and Liepert 2010), soil moisture (Li et al. 2007) and surface temperature/diurnal temperature range (Wild 2009b). Of course these elements may not be entirely independent, and misrepresentation of decadal variations in one of these, such as the SSR discussed here, may strongly impact the simulation of others. Further work is necessary to disentangle to what extent these underestimated decadal variations are due to an underestimation of forced or unforced climate variability.
The lack of any deep understanding (theory) of the interactions between, say, aerosol forcing (of which the sign is inconclusive) and cloud base etc. is a constraint for forward modelling, and the observations and divergence in the SH are counterintuitive, e.g. Liley 2009, where a decrease of SD from the 1950s to 1990 and a recovery thereafter was also found in the Southern Hemisphere at the majority of 207 sites in New Zealand and on South Pacific Islands.
To Rob- Thanks, and a very happy New Year to you.
To David, Maksimovich, and others – I have tried but appear to have failed to convince you that the model imperfections are largely irrelevant to this particular set of attributions. I hope others reading this entire thread as well as earlier discussions will consider my point on this, because I don’t know what I can do to make it clearer. We hardly need the GCMs at all to arrive at the basic range that makes ghgs the dominant warming component since mid century. The models are important for climate sensitivity and aerosol forcing, but not for dividing up the warming influences. Their errors may make a difference at the margins, but not for the general range, which really involves the very great predominance of ghg forcing over other known forcings, even allowing for uncertainties and errors.
I am going to assume that at least some readers will be interested in the principles underlying the conclusion that internal variability could not have contributed greatly to the warming. The basic principles are implicit in energy balance approaches by Padilla et al, Gregory and Forster, and others, but are most explicitly and quantitatively addressed by Isaac Held in his blog. In simplified terms, surface warming ΔT caused by a forced radiative imbalance at the top of the atmosphere (amplified or diminished by feedbacks) will add some heat to the ocean (Ho) and send some to space (Hs). If the same amount of heat were internally released by the ocean, it would represent only the Ho fraction of the total and would warm the surface by less than ΔT since escape to space is now coming exclusively from this component and thus reducing the rise in temperature. In other words, even a net ocean uptake close to zero would signify a greater contribution to warming from the forced than the unforced component. (I’m not convinced, though, that this oversimplified explanation is much clearer than Held’s more precise and quantitative description, and so I strongly recommend the latter).
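The energy-balance reasoning in the previous paragraph can be put in round numbers. The sketch below is a minimal illustration, assuming illustrative values for the feedback parameter, the forcing, and the ocean uptake; none of these particular figures comes from the thread:

```python
# Illustrative sketch of the forced-vs-internal argument (assumed round values).
lam = 1.2   # climate feedback parameter, W/m^2/K  (assumed)
F   = 1.8   # forced TOA radiative imbalance, W/m^2 (assumed)
N   = 0.6   # net heat flux into the ocean (Ho), W/m^2 (assumed)

# Forced case: the balance F = lam*dT + N gives the surface warming.
dT_forced = (F - N) / lam

# Internal case: if the ocean instead released the same flux N with no
# external forcing, all of it must escape to space: N = lam*dT_internal.
dT_internal = N / lam

print(dT_forced, dT_internal)  # internal release warms less than the forced case
```

Even with equal heat fluxes, the internally driven warming comes out smaller, which is the qualitative point; Held’s blog treats this precisely.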
The most relevant point, in any case, is that net ocean uptake over the past half century was not only not zero, but entailed an increase of more than 10^23 joules. Since even a zero net implies a dominant role for forced warming over internal variability, the dominance is reinforced by the strongly positive uptake. During that interval, sub intervals with no net uptake or even transient reductions were noted, and may have involved a transient predominance of internal modes (that is what we see very transiently during some phases of El Nino). The overall increase over the entire interval, however, is very difficult to reconcile with more than a minor role for internal variability.
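As a sanity check on that magnitude, the heat gain can be converted into an average global flux; a rough calculation with round figures for the year length and the Earth’s surface area:

```python
# Back-of-envelope: average flux implied by ~1e23 J of ocean heat
# uptake over 50 years (round figures assumed throughout).
heat_joules   = 1e23              # J, ocean heat gain cited above
seconds       = 50 * 3.156e7      # 50 years in seconds (approx.)
earth_area_m2 = 5.1e14            # Earth's surface area, m^2 (approx.)

flux_wm2 = heat_joules / (seconds * earth_area_m2)
print(round(flux_wm2, 2))         # roughly 0.12 W/m^2, sustained for 50 years
```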
Because model errors, aerosol cooling, climate sensitivity, and internal variability are not in their present form refutations of the attribution to ghgs, it will require a very detailed and quantitative description of how this conclusion should be modified to seriously challenge that attribution. If that can be done, though, I think it should be described in specific detail and not generalities. In particular, saying that there are uncertainties won’t suffice.
Three times in the last century and a half temperature has risen at the same rate, and only in the last of these episodes has CO2 also risen.
You should remember that there are many types of internal variability. One of them does not change the overall energy balance of the Earth. In that case one part of the Earth system can warm only, when another cools. You have been discussing that.
Another possibility is that the internal variability leads to changes in cloud cover affecting albedo and also emission of IR. For this kind of variability the argument does not hold. The whole Earth system can alternatively warm and cool for internal reasons.
Another point is that the transient climate sensitivity and the share of warming due to CO2 over last 50 years are linked directly. Thus they are known equally well or badly. As long as there are large uncertainties on the transient climate sensitivity there are also equally large uncertainties on the amount of historical warming due to CO2.
I’m sorry Fred, but it appears that once again we have reached a point where you didn’t respond to the main points but claim that you did at some point in the past.
To say that model details are largely irrelevant to the attribution is quite a statement. Then why not just use simple Dessler-type flux-box arguments instead of GCMs? I note that Dessler got yet a different sensitivity. If we assume that different forcings have different effects and feedbacks, then the simple models will get even the attribution wrong. Forcings generally don’t add linearly in the real world. In any case, it seems to me that the distribution of forcings is critical to the feedbacks, which are relevant to sensitivity. In fact, in a strange way, attribution is in some sense a question of little interest. Of much more critical importance is the magnitude of the various effects.
I would be a little more impressed if you at least acknowledged that the sensitivity is very questionable. It is critically dependent on the physics in the models, which is pretty simple. This is where, Fred, you lose credibility with me. It is the “know everything and prove everything” syndrome. It’s not an effective way to communicate.
In any case, the error bars on the forcings (and the outgoing radiation) seem to me to be documented in the literature to be pretty large, even for the positive forcings, and there appear to be some unknown ones. In fact, it seems to me I remember that the error bars in outgoing radiation in Trenberth’s energy budget analysis were still very large in absolute terms, at least as large as the CO2 forcing. This is the importance of the inability of the models to simulate past climate changes, i.e., it implies that we are missing something critical that influences the system. If there are other positive forcings then you can’t conclude that it’s CO2. Despite all the mumbo jumbo concerning neural networks and statistical analysis of uncertainties, this is the bottom line. The point can be succinctly summarized as: if climate changes can only occur in response to changes in forcings, then how can past variability be explained? Either we don’t know the forcings or there are additional forcings. Of course, they could be explained by feedback mechanisms and be the result of small changes in the distribution of forcings. These things could be operating today as well and make attribution to CO2 largely irrelevant to the physics of climate.
The other issue has to do with the surface temperatures themselves. The hot spot not appearing would seem to imply that either the tropospheric temperature measurements are wrong or the surface temperatures are wrong, or both. Lindzen believes there are reasons to think the surface measurements may be wrong. Certainly, intuition says that human impacts are probably large right at the surface. And if the surface temperatures and ocean temps to which the model is tuned are wrong, then the predictions are probably not right either.
I appreciate your desire to save Judith the embarrassment of maintaining an opinion that will discredit her with the establishment. It’s the same kind of argument that would have told my brother to allow prescribing of Vioxx since almost everyone was convinced it was safe. This argument has no merit, and it seems strange that you bring it up. It’s the reason that the experience in medicine is relevant to these discussions. The establishment arguments are generically the same even though the science is different. In medicine, the obvious fallacies are becoming more appreciated because of the clear influence of money in the process. That’s why I find it strange that you never comment on these issues, since it would seem to me that your expertise is higher there than in climate science. I would guess that oncology is indeed a field where one might expect to see similar issues prominently on display, since cancer treatment is such an emotional issue for a lot of people, not least the patients, and the temptation to offer patients any hope, even if totally baseless, leads to a lot of overtreatment.
David – I think readers will find your points have all been addressed previously, but perhaps my response to Pekka will help clarify my perspective.
Pekka – I think some of your points have also been discussed in earlier comments that are now buried in various places. A virtue of the transient sensitivity studies I’ve cited is their treatment of uncertainty; for example, one can revisit Padilla et al (http://www.princeton.edu/~gkv/papers/Padilla_etal11.pdf) to see the extent to which estimates are affected by the uncertainties regarding internal variability (as well as aerosol forcing) – see, for example, Table 1 and Fig. 6. Even with those uncertainties, the estimated range is limited, particularly by the relative strengths of different forcings, with ghgs in all cases remaining dominant, and the uncertainties affecting sensitivity estimates more than the relative role of different positive forcings.
The question has also been raised as to whether 10^23 joules of heat could have accumulated in the ocean during 50 years due to an unknown mechanism not involving heat shifts from one compartment to another, and which caused clouds to behave in an altered fashion for 50 years (individual clouds have lifetimes of days or less). Unidentified mechanisms are, I suppose, a reason why one should never attribute anything to known causes with 100% certainty, although in this case, the strong role of ghgs is supported by extensive data. Such an alternative must be considered seriously, however, only if it can be described in physically and mathematically plausible detail consistent with observations – it’s not enough merely to state that maybe something mysterious has been happening. We know, for example, that ENSO phenomena can alter cloud albedo as a feedback, but these changes always invoke restorative forces that return the clouds to their earlier state, indicative of the principle that transient perturbations lead to transient cloud changes. I’ll leave it to others to describe in quantitative detail an undetected internal mechanism that itself operates constantly for 50 years, or operates transiently, but induces such an extremely strong and persistent feedback as to imply a climate sensitivity of enormous proportions (not the limited feedbacks one sees with ENSO/cloud relationships). We should remain open-minded, but without that description, and with the evidence for the dominance of ghg forcing that exists even independent of ocean heat data, it is not an alternative that deserves to be taken seriously at this point. If the description and quantification do emerge here or elsewhere, we should reconsider, but even then, that evidence would need to be followed by striking evidence to reduce the apparent role of the ghgs. 
This is found in many of the cited sources, including Huber/Knutti, with the latter paper citing evidence that ghgs exerted more warming influence – 0.85 C – than the warming that was actually observed – 0.56 C. Even if not completely accurate, that leaves relatively little room for an alternative warming source that is more dominant. In the meantime, we should probably content ourselves by leaving the confidence in ghg dominance in the 95% range rather than higher.
I find at this stage that I’m only repeating past points and so I should probably refrain from commenting further unless new evidence is introduced. Over the past year that I’ve followed the ghg attribution with interest, I’ve found that pursuing both the principles and the data has helped me greatly refine my understanding of how the climate system responds to perturbations. I’ve particularly appreciated both the importance of energy balance and the expertise of individuals who know this area backwards and forwards, Isaac Held being an example I’ve cited frequently. It’s clear to me at least why those who are most knowledgeable have high confidence in the attribution of well over 50% of the post-1950 warming to ghgs. My purpose in these discussions has been to continue to test my own understanding against possible alternatives, but also to explain to readers who may not have participated in the discussion why the attribution is seen as so strongly supported by available evidence. These bystanders can make their own judgments, and I see them as a more important audience than participants who have already decided what conclusion they intend to reach. There is only one reality, and a conclusion embraced by most scientists and apparently justified by evidence might always turn out to be wrong. No-one has yet come close to showing why that is likely, but perhaps that will change if we wait.
Fred, I don’t have the time to memorize your truly voluminous contributions to Climate Etc. I will spend time looking into the reasoning of people I respect, such as Judith, Lindzen, Muller, and even Likas. The thing that destroys your credibility is just the repetition and the seeming inability to respond to actual issues. Most real scientists in my experience have papers they believe (often with reservations) and ones they dispute, and they are much more open about their own ignorance and the problems with their area of expertise. I am a big believer in honesty and the ability to admit the issues with your arguments.
You leave me once again without any real answers regarding what seem to me some very important points. The paper itself was much more informative and succinct and at least admitted some of the limitations so that I could draw my own conclusions.
On one further small point related to empirical data, there is evidence for significant increases in downwelling IR over past decades – see, for example, some of the articles in this DLR archive, whereas ISCCP and HIRS data indicate relatively small changes in cloud cover and albedo, with some slight reductions consistent with a positive cloud feedback on ghg-mediated warming. For a dominant cloud albedo effect, one would expect major reductions in albedo accompanied by little increase or even a reduction in DLR due to reduced cloud greenhouse effects; this is the opposite of observed changes. One might also expect an increase in the diurnal temperature range (DTR), again opposite to observed data. The dominant role of ghgs is supported by much more than this evidence, but the evidence further tends to exclude alternatives.
The point of my message was to point out that:
1) The argument of Held is essentially irrelevant, because it applies to a possibility that very few have in mind anyway, not to that one that people have in mind, when they are skeptical of the result.
2) When we are considering two essentially equivalent variables (the share of CO2 in warming and the transient climate sensitivity), all, or at least most, of the uncertainties brought up in the discussion of one apply also to the other.
To me the most plausible explanation for long term variability is a combination of large scale ocean dynamics and change in the Earth external energy balance induced by the changes in the state of oceans. The long period comes from ocean dynamics and the strength of the effect from the influence of the state of oceans on the atmosphere. This is similar to what happens in ENSO, but due to some unspecified longer term variability in the state of oceans. As that is the most plausible mechanism to me, any argument that does not apply to this mechanism is essentially irrelevant.
I’m not at all convinced that the above mechanism is important and capable of creating so strong long term oscillations that the conclusions on the attribution would change significantly, but as I wrote already, this is the mechanism that should be discussed, not some other, which is less plausible to begin with.
I’m pretty sure that the Padilla paper is also rather powerless in excluding the influence of such a mechanism from its results. Like all statistical analyses, the Padilla et al analysis is based on certain assumptions, which would probably not be valid if such a mechanism were strong.
Ocean circulation models do not predict such a mechanism, but again we may face a situation where the state of the science is far from conclusive. Long-term oscillations in the ocean system are very difficult to model, but may still be possible.
I have been following this discussion and am rather in Fred’s camp on these issues. The way I would look at attribution over the last 50 years is to look at the temperature change (0.6 C) and the effect of just the CO2 change without feedback (0.3 C). By the time you use the minimal 2 degrees per doubling and other GHG increases, you are already looking for negative effects to keep it at 0.6 C. These are provided by aerosols which are generally negative in effect. So now we are left with increased CO2 sensitivity above 2 degrees per doubling balancing increasing negative aerosol effects, and that leaves no room or even need for more positive effects. This is where the mainstream view sits.
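The arithmetic in the comment above can be roughly reproduced. This is only a sketch: it assumes round CO2 endpoints of about 315 ppm in 1960 and 390 ppm in 2010, the standard logarithmic forcing formula (Myhre et al.), and a no-feedback (Planck-only) response of about 3.3 W/m² per K; none of these exact numbers is asserted in the thread:

```python
import math

# Rough check of the "about 0.3 C from the CO2 change alone, no feedback" figure.
c0, c1 = 315.0, 390.0          # ppm, assumed round endpoints for ~1960 and ~2010
dF = 5.35 * math.log(c1 / c0)  # forcing change, ~1.1 W/m^2
dT_nofeedback = dF / 3.3       # Planck-only response, ~0.35 K
print(round(dF, 2), round(dT_nofeedback, 2))
```

The result lands close to the 0.3 C quoted, which is what makes the rest of the bookkeeping (other GHGs up, aerosols down) the interesting part.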
There are two different interpretations of the issue.
One is to use all available information to put limits on climate sensitivity, calculate the range of likely CO2 effect on temperature from that and compare that to the observed change. Your message is written in this spirit.
The other approach is to exclude all arguments based on paleoclimatological data. Many mainstream scientists maintain the view that this has a major effect on the limits on climate sensitivity. If that’s so, it means also that the shorter-term history often used in other approaches to the attribution is weaker and leaves more uncertainty in the result.
What we conclude depends on how we frame the question and on how far we believe indirect evidence.
I’m not claiming that more than 50% of warming due to CO2 would not be likely, but I’m protesting against arguments that I don’t consider valid. This is another place where fully traceable and fully objective results are much weaker than probably correct results that are based to a significant extent on the subjective judgment of scientists.
I am also of the opinion that natural variability is self-limiting due to the Planck response. Somewhat analogous to Heisenberg quantum mechanics, where large vacuum energy perturbations last for durations inversely proportional to their magnitude, natural temperature increases are wiped out by added outgoing longwave flux in proportion to their magnitude, and larger perturbations like El Ninos don’t last long at all. These quantifications don’t lend credence to perturbations of even tenths of a degree lasting many decades, and no one claims that the PDO and AMO are more than 0.2 degrees globally averaged anyway, making them irrelevant to a climate discussion of expected changes of more than a degree.
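The self-limiting point can be put in terms of a damping timescale: an unforced anomaly T decays as dT/dt = -(lam/C)*T, with an e-folding time C/lam. A sketch with assumed round values (a 50 m ocean mixed layer and a feedback parameter of 1.2 W/m²/K; neither figure is from the thread):

```python
# e-folding time of an unforced surface-temperature anomaly damped by
# the radiative (Planck-like) response. All values are assumed round numbers.
rho, cp, depth = 1000.0, 4000.0, 50.0    # kg/m^3, J/(kg K), m: 50 m mixed layer
C   = rho * cp * depth                   # heat capacity, ~2e8 J/(m^2 K)
lam = 1.2                                # feedback parameter, W/(m^2 K)

tau_years = C / lam / 3.156e7            # decay timescale in years, ~5 yr
print(round(tau_years, 1))
```

With these assumptions the timescale comes out at a few years, which is why unforced mixed-layer anomalies persisting for many decades would need some deeper-ocean mechanism.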
Compared to the warming of 1910-40, the warming since 1950 or 1970 is not very different. To me the most obvious difference is that the starting level is higher, not the warming itself. Known natural forcings explain some of the warming of 1910-1940, but only a rather small part of it.
There are clear indications of natural fluctuations larger than are well understood. Some models give sizable fluctuations, but that doesn’t mean that they are understood.
Pekka, I think the 1910-1940 increase was a forcing variation due to the solar change (maybe a second step out of the LIA minimum). The number of sunspots increased significantly, and the Be10 proxy supports this, but it is unfortunate that we have no direct measurement of solar strength at that time. As we know, forcing variations are not subject to the Planck response, and there is no denying that solar variations have led to changes of up to plus or minus 0.5 degrees over the past millennium.