by Judith Curry
Pat Michaels’ testimony has been generating significant controversy, both in the hearing and in the blogosphere.
Michaels’ Objective #2 relates to the attribution of climate change.
Michaels concludes that:
Consequently EPA’s core statement (as well as that of the IPCC and the CCSP), “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic GHG [greenhouse gas] concentrations”, is not supported.
My quick take on this is that I like the kind of approach he is using, as a complement to the model-based attribution of the IPCC. With regard to Michaels’ specific analysis, since he introduced one anthropogenic factor (black carbon), he was obliged to include sulfates as well.
What we really need to do is look at the range of datasets of solar, sulfate, black carbon forcing, plus the multidecadal modes of natural internal variability.
On this thread, let’s discuss the different observational forcing datasets for the period 1950-2010 in the context of the global average surface temperature anomalies, and also the various statistical attribution studies. I will leave it to the commenters to introduce the relevant studies.
You also have to consider which temperature profile you are trying to explain. The Jones-type surface temperature statistical models (GISS, HadCRUT and NOAA) are highly uncertain and differ significantly from the satellite readings. The satellites show a flat trend from 1978 until the big ENSO cycle in 1998-2002, followed by another flat trend, but at a higher level. This appears to be a step function, which is very different from the steady warming shown in the surface estimates. (Prior to 1978 the surface models show no warming since 1950.) So what is it we are trying to explain with these forcings, Jones et al.’s steady warming or the satellite step function? Temperature uncertainty is the most basic.
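For what it’s worth, the step-versus-trend question can be made concrete. Here is a minimal sketch on synthetic data (a 0.25 °C step at 1998 plus noise, standing in for the satellite record described above; these are not real observations), comparing a single linear-trend fit against a two-level step fit:

```python
import numpy as np

# Synthetic annual anomalies: flat before 1998, flat-but-higher after,
# plus noise. Illustration only; not real satellite data.
rng = np.random.default_rng(42)
years = np.arange(1979, 2011)
anom = np.where(years < 1998, 0.0, 0.25) + rng.normal(0.0, 0.05, years.size)

# Model 1: a single linear trend (two parameters: slope, intercept).
slope, intercept = np.polyfit(years, anom, 1)
sse_linear = np.sum((anom - (slope * years + intercept)) ** 2)

# Model 2: a step at 1998 (two parameters: pre- and post-step means).
pre, post = anom[years < 1998], anom[years >= 1998]
sse_step = np.sum((pre - pre.mean()) ** 2) + np.sum((post - post.mean()) ** 2)

# On step-like data, the step model leaves smaller residuals.
print(f"SSE linear: {sse_linear:.3f}, SSE step: {sse_step:.3f}")
```

Since both models have two free parameters, comparing residual sums of squares is a fair first cut; models of different complexity would need an information criterion instead.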
I am not trying to hijack Pat’s thread. It is just that when it comes to the politicization of climate science, choosing the temperature profile that best fits AGW may be the best case.
It’s worse than that. The temperature records that best fit AGW aren’t chosen, they’re manufactured. AGW theory can’t explain the 1940-75 cooling, so “corrections” are applied to the temperature data to flatten the cooling out. It can’t explain the large warming differential between the Northern and Southern Hemispheres, so “corrections” are applied to make the Southern Hemisphere show more warming. AGW says that sea surface temperatures should track marine air temperatures, but they don’t, so “corrections” are applied until they do. AGW says that the troposphere should be warming, so the satellite and radiosonde data are “corrected” to show warming in the troposphere. The only temperature record that doesn’t get “corrected” is the stratosphere record. Why? Because it shows cooling, just like AGW theory says it should.
None of this of course disproves AGW, but it sure would be nice if we could verify the theory against “uncorrected” temperature data.
Pielke, Sr. has written about warming bias and other problems, such as here:
An essential point is being missed which is leading to the (wrong) idea that since Pat discussed black carbon he had to discuss sulfates…
Pat did not introduce black carbon forcing; he introduced a reassessment of the magnitude of the BC forcing produced by Ramanathan and Carmichael after the IPCC AR4 report. So there is no need to discuss sulfates, as the estimates of their impact have not changed much since the AR4, but the estimate of the magnitude of the BC forcing has!
And yet… Ramanathan and Carmichael have a NET aerosol forcing (sulfate + BC) which is actually MORE negative than the AR4 net aerosol forcing estimate. Given that the NET is usually considered to be better constrained than the individual components… why is it that Pat can consider their BC estimate (which is a high outlier, in any case) apart from that net?
An HTML version of Pat’s testimony is available here.
On the subject of attribution and uncertainty:
The three principal drivers of climate, per the IPCC, are the sun, clouds, and GHGs. The IPCC concedes that there is a low level of scientific understanding of the influence of the sun and of clouds, while asserting a high level of understanding with respect to GHGs.
In light of the conceded lack of understanding of two of the three principal drivers of climate, it is not logical to claim 90+% confidence that the warming in the second half of the 20th century (but not other prior warming episodes of equal rate) was caused by human emissions of GHGs.
Although I do not understand the details myself, I suggest that we consider Piers Corbyn’s remarkable success in making long weather predictions and climate forecasts based on his Solar-Lunar-Action-Technique (SLAT) [Weather Action, 18 Nov 2010] http://www.WeatherAction.com/
Didn’t I read that we were going to get a thread on other theories of global warming? 750 words*?
BC would be one of those.
*Have you any idea how difficult it is to get a complicated theory down to 750 words? I’ve not even managed to squeeze in the cod and the elvers.
The thread will be forthcoming in a few weeks. The idea is to describe your theory briefly in 750 words and provide links to further material. The purpose of the 750 words is to convince people to actually read the rest of your material.
No Judy. It’s an incredible waste of time. It’s been blogged about and bloviated about for many years now. You’re being kind of Internet naive here (which surprises me, given you have history back to 2006 with the class examination of Climate Audit). Both deniers and warmers are rolling their eyes at this retread.
If you want to get the deniers to firm up their ideas, tell them to write papers (or even start more blogs). But please… don’t launch this thread. It’s going to be a trainwreck worse than The Bridge on the River Kwai.
Quinn’s logic is superb.
Further, the data sets are not exact measurements; they are “adjusted” and thus carry potentially large error bands. The data are then run through models showing we need to change world economies. We hear lots about uncertainty, but let’s see the models run with all data biased to both maximum and minimum temperature increase. We see a fine line of temperature prediction, but I suspect that if we used max and min data the fine line would be rather broad.
Having read Harry_Read_Me and Jones’s remarks about his data retention and archiving procedures (or lack of them), I would not trust UEA/CRU to be capable of reading a thermometer on their campus correctly and reporting the information 24 hours later without transposing digits, inserting a negative sign or some other horror.
Within a week they would have lost the fag packet they had written it down on, and claimed that it was confidential anyway. But a year later, another number would suddenly have appeared in its stead, with “irrefutable proof that global climate change is worse than previously thought”.
I have nothing but contempt for these shambolic amateurs, and neither does any experienced IT professional who has looked at Harry’s remarks. They were not doing anything fundamentally different from hundreds of IT shops around the country. They just chose not to even think about adopting practices that the rest of the world has learned the hard way over 50 years of data processing. And then to claim that they are ‘global custodians of the most important data in the world’ is beyond ludicrous.
In UK we have an expression ‘not fit to manage a whelk stall’. It applies to these guys 100%.
All their ‘data’ should be discounted as completely unreliable. If that means that the case for AGW is diminished to nothing, so be it. You cannot make a good case with rubbish data.
Plus one on that…
Do all the weather-station-based datasets use Jones’s UHI adjustments to ‘correct’ urban stations?
I.e., the Chinese weather station issue (Keenan calls it fraud), which puts UHI at only 0.1 degrees per decade.
Yes, they now carry a crushing “burden of the doubt” when it comes to anything having to do with “data”.
Perhaps they could relieve themselves of this ‘crushing burden’ by publishing all the raw data(*) they have. There are plenty in the blogosphere who would be happy to help them analyse and interpret it. This approach seems to work well for the digitisation of old historical records and written archives.
(*) Assuming that they have any at all..that it wasn’t lost in an office move (did you ever hear of a serious grown up IT shop losing data in an office move????), eaten by the dog, carried off by one of Norfolk’s extreme weather events (an unreported tsunami hitting Norwich perhaps) or otherwise came to a mysterious, unverifiable and implausible end.
Well there were the unreported floods of 2006.
The IPCC states that over half of the observed warming is attributable to ghg changes.
But, as I noted, 0.15 of that is accounted for by the SSTs being specified too cold in the early part of the record, 0.08 by nonclimatic effects, 0.06 by stratospheric water vapor changes, and 0.10 by soot. Unless those numbers are way off, it is impossible to attribute 50% to GHGs.
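Under one reading of these figures (the deductions taken as °C of warming, and a total observed warming since 1950 of roughly 0.7 °C, which is my assumed illustrative value, not a number from the testimony), the bookkeeping works out as:

```python
# Sketch of the subtraction argument above. The 0.7 °C total observed
# warming since 1950 is an assumption for illustration; the four
# deductions are the figures quoted in the comment, read as °C.
total_observed = 0.70  # °C, assumed illustrative total
deductions = {
    "SSTs specified too cold early in the record": 0.15,
    "nonclimatic effects": 0.08,
    "stratospheric water vapor changes": 0.06,
    "soot (black carbon)": 0.10,
}
residual = total_observed - sum(deductions.values())
share = residual / total_observed
print(f"GHG residual: {residual:.2f} °C ({share:.0%} of observed warming)")
```

Under these assumed numbers the residual left for GHGs comes in under half of the observed warming, which is the point being argued.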
The analysis would be different if one invented a potential warming, but that is not what we have. There have been many attempts to do this, but they can’t be tested against anything in the real world; so far as I know there hasn’t been a thermometer invented that measures this quantity.
If some one can direct me to this measure, I would be happy to use it!
I read your testimony and wish to thank you for having done such an excellent job of speaking factually. It will be interesting to see what happens in Cancun.
I would ascribe part of the post-1980 warming to UHI (particularly important as many parts of world urbanized) and to the PDO-type cycle. As much as 75% of this period warming could be not AGW.
Even Gavin Schmidt published a paper attributing 10% to heightened solar activity, so there’s another 0.1K to subtract from CO2 at a bare minimum.
If the ACRIM team are correct, it’s a lot more.
And if I’m correct, it’s even more, because the Sun stored a lot of energy into the oceans from 1935 to 2003.
Craig and others: ‘UHI’ (I prefer ‘built environment effect’) is also AGW. It’s anthropogenic (A), it contributes to an upward trend in regional on-land surface raw-data temperatures (W) on all continents (G), and, as Jones and co-workers have demonstrated, it is difficult to filter out of the data. It would simplify the debate to accept the urbanisation effect as simply another climate forcing. After all, every year more and more of us live in buildings (in both urban and rural areas), travel in cars on asphalt roads and consume manufactured goods, so ‘UHI’ is what we increasingly produce and experience. The IPCC would present a more balanced and objective view if it took more note of the publications of Pielke Snr and his co-workers in this important field, and at the same time played down its rather uncritical acceptance of those of Parker.
There are two UHI effects:
1) The amount our expanded urban environments actually heat the world up. This is negligible, although local effects can be significant.
2) The degree to which the expansion of our urban environments around measuring stations has skewed thermometer data, which leads us to believe we’ve warmed the world up more than we have. This is non-negligible.
You cite the Thompson et al. (2008) article as the basis for your adjustment to the 20th-century temperature record. Reading Thompson et al., I see that the authors include this caveat in their concluding remarks:
“The adjustments are unlikely to significantly affect estimates of century-long trends in global-mean temperatures, as the data before 1940 and after the mid-1960s are not expected to require further corrections for changes from uninsulated bucket to engine room intake measurements. However, compensation for a different potential source of bias in SST data in the past decade—the transition from ship- to buoy-derived SSTs—might increase the century-long trends by raising recent SSTs as much as 0.1 °C, as buoy-derived SSTs are biased cool relative to ship measurements”
Did you consider this potential adjustment to the temperature trends over the last decade? The authors indicate that temperatures may have to be adjusted upwards thus potentially increasing the trend over the last half century.
I have a further question regarding the origin of your estimate for the effect on 20th-century temperature trends due to stratospheric water vapor variability, as derived from Solomon et al. (in Science). You indicate that the magnitude of this correction is −0.06 (derived from the difference in trend between figure 6 and figure 7, i.e. 0.408 − 0.468 = −0.06 °C/decade). Reading Solomon et al., I can’t find the origin of this calculated shift in trend. Can you explain this please?
Doing my own calculations based on Solomon et al., and considering both the implied decrease and increase in trend (since in the last decade there has been an observed decrease in stratospheric water vapor), I find that the correction should only be 0.01 °C/decade. Explanation:
Solomon reports that the decrease in trend over the last decade would be 0.04 °C per decade due to the *decrease* in stratospheric water vapor, and says that this represents a 25% decrease in the temperature trend. Solomon goes on to say that the increase in temperature trend over the 1990-2000 period would represent 30% of the trend. The trend over that time was 0.17 °C per decade, which equates to a portion of the trend explained by stratospheric water vapor changes of about 0.05 °C/decade. Taking the difference of these attributed changes in trend gives 0.01 °C/decade. So, can you please attempt to explain this inconsistency?
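The arithmetic above can be checked directly (all inputs are the figures quoted from Solomon et al. in this comment; nothing else is assumed):

```python
# Re-deriving the figures quoted above (values as quoted in the comment).
trend_1990_2000 = 0.17      # °C/decade, observed surface trend, 1990-2000
frac_from_strat_h2o = 0.30  # Solomon: ~30% of that trend from stratospheric water vapor
cooling_2000_2010 = 0.04    # °C/decade reduction, 2000-2010, from the water vapor drop

warming_1990s = frac_from_strat_h2o * trend_1990_2000  # ~0.05 °C/decade
net_difference = warming_1990s - cooling_2000_2010     # ~0.01 °C/decade
print(f"1990s contribution: {warming_1990s:.3f} °C/decade; "
      f"net difference: {net_difference:.3f} °C/decade")
```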
Judging by Solomon’s reported stratospheric water vapor trends, it seems that much of the variability occurs over the period 1990-2010 rather than around 1980, so it’s hard for me to understand why trends reported for single decades in Solomon et al. can be applied over a 60-year period as in your testimony.
See my analysis over at MasterResource.org that goes into the determinations in a bit more depth than Pat did in his testimony.
I see that you are using a quoted warming contribution of 15% of the observed warming. Can you clarify where in the Solomon paper you got this from? I can see that Solomon estimates a 30% warming contribution over 1990-2000, and a quoted cooling (or reduced rate of warming) of 25% over 2000-2010, but I don’t see anything about 1980-1990 that would allow the generation of the 15% warming contribution.
Final point, I promise. I have a comment on your assumed reduction in trend due to black carbon radiative forcing. Black carbon radiative forcing is highly uncertain; discussion at the recent IGAC 2010 conference indicates, for instance, that forthcoming review publications on radiative transfer calculations considering all of the effects of black carbon could even yield a net negative forcing for BC. The most likely scenario is that BC will have a small but positive net forcing. The reasons for a small, and potentially negative, forcing have to do with the burn-off effect BC can have on low-altitude clouds. Therefore, you should be framing your calculations with the correct levels of uncertainty, and from the document available online from Congress it’s not at all clear that this was done in your presentation to the committee.
I am not familiar with that publication. Ramanathan and Carmichael is a review paper that indicated a sizeable positive forcing from BC, and suggested that even a 25% increase in the positive forcing may not be high enough, since they did not account for snow-darkening effects.
There’s a paper being readied for publication in ACP, I think. When I have more time I’ll find the author’s name for you.
PaulH, of course there will be more papers and more discussion of this, and I’m sure Pat and Chip would agree there is some uncertainty in the numbers, but keep in mind what we are discussing here: whether the IPCC’s claim of “very likely”, i.e. 90% certainty, is justified.
That does seem to be the point. PaulH’s comment seems to further support Michaels’ position.
PaulH, consider what greater uncertainty in the BC forcing actually means for any attempt to attribute warming to GHGs.
I’m not sure if I made this point clearly enough, and if I effectively linked the “highly uncertain” remark to the sign of the BC forcing. Just the fact alone that a negative sign for the BC forcing cannot be ruled out, according to the Bauer study, is pretty noteworthy and merits the proper qualifier on any statements about BC forcing. It’s not that atmospheric BC forcing (excluding snow effects) is highly uncertain and could be significantly larger than Ramanathan and Carmichael; it’s that R&C probably represents the upper limit on the estimate, and that BC forcing could even be negative. I fail to see how this further supports Pat’s specific claims and calculations on this matter.
Here’s the paper relating to the talk at IGAC:
A global modeling study on carbonaceous aerosol microphysical characteristics and radiative effects
S. E. Bauer, S. Menon, D. Koch, T. C. Bond, and K. Tsigaridis
Atmos. Chem. Phys., 10, 7439-7456, 2010
They might be planning another publication, since some material presented at IGAC is not in this publication. The key points, I think, are that this new work demonstrates that the co-emission of BC and other aerosols needs to be considered, and that the sign of the forcing due to BC isn’t straightforward.
The paper below seems like a good start with BC, in that the authors attempt to standardize the findings from 4 different sources, a useful exercise I think. To summarize their findings, we have:
Aerocom (IPCC) with 0.02 Wm−2, with a 90% range of −0.27 to 0.23 Wm−2
Hansen with 0.22 Wm−2 (range of 0.05 to 0.40 Wm−2)
R&C with 0.25 Wm−2 (range of −0.06 to 0.47 Wm−2)
Jacobson 0.37 Wm−2
From this they suggest a “Best” RF estimate of 0.22 Wm−2
I don’t know whether this provides a basis for an average result or extends the possible value for BC forcing.
My earlier comment was based on the fact that the more estimates one brings into play the greater the possible range of the values and the greater the uncertainty when trying to attribute warming to other forcings such as CO2.
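As a quick check, the unweighted mean of the four central estimates listed above does reproduce the quoted “best” value; the simple averaging here is my own illustration, not necessarily the authors’ method:

```python
# Central BC direct-forcing estimates as listed above (W m^-2).
estimates = {
    "AeroCom (IPCC)": 0.02,
    "Hansen": 0.22,
    "Ramanathan & Carmichael": 0.25,
    "Jacobson": 0.37,
}
mean = sum(estimates.values()) / len(estimates)
spread = max(estimates.values()) - min(estimates.values())
print(f"unweighted mean: {mean:.3f} W m^-2 (quoted best: 0.22); "
      f"spread of central values: {spread:.2f} W m^-2")
```

The spread of 0.35 W m^-2 among the central values alone, before even considering each study’s own range, illustrates the point about widening uncertainty.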
This is in reference to the article presented here (http://www.masterresource.org/2010/02/why-the-epa-is-wrong-about-recent-warming/).
In the conclusion, it states:
“So, if we take what the best science gives us, we find that pretty close to half of the warming that is currently indicated by the extant global temperature datasets may be from influences other than anthropogenic greenhouse gas increases—perhaps a bit less, perhaps a bit more.”
Based on this conclusion, it then lays out a supposition of how the EPA might have stated their justification:
“It is about as likely as not that most of the observed increase in global average temperatures since the mid-20th century is due to the observed increase in anthropogenic GHG concentrations.”
Shouldn’t this (hypothetical) statement actually be:
“It is highly likely that about half of the observed increase in global average temperatures since the mid-20th century is due to the observed increase in anthropogenic GHG concentrations.”
I am just noting the switch from percent contributions to warming (if deriving such quantities were meaningful) to percent likelihood values.
Please correct me if I am wrong.
I would probably stay away from words ascribing high likelihood to a sum of individual components that have sizeable uncertainty.
So I like my hypothetical statement a bit better!
I’ve asked similar questions elsewhere on this blog, and this relates to the Italian flag/IPCC method of assigning confidence levels in estimates, so please excuse my pestering. :)
I am comparing this:
“we find that pretty close to half of the warming that is currently indicated by the extant…”
“It is about as likely as not that most of the observed increase in global average temperatures since the mid-20th century …”
In the first instance, the amount of warming is half, in the second instance the likelihood estimate is half.
I am trying to understand how this works. Meanwhile, I do understand the larger point your analysis makes.
I am not trying to conflate those two statements.
“Half” in the first case is just a particular value of the IPCC’s “observed” trend that bears on my examination. 50-50 in the second case just refers to my belief in the odds of being above or below that particular value. So I am not really relating “half” and 50-50–it is just circumstance that they seem to be the same quantity/quality in this case.
I hope that helps.
How do you subtract the effects of ocean cycles to come up with the 50% of warming attributed to anthropogenic CO2? What is going to happen to that number as the cycles turn and we get colder for 10-30 years?
Pat Michaels’ testimony is a prime example of a “spot-the-errors” exercise that should be given to undergraduates in lower-level atmospheric science courses. Why he is invited to talk alongside the likes of Santer, Alley, and even Lindzen is beyond me. What credentials give him this ability, aside from the journalistic necessity for a “dissenting voice”? I am also shocked that Curry has not been more aggressive toward his sloppiness. I’m really starting to think that the need to make everyone happy and “hear both sides” (perhaps to get rid of the “dogma”?) is getting in the way of reason.
Neglecting the cooling effects of sulfate aerosols, or neglecting thermal inertia lag effects, is only a taste of Michaels’ ability to stand up and lie. You can’t just sit up there and make up your own temperature record by half-assedly and incorrectly pulling apart information from various studies, ones which generally cover different time periods in the record, without any real attribution effort, and throw it at non-scientists. Well, you can, if you’re Pat Michaels, and then you can supplement it with several pages of conspiracy and ‘publication bias’ hogwash.
Cut the ad hominem and address the science. Unless you have supporting evidence, don’t accuse anyone of a “lie”. Criticizing “credentials” is appealing to authority.
Michaels showed a slide of actual temperatures being near the bottom of all IPCC projections. Address that, rather than specializing in yellow journalism.
Nothing wrong with appealing to authority when someone actually has authority on the subject in question.
No, in science, appealing to authority is ALWAYS wrong. If you can’t make the argument yourself, you have nothing to say. Living species did not evolve to their current state because Darwin said so. Darwin’s evidence and arguments stand on their own – entirely free of the nature of the man.
PS Recommend using Galileo as a better example. A “scientist” who a priori excludes the alternative hypothesis by selecting presupposition(s) or uses straw man arguments does not engender confidence in his model, let alone when one tries to argue against observed statistics and model probabilities.
Who said it’s anything to do with the nature of the man? If you’re ill you don’t consult your doctor because he’s a nice guy, you do it because he has authority in medical matters.
This might be true, but let us not forget that argument from authority is fundamentally argument from ignorance. If I know nothing about the doctor or the condition, of course I will choose authority: I am ignorant and it is my best bet. If I know something about either the doctor or the condition, that is how I will make my decision.
Keep this in mind: when someone argues from authority, they are essentially saying “I am ignorant, but [the authority] says so”, and just because one person in the argument is ignorant does not imply that the other is also ignorant.
We’re not doing science here, are we?
A scientific hypothesis requires only one contrary fact to disprove it. It matters not the training or credentials of the person raising the fact.
Chris disparages Michaels’ “credentials”, claiming his testimony merits only a “spot-the-errors”-level response, without addressing the substance of Michaels’ evidence, his graphs, or what those “errors” are. He alludes to (Santer’s) claims of “neglecting the cooling effects of sulfate aerosols, or neglecting thermal inertia lag effects”, but not to Michaels’ evidence.
The IPCC’s “projections” being above the temperature trends does not impress, nor does the fact that those “temperature trends” appear to incorporate a strong degree of positive “adjustments”.
Perhaps you can provide some scientific explanation for the latest example of turning a cooling trend into a positive trend:
USHCN Adjustments Add Four Degrees To Liberty, Texas
For statistics on temperature trends see Lucia at The Blackboard, e.g. “HadCrut June Temperature Anomaly: Up”, 21 July 2009: http://rankexploits.com/musings/2009/hadcrut-july-temperature-anomaly-up/
When IPCC GCMs that claim to be “robust” wrongly predict the SIGN of the temperature trend over a decade, let alone the magnitude, it does not give me great confidence in their ability to predict “climate” into the next century.
David L. Hagen | November 18, 2010 at 4:52 pm
A scientific hypothesis requires only one contrary fact to disprove it. It matters not the training or credentials of the person raising the fact.
Ah, wrong. When a “fact” contradicts a theory you are left with a choice: your fact is wrong, the theory is wrong, the theory is incomplete, some portion of the theory is wrong, the theory and the fact may both be wrong, shit happens, your error bars get wider.
You might wish it were black and white; in practice it doesn’t work that way.
Contrary to Mr. Mosher’s assertion, a properly constituted theory states claims, each of which is either true or false.
Contrary to Mr. Olberg’s assertion, holism wins:
Or to use famous words:
> [S]cience in its globality is like a force field whose limit points are experiences […] [A] particular experience is never tied to any proposition inside the field except indirectly, for the needs of equilibrium which affect the field in its globality.
Ah, wrong. You are confusing an actual “fact” with an incorrect assumption or an incorrect theory.
A “fact” is something that actually exists or is an actual occurrence. In other words, a fact is the truth of the matter.
If your “fact” is “wrong” then it isn’t actually a “fact”.
If a real “fact” goes against a theory, the theory as it currently exists is incorrect.
If on the other hand, something that is claimed as a “fact” goes against a theory, either the factual claim is incorrect (an error made somewhere in the interpretation of the data leading to an incorrect conclusion) OR the theory as it currently exists is incorrect.
I can make claims of facts all day long, that does not mean any of my claims are or are not actual facts.
Chris’s point, which people seem to see as an unacceptable appeal to authority, was to question Michaels’ credentials to testify alongside the likes of Santer and Alley. If the latter’s “authority” is irrelevant, then why ask any of them? Why not just pick people at random off the street?
As for the temperature trend since 2000, it is still within the range of the IPCC projections so I don’t see how that disproves anything. Making a judgement either way based on a period of less than ten years is a bit premature.
As for the Liberty, Texas data, I have no idea. What is USHCN’s explanation?
Take a look at the raw vs “High Quality” Australian temperature data:
Perhaps you would like to hang your hat on the Darwin Zero data? A cooling trend was adjusted to +6 deg/century. Not bad support for CAGW!
Two edged sword. You might equally ask what authority Santer and Alley have to testify alongside such an outstanding scientist as Michaels.
Note – I am not necessarily endorsing this view just pointing out the consequence of Andrew’s remark.
If your best argument for a proposition is ‘It must be right because Very Important and Authoritative Person xyz says so’, then the existence of any evidence for it must be in as much doubt as is your possession of any working critical faculties.
Like the man says..in science appealing to authority is never a winning argument. Just ask Aristotle…he ought to know! :-)
If your best argument for a proposition is ‘It must be right because Very Important and Authoritative Person xyz says so’, then the existence of any evidence for it must be in as much doubt as is your possession of any working critical faculties.
The point is not that his opinion must be right, but that it would carry more weight than someone’s without any comparable qualifications. I notice from your profile in the previous thread that you have an MSc in Chemistry. I have an ‘O’ level (showing my age here), and I’m sure you would object if I claimed that people should give my views on the subject equal weight to yours. Quite rightly, IMHO.
I would still consider your argument and reject it, like all you have said till now, not on the basis of my degrees but on the basis of the flaccid points you are making.
I’m not getting all of this.
In science papers scientists use footnotes to an authority. Is that a fallacy?
Well, one of the nice things about chemistry is that it is testable by experiment.
If I, with all my clever education and degrees and stuff, were to suddenly assert that an acid solution turns litmus paper blue, and you remembered your O level and said that I was mistaken and that an acid solution will turn litmus paper red, then it is very easy to conduct an experiment to determine which of us is right.
The litmus paper knows nothing and cares less about my qualifications, or yours. It will just do what litmus paper does. And in this case it would show that your view is correct, and mine was incorrect. And we would confirm once more that acid = red and alkali = blue in the litmus test.
But, being good scientists ( I hope), we wouldn’t determine the outcome by me using my ‘authority’ to accuse you of lying, of being a member of a religious sect or of being paid by the makers of litmus paper just to prevent the truth being known, nor that your motivation was purely to bring about the death of the firstborn (or at least I hope I wouldn’t :-) ). And to assert that therefore the matter was concluded in my favour. And maybe get my mate Fred to back me up.
And that is because of the way I perceive ‘science’ should be done. You may also have read in my bio that one of my formative experiences was to have toiled away to produce some theoretical calculations from first principles about the speed of reactions in the high atmosphere (in those days, ‘ozone depletion’ was the AGW du jour in chemistry/physics/weather studies). But when my research buddy came to measure the real world, the answers he got were widely different from my theoretical results. We binned the theory. Sad for me, but right for science.
Climate science seems to have managed not even to get that far in thirty years. It hasn’t (yet) produced a single prediction that has been made in advance and then tested by experiment and found to be either correct or incorrect. Even the daily weather forecast undergoes these verifications every 24 hours. And by focussing on the ones that don’t work, the forecasts improve.
The whole AGW scare requires us to suspend disbelief and assume that clever people with MScs and stuff can confidently and accurately predict the climate 100 years out. And that the consequences are so terrible that we must make immediate unpleasant and fundamental changes to our lifestyles to avoid these happenings. And yet and yet, none of the models has been able to accurately forecast even 10 years out (has anyone looked back at the models from 2000 and shown that they hit the spot for 2010?). Nor have the modellers gone out of their way to come up with predictions that it would be easy to verify, preferring instead to ask us to ‘believe’.
Chris Colose argued that simply by the application of the first principles of physics, the models would somehow ‘come right’ in the end, and also appealed to authority – his own – that only his superior intellect was capable of judging whether they were ‘right’ or not. Nobody else was even entitled to an opinion.
If you wish to call that ‘climate science’ be my guest. Appeal to authority as much as you like. But until the theoreticians come up with some verifiable predictions, it will all be as much use as my predictions were. If they pass the verification test, then my scepticism will decrease. If they fail, then I very much hope that I don’t hear you suggesting that the results of the test are junked as ‘incompatible with our best theory’ or ‘not what Very Important and Authoritative Person xyz says it should be’
The litmus paper still turns red or blue, no matter who says what about it.
I’ve read Pat Michaels’ testimony and your ‘rebuttal’.
His seems to have plausible numbers in it that could conceivably be argued about in a rational sort of way, and he expresses himself coolly and rationally.
Yours seems to consist of nothing more than an accusation of lying, some general handwaving in the direction of possible other causes that he may have missed, and a sideswipe at your host at this blog. Your post does not show me signs of a cool head thinking about this topic.
Even if I knew nothing at all about the subject, guess which one I’d be more likely to believe??
I stopped reading your trivial attribution after reading “Why he is invited to talk alongside the likes of Santer, Alley, and even Lindzen is beyond me. “…
It appears that you are attacking personally vs. explaining what portion of the testimony you disagree with based on other data or a different analysis. It seems rather silly, actually. Please present facts as to why you believe that CO2 must be the driving cause of climate change since 1950.
Disagreeing with your opinion is not a definition of being a liar.
When you on the AGW hysteria mongering side of this issue start policing your own, then we can start a mutual list of those who should be ignored.
Starting with, say, Heidi Cullen on your side would be nice.
Even nicer would be to see your side condemn those making careers and money off of apocalypse promoting.
I thought Santer did a fine on-the-spot fisking, but I’d still like to see somebody take Pat’s approach and do a better job of it, just for curiosity’s sake.
Okay with me if somebody takes the approach and does a better job of it, but shouldn’t he already have done a better job of it before presenting it to the American people? Once there is a better job of it, maybe it can be said it belonged there. Fine, no damage. But what if it was half-baked nonsense and didn’t belong in a rational discussion of climate change?
And how would you determine whether it was half-baked nonsense or full of the finest wisdom and insights since Einstein?
In any other field of science you would conduct these things we call ‘experiments’. They are not a new idea, even if foreign to ‘climate science’. Their use was popularised by Roger Bacon, an early English scientist, in about 1250. They are very useful ways of determining whether a theoretical pronouncement accurately reflects what the real world does.
But these ideas may be a bit too revolutionary for climatology. So perhaps you will rely on the tried and not trusted at all method of ‘consensus’. Or take Santer’s approach and threaten the evil dissenter with violence.
I don’t care which you use. Just don’t call it ‘science’.
Do you think this “experiment” should have been done before it was presented to the American people as rational climate science, or later?
Both Curry and Mosher seem to think somebody should do the ‘experiment’.
“My quick take on this is that I like the kind of approach he is using, as a complement to the model-based attribution of the IPCC. With regards to Michael’s specific analysis, since he introduced one anthropogenic factor (black carbon), he was obliged to use sulfates, also. …” – Curry
I take her to mean Michaels left something out.
“…but i’d still like to see somebody take Pat’s approach and do a better job of it. just for curiousity sake. …” – Mosher
I take him to mean the job should have been better.
All you are saying is “I have no idea what experiments have been done in regards to climate science, therefore there have been no experiments done”.
The very first thing that got me interested in climatology was the repetition of the claim that ‘The Science is Settled’.
‘Wow’, I thought, ‘they must have done some really clever experiments to be able to say that. I must go and read up about them’. So I started looking for the experimental evidence.
And so far, two years later, having been a pretty diligent reader and an active blogger for nearly all of that time, I still haven’t found any. Since such a thing would be a very powerful argument in favour of AGW theory, I would have expected its existence to be well known and very well publicised. That it hasn’t been strongly suggests to me that it doesn’t exist.
If you know better, and there is some really good experimental proof out there, please let me know.
I know of only one major experiment with the climate. It gave puzzling results, as evinced by Prof Wigley.
Thanks for the link.
I may have misunderstood the context, but at first (and second) glance this seems to be Prof. Wigley first deciding what he wants ‘the answer’ to be, and then suggesting ways in which that answer might be plausibly achieved…and where others might spot the ‘trick’ so used.
I have no other reason to think ill of Prof Wigley’s work, so please point out where I have grasped the stick by the wrong end. Cheers.
We share equal puzzlement. Why, when a scientist of Prof Wigley’s calibre is presented with an interesting set of data that contradicts a previously-held theory, does he fail to explain it? Instead he explains it away, like someone stamping on the budgerigar under the new-laid carpet. Had he followed it through, and if I’m right in my diagnosis of what the blip shows, he’d now be a hero and we’d all be considerably better off.
It is worthwhile contemplating the HADCRUT3 graphs: look at the ‘blip’ and see how cleverly the colours and scales are chosen so the blip barely breaks through the zero line. One of the few bits of my staff training that remains with me is the module we did on making graphs tell the story you want. These graphs tell a story, but perhaps not the one wished for, not when presented to someone forearmed with a little bit of knowledge.
The graphs, and how they are presented, are part of the reason that I am now a lukewarmer.
The blip is a big deal IMO. I am waiting on some papers to get published (or at least in press) before I discuss.
I don’t know how relevant it is to AGW theory, but people in the aerosol community do experiments or test their models versus true atmospheric measurements. For instance, models on secondary organic aerosol (SOA) formation almost universally underpredict the amount of SOA in the actual environment.
I don’t know if those kind of things make their way into GCMs or anything, but if you’re curious about what I’ve mentioned, I can try to look up some references for you (not my exact field, but I’ve seen talks on SOA models at conferences, so I’d have to put in some effort to get the papers).
I believe Pat Michaels’ testimony was deceptive, although whether that was intentional or inadvertent is difficult to determine. His claim that less than half of the warming since the mid-twentieth century is attributable to GHGs appears to be based on a misinterpretation of how the various warming factors were assessed. In general, what the IPCC did was add up the estimated forcings, from which they concluded that the GHG forcings (from CO2, methane, N2O, tropospheric ozone, stratospheric water, CFCs, etc.) significantly exceeded the forcings from solar intensity and black carbon. Even if the black carbon estimate is elevated according to Ramanathan, the GHG excess remains substantial.
What Michaels did was to adjust the Hadley published temperature trend to account for what he claims were discrepancies between the published calculated anomalies and the actual anomalies – e.g., an upward adjustment for underestimated mid-century SST. Having analyzed the SST data, I have reservations as to whether the magnitude of the adjustments is fully justified, but that is not really relevant. What the IPCC did (and Ramanathan also with his black carbon estimates) was to ask, “how much of whatever warming occurred was due to GHGs, and how much to other warming influences such as solar forcing and black carbon aerosols?”
To get actual numbers (in deg C), one would have to take the respective percentages and multiply by the total warming. For example, if black carbon accounted for 25% of forcings, it would be necessary to multiply 0.25 by 0.7 C if the unadjusted values are correct, but only by 0.4 C if the warming can legitimately be estimated to be only that much. In each case, black carbon still contributes only 25 percent. One can’t take 25% of 0.7 and subtract it from 0.4, as Michaels appears to have done. Even if a few of the adjustments beyond the SST adjustment were actually reflecting warming influences rather than a measurement adjustment, they would only add a small fraction. In addition, the stratospheric water changes almost completely cancel out (warming for two decades followed by cooling later), and of course, stratospheric water vapor is itself at least partly anthropogenic due to methane oxidation. When all is said and done, black carbon, solar, and possibly a claimed small warming effect on land (Michaels didn’t elaborate) still end up contributing considerably less than half of the total warming, whatever its magnitude. Even if one adds in the averaged-out effects of internal climate variability, the GHGs still end up with more than 50 percent of the warming.
I found flaws in many of the other claims, but that is a topic for a different comment.
Actually, the black carbon percentage calculation is OK, but it is the conflation of warming events with measurement adjustments that creates the error. To assess the relative role of GHGs, one can only compare it with other actual warming phenomena. If the true warming is much less than 0.7 C, then whatever that value, the critical question is whether the percentage contribution of GHGs to that actual value (not 0.7) exceeds 50 percent. It appears to.
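Fred’s point about percentages versus degrees can be made concrete with a few lines of Python (illustrative only; the 25% black carbon share and the 0.7 C / 0.4 C totals are the figures discussed above):

```python
# A factor's contribution in deg C is its *fraction* of the warming
# times whatever the total warming actually is.

def contribution(fraction, total_warming_c):
    """Warming (deg C) attributable to a factor, given its fractional share."""
    return fraction * total_warming_c

bc_fraction = 0.25  # black carbon's assumed share of the positive forcings

# If the observed warming is 0.7 C, black carbon explains 25% of 0.7 C:
print(contribution(bc_fraction, 0.7))  # 0.175

# If adjustments shrink the warming to 0.4 C, the share is still 25%,
# but of the smaller total:
print(contribution(bc_fraction, 0.4))  # 0.1

# The error described above: taking 25% of the 0.7 C total and
# subtracting it from the 0.4 C total mixes two different baselines.
wrong = 0.4 - contribution(bc_fraction, 0.7)  # 0.225 -- not a meaningful number
```

Either way the percentage share is unchanged; only the degrees-C value scales with the total.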
“What we really need to do is look at the range of datasets of solar, sulfate, black carbon forcing”
Why doesn’t the IPCC include the major forcing, trade winds?
Observations clearly show temperature increase when trade winds decrease.
Here represented by SST and ENSO: http://virakkraft.com/sst-deriv-enso.png (exceptions are mostly after large volcanic eruptions)
SST increased between 1977 and 1998 when ENSO was mostly positive, and leveled off after ’98 when ENSO went more neutral. The integral of ENSO correlates much better with temperature than GHGs do.
See data on Length Of Day (LOD) and an increasing number of papers comparing LOD with other terrestrial parameters, especially wind. Presumably “climate” changes temperatures which change winds which changes LOD etc.
Data set at:
International Earth Rotation and Reference Systems Service
Geophysical fluids data
See: Le Mouël, J.-L., et al. (2010), Solar forcing of the semi-annual variation of length-of-day, Geophysical Research Letters, 37, L15307, doi:10.1029/2010GL043185.
You could even take a look at the very first post on my blog for an LOD comparison with detrended global temperature and the motion of the sun.
Can’t tax wind. Try again.
Yeah, it’s kinda funny that no one wants to touch the linearity problem.
Midrange concentration models tend to predict linear warmings.
We have a linear warming since the second warming of the 20th century began in the mid 1970s.
So this actually corresponds to the functional form of the model response.
The models just produce different rates.
So you adjust your forecast for the observed constant rate.
If the functional (linear) form of the models is itself wrong, then we know far too little about climate change to even hold hearings on it.
What, precisely, is wrong with that logic? I’d love to know!
You will notice that there was absolutely NO response to it yesterday…
a) it makes the decision to “hold hearings” conditional on an unknowable capital-t Truth (correctness of models)
b) the contingency of lower-case-t truth is not an argument against making decisions
c) “too little […] to even hold hearings” is an empty assertion
Hearings are fact finding missions used to determine policy recommendations, not academic wrangling. If the facts are in question then Dr Michaels is correct — hearings are useless.
Dr Michaels doesn’t appear to be arguing for Absolute Trvth as you contend. Rather, he’s simply saying that the facts aren’t necessarily facts; e.g. it’s one thing to say that basic physics shows that CO2 is a GHG. It’s something else to place model forcings in the “fact” category.
We have a linear warming since the second warming of the 20th century began in the mid 1970s.
Excuse me if I’m being a bit dumb here, but wouldn’t “linear warming” mean that temperatures have gone up pretty much in a straight line?
No response today either, neither here nor elsewhere.
What astounds me is that neither yourself, nor Curry nor any other expert should require my ‘inexpert opinion’ to point out the many obviousness(es).
Linear model …
First derivatives determines rate …
Zero second derivative indicates linearity …
That the immediate, infinitesimal first order rate is constant and reliable …
That there is much tuning of models that have many parameters …
That nothing reliably changes, …. that there is no bifurcation to some alternative attracting basin without the existence of nonlinear aspects …
If experts need a raving incompetent (meaning “myself”) to remind them of the barest rudiments of their expertise?
… It just leaves me gobsmacked.
Forgive my intrusion.
But if the warming since the ’70s is linear, that would imply that temperature is a function of independent variables, thus eliminating any possibility of positive (or other) feedbacks?
Or, 40 years is too short a time to detect the feedback mechanisms (maybe 120 years is enough?), or positive and negative feedbacks are somewhat balanced overall.
Or am I, as usual, wrong or stated the obvious that everyone else is already thinking:)
I was particularly entertained by Ben Santer’s criticism of Pat’s slide of comparative instrumental record, pre- and post-adjustment – criticism specifically regarding the lack of error bars in the chart.
I had something of a comedic double-take. Ben Santer was, of course, correct and there were no error bars in Pat’s graphic. And so I freeze-framed the stream on Pat’s graphic and I imagined those error bars – or something like them at least. And as they appeared in my mind’s eye, the whole instrumental record became pretty much a grey blur, and ANY measure of certainty about global warming in the 20th century simply evaporated.
“Good point Santer,” I thought to myself. “Now, what was your point again?”
pick a trend, any trend you like…
Gray blur is correct. More precisely there can be no probabilistic error bars on these temperature guesstimates, because the area averaging method being used has no foundation in probability statistics. Ironically it most closely resembles a method used in the oil industry to crudely estimate reserves. But no one there would dream of claiming the levels of precision that the climate folks claim. Yet these supposedly precise, actually painfully rough, estimates are the basis for AGW. The satellites, which actually measure something, are ignored as an inconvenient truth. Is this funny or what?
Two new papers say CO2 is not causing this “warm” trend and expect future cooling:
My model suggests cooling soon too. Not drastic cooling, but cooling.
Did you read the abstract by the ‘German scientist’ that you cited?
He states that carbon released from the oceans is responsible for atmospheric warming. Ouch.
We’ve determined from isotopic fingerprinting that this is not the case.
It says nothing of the sort:
It was found that the South Pacific Oscillation (SO) is influenced by solar activity, similar to the North Atlantic Oscillation (NAO). Especially during the warming period from 1980 to 2009 the oscillation of solar wind – Index “aa” – was in good resonance with the delayed South Pacific Oscillation. The same observation was found between the oscillation of cosmic radiation, which is controlled by Forbush reduction by the magnetic fields of the sun’s solar wind protons, and the delayed SO (K=0.8). The consequence of these observations is the postulation that the increase of global temperature in the Southern Hemisphere was caused by solar activity with strong emissions of proton rays in the Earth’s direction during the 22nd and 23rd sunspot periods, reducing cosmic rays. This led to a reduction of cloudiness, increased solar rays and warming up the lower atmosphere (Svensmark effect). As a consequence, dissolved CO2 was continuously emitted by the slowly warming ocean, providing fertilizer for the flora of the world. A relevance of CO2 concerning climate change could not be found. With the end of solar activity in 2006, a cold weather period has also started in the Southern Hemisphere.
Is it not the case that far too many disparate influences on surface and altitudinal temperature variation, intuitively natural, are conflated and thus obscured by globalising the datasets? Would it not be advantageous to analyse temperatures in meridional belts approximating to the meteorological wind/pressure/convection systems?
Not meridional, I mean latitudinal.
I watched the entire hearing live. You’ll find my comments scattered here and there in the ScienceInsider page as well.
Your back and forth with Ben Santer was very informative. Both parties had very strong points to make.
I would like to draw your attention to Heidi Cullen’s assertion that :
“the basic climate sensitivity experiment of CO2 doubling suggests an 8 F rise in temperature. This is Svante Arrhenius’ calculation. The IPCC gives a range…including all the feedbacks”.
What experiments are these? Who performed them?
We hear all the time that “we have only one earth and we should not be doing experiments”.
Frankly, I thought Santer’s point about Pat’s failure to put error bars on his analysis was spot on. I also thought Santer’s position was quite right.
If you are going to take on a central conclusion of the IPCC you had better do a bang-up job of it. I like Pat’s approach, but the execution was NOT what I would call good analysis, as Santer pointed out. I wanna see all the cards on the table.
Also, Judith, I’ve looked into the attribution studies a little bit. The blogosphere has not focused much on the actual mechanics of the testing. Might be interesting. Heck, I would offer Santer a guest post slot, maybe about fingerprint studies or attribution.
Assuming the error bars are normally distributed about the central value of each influence, I don’t see how this impacts Pat’s conclusion.
Show your work. And yes, you have to include all factors. Basically, answer Santer’s argument; we really don’t need to fear debate and math and data.
It’s way better than talking about the mails.
The work is more fully described here.
And no, you don’t need to include all factors–only those that have changed. For instance, if measurement errors could conclusively be shown to be responsible for 51% of the apparent “observed” warming, the IPCC’s AR4 statement is wrong–no need to look any further or consider any other factors (presuming the IPCC’s understanding of them has not changed).
… the 51% would be the change in the magnitude of the observed warming… not the change in the fraction explained by anthropogenic ghg’s
Take a look at the work he uses. Solomon’s work has no data from 1950-1980. From 1980-2000, the data is very limited. Not only that, but the work only looks at the inter-decadal changes due to SWV. The Thompson work has yet to even have an adjustment made to the SSTs over the time period discussed, and the full picture it paints would also add 0.1 C of heat to the last decade. While using other work to get exact numbers on what causes what, the presentation cannot be compared to the IPCC. The IPCC does not take a number as observed anomaly increase. It combines ALL the forcings, cooling and heating, to get observed changes for both. Using the 0.7 warming, then adding other factors, does not give a correct assessment of what the IPCC does.
While this type of work could prove useful, a fair amount of its content has incredibly high uncertainty and the conclusions are misleading.
That should be addressed Chip.
The major point is that the IPCC’s certainty claim of “very likely” seems unsupportable given the high levels of uncertainty that surround the various components of “observed” warming–including a lot of new info that has appeared in the literature since the publication of the AR4.
With all the uncertainty, that you seem to agree is there, do you remain comfortable with the IPCC’s “very likely”?
For the IPCC analysis:
As a non-scientist, I tend to think that quite a few of these discussion threads lose sight of the forest for the trees, not that there is anything inherently wrong in esoteric debate.
Chris Colose seemingly dismisses any dataset other than the ‘official’ ones. Fair enough – although I think there may be a case to be made for distrusting some or even all of them – let’s use the longest official dataset: HadCRUt3. According to this, the total rise in ‘global temperature’ since 1850 (the recognised start of mankind’s significant emissions of GHGs) is 0.8 deg C. 50% of that – the anthropogenic bit – is 0.4 deg C. This is over a period of 160 years, which averages out at 0.025 deg C per decade or, if you prefer, 0.0025 deg C per year. Not a lot, really. As the temperature has not increased at all since 1998 (according to HadCRUt3 and the satellite datasets), this effectively defies ‘accelerated warming’, which the CAGW theory demands should be happening right now. (My, that ‘aerosol cooling’ must be very effective… and is ‘thermal inertia lag’ the climate science equivalent of ‘the cheque’s in the post’?)
At the root of this debate, the theory of CAGW is poorly devised, disgracefully over-hyped and certainly unproven. If you take the ‘C’ away, there might be some merit in the theory – right up until anyone decides they know enough to put a figure on the amount of ‘W’. (According to RealClimate, CO2 is responsible for 9 – 26% of the Greenhouse Effect…). Unfortunately, the whole issue was sold to Joe Public (ie, me…) as a ‘big problem’, or a ‘catastrophe’, under the guise of ‘scientific authority’. Does anyone who is still reading this genuinely feel that the climate science behind the prophecies of doom holds up to scrutiny? Are pro-AGW proponents just going to keep on coming up with excuses why the data does not support the theory?
Andrew, that is correct. Plot out the CRU data beginning in 1976, which is the beginning of the second warming of the 20th century. It is obviously linear…but you can go ahead and fit a second-order term to it if you want…and you will find that it fails a partial F-test. Same to third order.
BTW Judy Curry was kind enough to invite me to give the EAS seminar at GaTech a couple of years ago. Again there was no challenge to the linear comparison, which is central to the hypothesis of moderate warming.
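The partial F-test described above is straightforward to run. A minimal sketch, using synthetic linear-plus-noise data as a stand-in for the post-1976 CRU anomalies (which would need to be loaded separately):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1976, 2011)
# Synthetic stand-in series: a linear trend plus noise.
temps = 0.017 * (years - 1976) + rng.normal(0.0, 0.1, years.size)

def rss(x, y, degree):
    """Residual sum of squares for a least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.sum((y - np.polyval(coeffs, x)) ** 2))

rss_lin, rss_quad = rss(years, temps, 1), rss(years, temps, 2)
n = years.size

# Partial F statistic for adding the quadratic term (one extra parameter):
F = (rss_lin - rss_quad) / (rss_quad / (n - 3))
print(F)  # a small F means the second-order term adds nothing: the trend is linear
```

Swapping in the actual CRU series and comparing F against the F(1, n−3) critical value (roughly 4.1 at the 5% level for n = 35) reproduces the test the comment describes.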
Hmm.. It’s actually kind of funny that you’ve been using the ‘it’s all linear’ argument for a while, though I think Nierenberg was the first to do so. He predicted 0.1 degC/dec warming (based on linear trends) in 1991. You predicted 0.13 degC/dec in 1999, then 0.15 degC/dec and then 0.18 degC/dec as recently as 2006. Given that the linear trend keeps changing for some reason, you might want to reevaluate how useful it is…
@Gavin “Given that the linear trend keeps changing for some reason,”
The “some reason” becomes clear in light of Hofmann et al’s CO2 model of C = 280+exp((Y-1790)/46.9) ppmv, which is a much better fit to the Keeling curve than NOAA’s previous polynomial estimates as well as being 281 ppmv in 1790 A.D. which was nothing like the polynomial fits!
2 °C per doubling makes the temperature T grow as 2.89 ln(C) (since 2/ln(2) = 2.89). As the year Y increases, the growth rate of 2.89 ln(C) tends to 2.89/46.9 = 0.0616 °C/year, or 0.616 °C/decade, in the limit, but the limit is a very long way off. In the meantime the warming per decade from 1981 to 1991 according to this formula is 0.116, from 1989 to 1999 it is 0.133, and from 1996 to 2006 it is 0.150. This is a bit less than Pat’s numbers, which would correspond to his using more like 2.4 °C per decade.
If you look at the last 150 years, Pat is even more of a global warming proponent than Arrhenius was in his second paper, where he cranked his 4-5 ° estimate down to 1.6 ° per doubling.
@Pratt “correspond to his using more like 2.4 °C per decade.”
I meant per doubling of course. (Why can’t one edit one’s postings?)
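Vaughan’s arithmetic is easy to check. A short sketch, assuming the Hofmann fit C(Y) = 280 + exp((Y − 1790)/46.9) ppmv and a sensitivity of 2 °C per doubling (so T grows as (2/ln 2)·ln C), as stated above:

```python
import math

def co2(year):
    """Hofmann et al. fit: CO2 concentration in ppmv."""
    return 280.0 + math.exp((year - 1790) / 46.9)

def warming(y0, y1, per_doubling=2.0):
    """Temperature change (deg C) between years y0 and y1 at the given sensitivity."""
    return (per_doubling / math.log(2)) * math.log(co2(y1) / co2(y0))

for y0, y1 in [(1981, 1991), (1989, 1999), (1996, 2006)]:
    print(y0, y1, warming(y0, y1))
# Yields approximately 0.116, 0.133, and 0.150 deg C per decade,
# matching the figures quoted in the comment.
```

Raising `per_doubling` to about 2.4 reproduces Pat’s higher trend figures, as the correction above notes.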
Pat (hi, Pat!) is making what I would call the “positive forcing fallacy”. He finds things that add up to more than 50% of the observed temperature changes and concludes that what’s left must constitute less than 50% of the observed temperature changes. This sounds logical on its face, but it’s false.
Suppose we assume for the sake of argument that all of the effects he lists are accurately quantified and appropriate to include: SST errors, land biases, water vapor losses, and black carbon. Together, these account for 0.39C. That leaves 0.31 C. Suppose that the remaining temperature change is due to sulfate aerosols (-0.35 C), carbon dioxide (+0.46 C), and natural variability (+0.20 C).
Notice how the presence of a negative factor causes the positive factors to add up to more than 100%. Notice how CO2 can still account for most of the observed warming, even if it accounts for less than half of the positive effects on observed temperature. This is true even if all of Pat’s numbers are correct and even if they represent the complete set of modifications to our knowledge of temperature observations and forcings since AR4.
Bottom line: Pat has failed to disprove what he claimed to disprove.
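The arithmetic of the fallacy is worth spelling out, using the hypothetical numbers assumed above:

```python
# Hypothetical attribution (deg C), as in the example above: a large
# negative aerosol term lets the positive terms sum past 100%.
factors = {
    "SST errors / land biases / water vapor / black carbon": 0.39,
    "sulfate aerosols": -0.35,
    "carbon dioxide": 0.46,
    "natural variability": 0.20,
}

observed = sum(factors.values())
print(round(observed, 2))  # 0.7 -- the factors do add up to the observed change

positives = sum(v for v in factors.values() if v > 0)
print(round(positives, 2))  # 1.05 -- positive factors alone "explain" 150% of 0.7

# CO2 is under half of the positive total, yet a majority of the
# observed warming:
print(round(factors["carbon dioxide"] / positives, 2))  # 0.44
print(round(factors["carbon dioxide"] / observed, 2))   # 0.66
```

So a set of positive contributions exceeding 50% of the observed change says nothing about the share of what remains.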
John – Above I made a point similar to yours although looked at from a different angle. One thing I think is valuable from your own comment is the emphasis on the importance of anthropogenic aerosols in reducing the combined effects of all warming influences, of which you and I agree GHGs play a majority role.
CO2 is continuing to rise, but anthropogenic sulfate aerosols are not rising at the same rate and are actually declining in some regions due to pollution controls. The result is an increasing “unmasking” of GHG effects, which are likely, if uncurtailed, to bend the curve upward. Currently, it is clear that the warming that has occurred is less than that to be anticipated from the CO2 increase. As the offsetting effects of aerosols decline, the warming is likely to approach the values calculated from the positive warming influences, with CO2 being the one rising the fastest. In that sense, current trends are likely to be deceptive.
Fred, a simple question: these aerosols producing this offsetting effect, do scientists know how long they will remain in the atmosphere versus the GHGs they are offsetting?
Yes, the aerosols remain in the atmosphere for very short intervals – months to a few years – compared with CO2. Once pollution controls are implemented more widely, they will decline rapidly, while the effects of existing CO2 levels remain, and are magnified by further CO2 emissions. (Note that some of the best data come from volcanic eruptions, which emit some of the same aerosols, but whose effects can be accurately timed)
I think the mistake that you are making is that you are not a priori taking the warming from measurement error out of the warming equation. You need to do this first before allocating the forcing.
Assuming as you did that all Pat’s numbers are correct, you first have to remove the combined effect of non-climatic warming (e.g. measurement errors) from the IPCC’s assessment of “observed” warming. In Pat’s case, that is the combination of Thompson et al. and McKitrick and Michaels. Together they make up 0.23C of the original 0.7C. You have to treat this as warming that doesn’t exist at all. So then you are left with 0.47C of actual warming. According to Ramanathan and Carmichael, black carbon makes up 25% of the positive forcing, leaving GHGs the other 75% (regardless of how much negative forcing there is). So, 75% of 0.47C is 0.35C—half of the original “observed” warming. And this isn’t even considering Solomon’s stratospheric water vapor.
You have to realize that about 33% of the IPCC’s “observed” warming wasn’t really warming at all—it was problems with measurement (assuming Pat’s numbers are correct). Now it may be true that GHGs account for more than half of the *true* warming—but that is not what the IPCC claimed, nor what Pat is arguing.
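Chip’s accounting can likewise be put in a few lines (a sketch, taking Pat’s numbers at face value as the comment does):

```python
observed_ipcc = 0.70      # IPCC "observed" warming since mid-century, deg C
measurement_error = 0.23  # Thompson et al. + McKitrick and Michaels, per Pat
bc_share = 0.25           # black carbon share of positive forcing (Ramanathan and Carmichael)

# Step 1: remove non-climatic "warming" before allocating forcings.
true_warming = observed_ipcc - measurement_error
print(round(true_warming, 2))  # 0.47

# Step 2: GHGs get the remaining 75% of the positive forcing.
ghg_warming = (1 - bc_share) * true_warming
print(round(ghg_warming, 2))   # 0.35

# GHG warming as a fraction of the IPCC's original 0.7 C figure:
print(round(ghg_warming / observed_ipcc, 2))  # 0.5
```

The disagreement with the previous comment thus turns on whether the 0.23 C is subtracted from the warming before the percentages are applied, not on the percentages themselves.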
The radiative forcing concept is not a zero-sum game in the sense that it has been laid out. If someone finds a mystical 0.2 W/m2 forcing you don’t just remove that from the global mean temperature (however that should be translated, since there’s uncertainty in the climate sensitivity and, in the transient cases being discussed, the ocean heat uptake). We don’t know the total forcing over the last century with much confidence at all, in fact, and there’s also considerable temporal variation in when the specific factors discussed by Pat are most important.
Finally, formal attribution is not done based on agreement between models and observed global mean temperature, and could be done even after removing the global mean signal.
If someone discovers a new 0.2W/m2 forcing, the contribution of the warming from the known forcings has to be adjusted downward in order to accommodate the new discovery.
Chip, not at all.
We know the RF of the WMGHGs to very high confidence (see Myhre et al., 1998 for accepted values for CO2, CH4, N2O, and CFCs). Whatever is happening to black carbon is not going to change that picture. The total anthropogenic forcing is on the order of some 0.5 to 2.5 W/m2, so there’s plenty of room to incorporate possible-but-unknown relatively small forcings within the framework of the current forcing and sensitivity uncertainties, and absolutely no need to take away what we do know in order to accommodate their existence.
An interesting treatment of uncertainty. You are saying that the uncertainty is so great that it protects what we know! I find this problematic. It reminds me of the old NAS/NRC workshop that looked at the contradiction between the surface and satellite temperature trends. They concluded that the uncertainties in both systems were so great that they did not really disagree. But everyone keeps using the precise numbers as though they are meaningful, which clearly they are not. Uncertainty cannot be a source of knowledge, but people ignore it in order to have something to say. False precision is a fallacy.
Sounds like there’s a need for a means of expressing uncertainty. Oh I know, what about an IF :)
Is that true? For instance, if black carbon absorbs IR at the same wavelengths as CO2, doesn’t that lower the amount absorbed by CO2 and thus lower its forcing? Or, if black carbon were to scatter more UV/vis incoming light back into space, wouldn’t there be less IR coming from the surface and wouldn’t that also change CO2’s forcing? Because the absorbance spectra of species can overlap (though I’m not saying that they do here), don’t they all have to be treated together? Am I missing something here?
@Scott: “if black carbon absorbs IR at the same wavelengths as CO2, doesn’t that lower the amount absorbed by CO2 and thus lower its forcing?”
No connection. Black carbon absorbs like a black body, namely broadband across a huge number of octaves including visible light, a long way from FIR. CO2 absorbs at specific wavelengths concentrated in very narrow bands in the FIR neighborhood. Totally different absorption mechanism based on the straightline σ and π bonds between the carbon and oxygen atoms (as opposed e.g. to the Mickey Mouse ears formed by the two hydrogen atoms in water vapor, another GHG unrelated to black carbon).
Black carbon is so absorbent it is both antigreenhouse going in and greenhouse going out. Basically it captures all heat whatever the wavelength.
Sorry I didn’t make my point clear. I know the properties of black carbon, but I was trying to point out that the radiative forcings of all components are intermingled. Thus I thought that “whatever is happening to black carbon doesn’t change that picture” isn’t necessarily true. For instance, if the upper atmosphere were heavily loaded with black carbon aerosols, say at a ridiculous value like 500 micrograms per cubic meter, wouldn’t this affect the radiative forcing values for essentially every other component in the atmosphere, or is my understanding mistaken?
You’re absolutely right about the high impact of aerosols. But to the extent that these are well correlated with CO2 itself, their impact is automatically lumped in with that of CO2 as explained here (in particular the last paragraph).
Later chronologically (but earlier in the page) I developed this theme further by pointing out situations where it made sense to separate the CO2 and aerosol contributions. But while useful in planning mitigation, for simple projections of global warming this separation is neither necessary (since with business-as-usual CO2 and aerosols will remain well correlated) nor desirable (since it unnecessarily complicates what is basically a very simple demonstration of AGW per se).
Michaels’ elaborate calculations are relevant to mitigation, but my impression from the gist of his testimony was that he was far less interested in mitigation than in simply denying AGW, for which his elaborate calculations are just a smoke screen to mask the simplicity of the proof that AGW presents a real threat.
Chip – That doesn’t address my objection at all. As I did before, suppose CO2 provides a net effect of +0.46 C. In that case, CO2 would account for more than half of the “observed” warming, and would account for all of the “true” warming, with some left over to compensate sulfates and the like.
I’ll back what Chris C said too.
“BREAKING: UN IPCC Official Admits ‘We Redistribute World’s Wealth By Climate Policy’
By Noel Sheppard | November 18, 2010 | 11:27
If you needed any more evidence that the entire theory of man-made global warming was a scheme to redistribute wealth you got it Sunday when a leading member of the United Nations Intergovernmental Panel on Climate Change told a German news outlet, “[W]e redistribute de facto the world’s wealth by climate policy.”
Such was originally published by Germany’s NZZ Online Sunday, and reprinted in English by the Global Warming Policy Foundation moments ago:
(NZZ AM SONNTAG): The new thing about your proposal for a Global Deal is the stress on the importance of development policy for climate policy. Until now, many think of aid when they hear development policies.
(OTTMAR EDENHOFER, UN IPCC OFFICIAL): That will change immediately if global emission rights are distributed. If this happens, on a per capita basis, then Africa will be the big winner, and huge amounts of money will flow there. This will have enormous implications for development policy. And it will raise the question if these countries can deal responsibly with so much money at all.
(NZZ): That does not sound anymore like the climate policy that we know.
(EDENHOFER): Basically it’s a big mistake to discuss climate policy separately from the major themes of globalization. The climate summit in Cancun at the end of the month is not a climate conference, but one of the largest economic conferences since the Second World War. Why? Because we have 11,000 gigatons of carbon in the coal reserves in the soil under our feet – and we must emit only 400 gigatons in the atmosphere if we want to keep the 2-degree target. 11,000 to 400 – there is no getting around the fact that most of the fossil reserves must remain in the soil.
(NZZ): De facto, this means an expropriation of the countries with natural resources. This leads to a very different development from that which has been triggered by development policy.
(EDENHOFER): First of all, developed countries have basically expropriated the atmosphere of the world community. But one must say clearly that we redistribute de facto the world’s wealth by climate policy. Obviously, the owners of coal and oil will not be enthusiastic about this. One has to free oneself from the illusion that international climate policy is environmental policy. This has almost nothing to do with environmental policy anymore, with problems such as deforestation or the ozone hole.
For the record, Edenhofer was co-chair of the IPCC’s Working Group III, and was a lead author of the IPCC’s Fourth Assessment Report released in 2007 which controversially concluded, “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”
As such, this man is a huge player in advancing this theory, and he has now made it quite clear – as folks on the realist side of this debate have been saying for years – that this is actually an international economic scheme designed to redistribute wealth.”
Do you think the Cato Institute will have a problem with this?
A question better asked of the Chinese government. With most western nations currently functionally bankrupt, any redistributed money will have to be borrowed from nations in surplus.
You’re actually in error. The US is sending ‘borrowed money’ for ‘redistribution’ all over the planet. If we nationalized our oil, gas, and coal, we’d have a surplus and then some.
Where’s the socialist president when you need one?
He’s in North Korea. Have a nice trip and life. Don’t forget your earmuffs. It still gets cold there.
Wait, what happened to the Chinese model and all of those surpluses? If we ran things right, Obama would’ve strung up every last banker who ran their operation like a casino. And small businesses would now be able to get, what are they called?, business loans.
But now many of us have to wear ear muffs, since we can’t pay the gas bill, like North Koreans.
How did the energy cartels do in the last quarter?
I do not disagree that a nation that employs its best and brightest minds in the financial sector, a part of the economy where those minds are most often used to gin up new paper pushing ways to make money, is failing to make the best engineering use of its best and brightest talent.
I also don’t disagree that jettisoning moral and criminal hazard is in every way unwise.
That said, I find little merit to your argument that a Hugo Chavez-led government would change things in an appreciably better way. He would drive up demand for (hopefully) American-made earplugs, but other than that I find little merit in your argument in opposition to human nature.
As for how the energy cartels did last quarter, since we seem to be too busy assuaging our guilt over perceived past sins by focusing on how we might ship resources we don’t have to third world dictators and their cronies via a global carbon tax scheme (better known as the “Dinero for Despots” tax scheme), and since we haven’t troubled ourselves with building thorium reactors, nor been about researching Tesla’s wireless magnetic coil technology, ZPE technology, or a host of other non-carbon emitting energy technologies, carbon fuel companies did rather well last quarter.
When the AGW hypothesis dies a well deserved and scientifically unsupported death, perhaps we can then have truly meaningful and important debates about how to power productive human activity in more sustainable and (environmentally) benign ways.
I never argued for a H Chavez-led government, but you do appear to be arguing a corrupt and decaying economic system is simply ‘human nature’. Or if you’re arguing the mythological free market is a manifestation of ‘human nature’ I’d have to laugh, especially given the recent fate of GM.
I look at the natural resources of this country the same way Powell viewed the oil of Iraq. With ownership, Americans could be doing quite well given the insane profits of the international energy concerns who control this country to a large degree. But that case for nationalization is and will always remain off the table: Americans would rather suffer.
What will remain on the table is AGW, and there’s no evidence the issue is going to die. There’s also plenty of discussion going on about sustainable development paths in the face of peak oil, even if you disagree with the climate science. The world is not waiting.
If you (ie the US) ran things right, Obama would have strung up all the Congressmen who pushed American banks to provide housing loans to people who couldn’t afford them. There were plenty of them, from both parties.
The fact that a consequence of a global policy to combat climate change may be to redistribute wealth from the developed countries to developing ones doesn’t meant the theory of AGW was concocted for that purpose.
No, it doesn’t. However, with 99.9% of government grant money going to scientists with a working hypothesis wholly consistent with redistributive state interests, you would have to believe that scientists care nothing about tenure, government grants, or career advancement to make the (rather delusional) case that their research findings have been uninfluenced by the influx of enormous amounts of agendized government grant money directed to a science hitherto regarded as a “backwater”.
If climatology is a ‘backwater science’, what scientific evidence did you use to conclude that AGW is ‘unsupportable’? (Please let Chang answer)
Judy, this has nothing to do with the Michaels/Santer dispute. It’s just some conservative yuckyuck (and I can say that since I am one too) repeating the latest WTFIUWT drivel.
I have been asked by several commenters to elaborate on my criticisms of Pat Michaels. Part of my frustration is not just here, but also his history on this topic. I think Ben Santer did a good enough job of criticizing him, and I find it amazing that Curry could think he was bested by Michaels. Several people here such as Fred and John N-G echo similar criticisms. Actually much of Pat Michaels’ testimony was bearable, though the segment in discussion here (the graph) is just indefensible. Taking out only a select number of factors (only in the direction he wanted) with a 1000th of a degree precision in how that should work (especially with the Solomon and Thompson stuff) is just not convincing, and the nail was hit on the head when it was suggested that if you decide to challenge an IPCC central conclusion you probably want to come to the battle with a gun, not a rubber band you’re readying to fling at someone.
I found much of this event frustrating in general. Richard Alley, who I think most scientists should take lessons from in talking to the public, was a very good breath of fresh air. Ben Santer was good, as well as others, but I still see a continuous struggle to learn how to talk to politicians and lay people. There were plenty of mistakes and inability to directly answer questions on both ends as well. I’m not sure Dr. Cicerone understands the Clausius-Clapeyron relationship as it relates to the saturation vapor pressure change (even if he’s arriving at the right conclusions), and he made an obvious strawman attack on Dr. Lindzen’s own beliefs. Dr. Lindzen seemed like he just wanted to argue any possible point, even increases in the number of hot spells, and was completely off-base with his remark that taking all the CO2 out of the atmosphere would cool the planet by 2.5 C. If anyone in the room had been familiar with the Schmidt et al 2010 or Lacis et al 2010 papers that just came out, they’d know his estimate is an order of magnitude off, and even with no feedbacks removal would cool the planet by some ~7 C.
My frustration was amplified significantly when Mr. Rohrabacher decided he should put himself out there to accuse and challenge the scientists at the hearing, compounded by the most idiotic of “skeptical” arguments like “how is Mars warming if it’s due to us?” The fact Dr. Alley remained patient in his responses to him is rather commendable.
Your objection to PJM can be excerpted thus:
“Taking out only a select number of factors (only in the direction he wanted) with a 1000th of a degree precision in how that should work (especially with the Solomon and Thompson stuff) is just not convincing,…”
That’s it? “Not convincing”?
No whys, buts etc?
John N-G can be countered easily.
If the IPCC 2007 “got it right” even without the data/mechanisms presented in Thompson, Solomon and Ramanathan, how do we place confidence in the method by which they arrived at their ‘high likelihood’?
Either the newer mechanisms significantly affect the attributed amount to CO2, given the wide range of effect magnitude allowed for aerosols, or they affect the confidence in attribution. How can either not take place?
Shub – An example of how the IPCC could have “high likelihood” without knowing about Thompson, Solomon, and Ramanathan, using my previous hypothetical numbers:
If the IPCC’s estimate of warming due to CO2 was 0.46 C +/- 0.07 C, it would justify “high likelihood” by itself without any knowledge whatsoever of the other forcings.
The real numbers are different, but one important takeaway message from the IPCC: of all the possible forcings, the direct forcing due to CO2 and other anthropogenic greenhouse gases over the past century has been determined more precisely than any other important climate forcing, known or unknown. I know of no climate scientist, skeptic or otherwise, who would dispute that statement.
Shub – Oh, I see what you’re getting at. Full credit for making an argument that’s more logical than the one made to Congress.
Given that attribution was one of the prongs of the IPCC argument, the discovery and quantification of several previously unknown factors would decrease confidence, IF the IPCC had not considered the possibility of unknown unknowns in their likelihood statement, AND if the pattern of warming of the unknowns matched the CO2 pattern.
I’m sure they expected unknown unknowns, but I don’t know how large they thought they were. Attribution relies on patterns of heating, though. Of the effects that Pat factored in, Thompson affected only oceans, Michaels affected only high latitudes, Solomon (I think) would have the same sign effect in the troposphere and stratosphere, and Ramanathan affected mainly the Northern Hemisphere. So attribution studies would probably not have mistaken the signal of one of these for a CO2 signal.
> I’m sure they expected unknown unknowns, but I don’t know how large they thought they were.
If we have an idea how large an unknown unknown is, is it really unknowingly unknown?
Is it not the case that arguing from the ignorance of our ignorance is more or less arguing from ignorance?
I’d pretty much agree.
The panel i would like to see
1. Lindzen ( but put him on the clock)
2. Santer. ( sharp)
3. Curry. ( more questions for her)
4. Alley ( good natured and patient)
5. Dude from NCAR ( for his clear statement of the divide between values and science)
Puh-leeze. You know darned well that the linear trend is exceedingly robust around .16 or .17 c for decades after warming starts in 76 in the CRU data.
If it was all so predictably linear why have your linear estimates kept changing?
Surely if they were any good as predictors, we’d still be on the linear trend you started with….
@PM: “the linear trend is exceedingly robust around .16 or .17 c for decades after warming starts in 76”
That’s within the error bars when the CRU data is convolved with up to a 5-year point spread function. With ten years however the error bars are small enough to falsify this. A more accurate statement in this case is that the CRU data between 1975 and now is curving upwards, consistent with a log dependence of temperature on CO2 mixing ratio (a la Arrhenius) but not with a linear dependence of temperature on time (a la Michaels). The former explains why your (PM’s) linear estimates kept changing, and refutes (i.e. is inconsistent with) your new claim that they are fixed at 1.6-1.7 °C/decade over that 35-year period.
Oops, 0.16-0.17 °C/decade, obviously, sorry. (Normally I use per-century because I don’t have any sense of what 0.15 °C feels like—it’s like saying how far your car goes per microsecond.)
Pat Michael needs to answer this.
Erratum: Pat Michaels with-an-s.
Addendum: the RSS feed of this site still fails to include lots of comments.
@willard “Pat Michael needs to answer this.”
Willard, Michaels is a denier. Since when has a denier ever offered a straight answer to anything? I’m only interested in rational discussion of climate change, not in arguing with deniers.
Vaughan, can you elaborate on your “the error bars when the CRU data is convolved with up to a 5-year point spread function”? Is there anything published? I assume these convolved error bars have nothing to do with (1) statistical-significance error bars based on sample variance in the original station data, or (2) error in the grid cell averages, for that matter. Please correct me if I am wrong. This is a new kind of error to me.
Hi David, no I don’t know of anything relevant in the literature. The point is that the error bars of monthly temperature data are prima facie huge, whereas when smoothed by convolution with a 30-year PSF (what Paul Clark calls “mean:360” in woodfortrees URLs) they are just as obviously tiny (central limit theorem and all that). My estimate of 5 years as insufficient and 10 years as starting to get the error bars down to a statistically usable level is not even back-of-the-envelope statistics but mere intuition (which doesn’t always pan out, though I generally trust mine). It would be a useful exercise to work this out more rigorously, which I assume is what climate scientists do for a living (my original training was in physics, where statistics is put to rather different uses; I’m a mere layman in matters of climate).
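The central-limit-theorem intuition here can be checked numerically: boxcar-smoothing pure noise of standard deviation σ over a w-month window should leave residual scatter of roughly σ/√w. A toy sketch on synthetic noise only (not actual CRU data; the 0.3 °C noise level is an arbitrary stand-in):

```python
import random
import statistics

def boxcar(series, window):
    """Centered moving average (woodfortrees-style mean:window)."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

random.seed(0)
noise_sd = 0.3  # stand-in for monthly anomaly noise, in deg C
months = [random.gauss(0.0, noise_sd) for _ in range(1200)]  # 100 years

for span in (1, 5, 10, 30):
    w = 12 * span
    observed = statistics.stdev(boxcar(months, w))
    predicted = noise_sd / w ** 0.5  # central limit theorem estimate
    print(span, round(observed, 3), round(predicted, 3))
```

The observed scatter tracks the σ/√w prediction roughly; for the 30-year window the handful of effectively independent samples makes the estimate itself noisy, which is exactly the short-record caveat being argued about.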
Also Gavin, do you dispute that the IPCC A1B model projections are nonlinear?
Does anyone know where support for…
can be found in…
Ramanathan, V., and G. Carmichael (2008), Global and regional climate changes due to black carbon, Nature Geosci, 1(4), 221-227, doi:10.1038/ngeo156
Figure 2 puts it at about 30 percent, but that is for pure black carbon. In reality, black carbon is generally one component of “atmospheric brown clouds” (ABCs), whose other components tend on average to cool rather than warm. ABCs are products of fossil fuel combustion as well as natural events such as forest fires. Because of the mixture of warming and cooling components, reducing ABCs will mitigate warming much less than is theoretically possible from BC alone.
Clarification – Figure 2 puts it at about 30 percent of GHG-mediated warming, but if you factor in the slight solar component, it declines toward 25 percent.
Thanks. Two problems…
1) The forcing is from the pre-industrial to present… values from 1950-2000 may (will) be different…
2) Some (about one third) of that is included in GCMs and therefore already accounted for in the IPCC attribution statement…
VERIFYING IPCC CLAIM
In its Fourth Assessment Report of 2007, IPCC’s claim regarding global warming was the following :
“Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”
Let us verify this claim using the observed data from the Climate Research Unit (CRU) of the University of East Anglia. In this claim, “mid-20th century” means the year 1950. As a result, according to the IPCC, global warming since 1950 is mostly man made.
To verify the claim that global warming since 1950 is mostly man made, we may compare the global warming rate in degree centigrade (deg C) per decade in one period before 1950 to that of a second period after 1950 to determine the effect of the increased human emission of CO2. To be able to do this, we need to identify these two periods, which may be established from the Global Mean Temperature Anomaly (GMTA) data of the CRU shown in Figure 1.
In Figure 1, the GMTA could be visualized as the sum of a Linear GMTA that has an overall warming rate of 0.6 deg C per century and an Oscillating GMTA that oscillates relative to this overall linear warming trend line. This Oscillating GMTA indicates the relative warming and cooling phases of the globe.
As our objective is to verify the claim that global warming since 1950 is man made, we need to identify two global warming phases, before and after 1950. To clearly see the global warming and cooling phases, we plot just the Oscillating GMTA, which is the GMTA relative to the overall linear warming trend line shown in Figure 1. This can be done with the online software at http://www.woodfortrees.org by rotating the overall linear warming trend line to horizontal using a detrend value of 0.775, so that the Oscillating GMTA has neither an overall warming nor a cooling trend. The noise in the Oscillating GMTA is then removed by taking five-year averages (compress = 60 months) of the GMTA. The result thus obtained is shown in Figure 2.
Figure 2 shows the following periods for relative global cooling and warming phases:
1. 30 years of global cooling from 1880 to 1910
2. 30 years of global warming from 1910 to 1940
3. 30 years of global cooling from 1940 to 1970
4. 30 years of global warming from 1970 to 2000
If this pattern that was valid for 120 years is assumed to be valid for the next 20 years, it is reasonable to predict:
5. 30 years of global cooling from 2000 to 2030
Figure 2 provides the two global warming phases before and after 1950 that we seek to compare. The period before 1950 is the 30-year global warming period from 1910 to 1940, and the period after 1950 is the 30-year global warming period from 1970 to 2000.
Figure 2 also provides the important result that the years 1880, 1910, 1940, 1970, 2000, 2030 etc are GMTA trend turning points, so meaningful GMTA trends can be calculated only between these successive GMTA turning point years.
Once the two global warming periods before and after mid-20th century are identified, their rate of global warming can be determined from the GMTA trends for the two periods shown in Figure 3.
According to the data of the CRU shown in Figure 3, for the 30-year period from 1910 to 1940, the GMTA increased by an average of 0.45 deg C (3 decades x 0.15 deg C per decade). After 60 years of human emission of CO2, for the same-length 30-year period from 1970 to 2000, the GMTA increased by a nearly identical 0.48 deg C (3 decades x 0.16 deg C per decade). That is, the effect of 60 years of human emission of CO2 on the change in global mean temperature was nearly nil, which shows IPCC’s claims are not supported by the data.
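The detrend-and-compare recipe above can be mimicked end to end on a synthetic series. This is purely illustrative: a made-up 0.6 deg C/century linear trend plus a 0.1 deg C, 60-year sinusoid stand in for the CRU data, which is of course far messier:

```python
import math

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

# Synthetic annual "GMTA": linear 0.6 deg C/century trend plus a
# 60-year oscillation whose warming phases are 1910-1940 and 1970-2000
years = list(range(1880, 2001))
gmta = [0.006 * (y - 1880) + 0.1 * math.sin(2 * math.pi * (y - 1925) / 60)
        for y in years]

# Fitting the whole record recovers the overall linear rate (deg C/century)
print(round(100 * ols_slope(years, gmta), 2))

# Decadal warming rates over the two warming phases
def trend(y0, y1):
    xs = [y for y in years if y0 <= y <= y1]
    ys = [gmta[years.index(y)] for y in xs]
    return 10 * ols_slope(xs, ys)  # deg C per decade

print(round(trend(1910, 1940), 2), round(trend(1970, 2000), 2))
```

The two printed decadal rates come out equal by construction here, mirroring the near-equality the comment reads off Figure 3: any fixed linear trend modulated by a periodic oscillation produces identical rates in successive warming phases.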
IPCC: “Most of the observed increase in global average temperatures since the mid-20th century is very likely” man made
Global Mean Temperature Anomaly from the Climate Research Unit of the University of East Anglia (graph)
Michaels is correct. No doubt he will be blackballed for stating what the government does not want to hear.
This is where Judith is extremely smart in playing the government’s game yet keeping her views and opinions on the blogosphere.
The day science actually stopped looking into planetary changes, or even asking why they are happening, was when most atmospheric research jumped ship to the AGW theory.
Can I ask why one has to have a tommy gun to go after the IPCC? The principals are, in fact, nominated by their governments, which (dare I say) have their own agendas. It seems to me the standard of argument against the IPCC should be lower than it would be for a debate in the refereed literature (though that, too, suffers from some problems related to Public Choice). The same should apply to the CCSP, by extension.
Santer’s statement to that effect was a simple appeal to a very political authority.
Your logic is backwards. The IPCC, essentially by definition, represents what the refereed literature says on a particular issue. Except that, much like a review, it summarizes and assesses a broad range of literature in order to come to some convergence on the best available weight of evidence.
Send me your email (to firstname.lastname@example.org) and I will be happy to give you an annotated bibliography of the relevant refereed literature that IPCC AR4 and CCSP 2009 ignored. Almost all in major journals, BTW. Accidental? Given so many grey literature citations in the IPCC??
Free the data, free the code, open the debate.
Do you really believe governments are that value-free? Or, for that matter, a community of scientists reliant upon support from that government??
Science is supposed to be value free. Are you arguing here that the climate science promoted and supported by the Cato Institute (and its ‘community of scientists’) is tainted in some way?
Science has never been value-free. Been to a faculty meeting recently?
So why do you bother to offer to send ‘refereed lit’ if you believe it’s not value free? Why would I want your recommend garbage reading list from Cato when I can read the ‘garbage’ from the AGU? Is your garbage filled with libertarian ‘value’, and therefore, odor-free?
Incivility and arrogance are virtues in no civilizations of which I am aware. And in anonymous practice, they’re also regarded as cowardly.
I agree. The US had a VP arrogantly pretending he was the President and the entire Cabinet felt it was best to kill a few hundred thousand residents of Iraq. So much for acting civil. People should ask questions for clarity , as I do here. But note that anonymity is best, especially when posters start talking about tommy guns and acts of violence. Chills the blood.
But you can do so without the garbage packaging you put it in. Was Iraq germane? All you do is catapult the argument into dimensions that have nothing to do with the thread of discussion.
The garbage posted was by Michaels. Try and pick through that in your head. He appears to be claiming science paid for by the government is tainted with the researcher’s values. Wouldn’t that mean his own ‘science’ is suspect and filled with the libertarian values of Cato? Why should we read it?
He never answered.
All you do is play tone cop and miss the real point.
Been to a faculty meeting of a top notch department at a top notch university lately? There it’s all science, at the other extreme it’s all back-biting politics and CYA. That’s how the MIT faculty viewed the situation in 1972 when I came on board as a freshly minted Ph.D. I’ve been emeritus at Stanford for 10 years now and the situation is completely unchanged—if anything it’s gotten more extreme.
Don’t quite get what you are saying.
@PM: “Don’t quite get what you are saying.”
Sorry if I was unclear. You said “Been to a faculty meeting recently?” as though faculty meetings of all departments in all institutions behaved the same way. This is so far from the truth as to make it a meaningless statement.
I was surprised at the amount of time devoted to various feedbacks and sensitivities. CO2 was not a new component in the climate machine; its concentration increased by some 30% or so.
– If you accept that there is natural variation in global temperatures, then all feedbacks and sensitivities are included in the process.
– If you have a climate-independent physical process with a mechanism to affect climate, one to which temperature anomalies can be correlated over the long term, and the last 50-60 years show some disagreement between the two, then it can be assumed that the differential is the result of a new component’s contribution.
Here is my analysis; very few if any may agree and the majority may not, but science does not care for the majority’s view. It is the facts and the data that drive science forward, not a cosy cabal of consensus.
You’re one of the very few people who uses his mind and is open to the variety of different forcings and feedbacks. This gives you a higher grasp of actual science than what we currently call science governed by man-made laws.
Using temperatures as a global barometer of the planet’s health was never a good idea. Temperature readings are regional, reactionary events reflecting what is happening in the general area.
I enjoy reading your comments, and at times they have inspired me to look at different avenues.
Appreciate your enthusiasm. However, please study some physics. Work requires force times distance. From below “atmospheric pressure” cannot work any “energy” on the earth because it is not moving the surface of the earth. “The magnetic field” is independent of “gravity”.
Atmospheric turbulence is dissipated as heat which is radiated away. It also slows the earth’s rotation.
See Henrik Svensmark’s Cosmoclimatology for examples of interactions – the sun radiation/magnetic field modulating earth’s magnetic field which varies galactic cosmic rays which affect atmospheric ionization which affects cloud formation which affects albedo – sun reflection – which affects temperature etc. These affects are now being quantified.
David L. Hagen,
If you stop the planet suddenly, the energy keeping pace by being pulled by the planet will travel at approximately 1669.8 km/hr at the equator.
Centrifugal force is the counterforce to the other energies that want to compress us into chemical puddles.
Centrifugal force can be mechanically reproduced.
The planet has stored energy that is being released and in using it slows this planet along with frictional forcings.
Science has created individuals who work individually for their own “glory” or “fame”.
My olive branch…
We have 3 energies working on this planet. 2 are reactionary energies and 1 is physical energy.
The sun gives this planet energy that we either absorb or deflect through radiational heat and light. The magnetic field gives us much in the way of gravity and charged particles. Rotation gives us the third energy which is the atmosphere being pulled by the planet and all the molecules retain this physical energy.
As an engineer, I consider climate a very complex device with a lot of minor parameters, but if there is a particular rhythm to it, and it has been there for as long as records exist, then there could be only a few (perhaps one or two) major causes, which may be found in a number of other natural processes as well as in the climate.
If something new is introduced (a CO2 rise of 30%, the CFC-ozone hole, UHI, etc.; all probably have some effect), then any pronounced excursion could be attributed to these new factors, but it can’t destroy the underlying rhythm. Attributing everything or nothing to CO2 is, I think, erroneous in principle, at least until it is defined what is the natural and what the man-made contribution.
Anyone looking at
and is seriously considering the climate question should, before embarking on 1950-2000, first take a look at the 1650-1750 period. (The accuracy of the GP data (green line) is 99.99..%; for the NAP it is probably around 95%. They are not proxies or reconstructions!)
I would like to see a comment, or even better a question or two, from Dr. Michaels, and particularly from Dr. Curry, but of course they have their own agendas.
We have 3 forcings of gravity.
Electro-magnetic, which affects plants and life, from what we consume to our being part of this planet. Atmospheric pressure, which exerts energy upon this planet. Last is centrifugal force, which keeps us from being a pile of chemicals and is the only energy exerting outward.
I did find a pattern of climate behaviour which stretches from Ice Age to Ice Age and has a direct link to atmospheric pressure build-up. Each one is different due to planetary slowdown, further distance from the sun, and water changing its salt content to become fresher to compensate for the atmospheric changes from centrifugal force. Gravity does not seem to change, but the other parameters around this planet do.
@vukcevic “Attributing everything or nothing to CO2, I think is erroneous in principle”
Very important point; this false disjunction is a plank of the denial machine’s platform. However, it’s not just “in principle”, since it can be strengthened to “demonstrably false”. If temperature depended only on (or even only 80% on) CO2, then temperature would rise as smoothly as CO2. CO2 rises extremely smoothly, whereas temperature over 1850-now shows wide swings, demonstrating other influences on temperature besides CO2. The swings are not so wide, however, as to mask temperature’s evident tracking of CO2, which the assumption of a logarithmic dependency makes statistically significant.
Pat Michaels does not seem to get who he’s talking to, right now.
This supposedly rational discussion of climate change becomes less rational when debated on the merits of the discussants instead of the merits of what they have to say. Who’s talking to whom replaces reason with appeals to authority that quickly degenerate to ad hominem arguments.
I would agree if we were having a rational discussion of climate change, but I doubt that’s the case. My point was to warn against dismissing your comment out of hand, simply because your name does not appear often in climate blogs.
Even if we assume that it’s a rational discussion, the merits of the discussants do have an impact on an audience who has hundreds of comments to skim.
Being an authority about formal stuff is not irrelevant for our discussion. It is also helpful when testifying against the idea that a community of scientists reliant upon support from that government is not value-free.
Ad hominem arguments are not always fallacies.
Asserting that we’re not having a rational discussion is a self-fulfilling prophecy. Rationality can be maintained (a) by ignoring appeals to authority and simply addressing the merits of the arguments, and (less obviously) (b) by ignoring the contributions of outsiders. The reason (b) aids rationality is because the insider-outsider dichotomy is itself irrational and those who draw that distinction therefore aid rationality by their silence.
Fair enough. Let’s hope then that Pat Michaels and the other insiders here will help maintain the ongoing rational discussion in a more constructive way than by their silence.
Before you pontificate further, please take a reality check. Please review Climate Change Reconsidered, the 880-page 2009 report by the Nongovernmental International Panel on Climate Change (NIPCC), for a glimpse of summaries of major scientific literature ignored by the IPCC or published since AR4.
The IPCC’s review “on a particular issue” focuses on catastrophic anthropogenic climate change, reinforcing the case for more research funds. NON-anthropogenic climate change has been shortchanged and underestimated.
For time being I think it is wise to concentrate on the data of reasonable accuracy.
I am not certain that ice cores are reliable enough.
The ice cores should be kept in perspective. They contribute to our understanding of climate change broadly construed. They do not, however, address our present circumstances, which at 0.39‰ (0.039%) of CO2 by volume have gone well above the ice core limits of 0.18-0.28‰.
But digging deeper into the geological strata to get back to a time of comparable CO2 *level* cannot help because the current pickle we’re in has less to do with the level than its first derivative, which is now hitting .2‰/century and would be 7 times that in 2100 if the Keeling curve continues to track the (so far impressively accurate) model of NOAA ESRL’s David Hofmann. Currently we do not have the remotest idea where to look in the geological record even for traces of such a catastrophic rate of rise, let alone infer what its consequences were at the time.
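The “7 times” figure can be sanity-checked directly from the Hofmann base-plus-exponential fit mentioned above; a minimal sketch, using the published model parameters (280 ppmv natural base, 36.2 ppmv anthropogenic in 1958, doubling every 32.5 years), not any official NOAA code:

```python
import math

def hofmann_rate(year):
    """Derivative of Hofmann's CO2 fit, in ppmv per year.
    The constant 280 ppmv base contributes nothing to the rate."""
    k = math.log(2) / 32.5            # anthropogenic doubling time of 32.5 years
    return 36.2 * k * math.exp(k * (year - 1958))

# Rate of rise in 2100 relative to 2010: exp(ln2 * 90 / 32.5), about 6.8x
print(hofmann_rate(2100) / hofmann_rate(2010))
```

Note the ratio is independent of the 36.2 prefactor, so the “roughly 7 times” claim follows from the doubling time alone.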
As far as ice cores are concerned, reliability of trapped O2, CO2 etc, I assume that they are OK. I only question 10Be records, these appear to be highly contaminated by the climatic events, rendering 10Be as a solar activity proxy unreliable. I would assume that may be also applicable if 10Be decay is actually used for a time line for various paleo- studies, but I stand to be corrected.
My reply to your previous post ended Here
Sorry, my post went into wrong box, it was meant for Mr. Lalonde above.
I don’t trust ice core samples either, due to the possible contamination you mention. Ice is subject to pressure cracking. Extreme cold cracking can sound like a rifle going off, and then you have warm weather that melts into the cracks. To say these are global measurements is a grave error, as cloud cover does not cross the equator. Just look at any rotational world map: the cloud systems NEVER cross.
Another thing our planet does: the lag time from an event can be many years before the effects are felt, due to stored energy.
‘essentially by definition’ is simply circular clap-trap.
On the contrary, the IPCC (and CCSP) represent one specific view of what the refereed literature says, specifically the view endorsed by their governments under the UNFCCC. They do it very well (I call it artful bias), but these are clearly advocacy arguments, as is yours. As assessments they are awful. Never forget that the IPCC belongs to UNEP, so this is all part of the Environmental Program. That is what is true by definition.
Can I ask why one has to have a tommy gun to go after the IPCC?
1. Because if you have one, it’s best to use it.
2. Because if you don’t use one, one will be used on you.
3. Integrity; I liked that reply from Lindzen, it goes all ways.
4. Pride of workmanship.
5. People who work very diligently were not afforded the rare opportunity to testify before Congress, so gratitude for the opportunity would drive most to bring their best game.
You’re right. I had to do the same thing for a power-generating turbine. Being a nice guy and not pointing out the faults of the turbines currently in use, my first presentation never made it past the engineers. The second time, I discovered and categorized with science ALL the faults, which brought the turbine before a board of directors.
Being a “nice guy” in business will get you run over.
I’ve been saying this for years….taps foot.
Would also add that a big problem the skeptics have with the AGW consensus has to do with an asserted lack of transparency, rigor, and step-by-step proof/explanation. So then they come by and all their skeptic AHAs tend to be WORSE! What is that showing the field? How is that raising the game?
John Graham-Cumming rawks. Watts, Eschenbach, McI, Lucia are MESSES.
And yes, I include her, Steve. I remember you pointing me to her, and then that gawdawful trend analysis stuff never even had a clear assertion like which effing page to read (and it was morphing), but you thought it some big ta-da to point me to. And I asked a very good noise-model time-series guy what he thought, and he turned thumbs down on Lucia (and no, I won’t cite who, but he’s a conservative economist). And I raised the ENSO issue (and I’m not saying “first”; I don’t know or care if I was; the point is even more galling that people think of that independently, but she didn’t), but it seems the fricking first thing to think of, if, well, thinking, and she did not respond (yeah, maybe she did eventually, but it did not penetrate her noggin, or else she was just resistant to thinking critically about her own hypotheses). Remember “you are the easiest to fool”; that is why you need to be tough on yourself, and why you need to have really thought through your case (the tommy gun) when going after consensus.
Y’all need to write papers. Heck, if 17-year-olds on Wikipedia can figure out how to write reference notes, so can you lot. It is not that HARD. Not like in the old days with typing and all that. We got word processors! And NO WHINING about being kept down by peer review (I have seen skeptic drafts and they were butt-bad (Willis, Steve, Loehle, etc.)). You can always show your papers pre-pub as white papers (actually the tendency now to hide them is interesting, trying to get little launches ahead of criticism, and also interesting given that there exists a publication channel). Plus there are journals like Climate of the Past Discussions which are almost arXiv-like.
VERIFYING IPCC CLAIM 2
In its Fourth Assessment Report of 2007, IPCC’s projection of global warming was the following :
“For the next two decades, a warming of about 0.2 deg C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1 deg C per decade would be expected.”
Let us verify this projection using the observed data from the CRU. This may be done by comparing the global warming rates of the last two decades, as shown in Figure 1.
In this figure, the global warming rate decelerated from 0.25 deg C per decade for the period from 1990 to 2000 to only 0.03 deg C per decade for the period since 2000, a reduction by a factor of 8.3, which shows that the IPCC claim of accelerated warming is not supported by the data.
Note that the IPCC projection for the current global warming rate was 0.2 deg C per decade, while the observed value is only 0.03 deg C per decade. As a result, the IPCC’s Exaggeration Factor is 6.7.
 IPCC: “For the next two decades, a warming of about 0.2 deg C per decade is projected”
 Global Mean Temperature Anomaly from Climate Research Unit of the University of East Anglia (GRAPH)
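The decadal rates quoted above come from straight-line fits to the anomaly series; a minimal sketch of that calculation, using made-up anomaly values for illustration rather than the actual CRU data:

```python
import numpy as np

def decadal_trend(years, anoms):
    """Least-squares slope of a temperature anomaly series,
    expressed in deg C per decade."""
    slope_per_year = np.polyfit(years, anoms, 1)[0]
    return 10.0 * slope_per_year

# Hypothetical anomalies rising 0.025 deg C per year over 1990-2000
years = np.arange(1990, 2001)
anoms = 0.025 * (years - 1990)
print(decadal_trend(years, anoms))  # recovers 0.25 deg C per decade
```

Running the same fit over two adjacent windows and taking the ratio of the slopes is all the “reduction by a factor of 8.3” amounts to.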
(Gavin’s point about Pat’s increasing linearity constant is answered in my reply to Gavin’s first post. Bottom line is that CO2 is rising at an exponential rate PLUS A CONSTANT. The constant explains everything.)
For a clearer picture of what the actual climate forcings were since 1950, see Figure 5 of Hansen et al. (2007) http://pubs.giss.nasa.gov/abstracts/2007/
For a clearer picture of the attributed temperature response to these forcings, see Figure 8 of the same paper. This paper in Climate Dynamics analyzes climate simulations for 1880-2003 with GISS ModelE. As you can see, the well-mixed GHGs clearly account for most of the radiative forcing since 1950, as well as for most of the global temperature change for this time period. Of note are the volcanic aerosol forcings due to the Agung, El Chichón, and Pinatubo eruptions, which contribute a surprisingly large negative forcing during this time period. Note also that the volcanic aerosol forcing is transient in nature, whereas the GHG forcing is effectively permanent: it continues to accumulate, and it is still increasing unabated. Unless we suddenly have a series of large volcanic eruptions again, the global temperature will also continue to increase in step with the increase in GHG forcing.
This Hansen et al. paper should be helpful to Pat Michaels to put his results into better perspective.
It should be noted that if the increase from 1950-2000 was expected to be 0.6 C, that expected for 2000-2050 would be 1.3 C (given simple fits to CO2 growth and a conservative temperature response based on the last few decades). If it was off by 0.1 C in the last 50 years due to other factors, that is small compared to the expected 1.3 C increase in the next 50. Going forward, the question should be how big is natural variability or black carbon, etc., compared to ~1.3 degrees. I think it is small, because even the Little Ice Age was probably only about 0.5 C, and that required a major solar variation. We should not be concerned with factors leading to only a few tenths of a degree, because by 2050 attribution will be very easy compared to today.
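For what it’s worth, the 0.6 C and 1.3 C figures are mutually consistent under a base-plus-exponential CO2 fit with a logarithmic temperature response; a rough sketch (the CO2 parameters are from the Hofmann fit discussed elsewhere in this thread, and the per-doubling response is back-solved from the stated 0.6 C rather than taken from any fit of mine):

```python
import math

def co2(year):
    """Hofmann-style fit: 280 ppmv base plus an anthropogenic
    component of 36.2 ppmv in 1958, doubling every 32.5 years."""
    return 280.0 + 36.2 * math.exp(math.log(2) * (year - 1958) / 32.5)

def warming(y0, y1, td):
    """Arrhenius logarithmic response: td deg C per CO2 doubling."""
    return td * math.log(co2(y1) / co2(y0)) / math.log(2)

# Back-solve td so that 1950-2000 comes out at the stated 0.6 C ...
td = 0.6 / (math.log(co2(2000) / co2(1950)) / math.log(2))
# ... then the same fit gives roughly 1.3 C for 2000-2050
print(warming(2000, 2050, td))
```

So the doubling of the increment from one half-century to the next falls out of the exponential CO2 growth alone, without assuming any acceleration in the response.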
If you look at the solar insolation variations and compare that to the ice core temps, the correlation does not look really great. I am wondering how “robust” the empirical determinations of sensitivity based on ice cores really are.
God, I hate this nesting. It’s impossible to read the new comments. Go linear!
This may have already been said, but it’s Pat Michaels, and so the spelling is all wrong in parts of your post.
MichaelS’S controversial testimony
Pat MichaelS’S testimony has been generating significant controversy, both in the hearing and in the blogosphere.
MichaelS’S Objective #2 relates to the attribution of climate change
or is it Michaels’
I can never remember, but I think not.
… note the above proposed adjustment is for global average SSTs
… Fig. 4 shows a large discontinuity in SST measurement source % (U.S. and U.K.) around 1945
… measurement source % is relatively stable from 1950-1965
… Michaels’ testimony shows a temperature adjustment of 0.2 C to global average land+SST temperatures in 1950, hinged around 1965
… anyone else wondering about the justification for this calculation/guesstimate?
… when it’s not even clear what the sign of the adjustment should be?… i.e. a slight increase in U.S. measurement % relative to U.K. is just about discernible, but both are declining relative to other sources
Thompson, D. W. J., J. J. Kennedy, J. M. Wallace, and P. D. Jones (2008), A large discontinuity in the mid-twentieth century in observed global-mean surface temperature, Nature, 453(7195), 646-649, doi:10.1038/nature06982.
The explanation of how the Thompson adjustment was applied is given at this MasterResource.org post.
Most people agree that the climate is changing. Since AGW is not supported, what is the cause?
This leaves only two possible causes: sun and atmosphere.
It is not clear that the climate is changing, or even what that means. It now appears that the important climate parameters all oscillate naturally. If oscillation is the natural mode then the fact that the parameters change does not mean that climate per se is changing. Think of a sine wave.
As for the causes of these changes, there are many other possibilities, including the oceans, albedo, the biosphere, cosmic rays, etc. Moreover, “cause” may not be the right concept. If the climate system is chaotic due to nonlinear feedbacks then it can oscillate under constant solar forcing. If so then constant solar input is the cause of all the changes. Paradoxical but true.
Just as I wouldn’t want to be a member of any club that would have me as a member, so I wouldn’t trust the results of any poll in which I have taken part.
It is interesting and rather telling that few of your vociferous regulars (with some notable exceptions) have been capable of addressing your exam question in technical terms. This says a lot about some of the hot air that is emitted from time to time. On the other hand, your blogsite is starting to attract expert attention, even from some of those who criticized you when you embarked on your endeavour. From my layman’s perspective, it is great to listen to the scientists argue their points and I hope this becomes a regular feature. It would certainly achieve your objective of trying to bridge the gap. …..I just wish I could understand all the technicalities. Keep at it!!
In fact, Judith, this gives me an idea. Could you persuade two scientists, one from each side (or even a panel), to set aside a few hours and argue their points in real time on the blog? They could introduce their standpoints with written submissions much like the committee meeting. The rest of us could observe the debate and perhaps comment in a separate thread set up solely for the audience. Now that really would empower the blogosphere! Just a thought :) (and no patent on the idea!)
Let’s see who shows up on the Part II thread :)
Judith, it is interesting to compare what you describe as Pat Michaels’ controversial testimony with your own series of posts “Overconfidence in IPCCs detection and attribution”.
Michaels says that the IPCC statement ‘Most of the observed increase…’ is not supported.
You say that the same IPCC statement is overconfident, which seems to be a very similar statement.
Do you agree, or do you think there is a fundamental difference between your statements and his?
The claim that the temperature rise from about 1910-1940 can all be ‘explained’ by ‘natural forcings’ while the very similar change from about 1970-2000 cannot is so ridiculous that it is hard to understand why anyone takes it seriously.
Joe, I think there may be other possible ’causes’ that we just don’t know about yet. Another possibility is that the climate has natural irregular chaotic fluctuations (like the flickering of a flame, or the weather, or the sunspot cycle, but on a longer timescale) in which case asking for the ’cause’ of every little wiggle is inappropriate. We don’t ask what ’caused’ it to rain today, or what ’caused’ a plume of smoke to wiggle to the left rather than right.
Michaels and I are tackling two different aspects of the argument. I am focusing on the “very likely” part; Michaels is focused on the “most” part.
We do see a brief ‘wiggle’ (perturbation) in the climate when a massive volcanic eruption occurs (see Lacis above). How is it that decades of GHG releases–which dwarf volcanic gas volumes–seem so ‘ridiculous’ as a forcing agent, given the evidence and on a planet that should be cooling?
Since the testimony argument will most likely move to the Part II page, I will present an argument regarding uncertainty in attribution.
GEOPHYSICAL RESEARCH LETTERS, VOL. ???, XXXX, DOI:10.1029/,
Has the climate recently shifted?
Kyle L. Swanson
Anastasios A. Tsonis
“If as suggested here, a dynamically driven climate shift has occurred, the duration of similar shifts during the 20th century suggests the new global mean temperature trend may persist for several decades. Of course, it is purely speculative to presume that the global mean temperature will remain near current levels for such an extended period of time. Moreover, we caution that the shifts described here are presumably superimposed upon a long term warming trend due to anthropogenic forcing. However, the nature
of these past shifts in climate state suggests the possibility of near constant temperature lasting a decade or more into the future must at least be entertained.”
they also state:
“Finally, it is vital to note that there is no comfort to be gained by having a climate with a significant degree of internal variability, even if it results in a near-term cessation of global warming.”
If the warming goes in cycles, as seems to be their argument, then the proper trend should be the trend through an entire cycle, both a warming and a cooling phase. That would basically cut the trend in half. I don’t really understand the logic of the last sentence I quoted, since most of the trends currently used would include only a warming phase, but I included it since it directly contradicts what my conclusion would be.
I just noticed the full version didn’t have the proper journal reference:
GEOPHYSICAL RESEARCH LETTERS, VOL. 36, L06711, 4 PP., 2009
Internal variability makes adopting sensible adaptation strategies more difficult.
E.g., if sea level is going to rise 1 meter, better for it to do so at 1 cm/year. In any given year a relatively small number of people will have to relocate.
With high internal variability one might have the situation that sea levels drop 1/2 meter, and throngs of people build on their newfound beachfront property, only to be washed away when sea levels rise 1 meter as the cycle switches phases.
Yes, but if the overall change is only half of what is currently expected there is considerably less change to adapt to.
Reply is here
I’m not ignoring this. But I am on a deadline for my new book. I’ll be on the thread some this weekend.
In my various comments above, I indicated that Michaels misinterpreted the contributions of GHGs to the post-1950 warming, and that they do in fact appear to contribute more than half. This is the case even if we accept his claim that Solomon et al demonstrated a warming contribution from changes in stratospheric water vapor, which if true, would reduce the GHG contribution somewhat. However, it turns out that Solomon’s work suggests a slight net cooling rather than warming from changes in stratospheric water vapor, which would make the GHG contribution larger. This can be seen in Fig. 3c in Stratospheric Water Vapor
For those behind a paywall, the article estimates that increases between 1980 and 2000 were slightly outweighed by reductions since 2000.
curryja | November 21, 2010 at 8:23 am |
The blip is a big deal IMO, I am waiting on some papers to get published (or at least in press) before I discuss.
Well, I hope there’s a connection with the U boats. I will then say ‘I told you so’ to Mrs Flood. How boring if it’s just noise.
Any new aerosol data from over and downwind of oilspills?
Gavin raised the objection “why have Michaels’ estimates of the ‘linear’ rate been changing over the years?” Well, this is a rather interesting thing to say. At least part of the reason is that the underlying data on which that claim is based has been changing. Earlier someone remarked that Santer thought Michaels’ argument, that the data are continually adjusted toward more warming, ignores that these changes were all within error bars. But Santer’s criticism of that argument is totally ludicrous. Think about it for two seconds: if the data were adjusted up, then down, then up, it would be one thing: you could say, “this is random, within error bars, no big deal”, except that the changes to the data aren’t random. They are systematic. That doesn’t mean they are wrong, but if they aren’t, then the older data was systematically biased. Can Michaels be faulted for predicting lower rates in the past, when data were systematically biased toward lower trends? Was not the data WRONG back then if it is right now? And where is the similar treatment of “errors” and uncertainty for those trend estimates when asking if they have “changed”? Michaels refines his numbers almost continually as the data update, and doing so doesn’t really make a huge difference most of the time. Most of his estimates were within the “error bars” of his previous estimate, as far as I can tell.
Lastly, if one wishes to criticize the point he is making in that respect, where on earth is the evidence that the rate is really accelerating recently? It doesn’t exist because there is indeed no tendency for the rate to be greater in recent years than before.
That’s indeed how the recent temperature looks: rising steadily rather than on a curve sloping upwards.
But those are two competing hypotheses. How should we choose between them? One way would be to choose the one we prefer and argue that it’s consistent with the data.
The problem is that this way works independently of which hypothesis you prefer. Unless you apply some statistical technique such as the central limit theorem to reduce the error bars, they are large enough to accommodate both.
I like to picture the situation as nature getting up in the morning and tending to a saucepan of water, into which she drops ice cubes and molten lead at random throughout the day. By noon the saucepan’s temperature has reached an equilibrium that shows little jumps down or up with each ice cube or lead drop but that otherwise is holding steady.
At 4 pm a woman shows up, let’s call her Julia Child, and lights a little fire under the saucepan. The temperature of the water continues to fluctuate as nature keeps adding ice and lead at random. But gradually we notice the water starting to rise above the equilibrium level it had reached when the woman arrived.
Around 5 pm a crowd gathers around to remark on the increasing heat of the saucepan.
Then at 5:30 a man shows up and asks why all the interest? When it’s explained to him, he watches it for a few minutes and says, “Seems to me it’s been going up and down while I’ve been watching it. The last three minutes it’s been going down so I’d say this show is about over now,” and wanders off.
A huge debate then ensues among the crowd, some taking the side of the man and others sticking to their original opinion.
That in essence is the heart of the great global warming debate.
“But those are two competing hypotheses. How should we choose between them? One way would be to choose the one we prefer and argue that it’s consistent with the data.”
I said there is no affirmative evidence FOR acceleration, not that there is negative evidence AGAINST acceleration.
However, if you do try and add a second order term, Michaels has said it would fail the partial f-test. If so, that actually is evidence that we should prefer the linear trend.
Two flaws in this line of reasoning are (a) CO2-governed temperature should be tracking neither a linear nor a quadratic curve but one of the form ln(b+exp(t)), and (b) Michaels is only using the last third of the available HADCRUT data which is not enough to draw the distinction in question: all 160 years gives a more significant indication.
The b+exp(t) model of the Keeling curve is due to David Hofmann at NOAA ESRL Boulder (journal version: Hofmann, Butler, and Tans, “A new look at atmospheric carbon dioxide”, Atmospheric Environment 43:12, 2084-2086 (2009)). Hofmann takes b = 280 ppmv and t to be time in units of what I’ll call Hofmanns since 1790, where I define 1 Hofmann as a unit of time equal to 32.5/ln(2) years, or 46.9 years. 32.5 is Hofmann’s estimate of the time required to double the anthropogenic component of CO2.
(Hofmann actually wrote it as 280+36.2*exp[0.693(T-1958)/32.5] where T is the year AD but it’s the same function.)
Unlike polynomial fits to the Keeling curve, which have no theoretical rationale and which extrapolate horribly on non-polynomial data, Hofmann’s formula is soundly grounded on the principle that CO2 today has a natural base b to which an exponentially growing population has added an even more exponentially growing amount of CO2 compounded by improvements in technology, faster transport, and greater luxury (air conditioning etc.). Furthermore it extrapolates beautifully back centuries if one postulates 280 ppmv as the prevailing CO2 level centuries ago; all reasonable polynomial fits to the Keeling curve give horrible predictions for 1600 AD for example. (Though I found that if you postulate 260 instead of 280 and take t to be in units of 60 years since 1718.5 the fit to the Keeling data itself is best possible for any base-plus-exponential model, with an average error around 1 ppmv.) Whether it extrapolates forwards well remains to be seen, but so far it’s done a great job with the half-century of Mauna Loa data constituting the Keeling curve.
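In code, the formula quoted two comments up looks like this (a direct transcription of that expression, not an official NOAA implementation):

```python
import math

def hofmann_co2(year):
    """CO2 in ppmv: 280 ppmv natural base plus an anthropogenic
    component of 36.2 ppmv in 1958, doubling every 32.5 years."""
    return 280.0 + 36.2 * math.exp(0.693 * (year - 1958) / 32.5)

print(round(hofmann_co2(1958), 1))  # 316.2, i.e. 280 + 36.2 by construction
print(hofmann_co2(2010))            # close to the ~390 ppmv observed at Mauna Loa
```

The exponential term vanishes going back in time, which is why the formula extrapolates toward the 280 ppmv pre-industrial base rather than blowing up like a polynomial fit.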
The ln in ln(b+exp(t)) is due to Arrhenius, namely his model of temperature as depending logarithmically on CO2. Let Te be the number of degrees by which temperature increases when the CO2 is raised by a factor of e = 2.718…. (So Td = Te*ln(2) where Td is the temperature rise per doubling of CO2.) Then according to Arrhenius the future increase in temperature of Earth’s surface at a CO2 level of c ppmv is Te*ln(c/c0) where c0 is the current value of c. This is only valid for c within a small factor of c0; with no CO2 the formula gives −∞ which is impossible since 0 K, absolute zero, is only −273.15 °C.
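A short transcription of the Arrhenius relation as stated above (Td = Te·ln 2, so one doubling yields Td degrees); the 1.81 figure below is just the commenter’s estimate, used here for illustration:

```python
import math

def arrhenius_warming(c, c0, td=1.81):
    """Future warming in deg C when CO2 goes from c0 to c ppmv,
    assuming td deg C per doubling (valid only for c near c0)."""
    return td * math.log(c / c0) / math.log(2)

print(arrhenius_warming(560, 280))  # one doubling gives td = 1.81
print(arrhenius_warming(280, 280))  # no change gives 0.0
```

The log form also makes the caveat explicit: the model is only meaningful for c within a modest factor of c0.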
Using these formulas I eyeballed the possible choices for Te which made this theoretical impact of CO2 a good match to the actual temperature between 1850 and now and concluded a couple of days ago that it was considerably more than 1.5 and considerably less than 2.5, as I posted earlier to this discussion. I’ve since found a statistical method of estimating it both more precisely and more meaningfully, which yielded 1.81 °C per doubling. Even though the temperature is rising much faster now than before 1970, the fit using 1.81 is extremely good, with only three one-decade periods out of the past 16 decades in which the fit was worse than 0.1 °C. While it would be nice to explain these, theoretically they should have nothing to do with industrial warming since the technique is supposed to remove all of that component of the temperature variation.
I’ll write this up over the next couple of days and post it to the appropriate arXiv. This should answer a lot of the questions I’ve been getting lately with a single result.
Oops, the numbers 1.5, 2.5, and 1.81 in my post above were for Td (degrees per doubling), not Te. The reason 1.81 has an extra digit is not because it’s meaningful but because it’s good practice to retain a “guard digit” of precision in numbers obtained using a simple closed-form formula, as was done in this case, namely Te = (T⊗AH)/(AH⊗AH), where T is the observed temperature curve at WoodForTrees.org, AH is the Arrhenius-Hofmann temperature model, and ⊗ is correlation (anticonvolution). Details in the writeup.
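As a sanity check on that closed form: the zero-lag case of Te = (T⊗AH)/(AH⊗AH) is just the least-squares scale factor between two series. A toy sketch with synthetic data (the real T and AH series are not reproduced here, so the curves below are stand-ins):

```python
import numpy as np

def scale_estimate(obs, model):
    """Zero-lag (T (x) AH)/(AH (x) AH): the k minimizing ||obs - k*model||^2."""
    return np.dot(obs, model) / np.dot(model, model)

rng = np.random.default_rng(0)
model = np.linspace(0.0, 1.0, 160)                    # stand-in for the AH curve
obs = 1.81 * model + 0.05 * rng.standard_normal(160)  # synthetic noisy "observations"
print(scale_estimate(obs, model))  # recovers a value near 1.81
```

With modest noise the estimator lands very close to the true scale, which is why a single dot-product ratio can yield a usable per-doubling figure.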
A reminder to Pat Michaels:
Here is an unanswered post:
Here is another one:
Crickets. Still no news from Pat Michaels. Where is bender when you need him.