by Judith Curry
Here is further explanation of why I think Michael’s testimony is significant, and why I think the issue of attribution of warming since 1950 will be the battleground in U.S. CO2 policy. Michaels’ stated purpose for conducting this analysis was:
demonstration that the Finding of Endangerment from greenhouse gases by the Environmental Protection Agency is based upon a very dubious and critical assumption.
Michaels is seeking to establish reasonable doubt about the EPA’s CO2 endangerment finding, which is based on the following statement (very similar to the IPCC’s statement):
Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic GHG [greenhouse gas] concentrations.
There are two main elements to Michaels’ argument regarding the trend in global surface temperature anomalies and its attribution: the global surface temperature anomaly data record 1950-2009, and forcing. The analysis focuses on new research since the IPCC AR4:
1a. Adjustments to the global surface temperature anomalies prior to 1965 (following Thompson et al. 2008) reduce the total temperature increase by 0.15C and the trend by about 20%. Thompson et al. (2008) state:
The adjustments immediately after 1945 are expected to be as large as those made to the pre-war data (~0.3C; Fig. 4), and smaller adjustments are likely to be required in SSTs through at least the mid-1960s.
While it is my understanding that this temperature correction has not yet been applied to the CRU data set, Michaels’ application of this seems consistent with what Thompson et al. recommend. Thompson et al. state that:
Corrections for the discontinuity are expected to alter the character of mid-twentieth century temperature variability but not estimates of the century-long trend in global-mean temperatures.
The net effect of this correction is to make the mid-century trend more continuous and to reduce the appearance of the 1940s “bump” that was followed by cooling. It also increases the average global temperature ca. 1950 (the starting point of the mid-century attribution statements).
1b. The second point considers “non climatic” trends over land associated with data quality and land use changes (McKitrick and Michaels, 2007), which they argue account for 0.08C of the global warming trend from 1980-2002 (which is completely independent of the ocean adjustment.)
If these two temperature corrections are correct, then the decadal rate of change in the period 1950-2010 is now probably slightly less than the decadal rate of change in the period 1910-1940 (which is unchanged in the Thompson et al. analysis.) Further, it is in principle easier to explain a smaller rate of temperature increase due to natural variability.
JC comments: I have no idea whether these adjustments to the temperature record are correct, but they certainly reflect the overall uncertainty in the data. This analysis indicates a 33% discrepancy in the size of the trend, which reflects uncertainty in the data itself. The actual uncertainty, if a comprehensive error analysis were done, is possibly larger than this. Note, errors in surface temperature data will be the subject of a future series.
2. The second part of the argument is forcing, and Michaels only includes two issues: stratospheric water vapor forcing, and forcing from black carbon (two factors that are almost certainly independent of each other.)
Here is where Michaels’ argument becomes confusing. Michaels attempts to explain the 0.7C trend by saying the observations are wrong and the trend is less, and then finds a residual trend (reduced further by stratospheric water vapor and black carbon) to claim that he has explained more than half of the 0.7C trend without CO2.
Here is how I think he should proceed with his argument: If the observed temperature increase between 1950 and 2009 is 0.468C (trend 0.078C/decade), then the challenge is to explain more than half of that trend with natural forcings. According to Michaels’ analysis, black carbon and stratospheric H2O account for 34% of the 0.468C rise. So technically, Michaels’ argument has not refuted the foundation of EPA’s endangerment finding (more than half of the observed warming, the magnitude of which has now been reduced). Adding uncertainty associated with solar variability may possibly make his argument work, but the argument as it stands doesn’t hold up in my opinion (and not for the reason that Santer gave, in terms of including sulfate). On the other hand, the original concern was raised over the magnitude of the original warming (0.702C), so Michaels’ broader argument does raise reasonable doubt (and would we be so worried if the observed trend was 0.47C?)
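For readers who want to check the arithmetic in the two paragraphs above, here is a minimal sketch using only the numbers quoted in this post (the figures are the Thompson and McKitrick-Michaels values as cited, not an independent derivation; the small rounding mismatch between the result and the 0.468C quoted above is in the sources):

```python
# Sketch of the attribution arithmetic above (numbers as cited in the post).
observed = 0.702        # C, observed 1950-2009 rise used in the testimony
thompson = 0.15         # C, SST bucket correction (Thompson et al. 2008)
mm_landuse = 0.08       # C, non-climatic land trends (McKitrick & Michaels 2007)

adjusted = observed - thompson - mm_landuse
print(f"Adjusted rise: {adjusted:.3f} C")              # ~0.47 C
print(f"Adjusted trend: {adjusted / 6:.3f} C/decade")  # 6 decades, ~0.078

bc_h2o = 0.34           # fraction of the adjusted rise from BC + strat H2O
print(f"BC + strat H2O: {bc_h2o:.0%} of the adjusted rise, "
      f"{'less' if bc_h2o < 0.5 else 'more'} than half")
```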
IMO, the more significant thing that Michaels did was in adjusting the surface temperature time series, which may result in the rate of warming in the latter half of the 20th century being smaller than that between 1910 and 1940 (somebody needs to do the calculations; I don’t have time right now).
In any event, I think this overall line of argument presented by Michaels is a very significant one in terms of the EPA CO2 endangerment issue. However, the logic of the argument needs refining and extending before lawyers can use it as “reasonable doubt” in challenging the EPA endangerment ruling. And those defending the science behind the EPA endangerment ruling (which is basically the IPCC) need to shore up their arguments. I think that this is the coming battleground issue in U.S. policy on this topic.
“…why I think Michael’s testimony”…
If his (sur)name is “Michaels” (with an ‘s’), then the possessive would be “Michaels’s” surely…? Like “Bridget Jones’s diary”.
Or Phil Jones’s diary ….
I must admit I finished reading with a chuckle. :) I’ll have to read it again, to take in the post while ignoring the grammar. This particular use of the apostrophe presents problems for a lot of people and Judith is far from being alone.
For possessive, you can have Michaels’s but it’s more “normal” to just use Michaels’, or Jones’ diary. Or.. “in Jesus’ name,” even. But I’m not going to criticise Judith because, even though I’m not bad with an apostrophe, even I’m not sure what you’d do with possessives of more than one Michaels. “The Michaelss’ home”? :)
I never learned the “Michaels’s” way in my school. Then again, English was my second language so I must have missed quite a lot of things…
Luis, probably a lot fewer things than many people for whom English is their only language.
Actually, everybody is right :-) I heard Dr. Richard Lederer speak about this just this morning and, according to him, the rule has always been that if you say the possessive ‘s’ then you add it, and if you don’t say it you don’t add it. In the case of Michaels, this would be “Dr. Michaels’ theory” because you don’t say the ‘s’. In the case of Jones, it would be “the Jones’s car,” because you do. I think he also made a point about the number of syllables mattering, but my memory is less clear on that point. I appreciated hearing this because this had bugged me for a long time.
Strunk & White would disagree:
But anyway, I’ve never seen anyone assert a rule which says you splice an apostrophe before the end of someone’s name. So Michaels’ if you feel you must, I suppose, but why Michael’s?
As someone whose family name ends with “s”, let me explain my preferences, using Mr. Michaels’s surname for examples. (I’m treating the Michaels name as though it were my own in the following.)
One consideration I don’t see that anyone’s mentioned is whether in addition to the family name ending in “s”, there is also a family name, about as common, that is the same except for having no “s”.
Suppose there are as many, or more, people with the family name Michael as there are with Michaels. As a Michaels, I would, irritably, have already seen my last name printed many times without the “s”. I hope you’ll all agree that I have a legitimate interest in seeing a proper distinction made between the two.
The plural of Michael is Michaels, an unavoidable conflict with the singular of Michaels. But all other conflicts can be avoided. The plural of Michaels is Michaelses (not “Michaelss”).
The singular possessive of Michael is Michael’s. The plural possessive of Michael is Michaels’.
The singular possessive of Michaels is Michaels’s, with the second “s” pronounced, please. (Writing the singular possessive of Michaels as “Michaels'” or pronouncing it without the second “s” invites confusion with the plural possessive of Michael — I wouldn’t like that. I think most Michaelses would agree that adding another “s” — pronounced, please — after the apostrophe in the singular possessive is natural and proper, and ask that you please do so.) The plural possessive of Michaels is Michaelses’.
That’s the view from an “s”-surnamed guy. :-)
Interesting, the real issue that I garner from this is the “smoothing” of the temperature variations.
The EPA used the increase in temperature over the time period involved to make a declaration. The problem that I personally have is with the actual “adjustments”. Given that the original temperatures were actual measurements, what basis is used to change those readings? Are they adjusting actual data readings to comply with model forecasts? If that is the basis for the EPA pronouncement, the EPA really does need to justify its actions as based on science, not faith.
Actually, quite a few adjustments are made to both the land and ocean temperatures. Here are two references on the subject:
Brohan et al.
Rayner et al.
Why no answer to his question:
“Are they adjusting actual data readings to comply with model forecasts?” – Glen Shevlin
Can you give a yes or no answer to that question?
yes, the answer is no.
Thank you.
Now it’s Judith’s turn.
You have 2 parts to your question.
The first is no; the adjustments are made for real-world reasons and have nothing to do with models. The most ideal temp measurement site is one that is in a good location, has not moved, and has existed for a long time.
If all temp stations were like that then we would not need adjustments. However, sites move, the time of day the temp measurements are taken can change, land use around the station changes, etc., each of which affects what the station is measuring.
For instance, suppose a station was moved from a cooler high elevation to a warmer lower elevation, and the need is for a longer record. With two measurements in the same general area, you can adjust the older, cooler, higher-elevation temps upward so the adjusted temps look like what the temps would have been if the new station had always existed; or you can adjust the new station temps downward so that they match what would have been seen had the old station not been moved; or you can adjust them both and come up with a temp record that would have existed if there were a station at the elevation halfway between the two. So adjustments are valid.
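A toy sketch of the kind of adjustment described above, with made-up numbers: where the old and new stations overlap, estimate the systematic offset, then shift the older record so the spliced series behaves as if the new station had always existed.

```python
# Toy station-move adjustment (made-up numbers, C).
old_overlap = [10.1, 10.3, 9.9, 10.2]   # old (higher, cooler) site, overlap months
new_overlap = [12.0, 12.4, 11.8, 12.2]  # new (lower, warmer) site, same months

mean = lambda xs: sum(xs) / len(xs)
offset = mean(new_overlap) - mean(old_overlap)   # systematic difference, ~2 C

# Shift the pre-move record up so it is homogeneous with the new location.
old_record = [9.8, 10.0, 10.5, 10.1]
adjusted = [round(t + offset, 2) for t in old_record]
print(f"offset = {offset:.2f} C, adjusted old record = {adjusted}")
```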
The answer to the second part of your question is maybe. It is not a choice a scientist consciously makes, but it perhaps could bias them unconsciously. Meaning: if a model says the temps should be rising, and a scientist doing temp adjustments comes up with a rising temp trend, they may think that they got it right and stop there (and not look for errors or other valid adjustments that could lower the trend). However, upon seeing a decreasing temp trend, it might cause the scientist to look hard at where an error in the adjustment could have caused the trend to decrease, and come up with a valid adjustment that changes the trend to increasing.
curryja said “Actually, quite a few adjustments are made to both the land and ocean temperatures. Here are two references on the subject:
Brohan et al. Rayner et al.”
Brohan et al. is a study of uncertainties in the data. No adjustments are actually made in the paper, although some of the data sources appear to be adjusted. The Rayner et al. reference cites papers in preparation, and a quick search suggests they still are, so the adjustments probably haven’t yet been applied to any data in the public domain. Do you have anything a bit more specific?
PS the Rayner reference is to a draft. The final paper is here
Glen,
I agree, the adjustments are the real problem!
I think that evidence in science should be considered if actually measured in the real world. In those rare cases where an adjustment might be needed, a neutral arbiter should be appointed, some entity whose credentials are impeccable and which will be accepted by all sides of an issue.
The way things are now, the datasets (including CO2) are controlled (almost) solely by people like Jim Hansen and Phil Jones who have an agenda, and who want to “hide the decline”.
There is, justified or not, a collapse of trust in regard to bias in the station selections and adjustments to the raw readings. Until such time as these are comprehensively available and declared sound by a critical audit team including observers acceptable to all, trust in the temperature series can not be re-established.
“Until such time as these are comprehensively available”
Which ones aren’t?
“declared sound by a critical audit team”
If scientists the world over are potentially falsifying data or making unjustified assumptions about it, who do you trust to audit it?
Just me being selfish, but I would just love to have us trustworthy gravity skeptics represented on the audit.
Sharper00,
I trust Anthony Watts: http://www.surfacestations.org/
If you don’t, let’s replace the corrupt current gate keepers with a team of Scientists who represent both camps, and let them agree on the adjustments.
“I trust Anthony Watts”
Mr Watts has already published his conclusions via the Heartland Institute but he can’t seem to find the time to publish an actual analysis that supports it.
“If you don’t, let’s replace the corrupt current gate keepers with a team of Scientists who represent both camps, and let them agree on the adjustments.”
I don’t agree either that there exists a coherent set of “gate keepers” or that there is corruption in the temperature record. You’re starting with the conclusion and demanding that people agreeable with that conclusion be involved in checking whether it’s justified.
I suspected you would not trust anybody who is not a proponent of AGW, so I said “if you don’t trust…”
Here is one link which I just googled; there are hundreds of studies showing those facts:
http://www.americanthinker.com/2010/02/a_pending_american_temperature.html from February 24
I’m sure you will find something nice to say about it.
“I suspected you would not trust anybody”
Actually I’m not particularly bothered by the trust issue at all. If people have a problem with the temperature record then great, by all means document it.
So far pretty much all the criticisms I’ve read are in the form of “This location was adjusted upwards! Fraud!!”. It would be nice if people could at least explain why they think the adjustment was applied and why it leads to the wrong result. Instead people look at the result, decide they don’t like it and then presume or infer it must therefore be fraudulent.
That’s before you even get into the problem of how a temperature record controlled by “corrupt gatekeepers” ended up showing warming trends nearly identical to those of the satellite record. They must be incredibly lucky for that to happen.
In addition I’m not convinced all the data and research necessary for a full critique isn’t already available. I hear vague claims about releasing data but people are generally not clear on what they think is missing.
Actually, the location was adjusted downward, i.e. toward the equator & to lower altitudes. http://chiefio.wordpress.com/2009/08/17/thermometer-years-by-latitude-warm-globe/
Obvious side-effect of dropping thermometers from your sample that are in northern Canada & on mountains in California?
Dropping thermometers in cold places does not in itself make the record warmer or colder, as it is anomalies which are being measured, not absolute temperatures. However, as warming is slightly more pronounced at high latitudes, dropping thermometers in northern Canada would tend to make the temperature record cooler.
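A toy illustration of the anomaly point (made-up numbers): a station tens of degrees colder in absolute terms contributes exactly the same anomaly series, so dropping it changes the mean absolute temperature but not, by itself, the anomaly trend.

```python
# Two made-up stations with identical year-to-year changes.
arctic = [-30.0, -29.8, -29.6]
tropic = [ 25.0,  25.2,  25.4]

def anomalies(temps, baseline=None):
    base = temps[0] if baseline is None else baseline
    return [round(t - base, 2) for t in temps]

print(anomalies(arctic))  # [0.0, 0.2, 0.4]
print(anomalies(tropic))  # [0.0, 0.2, 0.4] -- identical anomaly series
# Averaging anomalies, with or without the arctic station, gives the same
# trend; only if warming differs by latitude does station dropout matter.
```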
@sharper00
‘It would be nice if people could at least explain why they think the adjustment was applied and why it leads to the wrong result’
Fail!
It would be nice if the people who applied the adjustment were to explain why they felt the adjustment was needed and to justify why the ‘corrected’ result is the right one.
But they can’t. Because they don’t document their changes. There is no justification, nor audit trail. And oft times they throw away the raw data and just keep the ‘adjusted’ stuff (Phil Jones). So even if we all wanted to go back to the raw stuff we can’t. We no longer know what it is.
Even in commercial IT, behaviour like this (undocumented adjustments to raw data and then ‘losing’ the base) is approaching criminal negligence. Anyone caught doing so would never be employable again. In seriously grown-up IT where safety cases and the like are paramount (nuclear reactors, pharma etc), the perpetrators would go to jail. In financial IT it is called fraud, and they would get to meet Messrs Enron and Madoff for breakfast, lunch and dinner every day for a long time.
And because this is supposed to be the ‘most important data in the history of the world to solve the most important challenge ever’ or other such guff, maltreating it like this is entirely unforgivable.
Yes, and thanks to those who took the online data and published it before Anthony, any work on a paper that was done must now be redone to rebut the study that was published.
It makes the paper go from why he thinks he is correct to why he thinks the other paper was wrong and why his method is better.
I have stated I think quite reasonably that there is a problem, and a potential solution to it, but your reaction is combative. Critical audit should be not only welcomed, but sought. This is, in microcosm, precisely what Judith Curry detects among her community at large, and why she is receiving such support for her campaign.
“I have stated I think quite reasonably that there is a problem, and a potential solution to it, but your reaction is combative.”
“Corrupt gate keepers” is not a reasonable problem definition.
” Critical audit should be not only welcomed, but sought. “
There are many who claim “critical audit” but who actually mean “Audit until the right answer is reached”.
“and why she is receiving such support for her campaign.”
I’m not detecting much support for her. Certainly there are supportive comments on this blog but there are supportive comments on many blogs.
This is my perception of the problem (and please let me know where I am incorrect if I am):
1. The stations were set up a long time ago and at different times relative to each other.
2. The stations have not been uniformly maintained, especially regarding the placement of instruments as urban elements have encroached on them.
3. People are often lazy, while sometimes funding has not been available. Especially in colder regions I can certainly see why no-one would want to walk any farther outside than they must to collect data.
4. Individual station data is suspect, but perhaps not collectively, depending on the assumptions made (and this I think is the true source of the problem). ‘Harmonization’ results (or discord, depending on your view). I see this as poisoning by the average, but that is my view.
As an interesting note, I had my middle-school class take data on UHI last year to see if it existed and to what extent. We are fortunate in being in a very rural location where we can see some areas of interest isolated from each other. We took data in the middle of the soccer field, next to temporaries, in the middle of small areas of blacktop and in shaded areas. Personally I did not expect to see anything dramatic, but the blacktop was 8 F hotter than the field. We used the same portable weather instrument at all locations (can’t afford any more lol) so the temperature might have been off, but not relative to itself. I was very surprised by this and no longer have any doubts when someone grouses about Stevenson screens being located close to pavement or other urban effects.
In the end, I think the problem is that the surface station data is corrupt to the point, in my opinion, where we should have a reset and get proper data prior to making any decisions based upon it.
What people are looking for is changes, so the fact that one site is hotter than another is only relevant if the measuring instrument has been moved. One of the most important parts of the record keeping is keeping track of and correcting for such changes (this is often referred to as metadata). For example, a major change has been the time of day at which measurements were/are made. There is a large literature on such effects.
So basically concerning his claim about temperature trends you say
” I have no idea whether these adjustments to the temperature record are correct,”
And on his second point
“So technically, Michaels’ argument has not refuted the foundation of EPA’s endangerment finding”
Other than “Interesting if true” where you apparently have no idea if it’s true, why is his testimony “significant”?
“And those defending the science behind the EPA endangerment ruling (which is basically the IPCC) need to shore up their arguments.”
Shore up their arguments against what? A temperature correction of unknown quality and an attribution of warming you’re not convinced by?
Scientists will be ‘satisfied’ with the internal verification and resolution of data problems and let history judge. They really don’t have the time to check someone else’s work. The ‘science’ peer review process will do much the same: if it looks good and fits their own agenda, it gets a pass and eventually shows up in the professional arena with a public audience. Now we move into the political arena. This is where things get tricky. This is where you better check everything a few hundred times and watch your Ps and Qs. Ain’t life exciting?
Unanswered questions from last thread:
The 2008 Thompson paper also suggests a possible increase in anomaly of 0.1C based on the 2005 review here. Is this figured into the percentage, and why aren’t there large error ranges based on the fact that no adjustments have taken place yet?
The Solomon paper has no data until 1980. For the twenty years following that date, Solomon describes the data as “limited”. And the complete data in the 2000s counters the previous warming. Is this paper very useful in showing that 15% of warming is attributable to SWV?
Black carbon, cited from the World Climate Report website, says that:
This is not in the testimony given by Patrick. Does this 25% of warming also apply to the post-1950 period we are discussing, or is it more, less, the same?
Judith:
But isn’t this the most important part of the argument? The IPCC does not use a similar method of calculating warming. They use all the forcings of possible warming AND all the possible forcings of cooling (with error bars) in separate categories. The statement that is being challenged here is about the warming, but Patrick uses an anomaly of 0.7 warming, which obviously includes cooling, and begins subtracting from that.
These are interesting arguments but probably far too fine to affect the policy battle. There are two battlegrounds, the Court and the Congress. The primary issue before the Court is probably procedural, namely can EPA make such a ruling without doing its own assessment? It never has until now, so this is a strong argument. The Court does not want to rule on the science and its favorite legal maxim is ‘Never argue substance when you can argue procedure.’
Congress is going to decide this on brute ideological force, not the fine points of AGW. (Plus we have a veto threat to play with.) All that matters is that Pat says there is doubt. The rest is AGW angels dancing on the head of a pin. Everyone needs to realize that policy is a very big world while science is a very small world. Enjoy the show.
There are some legal efforts to overturn the EPA endangerment finding on these grounds; they haven’t been successful so far.
It is a well understood game, one I have played in many times. The Court will not overturn the finding, it will remand the issue, requiring EPA to do its own analysis. Then after two years or so we will litigate the draft analysis, along the lines of NEPA, not the finding, assuming Congress has not acted to stop EPA by then. Threatening Congress with EPA action was a bad policy move by the environmentalists. Then too EPA is backing away from real action as fast as it can.
With Kyoto dead and US cap and trade likewise EPA is the last national AGW policy issue. If EPA goes down or backs down then policy AGW is dead for the foreseeable future. Then maybe we can try some science in peace. Mind you policy AGW is not going away, but it will be a second tier political issue for a good while.
Yes most excellently fine and linear.
… As if it were splitting hairs and chasing after red herrings from all quarters upon every shoal.
The epidemic of consensuvitus is astonishing
Judith,
Thanks for the new discussion and analysis.
You are correct that there are two separate portions of the argument. I think you have to realize that we are drawing a distinction between true warming and “observed” warming. The IPCC/EPA seems to use the latter. So, both portions of our reasoning apply to “observed” warming—that is measurement errors, and forcing changes.
You are also correct, that in our calculations, after we apply the Thompson et al. and MM corrections, then GHG still explain “most” of what is left over.
But, as you point out, starting with a rise of 0.47C is significantly different than starting from a rise of 0.7C—scientifically and politically.
Basically what we are saying is that if we apply what we know now to what was known when the IPCC/EPA made their statement, the veracity of their statement is severely tested. Not only has the uncertainty increased as to the magnitude of the “observed” warming, but also the uncertainty has increased as to the relative importance of the factors forcing the temperature increase.
-Chip
“Basically what we are saying is that if we apply what we know now to what was known when the IPCC/EPA made their statement, the veracity of their statement is severely tested.”
Thanks Chip, and thanks for your comments in the previous thread on my questions – which were about the same issue – the “two separate portions of the argument”.
The key here is that the EPA made its ‘endangerment’ finding based on the statement:
Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic GHG [greenhouse gas] concentrations.
There are two separate issues here:
1) What is the veracity of this IPCC-derived statement today?
2) How much support does this statement lend to declaring CO2 as a ‘dangerous’ gas?
The extent of support this statement provides in considering CO2 as dangerous rests on two pillars.
1) ‘the strength of attributability’ – [highly likely]
2) the amount of the attributed warming – [most]
As Chip points out, both of these pillars are significantly affected by new findings, available post 2007, i.e., after the ‘endangerment’ finding.
Thus, if a formal re-assessment of the IPCC attribution is made, it is highly unlikely that both of the pillars would remain unaffected. One of them has to go down.
The IPCC statement of attribution is very important to the EPA for one reason. Environmental toxicology attribution and endangerment rests on finding that a given ‘toxin’ does large amounts of bad things, and, of all the bad things happening to the target, most are of the kind the toxin in question causes.
For the EPA, the IPCC’s attribution statement fits the regulatory framework of thinking it follows for the countless toxins it seeks to regulate. The perspectival distortion we observe arises from bending the climate system into an environmental regulatory framework.
“The statement that is being challenged here is about the warming, but Patrick uses an anomaly of 0.7 warming, which obviously includes cooling, and begins subtracting from that.”
No, the statement is about the “the observed increase in global average temperatures ” ie 0.702°C.
Michaels’s argument is that if you add adjustments (Thompson) + non-climatic + black carbon + strat H2O, you get 0.396°C (>50% of 0.702°C), which means that what’s left (which includes GHG, but not only GHG) represents less than 50% of the observed trend.
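A quick check of that arithmetic (the 0.396°C split among the four factors is taken from the testimony as presented, not re-derived):

```python
observed = 0.702   # C, observed 1950-2009 rise
non_ghg = 0.396    # C, Thompson + non-climatic + black carbon + strat H2O
print(f"non-GHG share: {non_ghg / observed:.0%}")              # ~56% (> 50%)
print(f"what's left (incl. GHG): {observed - non_ghg:.3f} C")  # ~0.306 C
```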
This does appear to be correct, although the relevant section in the IPCC (9.4.1.4) does not present it in that fashion.
The EPA statement is still valid, but that takes us back to the other questions, and as far as I can see, his own paper on land use, even with its uncertainty and objections, is the best bet for making his argument, unless the other factors are shown to be more useful. Or even other factors such as ocean oscillations are shown to be creating a larger signal than currently known.
Judith – You too are making the ‘positive forcing fallacy’, as I referred to it on the earlier thread. As a simple matter of logic, 34% from BC and strat H2O does not impose an upper bound of 66% due to anthropogenic GHG.
EPA’s statement is about “observed increase in global average temperatures since the mid-20th century”, not about the warming (unlike IPCC’s).
“We suggest equal emphasis on an alternative, more optimistic, scenario that emphasizes reduction of non-CO2 GHGs and black carbon during the next 50 years. This scenario derives from our interpretation that observed global warming has been caused mainly by non-CO2 GHGs. Although this interpretation does not alter the desirability of slowing CO2 emissions, it does suggest that it is more practical to slow global warming than is sometimes assumed”
Hansen et al: Global Warming in the 21st Century: An Alternative Scenario
JF
John, can you send the link to your previous message? I totally agree that this does not imply 66% greenhouse forcing; that is not what I implied. However, if they can attribute more than 50% to other factors, then the “most” statement by the EPA and IPCC becomes problematic.
http://judithcurry.com/2010/11/18/michaels-controversial-testimony/#comment-12811
This is certainly true. But if you have a situation where greenhouse forcing is 75%, other warming factors are 75%, and sulfate cooling is 50%, how do you interpret this? The most dangerous thing would arguably be to get rid of the canceling sulfate. If other factors, esp. non-anthropogenic factors, can explain more than half of the warming, with some internal cancelation by the anthro factors, then the endangerment issue would need to be defined differently, IMO.
“If other factors, esp. non-anthropogenic factors, can explain more than half of the warming”
I find your focus on this odd. You appear interested in this idea solely and not whether it has or has not been demonstrated to be the case. I take this to mean you’re not especially concerned with whether Michaels is actually correct but the implications if he is correct.
The most dangerous thing for the climate in the short term IS to get rid of the canceling sulfate!
Your hypothetical is not a hypothetical. The IPCC’s figure SPM-2 shows their estimates of all the anthropogenic radiative forcings. Here’s what they have, leaving off the error bars:
CO2: +1.66 W/m2
All other positive forcings: +1.2 W/m2
All negative forcings: -1.3 W/m2
Net anthropogenic: +1.6 W/m2.
Take away CO2, and the net positive forcing goes away. That’s how I interpret it.
As for the endangerment issue, the EPA is not the proper place for GHG controls.
I totally agree that EPA is not the proper place for GHG controls
As I stated before I am not a climate scientist, but when I read cryptic comments like “I totally agree that EPA is not the proper place for GHG controls”, I can only come up with this response:
You’re advocating policy, either directly or indirectly. What agency should be implementing GHG controls? Or, do you believe there’s no need for GHG controls of any kind by any agency?
These little one-sentence humdingers just don’t do it for me. Could you clarify just what it is you mean?
AEG- I can’t speak for Judith, but I advocate a policy procedure that involves a comprehensive political decision rather than one that consists mostly of applying a law that was never designed for this purpose.
No, this is not science. Yes, that means that my opinion in this matter should count for very little.
I agree with John on this. Our opinions, take it for what you think our opinions on this are worth.
Also, there is policy, and then there is politics. Objectively, I think it is bad policy to try to regulate GHG this way. On the other hand, if you are strongly committed to reducing GHG and you see that this is the only shot you have politically, then it could be good politics to support this policy (but watch out for the unintended consequences).
John N-G is the Texas State Climatologist. His professional opinion about policy carries weight because of his position and expertise.
I’m the Texas State Climatologist, not the Texas State Climate Policy Maker!
True, but your opinion should carry weight and you should be one of the people that folk go to to learn about the climatology of the state and region.
Further, I suspect that much of the temperature increase is unforced natural internal variability (no idea how much). The real fallacy is in thinking that this is all due to forcing.
+1
I suspect that much of the temperature increase is unforced natural internal variability (no idea how much).
What exactly leads you to believe this?
“no idea how much” = “much of the temperature increase” should perhaps read “no idea how much much”. Then it becomes crystal clear.
The Pacific Decadal Oscillation, the North Atlantic Oscillation, the Atlantic Multidecadal Oscillation, the North Pacific Gyre Oscillation all have known signals that show up in global surface temperature anomalies. The recent switch of the PDO to the cool phase is being invoked to explain the lack of warming in the past decade. Etc. This will be the topic of a future series.
More broadly, there is now evidence for a lot of unexplained natural variability. It is the great discovery to come out of 20+ years of intensive research.
Hooray! Maybe you can examine the cosmic ray effect shown at the presentation on the CLOUD website. Just eyeballing the graphs, it certainly appears temp correlates with cosmic rays quite well.
I am a skeptic concerning the 3 – (6-19) C/century rise, but now I have doubts about that. These doubts have crept in due to the fact that we have had CERES measurements for many years. Why haven’t we been able to determine the climate sensitivity and also nail down the effect of clouds by now? If the CERES data showed a low sensitivity, skeptics would be all over it. AFAIK, only Lindzen has made an argument for low climate sensitivity from radiation measurements. Where are the other skeptical climate scientists on this? The silence is disturbing. Is it not even possible to determine CS from the CERES measurements?
The issue I have with this idea is one of causality. When we try to invoke certain teleconnections/indices to explain a set of observations, be it regional or global temperature changes or any other phenomenon for that matter, we are not necessarily describing the physical processes at play.
Example: Cold air outbreaks… The frequency of these events increases when the NAO is in its negative phase. People will interpret this result and say the NAO caused the cold air outbreaks. However, the NAO is not a physical process. The better question to ask is what processes occurred that allowed the NAO to be negative? For instance, a sudden stratospheric warming event could trigger a negative NAO, as could a prolonged blocking event from anomalous tropical heating in the Pacific. Two very different mechanisms. Yet when you interpret things in the context of teleconnections/indices, you miss the physics that were involved. I think this same idea transfers to the interdecadal climate time scale that is being discussed here.
*****
However, the NAO is not a physical process.
******
What is it, a spiritual process?
Judith,
The Pacific Decadal Oscillation, the North Atlantic Oscillation, the Atlantic Multidecadal Oscillation, the North Pacific Gyre Oscillation all have known signals that show up in global surface temperature anomalies. The recent switch of the PDO to the cool phase is being invoked to explain the lack of warming in the past decade. Etc. This will be the topic of a future series.
But none of that makes the forcing effect of CO2 go away.
True, but in terms of an explanation of the 20th century temperature record, no one has convincingly demonstrated (at a “very likely” level) that CO2 forcing results in a surface temperature change that exceeds natural variability.
andrew adams
To appreciate the magnitude of these uncertainties, see the impact of the PDO, e.g. Don Easterbrook’s projections for global cooling/warming based on historic 60-year PDO temperature fluctuations (in contrast to the IPCC’s CO2-driven assumptions, which did not incorporate PDO effects).
I DID include natural internal variability.
Please see my reply to Dr. Pratt
http://judithcurry.com/2010/11/19/michaels-controversial-testimony-part-ii/#comment-13111
Judith, you say “Further, I suspect that much of the temperature increase is unforced natural internal variability (no idea how much). The real fallacy is in thinking that this is all due to forcing.”
Can you please explain how you come to this conclusion (and point me in the direction of the relevant papers)?
John,
Warming is only caused by positive forcings (excluding for the moment natural variability). So the proportional contribution of each of the positive forcing agents to the sum total positive forcing can be taken as the proportion of the warming it caused.
Ramanathan and Carmichael assign GHGs=3W/m2 and BC=0.9W/m2 for a total of 3.9W/m2 of positive forcing. From Solomon et al., stratospheric water vapor changes have caused a warming that is 15% greater than otherwise expected (I’ll assume otherwise expected was from the 3.9W/m2). In order to get 15% more warming, you need 15% more positive forcing. So now, we have:
3.0 W/m2 from GHGs
0.9 W/m2 from BC
0.6 W/m2 from strat H2O
GHGs thus contribute 3/(3+0.9+0.6), or 67%, of the positive forcing and thus 67% of the warming.
Any negative forcing applies proportionally to the positive forcing agents and so doesn’t affect the final warming apportionment, right?
Where do you think that I am going wrong?
-Chip
In your first sentence. Warming is not caused simply by positive forcings. It is caused by the extent to which positive forcings exceed negative forcings.
Except for natural variability, the statement “most of the warming is due to GHG” is equivalent to the statement “most of the excess of positive forcings over negative forcings is due to GHG”. If there was no excess, there would have been no warming.
You make the same error (equating fractional forcing with fractional warming) in your second paragraph. Suppose, for the sake of argument, that GHG = 3 W/m2, BC = 0.9 W/m2, and aerosols = -2.3 W/m2. The net forcing would be 1.6 W/m2 to produce the temperature change. If Solomon et al. stratospheric water vapor produced warming 15% greater than otherwise expected, its forcing would have been 0.24 W/m2, not 0.6 W/m2.
But, John, you are not proportionally redistributing the negative forcing from sulfates to *all* the positive forcing agents.
In your example, sulfates are offsetting 2.3/3.9, or 59%, of the potential warming. This doesn’t uniquely apply to GHG and BC only; it also applies to the forcing from strat H2O. So, the contribution to observed warming becomes 3*41% or 1.23W/m2 for GHGs, 0.9*41% or 0.37W/m2 for BC, and your value for strat H2O of 0.24W/m2 (a quantity which in your calculation has already been subject to sulfate cooling). So, again, the proportional contribution from GHGs is 1.23/(1.23+.37+.24), which still equals 67%.
-Chip
Chip – Nonsense. The presence of aerosols does not affect the magnitude of GHG forcing at all. The only reason it enters into the Solomon calculation is because Solomon et al expressed their results in terms of a temperature change rather than a forcing change.
John,
Thanks for the continued patience!
Let’s see if this works better.
Recognized Forcings:
GHG = 3.0W/m2
BC = 0.9W/m2
Sulfate = -1.4 W/m2 (IPCC indirect+direct removing the +0.2 for their BC estimate)
Temp rise = 0.47C (after adjusting for measurement issues)
So we apparently have that 2.5W/m2 net forcing produced 0.47C of warming.
Now, it turns out that there was an unidentified positive forcing that was responsible for 15% of the recognized temperature rise.
The question we want to know, is what is the magnitude of the positive forcing from this new source.
We can determine this from what we know already after adjusting the observed temperature downward by 15% (since we now realize 2.5W/m2 was responsible for only 0.41C instead of the old notion that it was responsible for 0.47C). So if 2.5W/m2 produced 0.41C of warming, then it takes an additional 0.37W/m2 of forcing to produce the additional 0.06C of warming (to get back to the original 0.47C of observed warming).
So now our forcings table looks like this:
Strat H2O = 0.37W/m2
GHG = 3.0W/m2
BC=0.9W/m2
Sulfate = -1.4W/m2
Temp rise 0.47C
Dividing the warming proportionally among the positive forcings yields GHGs 70%, BC 21%, and strat H2O 9%.
This proportion from GHG is slightly higher than I initially calculated (70% vs. 67%), but even so, 70% of 0.47C is 0.33C which is still less than half of the original 0.70 that the IPCC assigned to “observed” warming.
Is this closer to your way of thinking?
Thanks,
-Chip
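For anyone following the numbers, here is Chip’s calculation above expressed as a sketch (his apportionment over the positive forcings only, which John disputes just below):

```python
# Chip's apportionment (sketch). Forcings in W/m2, warming in C.
ghg, bc, sulfate = 3.0, 0.9, -1.4
warming = 0.47                        # adjusted observed rise

net = ghg + bc + sulfate              # 2.5 W/m2
pre_h2o = warming / 1.15              # ~0.41 C before the Solomon 15% boost
# Forcing needed for the extra ~0.06 C, at the implied C-per-W/m2 rate:
h2o = (warming - pre_h2o) / (pre_h2o / net)   # ~0.37 W/m2

positives = ghg + bc + h2o            # ~4.27 W/m2
for name, f in [("GHG", ghg), ("BC", bc), ("strat H2O", h2o)]:
    print(f"{name}: {f / positives:.0%} of positive forcing")
# prints GHG 70%, BC 21%, strat H2O 9% -- Chip's numbers
```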
Chip – As a sanity check, would you agree that separating the forcing due to aerosols into a positive forcing due to black carbon and a negative forcing due to all other types of aerosols is arbitrary? One could just as easily work with total aerosol forcing, and whether you choose to break up aerosols into “black” and “other” or keep them all lumped together shouldn’t affect the impact of other forcings.
As I’ve done it, it doesn’t matter how you choose to divide up aerosol forcing; the effect allocated to GHG is unchanged. Your way, the more species of dark-colored aerosols you split off from light-colored aerosols, the smaller the allocated impact of GHG. That simply doesn’t make sense.
Oops, the comment above is a followup to my 5:08PM comment just below.
Chip, I discussed this on the other thread.
The estimated net RF in the AR4 ranges from some +0.5 W/m2 to +2.5 W/m2 (with a fairly wide distribution, and a small but finite probability outside these values). Furthermore, the GHG forcing is virtually independent of what BC or the sun is doing (Myhre et al 1998 gives up-to-date forcing estimates for the WMGHG’s). By no means can you just change these calculations because you find out that something else is affecting the climate.
To quote raypierre: “No other theory based on quantified physical principles has been able to do the same. If somebody comes along and has the bright idea that, say, global warming is caused by phlogiston raining down from the Moon, that does not make everything we know about thermodynamics, infrared absorption, energy balance, and temperature suddenly go away. Rather, it is the job of the phlogiston advocate to quantify the effects of phlogiston on energy balance, and incorporate them in a consistent way beside the existing climate forcings. Virtually all of the attempts to poke holes in the anthropogenic greenhouse theory lose sight of this simple and unassailable principle.”
Chip – I agree with your logic up to the paragraph beginning with the word “Div[id]ing”. At that point we’re back to the same disagreement as before. Allow me to make a minor alteration or two:
Our forcings table looks like this:
Strat H2O = 0.37W/m2
GHG = 3.0W/m2
BC=0.9W/m2
Sulfate = -1.4W/m2
Temp rise 0.47C
Dividing the warming proportionally by the total net forcing yields GHGs 105%, BC 31%, strat H2O 13%, and sulfate -49%.
This proportion from GHG is much higher than you initially calculated (105% vs. 67%), so, 105% of 0.47C is 0.49C which is more than half of the original 0.70 that the IPCC assigned to “observed” warming.
Indeed, accepting the revised 0.47C at face value, the GHG are powerful enough to produce more warming than what “actually” took place. This is because, according to the numbers we’re working with, the remaining forcings add up to -0.13 W/m2.
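And the same numbers run John’s way, allocating the warming over the net forcing rather than over the positives only (again just a sketch of the arithmetic above):

```python
# John N-G's apportionment (sketch): fractions of the NET forcing.
forcings = {"strat H2O": 0.37, "GHG": 3.0, "BC": 0.9, "sulfate": -1.4}
warming = 0.47                       # C

net = sum(forcings.values())         # 2.87 W/m2
for name, f in forcings.items():
    print(f"{name}: {f / net:+.0%} of net forcing -> {f / net * warming:+.2f} C")
# GHG comes out at ~105% of the net forcing, i.e. ~0.49 C -- more than
# half of the original 0.70 C, because the other three sum to -0.13 W/m2.
```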
Chip
“Warming is only caused by positive forcings”
Logically invalid. Would not reductions in magnitude of large negative feedbacks give the appearance of “positive forcings”? e.g. reductions in cloud cover reducing albedo?
You also ignored solar & planetary forcing, cosmic rays impacting clouds and Length Of Day, cause vs. effect on warming and CO2, Type B errors (bias), etc. See, e.g., the solar & planetary correlations by Nicola Scafetta.
See Nicola Scafetta at Duke
http://www.fel.duke.edu/~scafetta/
Excellent point!
Judith,
I don’t think so… holding post-1965 temps constant relative to pre-1945, applying an adjustment of +0.3 C at the 1945 drop and linearly decreasing to 0 C at 1965, implies the sudden bias introduced at 1945 decreases to 0… and that ain’t so… look at figure 4b, the red and blue curves, U.S. vs U.K. measurements do not return to their pre-1945 ratios, and both decrease relative to other measurements… Thompson did not recommend any such adjustment… the adjustment appears to be pulled from this hypothetical figure…
… (Thompson 2008) “smaller adjustments are likely to be required in SSTs through at least the mid-1960s” does not imply “a +0.3°C correction after about 1945 slowly declining to zero by the mid-1960s”
Lazar,
Admittedly, my adjustment to the temp data based on Thompson et al. is just an interpretation of what was described in their paper along with the graph from the CRU. I think it was a fair interpretation given the lack of detail available. Certainly you (or anyone else, see here for example) could come up with a bit of a different adjustment.
But I don’t think the overall result would be much different (given the info currently available).
-Chip
Chip,
fair enough
“But I don’t think the overall result would be much different (given the info currently available).”
Seriously?
What about these results?
ds,
I don’t think anyone is suggesting the latter, are they?
-Chip
“I don’t think anyone is suggesting the latter, are they?”
Well, strictly speaking, I don’t think anyone (except you and Michaels) is suggesting the former either. The real adjustment in this marine dataset (which will be published soon) is about half of what you two postulate, and the change in the global (land+ocean) mean would be even less. But that’s not the point.
The bottom line is: given the lack of detail available, the result CAN be much different. And somehow you failed to acknowledge that.
Thanks, ds,
Obviously, when the new data are released, we can make a much better assessment of the impact of the SST issues.
Like I said, I got my estimate from a chart attributed to the CRU (so I didn’t just dream it up). And, I took into account that it only applied to oceans.
-Chip
“And, I took into account that it only applied to oceans.”
Oh, really? How exactly?
That would be funny, because one can easily replicate your results by taking HadCRUT3, applying 0.3 deg adjustment at 1946, decreasing linearly to zero at 1965, and calculating trend over 1950-2009. Which is quite consistent with your own description here.
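ds’s replication recipe is simple enough to express directly. A sketch, with `hadcrut3_annual` as a hypothetical dict of year -> anomaly that would be loaded from CRU (the adjustment shape here is the Michaels-style one being discussed, not CRU’s own correction):

```python
import numpy as np

def michaels_style_adjust(series):
    """+0.3 C at 1946, declining linearly to zero at 1965 (as ds describes)."""
    out = dict(series)
    for year in out:
        if 1946 <= year < 1965:
            out[year] += 0.3 * (1965 - year) / (1965 - 1946)
    return out

def trend_per_decade(series, start=1950, end=2009):
    years = sorted(y for y in series if start <= y <= end)
    slope = np.polyfit(years, [series[y] for y in years], 1)[0]  # C per year
    return 10 * slope

# hadcrut3_annual = {year: anomaly, ...}   # load the annual series from CRU
# print(trend_per_decade(michaels_style_adjust(hadcrut3_annual)))
```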
but note that “the overall result”… meaning the trend with reference to McIntyre’s analysis… now does not depend on Thompson… McIntyre’s adjustment for Thompson (2008) is a simple step change shifting post-1945 data by the same amount (which I think is more likely than your and Michaels’ adjustment)… and which *doesn’t* affect the trend… the trend is affected by changes post-1970 which are based on other data/analyses… not Thompson…
ah… ds has already made the point… see above… good illustration
I would like to inquire about the use of data here. It is apparent to me that the case for AGW rests on the temperature rise in the latter part of the twentieth century. There is an argument about the quality of the surface temperature data, which involves mysterious adjustments as well as widespread siting and collection errors, station dropout, etc. To this data-driven scientist, it is impossible to draw strong conclusions from such lousy data. The best to expect are inferences for future work.
On the other hand, for this same critical time period satellite data is available. Maybe the precision of satellite data could be questioned, but temp differences should be consistent and reproducible under any further adjustments. Satellites should then be best for showing a trend. Further, the two gathering agencies, RSS and UAH, agree on trends.
Given this better source of data in the time period of most importance, why does everyone continue to use surface data rather than satellite data? Can someone straighten me out here?
Paul Chaxelle
Note there is a major dispute over an “adjustment” to the satellite record.
The satellite PIs and Scafetta see no basis for a major adjustment made by IPCC AGW authors. See:
Total solar irradiance satellite composites and their phenomenological effect on climate. Preprint submitted to Geological Society of America, August 6, 2009. http://arxiv.org/PS_cache/arxiv/pdf/0908/0908.0792v1.pdf
N. Scafetta and R. Willson, “ACRIM-gap and Total Solar Irradiance (TSI) trend issue resolved using a surface magnetic flux TSI proxy model”, Geophysical Research Letters 36, L05701, doi:10.1029/2008GL036307 (2009).
http://www.fel.duke.edu/~scafetta/pdf/2008GL036307.pdf
So much for the “science is settled”!
“Given this better source of data in the time period of most importance, why does everyone continue to use surface data rather than satellite data? Can someone straighten me out here?”
I tend to prefer RSS MSU data when applicable but they only cover 31 years. So if you smooth the temperature with a 30-year window, the window can only move one year, which is not much use. This can be seen concretely at
http://woodfortrees.org/plot/rss/mean:360
by noting the time scale. This is using all 31 years’ worth of satellite data, yet all that the graph is showing is the difference between the last 20 months and the first 20 months, which completely ignores the nearly three decades in between.
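The window arithmetic behind this is worth seeing explicitly (a sketch; the month counts are approximate):

```python
months_of_data = 382   # ~31.8 years of RSS MSU data, Jan 1979 to late 2010
window = 360           # 30-year mean, in months

print(f"points plotted: {months_of_data - window + 1}")   # ~23
# The 360-month mean can only slide ~22 months, so the smoothed curve
# reduces to the difference between the ~20 months at each end.
```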
The Thompson et al paper should really be referred to as the McIntyre paper, since the idea for it was lifted by Thompson et al from Climate Audit posts dating back to 2005 without any acknowledgement.
See “Nature Discovers Another Climate Audit Finding”
“The Team and Pearl Harbor”
and “Changing Adjustments to 19th Century SST”.
1. You have no knowledge that Thompson lifted the idea from Climate Audit posts and failed to acknowledge it. It’s possible he did not know of McI’s unpublished research notebook meanderings.
2. McI throws out a lot of interesting ideas, but often fails to develop them. A paper is almost always a more thorough examination. McI does not “own concepts” for having thrown out a bunch of non-developed ideas.
3. Certainly, if someone used his work as a source, they should acknowledge it (the way you would, for instance, with a personal communication), but it’s not reasonable to expect people to google Climate Audit like a reference search. So, it’s really not even an oversight or a failure to “check the literature” to not have seen the CA posting.
4. It’s even possible that Steve came up with the concern and then it percolated somehow to Thompson as an “idea”. Again, though, if Thompson was not aware of the Steve post, there was no discourtesy.
OK. The original paper describing the production of HadSST2, the SST data set used in HadCRUT (which in turn is the data set used in the Thompson paper), can be found here:
Rayner et al. 2006
linked from this page:
HadSST2
Take a look at Section 6 “Remaining issues”. I quote: “Breaking the data up into separate countries’ contributions shows that the assumption made in deriving the original bucket corrections – that is, that the use of uninsulated buckets ended in January 1942 – is incorrect.” I guess one could quibble that the paper wasn’t accepted until after McIntyre first blogged about buckets in June 2005 – it had only been submitted – leaving plenty of room for shenanigans, so let’s go back even earlier…
…to Smith and Reynolds 2002, which comfortably predates Climate Audit itself by a year or so
Smith, Thomas M., Richard W. Reynolds, 2002: Bias Corrections for Historical Sea Surface Temperatures Based on Marine Air Temperatures. J. Climate, 15, 73–87.
AMS link to paper
They conclude (among other things) that “whenever substantial additions to the historical data are made, changes in the bias should be evaluated. The future additions of new historical SST may demand new bias estimates to account for differences in the bias of the new data.” and elsewhere “The unsmoothed coefficient changes abruptly around 1940 and it stays below the smoothed curve through the 1950s. Thus, there may be a mix of biased and unbiased data in that period. In the future more work may be needed to better estimate bias between about 1940 and the late 1950s, especially if additional data sources become available.”
Both of these predate McIntyre’s blog postings and both are cited, I think, in the Thompson paper. Of course you can go further back, to Folland and Parker 95, who were the subject of those McIntyre posts. There’s a copy here:
ftp://podaac.jpl.nasa.gov/pub/sea_surface_temperature/buoy/gostaplus/hdf/Document/papers/1-crrt/1-CRRT.HTM
The appropriate bit is “Since World War II, a mixture of uninsulated and insulated buckets, engine inlets, and latterly hull sensors, has been in use (WMO 1956). Some double-walled, effectively insulated buckets, currently use canvas. The UK Meteorological Office was still issuing only uninsulated canvas buckets in 1956, and some of these were used until the late 1960s (Meteorological Office 1969), but the first issue of rubber buckets was in 1957 and these mostly replaced the canvas buckets by the 1970s. Nevertheless, it is believed that a substantial fraction of observations from UK ships were (and are) really made using the engine-intake thermometer. Our climatological reference period, 1951-80, therefore contains several types of SST data, though uninsulated-canvas-bucket data were probably relatively few in number. It is relative to this mix of data that we have adjusted earlier SST data later in the paper.”
So they were aware that there might be biases in the data in the post war period, but that they were likely to be smaller than the pre-war biases. Contrast that with McIntyre’s version:
“One of the Team’s more adventurous assumptions in creating temperature histories is that there was an abrupt and universal change in SST measurement methods away from buckets to engine inlets in 1941, coinciding with the U.S. entry into World War II.”
Let’s look at the storyline. What Folland and Parker found, and what Smith and Reynolds found, was that – using the data available to them in 1995 and 2002 – there was very little evidence to justify a correction to the data after 1942. I think it’s safe to say we all agree that one shouldn’t make an adjustment to the data that isn’t justified.
The Kent paper that McIntyre quotes wasn’t released until 2007 and the metadata they used had only recently been digitised. When Rayner et al. looked at the data in 2006, they were looking at a much larger data set (ICOADS2.0) than either Smith and Reynolds or Folland and Parker had access to. Bearing in mind what Smith and Reynolds said about “whenever substantial additions to the historical data are made, changes in the bias should be evaluated” the preliminary investigations of Rayner et al. suggested that there were some sources of data that were using uninsulated buckets after the war. Thompson et al. looked at that same data in a little more detail and found that actually there was very good evidence to suggest that corrections were needed after 1942.
McIntyre’s comment at the end of his first post – “I don’t plan to wade through this material but someone should” – was spot on, but as it turned out someone already had waded through it, someone was wading in it, and presumably they still are.
RealClimate had a half-way sensible post on this at the time of the Thompson paper.
of buckets and blogs
@Vaughan Pratt | November 19, 2010 at 1:12 pm
Very important point, this false disjunction is a plank of the denial machine’s platform. However it’s not just “in principle” since it can be strengthened to “demonstrably false.”
Dr Pratt
My knowledge of the CO2 molecular properties is rather limited, but as with any process there are quantitative as well as qualitative assessments. The second tells me that the first is a poor candidate to take all the burden.
Some months ago I produced this graph:
http://www.vukcevic.talktalk.net/CO2-Arc.htm
It tells a great deal. Not that I am claiming that the Arctic geomagnetic field is the, or even a, cause of the temperature changes, but I think it may be another manifestation of the natural forces controlling Arctic/North Atlantic currents. There are indications that the geomagnetic field is the temperature’s ‘time travelling companion’:
http://www.vukcevic.talktalk.net/NFC1.htm
http://www.vukcevic.talktalk.net/LL.htm
and more.
Since it’s easier to imagine the field affecting the temperature than vice versa, the strong correlation makes an interesting argument for the former.
Sorry if I wasn’t clear. The only aspect of CO2 on which my proposed refutation of the dichotomy (temperature is driven either 100% or 0% by CO2) rests is the smoothness of the Keeling curve. No physical properties of CO2 are involved. The point is that there are temperature fluctuations during this century that cannot be accounted for by CO2 but which are comparable in size to temperature fluctuations in the previous century. When the alleged impact of CO2 on temperature has been taken into account at 2 °C per doubling (the observed sensitivity, as distinct from the modeled sensitivity of 4 °C), what remains are fluctuations that, while nontrivial, are also nonpathological, in the sense that they might well be accounted for by variations in insolation, ocean oscillations, lack of correlation between aerosols and CO2, etc. CO2-correlated aerosol warming or cooling should be considered part of “taking CO2 into account,” because those correlations are likely to continue into the future and therefore should participate in temperature projections based on expected CO2 variations; this methodology accomplishes that automatically.
In my recent research I made an attempt to separate, what one could consider to be natural oscillations, from an apparent man-made input, result is shown in this graph:
http://www.vukcevic.talktalk.net/CETng.htm
GP, the ‘Global precursor’ (green line), shows a promising result. If large volcanic eruptions (Katla 1755, then in the early 1800s Mayon and Tambora) are taken into account, then this ‘proxy’ tracks the CETs well over a span of nearly 350 years.
A period of significant divergence starts in the 1950s, when various man-made influences (CO2, UHI, CFCs, etc.) may have come into play. The data represent a kind of ‘mini Milankovic’ effect based on the JPL ephemerides.
@vukcevic “If large volcanic eruptions (Katla 1755, then in the early 1800s Mayon and Tambora) are taken into account, then this ‘proxy’ tracks the CETs well over a span of nearly 350 years.”
Your work looks very interesting. Do you have reliable global temperature data earlier than 1850? I’ve been relying on HADCRUT3 which only goes back to 1850. Even something global for 1800-1850 would be tremendously helpful. 1750-1800 would be fantastic!
Sorry, no help there. I concentrate on CETs, since there is a reasonable record from 1659 (http://hadobs.metoffice.com/hadcet/cetml1659on.dat ), and on the Arctic (from the 1850s), which plays a crucial role in the N. hemisphere.
“I concentrate on CETs, since there is a reasonable record from 1659”
Right, the NH record spans more than twice what we have for the globe. What’s been discouraging me from looking at data for just one hemisphere is the multidecadal warming of one or the other hemisphere at different times. Is there a reliable way to subtract that oscillation off? If so I could get more interested in the two centuries prior to 1850.
One can brute-force it simply by convolving with a half-century or so point spread function, but that erases everything changing faster than that. This is fine for paleoclimatological comparisons where that’s all you have anyway but it masks potentially interesting phenomena happening in a time frame of a couple of decades.
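(For concreteness, a minimal Python sketch of the brute-force smoothing just described; the input series here is synthetic stand-in data, a made-up oscillation plus noise, not real CET or HadCRUT values:)

```python
import numpy as np

# Boxcar "point spread function" roughly half a century wide.
years = np.arange(1659, 2011)
rng = np.random.default_rng(0)
temps = 9.0 + 0.5 * np.sin(2 * np.pi * (years - 1659) / 65.0) \
        + 0.1 * rng.standard_normal(years.size)   # hypothetical CET-like series

width = 51                            # ~half-century window, in years
kernel = np.ones(width) / width
smoothed = np.convolve(temps, kernel, mode="valid")
smoothed_years = years[width // 2 : -(width // 2)]

# Everything varying faster than ~width years is erased: fine for paleo
# comparisons, but it masks phenomena on the scale of a couple of decades.
```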
I should add that I was using “modeled sensitivity” only as a convenient name for the 4 °C vicinity of the sensitivity range, not to imply that models always produce 4 °C. Modeled sensitivities can vary over a wide range due to wide variations in modeling. Observed sensitivity is more constrained, varying only with choice of observations of CO2 and temperature and methodology for interpreting their relationship. Observations limited to 1850-now should produce close to 2 °C per doubling, justifying referring to that vicinity of the range as “observed sensitivity.”
I realize this is a bit out of left field, but that’s me.
[When the alleged impact of CO2 on temperature has been taken into account at 2 °C per doubling (the observed sensitivity, as distinct from the modeled sensitivity of 4 °C)…]
Doesn’t the attribution of ‘observed sensitivity’ make the assumption that ALL of the observed temperature increase is due to CO2 increase? Is that a valid assumption?
Arfur Bryant: “Doesn’t the attribution of ‘observed sensitivity’ make the assumption that ALL of the observed temperature increase is due to CO2 increase? Is that a valid assumption?”
That’s a really great question, thanks for asking it. The answer is no. The methodology used here is to subtract the CO2 impact on temperature from the observed temperature. What remains is taken to be the temperature change (increase or decrease) due to other causes than CO2. Some of these are known (insolation variation, ocean oscillations, uncorrelated GHGs such as water vapor and ozone, etc.), others are unknown but perhaps we’ll learn about them in the future.
Vaughan, thanks for your reply. Your answer hasn’t clarified anything for me – maybe I’m just being dense…
[The methodology used here is to subtract the CO2 impact on temperature from the observed temperature. What remains is taken to be the temperature change (increase or decrease) due to other causes than CO2.]
My point is this:
The total ‘observed’ temperature increase from ALL factors and feedbacks is approx 0.8 deg C since 1850. Over the same period, though not necessarily correlated with it, there has been an ‘observed’ increase in CO2 of approx 40%. Therefore, 100/40 x 0.8 = 2 deg C, which would equate to the ‘climate sensitivity’ to which you refer.
There is no division here between the warming from GHGs and non-GHG factors. Therefore the true figure of ‘observed climate sensitivity’ must be LESS than 2 deg C. The amount by which it is less will depend on all the factors you mentioned (insolation variation, ocean oscillations, uncorrelated GHGs such as water vapor and ozone, etc.) and maybe more.
Maybe I’m missing something but I am not sure anyone can distinguish the ‘observed sensitivity’ without knowing exactly what the ‘CO2 impact on temperature’ is. We can only use an ‘observed temperature rise’.
Regards,
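(A quick sketch of the arithmetic above, for readers following along. The 100/40 x 0.8 figure scales linearly in concentration; sensitivity is conventionally quoted per doubling, i.e. logarithmically, which gives a somewhat lower number from the same two inputs. Both versions still attribute all of the observed warming to CO2, which is exactly the point at issue in this exchange:)

```python
import math

dT = 0.8        # observed warming since 1850, deg C (figure quoted above)
rise = 0.40     # fractional CO2 increase over the same period

linear_estimate = dT / rise            # = 2.0 deg C per 100% increase
doublings = math.log2(1.0 + rise)      # a 40% rise is ~0.49 of a doubling
log_estimate = dT / doublings          # ~1.65 deg C per doubling

print(f"linear: {linear_estimate:.2f}  logarithmic: {log_estimate:.2f}")
```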
“The total ‘observed’ temperature increase from ALL factors and feedbacks is approx 0.8 deg C since 1850.”
I cringe whenever I see people estimate trends from just the endpoints. Even climate deniers know not to do that, though they use that knowledge to create strawman arguments that climate scientists use that methodology.
Use 2 °C and estimate CO2 as 280 ppmv plus an exponential and you’ll see that the residue after subtracting the CO2 contribution to warming looks reasonably flat over a century and a half. 1.5 and 2.5 look nowhere near flat.
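(A sketch of that test in Python, under stated assumptions: CO2 is modeled as 280 ppmv plus an exponential with made-up coefficients chosen to land near modern values, and the temperature series is synthetic stand-in data built with a 2 °C/doubling CO2 term; substitute HADCRUT3 annual means and observed CO2 to reproduce the real comparison:)

```python
import numpy as np

years = np.arange(1850, 2011)
co2 = 280.0 + 4.0 * np.exp((years - 1850) / 48.0)   # assumed exponential form
rng = np.random.default_rng(1)
temp = 2.0 * np.log2(co2 / 280.0) + 0.1 * rng.standard_normal(years.size)

for S in (1.5, 2.0, 2.5):
    residual = temp - S * np.log2(co2 / 280.0)
    trend = np.polyfit(years, residual, 1)[0] * 100      # deg C per century
    print(f"S = {S}: residual trend {trend:+.2f} C/century")
# Only the right S leaves a residue with no century-scale slope; 1.5 and
# 2.5 leave a clear warming or cooling trend, which is the claim above.
```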
Vaughan,
I appear to have antagonised you, which was certainly not my intent. I am just trying to get a handle on all this. I will take your word for it that the observed sensitivity is 2 deg C.
Could you just tell me what, in your opinion, was the contribution of CO2 to the Greenhouse Effect when the CO2 level was 280 ppmv?
Thanks. I’ll understand if you don’t reply.
“Could you just tell me what, in your opinion, was the contribution of CO2 to the Greenhouse Effect when the CO2 level was 280 ppmv?”
No idea. Ironically, although I’m better at theoretical than experimental physics, it’s the opposite for me when it comes to climatology, because the climate is so much more complicated than physics. It’s a nice academic question what the temperature decline would be in the absence of CO2, but its empirical determination is out of the question (the experiment would kill plant life on Earth) and I wouldn’t know where to begin in calculating it. For starters you’d need some way of removing the infinities, which I barely know how to do even for QED, let alone something more complicated like climate.
But even if you knew the answer, what good would it do anyone? It’s completely academic.
Vaughan,
Thank you. I really appreciate the time you have taken to discuss this with a layman!
“But even if you knew the answer, what good would it do anyone? It’s completely academic.”
I agree it is academic in a scientific sense. I suppose that, unfortunately, it is the sort of thing which can be used (or misused) to a large degree in the political sense if one wants to give ‘scientific authority’ to one side of the debate – such as saying that the contribution of CO2 to the GE is 9-26%.
As to your points about CO2 and ocean acidification – yes, the subject is definitely complex!
Regards,
“Thank you. I really appreciate the time you have taken to discuss this with a layman!”
Well, I’m retired, so outreach is less of an imposition than if I were still in the saddle with deadlines to meet. Anyway, as far as climate goes I’m pretty much a layman too; I’m just leveraging my physics, math, and business backgrounds to synthesize a general picture from what the climate experts tell us.
Such a picture makes it a lot easier to see the holes in the arguments of the denial machine, whose effectiveness is reflected in the need to represent it on all three panels and whose illogic is sharply illuminated by putting it under the spotlight in this way.
I get the feeling there are at least four kinds of climate denier: those who have difficulty following the thread of an argument, those who have not been adequately exposed to the science, those who consider it their patriotic duty to unquestioningly defend the denial machine like sentries on duty, and those like Fred Seitz, Fred Singer, William Nierenberg, and Robert Jastrow who understood the science but believed nevertheless that business, and therefore the country, is best served with the tools of propaganda. The last group acted in fulfillment of both parts of Barry Goldwater’s 1964 slogan, “Extremism in the defense of liberty is no vice, and moderation in the pursuit of justice is no virtue,” where liberty in this case means laissez-faire freedom from government regulation.
Many deniers are some mix of these. It’s interesting to speculate what that mix is for their representative on each of these three panels. Since they all teach climatology, only the second can be ruled out immediately.
“Therefore the true figure of ‘observed climate sensitivity’ must be LESS than 2 deg C. ”
Oops, sorry, missed that point. This is answered in the last paragraph of my post lower down of 12:32 pm today. How to infer the observational counterpart of modeled CO2 sensitivity from observed sensitivity is an excellent and wide open question. Even whether the other factors give a net increase or decrease is open.
Ooops, we seem to be typing across each other. Thanks for that reply:)
Got it, thanks.
Ok, great. Let me just add that to a first approximation, what I called “the observational counterpart of modeled CO2 sensitivity” is an academic question if the purpose to which sensitivity is to be put is projection of future temperatures based on extrapolation of the constant-plus-exponential Keeling curve. This is because projections are more accurate when they take the other factors besides CO2 itself into account.
When however one asks about the impact of mitigating aerosol impact independently of CO2 reduction, separating out the factors becomes practically useful. This impacts the cost-benefit analysis of different types of scrubbers at power plants for example. If the cost of removing aerosols dwarfs that of removing CO2 then one might begin with just CO2 scrubbers and put up with the aerosols until a more cost-effective aerosol scrubber comes along. Or it could be the other way round. In either case it then becomes valuable to separate out the contributions of each component in order to predict the benefits and weigh them against the costs.
One must also not lose sight of the other risk posed by CO2, ocean acidification. It is ironic that while acid rain comes from sulfate aerosols, ocean acidification is primarily via CO2’s action in converting carbonate to bicarbonate near the surface, a tiny effect when judged by pH but a huge one when judged by carbonate level. I’m not aware of any comparable impact of aerosols there.
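(For reference, the standard way to write the chemistry being described: CO2 + H2O + CO3²⁻ → 2 HCO3⁻. Dissolved CO2 consumes carbonate ion to make bicarbonate, which is why the carbonate level can fall substantially while the pH barely moves.)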
Before one offers solutions it is advisable to understand the natural variation, i.e. what is causing it rather than who is causing it.
For example, volcanics are an important part of the climate complex and are poorly understood as a whole; i.e., both GCMs and CCMs are constrained by the biological response, e.g. Gauci, V., N. Dise, and S. Blake (2005):
Wetlands are a potent source of the radiatively important gas methane (CH4). Recent findings have demonstrated that sulfate (SO4²⁻) deposition via acid rain suppresses CH4 emissions by stimulating competitive exclusion of methanogens by sulfate-reducing microbial populations. Here we report data from a field experiment showing that a finite pulse of simulated acid rain SO4²⁻ deposition, as would be expected from a large Icelandic volcanic eruption, continues to suppress CH4 emissions from wetlands long after the pollution event has ceased. Our analysis of the stoichiometries suggests that 5 years is a minimum CH4 emission recovery period, with 10 years being a reasonable upper limit. Our findings highlight the long-term impact of acid rain on biospheric output of CH4 which, for discrete polluting events such as volcanic eruptions, outlives the relatively short-term SO4²⁻ aerosol radiative cooling effect.
We also see this in Keeling et al. 1995, trying to explain the significant CO2 drawdown in the mid 80s and early 90s; we can more readily explain this as an acute response to a limiting quantity, e.g. Langmann et al. 2010:
Volcanic ash as fertiliser for the surface ocean
Abstract
Iron is a key limiting micro-nutrient for marine primary productivity. It can be supplied to the ocean by atmospheric dust deposition. Volcanic ash deposition into the ocean represents another external and so far largely neglected source of iron. This study demonstrates strong evidence for natural fertilisation in the iron-limited oceanic area of the NE Pacific, induced by volcanic ash from the eruption of Kasatochi volcano in August 2008. Atmospheric and oceanic conditions were favourable to generate a massive phytoplankton bloom in the NE Pacific Ocean which for the first time establishes a causal connection between oceanic iron-fertilisation and volcanic ash supply.
“Before one offers solutions it is advisable to understand the natural variation, i.e. what is causing it rather than who is causing it.”
Wait, what? What would it mean to ask who is causing the natural variation? Or is nature a “who” for you, as in the Gaia hypothesis?
If we agree to delete “rather than who is causing it” as meaningless for your question, the rest of it makes excellent sense and is one I’m very interested in myself. Since we know how much CO2 is in the atmosphere it is straightforward to subtract off the part of its influence that has changed over the past century and study what remains for correlations with natural phenomena. The amount to subtract off is that which minimizes correlation with CO2 variation, which turns out to be around 2 °C per doubling. If you subtract off too much or too little, the residual correlation with the extremely strong CO2 signal masks the natural variations, which you and I agree are of great interest.
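(A minimal sketch of that procedure, again on synthetic stand-in data rather than real records: scan candidate sensitivities and keep the one whose residual is least correlated with the CO2 signal:)

```python
import numpy as np

years = np.arange(1850, 2011)
co2 = 280.0 + 4.0 * np.exp((years - 1850) / 48.0)   # assumed exponential form
log_co2 = np.log2(co2 / 280.0)
rng = np.random.default_rng(2)
temp = 2.0 * log_co2 + 0.1 * rng.standard_normal(years.size)  # hypothetical record

candidates = np.arange(0.0, 4.01, 0.05)
scores = [abs(np.corrcoef(temp - S * log_co2, log_co2)[0, 1]) for S in candidates]
best = candidates[int(np.argmin(scores))]
print(f"sensitivity minimizing residual-CO2 correlation: {best:.2f} C per doubling")
# Subtracting too much or too little leaves a residue still correlated with
# the strong CO2 signal, masking the natural variations of interest.
```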
I constructed this graph
http://www.vukcevic.talktalk.net/EW.htm
only a few days ago, as I often do, to amuse myself (see the note of explanation). But as I look at it, it is starting to worry me on a number of accounts:
- could one assume that there must be some meaning to it, despite no identifiable transfer mechanism?
- if the answer is ‘no’, then it must be a major scientific puzzle.
- if the answer is ‘yes’ (assuming there is an unknown link), then something must have happened since 1950 to produce the first major departure in the temperature intensity while maintaining the trend (note the good agreement since accurate temperature data became available in 1860).
- let’s assume that the cause of the departure is mainly CO2; then it must have saved a lot of winter heating fuel since 1950, i.e. extended the lifetime of the world’s oil and coal reserves.
- if the temperature is to follow the ‘wobble’, which can be accurately forward-calculated, then a major cooling is to follow up to 2025-30. It comes as a surprise that this date tallies with the prediction of solar scientists for a downturn in solar activity, and with what I calculated (in 2003) might be the trend of the solar magnetic field:
http://www.vukcevic.talktalk.net/LFC2.htm
My logic might be at fault, but again there was a point to be considered.
Vaughan Pratt:
You say;
“I get the feeling there are at least four kinds of climate denier:”
then you slur several honest and accomplished scientists.
But you fail to mention the most common and most dangerous “climate deniers”. These, of course, are the AGW-proponents who deny that they should demonstrate a need for their proposed GHG emission constraints and who deny the right of others to resist their dangerous proposals.
Fortunately, it is now clear that those proposals will not be adopted so their dire effects will not be induced. But the “climate deniers” deny that, too.
Richard
There is a fundamental disagreement here that I would summarize as follows.
Freedom of speech entitles the George C. Marshall Institute to its opinion concerning whose rights take precedence: the right to pollute or the right to be free of pollution. The effects of taking away the right to pollute may as you say be dire, but it is the opinion of many that so are the effects of pollution. Not everyone agrees with you that you have the interests of their health at heart, whether concerning tobacco or CO2.
Vukcevic,
I am having the same type of indications with atmospheric pressure movement.
Would they also not be related to temperature fluctuations?
If you are referring to http://www.vukcevic.talktalk.net/CETng.htm , I do not think that GP (green line) is a driver (its energy is minuscule) and for the time being it is just an interesting correlation, while the NAP has all the necessary attributes to do that, but again indirectly, via N. Atlantic currents.
Thank you Vukcevic.
That was informative.
Something to consider though: freshwater in the north versus saltwater in the south. Planetary shape and force are different from those at the equator. This means that the precipitation cycle is not the same as at the equator. So if you brought fresh water south, the evaporation would be faster, but there has to be a slight atmospheric pressure increase which would hinder it only slightly.
Vukcevic,
What you would not know, as science could never understand it or recreate it, is that centrifugal force is the primary driver of the evaporation process. Centrifugal force is directly linked to the speed of rotation and gets weaker as the planet slows. This planet rotates two-dimensionally, so the energy from centrifugal force would hit the atmosphere at the poles first, due to the shape of the planet, and travel to the equator. The circulation draws down to the planet surface and pulls cloud cover north. This process happens in both hemispheres. It is a slow process, as all of the planet is giving off energy at different rates due to the many factors that would be currently happening: land mass differences, etc. So the atmosphere does have a ceiling. This pattern is strikingly similar to the magnetic field. Inside our planet, the energy cannot escape towards the core, so it must come out through the surface by way of earthquakes and volcanoes. Trapped gases have been compressed to store massive amounts of energy in liquids.
Centrifugal force can be mechanically reproduced.
The way current science is, if we don’t feel it, it ain’t true.
Can you count how many physics LAWS and thermodynamic LAWS this process breaks? Manmade LAWS, and not science.
“The circulation draws down to the planet surface and pulls cloud cover north.”
What’s your latitude? At our latitude (37° N, Palo Alto) the clouds are always traveling from west to east, very reliably so. I’d been assuming that was because of the Coriolis force.
Yes, Southwest to northeast.
Check any global satellite view of cloud cover and you can watch this live for days or years if it suits you.
“Yes, Southwest to northeast.”
You didn’t answer my question “what’s your latitude?”
The Westerlies are winds that blow from west to east between 35° N and 60° N. These are even stronger in the SH due to less land to slow them down, and are called the Roaring Forties, Furious Fifties and Shrieking Sixties according to latitude. (I was raised at latitude 32° S, in Perth, Australia, where we were right in the path of the Roaring Forties, as our geography teachers would often remind us. Dutch and English explorers trying to reach India by going around the southern tip of Africa were occasionally blown into the west coast of Australia that way.)
There is no noticeable northern component of the clouds that pass over our house at latitude 37° N. At the equator, on the other hand, the wind 2 km and up will be northbound (confusingly called a southerly wind) but trending east with increasing latitude until reaching 30° N, which is the top end of the Hadley cell. The eastward drift is the result of angular momentum at the equator having to go somewhere as the radius of rotation about the Earth’s NS axis decreases, causing the air to spin faster by exactly the same mechanism as a skater who spins faster by pulling in her arms.
Bottom line: cloud direction is somewhere between southerly and westerly (i.e. northbound and eastbound) but in proportions strongly dependent on latitude. In the SH interchange north and south but not east and west.
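(The skater analogy can be made quantitative. A back-of-envelope sketch, assuming air starts at rest at the equator and conserves angular momentum about the Earth’s axis as it moves poleward:)

```python
import math

OMEGA = 7.292e-5    # Earth's rotation rate, rad/s
R = 6.371e6         # Earth's radius, m

def zonal_wind(lat_deg):
    """Eastward speed (m/s) implied by angular-momentum conservation for air
    starting at rest at the equator: u = Omega*R*sin^2(phi)/cos(phi)."""
    phi = math.radians(lat_deg)
    return OMEGA * R * math.sin(phi) ** 2 / math.cos(phi)

for lat in (10, 20, 30):
    print(f"{lat} deg: {zonal_wind(lat):.0f} m/s eastward")
# ~134 m/s by 30 deg, the top of the Hadley cell. Real winds are far weaker
# because friction and eddies drain momentum, but the eastward drift and its
# growth with latitude match the description above.
```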
Incidentally I noticed that the comment I posted a few minutes ago in response to Craig Goodrich was flagged as “Your comment is awaiting moderation.” Is this routine? I just don’t recall seeing that flag before.
Too many links causes that.
Keeping notes on your site and linking to them is not a bad idea!
William Geer’s testimony: all the changes he is noting MUST have happened during the MWP and prior to this glaciation period. What Geer doesn’t understand is that change in the biota is normal. Also, how much of this change is anthropogenic but not from CO2 emissions, like habitat loss?
Hi, Judith: Regarding Thompson et al (2008), the 1945 discontinuity also exists in ICOADS Global Ocean Cloud Cover and two Global Marine Air Temperature datasets (MOHMAT4.3 and ICOADS):
http://bobtisdale.blogspot.com/2009/03/large-1945-sst-discontinuity-also.html
And it exists in the ICOADS Global Ocean Wind Speed data, but inverted:
http://bobtisdale.blogspot.com/2009/03/part-2-of-large-sst-discontinuity-also.html
Since there has not been any explanation as to why the discontinuity also exists in the other datasets, the results of Thompson et al should be considered questionable.
Is this why the Hadley Centre has delayed the release of HADSST3? Dunno.
Regards
“Since there has not been any explanation as to why the discontinuity also exists in the other datasets, the results of Thompson et al should be considered questionable.”
It could just be that there was a big change in shipping at the end of the war – I don’t think that’s disputed – and that they did things slightly differently. I think the suggestion was that marine air temperatures were read indoors at night during the war, rather than out on deck. If you split marine air temperatures into day and night, I’d bet you would see the WWII “spike” is largest in the night time temperatures.
Another thing that happened during WWII was that the ships changed the times at which measurements were taken. It would be interesting to see if those kinds of effects might influence the records.
Actually I’m not a fan of what Thompson et al. did, but that said, there are a whole host of issues associated with the ocean data.
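(A sketch of the day/night split suggested above, assuming a hypothetical DataFrame of marine air temperature observations with ‘year’, ‘local_hour’, and ‘mat’ columns, e.g. extracted from ICOADS; the column names and baseline window are illustrative, not any archive’s actual schema:)

```python
import pandas as pd

def wwii_spike_by_daynight(obs: pd.DataFrame) -> pd.Series:
    """Compare the 1942-45 marine-air-temperature anomaly against a pre-war
    baseline, separately for day and night observations."""
    obs = obs.assign(night=(obs["local_hour"] < 6) | (obs["local_hour"] >= 18))
    annual = obs.groupby(["night", "year"])["mat"].mean().unstack("night")
    baseline = annual.loc[1930:1941].mean()
    # A spike that is largest in the night-time column would support the
    # read-indoors-at-night explanation; a shift tied to observation hours
    # would show up when binning by local_hour instead.
    return annual.loc[1942:1945].mean() - baseline
```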
Judith,
What is not taken into account is the lag time between an event occurring and the forcing being felt. The planet’s tilting from summer to winter takes months to be felt. A concentrated climatic event can take years due to stored energy in the oceans and atmosphere.
@Joe Lalonde “What is not taken into account is the lag time between an event occurring and the forcing being felt.”
That’s a routine modeling issue. Transient sensitivity takes it into account, equilibrium sensitivity doesn’t. The numbers are quite different, but in either case this is within the realm of modeling sensitivity, namely what the sensitivity *ought* to be.
Observed sensitivity is what it actually *is*, based on any given period of observation (which may yield quite different results depending on the period and how the CO2-temperature relationship is inferred from the observations). (Recall Clinton’s question, how do you define “is”?)
Since observers by necessity have less vivid imaginations than modelers, observed sensitivity varies over a narrower range than modeled sensitivity.
Okay then,
What is the significance of adding salt only to the oceans surface?
It’s not rocket science.
How about changing winds? What significance are they having in the atmosphere?
These parameters take time due to the stored heat and energy already accumulated.
Are these taken into account on current models? Of course not.
As far as climate science is concerned, these have no effects. But before 1974, all the oceans had the same mixture. Just salt on top of the oceans is against how the science is supposed to work: all the mixture would change if heated and a mass evaporation would occur. Which did not happen.
Joe, I fully agree with you about the incompleteness of modeled sensitivity. There may well be parameters currently neglected that would improve the accuracy of our climate models. But it’s not only the fact of their incorporation that matters but its manner: modeled sensitivity is so variable because of variations in both.
This can be seen in the work of Arrhenius, who in 1895 gave 5 °C as the climate sensitivity at the equator and 6 °C at the poles but revised this downward to 1.6 °C in 1904. In those days modeling was the only option for estimating sensitivity; today we have ice cores for paleoclimatology and the luxury of being able to observe the effect on temperature of adding a few hundred gigatonnes of CO2 to the atmosphere. From the latter, considering global temperature over a century and a half and assuming an exponential accumulation of CO2 superimposed on an assumed base level of .28‰ (280 ppmv) we may easily rule out both of Arrhenius’s modeled sensitivities as respectively way too high and too low. Even 2.5 °C per doubling is too high and the observations are consistent only with a sensitivity in the vicinity of 2.
While you are right that modeled sensitivity neglects some potentially important factors, the manner in which they would be incorporated is important. The IPCC defines transient climate sensitivity to be the 20-year response to the relevant forcings, and equilibrium sensitivity to be that in the limit. Observed sensitivity automatically factors in everything since the experiment is performed on the whole system omitting nothing whatsoever. It even factors in CO2 correlates such as aerosol impact.
Climate sensitivity as the observed response to CO2 is in reality using CO2 as a proxy for everything industry is doing to the atmosphere, measured in units of what we observe happening to CO2 at Mauna Loa.
Vaughan Pratt,
You are right.
There is a great deal more to be involved to have a complete climate model.
If climate science was not so focused on blaming everything on CO2 and looked deeper, they could come up with a decent model that may have the ability to work. They just have to leave space for the unexpected factor, like a volcano or solar flare or some other outside influence.
“If climate science was not so focused on blaming everything on CO2 and looked deeper, they could come up with a decent model that may have the ability to work.”
Is that the actuality, or just the public’s perception?
If the former, what is the evidence for it?
How can anybody take Michaels seriously when he used the unscientific poll from Sci-Am?
Hi
Has anyone seen that pesky Consensus recently?
It was here a couple of years ago, along with its chum ‘The Science is Settled’.
But reading this blog it seems to have wriggled free somehow and is now further away than ever.
Joe Sixpack and Jenny Spritzer might well ask why they should believe any of anybody’s predictions of the future when it’s abundantly clear that you guys can’t even agree about what has happened already.
Even the most useless racing tipster has the undisputed form guide to work with. You haven’t even got that. A plague on you.
DOES ANYONE BELIEVE IN THE ACCELERATED WARMING OF THE IPCC?
Here is the interpretation of the data as accelerated warming by the IPCC:
http://bit.ly/b9eKXz
Here is the interpretation of the same data as an oscillating pattern by a skeptic:
http://bit.ly/cO94in
Which is the plausible interpretation?
We will find out the conclusive answer in about 5 years.
However, the IPCC projections are already wrong as shown below:
http://bit.ly/cIeBz0
GIRMA,
My answer to your question is: there may be some people who believe in accelerated warming, but I am certainly not one of them! Taking your first graph (or using the HadCRUt data if you prefer), the following trends are found:
1850-1880 = 0.17 per dec
1850-1998 = 0.07 pd
1850-2010 = 0.06 pd
This shows that there is patently NO overall acceleration. The overall rate of warming is reducing, not increasing. If the IPCC wants to move the origin away from 1850 to show any intermediate (short-term) periods of acceleration and negative acceleration (cooling), they can – but it won’t mean anything until the overall rate increases above 0.17 deg C per decade.
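(For anyone wanting to check such numbers: the trends quoted above are ordinary least-squares slopes over each sub-period. A sketch on synthetic stand-in data; substitute HadCRUT annual anomalies to reproduce the quoted figures:)

```python
import numpy as np

years = np.arange(1850, 2011)
rng = np.random.default_rng(3)
temps = 0.005 * (years - 1850) + 0.15 * rng.standard_normal(years.size)  # hypothetical

def decadal_trend(start, end):
    mask = (years >= start) & (years <= end)
    return 10 * np.polyfit(years[mask], temps[mask], 1)[0]   # deg C per decade

for start, end in ((1850, 1880), (1850, 1998), (1850, 2010)):
    print(f"{start}-{end}: {decadal_trend(start, end):+.2f} C/decade")
```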
Judith says: “Michaels’ is seeking to establish reasonable doubt to the EPA’s CO2 endangerment finding, which is based on the statement (very similar to the IPCC’s statement):”
Time and again Santer pushed the it’s-incontrovertible-we-don’t-have-any-other-explanation point about the AGW “fingerprint” of the increased atmospheric temperature gradient.
I don’t understand why Michaels didn’t bring up Haigh et al. as a perfect counter-argument.
http://www.nature.com/nature/journal/v467/n7316/full/nature09426.html
I know it’s an early finding, but hey, it’s one very possible alternative that hasn’t even been explored yet.
For those unaware of the paper, an analysis of the new Spectral Irradiance Monitor (SIM) data (monitored since 2003) has shown that the makeup of the TSI isn’t what we thought. The sun’s UV is much less than we thought and the lower wavelengths are greater. As far as I can see, this ought to have the same effect on our atmosphere in producing that temperature gradient.
“Time and again Santer pushed the it’s-incontrovertible-we-don’t-have-any-other-explanation point about the AGW “fingerprint” of the increased atmospheric temperature gradient.”
Yep. That is *the* argument to use in fingering CO2 as the culprit. That short but sweet point suffices to shoot down 50 pages of argument to the contrary. Although it didn’t work on OJ’s first jury it did for the later ones. OJ’s promise to search high and low for the murderer was pathetic, just as is the climate denial machine’s promise that they’ll find the culprit behind the warming and it won’t be CO2.
” the climate denial machine’s promise that they’ll find the culprit behind the warming and it won’t be CO2.”
It’s not a case of shooting down CO2 as the culprit, it’s a case of finding all the relevant factors and apportioning each correctly.
“It’s not a case of shooting down CO2 as the culprit, it’s a case of finding all the relevant factors and apportioning each correctly.”
Arguing with deniers is like arguing with people who claim that 2+2 is something other than 4. No sooner have you dealt with the guy claiming 2+2 = 5 than the next one comes along and says “It’s not a case of disproving 2+2 = 5, because we’re claiming 2+2 = 3.”
In 1990, the amount of global warming attributed to chlorofluorocarbons was 24 percent. (Maddox, Nature, 1990, vol 247, p707).
@ Lalonde
I have simplified the previously referred-to graph, so it gives a clearer idea. By using JPL data it is cast forward to 2045.
http://www.vukcevic.talktalk.net/EW.htm
I would like to make it clear at the outset that the correlation does not appear to be ‘effective’, since the signal so obtained does not possess the required energy. Of course I could be wrong, but for the time being it is just an interesting, let’s call it, coincidence.
“I would like to make it clear at the outset that the correlation does not appear to be ‘effective’, since the signal so obtained does not possess the required energy. Of course I could be wrong, but for the time being it is just an interesting, let’s call it, coincidence.”
You’re too modest. As you say, the energy budget for the impact will need to be worked out taking into account all relevant feedbacks, but these ephemerides clear up for me some puzzles about the residual fluctuations in global temperature after removing CO2. The CO2 signal punches through these other signals clearly enough, but their persistence had been puzzling me until now.
The correlation you point to is news to me but that means nothing since this is not my area. Who first noticed it and when? How come it hasn’t gotten more attention?
Vukcevic,
Thank you!
I doubt very much your calculations are incorrect.
My research is showing that all the vectors are in for a massive worldwide freeze, due to more sunlight being reflected back than would normally be absorbed in the oceans, from the surface salt changes. Next, with the winds slowing planet-wide, this interferes with the break-up of cloud cover into smaller clouds and their dispersal. This process started 36 years ago. It is directly related to pressure build-up in the atmosphere. More molecules generate more friction against wind shear.
Judy, as you state, “According to Michael’s analysis, black carbon and stratospheric H2O account for 34% of the 0.468C trend.” It turns out that his analysis misinterprets or misrepresents the data, which in fact suggest a contribution of less than 25%, and consequently a significantly greater GHG contribution. I made this point near the end of Part 1, but since the discussion continues here, I’ll plagiarize myself to repeat the information:
“In my various comments above, I indicated that Michaels misinterpreted the contributions of GHGs to the post-1950 warming, and that they do in fact appear to contribute more than half. This is the case even if we accept his claim that Solomon et al demonstrated a warming contribution from changes in stratospheric water vapor, which if true, would reduce the GHG contribution somewhat. However, it turns out that Solomon’s work suggests a slight net cooling rather than warming from changes in stratospheric water vapor, which would make the GHG contribution larger. This can be seen in Fig. 3c in Stratospheric Water Vapor
For those behind a paywall, the article estimates that increases between 1980 and 2000 were slightly outweighed by reductions since 2000.”
I think that Michaels performed poorly when the direct vs. indirect sulphates came up, and I think it was pretty clear he hadn’t prepared for the argument sufficiently. Santer definitely came out looking professional in that exchange.
@ Vaughan Pratt
“The correlation you point to is news to me but that means nothing since this is not my area. Who first noticed it and when? How come it hasn’t gotten more attention?”
It was new to me too, when it popped out on the Excel graph, 5-6 days ago
http://www.vukcevic.talktalk.net/EW.htm
I often amuse myself looking at odd things, usually resulting in a bit of ‘ding-dong’ with Dr. Svalgaard, which it appears both of us enjoy.
Back to the subject: it is my original work; I am not aware of anything similar available. It is a straightforward data plot, just normalised to the actual CETs; more meaningful than an anomaly. I feel a bit like a poacher ensnared in his own net, since I am ‘normally’ a sceptic, and here I am proving to myself that I could be wrong. Unforgivable.
Sorry for the ramble. I put up a post, but it got lost somewhere in this rather chaotic set-up (Dr. Curry, WUWT’s system is far more user-friendly).
Here is a link to my comment:
http://judithcurry.com/2010/11/19/michaels-controversial-testimony-part-ii/#comment-13391
I have greatly enjoyed the crossing of swords over how best to do the maths, but kept saying to myself ‘what are all these numbers based on?’ And they are based on some awfully sloppy and unreliable measurements, both on land and at sea.
I worked at the Institute for Social Research in Ann Arbor in the mid 1960s, and had drilled into me the extraordinary importance of getting the basic data right. The statistics of sampling (which came with its own error bars, if and only if you had done it properly), the design and order of questions, the use of the right words, the style of the interviewer, the time of the interview — all of these variables could affect the little datum which was the answer, which was then given a number; the numbers were summed, averaged, played with in various ways, and were finally turned into words in English in a text that purported to say something about human behaviour. Get the basics wrong, my elders and betters said, more than once, and the rest is rubbish. I learned how to program what was then a big computer, and learned another great phrase, which was the computing version of the determination to be accurate: ‘garbage in, garbage out’. It was a salutary experience, and affected much of what I later did in that field.
And that memory has kept returning throughout this thread. Guys, no matter how clever the maths, if the original data are sloppy, and full of error, it doesn’t matter how ingenious your argument and how sophisticated the maths. You really can’t learn much this way. I am prepared to believe/accept that the planet warmed a little over the last century, but the notion that we can, from the temperature data, separate out natural variability from anthropogenic warming on a decadal basis, let alone anything finer, strikes me as being a huge leap of faith.
And I remember the late and much lamented Donald Stokes defining ‘factor analysis’ for me: ‘That form of mathematical reasoning where the analyst seizes the data by the throat and shrieks “Speak to me!”’
There seems to me a lot of throat-seizing in this whole game.
Raw data may be important to allow people to draw conclusions in a ‘hard science’ like sociology :-). But in Climatology it fulfils a different function.
Data exists only to confirm the theoretical predictions. No care at all need be taken about its collection, since ‘data post-processing’ handles it all.
In post-processing, data is sorted into three types.
‘Good’ data is that which fully supports the latest theories. This data is preserved, and may be recycled many times. Tree rings from very very special trees are examples of this.
‘Poor’ data is data which just needs a little more work for it to agree with theory. It is given a second chance and is changed (climatologists prefer the less emotive ‘adjusted’) until it shows reasonable agreement with theory. Then it is allowed to mix once more with ‘good’ data. And since records of the adjustments are rarely kept, it is not possible to tell the difference.
A terrible fate awaits ‘bad’ data. It is eliminated, never to see the light of day. The perpetrator may go missing (especially in China), it may be ‘lost in an office move’, it may suddenly become ‘confidential’, Dr. Jones’s dog may eat it, or, in extremis, Harry the programmer may try to torture it to death and then give up. Whatever its exact fate, it plays no further part in the story – unless Steve McIntyre manages to capture it before final extinction and bring it back to life. But that is rare.
So, data quality in Climatology is very easy to understand. Just remember that in all other fields, if the data doesn’t agree with theory, the theory is wrong. In Climatology it’s just the other way around.
Simples!
It occurred to me that instead of responding ad hoc to comments on this blog that caught my attention, it might be more efficient to paddle upstream to the ultimate source of these comments, namely Patrick J. Michaels’ testimony itself. (Sometimes my brain moves with glacial speed and on those occasions I imagine a little global warming might get it flowing a bit faster.)
So with the same ad hoc strategy as for my earlier comments, I read PJM’s testimony until the following bit of statistical calculation caught my eye.
PJM: “I examined 13 consecutive months of Nature and Science to test the hypothesis of unbias. Over a hundred articles were examined. Of those that demonstrably had a ‘worse than’ or ‘not as bad as’ component, over 80 were in the ‘worse’ category and 11 were ‘not as bad’.
The possibility that this did not reflect bias can be determined with a binomial probability. It is similar to the likelihood that a coin could be tossed 93 times with only 11 ‘heads’ or ‘tails’. That probability is less than 1 in 100,000,000,000,000,000.”
Now this is the sort of thing I’m more capable of verifying than for example the time it takes heat to flow from CO2 to N2 and back in order to be reradiated. So I thought, OK, that’s interesting but what does it actually mean? What are the odds of getting 40 heads with 93 tosses, for example? Would Michaels have thought to cry foul if he’d seen 40 papers out of 93 preferring one side over the other?
To answer this I wrote a little program to compute the odds of getting n heads in 93 tosses. Before applying it to n = 40 it occurred to me to verify it was working by testing it on n = 11, PJM’s case.
Now I naturally had assumed PJM had run this little calculation by enough statisticians and probabilists to be absolutely sure of getting it right because he was going to be doing this calculation before a Congressional subcommittee in sworn testimony. No one wants to be caught lying to Congress in sworn testimony because the penalties are not pleasant to contemplate: years in jail and never able to vote in a US election again.
So I computed (93 choose 11) divided by 2^93, which is the chance of getting 11 heads with 93 tosses. Twice that is the chance of getting 11 heads or 11 tails with 93 tosses.
I got .00000000000000620323.
That’s a chance of roughly one in 161,000,000,000,000. PJM claimed it was a chance of less than one in 100,000,000,000,000,000.
I don’t know about you but I’d say PJM is a trifle pessimistic about those odds, by a factor of more than a thousand!
Fortunately I’m not delivering sworn testimony to Congress. So if I’m wrong I just look a little silly.
But if I’m right then PJM is going to look more than just silly. He’s the one that swore to Congress that his testimony was the truth.
But while PJM is cooling his heels in some white collar prison, there still remains the unanswered question of the odds of getting 40 heads in 93 tosses, which would surely be considered unbiased by everyone.
I got .03358, or about 1 in 30. If a proposition has a probability of only .05 of being false, then the proposition is considered by statisticians to be likely to be true.
Had PJM looked at 93 papers, with 40 of them taking one side and 53 taking the other, then by the same reasoning he used for the case of 11 papers, the proposition that this outcome demonstrates bias is, from a statistical standpoint, true.
(To be absolutely fair I should also have looked at 53 taking one side and 40 the other, in order to exactly match what PJM wrote, but hopefully I’ve made my point already.)
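(The calculation itself is a one-liner. A sketch of the check described above, computing the exact-k binomial probability for both the 11-of-93 case and the hypothetical 40-of-93 case; a tail-probability version would sum over k and its mirror image, changing the answers only modestly:)

```python
from math import comb

def p_exact(n, k):
    """Probability of exactly k heads in n fair coin tosses."""
    return comb(n, k) / 2 ** n

for k in (11, 40):
    p = p_exact(93, k)
    print(f"exactly {k} of 93: {p:.5g}   ({2 * p:.5g} for {k} heads or {k} tails)")
```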
If PJM can’t even get elementary statistics right, what possible chance does he have of getting correct the far more complicated mathematics forming the basis of his testimony’s four objectives?
But then it occurred to me that maybe mathematical accuracy is not relevant to the question of whether global warming is a serious problem. As long as the basic reasoning is logically sound, individual mathematical errors should not harm the overall line of reasoning.
To assess the logical accuracy of Michaels’ testimony I considered more qualitative statements. One that caught my eye was his point that black carbon was a major contributor to global warming. From this he inferred that CO2 was being unjustly blamed for global warming, and that the black carbon being produced along with the CO2 was contributing a quarter or more of the warming.
Wonderful I thought, CO2 is not as harmful as we thought.
But then it occurred to me to ask, what are we really concerned about, CO2 or global warming? If black carbon is that big a contributor to global warming then perhaps we should be going after the people producing black carbon instead of those producing CO2.
This turned out to be the usual suspects. Those producing black carbon were those producing CO2. Maybe not one for one but basically the same lot, by burning many gigatonnes of carbon-based fuel every year.
This is like arguing that OJ didn’t do it because he was alleged to have used a knife when in fact he used a fork.
PJM’s logic doesn’t seem any better than his math. If anything it’s worse.
Well, maybe the important point is not to be mathematical, or logical, or any of those left-brained things, but just to be consistent. So I looked into that too.
Now I’d noticed that a number of people on this blog had pointed out that over the years PJM’s estimate of the rate per decade of temperature increase has been steadily rising. Yet in his testimony, and in his posts to this blog, he has been insisting it’s a steady 0.16-0.17 °C per decade over a period of decades.
One can only assume from this that every estimate he issues erases from his memory all recollection of his previous estimates.
I would say PJM fails not only mathematics and logic but also consistency.
At this point the hypothesis that PJM’s testimony is nothing but a pack of lies comes to mind. That would indeed be a harsh hypothesis, and therefore deserves very close examination.
I have examined PJM’s testimony sufficiently closely to convince myself that this is so far from the truth as to verge on libel. I found quite enough true statements in his testimony to refute this, including the following.
1. I am a Senior Fellow in Environmental Studies at the Cato Institute and Distinguished Senior Fellow in the School of Public Policy at George Mason University.
2. My testimony has four objectives.
3. For decades, scientists have attempted to model the behavior of our atmosphere as carbon dioxide and other greenhouse gases are added above the base levels established before human prehistory.
(Taking into consideration the work in 1767 of noted physicist and alpinist Horace de Saussure, the inventor of the solar oven among other accomplishments, this would even be true with the substitution of “more than two centuries” for “decades,” although certainly interest has mounted in recent decades.)
4. Analyses of climate models can be highly dependent upon the time period chosen.
(Word. I’ve noticed this myself.)
5. The reluctance of the Senate to mandate significant reductions in carbon dioxide emissions has resulted in EPA taking the lead in this activity.
Very true. Very sad.
6. Late in 2007, Ross McKitrick and I published an analysis of “non-climatic” trends in surface temperature data.
Only the completely unqualified could deny this.
So we seem to have a complex mixture of true and false statements in Michaels’ testimony. This greatly complicates the task of deciding whether, on the whole, it is true or false.
I would like to propose the following. If anyone can produce a statement in Michaels’ testimony that supports the thesis that global warming is not something to worry about, and that cannot be easily refuted, then we should regard Michaels’ testimony as making a positive contribution to the promised rational discussion of climate change.
Unlike locomotive engineer Peter Laux, who offered AUS$10,000 for a “conclusive argument” for AGW, I won’t offer even a penny to the first person who can find such a statement in Michaels’ testimony. I am prepared, however, to admit that they have a keener eye than I for the truth. This is entirely possible given that I’ve been noticing little flashes lately in my right eye that a couple of my friends have suggested to be symptomatic of a detached retina. Hopefully I’ll find out more about this after checking with my ophthalmologist. If it turns out to be serious I may well have overlooked an important point in Michaels’ testimony.
Oops, replace “a factor of more than a thousand” by “a factor of nearly a thousand.” I’m too old to get these things right the first time.
Vaughan Pratt:
It seems that bias in data processing is of no importance to you.
You quote Michaels as saying:
“The possibility that this did not reflect bias can be determined with a binomial probability. It is similar to the likelihood that a coin could be tossed 93 times with only 11 ‘heads’ or ‘tails’. That probability is less than 1 in 100,000,000,000,000,000.”
And you claim Michaels is wrong because you calculate the probability and say:
“That’s a chance of roughly one in 161,000,000,000,000. PJM claimed it was a chance of less than one in 100,000,000,000,000,000.”
So, Michaels got it wrong – you say – by “a factor of nearly a thousand.”
Now, I know that AGW-proponents like to exaggerate trivia, but don’t you think your demonstration of the practice is a bit extreme?
And can we agree that Michaels would have been right if he had said the following?
“The possibility that this did not reflect bias can be determined with a binomial probability. It is similar to the likelihood that a coin could be tossed 93 times with only 11 ‘heads’ or ‘tails’. It is nearing impossible, in that it has a probability of less than 1 in 161,000,000,000,000.”
Richard
First, I was not debating whether his conclusion about bias was true or false, I was pointing out a flaw in his proof of that conclusion.
Are you arguing that as long as a conclusion is true, any proof of it however flawed is as good as any other? In this case his proof was flawed by arithmetic that was off by three orders of magnitude!
Second, you appear not to have noticed my follow-up point that his whole methodology for computing bias is flawed, because it demonstrates bias even in situations that one would not ordinarily judge as seriously biased.
Let’s say for the sake of argument that Michaels had found 53 papers siding with him and 40 against. Should he complain about bias? According to his reasoning for the 11-to-82 case, and assuming the customary 95% confidence threshold for statistical significance, it would be irrational of him not to complain about bias since the same methodology shows less than a 4% chance of no bias.
Third, you accuse me of not caring about bias in the literature. I certainly do care, I expect the literature to be strongly biased towards the truth or it becomes no more useful than determining the truth by tossing a coin. What is a reader to make of a subject 50% of whose papers argue for a proposition and the other 50% against it? Science is supposed to answer questions, but if it insists on balancing both sides of every question, no question will ever get answered.
When as a layman outside a subject (which I am in the case of climatology) I see a bias of 83 for a proposition and 11 against, my reaction is that I’m glad to see some scientific progress being made on that question. I will also not be surprised to see those arguing against the proposition complaining about the bias. And so they should, as the literature is evidently currently biased against their side of the proposition.
If they argue mathematically, logically, and consistently for their side of that proposition then I wish them luck in overcoming the weight of opinion so evidently stacked against them.
But if they miss one of those three then I stop paying attention because I can’t take seriously any reasoning that is unmathematical, illogical, or inconsistent.
In Michaels’ case it’s all three, for the reasons I gave earlier. To which we can now add irrational, unless he is willing to declare that 53 papers on his side and 40 against would demonstrate bias in the literature.
> Now, I know that AGW-proponents like to exaggerate trivia […]
Here is an opposite rationale:
> I was once involved in trying to detect a business fraud many years ago. A friend told me that to look for evidences of dishonesty in little things, as someone who is dishonest in big things will also be dishonest in little things.
Source: http://climateaudit.org/2005/10/29/is-gavin-schmidt-honest/
US dimes are fairly evenly weighted, but the other coins are weighted slightly towards the heads side, so flipping heads or tails involves the initial starting point and consistent angular velocity. It has nothing to do with probability.
In my misspent youth I made a tidy sum off people who believed a coin toss involved probability. It involves initial starting position and angular momentum. I never managed 93 heads or tails in a row, as the ‘mark’ invariably had insufficient funds to ‘double or nothing’ past 8 or 9 tosses.
Vaughan Pratt,
Thanks for the catch of the typo in Pat’s testimony. Clearly he got a bit overzealous with the zero key. He should have written, “a chance of about one in 100,000,000,000,000.”
The paper that supports that portion of Pat’s testimony is included in his reference list as:
Michaels, P.J., 2008. Evidence for “publication bias” concerning global warming in Science and Nature. Energy & Environment, 19, 287-301
In that paper, he has the right probability (for a slightly different set of numbers: 10 (or fewer) out of 94 is a probability of less than 5.2 × 10⁻¹⁶).
I’ll be sure to point out to Pat the typo in his testimony. I think the point he was making is unaffected.
Thanks again,
-Chip
Perhaps, but Pat Michaels’s point is not unaffected by Vaughan Pratt’s criticism there:
http://judithcurry.com/2010/11/19/michaels-controversial-testimony-part-ii/#comment-13748
Here is the relevant part:
> [H]is whole methodology for computing bias is flawed [.]
Perhaps this is also caused by a typo?
Please make sure to point out to Pat Michaels that criticism too. He seems so overburdened by his new book that he did not have the time to come back here this weekend, as he promised.
Or was that a typo too?
Willard,
I would invite you (and Vaughan and anyone else interested) to read Pat’s E&E paper that I listed above to get more of the details of his analysis.
-Chip
Perhaps Dr. Michaels’ analysis could be extended to bias in library subscriptions. I’m having trouble finding any nearby library that subscribes to Energy & Environment. As far as I can tell my own institution (Stanford) does not.
Most educational institutions are biased in favor of the truth as they see it. Even institutions that teach Intelligent Design in place of evolution are biased in that way. How many institutions can you name that are “fair and balanced” in the sense of giving both sides of every proposition equal time?
The proposition that science must be unbiased is absurd. Objectivity in both science and journalism is being undermined today by misguided efforts to replace it with lack of bias. If a reporter on the scene at a fire is obliged to spend equal time arguing that the fire is still blazing furiously and is under control, objective reporting goes out the window.
Ted Koppel and Fred Jarvis argued exactly this point on the radio a couple of weeks ago, with Koppel decrying the lack of objectivity in journalism today and Jarvis arguing that this point of view was now old hat because there was no money in it. Koppel argued that the audience for the news needs to know where their information is coming from so they can get it from sources they trust. Jarvis strongly disagreed, on the ground that the media greatly increased the quantity of news by getting it from all sources regardless of who they were. Anonymous sources were just as legitimate as town mayors.
I did not feel Jarvis made a strong case for quantity over quality. Perhaps Dr. Courtney would disagree, given his lack of concern for details. Le bon Dieu est dans le détail (“the good Lord is in the detail,” Flaubert). Details like honorifics are of no concern when much larger issues loom.
Quite apart from any possible forensic value in identifying it (McIntyre’s point raised by Willard), neglect of detail is a fast track to fallacy.
I would like to propose the following. If anyone can produce a statement in Michaels’ testimony that supports his case, and that cannot be easily refuted, then we should regard Michaels’ testimony as making a positive contribution to the promised rational discussion of climate change.
An interesting challenge, which I have no intention of pursuing. However, it is a thought-provoking one: “Is global warming something to worry about?” Of course it is something to worry about, but it is a question of degree.
– As I previously suggested, if on the one hand CO2 indeed raised global temperatures, on the other it reduced the consumption of fossil fuels during winters (1960-2010 and into the future), a reduction which would not have happened without the additional warming. In that way humanity benefited by extending the lifetime of the world’s fuel reserves.
– This graph
http://www.vukcevic.talktalk.net/EW.htm
is not a model; it is a natural process giving the best approximation of the actual (CET) temperatures, not a reconstruction, that I have come across (but I stand to be corrected). Not only does it track the temperature back to 1660 pretty faithfully, it also points to a certain excess since the 1950s, which may or may not be CO2.
What are we to make of its forward calculation?
It unmistakably suggests that a certain amount of cooling is in the offing.
The question is: could the previous correlation hold for the future?
Difficult to say, but there is a small chance that could be the case.
Conclusion:
– for those who think we should not worry, here is a welcome ‘proof’.
– for those who think we should worry, it may help to lower their degree of anxiety.
– as for myself… hmmm, that would be taking sides; I shall wait for 2025 before I make up my mind:
http://www.vukcevic.talktalk.net/LFC2.htm
Risk management balances probability against severity. Airlines, for example, consider a plane crash so severe an outcome that even a 0.001% chance of one is an unacceptably high probability.
This has not always been the case. British South American Airways was piloted by ex-WW2 pilots who were accustomed to planes not returning from wartime missions and retained that attitude in peace time, greatly driving up the probability of crashes for that airline relative to all other significant airlines of the time. The famous STENDEC (“Star Dust”) crash was just one of several BSAA crashes back then.
In the case of global warming, arguing that we may get safely through this crisis is like a BSAA pilot arguing that the plane might safely make it to the next airport. It may even be a very high probability, but this cannot be the sole basis for comfort when the severity of an accident is very high.
So what’s the worst that could happen? Well, that’s actually pretty scary to contemplate, sufficiently so that the media prefers not to dwell on it for fear of being accused of scaremongering. (As a layman in retirement myself I have no vested interest in or any possible benefit from scaring anyone, nor any desire to do so without extremely good cause.)
One particularly bad outcome that is now starting to look more likely than even just two years ago is the prospect of the currently ongoing melting of the Northern Hemisphere permafrost releasing the several hundred gigatonnes of methane clathrates currently sequestered beneath it. The West Siberian Bog alone holds some 70 gigatonnes of methane. If that happens the current level of global warming will be like the warmth from the Moon in comparison to what that much methane can do when released over two decades. For that time horizon, methane’s global warming potential is 72 times that of CO2 (over a hundred years it is only 25 times, since methane slowly degrades to CO2, which does not degrade further). Furthermore we are adding “only” 30 gigatonnes of CO2 a year to the atmosphere (60% of which is continually being removed by nature), which is a lot less than the amount of methane that could be released. Moreover its release would constitute a tremendously strong positive feedback that would quickly melt the remaining permafrost, making the current estimates of a century or more to melt it a fatal miscalculation. The term “tipping point” that we hear less often in the context of global warming than we probably should is particularly apropos for this scenario.
So even if you peg the probability of that event as being at a relatively harmless 1% (which personally I would view as extremely optimistic), risk managers looking at even a 1% chance of such an event tend to try to get management to pay attention.
Managers who ignore their risk managers, for whatever reason however good it might sound, put their organization at unnecessary risk.
No need to worry.
The Chinese have gone from burning less than a billion tonnes of coal a year to more than 3 billion tonnes/year in less than a decade.
I’m pretty sure if the Russians announced ‘free methane’ the Chinese would be able to burn 70 billion tonnes of methane in 10 years without even trying.
I’m pretty sure if the Russians announced ‘free methane’ the Chinese would be able to burn 70 billion tonnes of methane in 10 years without even trying.
That’s a sensible suggestion whether or not you intended it as one. ;)
Given the huge global warming potential of methane (GWP of 72x CO2 for a 20-year horizon), burning it before it reaches the atmosphere would “save the world,” at least relatively speaking. Currently this might be done by igniting it everywhere it’s leaking. Without doing the math I would guess that burning it to form CO2 would add considerably less to global temperature than releasing it to the atmosphere as methane, since the heating effect of combustion should be well below that of greenhouse-gas heating.
And that’s assuming the heat of combustion is simply dissipated unused. If furthermore energy could be extracted from that combustion this would offload conventionally fueled power stations.
In either case, not a great solution (total destruction of the tundra, which ecologists would hate) but better than simply letting the methane escape into the atmosphere.
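A back-of-envelope sketch of that comparison, using only the 70 gigatonnes and the 20-year GWP of 72 quoted above; the molar masses of CH4 (16 g/mol) and CO2 (44 g/mol) are the only added inputs, and the heat of combustion is ignored as suggested:

```python
# CO2-equivalent burden of 70 Gt of methane, vented versus burned.
CH4_GT = 70.0     # quoted inventory of the West Siberian Bog
GWP_20YR = 72.0   # quoted 20-year global warming potential of CH4

vented = CH4_GT * GWP_20YR       # released as methane: ~5040 Gt CO2-e
burned = CH4_GT * 44.0 / 16.0    # burned to CO2 first:  ~193 Gt CO2

print(vented, burned, vented / burned)  # burning cuts the 20-yr impact ~26x
```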
The Siberian bogs are a negative feedback; the ecosystem is well adapted to this, ie it requires CH4 for a living.
The Siberian bogs are a negative feedback; the ecosystem is well adapted to this, ie it requires CH4 for a living.
So are you saying that the released methane, instead of floating into the atmosphere, will recognize that the ecosystem needs it and do the right thing by sticking around? I’d be very interested in reading about this, do you have a source?
Firstly, you provide a fine example of the Elephant Paradox (eg John Godfrey Saxe), where the elephant in the room is indeed the elephant.
Methanotrophs are symbiotic fixers in the Siberian bogs, ie they fix nitrogen. As the description tells us, they are consumers of methane, ie they do this for a living, and they become more efficient in a warmer world, eg spring and summer providing increased photosynthesis for the complex as a whole, eg mosses.
Dr. C,
Perhaps the most important point of Dr. Michaels’ presentation is that, if the UEA’s Dr. Jones “can’t think what else [besides CO2] it might be,” other researchers certainly can. Just the papers (peer-reviewed, published) I’ve read over the last couple of years (including those cited by Dr. M) account for something like 200% of the observed warming. Clearly they can’t all be right, but equally clearly the CO2 hypothesis needs to take its place in line and stand or fall on the basis of observation and measurement. And so far, on that basis, it does not seem to be holding up too well.
While I can’t imagine my response to Craig Goodrich not eventually being posted to this blog once moderation has deemed it acceptable, let me nevertheless post the following pointer to its content just in case.
Vaughan, any post with more than 6 links lands in moderation (spam protection).
Ok, thanks, Judith, I hadn’t noticed that feature before. Grateful for your maintenance of a spam-free site.
My apologies, I appear to have mistyped the URL in the immediately preceding post; it should have been this.
Just the papers (peer-reviewed, published) I’ve read over the last couple of years (including those cited by Dr. M) account for something like 200% of the observed warming. Clearly they can’t all be right, but equally clearly the CO2 hypothesis needs to take its place in line and stand or fall on the basis of observation and measurement. And so far, on that basis, it does not seem to be holding up too well.
Much of the last two decades of research into automatic speech recognition and understanding (the area my last three Ph.D. students worked on at Stanford) has been based on this line of reasoning. Each new technique demonstrates how it increases the proportion of correctly recognized sentences from 80% to 81%. We should now be at 200%, but instead it turns out that the techniques are either duplicative or incompatible, with the result being that perhaps 83% or at the very best 85% is achieved after combining hundreds of them.
Trying to combine hundreds of analyses of the alternatives to CO2 in order to understand the respective roles of all the contributors is entirely the wrong way of going about things. Instead one should subtract off the expected impact of the accurately known level of CO2 and study what remains in order to understand what all these hundreds of other impacts on temperature might be. If you don’t do it that way, the high level of CO2 masks those impacts, leaving you scratching your head over which of CO2 and these many other impacts is driving global temperature.
As I’ve said several times earlier here, when you subtract off the observed impact of CO2 on temperature you are left with considerable fluctuations. The difference between these fluctuations and the CO2-induced rise is that the former don’t rise, instead they fluctuate the way you’d expect nature to fluctuate if we got rid of all the humans. A bump up here, a bump down there, some are small, some are big, but none of them suggest any grand plan of nature to greatly increase the temperature the way the horrendous quantity of CO2 has been doing lately. And CO2 will be twice as horrendous in a few more decades, unlike anything nature has in store for us. If you disagree please tell us how nature plans to nail us to the wall even better than we can do ourselves.
Only humans are greatly increasing the temperature today, by adding to the base level of .28‰ (280 ppmv) an exponentially growing amount of CO2.
We have known since Horace de Saussure in 1767 that something in the atmosphere was blocking thermal radiation. De Saussure invented the solar oven in order to demonstrate the principle. During the first two decades of the 19th century Joseph Fourier incorporated de Saussure’s ideas into his theory of heat in a series of papers culminating in a book on that subject. In the 1860s physicist John Tyndall identified water vapor and CO2 as de Saussure’s “somethings,” as well as measuring a great many other gases to assess what we would now call their global warming potential. He observed that perfume absorbed particularly strongly, so we should be grateful today that combustion releases CO2 and not Chanel No 5!
In 1896 chemist Svante Arrhenius observed that industry would increase only CO2 and not water vapor, whence global warming, if and when it happened, would be driven by CO2, not water vapor. He proposed a logarithmic dependency of temperature on CO2, giving a rise per doubling of 5 °C at the equator and 6 °C at the poles. Arrhenius believed global warming was good for the planet, as did Guy Callendar, who further developed Arrhenius’s theory before and after WW2, along with Gilbert Plass yet later. Plass was among the first to worry that global warming might not be as good for the planet as Arrhenius and Callendar had presumed.
Until 1991 the theory of global warming caused by man had evolved as a scientific theory more or less calmly, just like almost all other scientific theories such as gravity, light, electricity, the periodic table, Mendelian genetics, relativity, quantum mechanics, and so on.
In 1988 the anthropogenic global warming theory, AGW, encountered the first signs of an obstacle analogous to the one met by Darwin’s theory of evolution, with the ideology of laissez-faire markets standing in for the dogma of divine creation of the universe, when the George C. Marshall Institute’s William Nierenberg published a paper attributing global warming to increasing heat from the Sun.
At that time the Institute, which had been created in 1984 to defend Ronald Reagan’s Strategic Defense Initiative against the protests of the Union of Concerned Scientists, was still preoccupied with that battle and Nierenberg’s paper was just a personal hobby to relieve him of either the stress or boredom of defending SDI day after day.
But then the Soviet Union collapsed. With SDI now irrelevant and the Institute finding itself facing unemployment, some genius in that organization spotted the value to the energy industry of Nierenberg’s hobby and the rest, as they say, is history. Expanding Nierenberg’s simplistic theory into a wholesale attack on AGW, the Institute homed in on the recently formed Intergovernmental Panel on Climate Change, affectionately known these days as the IPCC, and in 1996 wrote a widely circulated letter to Congress, the US Senate, etc. protesting alterations to Chapter 8 of the 1995 IPCC report, claiming that the alterations were made to “deceive policy makers and the public into believing that the scientific evidence shows human activities are causing global warming.”
Now the alterations the Institute was referring to were nothing but routine responses to recommended changes by reviewers of the report. The accusation of deception would therefore have had to apply to every reviewer. Since reviewers routinely find things to complain about, the Institute’s theory that these reviewer complaints collectively constituted a coordinated attempt to deceive the public was a bit hard to understand, and after Congress had spent some time reviewing these charges by the Institute it simply dismissed them out of hand.
Undeterred, the Institute continues to the present day to insist that everything it has ever said was the gospel truth, backed up by the Dip Phil (Cam) listed at the bottom of page 17 of the bio of the Institute’s Richard S. Courtney, which if anything would count as progress towards a degree in divinity somewhere in Cambridge (though not the University of Cambridge, which has stated emphatically that it recognizes no such diploma). The Diploma (Bath) listed there is presumably a high school diploma, since tertiary diplomas are customarily in some subject.
All this might sound like some sort of attack on those reassuring the public that there’s nothing to worry about. One should bear in mind however that when an institution originally formed for political ends such as SDI attacks the institution of science, science might just decide that it is in the interest of preserving our understanding of how the world works to turn around and defend itself rather than allow the public to be sucked into conspiracy theories of how science is picking the public’s pocket.
An astonishingly high proportion of the public have turned out to be susceptible to these theories. My astonishment will be mitigated in due course only by those social scientists who come along later to explain how the Marshall Institute and its collaborators were able to pull the wool over the public’s collective eyes.
You can fool some of the people all of the time, and Glenn Beck likes to tell his staffers that those are his audience. If you like Glenn Beck you most certainly are not my audience.
It appears to have escaped your notice that the IPCC is a political body, with political objectives and political funding. As such it is virtually impossible for it to ever question CAGW.
Which of CAGW, AGW, CGW, GW, and W do you accept? Without knowing that we are at risk of commenting at cross purposes. (For those who don’t follow the denier literature C is for Catastrophic, not a reference to Citizens Against Government Waste.)
Since each successive IPCC report differs from its predecessors by incorporating more recent understandings of climate science, in order to support your claim that the IPCC is too stubborn to question climate change you would need to attribute sinister motives to the governance of the IPCC by providing evidence that these updates serve merely to create the illusion of a panel responsive to the latest science.
(For those who don’t follow the denier literature C is for Catastrophic, not a reference to Citizens Against Government Waste.)
Despite your soft attempt at derision, bear in mind that the White House has decided to frame the climate discussion as Global Climate Disruption, which makes the denier accusation of decided chicken-littleness not only accurate, but also somewhat prescient.
Sorry, no derision was intended. I’d never heard of CAGW before, and when I looked it up with Google every reference I could find was to the other acronym. Same on Wikipedia. Only when I typed CAGW in conjunction with climate did Google clue me in. As far as I could tell the acronym seems largely confined to the denier blogosphere.
The comparison with Chicken Little assumes global warming presents no threat. The arguments for this do not stand up under close examination.
“As far as I could tell the acronym seems largely confined to the denier blogosphere”.
Yes, the Climategate-denying alarmist establishment isn’t honest enough to use it.
Thanks for the link to Courtney. I also found this:
‘The Continuing Misadventures of Robert S. Courtney (non scientist)’.
One of the bloggers there said “He also delights in threatening people with suits for libel”
Vaughan,
I am undoubtedly one of your ‘audience’, as I like to learn and I also like to hear both sides of a debate. So please indulge a few questions about your recent posts:
Vaughan Pratt: “Only humans are greatly increasing the temperature today, by adding to the base level of .28‰ (280 ppmv) an exponentially growing amount of CO2.”
Can you define ‘greatly increasing the temperature’ please? As far as I can make out, the total increase is appx 0.8 deg C in 160 years, and no increase since 1998. Even if you insist that ALL the increase is due to mankind’s emissions, this does not seem a ‘great’ increase to me. Do you not agree? Surely this confirms that the veracity of the (C)AGW theory rests on observation as, I suppose, does any scientific theory?
Arrhenius proposed a climate sensitivity of 5-6 deg C. I believe the current IPCC estimate is appx 3 deg C. This would suggest the original theory is partially or totally incorrect. Personally I would go with partially but it is the size of the ‘part’ that is really the cause of most debate. You have told me before that you use a basic climate sensitivity of 2 deg C. Do you have a reference for this?
Vaughan Pratt: “As far as I could tell the acronym [CAGW] seems largely confined to the denier blogosphere.”
I agree that the acronym was probably coined by those are not pro-(C)AGW. If you wish to use the term ‘denier’, that is your prerogative of course. However, there can be no doubt that the IPCC has associated mankind’s emissions with potentially ‘catastrophic’ changes in the climate. So the idea, if not the acronym, has at least some basis outside the blogosphere. If you want a reference, see IPCC AR4 Section 2-2-4.
Whilst I am on, and talking to a theoretical physicist, could you explain why Arrhenius felt that greenhouse gases, which total 0.04% of the (dry) atmosphere, could possibly have a significant, potentially catastrophic, effect on global temperature? I would have thought that, as 99.96% of the (dry) atmosphere cannot be warmed by radiation (as radiation is not heat), the effect of a doubling of eg CO2 would be quite limited. Surely most of the heat will be transferred by conduction, not radiation? Water vapour would have a far greater effect and would dwarf any ghg contribution, would it not?
Regards, Arfur
(Incidentally I should clarify that although my training was in theoretical physics my career has been as a theoretical computer scientist applying logic to computation at MIT and Stanford. I have no training in climate science and the most I can bring to it is a blend of physics and logic, for what that’s worth.)
Arrhenius proposed a climate sensitivity of 5-6 deg C. I believe the current IPCC estimate is appx 3 deg C. This would suggest the original theory is partially or totally incorrect. Personally I would go with partially but it is the size of the ‘part’ that is really the cause of most debate. You have told me before that you use a basic climate sensitivity of 2 deg C. Do you have a reference for this?
I would say the “original theory” as you call it was a naive approximation to the truth based on a very early understanding. Newton’s theories of gravitation and of light were similarly naive and inaccurate by modern standards. So was the old quantum theory as understood by Einstein and Bohr before Heisenberg and Schroedinger gave us our modern understanding of quantum mechanics in its equivalent formulations as matrix (particle) mechanics and wave mechanics. Einstein never did succeed in letting go of the old quantum theory, though Bohr did.
Regarding climate sensitivity, you’ve overlooked the distinction I drew earlier between modeled and observed climate sensitivity. Arrhenius had no basis for observations since CO2 did not change significantly in his lifetime, and had to settle for the much more challenging task of modeling climate sensitivity, which is well known to give a wide variety of answers depending on the assumptions. Today we’ve done the experiment of adding quite enough CO2 and correlated aerosols to observe their net effect, thereby doing an end-run around the assumptions (though people continue to publish estimates of modeled sensitivity). One of the best papers I know on observed climate sensitivity is Gregory et al, “An observationally based estimate of the climate sensitivity”, J. Climate, 15:22 3117-3121 (15 Nov. 2002). They argue that the most probable sensitivity based on observed temperature rise is 2.1 °C but with such a large variance that they make both 1.8 and 2.5 °C almost as plausible.
Using a technique that greatly narrows the variance I believe I can say with much more confidence than Gregory et al that the observed climate sensitivity based on equally weighted observations since 1860 is 1.8 °C. Different weightings will give different results, but determination of the appropriate weights is not going to be easy whence my use of a uniform weighting, one that assigns equal significance to measurements made in 1850 and now.
That said, in terms of predicting the likely temperature between now and say 2050 there is not much to choose between 1.8 and 2.1 °C per doubling, since the respective projections differ by only 0.137 °C. This is computed from a projected CO2 level of 535 ppmv for 2050, 37% more than at present: log2(1.37) = 0.456, and multiplying that by 2.1 − 1.8 = 0.3 gives 0.137.
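The same arithmetic as a runnable sketch; present-day CO2 is taken as roughly 390 ppmv, consistent with “37% more” reaching 535 ppmv:

```python
import math

# Arrhenius' logarithmic law: dT = S * log2(C_future / C_now),
# where S is the climate sensitivity per doubling of CO2.
def warming(S, c_now=390.0, c_future=535.0):
    return S * math.log2(c_future / c_now)

print(warming(2.1) - warming(1.8))  # ~0.137 C between the two sensitivities
```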
Those are really great questions, Arfur, because (a) they’re reasonable and (b) I bet a lot of other people are having the same thoughts. You’re obviously not an unquestioning denier, some might call you a skeptic, but I wouldn’t even go that far with such reasonable questions. Because they’re so good let’s address them one at a time so that the answers don’t get all mixed up with each other.
Can you define ‘greatly increasing the temperature’ please? As far as I can make out, the total increase is appx 0.8 deg C in 160 years, and no increase since 1998. Even if you insist that ALL the increase is due to mankind’s emissions, this does not seem a ‘great’ increase to me. Do you not agree?
(I’ll ignore the bit about “since 1998” since (a) there have been plenty of 12-year periods in the past 160 years where the temperature declined and (b) the past 12 years happens not to be one of them regardless of whether you measure it with satellites, radiosondes, or airport thermometers. Moreover even if it were I would not consider that proof of anything—12 years is much too short to base global warming predictions on.)
I would be very much less concerned if the temperature followed a more or less straight line from 1850 to now, and I would then agree with you. But it doesn’t, instead it follows Arrhenius’s logarithmic law with temperature rising at 2 °C per doubling. Instead of a straight line it has been curving up throughout those 160 years and is now increasing at a much faster rate than even half a century ago.
What will it do in the future? Good question.
Surely this confirms that the veracity of the (C)AGW theory rests on observation as, I suppose, does any scientific theory?
We can only observe the past and predict the future. The latter is done by extrapolating the former. If we don’t do that then the past becomes scientifically meaningless: why else would science care about it?
Now a naive extrapolation would simply fit a straight line to the past and extend it into the future, which I take it to be what you’re proposing. But suppose for the sake of argument that on closer examination the past turned out to be one complete cycle of a sine wave, to which the best straight-line fit would then be a horizontal line. Would you predict that the future is therefore a horizontal straight line or a second cycle of that sine wave?
That’s not what we have right now obviously, but let’s imagine it anyway. What would you forecast if the temperature had followed one complete cycle of a sine wave? A straight line or a second cycle?
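For the curious, here is the thought experiment in runnable form, using one crest-to-crest cycle so that the least-squares line does come out horizontal:

```python
import numpy as np

# One complete crest-to-crest cycle; the straight-line fit through it
# is flat, so the two extrapolations disagree completely about the future.
t = np.linspace(0.0, 2.0 * np.pi, 201)
y = np.cos(t)

slope, intercept = np.polyfit(t, y, 1)
print(round(slope, 6))  # ~0: the line forecasts "no change, ever"
print(y[-1])            # the wave itself sits at a crest, about to fall
```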
Your other questions in due course.
This is an extrapolation of solar activity (based on past records); let’s call it a sine wave modulated by a sine wave (although it is a bit more complex than that), as shown by the red line
http://www.vukcevic.talktalk.net/LFC11.htm
I proposed this in 2003; three years later, in 2006, the top NASA solar scientist Dr. Hathaway was predicting solar cycle 24 to be the highest ever in recorded history.
http://science.nasa.gov/science-news/science-at-nasa/2006/21dec_cycle24/
Now, his latest prediction
http://solarscience.msfc.nasa.gov/images/ssn_predict_l.gif
is more in line with what my extrapolation shows it may be. Furthermore he is coming to the view that SC25 will be one of the lowest in the last 200 years.
I think that the above type of extrapolation would be more appropriate for the climate models.
I think that the above type of extrapolation would be more appropriate for the climate models.
Right, that’s what I find so appealing about your graphs.
The confusing part is that Fourier and Laplace, who were contemporaries in Paris, are in it together, with Fourier contributing the sine waves and Laplace the exponential decays and rises. Fourier describes the oscillations of nature, Laplace the Malthusian growth of humanity.
The first step therefore is to separate the observed temperature into the parts owned by respectively Fourier and Laplace. Prior to 1950 Fourier dominated, but humans today are empowering Laplace to the point where Fourier can only make short-term confusions seemingly favoring the climate deniers on some occasions and the affirmers on others.
With Laplace now dominating, the best way of separating them in my opinion is to compute Laplace’s large contribution and subtract it to leave Fourier’s. This is done by *defining* Fourier’s share to be whatever is not correlated with Laplace’s according to our best model of the latter. Doing it the other way round as most are still attempting is simply a lost cause.
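A synthetic sketch of that decomposition; the CO2 curve, the 2 °C-per-doubling sensitivity, the 65-year oscillation, and the noise level are all toy values chosen only to show the mechanics:

```python
import numpy as np

# "Laplace's" share is the logarithmic response to exponentially growing
# CO2; "Fourier's" is the oscillation left after fitting and subtracting it.
rng = np.random.default_rng(1)
years = np.arange(1850, 2011)
co2 = 280.0 + 2.0 * np.exp((years - 1850) / 40.0)           # toy CO2, ppmv
temp = (2.0 * np.log2(co2 / 280.0)                           # "Laplace"
        + 0.1 * np.sin(2.0 * np.pi * (years - 1850) / 65.0)  # "Fourier"
        + rng.normal(0.0, 0.05, years.size))                 # weather noise

predictor = np.log2(co2 / 280.0)
sensitivity = np.polyfit(predictor, temp, 1)[0]  # recovers ~2 C per doubling
residual = temp - sensitivity * predictor        # the non-CO2 fluctuations
print(round(sensitivity, 2))
```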
Let’s assume that the climate system has these two principal components. As it appears that the oscillating one is at its peak, and is due for a downtrend, then it is going to ameliorate the rise due to the exponential; hence the temperature rise could be halted or even reversed (depending on the resultant of the two) for a period of time.
Natural oscillations may or may not stay within a certain amplitude band, but again this could be superimposed on an up- or down-slope of a much longer cycle.
As I suggest here
http://www.vukcevic.talktalk.net/CETng.htm
the CETs appear to be correlated to two different factors, which may or may not be related. Although the GP appears to be a nearly regular cycle on a gentle upslope, it has no power in itself to affect temperatures directly. That could definitely be achieved by the NAP (which on the other hand could be initiated by the GP, either in phase or counter-phase), but the big unknown is to what extent the CETs react to the NAP. A further important point here is that, although the NAP is an oscillating variable, its nature is such that it can also introduce irreversible changes in the climate system.
Vaughan,
Firstly, thank you for your kind words. I do try to be balanced as I freely admit to be lacking in scientific credentials. However, I like to think that I am capable of rational, logical thought (and debate).
(Although I completely take your point about the temperature since 1998, it must also be stated that 1998 was the year that the MBH98 graph was ‘sold’ to the public as proof of ‘rapid and accelerating’ global warming. The fact that the temperature has not risen since then has to be a consideration, as the AGW theory (leading to the (C) part) suggests a cumulative effect. There will, of course, be short term variations but, even so…)
“Instead of a straight line it has been curving up throughout those 160 years and is now increasing at a much faster rate than even half a century ago.”
I’m not sure I agree with that. According to the HadCRUt data, the trend from 1850 to about 1879 was far greater than any trend since then, if you maintain the origin at 1850. Yes, there have been short term trends that have been steep, such as 1909 to 1945 and 1975 to 2005 (3 year running average) but the trend of 1855 to 1879 is virtually identical. There have been periods of acceleration, deceleration and negative acceleration and I admit the 1975 to 2005 is very steep but it certainly isn’t ‘a much faster rate’ than other before it.
As to your question about forecasting a sine wave, I am not sure that it is any more naive to extrapolate from a straight-line fit than to predict using what happens to be an upslope of the curve when you make the prediction. However, my answer would be that I would not make a prediction. What would be the point in forecasting?
I await your other answers in due course. :) Thanks very much for the discussion.
Regards,
As to your question about forecasting a sine wave, I am not sure that it is any more naive to extrapolate from a straight-line fit than to predict using what happens to be an upslope of the curve when you make the prediction.
That’s a good answer. But now suppose someone points to an oscillating widget within the system in question whose period matches the observation. Would you still find both extrapolations equally plausible?
However, my answer would be that I would not make a prediction. What would be the point in forecasting?
Another good answer: perhaps nothing is riding on the outcome. But now suppose someone is willing to place bets with you from time to time as to what’s likely to happen, but unlike you has seen neither the history nor the oscillator inside. Assuming you were the betting kind, would there still be no point in forecasting?
Or suppose a flat outcome imposed no obligation on you while a sinusoidal one did. Armed with a reliable prediction you could plan evasive action in advance.
Biff, the character in Back to the Future who got an almanac out of the trip to the future and back, certainly saw a point in forecasting.
“That’s a good answer. But now suppose someone points to an oscillating widget within the system in question whose period matches the observation. Would you still find both extrapolations equally plausible?”
Maybe. But I still wouldn’t be necessarily interested in making an extrapolation in the first place.
“Another good answer: perhaps nothing is riding on the outcome. But now suppose someone is willing to place bets with you from time to time as to what’s likely to happen, but unlike you has seen neither the history nor the oscillator inside. Assuming you were the betting kind, would there still be no point in forecasting?”
No, because I’m not the betting kind, and that would show a marked lack of integrity!:)
“Or suppose a flat outcome imposed no obligation on you while a sinusoidal one did. Armed with a reliable prediction you could plan evasive action in advance.”
No, because to use the precautionary principle as an argument requires an actual threat to exist in the first place. Nugatory evasive action can frequently lead to a different, and maybe large, problem.
“Biff, the character in Back to the Future who got an almanac out of the trip to the future and back, certainly saw a point in forecasting.”
Hmmm. I seem to remember that Biff turned out to be a bit of a joke!
But anyway, can we get back to the questions?:)
There will, of course, be short term variations but, even so…
But even so they are meaningless. This can be seen very dramatically at the URLs agwtrend and agwrise (put tinyurl.com/ in front of each). These two plots show almost identical smoothings of the past three decades of temperature, one per decade (red, green, blue).
The only difference is that the second plot starts each of the three decades a mere one year earlier and shortens it by one year. Yet despite this tiny change the difference is huge!
The first one shows a slight rise in decade 1, a steeper-than-average rise in decade 2, and almost flat in decade 3.
The second one shows decade 1 essentially perfectly flat, and decades 2 and 3 both rising at the same rate, equal to the average rate of all three decades. Decade 3 is actually rising very slightly faster than decade 2.
This shows very dramatically just how meaningless it is to look at a single decade’s worth of data.
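The effect is easy to reproduce on synthetic data; the trend and noise here are toy values, the point being only how much per-decade slopes move when every window shifts by a single year:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1978, 2011)
temp = 0.017 * (years - 1978) + rng.normal(0.0, 0.1, years.size)

def decade_trend(start):
    sel = (years >= start) & (years < start + 10)
    return np.polyfit(years[sel], temp[sel], 1)[0]   # deg C per year

for offset in (0, -1):   # same series, decades started one year earlier
    print([round(decade_trend(s + offset), 3) for s in (1980, 1990, 2000)])
```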
I totally agree. That is why I use the overall trend to make my point about no acceleration. The point about the trend since 1998 is that it proves the assertion made in the MBH98 graph was both premature and wrong, as it was based on short term trends.
I’m not sure I agree with that. According to the HadCRUt data, the trend from 1850 to about 1879 was far greater than any trend since then, if you maintain the origin at 1850. Yes, there have been short term trends that have been steep, such as 1909 to 1945 and 1975 to 2005 (3 year running average) but the trend of 1855 to 1879 is virtually identical.
Indeed. But the impact of industrial CO2 was negligible prior to the war, reaching only 8% of the natural level in 1938. These large fluctuations prior to that can only be of natural origin. However they are dwarfed by the past two decades, in which no such large natural fluctuations appear; instead there is only the inexorable rise predicted by Arrhenius in 1896, on which is superimposed small natural fluctuations both up and down, each lasting a decade or so, that are readily mistaken for either an end to global warming or unwarranted alarm about its acceleration. Were the former only so; fortunately neither is the latter so.
This will be easily seen from the paper I promised for arXiv, which should only take a couple of days I hope unless I am stuffed by stuffing tomorrow.
“These large fluctuations prior to that can only be of natural origin. However they are dwarfed by the past two decades, in which no such large natural fluctuations appear; ”
Vaughan, that is demonstrably incorrect. The temperature rose by 0.45 deg C between 1875 and 1878! According to you, that ‘can only be of natural origin’. It is not dwarfed at all by the past two decades, so these – by your own criteria – could also have been caused by natural causes!
As it happens, I do think that there is a case for ‘some’ of the twentieth century warming to have been caused by anthropogenic factors. I just think the ‘some’ is rather small.
ps, how do you change the font to italics on this blog?
Although I completely take your point about the temperature since 1998, it must also be stated that 1998 was the year that the MBH98 graph was ‘sold’ to the public as proof of ‘rapid and accelerating’ global warming.
That was what I meant above by unwarranted alarm, protested by the deniers (as you are doing here). The deniers got their revenge a decade later, protested by the affirmers. In each case the data supported the winner provided you narrow your eyes to a slit. This game will go back and forth like a tennis ball for as long as the debate persists.
Fair call, Vaughan.
Maybe I should stick to serve and volley?:)
In looking at the climate problem as a problem in physics, it is useful to start first with the energy balance aspect of the Earth-atmosphere system.
The Earth, having an albedo of about 30%, absorbs (as a global annual mean) about 240 W/m2. This will heat the Earth’s surface. When the Earth’s surface reaches a temperature of about 255 K, the thermal radiation emitted by the surface will approach 240 W/m2, and this system can be said to be in thermal equilibrium.
Consider now adding an idealized single layer atmosphere atop of the ground surface. The atmosphere layer is idealized to be completely transparent to solar radiation, and completely opaque (absorption only, no reflection) to thermal radiation. The atmosphere layer is further idealized to have super-efficient internal energy redistribution to maintain an isothermal temperature, but to have energy communication with the ground by radiative means only.
Such an idealized atmosphere layer placed above the ground surface will begin to heat up since it is absorbing all of the thermal energy emitted by the ground surface. It will keep warming until it reaches a temperature of 255 K, whereupon it will be radiating 240 W/m2 of thermal radiation to space in order to maintain energy balance for the Earth-atmosphere system (since the atmosphere layer is now the outermost radiating surface from which the radiation to space comes).
But the idealized atmosphere layer is isothermal, so in addition to radiating 240 W/m2 out to space from its top surface, it must also be radiating 240 W/m2 in the downward direction from its bottom surface. This serves to warm the ground surface, which is now absorbing 480 W/m2 (240 W/m2 from the sun, and 240 W/m2 from the atmosphere). This causes the ground to warm to about 303 K, whereupon it is now emitting 480 W/m2 to keep in radiative energy balance.
This is the simplified operating principle of the greenhouse effect, where the Earth-atmosphere system and the individual components (the ground surface and atmosphere layer) are all (individually, and combined) in thermal equilibrium. Since the ground surface emits energy at the rate σT_S^4, and the atmosphere layer at the rate 2σT_A^4, the surface temperature T_S will be 2^(1/4) = 1.19 times the atmosphere temperature T_A. This concept can be extended by stacking additional idealized atmosphere layers, with layer k (counting from the top) running at (k/(k+1))^(1/4) times the temperature of the layer below it (0.84 for the topmost pair). It is this basic setup that makes it possible for Venus to support a very high surface temperature for a modest input of solar energy at the ground level, and it also explains why there is a monotonically decreasing temperature in an atmosphere in radiative equilibrium.
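A minimal numerical check of that stacked-layer arithmetic; the only inputs are the 240 W/m2 flux and the Stefan-Boltzmann constant:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
FLUX = 240.0      # absorbed solar flux, W m^-2

# Counting layers from the top, layer k must radiate k * 240 W/m2 upward,
# and the surface behaves like layer N+1.
def temperatures(n_layers):
    return [(k * FLUX / SIGMA) ** 0.25 for k in range(1, n_layers + 2)]

one_layer = temperatures(1)
print(one_layer)                    # [~255 K (layer), ~303 K (surface)]
print(one_layer[0] / one_layer[1])  # ~0.84, the topmost adjacent-layer ratio
```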
The real atmosphere is not so idealized. There is significant solar energy absorbed within the atmosphere, in particular by ozone in the stratosphere, which makes the stratospheric temperature warmer than that at the tropopause. Also, atmospheric layers are not isothermal, nor are they totally opaque to thermal radiation, and there is also the possibility of energy transport within the atmosphere by convective and advective means in addition to radiative.
So, what is it that provides opacity in the atmosphere at thermal wavelengths? Oxygen and nitrogen do not have absorption bands at thermal wavelengths, so the bulk of the atmosphere is actually transparent to thermal radiation. On the other hand, the minor atmospheric (greenhouse) gases do absorb thermal radiation. Water vapor has strong absorption bands at 6 μm and 40 μm, CO2 has a strong broad band at 15 μm, ozone absorbs at 10 μm, and methane absorbs at 8 μm. Clouds, being made up of particulate matter, absorb thermal radiation at all wavelengths.
The absorption bands of all of the atmospheric greenhouse gases are actually made up of thousands of individual absorption lines with absorption strengths that vary by many orders of magnitude. Thus, while the absorption in the strong lines may get saturated, absorption by the gas as a whole never reaches saturation, exhibiting instead the logarithmic dependence on absorber amount that is reported.
This makes the atmosphere partially transparent to thermal radiation and requires a detailed line-by-line radiative transfer model to calculate and add up all the spectral contributions to atmospheric transmission and absorption (and emission). So, calculating the total greenhouse strength of the terrestrial atmosphere (150 W/m2, or the equivalent 33 K, breaking down as 50% due to water vapor, 25% due to clouds, 20% due to CO2, with the other minor GHGs accounting for the remaining 5%) may be numerically intensive, but the concept is basically the same as for the isothermal layer atmosphere.
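A toy version of that saturation point, with line strengths drawn log-uniformly across eight decades; the distribution is purely illustrative, not spectroscopic data:

```python
import numpy as np

# Each line saturates individually, but a band whose strengths span many
# orders of magnitude absorbs, in aggregate, roughly logarithmically in
# the absorber amount u: near-equal steps per doubling.
k = np.logspace(-4.0, 4.0, 2000)   # line strengths across 8 decades
for u in (1.0, 2.0, 4.0, 8.0, 16.0):
    band = np.mean(1.0 - np.exp(-k * u))
    print(u, round(band, 3))
```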
An interesting post that even I just about managed to follow. Without wishing to be rude in any way, however, what is your point?
Personally, I am mainly grateful that Andy Lacis has shown up here and is willing to engage in dialogue
http://pubs.giss.nasa.gov/authors/alacis.html
Judith. I was not in any way trying to be rude. Perhaps I am just demonstrating my lack of knowledge about the science. I was trying to ask Andy how his explanation related to the debate about Michaels’ evidence. It was a genuine question from a layman. :)
I didn’t think your question was rude at all, I just wanted to point out that Andy is a heavy hitter in this field.
Without wishing to be rude in any way, however, what is your point?
RobB noticed, but not everyone else might have, that Andy’s reply to this is much further down. (One of the hazards of not using the Reply button to respond to an inquiry is that the response tends to drift away from the inquiry as others do use that button on the various items that accumulate in between.)
Even if you accept that the total due to CO2 is as large as 40% of the water vapour’s, there is a problem. Global fossil carbon emissions from 1950-2000 are about twice as large as the total emissions for the 1800-1950 period, but the temperature rise is roughly the same. Something wide of the mark there.
A Lacis:
“So, calculating the total greenhouse strength of the terrestrial atmosphere (150 W/m2, or the equivalent 33 K, breaking down as 50% due to water vapor, 25% due to clouds, 20% due to CO2, with the other minor GHGs accounting for the remaining 5%) may be numerically intensive, but the concept is basically the same as for the isothermal layer atmosphere.”
Just a quick question on that subject, if I may…
If CO2 contributes 20% of the greenhouse strength, that is equivalent to appx 6.6 deg K of the 33 K. Even if we assume all of the global warming since 1850 is due to CO2, we end up with a 0.8 deg K rise for an increase of 40% of CO2 (appx). I appreciate that the linear assumption may not be completely correct but such a large relative increase in CO2 should have given a larger rise than 0.8 deg K, shouldn’t it?
What am I missing?
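For scale, here is that arithmetic both ways; the 2 °C-per-doubling sensitivity is assumed purely for illustration:

```python
import math

# Linear reading: 40% more CO2 applied to a 6.6 K share of the 33 K.
print(0.4 * 6.6)              # ~2.6 K, the linear extrapolation
# Logarithmic law: S * log2(1.4), far smaller than the linear figure.
print(2.0 * math.log2(1.4))   # ~1.0 K
```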
But the idealized atmosphere layer is isothermal, so in addition to radiating 240 W/m2 out to space from its top surface, it must also be radiating 240 W/m2 in the downward direction from its bottom surface. This serves to warm the ground surface, which is now absorbing 480 W/m2 (240 W/m2 from the sun, and 240 W/m2 from the atmosphere). This causes the ground to warm to about 303 K, whereupon it is now emitting 480 W/m2 to keep in radiative energy balance.
I could use some help following this argument. It seems to me that under your assumption of an isothermal atmosphere, and with the ground at your hypothetical 303 K, the top of the atmosphere must reach 303 K. It will then emit 480 W/m2 upwards and now the TOA is no longer in equilibrium.
When I solve the differential equations implied by your assumptions I end up with 255 K at both TOA and the surface. Basically you’ve postulated a thermal superconductor for the atmosphere whence any radiative heat retention is completely compensated for by conduction.
One must view the atmosphere as a pair of resistors connected in parallel, one radiative and the other conductive. Your assumptions make the thermal resistance of the latter zero, in other words a short circuit across your open circuit radiative resistor. Normally it is completely the other way round, the conductive component is practically an open circuit while the radiative component approaches a short circuit as the GHGs decrease. In either case the net effect is a short circuit.
Vaughan,
Let me try another way than Andy took. The conceptual “Layer Model” is often used in introductory textbooks (e.g., Hartmann’s “Global Physical Climatology,” Marshall and Plumb’s “Atmosphere, Ocean, and Climate Dynamics” or David Archer’s book) as a way to visualize how absorbing gases can make the surface temperature warmer, but it’s not a very good way to think about the greenhouse effect. In fact, in a completely isothermal atmosphere (in the vertical) it would be impossible to generate a persistent greenhouse effect. Also keep in mind that to make the simple model more complete, you need to allow for the convective heat loss at the surface.
A more complete perspective considers primarily the energy balance at the top of the atmosphere. In a real atmosphere, the temperature declines with height due to the decrease in pressure and corresponding expansion/cooling by air parcels. In the troposphere, the so-called “lapse rate” is generally found to be between about 6 and 10 degrees C cooling per kilometer.
An interesting question to ask from this point is to take a beam of energy going from the surface to space, and ask how much of it is received by a sensor in space. The answer is obviously the intensity of the upwelling beam multiplied by that fractional portion of the beam which is transmitted to space, where the transmissivity is given as 1−absorptivity (neglecting scattering), or exp(−τ), where τ is the optical depth. This relation is known as Beer’s Law, and works for wavelengths where the medium itself (the atmosphere) is not emitting (such as in the visible wavelengths). In the real atmosphere of course, you have longwave contributions to the outgoing flux not only from the surface, but integrated over the depth of the atmosphere, with various contributions from different layers, which in turn radiate locally in accord with the Planck function for a given temperature. The combination of these terms gives the so-called Schwarzschild equation of radiative transfer.
In the optically thin limit (of low infrared opacity) , a sensor from space will see the bulk of radiation emanating from the relatively warm surface. This is the case in desert regions or Antarctica for example, where opacity from water vapor is feeble. As you gradually add more opacity to the atmosphere, the sensor in space will see less upwelling surface radiation, which will be essentially “replaced” by emission from colder, higher levels of the atmosphere. This is all wavelength dependent in the real world, since some regions in the spectrum are pretty transparent, and some are very strongly absorbing. In the 15 micron band of CO2, an observer looking down is seeing emission from the stratosphere, while outward toward ~10 microns, the emission is from much lower down.
These “lines” that form in the spectrum, as seen from space, require some vertical temperature gradient to exist, otherwise the flux from all levels would be the same, even if you have opacity. The net result is to take a “bite” out of the Earth’s spectrum (viewed from space), see e.g., this image. This reduces the total area under the curve of the outgoing emission, which means the Earth’s outgoing energy is no longer balancing the absorbed incoming stellar energy. It is therefore mandated to warm up until the whole area under the spectrum is sufficiently increased to allow a restoration of radiative equilibrium. Note that there are some exotic cases such as on Venus or perhaps ancient Mars where you can get a substantial greenhouse effect from infrared scattering, as opposed to absorption/emission, to which the above lapse rate issues are no longer as relevant…but this physics is not really at play on the modern Earth.
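A gray, single-slab sketch of that bookkeeping; the slab temperature and the optical depths are illustrative round numbers, not a fit to the real atmosphere:

```python
import math

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

# A sensor in space sees surface emission attenuated by exp(-tau),
# "replaced" by emission from a colder atmospheric slab.
def olr(tau, t_surface=288.0, t_atm=255.0):
    transmitted = math.exp(-tau)
    return (transmitted * SIGMA * t_surface**4
            + (1.0 - transmitted) * SIGMA * t_atm**4)

for tau in (0.0, 0.5, 1.0, 2.0):
    print(tau, round(olr(tau), 1))  # outgoing flux falls as opacity rises
```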
Chris, I’m sorry if my “I could use some help” lured you into actually offering it. I was merely trying to avoid opening on too negative a note. I had hoped the sequel would clarify what I really meant.
As usual your technical arguments dealing with the science are spot on, you just need a little more practice in dealing with the people involved on both sides of the debate.
…and when I say “spot on” I mean really spot on. I fully agree with your perspective, what I don’t know (being just a layman in these things) is how many other people see it the way you described? Does any climate text do so? What papers do so? (Blog postings don’t count as papers.)
I think any decent text in atmospheric radiation should teach the reader the stuff I mentioned. Some of Hansen’s or Andy Lacis’ papers, such as in the early 1980s, discuss this stuff (as do some refutation papers of Gerlich and Tscheuschner that I can think of, such as Smith (2008) and a paper I helped co-author, Halpern et al. (2010), which give an overview of some of the basic physics as well). But how the greenhouse effect works is really textbook material and I don’t think it should be talked about much in the primary literature. Presumably the readers of this literature have already been initiated into the basic groundwork.
Communicating the physics is of course another challenge. The greenhouse effect is ‘simple’ to understand scientifically speaking, but not so simple to accurately describe in a 20-second sound bite.
But how the greenhouse effect works is really textbook material
You may be thinking that your thinking is equivalent to what’s in for example the Wikipedia article on the greenhouse effect, but it seems to me yours is much better. So I don’t buy that you’re just repeating what’s in the literature.
The greenhouse effect is ‘simple’ to understand scientifically speaking
With such wide variation between your account and the more usual accounts this is not obvious. (I prefer yours.)
Chris, let me be more specific here. You wrote,
A more complete perspective considers primarily the energy balance at the top of the atmosphere.
Later on you wrote,
I think any decent text in atmospheric radiation should teach the reader the stuff I mentioned.
What proportion of “decent texts” focus on TOA equilibrium? My impression was that they emphasized back radiation at the surface. TOA equilibrium is the precise point where Andy’s idealized atmosphere analysis broke down. The reason I caught it is that I always start with TOA equilibrium myself, and the bug was immediately apparent. Andy missed it because he focused on surface equilibrium of the kind depicted in Fig. 7 of Kiehl and Trenberth 1997. That approach involves more complex reasoning, with the triple whammy of being harder to analyze, less obvious to beginners, and more easily exploited by the likes of Gerlich and Tscheuschner by twisting it into an apparent violation of the second law of thermodynamics. G&T would have found TOA equilibrium useless for their purposes.
It is, nevertheless, quite true that in my post I did not draw a specific connection to the topic of discussion regarding Pat Michaels’ interpretation of climate change. I was in part responding to some questions raised in earlier posts regarding the physical basis for CO2 impact on climate.
But there clearly is a direct point to make. For all the conclusions regarding the nature of global warming and climate change reported in the IPCC report, they all have a soundly formulated basis in physics and observations. That is very much in contrast to the contrarian position to global climate change that Pat Michaels likes to promote, based on his dubious statistical correlations applied to conveniently selected data.
I am sure that you are familiar with the oft repeated folk wisdom that statisticians can lie and draw erroneous conclusions on just about anything simply by selecting the right data to include in their statistical analyses, and deliberately excluding the data that happens to be inconveniently contrary to the conclusion that they wish to promote. That is how it is possible for Pat Michaels to conclude, erroneously, that the statement “most of the observed increase in global averages since the mid-20th century is very likely due to the observed increase in anthropogenic GHG concentrations” is not supported.
When analyzing the global climate change in terms of physics (e.g., climate modeling) there is no room to promote unsupported conclusions. It is only the conclusions that the physics allows that we have to accept and go with. That is why there is good reason to believe the conclusions that are based on the physics of the problem, rather than those statistically derived conclusions that are based on imprecise, limited, and/or incomplete sampling of a subset of data from a complicated physical system.
Andy, thanks very much for your response. As this thread has quietened down a bit, I am emboldened to ask you a further question without too much fear of revealing my lack of knowledge in public!
You stated: “…climate change in terms of physics (eg climate modeling)”
Without wishing to waste your time, could you expand on this slightly. Can climate computer modeling be thought of as physics? I can understand that many of the known dynamics of climatology can be expressed mathematically and therefore safely incorporated into a model. But surely the models also include a number of assumptions that are not necessarily underpinned by copper-bottomed fact. (From my reading, it is this area that seems to cause the most angst amongst the skeptic audience.) Is it right therefore to describe modelling as physics? Does that not give a false impression of certainty and infallibility?
The objective is to formulate all of the climate processes as much as possible in terms of basic physics. Sometimes we may understand the physical process (e.g., aerosol chemistry) sufficiently well, but its implementation in the climate model is too computer intensive, so we have to settle for including an aerosol climatology instead. In the case of boundary layer turbulence and cloud formation, we don’t really understand the physics completely, so we settle for using empirical parameterizations that appear to adequately resemble the real world. The time and spatial resolution necessary to address some meteorological events such as tornadoes, are far beyond current climate modeling computational capabilities.
There is a bit more detailed description of climate modeling that I posted on the climate blog run by Roger Pielke Sr. You can check that out at: http://pielkeclimatesci.wordpress.com/
“some meteorological events such as tornadoes are far beyond current climate modeling computational capabilities.” Is not this true of the whole zoo of meteorological events? Surely chaos cannot be tamed, even by physics and the supercomputer. Trying to predict the course of climate change seems to me like trying to predict the course of biological evolution.
All the conclusions regarding the nature of global warming and climate change reported in the IPCC report have a soundly formulated basis in physics and observations.
Certainly that basis is good enough for government (i.e. policy) work, even if one can nitpick its details.
One difficulty climate scientists have yet to confront is that after spending decades on climate research they build up such a strong intuition about what’s really going on “up there” that it is totally frustrating to have to debate with the complicated mix constituting the by-now-large denier community (which did not exist in 1990). Do any at all share your in-depth understanding? And if there are even just four or five who do, why are they being so disagreeable? Do they have some insight the community is missing, do they argue for argument’s sake, has someone asked them to argue, or what?
Part of the problem is that climate science is not as simple as physics. Every department within physics offers its formulas to make short work of problems: distance s = ½at², force F = ma, energy E = mc², temperature T = dQ/dS, resistance R = V/I, power P = I²R, and dozens if not hundreds more.
What are the formulas that make climate a science? Most climate papers are full of formulas, but which of them belong to climate rather than to the sciences that inform those studying climate?
What’s missing are the John Tyndalls and Richard Feynmans who are simultaneously at the cutting edge of their field yet are in such total empathy with the difficulties faced by the public in grasping the concepts that they can see the quickest way of transporting the public’s mind from point A to point B, of non-comprehension and comprehension respectively.
Complicating this is that the public is not all concentrated at a single point A but is scattered all over the place. An expositor must search for the optimal point A0 that minimizes the distances A-A0 over all A’s, or over a subset if that makes it easier. The preface can then specify the subset, that is, the audience at whom this particular account is aimed.
What would help greatly is simple formulas for climate science. Without that the public will be hard pressed to be convinced that those studying climate are scientists.
Those formulas of course should have been s = ½at², E = mc², P = I²R, assuming Judith’s web server can at least handle the numeric codes for these. (They worked fine when I tested them with Apache.)
A Lacis,
Sorry, I think I posted this question in the wrong section…
A Lacis:
“So, calculating the total greenhouse strength of the terrestrial atmosphere (150 W/m2, or the equivalent 33 K, and breaking it down to being 50% due to water vapor, 25% due to clouds, 20% due to CO2, with the other minor GHGs accounting for the remaining 5%) may be numerically intensive, but the concept is basically the same as for the isothermal layer atmosphere.”
Just a quick question on that subject, if I may…
If CO2 contributes 20% of the greenhouse strength, that is equivalent to appx 6.6 deg K of the 33 K. Even if we assume all of the global warming since 1850 is due to CO2, we end up with a 0.8 deg K rise for an appx 40% increase in CO2. I appreciate that linear assumption may not be completely correct but such a large relative increase in CO2 should have given a larger rise than 0.8 deg K, shouldn’t it?
What am I missing?
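As a rough way to frame this question, here is a minimal sketch (Python) contrasting the naive linear scaling implied above with the logarithmic CO2-temperature relation that comes up later in the thread; the 6.6 K share and the 40% rise are taken from the comment, while the 3 °C-per-doubling “consensus” sensitivity is an assumption, not an established value.

    import math

    co2_share_K = 0.20 * 33                  # CO2's assumed 20% share of the 33 K greenhouse effect
    rise = 1.40                              # appx 40% increase in CO2 since 1850

    naive_linear = co2_share_K * (rise - 1)  # ~2.6 K if warming scaled linearly with CO2
    logarithmic = 3.0 * math.log2(rise)      # ~1.5 K at an assumed 3 C per doubling

    print(naive_linear, logarithmic)         # 2.64 vs ~1.46

The gap between the two numbers is one reason the linear assumption is challenged in the replies that follow.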
(My apologies to Andy for inserting my answer ahead of his, which is a completely unintended consequence of using the Reply button to reply.)
If CO2 contributes 20% of the greenhouse strength, that is equivalent to appx 6.6 deg K of the 33 K. Even if we assume all of the global warming since 1850 is due to CO2, we end up with a 0.8 deg K rise for an appx 40% increase in CO2. I appreciate that linear assumption may not be completely correct but such a large relative increase in CO2 should have given a larger rise than 0.8 deg K, shouldn’t it?
The linear assumption is easily demonstrated to be completely incorrect for the period 1850-now. Simply extrapolate Michaels’ linear fit backwards to 1850 and (assuming his slope of 0.16 °C per decade) you reach a temperature 1.7 °C below where it actually was back then. This shows that, regardless of whether Michaels’ linear model can be reliably extrapolated forwards, it cannot be reliably extrapolated backwards.
The reason should be obvious: between 1850 and 1938 CO2 rose to only 8% over the natural base of 280 ppmv. We’re now at 39% over it.
Given that CO2 is expected to rise even faster in the future, based on both the Keeling curve and projections of carbon based fuel use, it stands to reason that the same rationale for why Michaels’ linear fit fails to extrapolate backwards also predicts that it will fail to extrapolate forwards.
What is more likely is that the evident upward curve over 1850-now will continue to curve upwards, following the same general trajectory it has been on (modulo natural fluctuations including two interestingly large ones) for 160 years.
It is a feature of all analytic functions, without exception, including eˣ, that the smaller the window they are viewed through the more like a straight line they look. That’s a mathematical fact whose practical application is all too obvious here.
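For readers who want to reproduce the back-extrapolation test described above, here is a minimal sketch (Python/NumPy); “hadcrut3_annual.txt” is a hypothetical local copy of the HADCRUT3 global annual means, and the 1950-2010 window is the one Michaels fitted.

    import numpy as np

    # Hypothetical local file: columns year, anomaly (C).
    data = np.loadtxt("hadcrut3_annual.txt")
    years, temp = data[:, 0], data[:, 1]

    # Fit a straight line to 1950-2010, as in Michaels' testimony.
    window = (years >= 1950) & (years <= 2010)
    slope, intercept = np.polyfit(years[window], temp[window], 1)
    print("fitted slope: %.3f C/decade" % (slope * 10))

    # Extrapolate that line back to 1850 and compare with the observed anomaly.
    gap = (slope * 1850 + intercept) - temp[years == 1850][0]
    print("extrapolated minus observed at 1850: %.2f C" % gap)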
“The linear assumption is easily demonstrated to be completely incorrect for the period 1850-now. Simply extrapolate Michaels’ linear fit backwards to 1850 and (assuming his slope of 0.16 °C per decade) you reach a temperature 1.7 °C below where it actually was back then. This shows that, regardless of whether Michaels’ linear model can be reliably extrapolated forwards, it cannot be reliably extrapolated backwards.”
But what I quoted wasn’t a linear assumption, it was a linear fact! It would only be an assumption if one were to extrapolate the overall trend. I am not interested in Michaels’ linear trend. The only trend of note is the overall trend since the start of ‘AGW’. That trend is 0.06 deg C per decade.
“The reason should be obvious: between 1850 and 1938 CO2 rose to only 8% over the natural base of 280 ppmv. We’re now at 39% over it.”
You are assuming that the cause of the warming is CO2. To that end, the warming for 8% = appx 0.4 C. Similarly, the warming for 39% = appx 0.4 C. That doesn’t seem like a strong case to me…
“Given that CO2 is expected to rise even faster in the future, based on both the Keeling curve and projections of carbon based fuel use, it stands to reason that the same rationale for why Michaels’ linear fit fails to extrapolate backwards also predicts that it will also fail to extrapolate forwards.”
Again, assuming CO2 is the cause, that would be a given, if irrelevant.
“What is more likely is that the evident upward curve over 1850-now will continue to curve upwards, following the same general trajectory it has been on (modulo natural fluctuations including two interestingly large ones) for 160 years.”
More likely? Only based on the same assumption. Evident curve? Yes, if you narrow your eyes to a slit there may be a very slight curve.
“It is a feature of all analytic functions, without exception, including e ͯ , that the smaller the window they are viewed through the more like a straight line they look. That’s a mathematical fact whose practical application is all too obvious here.”
Agreed. So let’s forget about short-term trends, and stick to the overall trend, shall we? A decreasing trend, as is obvious from the data.
Agreed. So let’s forget about short-term trends, and stick to the overall trend, shall we? A decreasing trend, as is obvious from the data.
It is? Maybe we’re talking about different data. I’m talking about the HADCRUT3 data from 1850 to now, as shown here.
I would have said that was curving up. But if you disagree, extend the 5 year smoothing to ten times that much, basically three half-century chunks. The first half century slopes down, the next slopes up, and the third has twice the slope of the second. How is that not curving up?
Note that because the smoothing is 50 years, dissecting anything finer grained than that is meaningless.
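The three-chunk comparison suggested above can be checked directly rather than by eye; a short sketch (Python/NumPy) reusing the hypothetical “hadcrut3_annual.txt” file from the earlier sketch:

    import numpy as np

    data = np.loadtxt("hadcrut3_annual.txt")   # hypothetical: year, anomaly (C)
    years, temp = data[:, 0], data[:, 1]

    # Least-squares slope of each of three roughly half-century chunks.
    for start, end in [(1850, 1903), (1903, 1956), (1956, 2010)]:
        chunk = (years >= start) & (years <= end)
        slope = np.polyfit(years[chunk], temp[chunk], 1)[0] * 10
        print("%d-%d: %+.3f C/decade" % (start, end, slope))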
Vaughan,
A 50-year smoothing? Let’s get back to reality, shall we?
Take an exponential curve. Draw a straight line from the origin to any point on the curve. The further right (along the x axis) you go, the steeper the ‘trend’ straight line will be. That would be an increasing trend.
Now take your 5-year smoothed graph. Draw a straight line from the start of data to the peak around 1880. Measure the trend at around 0.17 deg per decade. Now draw another straight line to ANY OTHER point on the graph along the x axis and you will get a decreased trend. The trend to 2010 is about 0.06 deg per decade. Hence a decrease.
Now compare your 50-year smoothed graph with this 1-month graph (which is closer to reality): http://woodfortrees.org/plot/hadcrut3gl/mean:1
They are not very similar, are they?
As each year goes by without the ‘rapid and accelerating’ rise in temperature, so does the increasing falsification of the (C)AGW theory.
I believe that we ARE using the same data! :)
But what I quoted wasn’t a linear assumption, it was a linear fact! It would only be an assumption if one were to extrapolate the overall trend. I am not interested in Michaels’ linear trend. The only trend of note is the overall trend since the start of ‘AGW’. That trend is 0.06 deg C per decade.
I understood you to be using the word “trend” in the sense of “trend line.” That’s what’s known as a linearity assumption, not a linearity “fact”: you’re modeling the temperature as a linearly rising value. That would make sense if the CO2 grew in a way that makes temperature a linear function of its level. Over 30 years it is not unreasonable to call linearity a “fact.” Over 160 years it is patently obvious that it is not a fact. Looking at the trend line over that period is nothing more than a badly unwarranted assumption.
Michaels’ testimony could have been predicted a century ago. Picture yourself as a time traveler going back to talk to Arrhenius in 1904, just after he’d lowered his 5-6 °C climate sensitivity of 1896 to 1.6 °C. You first tell him that he has now bracketed the range taken seriously in the time you come from, and the consensus number at your time is 3 °C per doubling, albeit with much uncertainty.
You then tell him that in 1950 the CO2 will be 310 ppmv and in 2010 it will be 390.
To your astonishment Arrhenius tells you that over that period the temperature will rise at an average rate of 0.166 °C per decade.
“Professor Arrhenius,” you cry, “did you read my mind? That’s exactly what the distinguished climate scientist Dr. Patrick Michaels told Congress shortly before I left for my time voyage to visit you! Well, maybe not that last digit, but he did say 0.16-0.17 °C per decade. However did you do it?”
“Arfur Bryant,” he replies, “did you read my 1896 paper? You gave me enough information to deduce this from it.”
Professor Arrhenius then goes to a blackboard (no whiteboards in those days) and writes the expression
3 log(390/310)/(6 log(2)). He explains that the ratio 390/310 is the amount by which the CO2 increased. Now 39/31 is 1 and 8/31. 8/32 = .25 and 8/31 is about 3% more or .258, so 390/310 = 1.258. He then picks up a table of logarithms and looks up log(1.258), which turns out to be 0.0997. “Close enough to 0.1,” he says cheerfully. “Now divide that by log(2) = .3010, but I know its reciprocal by heart, 3.32, so we’re looking at slightly less than 0.332 doublings to increase 310 ppmv to 390 ppmv. We have six decades so divide by 6, but multiply by 3 for the number of degrees per doubling, so in the end we want half of 0.332, or 0.166 °C per decade.”
“God bless you, Professor Arrhenius,” you say, “you’ve just shown that the distinguished Dr. Michaels’ testimony to Congress is completely consistent with what you knew back in 1896. His 15 pages of testimony in which he argues for a linear rise of 0.16-0.17 °C per decade is absolutely true because, armed with the foreknowledge of those two CO2 points, you have been able to foretell the same outcome with a mere three lines of calculation! I must rush back to my time and inform Congress of this most remarkable agreement between theory and practice. Rarely is climate science permitted such perfect agreement.”
“I am glad to have been of service to the New World,” says Professor Arrhenius. “But since you have been so kind to tell me the CO2 points at 1950 and 2010, by any chance do you happen to have it for 1980?”
“Well, funny that you asked because it just so happens that this question came up in a different context and was answered as 337.5 ppmv. But why would that be of interest to you?”
“I was just wondering whether every decade rose at this 0.166 °C rate. According to my theory it should not, and you have just confirmed this remarkably well with your extra data point. Allow me to explain.”
Arrhenius repeats his calculations using the same formula as before, but this time breaking the 6 decades into 3+3, namely 1950-1980 and 1980-2010. 3 degrees per doubling divided by 3 decades instead of six means that lb(337.5/310) is the warming per decade, and likewise lb(390/337.5), where lb is the modern notation for binary log or log base 2, by analogy with ln for log base e (lb(x) = log(x)/log(2)). These are respectively .123 and .209.
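The professor’s blackboard arithmetic is easy to verify; a quick check (Python) of both the six-decade average and the 3+3 split, under the same assumed 3 °C per doubling:

    from math import log2

    sens = 3.0                            # assumed C per doubling of CO2
    print(sens * log2(390 / 310) / 6)     # ~0.166 C/decade, 1950-2010
    print(sens * log2(337.5 / 310) / 3)   # ~0.123 C/decade, 1950-1980
    print(sens * log2(390 / 337.5) / 3)   # ~0.209 C/decade, 1980-2010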
“So,” says the professor, “the mean of these two rises is indeed 0.166 as I forecast. But I predict that the first three decades will rise much more slowly than the second.”
“I’m sorry to be the bearer of such bad news, professor, but it is clear from Dr. Michaels’ fit between 1950 and 2010 that the temperature has risen in a straight line. Your prediction may have seemed very convincing to you and your colleagues at the time, but we from the future can see how it panned out. The temperature has risen linearly and could not possibly have changed slope to anywhere near the extent your theory predicts. In physics, professor, experiment trumps theory. The future has found your theory very badly wrong. I am afraid I must bid you adieu now as I must report your first prediction back to Congress as having been borne out by experimental observation.”
The narrator interrupts. “Arfur, I must protest that you have made a serious methodological error here in your hasty dismissal of the learned professor’s second prediction. You relied solely on Dr. Michaels’ fit and consulted no other sources. Have you not heard of the concept of fact checking?”
The beauty of the WoodForTrees website, which was set up by UK programmer Paul Clark with encouragement and input from well-known denier Anthony Watts, a very experienced meteorologist who runs the WattsUpWithThat website, is that it tells the truth whether you are a denier or an alarmist. It has a wonderful menu system that lets you ask questions like, “What was the trend from 1950 to 1980 and the trend from 1980 to 2010, superimposed on the actual temperature?” (which is easier to see when smoothed with a 5-year running average). Here it is.
(Notice the URL and the menu entries, which give the same information in two ways, making it easy both to post URLs as I’ve done here, and to use interactively once you’ve absorbed the meaning of these graphs and want to play around further with it.)
This single URL above to a neutral climate-related website tells the real story behind Michaels’ testimony. Michaels fitted a linear curve to 1950-2010 when it is plain, both from theoretical considerations already understood in the 19th century and from the observed temperature rise over that period as shown by that one URL, that the rise is far from linear!
Between theory, observation, and Michaels’ testimony, the only good match is between theory and observation. Draw your own conclusions.
Vaughan,
Thank you very much for your lesson. It was well written and highly informative and I’m genuinely pleased to have been given a chance to talk to Arrhenius. He seems a decent enough sort of bloke – as do you.
If I had been really talking to him, I might not have concentrated on the 1950-2010 period, as you seem to want to, but the entire period of ‘AGW’ from 1850. For example, I might have told him that the CO2 level in 1940 was 305 ppmv and that in 1950 it was 310 and asked him to predict the two temperatures. He might not have been so prescient.
All your calculations are based on the unproven assumption that a doubling of CO2 is 3 deg C. If you have some real-world evidence to support this ‘consensus number’ then please show me.
You seem to have obtained the idea that I am either interested in or fixated on the Michaels testimony. I have already stated – and here it is again – that I am not. It doesn’t interest me in the slightest. Like you, I am only interested in observational data. However, I have no preconception of what should have happened based on my devotion to a theory which has yet to be proven. There is no need to ‘play’ with graphs. Just take the graph which shows the data:
http://woodfortrees.org/plot/hadcrut3gl/from:1850/to:2010
Take a good long look at it with an unbiased eye. The y axis total temperature rise is a mere 0.8 C whilst the x axis spans 160 years. To me, this does not imply a catastrophe. When James Lovelock (of Gaia theory fame) was asked to investigate how we could assess life on other planets, he – after much research – ended up with ‘just look at the planet from space’. It was obvious Earth would support life.
Looking at the graph, I think it is obvious that there is no cause for alarm. Draw your own conclusions!
If I had been really talking to him, I might not have concentrated on the 1950-2010 period, as you seem to want to.
I didn’t want to. That was Michaels’ choice, not mine, see Figures 4 through 8 of his testimony, all five of which fit one trend line to 1950-2010. For that period he obtained 0.16-0.17. My point was merely that (a) he obtained the correct slope under the hypothesis of linearity, and (b) even a short period like half a century is already enough to reject that hypothesis.
I might have told him that the CO2 level in 1940 was 305 ppmv and that in 1950 it was 310 and asked him to predict the two temperatures.
How have you so quickly forgotten the two examples agwtrend and agwrise? These examples from my earlier post make the point that 10-year trend lines in the temperature record such as 1940-1950 are completely meaningless.
Take a good long look at it with an unbiased eye. The y axis total temperature rise is a mere 0.8 C whilst the x axis spans 160 years.
Again you seem to have difficulty paying attention. You are fitting a straight trend line; I keep telling you that a trend line is an assumption of linearity. Worse still, you are doing so on the basis only of the end points, instead of doing a linear regression in order to weight all points equally. I am not the only one who cringes at fits obtained from just the end points, even some deniers do.
To see how badly your line of reasoning can fail, suppose in some other world the temperature is following the exponential function 2^d °C where d is the time in decades, taking d = 0 to denote the present. All inhabitants of that world are in agreement that historically the temperature rose almost exactly 1 °C over the past century. Applying your methodology to that hypothetical world forecasts a further rise of 1 °C over the next century, when the reality is that it will rise to 2^10 = 1024 °C.
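A two-line check (Python) of that hypothetical world, with temperature following 2^d °C for d in decades and d = 0 the present:

    print(2**0 - 2**-10)    # rise over the past ten decades: ~1 C
    print(2**10 - 2**0)     # rise over the next ten decades: 1023 C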
To me, this does not imply a catastrophe.
At first I thought you were forecasting no catastrophe. But then I recalled your earlier statement,
However, my answer would be that I would not make a prediction. What would be the point in forecasting?
That puts a completely different spin on “does not imply”, namely that you do not extrapolate from the past to the future on the ground that doing so is pointless.
Which meaning did you have in mind?
In any event “catastrophe” is a straw man here since I was not arguing for catastrophic warming, I was arguing for continued warming closely following Arrhenius’s logarithmic function of the very smoothly growing CO2 curve, assuming business-as-usual. That might melt all the ice, but would that be a catastrophe?
On the one hand the Cambrian explosion, which appeared to happen after a huge increase in global temperature, seems to have benefited from the additional warmth by greatly increasing biodiversity. If the present warming had a similar outcome, evolutionary biologists would likely not consider that a catastrophe for the planet, though it may well adjust the balance of power between species, which some species would consider catastrophic and others beneficial. Species come and go, just like individuals albeit over a longer time frame.
On the other hand no one is suggesting that that rise half a billion years ago hit the planet remotely as fast as this one. The geological record has nothing to tell us about the impact on biodiversity of too sudden an increase in temperature. The outcome could be very different from the Cambrian explosion.
“Catastrophic” is a value judgement that scientists should leave to policy makers. Scientists should stick to the science and answer two kinds of questions, those of scientific interest and those asked of them by the policy makers. They should not become policy makers and they should not make value judgements, but instead should understand what’s going on well enough to be able to forecast likelihoods of the various possible futures to the extent possible. Such forecasts can be of interest to both science and policy.
By the same token, advocates of any given policy should not be muddying the waters of scientific understanding in order to bend public opinion towards their desired policy. They can lead nature to those muddied waters, but they cannot make her drink, at least not without a quantum improvement in our current geoengineering skills.
Although I’m just a layman in matters of climate science I tend to see that subject from the perspective of a scientist rather than a policy maker.
Vaughan,
Please don’t get the impression that I am not paying attention. I am. We are both looking at the same data but we are both interpreting it differently. You may feel that your interpretation is ‘better’ because of your scientific qualifications but I, as a layman, can only look at the overall picture. Hence the end points. When you look at a graph, it may be ‘more scientifically correct’ to dissect various fits to the data but the layman will only look at the overall picture. I have given what the overall figures are and there is no point me repeating them, but you are postulating a continued-warming future based on your belief that Arrhenius was correct in his original theory. I need have no such pre-conceptions! As a layman, I can take things at face value. Only time will tell whether you or I are correct.
At the same time as accusing me of not paying attention, you have ignored my point about drawing trend lines to an exponential curve. There can be no doubt that the overall trend is lower today than it was in 1879. Every single straight line trend since 1879 is lower than that one. Picking interim short-term trends and using them to advocate ‘continued warming closely following Arrhenius’s logarithmic function of the very smoothly growing CO2 curve, assuming business-as-usual.’ is not necessarily correct. Only the overall trend will prove Arrhenius’ theory. Even you have admitted that his original estimate is way off the mark. You assume that the climate sensitivity is 3 C but there is no real-world evidence for that assumption.
I can’t believe you are seriously suggesting that melting of all the ice is a likely outcome of AGW [even leaving aside the (C)]. This entire discussion hinges on the contribution of ghgs to the Greenhouse Effect. All the radiative ghgs in the (dry) atmosphere amount to less than 0.04%. Therefore 99.96% CANNOT be heated by radiation (because radiation is not heat). How do you think current day increases in ghgs are likely to lead to enough warming to melt all the ice?
All your calculations are based on the unproven assumption that a doubling of CO2 is 3 deg C. If you have some real-world evidence to support this ‘consensus number’ then please show me.
Sorry, forgot to answer this one. (Not fair to scold you for ignoring my answers if I leave one of your questions unanswered.)
Not only do I not have any evidence for the consensus number, I also don’t believe it’s the right number to use in making these particular sorts of calculations. But remember that in the story you merely told Arrhenius that “the consensus number at your time is 3 °C per doubling.” You did not tell him you believed it yourself, in fact you added “albeit with much uncertainty,” suggesting reasonable doubt on your part.
Arrhenius had to base his calculations on the consensus number because that was the only number you mentioned. The remarkable accuracy with which he was able to forecast the temperature rises over 1950-2010, 1950-1970, and 1970-2010 suggests the consensus number is the right one.
I don’t believe it’s the right one. Rather I suspect it may have emerged as the so-called “consensus number” at least in part because it gives the right answer to this kind of calculation for the past half century. As deniers enjoy pointing out, it gives the wrong answer for earlier times; for example Judith Curry points this out for the first half of the 20th century, and may have gotten some undeserved flak for that, but definitely deserved flak for drawing the unjustified inference that there must therefore be some bug in the physics of global warming theory (if I understood her correctly), a theory on which I am in full agreement with Andy Lacis. (My objection to Andy’s creative “thermally superconducting atmosphere” may have inadvertently created the impression I disagreed with him on other aspects of the physics as well, which I certainly don’t.)
The number 2 °C per doubling (1.8 if I had to commit to a second digit) as the observed climate sensitivity OCS is I believe better than 3 °C after taking out the natural contributions and assuming business-as-usual human CO2 emissions. OCS should not be confused with any of the modeled climate sensitivities such as “the” equilibrium climate sensitivity–as if there were a unique such–or the transient climate response however parametrized. Likely candidates for natural contributions include the 10.7 year solar cycle and especially the 60 year Atlantic Multidecadal Oscillation, more precisely whatever is behind the AMO. “Business-as-usual” means extrapolation of the Keeling curve, which by inspection has been doubling every 4-5 decades, sped up to 32.5 years with the further assumption of a natural base of 280 ppmv as made by NOAA ESRL’s David Hofmann. The observed climate sensitivity is intrinsically transient in character, but its transfer function and associated impulse response depend on the rate of increase of CO2 making it hard to model.
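To make that “business-as-usual” extrapolation concrete, here is a sketch (Python) of Hofmann-style CO2 growth: an exponential excess over a 280 ppmv natural base doubling every 32.5 years, anchored (my assumption) at roughly 390 ppmv in 2010.

    def co2(year):
        # 280 ppmv natural base plus an excess doubling every 32.5 years (Hofmann);
        # the 110 ppmv excess at 2010 is an assumed anchor giving ~390 ppmv then.
        return 280 + 110 * 2 ** ((year - 2010) / 32.5)

    for y in (2010, 2050, 2100):
        print(y, round(co2(y)))    # ~390, ~538, ~1030 ppmv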
At the other extreme I would not be terribly shocked by equilibrium climate sensitivities as high as 8 °C per doubling of CO2, depending on the model’s assumptions. Land and sea have low thermal conductivity (offset by convection for the sea) while the phase changes of water have high latent heats: even though its latent heat of fusion is only 15% of that of vaporization, the fact that, in their respective habitats, ice melts more slowly than water evaporates I suspect makes melting a bigger contributor than evaporation to the gap between observed and equilibrium climate sensitivity. And the concept of tipping point makes climate sensitivity so high as to be meaningless.
Vaughan,
You are a gentleman and a scholar. You also have my respect for the way you handle yourself in this discussion. I can only hope to aspire to such cordiality.
“The number 2 °C per doubling (1.8 if I had to commit to a second digit) as the observed climate sensitivity OCS is I believe better than 3 °C after taking out the natural contributions and assuming business-as-usual human CO2 emissions.”
Ok, so we have come down from Arrhenius’ 5-6 C sensitivity through the IPCC ‘best guess’ sensitivity of 3 C to your OCS (not familiar with that, by the way) sensitivity of 1.8 C. This seems to make much more sense, albeit still based on the assumption that CO2 can cause that warming. For it to be true, the 40% increase in CO2 would lead to the current observed warming of 0.8 deg (using lb(1.40)). Provided that all of the observed warming is definitely due to CO2, then your latest sensitivity figure would seem applicable.
I suppose we could now debate how likely it is that CO2 is responsible for all the observed warming but, for now, thank you for your patience.
Kind regards,
I suppose we could now debate how likely it is that CO2 is responsible for all the observed warming but, for now, thank you for your patience.
Here’s some historical background on the greenhouse effect, defined as the tendency of a minority of Earth’s atmospheric gases to allow inbound radiation to pass freely through the atmosphere while inhibiting its departure. It’s quite long, so let me break it into three parts: those who discovered and refined it, those who contradicted it, and (my perspective on) the underlying physics. Here’s Part 1.
Pre-1767: Several thousand years of experience with creative ways of keeping houses warm in winter and protecting plants from frost that the margin of these notes is too small to contain.
1767: Swiss physicist and alpinist Horace de Saussure puzzles over why mountain tops are so much colder than valleys. Since they’re very slightly closer to the Sun, common sense suggests that they should be very slightly warmer. He guesses that the atmosphere is letting the sunlight in but is somehow blocking the heat of the warmed ground from escaping. Drawing an analogy between air and glass he invents the solar oven, a box with a stack of three sheets of glass in the top that he was able to cook fruit in. He tests its efficacy at the top and bottom of a mountain and finds little difference, which he interprets as meaning that at the top of the mountain the glass blocked the heat even though there was relatively little atmosphere up there to block the heat. He guesses that the atmosphere must be doing something similar lower down, just not so strongly as three sheets of glass.
Early 1800s: French mathematician and physicist Joseph Fourier develops his theory of heat, which incorporates de Saussure’s theory while extending it creatively but not terribly accurately (he greatly overestimated the temperature of outer space).
1860s: English physicist John Tyndall invents the ratio spectrophotometer in order to test the radiant heat absorbing power of various gases. In section II-4 of his Fragments of Science he lists 13 such gases, noting that CO2 absorbed 972 times more strongly than hydrogen, nitrogen, or oxygen, ammonia 5,460 times, and sulfurous acid 6,480 times. (Venus’s atmosphere is 97% CO2, but the sulfur dioxide clouds greatly increase the greenhouse effect of Venus’s atmosphere, raising it much higher than it would be with CO2 alone.) Tyndall was the first to realize that water vapor was Earth’s main greenhouse gas (unlike Venus for example). The only reason we care more about CO2 than water vapor today is that CO2 is changing much faster than water vapor.
1896: Swedish chemist Svante Arrhenius derives his logarithmic dependency of temperature on CO2 mixing ratio, and estimates 5 °C per doubling of CO2 at the equator and 6 °C at the poles. Arrhenius looks forward to the additional warmth that the rapidly increasing industrial CO2 will create for the planet (Sweden is much colder than Italy, had Arrhenius been Italian he might have been the first global warming alarmist).
1904: Arrhenius reduces his original number to 1.6 °C per doubling and emigrates to Italy.
1938-1964: English steam engineer and inventor Guy Callendar writes 35 articles on global warming, infrared radiation, and human generated CO2. Like Arrhenius, Callendar felt that global warming would be a good thing for the planet as it would postpone the next ice age. (England is not quite as cold as Sweden thanks to the Gulf Stream, but it makes up for that in having the climate of Seattle and Vancouver. No sane Italian tourist seeking even more warmth would choose those travel destinations over the Bahamas.)
1956: Canadian physicist Gilbert Plass began writing about the hazards of global warming. Plass pegged climate sensitivity at 3.6 °C per doubling of CO2 and predicted that in 2000 the CO2 and temperature would be respectively 30% and 1 °C more than in 1900. These predictions were remarkably prescient: 30% was spot on while the temperature rise was a tad under 0.9 °C (but then 1910 was a tad over 0.1 °C colder than 1900). (Canada is even colder than England. Whatever was Plass smoking?)
1960-1986: A vast amount of research by a great many physicists and environmental scientists, none of whom appeared to have any doubts as to the soundness of the basic principles of the greenhouse effect.
1988: The World Meteorological Organization and the United Nations Environment Programme coordinated to form the Intergovernmental Panel on Climate Change (IPCC) to “assess the scientific, technical and socioeconomic information relevant for the understanding of the risk of human-induced climate change.”
(Note the term “risk of change” as opposed to “risk of warming.” If it turned out an ice age was in the offing, the IPCC was just as responsible for addressing that risk as for melting the polar ice caps.)
This leaves out a huge amount, including the roles of oceanographer Roger Revelle and politician Al Gore, but it’s already too long and I haven’t even gotten to Part 2 yet, let alone Part 3, so let me wrap up Part 1 here.
Oh, and Arrhenius didn’t really emigrate to Italy, I just made that up. The rest is historically accurate to the best of my knowledge.
you are earning many quatloos :)
Part 2: The Skeptics.
1909: The first serious skeptic was the famous Johns Hopkins physics professor Robert W. Wood, who published two papers in Phil. Mag. 1909, one in February and one in November, claiming in the first on the basis of a barely documented experiment (65 °C and 55 °C were the only two numbers in the entire 1.5 page paper) that greenhouses were warmed by trapping air, not by trapping infrared radiation, and that therefore the Earth could not be warmed in this way either. Both papers were rebutted in the same volume of that journal, the first in the July issue by the then Director of the Smithsonian Observatory Charles Greely Abbot, later Secretary of the Smithsonian from 1929 to 1945, the second (on the Talbot effect, nothing to do with global warming) by no less than Sir Arthur Schuster, author of the optics text Wood was taking exception to, in the November issue. More details at Wood in 1909.
1974: The question of whether greenhouses kept their contents warm by trapping infrared or merely trapping air was hotly debated in Journal of Applied Meteorology in 1974 and again in Optical Spectra in 1975, leading to yet further discussion at the time in Science, New Scientist, and Popular Science. Googling “The atmospheric science community seems to be divided” (in quotes) gives a more complete story as told by meteorology professor Craig Bohren, now Distinguished Professor Emeritus in the Department of Meteorology in the College of Earth and Mineral Sciences at Penn State.
1986: The next major skeptic is William Nierenberg. Now I have to say that I am a great admirer of Nierenberg, whose story is told here. To my thinking it is tragic that such a great mind could end its scientific career arguing that the previous two centuries of research into the greenhouse effect was complete and utter BS and that Earth was warming up merely on account of increasing solar temperature. Part of that tragedy is that all subsequent investigation into Nierenberg’s claim failed to bear it out. Had he been right he would have been a hero.
1996: The George C. Marshall Institute accused IPCC Lead Author Ben Santer of “scientific cleansing.” Oregon biochemist Arthur B Robinson further claimed that Santer had “deliberately omitted data points to create the trend that they reported… So Santer clearly faked the result, circulated it during IPCC proceedings in order to influence world global climate policy. They should never be permitted to work in science again.”
One might think this charge a little strong were it not for the fact that Robinson also charges that Darwinian natural selection is incorrect and has written a book about how to survive a nuclear war. Sympathizers with these viewpoints will find Robinson a kindred spirit.
Fans of Steven Spielberg’s wonderful movie “Close Encounters of the Third Kind” will enjoy Santer’s takeoff of it at
Close encounters of the absurd kind, where Santer defends himself against these attacks by picking off those marksmen blazing away at him one by one with evident glee. Rarely do I want to be anyone but me, but on that occasion I would have given anything to be Ben Santer demolishing these arachnotrons, revenants, and cacodemons with the weapons granted me by logic and common sense.
The “web’s longest-running climate change blog” can be accessed from Pat Michaels’ New Hope Environmental Services website by clicking on “World Climate Report” which says,
As Americans sit down on Thursday and give thanks for the food on their table, hopefully many will say a good word for enhanced atmospheric carbon dioxide levels, as much of the foodstuff filling their plates will have benefited from the anthropogenically-elevated CO2 levels in the air.
And even if we don’t produce the food ourselves, we contribute to the growth of crops around the world through emissions of CO2 whenever we use energy derived from fossil fuels—a large component of the energy that we use to power our daily lives.
To which Svante Arrhenius and Guy Callendar would offer up a heartfelt amen. But not Gilbert Plass or his successors.
As for myself, I rather like that idea. It’s an experiment whose outcome we’re going to see whether we like it or not. Since it’s inevitable, the optimist inside me is telling me we’re going to like it. I’ve told the pessimist to bugger off.
Part 3: The underlying physics
The Sun heats the surface of the Earth with little interference from Earth’s atmosphere except when there are clouds, or when the albedo (reflectivity) is high.
In the absence of greenhouse gases like water vapor and CO2, Earth’s atmosphere allows all thermal radiation from the Earth’s surface to escape into the void of outer space.
The greenhouse gases, let’s say CO2 for definiteness, capture the occasional escaping photon. This happens probabilistically: the escaping photons are vibrating, and the shared electrons comprising the bonds of a CO2 molecule are also vibrating. When a passing photon is in close phase with a vibrating bond there is a higher-than-usual chance that the photon will be absorbed by the bond and excite it into a higher energy level.
This extra energy in the bond acts as though it were increasing the spring constant, making for a stronger spring. The energy of the captured photon now turns into vibrational energy in the CO2 molecule, which it registers as an increase in its temperature.
This energy now bounces around between the various degrees of freedom of the CO2 molecule. And when it collides with another atmospheric molecule some transfer of energy takes place there too. In equilibrium all the molecules of the atmosphere share the energy of the photons being captured by the greenhouse gases.
By the same token the greenhouse gases radiate this energy. They do so isotropically, that is, in all directions.
The upshot is that the energy of photons escaping from Earth’s surface is diverted to energy being radiated in all directions from every point of the Earth’s atmosphere.
The higher, the cooler, with a lapse rate of 5 °C per km for moist air and 9 °C per km for dry air (the so-called dry adiabatic lapse rate or DALR). (“Adiabatic” means changing temperature in response to a pressure change so quickly that there is no time for the resulting heat to leak elsewhere.)
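A small numeric illustration (Python) of those lapse rates, with an assumed 15 °C surface temperature:

    surface_T = 15.0           # assumed surface temperature, C
    moist, dry = 5.0, 9.0      # lapse rates quoted above, C per km

    for km in (1, 5, 10):
        print("%2d km: moist %+.0f C, dry %+.0f C"
              % (km, surface_T - moist * km, surface_T - dry * km))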
Because of this lapse rate, every point in the atmosphere is receiving slightly more photons from below than from above. There is therefore a net flux of photonic energy from below to above. But because the difference is slight, this flux is less than it would be if there were no greenhouse gases. As a result greenhouse gases have the effect of creating thermal resistance, slowing down the rate at which photons can carry energy from the Earth’s surface to outer space.
This is not the usual explanation of what’s going on in the atmosphere, which instead is described in terms of so-called “back radiation.” While this is equivalent to what I wrote, it is harder to see how it is consistent with the 2nd law of thermodynamics. Not that it isn’t, but when described my way it is obviously thermodynamically sound.
Thank you very much for this, this was the sort of thing I was looking for, now we need a cartoon or animated version of this.
That’s a wonderful story, Dr. Arrhenius!
Ever thought of having a blog?
Ever thought of having a blog?
No, have you?
I do have a noninteractive blog of sorts, Sayings of Chairman Pratt, not limited to any particular topic. It’s not something I put a lot of time into though.
> No, have you?
Yes, but I prefer a collection of snippets:
http://neverendingaudit.tumblr.com
I don’t believe we should overburden people to shuffle through too many sites to follow the discussions, at least the kind I can provide.
If you can put all your knowledge in a few posts, may I suggest that you offer Dr. Curry a guest post?
Dr. Lacis too might entertain this idea. I believe there is a need for the kind of input both of you are adding here. It could be here or elsewhere.
Speaking of elsewhere, what both of you are talking about reminds me of
http://www.scienceofdoom.com
This site may be of some interest for Mr. Bryant.
Guest posts would be welcome
Note that HR’s comment appeared in the RSS feed.
A collection of snippets:
http://neverendingaudit.tumblr.com
Thanks for the pointer to your site. My “Sayings of” site is a blog in pretty much the same sense as yours; indeed one of its functions is as a repository for posts of mine scattered elsewhere that I thought were worth collecting in one place. I should do that with my Arrhenius story.
http://www.scienceofdoom.com
This site may be of some interest for Mr. Bryant.
Thanks again. I assume AB would respond to SoD with similar logic to that in The Hockey Schtick, their styles seem very similar.
If you can put all your knowledge in a few posts, may I suggest that you offer Dr. Curry a guest post? Dr. Lacis too might entertain this idea.
Would that be useful? My training was in physics and pure maths, not environmental sciences, so anything I’d have to say would be relatively uninformed compared to what a climate expert could offer.
In fact this whole global warming thing only came to my attention 18 months ago when I read about Klaus Lackner’s CO2 air capture work and wondered if I could improve on it using radiation emission from CO2 lasers or simply from hot CO2 to steer CO2 molecules. I also looked into a cryogenic approach based on a counterflow heat exchanger that I later found out was being worked on by Exxon at the time.
Eventually I realized that humans could not in the foreseeable future come remotely near competing with nature in mopping up atmospheric CO2, and that any effort put into direct air capture of CO2 would be at least an order of magnitude more effective if expended on reducing CO2 emissions. If we simply stopped increasing our CO2 emissions today and held them constant at the present rate of 30 Gt/year, the Keeling curve would slow down and within a few decades level off at around 100 ppmv over its present level, as high again as it has come since 1900 but no further. This is in sharp contrast to business-as-usual, for which we’re looking at 1000 ppmv in the early 22nd century.
Around that time I started noticing the work of the denial machine and became fascinated by the ease with which it had been able to change public opinion. This led to interest in the outreach problem of making it easier for the public to understand GW, which in turn led to the scientific problem of why the temperature was so badly correlated with CO2 over the past 160 years, a stumbling block for the public as accounting for it seemed to require what I felt were unnecessarily delicate statistical methods to tease out the trends from the noise.
While I enjoy explaining stuff, whether on blogs or in classrooms, a habit picked up from teaching at MIT and Stanford since 1972, my heart belongs to research, and my preference is to spend less time blogging and more time working on this scientific question of why temperature is not better correlated with CO2.
Vaughan Pratt,
Before we end up crushed by the left margin, let me tell you that if you offer Judith a blog post about what you said so far, she might consider it. Even Real Climate might take a look at it. It’s basic, but it’s clear, concise, and, as far as I can tell, correct. It certainly raises the level of debate, which is what Judith wants:
http://judithcurry.com/2010/11/26/raising-the-level-of-the-game
Comparing to an Italian flag and to the solution of the problem of induction suffices to show that your modesty is most certainly misplaced.
I would definitely host a post on all this by Vaughan Pratt.
Regarding multiple-valued logics and their role in the management of uncertainty (which I’ve been teaching at Stanford for decades under the guise of intuitionistic and other nonclassical logics), I could imagine their being applicable to the uncertain aspects of the climate. However I don’t see what they have to offer that would decrease the uncertainty of the public. My preference is the naive one of sorting knowledge according to how certain we are of it and starting with the most certain parts, working one’s way towards less certain things. Introducing uncertainty before gaining familiarity with what’s certain is not a great way of gaining expertise in a subject. The experts at least know what they know, whether or not they know what they don’t know.
Okay, let me put it on my stack. I won’t be able to get to it right away however as I have a couple of urgent projects.
Willard,
Thanks for the heads up. Looks interesting. I’ll investigate.
Thank you.
Check the Earth’s Energy Imbalance paper by Hansen et al. (2005, Science 308, 1431-1435), available also through the GISS webpage http://pubs.giss.nasa.gov/abstracts/2005/
The total GHG forcing is about 3 W/m2 since 1880. There are some other small positive forcings and a large negative forcing due to aerosols, leaving a total effective forcing of about 1.8 W/m2. Using a climate sensitivity of 3 C/4 W/m2, results in a global equilibrium warming of 1.35 C, of which about 0.8 C has already been realized. Because of the large heat capacity of the ocean an additional 0.6 C is still in the “pipeline” to be realized in due time without further increase in radiative forcing.
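A back-of-envelope check (Python) of the numbers in the preceding paragraph:

    effective_forcing = 1.8     # W/m2, net of aerosols and minor forcings
    sensitivity = 3.0 / 4.0     # C per W/m2, i.e. 3 C per 4 W/m2

    equilibrium = effective_forcing * sensitivity   # ~1.35 C
    realized = 0.8                                  # C, already observed
    print("still in the pipeline: ~%.2f C" % (equilibrium - realized))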
There is another Hansen et al. paper (2008) in Open Atmospheric Science Journal, 2, 217-231, that describes the climate time rate of approach to equilibrium. The ocean mixed layer takes about 5 years and accounts for about 40% of the total equilibrium heating. About 60% is reached in about 100 years, with the balance taking as long as 1500 years, as heat diffuses into the deep ocean.
There is another Hansen et al. paper (2008) in Open Atmospheric Science Journal, 2, 217-231, that describes the climate time rate of approach to equilibrium. The ocean mixed layer takes about 5 years and accounts for about 40% of the total equilibrium heating. About 60% is reached in about 100 years, with the balance taking as long as 1500 years, as heat diffuses into the deep ocean.
For this audience it may be worth pointing out that numbers like 1500 years are merely what the model indicates. In reality the butterfly effect necessarily adds error bars to such predictions. In this case it is plausible that the 40% balance will have been absorbed into the noise of other large heating or cooling effects that the model has no way of predicting. The meaning of 1500 years here is not so much to predict what will have happened by then as to give an idea of the time constants for such processes.
Methods for estimating the signal-to-noise ratio as a function of time would be very useful in these situations, for example telling a modeling program when it has reached a point in the simulation where no further meaningful information can be obtained. This would allow it to turn its attention to the first thousand years of the next simulation instead of waiting for the second thousand years of the present simulation to complete.
There is a limited amount of energy to attribute. If you argue for a high long term climate sensitivity then you must explain why there is less current forcing than estimated. If there is less current forcing then the long term climate sensitivity to the current forcing is also less. The alternative to this is an explanation of why the warming periods in the late 19th and early 20th centuries have a different set of applied principles.
“The total GHG forcing is about 3 W/m2 since 1880. There are some other small positive forcings and a large negative forcing due to aerosols, leaving a total effective forcing of about 1.8 W/m2. Using a climate sensitivity of 3 C/4 W/m2, results in a global equilibrium warming of 1.35 C, of which about 0.8 C has already been realized. Because of the large heat capacity of the ocean an additional 0.6 C is still in the “pipeline” to be realized in due time without further increase in radiative forcing.”
OK, let’s assume you are correct. Let’s assume the radiative forcing theory is correct. Let’s assume that the warming is solely due to CO2 (which is what you’re saying when ‘0.8 C has already been realized’). Let’s assume that the heat ‘still in the pipeline’ is waiting for ‘due time’ (is there some teleological aspect here?). Put together, we will have an assumed 3 C climate sensitivity. Lots of assumptions there…
So how does that answer my question about the contribution of CO2 to the Greenhouse Effect supposedly being 20% (or 6.6 C)? Also, the ‘heat in the pipeline’ has had over 150 years to start ‘doing its thing’. What is it waiting for?
Andy Lacis:
the climate time rate of approach to equilibrium. The ocean mixed layer takes about 5 years and accounts for about 40% of the total equilibrium heating. About 60% is reached in about 100 years, with the balance taking as long as 1500 years, as heat diffuses into the deep ocean.
So if the heat from the atmosphere is going to diffuse into the ocean in Hansen’s view, then where is Trenberth’s ‘missing heat’ hiding? Not in the ocean obviously.
And by what mechanism does Hansen propose the atmosphere is going to heat the ocean? The ocean is on average 3C warmer than the atmosphere. Longwave radiation can’t penetrate the ocean beyond its own wavelength. So the downwelling energy is concentrated at the surface where it causes evaporation, a net loss of energy from the ocean. The Minnett skin differential theory is a crock, because low windspeeds over the surface will tend to keep the surface and the water immediately below it mixed to a state of thermal neutrality. The back radiation to the surface isn’t going to get mixed down much by this though, because free convection keeps the less dense warmer molecules of water moving upwards…
The reality is that the climate works the other way round. The Sun heats the ocean. Reduced cloud cover between 1980-1998 as empirically measured by the ISCCP increased insolation *at the surface* which led to the ocean warming. This extra ocean heat content raised SST which radiated at a higher rate into the atmosphere, warming it. The lower troposphere temperature changes lag 3-6 months behind SST changes so the arrow of causality is pretty clear. The heat in the atmosphere then escapes to space via radiation from the upper troposphere. The altitude at which it does this is the only issue regarding the greenhouse effect we need to be concerned with. The increase in CO2 has theoretically raised this altitude around 100 meters, but how much has it been lowered by the reduction in specific humidity since 1948 at the tropopause? And why does the level of specific humidity correlate with solar activity levels?
http://tallbloke.files.wordpress.com/2010/08/shumidity-ssn96.png
Ocean heat content has dropped since 2003 after cloud cover increased again after 1998 (Palle et al). CO2 is increasing at the same rate it has for a long time. Ocean heat content correlates better with a cumulative count of solar energy output departing from the ocean equilibrium value than it does with CO2 levels.
http://tallbloke.wordpress.com/2010/07/21/nailing-the-solar-activity-global-temperature-divergence-lie/
I don’t see why anyone should take what Michaels says at face value. He has a history of making false arguments. He also is a private consultant for coal and oil companies on environmental matters, and is part of the Cato Institute, a right-wing conservative think tank also funded by the energy industry. If Judith Curry is concerned about the politicization of Science, why is she giving any credibility to Michaels? It doesn’t make any sense.
http://motherjones.com/blue-marble/2010/02/pat-michaels-climate-skeptic
I don’t see why anyone should take what mainstream climatology says at face value. It has a history of making false arguments. Most of them work for the state, an inherently leftwing organisation with a vested interest in spreading alarmism. If Judith Curry is concerned about the politicization of Science, why is she giving any credibility to these government stooges? It doesn’t make any sense.
Most of them work for the state, an inherently leftwing organisation with a vested interest in spreading alarmism. If Judith Curry is concerned about the politicization of Science, why is she giving any credibility to these government stooges?
I see I was mistaken focusing on the science in my previous reply to you. Obviously I should have been focusing on the politics.
Arfur,
On the assumption that you are genuinely asking exploratory questions, rather than winding up Vaughan Pratt in a fairly nice way, I should point out that a lot of what he has said about climate sensitivity – including the delightful story about Arrhenius – is highly misleading.
The logarithmic relationship between CO2 concentration and temperature, which Vaughan is relying on, is an EQUILIBRIUM relationship. I will touch on its derivation in a moment, but the main point is that it predicts (or tries to predict) the change in average temperature which the planet would eventually get to if atmospheric CO2 were to be increased to a certain level and THEN REMAIN AT THAT LEVEL UNTIL EQUILIBRIUM IS RESTORED. Theory says that the injection of a slug of CO2 into the atmosphere should cause a perturbation in outgoing LW – initially a decrease followed by an increase until TOA radiative balance is again achieved; this triggers a series of feedbacks, including the heating of the planetary surface itself. Since these are time-dependent processes, one cannot take a CO2 observation and a temperature observation and just plug the values into a sensitivity calculation. This is what Andy Lacis was referring to when he wrote:
“Using a climate sensitivity of 3 C/4 W/m2, results in a global equilibrium warming of 1.35 C, of which about 0.8 C has already been realized. Because of the large heat capacity of the ocean an additional 0.6 C is still in the “pipeline” to be realized in due time without further increase in radiative forcing.”
As for the derivation of this relationship, physics allows us via the radiative transfer equations to do a reasonable job of estimating the net change in radiative forcing associated with a change in atmospheric CO2 concentration assuming no feedback effects. By applying the RTE to different levels of CO2 in differing sky conditions one can derive an approximately logarithmic relationship between CO2 and radiative forcing. Up to this stage we are still on fairly solid physics – agreed by most skeptics and warmists apart from fringe elements. The next step, however, which is to relate a change in planetary temperature change to the CO2-induced radiative forcing, is art rather than science. One proof of this pudding is that the IPCC recognises a factor of 3 in the range of estimates of climate sensitivity to CO2 – AR4 cites between 1.5 and 4.5 degree C for a doubling of CO2 – but many skeptics and warmists would argue that the range is too small albeit with focus on different ends of the range.
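For concreteness, here is a hedged sketch (Python) of the two-step calculation described above. The simplified forcing expression dF = 5.35 ln(C/C0) (Myhre et al. 1998) is an addition here, since the comment cites no specific formula:

    from math import log

    def forcing(c, c0=280.0):
        return 5.35 * log(c / c0)    # W/m2, no-feedback CO2 radiative forcing

    dF_doubling = forcing(560.0)     # one doubling of CO2: ~3.7 W/m2
    for dT in (1.5, 3.0, 4.5):       # AR4 range, C per doubling
        print("%.1f C per doubling -> %.2f C per W/m2" % (dT, dT / dF_doubling))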
The logarithmic relationship between CO2 concentration and temperature, which Vaughan is relying on, is an EQUILIBRIUM relationship.
Paul, I’d say you’re failing to see the woods for the trees. Arfur is arguing that, considered globally, temperatures have increased only 0.8 °C over 160 years, a rate of 0.05 °C per decade, whence there’s nothing to worry about since such a rate cannot possibly pose a threat.
If you agree that Arfur’s reasoning is sound then we have nothing further to discuss about the many different notions of climate sensitivity, since the distinctions between equilibrium climate sensitivity, transient climate response, observed climate climate sensitivity (googling that in quotes turns up over 50,000 pages), etc. are to Arfur’s understanding of climate science as the different interpretations of quantum mechanics (Copenhagen, Bohm, Everett, path integrals) are to someone disputing F = ma. I can then address both of you in continuing my argument against Arfur’s 0.05 °C per decade trend and ignore what you have to say about climate sensitivity.
If however you are on my side regarding the soundness of Arfur’s reasoning, then and only then can we discuss whether all non-equilibrium relationships between CO2 and temperature are non-logarithmic, as the second half of your sentence seems to be implying.
Actually zero quotes—I typed an extra “climate” by mistake, 53,700 without it.
Hi Vaughan,
I’m pretty bemused by your comments. Arfur is making an observation about the instrument record, which in itself is perfectly valid, and then asking a question based on that observation. I can see no basis in this on which to choose “sides”.
Some of your responses however showed confusion between, on the one hand, the calculation of an equilibrium temperature response to an impulse-based CO2 injection, and, on the other hand, the calculation of expected temperature response to changes in CO2 which come from an observed time series. I therefore thought it was worth taking a few minutes to clarify the difference, nothing more than that.
I have no idea what your last paragraph means, but since it suggests that I have to take sides before you are willing to enter a science-based discussion on something, I would rather politely decline.
Arfur is making an observation about the instrument record, which in itself is perfectly valid, and then asking a question based on that observation. I can see no basis in this on which to choose “sides”.
Meanwhile you might not have to choose since we appear to have reduced the disagreements as of Arfur’s Nov 28 3:32 pm message.
Googling “observed climate sensitivity” in quotes turns up 53,700 pages. If even one of those pages suggests that observed climate sensitivity might be other than a logarithmic dependence of temperature on CO2 I’d love to see it. The idea that there is a single notion of climate sensitivity is on its way out.
Paul,
Thanks for your input. I am seeking answers to exploratory questions but it’s nice to know that, even if I’m winding Vaughan up, it is in a nice way!
I am genuinely confused regarding the impact of the log effect on CO2 warming. I have twice asked a question which vexes me and have had no answer from either A Lacis or Vaughan. If you can help, I would appreciate it. It is this:
If the contribution by CO2 to the Greenhouse Effect is, according to A Lacis, 20% (according to RealClimate and Kiehl et al 1997 the figure is 9-26%), how is it possible that a 40% increase in CO2 can lead to a mere 0.8 deg C? The figure of 0.8 C assumes that ALL of the warming is due to CO2 and includes ALL other factors, both positive and negative. Even given the probable logarithmic effect, surely the increase in CO2 should have caused a much larger warming by now? (This is just a follow-on question!)
Honestly, if I could just get an answer to this apparent anomaly, I feel I would have a better handle on the subject. Of course, it could just be that the contribution of CO2 to the GE is MUCH less than A Lacis’ 20%. In which case, why do these figures keep being pushed by the pro-AGW camp?
Ok, maybe more than one question, but you get my drift…
Regards,
If the contribution by CO2 to the Greenhouse Effect is, according to A Lacis, 20% (according to RealClimate and Kiehl et al 1997 the figure is 9-26%), how is it possible that a 40% increase in CO2 can lead to a mere 0.8 deg C?
This would be a great question were it not for the caveat in my final paragraph below. The answer is because temperature depends logarithmically on CO2. (The issue of equilibrium climate sensitivity raised by Paul is a red herring, the logarithmic dependency is still true with other notions of climate sensitivity such as transient climate response or observed climate sensitivity.)
While 40% may seem like a big increase, as Arrhenius pointed out in 1896 you have to take its logarithm in order to see how much the temperature increases. lb(1.40) = 0.4854, whence a 40% increase will raise the temperature by 0.4854*s where s is whichever notion of climate sensitivity is applicable to the current situation (which depends on how you define “will” — do you mean in 10 seconds, 10 years, 10 centuries, or ∞ centuries, the last being the definition of “equilibrium sensitivity”).
In particular if the climate sensitivity s is, for the sake of argument, 2 °C per doubling, then a 40% increase in CO2 will raise the temperature by 0.97 °C. We’ve been through this math several times now and it just doesn’t seem to register with you for some reason.
I have twice asked a question which vexes me and have had no answer from either A Lacis or Vaughan.
I’m beginning to suspect that you’re just jerking my chain, Arfur Bryant. You could not get away with repeatedly asking questions like that in a classroom while completely ignoring my answers because the other students would bodily carry you out of the room as a disruptive influence. I’m getting the impression you’re doing this deliberately.
Vaughan – or ‘Sir’…
I don’t care what the other students think. They can carry me bodily out if they want – it doesn’t make them, or you, right. The Emperor had lots of acolytes telling him he was right but it took a small boy in the crowd to show he was wrong. I do not pretend to be that boy, but if you want to persuade me you are right, you need something better than an argument based on quicksand.
However, I do respect your calculations. Just out of interest, why do you use a binary log? Not saying it is wrong, just want to know why it’s right.
But let’s assume (!) your lb(1.40) is correct. If we use your favoured sensitivity of 3C, the answer comes out at a rise of 1.46 C for a 40% rise in CO2. A sensitivity of 2C gives 0.97C and a sensitivity of 1C gives 0.48. Happy so far?
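For readers who want to check this arithmetic, a minimal Python sketch of the log law; the 40% ratio and the three sensitivities are the ones quoted in the exchange above.

```python
import math

RATIO = 1.40  # a 40% increase in CO2

# Warming = s * log2(C/C0): the log law with sensitivity s in C per doubling.
for s in (3.0, 2.0, 1.0):
    print(f"s = {s:.0f} C/doubling -> {s * math.log2(RATIO):.3f} C")
```

This reproduces the 1.46, 0.97 and 0.48 figures (to rounding) used by both parties.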
As it happens, the only answer supported by evidential data is the latter – a climate sensitivity of just 1 deg C per doubling – UNLESS you make the following assumptions:
a. ALL the warming is due to CO2.
b. More heat is in the ‘pipeline’
As far as I know, no-one, not even the IPCC, thinks that ‘a’ is the case. Hence the observed temperature rise is more consistent with a sensitivity of 1C, not your 3C, and not Arrhenius’ 5-6C.
As to assumption ‘b’, there is absolutely NO evidence to support such an assumption.
So, if you wish to argue the log case with a 3C sensitivity, in approximately another 340 years we may have raised the temperature by about 3C, I agree. Er, if Arrhenius was right.
However, an assumption is not a very convincing base on which to ridicule a student and try to get him to join the ‘groupthink’ mob, is it?
I am not jerking your chain, Vaughan, but I suspect Arrhenius is pulling your strings…
Arfur,
You state that the IPCC, as far as you know, does not believe that all of the warming is due to CO2. Well actually, over the instrumental period, the IPCC infers that the increase in CO2 and other GHGs should have increased the temperature by a LARGER amount than the (total) observed temperature change. To bring the calculated temperatures into line with observations, each of the AOGCMs uses variation in aerosols to introduce a net negative forcing which is used as one of the matching parameters to offset the “overprediction” of temperature gain from GHGs alone. Hence, it is still possible to match the temperature dataset with either a high or low climate sensitivity to CO2 by postulating respectively either a large or small effect (i.e. net negative forcing) from atmospheric aerosols.
However, I do respect your calculations. Just out of interest, why do you use a binary log? Not saying it is wrong, just want to know why it’s right.
Because the result is in units of degrees per doubling. If it were in units of degrees per order-of-magnitude (10x) increase one would have to use decimal logs. Similarly if it were degrees per increase by e then natural logs.
Alternatively one can use natural logs but then divide the result by ln(2).
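A quick numerical check of the base conversion, using the 40% ratio already under discussion:

```python
import math

r = 1.40  # the CO2 ratio already under discussion
print(math.log2(r))                   # 0.48543: the degrees-per-doubling form
print(math.log(r) / math.log(2))      # identical, via natural logs
print(math.log10(r) / math.log10(2))  # identical again, via decimal logs
```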
And another thing…
There are at least a couple of other assumptions you need to make for your case to have merit:
c. All temperature measurement devices have been accurate to at least one tenth of a deg C.
d. The base level of CO2 is 280 ppmv.
An individual-thinking student may be tempted to ask ‘What about the doubling from 140 to 280? How much warmth did that give? And 70 to 140? And… well, you know. How did we get to the statement ‘CO2 is responsible for 20% (6.6 C)’ just by allocating a starting point of 280 ppmv?’
Can you tell us when in Earth’s history CO2 was at 140 ppm? 70 ppm? And what the temperature was?
Ok, fair call. I was getting carried away. The level of CO2 has probably always been 280 ppmv, and this has always contributed 6.6 K to the greenhouse effect…
The level of CO2 has probably always been 280 ppmv, and this has always contributed 6.6 K to the greenhouse effect…
What’s your source for the 6.6 K figure? (Chris Colose had the same question.)
As far as 280 ppmv goes, that’s a very ballpark figure for around 200 years ago and could well be as low as 270 then, and 260 ppmv a few centuries before that, hard to say for sure.
During the last million years the ice core data tells us that CO2 oscillated over the range 180-280 ppmv synchronously with the temperature to within some 800 years, with temperature leading CO2 at the start of each rise. The period of these oscillations is remarkably close to 100,000 years.
50 million years ago the CO2 level was 3.5‰ (3500 ppmv) and there were no significant ice sheets. Over an 800,000 year period the CO2 dropped to 0.65‰, creating “icehouse Earth” with its polar ice caps. There was a sharp rebound 25 million years ago causing Antarctic thawing that might have been due to release of sequestered methane clathrates, which persisted for 10 million years before gradually cooling over another 10 million years. This brings us up to 3 million years ago, at which point it was so cold that Greenland and the northern part of North America froze over and developed seriously thick pack ice. A cycle of glaciations and deglaciations then set in, with much wider swings in temperature than had been seen in the previous 55 million years.
But not as wide as the remarkable swing seen 55.8 million years ago, the Paleocene-Eocene Thermal Maximum or PETM. In a mere 20,000 years the temperature rocketed up by an unprecedented 6 °C, or 0.03 °C per century. At that time many species of carbonate-based sea-bed organisms went extinct, while land mammals experienced great changes in species leading to the ones we’re most familiar with today.
Over the past half century the temperature has been rising at about 5 times the speed of the PETM and is projected to rise even faster during the coming few decades. CO2-induced ocean acidification appears to be shrinking the populations of some carbonate-based organisms such as krill, copepods, and coral but not others such as crabs and lobsters. Plants on the other hand are enjoying the extra CO2, and mammals aren’t bothered by it (the PETM would have taken care of any that would have been bothered).
If you go way back in time, 500 million years ago, the CO2 was around 6‰ (6000 ppmv), around 15 times the present level.
Penn State geoscientist Richard Alley, one of Pat Michaels’ fellow panelists in last fortnight’s Panel 2 to Congress, has a fascinating lecture at the 2009 AGU fall meeting titled “The Biggest Control Knob: Carbon Dioxide in Earth’s Climate History”, where he reviews the latest evidence for the role of CO2 in geology. If you find this sort of thing fascinating, as I do, it’s a terrific way to spend an hour.
A doubling is a doubling…the forcing is the same going from 280 to 560 as it is going from 500 to 1000, or whatever.
That’s pretty heavy stuff. Now, if you could just explain how we got 6.6 K worth of Greenhouse effect from 280 ppmv CO2…
what are you talking about?
chriscolose,
Please don’t think I’m being rude, but I do not wish to become distracted from what has been a most stimulating and good-humoured discussion with Vaughan. I have taken your comment on board. Thank you. I’m off to bed…
At the same time as accusing me of not paying attention, you have ignored my point about drawing trend lines to an exponential curve.
Ah, found it, it fell through a crack, sorry about that (lot of stuff for both of us to keep track of on this blog).
You, Nov. 26, 4:09 am:
Now take your 5-year smoothed graph. Draw a straight line from the start of data to the peak around 1880. Measure the trend at around 0.17 deg per decade. Now draw another straight line to ANY OTHER point on the graph along the x axis and you will get a decreased trend. The trend to 2010 is about 0.06 deg per decade. Hence a decrease.
The problem with that reasoning is that it assumes that CO2 is the only influence on temperature, which is clearly not the case. Back when CO2 was not having a discernible impact on climate, the natural temperature fluctuations, on the order of 0.15 °C on either side of the mean 19th century temperature, dwarfed anything attributable to CO2. In any random curve fluctuating by ±0.15 °C you will have no trouble finding two points that show a trend up, and two more that show a trend down. Those are meaningless since the curve is random.
There remains the question of how one could detect the CO2 signal in the presence of ±0.15 °C of noise. Well, you can’t, at least not while the CO2 signal remains below that level, which it did prior to 1950.
There is the further complication that during the first half of the 20th century the fluctuations increased to ±0.2 °C.
But over those 160 years the CO2 impact grew from essentially nothing to +0.8 °C, most of it in the last half century. Unlike the natural background, CO2 shows no ± fluctuation; instead it has grown with remarkable smoothness, as the Keeling curve shows. At that height CO2’s predicted contribution to global warming, based on the very-well-understood physics of infrared absorption by CO2, can be seen plainly sticking out of the background noise.
Interestingly the background noise dropped back to its earlier ±0.15 °C level during the second half of the 20th century. While less than during the first half, it was still enough to make one side of the debate loudly sound the alarm in the 1980s, and even more loudly in the 90s, while the natural background warming was adding itself to the CO2 warming, and the other side cry “false alarm” when the background turned around and became background cooling, subtracting itself during this decade. The subtraction essentially flattened global warming, though it didn’t drive it down as some were itching to claim, because by then CO2 was really on a roll and able to fend off the natural decrease.
There is an even better answer, which I’m currently writing up. Meanwhile you’ll have to settle for the above for now.
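Vaughan’s signal-versus-noise point is easy to illustrate with synthetic data. In the sketch below the ±0.15 °C noise band and the +0.8 °C endpoint come from his comment; the accelerating shape of the “CO2 signal” is invented purely for illustration.

```python
import math
import random

random.seed(1)  # reproducible noise

# 160 years of schematic data: natural fluctuations of roughly +/-0.15 C
# around zero, plus a smoothly accelerating "CO2 signal" that ends at
# +0.8 C. Only the 0.15 and 0.8 figures come from the comment above;
# the exponential shape of the signal is invented for illustration.
N = 160
signal = [0.8 * (math.exp(3 * t / N) - 1) / (math.exp(3) - 1) for t in range(N)]
temps = [s + random.uniform(-0.15, 0.15) for s in signal]

for t in (0, 50, 100, 159):
    visible = "sticks out of the noise" if signal[t] > 0.15 else "buried in the noise"
    print(f"year {t:3d}: signal {signal[t]:+.2f} C, observed {temps[t]:+.2f} C ({visible})")
```

Nothing about climate is being asserted here; the point is only that a smooth signal below the noise amplitude cannot be picked out by eye, while the same signal above it can.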
Vaughan,
I’d like to thank you again for our discussion. I have learnt a lot and i do hope you have not misinterpreted my questioning as obtuse. I will undoubtedly await your ‘better answer’ with relish but I hope you’ll forgive me for not accepting your answer for now (although it is a very well-written answer…).
“The problem with that reasoning is that it assumes that CO2 is the only influence on temperature, which is clearly not the case. Back when CO2 was not having a discernible impact on climate, the natural temperature fluctuations, on the order of 0.15 °C on either side of the mean 19th century temperature, dwarfed anything attributable to CO2. In any random curve fluctuating by ±0.15 °C you will have no trouble finding two points that show a trend up, and two more that show a trend down. Those are meaningless since the curve is random.”
Well, no, it doesn’t assume CO2 is the only influence. It makes no assumption at all. The argument rests on the facts. It makes no difference what the ‘influences’ are, the difference between an exponential curve and the observed temperature anomaly curve is plain to see. If the AGW theory is correct, the temperature should rise at an accelerative rate overall. The overall trend would eventually show an increase. It does not. No amount of specious arguments about ‘natural variation noise’ will change that fact – even if you attribute +/- 0.2 C to it. The temperature rise between 1875 and 1880 was about 0.45 C in a five year period! (A rise not matched since then…) As for finding two points on the curve to show trends ‘up’ or ‘down’, that is blatantly not what I have been saying. I have only used the trend from the origin in my line of argument, because that way we have an even playing field without cherry-picking.
“But over those 160 years the CO2 impact grew from essentially nothing to +0.8 °C,”
And:
“…because by then CO2 was really on a roll and able to fend off the natural decrease.”
I’m afraid this is where we are at odds to the greatest degree. You state the CO2 impact as if it were a proven fact. The “very-well-understood physics of infrared absorption by CO2” is not the same as proof. I realise that pro-(C)AGW proponents don’t like to be asked for ‘proof’ but it really is down to those who postulate such an idea to find some sort of data to back it up. The ability of CO2 molecules to absorb radiation may well be understood. There is, however, a huge leap of faith (nb) between that and its proposed ability to cause catastrophic (term as used by the IPCC) warming.
I confess I do not know what has caused the warming of the second half of the 20th Century. I do not know what caused the heating between 1875 and 1880.
I find it irrational to invent a false god (CAGW) simply because my knowledge is limited.
Thanks for a stimulating debate, Vaughan. You have my respect.
Regards,
The temperature rise between 1875 and 1880 was about 0.45 C in a five year period! (A rise not matched since then…)
That’s 0.09 °C per year. Here are five periods since then rising faster than that.
1893-1896: 0.11, 1912-1915: 0.10, 1950-1953: 0.10, 1976-1979: 0.11, 1994-1998: 0.10
As for finding two points on the curve to show trends ‘up’ or ‘down’, that is blatantly not what I have been saying. I have only used the trend from the origin in my line of argument, because that way we have an even playing field without cherry-picking.
Ah, my apologies, I’d overlooked that you were fixing the start. In that case suppose the temperature to be the mean of cos(x) and exp(x) where x is time in decades from the present. (So x = 0 is 2010, x = -16 is 1850, x = 2 is 2030, etc.)
Take the start of data to be 1869 or x = -14.1. The temperature then was 0.019, today it is 2, in 2030 it will be 3.5, in 2050 it will be 27.
Now let’s use your reasoning to prove that this is not a problem. Draw a straight line from the start of data to the peak of 0.496 around 1883 (x = -12.7). Measure the trend at around 0.34. Now draw another straight line to ANY OTHER point on the graph further along the x axis and you will get a decreased trend. The trend to 2010 is about .14. Hence a decrease.
So according to you we have nothing to worry about, even though the temperature will be 3.5 in 2030 and 27 in 2050.
Correction: today’s temperature is not 2 but 1. The trend from start of data to 2010 is therefore .07, not .14. Even smaller.
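For anyone wanting to reproduce these numbers, with the correction applied, a short Python evaluation of the toy world:

```python
import math

def toy_temp(x):
    """Vaughan's hypothetical world: the mean of cos(x) and exp(x),
    with x in decades from 2010 (x = 0 is 2010, x = -14.1 is 1869)."""
    return (math.cos(x) + math.exp(x)) / 2

start, peak = -14.1, -12.7
print(toy_temp(start))   # ~0.019 at the 1869 start of data
print(toy_temp(peak))    # ~0.496 at the ~1883 peak
print(toy_temp(0.0))     # 1.0 today (the corrected value)
print(toy_temp(2.0))     # ~3.5 in 2030
print(toy_temp(4.0))     # ~27 in 2050

# Trends per decade, anchored at the start of data as Arfur proposes:
print((toy_temp(peak) - toy_temp(start)) / (peak - start))  # ~0.34
print((toy_temp(0.0) - toy_temp(start)) / (0.0 - start))    # ~0.07: a "decrease"
```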
Vaughan,
I think you are beginning to lose sight of what is real here. I am probably guilty by not being specific enough…
“That’s 0.09 °C per year. Here are five periods since then rising faster than that.”
I apologise for being vague. I used rough figures. The actual figures I should have used are these:
1876-1878 = a rise of 0.4028 C, which is a trend of 0.2 deg C per year. This is almost twice as high as the five examples you showed. My source is: http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt
The point here is that this rise of 0.2 deg C per year is, by your own definition, caused by ‘natural variation’. I do not need to explain to you or any other pro-(C)AGW proponent that this knocks a fairly big hole in your assumption that the warming of the late 20th Century must be due to CO2.
“The temperature then was 0.019, today it is 2, in 2030 it will be 3.5, in 2050 it will be 27.”
I can’t believe you actually mean that. Seriously. The temperature will be 27 in 2050?
I think you need to take a deep breath, and I say that as someone who respects you.
I can’t believe you actually mean that. Seriously. The temperature will be 27 in 2050?
You may have missed where Dr. Pratt invited you to “suppose the temperature to be the mean of cos(x) and exp(x) where x is time in decades from the present.” It was a thought experiment to show the flaw in your argument. Dr. Pratt even used your words to show that.
I think you need to take a deep breath.
I do not need to explain to you or any other pro-(C)AGW proponent that this knocks a fairly big hole in your assumption that the warming of the late 20th Century must be due to CO2.
1. If by “due to” you mean “exclusively due to” then I cannot say I’ve ever heard of anyone who assumed that, can you? Certainly not me, in fact I’ve pointed to the historical record as proof that temperature cannot be under the exclusive influence of CO2. For starters we know other GHGs and aerosols both influence global temperature, and there may well be other major influences.
2. As I’ve said before, short term temperature trends prove nothing. There can be huge spikes in unsmoothed temperature for which we may sometimes have no explanation, and even 10-year trends can vary dramatically when the window is shifted by a single year (the point of my agwtrend and agwrise examples at tinyurl.com). Furthermore we don’t care about short-term trends since the concern is with what’s going to happen in 30 years, not 2 years. For these reasons I strongly recommend working with 10-year averaging (mean:120 on WoodForTrees.org).
3. Assuming the temperature has been so smoothed, it is 0.65 °C warmer today than any temperature prior to 1930. Prior to 1930 anthropogenic CO2 was negligible compared to what it has become since 1930. Between 1930 and now the CO2 rose from 300 ppmv to 390 ppmv. Subtracting the natural base of 280 ppmv from each to give the anthropogenic component, that’s a rise in anthropogenic CO2 from 20 to 110 or 450%. Temperature and CO2 are rising dramatically and synchronously.
In his congressional testimony Ben Santer repeatedly made the point that we have no other explanation for this remarkable correlation. One can make the further point that theory predicts that this amount of CO2 should cause such a rise in temperature.
You have offered a number of refutations of this line of reasoning. I have attempted to show why each was unsound. Do there remain any that I have not already so addressed?
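The arithmetic in point 3 is easy to verify; the 280 ppmv natural base is Vaughan’s assumption, carried over here:

```python
BASE = 280  # assumed natural CO2 level, ppmv (the figure used above)

anthro_1930 = 300 - BASE  # 20 ppmv above the natural base in 1930
anthro_now = 390 - BASE   # 110 ppmv above it now
rise_pct = (anthro_now - anthro_1930) / anthro_1930 * 100
print(f"anthropogenic CO2: {anthro_1930} -> {anthro_now} ppmv (a {rise_pct:.0f}% rise)")
```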
Vaughan,
You seem to be contradicting yourself.
In para 1, you say:
“…in fact I’ve pointed to the historical record as proof that temperature cannot be under the exclusive influence of CO2. For starters we know other GHGs and aerosols both influence global temperature, and there may well be other major influences.”
Whereas in para 3, you only mention CO2 and say:
“…that’s a rise in anthropogenic CO2 from 20 to 110 or 450%. Temperature and CO2 are rising dramatically and synchronously.”
Which is it? Of course I have never suggested anyone attributes the warming to ‘exclusively’ CO2. It has been, however, the main attributed cause of AGW. That much is clear. However, what you seem to be saying is that any warming is due to CO2 but any non-warming is due to ‘other influences’. These ‘other influences’ seem to pop out of the woodwork whenever an excuse is required to explain a lack of warming. How many pro-AGW proponents commented that ‘other influences’ may have been at work when the ‘Hockey Stick’ graph was produced?
I am very happy to dismiss short term trends, so let’s stick to the one trend: 0.8 C in 160 years. From all factors, both negative and positive. That’s it. In spite of your 450% increase in anthropogenic CO2.
The fact that Ben Santer – or you – or I – cannot provide another explanation is neither here nor there. The only proof that may convince you is a continuation of the current short term non-warming into something approaching a ‘long term’. We will have to wait.
I am very happy to dismiss short term trends, so let’s stick to the one trend: 0.8 C in 160 years.
I’ve already explained why doing so is fallacious, using the example of the mean of cos(x) and exp(x). You insist on interpreting that as my model of Earth’s temperature when all I’m doing is using it to point out the fallacy in your argument.
If you’re unable to accept that your reasoning is fallacious then we have the fallacy of global warming denial in a nutshell: deniers reject the hockey stick on the ground that 160-year smoothing erases it.
Put that simply, the global warming deniers will shrink to those who don’t mind using reasoning that anyone who knows what the cos and exp functions are can easily see gives wrong answers. PDA keeps trying to point this out to you by actually showing you the average of these two functions so you can see it with your own eyes, but you seem to have your head in the sand and refuse to accept that it demonstrates your reasoning is unsound. While I’m sure there are plenty of people as illogical as that, I seriously doubt whether they amount to 25% of those who know what cos and exp are. But I admit I could be wrong on that estimate.
I don’t actually believe you’re in that 25% or whatever it is. I believe you’re pulling my chain, as I said before. If you reject my explanation of global warming then I reject your claim that you’re not pulling my chain. Fair?
Which is it?
Did you know that Schroedinger actually owned a cat that was both dead and alive? He really did. One year it was alive, the next year it was dead. He was very sad.
It’s the same thing here. Before 1930 anthropogenic CO2 was negligible and we could easily observe other influences. In the past three decades CO2 has been rising so fast as to almost completely mask those other influences.
You’re trying to make it sound like I was claiming that these two things were happening at the same time, which is obviously not the case from what I wrote. One would have to be seriously aphasic not to see that. I doubt you are, which is why I am more convinced than ever that you’re just advancing ever more lunatic arguments in an ongoing attempt to keep pulling my chain. No reasonable human being would argue as illogically as you’ve been doing without some hidden motive for doing so.
Matt. 7:6
I was just curious to see whether he was capable of trampling my arguments. So far no luck there. Either my arguments are not pearls (of wisdom?) or he’s no swine.
But it’s a good point and I need to time out permanently on him shortly, as I did several months ago with Al Tekhasski, who you may have noticed has started posting here and whom I used to debate in the same way and with similar results before calling a permanent stop to it. While I can’t predict the short term trends in any debate with him I am well calibrated on the long term trends. He’s as predictable as global warming that way, and Arfur Bryant is turning out to be much the same. They both feign innocence when confronted.
It’s not clear to me you aren’t arguing they should both be happening at the same time at all. You have advanced the hypothesis of a long term climate sensitivity and have ignored my comment that the same principles should then be applied to previous forcings. My question now is: do you reject the hypotheses regarding long term climate sensitivity or do you support them? If you support them as was indicated earlier why do they not apply to previous forcings? Am I showing the lack of logic that you disdain in the public at large, or is it those who have seemed to ignore this who have shown a lack of logic? If it is me please point out why; I have been saying this for almost 2 years to climate scientists and have yet to receive a response that explains where I am making my error.
It’s not clear to me you aren’t arguing they should both be happening at the same time at all.
I’ve said it before and I’ll say it again. Before 1930 CO2 was not rising fast enough to notice, when compared with the 0.2 °C swings on either side (the amplitude of the swing, the range is twice the amplitude) that we could easily discern before 1930.
After 1930 CO2 was rising so fast as to cause heating that dwarfs those changes. From 1970 onwards we could no longer see the period-60-year swings that have been characteristic of global temperature since the 1500s, except for a quiet period in the late 17th century.
Does the before-1930/after-1930 thing make it clear that I’m arguing they are happening at different times? If not why not?
I have been saying this for almost 2 years to climate scientists and have yet to receive a response that explains where I am making my error.
Thousands of non-climate scientists, maybe hundreds of thousands, would like the same thing. If you aren’t getting personal attention from your local climate scientist then clearly their outreach program is underfunded. I suggest you contact your congressperson about fixing that if you are serious about getting an answer.
In the meantime ask some willing layman to explain it. I’m retired and it’s one way to spend my retirement. I just wish I knew how to get my hands on even ten of the $47 trillion that some climate deniers are saying is the amount that’s motivating the scientists and government officials behind what they believe to be a huge conspiracy. Maybe I should write a book for nonbelievers and sell it for cost plus one dollar. But I reckon I’d be lucky to make ten dollars that way.
I didn’t say I didn’t get a response, and I am intelligent enough to know what forum to expect one in. I said I didn’t get a response that explains where I was making my error. I have again received a similar response. This is what the paper you promoted says:
“Ocean-caused delay is estimated in Fig. (S7) using a coupled atmosphere-ocean model. One-third of the response occurs in the first few years, in part because of rapid response over land, one-half in ~25 years, three-quarters in 250 years, and nearly full response in a millennium.”
Now even if you state that natural forcing was overwhelmed in 1930, which I assume is in dispute, you still have reactions to previous forcings to account for. You are not talking to one of your students. Save your condescending manner for those that rely on you for grades. I am unimpressed.
Last but not least, you use denier quite often. Define denier.
do you reject the hypotheses regarding long term climate sensitivity or do you support them?
The latter.
If you support them as was indicated earlier why do they not apply to previous forcings?
But they do apply. I never said they didn’t; I only said the CO2 was not changing fast enough to pick out its effect from the noise of all the other effects, which before 1930 dwarfed the CO2 effect.
If you support a high long term climate sensitivity and you agree that previous forcings are subject to the same principles then you have a paper to refer me to which shows where the equilibrium temperature from the warming of the late 19th and early 20th centuries can be located. From this we can derive what the current forcing has done in the way of a short term sensitivity.
The “very-well-understood physics of infrared absorption by CO2” is not the same as proof. I realise that pro-(C)AGW proponents don’t like to be asked for ‘proof’ but it really is down to those who postulate such an idea to find some sort of data to back it up.
The proof of infrared absorption by CO2 was found by John Tyndall in the 1860s and measured at 972 times the absorptivity of air. Since then we have learned how to measure not only the strength of its absorption but also how the strength depends on the absorbed wavelength. That’s what HITRAN is all about. The physics of infrared absorption by CO2 is understood in great detail, certainly enough to predict what will happen to thermal radiation passed through any given quantity of CO2, regardless of whether that quantity is in a lab or overhead in the atmosphere.
I confess I do not know what has caused the warming of the second half of the 20th Century. I do not know what caused the heating between 1875 and 1880. I find it irrational to invent a false god (CAGW) simply because my knowledge is limited.
Rest assured that your ignorance has not been an obstacle to progress in climate science.
“Rest assured that your ignorance has not been an obstacle to progress in climate science.”
Indeed. Ignorance is the very cornerstone of mainstream climate ‘science’ today. That and political rectitude.
There has been progress?
I have no argument with the absorbing properties of CO2 but you and I both know that that is not the ‘proof’ to which I was referring. I was referring to the proof of the ‘effect’ of CO2 in being able to cause catastrophic warming.
I ACCEPT that CO2 is able to absorb radiation. The logic of extending that physical property to “it therefore can cause a catastrophic rise in temperature” (my interpretation) is equivalent to the logic that states “I see my first car… it is red… therefore I deduce that all cars are red!”
I consider your comment about my ignorance to be not worthy of you. Argue the post, not the person.
Regards,
I ACCEPT that CO2 is able to absorb radiation. The logic of extending that physical property to “it therefore can cause a catastrophic rise in temperature” (my interpretation)
I really wish you’d stop making that straw man argument of attributing warnings of catastrophe to me, it’s starting to get very tedious. I am claiming only that temperature rises logarithmically with CO2 as first pointed out by Arrhenius in 1896, and that anthropogenic CO2 rises exponentially with time as first pointed out by Hofmann recently. From this it follows that temperature will rise at an increasing rate in the future, unless by some miracle nature steps in and does something of her own to offset the CO2-induced temperature rise.
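A sketch of what those two laws jointly predict: the Arrhenius log law applied to a Hofmann-style raised exponential. The 32.5-year doubling time for the anthropogenic excess is roughly Hofmann’s figure; the sensitivity of 2 °C per doubling, the 280 ppmv base, and the 110 ppmv current excess are assumptions carried over from this thread, so the increments below are illustrative only.

```python
import math

S = 2.0              # assumed sensitivity, C per doubling (illustrative)
BASE = 280.0         # assumed natural CO2 level, ppmv
ANTHRO_2010 = 110.0  # anthropogenic excess in 2010, ppmv (390 - 280)
DOUBLING_YR = 32.5   # roughly Hofmann's doubling time for the excess

def warming(t):
    """Arrhenius log law applied to the raised exponential; t = years after 2010."""
    co2 = BASE + ANTHRO_2010 * 2 ** (t / DOUBLING_YR)
    return S * math.log2(co2 / BASE)

prev = warming(0)
for t in (10, 20, 30, 40):
    cur = warming(t)
    print(f"decade ending {2010 + t}: +{cur - prev:.2f} C")  # increments grow
    prev = cur
```

The per-decade increments grow, which is all the argument needs: the logarithm of a raised exponential eventually rises essentially linearly, but while the exponential term is still small compared to the base, the rise accelerates.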
is equivalent to the logic that states “I see my first car… it is red… therefore I deduce that all cars are red!”
That logical form is the extreme case of inductive reasoning in which a generalized conclusion is drawn from a single inference. It had not occurred to me that my reasoning could be mistaken for that form, as I had based it on the two laws above of respectively Arrhenius and Hofmann.
You cannot refute my argument by refuting some other argument I didn’t use, you must refute some part of the reasoning I used.
I consider your comment about my ignorance to be not worthy of you. Argue the post, not the person.
I was referring only to your professed ignorance about what caused the heating, no other kind; I thought this would be clear from context (you’d just said you were ignorant about it). You’re the one who keeps seeing inductive reasoning where there is none, in this case from the instance of your ignorance provided by you to your ignorance in general, about which neither of us had said anything previously.
Let me phrase it differently if that helps. That you do not know what caused the heating is irrelevant to whether or not others know what caused it.
There has been progress?
If not then there is no point in continuing a fruitless discussion. At least it has been stimulating, for which I thank you.
And I thank you, Vaughan.
My best wishes to you and your family over the festive season.
Vaughan,
I shall make my point more clearly:
Proof of a property of CO2 is not the same as proof of its effect.
“The physics of infrared absorption by CO2 is understood in great detail, certainly enough to predict what will happen to thermal radiation passed through any given quantity of CO2, regardless of whether that quantity is in a lab or overhead in the atmosphere.”
Go on then, predict how much warming we are going to get in the atmosphere when thermal radiation is passed through the given quantity of 0.039% CO2.
Go on then, predict how much warming we are going to get in the atmosphere when thermal radiation is passed through the given quantity of 0.039% CO2.
I only claimed we could “predict what will happen to thermal radiation passed through any given quantity of CO2.” In fact we can do that fairly straightforwardly. What we can’t do straightforwardly is convert that into a precise impact on global warming.
What will happen to the thermal radiation is that the CO2 in the atmosphere will absorb a certain fraction of its total power. This depends on the wavelengths present in the radiation, which in turn depend on the source of the radiation and its temperature. Thanks to the HITRAN tables and Beer’s law for absorption we can compute this to excellent accuracy for any given source and temperature thereof. This part is straightforward and is a routine exercise that has been done by many people.
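A minimal sketch of the Beer’s-law step just described, for a single wavelength; the cross-section and column values are hypothetical placeholders, not HITRAN data.

```python
import math

def transmitted_fraction(sigma, n, path_cm):
    """Beer's law for monochromatic radiation: the fraction surviving a path
    through an absorber with cross-section sigma (cm^2 per molecule),
    number density n (molecules per cm^3), and path length in cm."""
    return math.exp(-sigma * n * path_cm)

# Placeholder values for illustration only; a real calculation would take
# sigma from the HITRAN line data at each wavelength and integrate over
# the whole atmospheric column.
sigma = 1e-22   # hypothetical absorption cross-section, cm^2
n = 1e16        # CO2 number density near the surface at ~390 ppmv, cm^-3
path_cm = 1e5   # a 1 km path

f = transmitted_fraction(sigma, n, path_cm)
print(f"transmitted: {f:.1%}, absorbed: {1 - f:.1%}")
```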
Converting it to global warming is a much harder problem (a) because of the huge variability on Earth of these sources and (b) because of the complex nature of the sinks ultimately absorbing this power after the CO2 has either handed it off to the atmosphere or reradiated it.
Regarding (a), the source may be the ground, in which case the wavelengths are those for Planck radiation at the temperature of the surface, which varies greatly by time of day, season, and location on the planet. Or it may be water vapor, ozone, CO2 and methane molecules in the atmosphere, whose temperature varies with altitude. Herein lies one of the complexities of conversion to global warming. Even if I gave you a calculator into which you entered the source and its location and temperature and it output the absorbed power, you would have to use this calculator repeatedly for all the sources and their locations and temperatures, which would take you centuries. You would quickly give up and ask to be allowed to use a computer to do this more automatically. This would then make you either a buyer or builder of a climate model depending on whether you trusted someone else’s code or insisted on doing it yourself. In either case you would now be doing climate modeling, and there is no possible way to avoid this in combining our excellent understanding of the physics of absorption of radiation by CO2 with our remarkably comprehensive but dauntingly data-driven knowledge of the Earth.
But even after all that, all you would end up with is the total power trapped by Earth’s CO2 blanket. You now have to address part (b), where does it all go, without which you don’t yet have temperature.
Well, some is exchanged with air molecules as kinetic energy and some is reradiated, but not at the same frequencies at which it was absorbed, due to Stokes shift, complicating your calculation. The former merely keeps the air warm, and since we know the temperature of the air everywhere thanks to radiosondes and satellite microwave sounding units we can do a pretty good job of accounting for that portion.
The latter is more complicated. Part of the reradiated power is lost to space, part is absorbed by other CO2 molecules (which is much more probable than for the black body radiation from Earth because CO2 absorbs strongly at the frequencies emitted by CO2), and part is absorbed by the Earth’s surface. This last part is what is customarily meant by back radiation.
Once this radiated power is back on Earth, figuring out how much of it is reradiated and how much goes into the ocean and the subterrain is an even more daunting task than the above. A massive amount of the originally absorbed heat finds its way by different routes into both land and sea, at different speeds that depend on the thermal structure of the ocean (are the isothermals perfectly horizontal or slightly slanted, and if so by how much as a function of longitude, latitude, and depth?) and the geology of the land (rocks, sand, etc.).
Ok, so now imagine ten programmers each volunteer to build a climate model. They all take the same one year course covering everything I’ve been talking about in the above. They then go their separate ways and compute all this with ten different programs.
Here’s a question for you. Given the immense complexity of what they’re up against, what are the odds that all ten programs will produce the same results?
Therein lies one factor in the wide variability of climate sensitivities.
But now we come to (c): what do we even mean by climate sensitivity? Is it simply the immediate relationship between CO2 and temperature? Or should we measure the temperature at some suitable time after adding the CO2, say 20 years? Or maybe 1000 years, since some of the heat is going to take that long to influence the temperature. The IPCC distinguishes equilibrium climate sensitivity and transient climate response, but even the latter depends on parameters of delay and rate of CO2 increase that the IPCC fixes somewhat arbitrarily to avoid further proliferation of the sensitivity concept.
Does this help in conveying the difference between my “[we can] predict what will happen to thermal radiation passed through any given quantity of CO2” and your “predict how much warming we are going to get”?
What is not in contention in scientific circles is that increased CO2 heats the planet, this has been known for a century and a half. What is far far harder is to calculate the precise amount of heating we can expect from a given level of CO2.
For this reason it is helpful just to look at the actual temperature and try to correlate it with the various claimed influences to see how good a job can be done there. This is what I’ve been focusing on in my own (purely amateur of course) attempts to understand the global warming phenomenon, which to me but not to you is a reality.
While I like climate modeling as a way of understanding how climate works, the reasons above raise serious questions about its practicality as a predictive tool. I much prefer the direct approach of noticing that the temperature has indeed skyrocketed, having reached a temperature 0.65 °C higher than at any historically recent period before 1930 when considering the 10-year-smoothed global temperature record, and explaining how it got there including accounting for all its wiggles along the way.
Vaughan,
At last we seem to have a mature discussion again after a period of friction!
You may be surprised to learn that I agree with most of what you have written in the previous post.
“What we can’t do straightforwardly is convert that into a precise impact on global warming.”
Exactly my point. The absorption of radiation by CO2 may well be understood. What I find difficult to swallow is the leap of faith that is made in order to reach the conclusion that CO2 is likely to cause serious global warming. What you have to remember is the scale of the problem. All the ghgs put together make up 0.04% of the atmosphere. This is a very small amount. I will expand on this after the next paragraph.
Your point about giving me a calculator to help with all the calculations is well-intentioned but, in my opinion, specious. It wouldn’t matter how many calculators you gave me; the important point is HOW I use the calculators. There are – as you say – so many variables in the climate that any modeler has to make assumptions as to how each factor may or may not affect the whole. This means that any errors due to lack of knowledge will rapidly become enlarged as more assumptions are compounded. You might have an excellent understanding of the absorption properties of CO2 but you don’t have an excellent understanding of how that absorption affects the rest of the atmosphere.
Back to the ‘power of CO2’.
“Well, some is exchanged with air molecules as kinetic energy and some is reradiated…”
I agree. But how much is the value of each ‘some’? Let’s take a parcel of (dry) atmosphere. Less than 400 ppmv are ghgs. The rest – over 999,600 ppmv – are radiatively inert. That is, Nitrogen, Oxygen and Argon molecules. NOT ONE of these molecules can be heated by radiation. They can only be heated by conduction. So, already, you have a massive imbalance in your ‘somes’. Now introduce water vapour. These exist at an average of 2% in the atmosphere, which is 20,000 ppmv. As they are additional to the dry atmosphere, this will make a total of 102%. 20,000 is a much bigger amount than 400, and although these molecules can absorb radiation, they can’t re-radiate. They can, however, conduct heat to neighbouring molecules of ‘some’ of the 999,600 inert molecules.
What all this means is that the heating of the atmosphere due to radiation is very small when compared to the heating by conduction. So even though you may have an excellent understanding of radiation absorption, the ‘effect’ of that absorption is not necessarily as large as the pro-AGW proponents give it credit for. The presence of water vapour plays a far greater part in the greenhouse effect than any other ghg. Yet its part is downplayed by the AGW machine. Remember that water vapour doesn’t need the other ghgs to create a ‘greenhouse’ – it can absorb radiation directly from the surface of the Earth.
Also, the amount of ‘back radiation’ is very small compared to the radiation emitted by the Earth.
“Here’s a question for you. Given the immense complexity of what they’re up against, what are the odds that all ten programs will produce the same results?”
I imagine the odds would be very long. My reciprocating question to you is “How many of the models have actually provided an answer that is even close to the real-world data?” Your comment about the ‘immense complexity’ reinforces my point. Basing your opinion on the predictive properties of models is assumption.
“What is not in contention in scientific circles is that increased CO2 heats the planet, this has been known for a century and a half. What is far far harder is to calculate the precise amount of heating we can expect from a given level of CO2.”
Hmmm. I think the standard of debates going on on this blog alone should have persuaded you that there is a massive difference between science ‘knowing’ that CO2 should lead to ‘some’ warming, and science ‘knowing’ just how much that warming actually is.
Finally, your use of the verb ‘skyrocketing’ to describe the increase in temperature since 1930 proves, in my opinion, that you have already made your mind up as to the cause. We have discussed trends ad nauseam. You are making a leap of faith here, Vaughan. There is no evidence that CO2 is the cause of that warming.
My best wishes to you on your current research. :)
Regards,
It makes no difference what the ‘influences’ are, the difference between an exponential curve and the observed temperature anomaly curve is plain to see.
In the (cos(x) + exp(x))/2 example the difference is not plain to see until it’s too late. The “influence” created by the cos(x) component masks the exponential increase early on, making the latter not at all plain to see until exp(x) approaches 1. After that the exp(x) starts to mask the cos(x).
“…until it’s too late.” That is a subjective remark. It proves you have a preconceived belief.
Arfur, please look at this graph. This is the mean of a variable function cos(x) and the exponential function exp(x).
The point here is that the variable function dominates the exponential function until the value of x passes a certain point. That’s being presented in direct response to your statement about “specious arguments about ‘natural variation noise’ “.
As you state: “If the AGW theory is correct, the temperature should rise at an accelerative rate overall. The overall trend would eventually show an increase.” Using this simplified example – where cos(x) represents natural variation and exp(x) represents CO2 forcing – you can see how there would be short term rises and falls and an eventual increase.
PDA,
Of course I understand the point you are trying to make. However, if you again look at my statement, it began with the word ‘If’.
You and Vaughan are ASSUMING that the AGW theory is correct and will lead to the type of curve you have shown in your graph. ‘IF’ you are correct, the global temperature may follow that ever-steepening curve. However, you are both basing your arguments on an assumption. Not only is there no initial evidence upon which you can base that assumption, there is no supporting evidence for the continuation of your assumption. You both argue for the future based on what you perceive, not what is.
At least try to be objective.
Regards,
Vaughan,
“… difference is not plain to see until it’s too late.”
That is speculation based on assumption.
That is speculation based on assumption.
Dr. Pratt was not, in that comment, expressing anything directly about climate change. Rather, it was a thought experiment.
You may or may not be familiar with what the graph of a cosine or that of an exponential function looks like, and so you might not be able to visualize the mean of the two functions. Which is fine.
Fortunately, we have the internet.
“Dr. Pratt was not, in that comment, expressing anything directly about climate change. Rather, it was a thought experiment.”
Really? In that case, I ‘thought’ he was making an analogy.:)
Huh?
I ‘thought’ he was making an analogy.
No, I was making a counterfactual statement, one that might not be true in this world but that your argument does not rule out. That’s different from an analogy.
The role of counterfactuals in geometry only became prominent in the 19th century when Bolyai and Lobachevsky came up with hyperbolic space as a model of non-Euclidean geometry, the more general geometry that obtains in the absence of Euclid’s Fifth Postulate or Parallel Postulate, that two lines inclined towards one another must eventually meet. The Parallel Postulate is false on a hyperbolic surface because two converging geodesics can discover to their horror that space is opening up between them faster than they were originally converging together, preventing them from ever meeting. On the plane, a flat space having zero Gaussian curvature, this cannot happen: two converging geodesics always meet because flat space behaves itself in that regard.
Prior to the work of Bolyai and Lobachevsky, geometers were inclined to view the question of whether the Parallel Postulate could be proved from the other four postulates as being in the same category as whether lead can be transmuted to gold: the sort of problem only cranks work on. The idea that there could be a universe different from ours by virtue of not being flat was inconceivable to them. (Oddly enough the sphere is hardly ever mentioned, perhaps because it is finite, which the wording of Postulate 2, that a line could be prolonged indefinitely, seemed to forbid, or was so interpreted anyway. What was inconceivable to them was a non-flat space that was as infinite as the plane.)
If you find it inconceivable that the temperature could follow (cos(x)+exp(x))/2 in any world, you are in distinguished, albeit historic, company.
Fine. As long as we both know you are not talking about reality.
Fine. As long as we both know you are not talking about reality.
Correct. On that occasion I was talking about the lack of logic in your reasoning, where I most certainly am not talking about reality.
That is speculation based on assumption.
Assumption, yes, speculation, no. The point of the assumption is to make it not speculation but an inevitable consequence of the assumption. The assumption is legitimate because I was pointing out a fallacy in your argument by considering a different world from ours, as opposed to making an assertion about our world. Assumptions are not only permitted but often necessary in order to demonstrate a fallacy. I was not assuming anything about our world in doing so.
I can’t believe you actually mean that. Seriously. The temperature will be 27 in 2050?
In a world where the temperature behaves as the mean of cos(x) and exp(x), and x = 0 is 2010 and x counts decades, then yes, the temperature will indeed be 27 in 2050.
As you correctly point out, this is not our world (or so we both hope!). However your reasoning is unsound for that world, so what justification can you give for it being sound in our world when it does not work in other worlds?
For what it’s worth, I’ve got my illustrated explanation of the Tyndall gas effect (more generally known as the greenhouse effect) written up and posted. Since I divided my time today between helping students and writing this entry, some of this may be redundant, but oh well.
I really like the satellite images, which make the principle very visible (once you’ve figured out what you’re looking at and for). Is that the new HIRS stuff? Most of the satellite air temperature data since it began in 1978 has been with Microwave Sounding Units (MSUs), which infer temperature from the “brightness” of the microwaves. These are much longer in wavelength than the actual CO2-absorbed/emitted infrared radiation. The latter tells you directly whether the photons you’re seeing are coming directly from the ground or indirectly via reradiation from the air.
Seeing is believing. Unfortunately not everyone believes their eyes.
Oops wrong thread.
I am not sure where you are getting your information, but good topic.
I needs to spend some time learning more or understanding
more. Thanks for fantastic information I was looking for this information for my mission.
Its like you read my mind! You seem to know a lot about this,
like you wrote the book in it or something. I think that you
can do with some pics to drive the message home a bit,
but instead of that, this is fantastic blog. An excellent read.
I’ll definitely be back.
Hello, Neat post. There’s an issue along with your website in internet explorer, could test this?
IE nonetheless is the marketplace chief and a huge section of
other folks will pass over your great writing because of this problem.