Climate Etc has discussed the IPCC’s new protocol for alleged errors and an error found by Nic Lewis regarding climate sensitivity. This post discusses another error in IPCC AR4, one that has been around for some time but has only recently been reported to the IPCC.
The IPCC radiative forcing diagram
An important part of the IPCC WG1 report is Figure SPM.2, which represents the radiative forcing due to various human activities. The diagram also appears in FAQ 2.1 and as Figure 2.20, and the details of its construction are given in Chapter 2. On the right of the diagram, in red, are the warming effects, such as carbon dioxide, while the cooling effects are on the left in blue.
This post is concerned with the largest of these cooling effects, the “aerosol cloud albedo effect”. Aerosol effects can be split into a “direct” effect, caused by the aerosol particles reflecting sunlight, and “indirect effects”, relating to the influence of aerosols on clouds, which leads to a larger number of smaller water droplets. There are two indirect effects: first, the “albedo” or “Twomey” effect, whereby many smaller droplets are more reflective than fewer larger ones; and second, the “lifetime” or “Albrecht” effect, whereby smaller droplets take longer to coalesce into rain, so clouds last longer. The physics of this is complicated, and this is an area where the level of understanding is categorised as “low”. For the purposes of this post, a detailed understanding of these processes is not required.
The first curious point about the IPCC’s assessment is that the second indirect effect is “not considered as RFs” (radiative forcing). This claim is contradicted by many of the published papers, which clearly regard the second indirect effect as a radiative forcing. The combined indirect effects are listed in Table 2.7, but the high-impact figure in the SPM and elsewhere only shows the albedo effect.
Note that the RF value given by the IPCC for the cloud albedo effect in Figure SPM.2 is -0.7 W/m^2.
The IPCC error, noticed in 2008
The IPCC bases its value of -0.7 on the literature summarised in Table 2.7 and Fig 2.14. [For the purposes of this post, let’s assume that the IPCC has included all relevant papers and reported their results correctly, ignoring for the time being the fact that the IPCC refused to cite a relevant paper by Morgan et al 2006, as recently discussed here.] After some discussion of these papers, the report says on page 180,
“Based on the results from all the modelling studies shown in Figure 2.14, compared to the TAR it is now possible to present a best estimate for the cloud albedo RF of -0.7 W/m^2 as the median, with a 5 to 95% range of -0.3 to -1.8 W/m^2.”
The IPCC error was first noted in a contribution to Climate Audit by Michael Smith, back in January 2008. He pointed out that the median of the data shown in Figure 2.14 is not -0.7; it is actually -0.985. Note that one of the numbers from the Williams et al study listed in Table 2.7 is not included in Fig 2.14. It is not explained why this number was omitted; perhaps it was thought not to be an independent estimate. If this number is included, the median is -1.07. On the CA unthreaded post there is some further discussion involving UC and RomanM (two smart people with a track record of reverse engineering climate science claims), who are unable to understand how the figure of -0.7 was obtained. Michael posted the question at RealClimate, where Martin Vermeer was also unable to explain it (see comments 123 and 127 on that thread).
Reporting the error to the IPCC
I reported this error, along with two minor typos, to the IPCC secretariat, as follows:
I write to report an error in the IPCC AR4 WG1 Report, Chapter 2, page 180. It is stated that the median cloud albedo RF from the results in Figure 2.14 is -0.7 W/m2. In fact the median is -0.985. There are three further very minor errors in this section:
1. Table 2.7, Chen study, 0.54 should be -0.54.
2. Table 2.7 note d, reference to Fig 2.16 should be 2.14.
3. Fig 2.14 omits one of the bold values from Table 2.7 (Williams et al study). If this is included the median is -1.07.
I was interested to see whether the IPCC would correct the final value from -0.7 to -0.985, or would try to maintain the final value of -0.7 and change the text to explain where the number came from. I expected the latter.
The official response came a month later. After some introductory material, it stated:
The Authors have confirmed that the cloud albedo forcing estimate is based on the subset of model studies in Figure 2.14 with more complete aerosol species representation. This estimate does not double-count similar forcing calculations within the same study. These two points may help explain the differences between your own calculations and those of the Chapter. It has further been determined that the section text does explain the reasoning behind the forcing estimate adequately. While the text could have been even more explicit on this, the present text is not in error, and therefore does not require changing.

However, it has been concluded that the Radiative Forcing value listed for Suzuki et al. (2004) in Table 2.7 on page 174 was incorrect. The value should be -0.54 and not 0.54 as listed.
In addition, footnote d of Table 2.7 on page 176 should reference Figure 2.14 and not Figure 2.16 as given.
These two errors have been listed in an updated WGI AR4 Errata which is available from the WGI website (https://www.ipcc-wg1.unibe.ch/publications/wg1-ar4/wg1-ar4.html.) and attached for your convenience. We have asked the IPCC Secretariat to update the errata on their web site as well and this typically takes a few days.
Thank you again for bringing these errors to our attention and for your interest in the work of the IPCC.
IPCC WGI TSU
So apart from acknowledging the two typos, the IPCC denies that there is any error at all. The claim that “It has further been determined that the section text does explain the reasoning behind the forcing estimate adequately” must rank as one of the most absurd claims of the IPCC.
How the value of -0.7 was achieved
The IPCC response still does not fully explain how the value of -0.7 was arrived at, but here is my attempt to derive it. Starting from Figure 2.14, ignore the upper half of the diagram (not explained in the text). Then combine results from different computations within the same study, even though the caption to the Figure says explicitly that they can be regarded as independent. Let’s suppose that we take the median within each study. This leaves the following numbers:
[-0.45, -0.85, -1.85, -1.35, -0.54, -0.52, -0.77, -0.5, -1.1, -0.68]
The median of these numbers is -0.725, which with rounding could be interpreted as -0.7.
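This reconstruction is easy to check. Here is a minimal sketch, assuming the ten per-study medians listed above have been read correctly off the lower half of Figure 2.14:

```python
import statistics

# Hypothetical reconstruction: one median value per modelling study,
# taken from the lower half of Figure 2.14 as listed above.
per_study = [-0.45, -0.85, -1.85, -1.35, -0.54, -0.52, -0.77, -0.5, -1.1, -0.68]

# With ten values, the median is the average of the 5th and 6th sorted values.
median_rf = statistics.median(per_study)
print(round(median_rf, 3))  # -0.725 W/m^2, which rounds to the reported -0.7
```

Of course, whether combining within-study results and taking a median is the right procedure is exactly what is in dispute; this only shows that the arithmetic can be made to yield -0.7.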
Looking back at the second order draft
It is often useful to look at the Second Order Draft (SOD) of the IPCC report. Recall that the SOD is the last version to be inspected and commented on by expert reviewers before the final report is produced.
Looking at pages 41-42 it is interesting to see that the results are summarised correctly: “the average RF from all models in Table 2.7 is -1.09 W/m^2”. Later, on page 45, this number is revised downwards:
“It is possible now to present a best estimate for the RF of (-0.9 ± 0.43) W/m^2 as mean and standard deviation, based on the results from the 8 modelling studies shown in Figure 2.16: Bottom. These models represent the complexity of the aerosol-cloud interactions, in terms of the species included, the state of the mixtures, etc., to the best current knowledge in forward calculations.”
The differences between the SOD and the final version raise some awkward questions for the IPCC.
- Why was the statement that the average cloud albedo RF was -1.09 in the SOD deleted for the final report?
- Why was the clear explanation that only data from the lower half of the figure was used dropped in the final version?
- Why was the average changed from a mean (in the SOD) to a median (in the final report)? (The answer is presumably that it gives a smaller number, but it can be checked that no expert reviewer asked for such a change.)
Note that two more studies were included in the final report, Chen and Penner 2005, median value -1.1, and Penner et al 2006, median value -0.68, so the introduction of these cannot be responsible for changing the overall number from -0.9 to -0.7.
A flaw in the IPCC process
This incident illustrates a serious flaw in the IPCC process. After the second round of reviewer comments on the SOD, IPCC authors are free to insert whatever they like into the final version of the report (in this case, further tricks to make the number smaller, and an arithmetic error). Ross McKitrick noted this flaw in his recent report on the IPCC, and gave two examples. Another example is the insertion into the final report of misleading short and long trend lines on the figure in FAQ 3.1.
The IPCC chose not to regard the second indirect effect as a forcing, and carried out three distinct fiddles in order to make the headline figure for the cloud albedo effect as small as possible.
1. Use only data from the lower half of Fig 2.14 (explained in SOD, not explained in final report).
2. Combine different values obtained within the study into one (contradicted by caption to Fig 2.14 which states they are independent).
3. Switch from mean values (in SOD) to medians (in final report).
Each of these fiddles reduces the number by about 0.1.
The attempts by the IPCC to disguise this sequence of fiddles led to an erroneous statement in the final report.
When the error was reported, it was denied by the IPCC, with the false claim that the text explained the reasoning behind the estimate and that there was no error.
JC note: Paul sent this post to me today via email, unsolicited. This is a guest post, and the views expressed are those of Paul Matthews. This post does not imply any endorsement by me of the content (but I sure as heck want to hear what the IPCC has to say about this).
Moderation note: This is a technical thread that will be strictly moderated for relevance and civility.
Is the error significant or more like something in the “noise level” of the calculations? Stated somewhat differently, does the error noticeably skew the results and if so, in what direction?
I’m with you MK – what diff does it make if it’s 0.7 or 1.whatever? Come on Paul, help a non-climate non-maths reader out.
Specifically, what does this do to the climate sensitivity?
It doesn’t do anything at all, given the wide uncertainties in aerosol forcing to begin with, and that of ocean heat uptake. In fact, very few people take the instrumental record seriously as a strong stand-alone constraint on sensitivity. There are also more publications on this since the AR4.
I have not checked the accuracy of this post yet (some project for later in the week, if I get to it at all). People really curious should seek the input of aerosol experts who have specialty insight into the number in question; these types of diagrams and distributions always have a number of caveats to consider, and even the table/diagram in the AR4 does not probe the full range of uncertainties in aerosols. Even the direct effect is still a hot topic of current research. But this post looks like a big hoo-ha over nothing, just like most of the criticisms of the AR4 to date.
Keep in mind too that if the IPCC wanted to make things scarier, they’d actually inflate the aerosol forcing to as high (more negative values) as possible.
From the AR4 section “Summary of ‘Inverse’ Estimates of Net Aerosol Forcing”:
In other words, if the net forcing, which includes the aerosol forcing, is not “just right”, the models “that explain everything” would be in big trouble. A change in the aerosol forcing of the magnitude described here would put the forcing derived from “forward” estimates and from “backward” estimates close to being inconsistent. Why is this important?
From the same section:
If they are no longer similar, what does that do to the confidence in estimates of not only aerosol forcing, but all the other forcings too?
Sorry, but you’re not even close. For one thing, inverse and forward estimates use different information and are affected by different uncertainties. The AR4 suggests general agreement between the methods, but the uncertainty in total aerosol forcing is very large (it might be -0.5 W/m2, it might be -2.5 W/m2), and even then many complicated factors (like the influence on ice clouds rather than liquid clouds) are not at a stage to be quantified with high confidence. This makes the instrumental record a terrible target on its own for assessing climate sensitivity (along with other reasons). The quibbling over a tenth or two of a watt per square meter as a median estimate doesn’t change the large uncertainty range whatsoever, nor does it have much to do with the attribution process, which is not even focused on getting the right amplitude of each process correct. The AR4 and many other later papers discuss this in detail (Reto Knutti has some good work on attribution).
And of course, if you really want to sell a high climate sensitivity (and thus more worry for the future), you want the aerosol forcing to be more negative. If the aerosol forcing is closer to -2 W/m2 than it is to -0.5 W/m2, then the net forcing (including GHG’s) is closer to zero, and you need a large sensitivity to explain the observed warming. You quoted something in the AR4 which says the same thing, but evidently you did not understand it.
All I’m pointing out is what the IPCC chose to draw confidence from.
If you think they incorrectly drew confidence from this, I can’t argue with you.
Also, “would be very difficult to reconcile with the observed increase in temperature.” was their statement and they felt it was important enough to point it out.
Since they made it on the basis of the observed temperature record, which you say is not good enough to draw any conclusions from, is it possible you’ve found another error in the IPCC report?
The albedo is supposed to be the fraction of incoming energy reflected from the Earth. The 1997 Kiehl-Trenberth energy balance depicts incoming solar radiation as 342W/m^2 and the albedo as 107W/m^2.
107/342 = 31.28%.
The 2008 revision depicts much more precise numbers with Incoming solar radiation at 341.3W/m^2 and albedo at just 101.9W/m^2.
101.9/341.3 = 29.86%.
The 2008 revision was based on 2004 values and the global temperature increased from 1997 to 2004 by approximately 0.1°C.
Was this temperature increase due to a reduction in albedo of 5.1W/m^2?
If this is the case then climate sensitivity is only 0.1/5.1 = 0.0196°C/W/m^2.
Using this climate sensitivity with the CO2 forcing parameter we get
5.35ln(2) = 3.71W/m^2 for a doubling of CO2.
At a climate sensitivity of 0.0196°C/W/m^2 this produces just 0.073°C for a doubling of CO2!!
CO2 is increasing at 2ppmv/year, so a doubling from our current 390ppmv level will take 195 years and produce just 0.073°C of catastrophic global warming, for which we must cripple the economy and starve the global population by turning food into biofuels to stop this from happening!
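The chain of arithmetic in this comment can be checked mechanically. Note that the step attributing the entire 0.1°C rise to the albedo change alone is the commenter's own (contested) assumption, not an established result:

```python
import math

# Kiehl & Trenberth energy-balance numbers quoted above
albedo_1997 = 107 / 342       # ≈ 0.313, i.e. 31.3% reflected
albedo_2008 = 101.9 / 341.3   # ≈ 0.299, i.e. 29.9% reflected

delta_albedo = 107 - 101.9    # 5.1 W/m^2 less reflected

# Commenter's assumption: the 0.1 °C rise was caused entirely by the albedo change
sensitivity = 0.1 / delta_albedo       # ≈ 0.0196 °C per W/m^2

co2_doubling_rf = 5.35 * math.log(2)   # ≈ 3.71 W/m^2, the standard forcing formula
warming = sensitivity * co2_doubling_rf
print(round(warming, 3))               # ≈ 0.073 °C for a doubling of CO2
```

The arithmetic is internally consistent; the conclusion stands or falls entirely on the single-cause attribution assumption.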
Surely the figures would have been peer reviewed by a number of people before the paper went to press? So that suggests there are two scenarios to consider;
The first is that the peer review process doesn’t work and the second is that you are wrong.
Is it possible to find out who checked this piece of work originally and ask them for a clarification?
Tony, the point is that major changes were made after the review process, that aren’t checked. The IPCC allows this, so yes, the review process doesn’t work. This is one of the key points made by Ross McKitrick, see page 5 of his report.
It’s a shame that other threads were posted so close in time to this one as this thread hasn’t received the attention it warrants.
I’m sure I read a listing somewhere of all the major changes made after the IPCC review process. It might have been from Ross. If anyone can link to it I would be grateful.
It is difficult to judge just how important the apparent error is; it's beyond my area of expertise, and there have been too few commenters to be able to follow their train of thought. Chris Colose (who I have a lot of time for) seems to suggest the error is unimportant, but I suspect that people are following their own prejudices on both sides of the debate.
When its just a trivial typo that is one thing, but when it appears that it might be important I find it disturbing that the peer review process doesn’t seem to work. I would be interested in the opinion of Chris Colose as a scientist-not a climate change advocate-on whether he thinks that is a fundamental flaw in the process that needs fixing.
Something here for everyone to enjoy. Aerosol contribution is higher than we thought, explains recent slowdown in temperature rise. It’s worse than we thought!
At least you got a letter. When I reported an error in the AR4 WG2 report Table 10.2 that put warming in Sri Lanka at the astonishing rate of 2 degrees per year, I got no official response whatsoever. The error, and a host of others, were however eventually corrected.
Marc, thanks for the link, I was not aware of this.
The difference may be that you submitted your error report before the IPCC introduced its new formal procedures for error reporting this summer. I sent mine in after this time (partly to test the process).
If you still think there are unacknowledged errors, you could submit them now and you should get a prompt acknowledgement and then an official response after a few weeks.
“Moderation note: This is a technical thread that will be strictly moderated for relevance and civility.”
I applaud both these intentions, particularly the latter – irrespective of whether the comments are mine.
I agree with Tonyb that the two relevant scenarios are that the peer review process hasn’t worked or that Paul Matthews is wrong somehow. My one addition to this is that the IPCC response seems unnecessarily defensive and unhelpful – it appears that the criticism and alleged error have not been taken seriously.
Be that as it may, I feel that this is something that can be resolved and I’m intrigued to hear how the less partisan of participants see it. It even seems like a proxy test for objectivity, or at least impartiality. As I doubt I have the energy let alone the competency to come to a firm conclusion myself, I’ll await further comments with interest. I think I’ll also put it to Lucia at the Blackboard for consideration.
It’s interesting that for all the moaning here about RC supposedly “censoring” comments they find challenging, no one censored #123, Martin made an effort to understand and respond to the question shortly thereafter, and acknowledged he didn’t have the answer and didn’t dismiss the point.
It’s also interesting that Paul raised three points through the error correction process, and the IPCC agreed with him as to two of the points, acknowledged error and made corrections. Paul then instantly dismissed the two points he raised as insignificant “typos” and decided the only point that really mattered was the one they didn’t agree with him on. If they were so unimportant, I wonder, why did he report them?
I hope the last point gets clarified and we see who is correct and why. I appreciate Paul’s contribution as a fact-checker, and I think his work and his perspective would be even more valuable without some of the resentment, childishness and hyperbole that creeps into his descriptions of events.
An interesting comment. Notable for its temperance and the absence of a certain scurrilous term. I hope others expand on your comment.
Robert your comment is not relevant to the TECHNICAL question asked.
If you have a contribution to make that addresses the math, then make it.
Otherwise I would suggest that you refrain from starting one of your filibusters.
steven, YOUR pointless comment aside, no one is filibustering.
Let me direct you to the blog rules:
“Only respond to comments that you feel are deserving of your attention, and ignore the rest.”
If you don’t like a comment, don’t respond to it. You aren’t a moderator.
You are really an idiot bobbie.
“Only respond to comments that you feel are deserving of your attention, and ignore the rest.”
That does not mean that if you disagree with a comment, that you are not supposed to respond to it. If one disagrees with a comment, then it is up to one to decide if one wants to respond to the idiot, who made the comment. In this case, that would be you. Get it now, bobbie?
“refrain from starting”
Logic and reading are not your strong suits. I suggested that you refrain from starting one of your filibusters. When you reply that no one is filibustering, you have not understood my suggestion.
2nd the instructions say “deserving” of your attention.
they say nothing about LIKING. The rules do not say that you should ignore what you dont like, they say ignore what is not deserving of your attention
Your comment was deserving of my attention. The fact that you cannot read the rules and understand them should clue you in. And please try to understand the difference between me suggesting that you not start a filibuster and a request that you stop one that you started.
When you learn to read and understand English let me know.
“the resentment, childishness and hyperbole”
That’ll be the usual Robertese for something like “doesn’t parrot the political correctness/IPCC/RC position”.
You are making things up bobbie. Paul wrote:
“I reported this error, along with two minor typos, to the IPCC secretariat, as follows:”
The minor typos, were minor. He didn’t “instantly” dismiss them after they had been corrected. It was not difficult for the IPCC to admit to two minor typos. It’s the other one that they are stonewalling on. See any difference, bobbie? Who raised you people?
WB and P.E. It looks as though changing the -0.7 to -1.1 (rounding up as they rounded down) would (see Fig. SPM2 referenced above) change the total anthropogenic effect by about 25%, reducing it from +1.6 to +1.2. Presumably moving the average (or median) would also move the error bars if the total error stays the same. This would apply to the negative aerosol forcing as well as to the net (in red on right).
If you look at the error bars in the figure as it was published in the IPCC report, the warming effect could be 50% worse than the average or about 1/3 of the average. Those are pretty big error bars to be making huge economic decisions on. And this is with them leaving out the other part of the aerosol cooling and only represents the errors they know about and have included in this analysis. We need to study this another 10-20 years before we really know what is going on.
Bill, thanks a bunch, mate. I get it, and having re-read Paul’s post everything that needs to be there is there, so I might have got there on my own with a bit more effort. Appreciate the hand up.
The bottom line, it seems to me, is that Paul has identified that between the SOD and the later published AR4 the IPCC altered figures to inflate anthropogenic warming (i.e. reduce cooling) and when, after AR4 publication, the incorrect figure was brought to their attention, they failed and/or neglected to correct it. Gold standard science, my a—.
This underscores the IPCC culture of misleading its clients, the UNFCCC, by way of message manipulation.
O/T – can I recommend everybody pop over to the Blackboard to have a bet on the December UAH anomaly? I’m guaranteed to win of course, but I could do with some of your quatloos :)
5000 quatloos that Anteros will prove untrainable and will have to be destroyed.
Pretty telling when the response cannot simply show the math that led to the value of -0.7. Seems to me that would be the simplest of things to do.
By underestimating the negative part of the forcing, they would have overestimated the net positive forcing which would have underestimated the climate sensitivity, since the sensitivity is the temperature change divided by net forcing. Or am I missing something?
Possibly Jim. If they leave the CO2 with the same weight in the models. Or it could mean that they have to tweak other parameters as being weighted slightly more.
For me, it’s all about the error bars. We don’t know enough yet to be making any kind of accurate guesses about the future. I assume this would also increase the error bars on the sensitivity.
Remember that the recent CERN stuff on cosmic rays had not been put into any of the models, only solar irradiance (TSI). They now also have more data on the UV part of the solar spectrum. A lot of work yet to be done. Just need to get the politics out of it, stop making fear-based arguments for catastrophic future effects, and start reporting numbers with error bars for everything, not this “most ……. is very likely” crap.
Paul, Thanks for a good post. This is exactly what I mean by independently auditing the science and the IPCC. However, I don’t understand its importance. If indeed the cloud albedo is -1.1 instead of -0.7 W/m2, that would make the total forcing lower and thus would imply a higher climate sensitivity, would it not? Where am I going wrong?
Theoretically, David, don’t we care about all mistakes, and not just ones that imply a lower climate sensitivity/more uncertainty/less damaging impacts?
Let’s assume that the higher mean value for cloud albedo effect is correct, i.e. -0.985 instead of -0.7 W/m^2 .
What does this mean for the total estimated radiative forcing of 1.6 W/m^2?
Is this a “bottoms up” model-based estimate or a “top down” estimate?
We now have IPCC telling us that all anthropogenic forcing factors, other than CO2, essentially cancelled one another out: CO2 = 1.66 W/m^2 while “Total net anthropogenic forcing” was 1.6 W/m^2.
Does this mean that CO2 forcing is actually greater or that total anthropogenic forcing is smaller?
And does it change the conclusion that “total net anthropogenic forcing” = forcing from CO2 alone?
IPCC has told us that “natural forcing” was 0.12 W/m^2.
Based on its estimate that “total net anthropogenic forcing” was 1.6 W/m^2, we can roughly calculate that “natural forcing” represented:
0.12 / (1.6 + 0.12) = 7% of the total forcing
If the total net anthropogenic forcing was actually lower, due to the larger negative albedo effect, we would have:
Total net anthropogenic forcing = 1.6 - 0.985 + 0.7 = 1.315 W/m^2
And the “natural forcing” would represent:
0.12 / (1.315 + 0.12) = 8.4% of the total
Am I on the right track here?
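The arithmetic in the comment above can be sketched out, taking the AR4 headline figures at face value (whether this is the right way to frame the question is discussed in the replies):

```python
# AR4 headline figures quoted in the comment above
natural = 0.12   # W/m^2, "natural forcing" (solar)
anthro = 1.6     # W/m^2, total net anthropogenic forcing

# Natural share with the published figures
share_original = natural / (anthro + natural)         # ≈ 0.070, i.e. ~7%

# Swap the -0.7 cloud-albedo estimate for -0.985, all else unchanged
anthro_revised = anthro - 0.985 + 0.7                 # 1.315 W/m^2
share_revised = natural / (anthro_revised + natural)  # ≈ 0.084, i.e. ~8.4%
```

So on the commenter's own framing, the natural share moves from about 7% to about 8.4%.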
Would it not be 0.12/(1.2 + 0.12)? That equals 0.12/1.32 ≈ 9%.
But I don’t think this is the right way to think about it. If the total net forcing is smaller, they may have to make the natural variability a larger number to get the models to work.
Also to follow up on my earlier reply to Jim D. that is also related to this:
On the issue of sensitivity, my understanding is that the only way they can get their spaghetti graphs from the models to give the same range as that for natural variability is to include many different models with many different sensitivities. So the models all “fit” the temp. data to some extent, yet they use quite different numbers to do so. Sounds like the classic problem of over-fitting. If you have too many variables (necessary since climate is complicated), you can always fit the data. And if you can do it in a variety of ways (quite different parameter values in different models), this tells you that the result is not rigorous.
The 1.6 W/m^2 figure is derived from the pdf in Figure 2.20.B.
It is a bottom-up accounting approach, incorporating dozens of model results across different forcing factors, and observational data where applicable. Throughout chapter 2 each individual forcing factor is discussed and assigned a consensus best estimate+range.
Accordingly, CO2 forcing is unchanged if we accept that Paul Matthews has a valid complaint, and there is low uncertainty on that estimate (contrary to aerosol forcing; indeed -0.9 and -1.1 are both quite central within the expressed uncertainty range).
These values are estimates of total change from 1750 to 2005. Since none of the factors are expected to evolve in a coherent linear fashion, the fact that the CO2 forcing best estimate is so similar to the net anthropogenic estimate in AR4 is simply an accident of timing. Skeie et al. 2011 provides a good exposition of the changes over time in Fig.1, Page 63. Note that two of the largest non-CO2 forcings (CH4 and Halogens) have barely increased since 1990.
The point about methane (CH4) is an interesting one. I remember in one of the early Hansen papers (Science, I think) it showed that the slope of the CH4 increase had really dropped off, and at the time no one knew why. Do we know why today? CH4 is a big forcing (this alone could explain part of the rise in temperatures and then today’s flattening), and as temperatures go up, predictions are that more CH4 could be released from clathrates and the permafrost, and that this could be a synergistic effect (can’t think of the correct term for a self-reinforcing death spiral…). Anyway, it would compound matters. So we may be lucky that the CH4 stopped increasing so rapidly. But IF we still don’t know why, this just points to how complicated and uncertain this whole field really is. Something that COULD (in my view) provide an alternate explanation for warming suddenly stops increasing and it’s a mystery.
We only have 30 years of satellite ice data and it’s over a warming period so until we follow it another 15 years we still have questions. Same but worse with the sun. Seems like just in the past six months I’ve heard about 3-4 new things we never knew about the sun and we are still increasing the numbers of new types of instruments to study the sun.
Calm down. Collect data for 15 years, then re-visit. I’m hoping that over the next few years, the pendulum starts to swing the other way and the scientific process will weed out the truth and most of the politics can be left behind. We’ll see.
I don’t know enough about the subject to say to what degree we can account for and attribute observed rises and falls in methane growth rates. There were a couple of back-to-back papers in Nature this August (paywalled) on the subject. AR4 has a section on Methane sources and sinks and the section on the hydroxyl radical (OH) – the most important component in methane sinks – is also relevant.
Thanks for response.
As I now understand it, the total net anthropogenic forcing of 1.6 W/m^2 is based on a “bottoms up” approach using a sum of data derived from model simulations rather than observed empirical data broken down into various attributions.
This means, if the net mean negative forcing from human aerosols was underestimated (as Paul M postulates) the total of 1.6 W/m^2 would become lower, while the 1.66 W/m^2 for CO2 alone would remain the same, right?
This means that the total natural plus anthropogenic forcing of 1.72 W/m^2 as estimated by IPCC would remain as is and the net natural forcing would increase by 0.985 – 0.7 = 0.285 W/m^2, i.e. from 0.12 to 0.405 W/m^2.
[This is not a loaded question. I am just trying to establish what Paul M’s postulation really means.]
No, the value for change in solar radiation is derived independently of other factors so it wouldn’t be affected at all. If we want to say that the best estimate for the cloud albedo effect should be -0.9W/m^2, with all else remaining the same, then the total net anthropogenic forcing from 1750-2005 would be 1.6 – 0.2 = 1.4W/m^2.
Adding solar radiation would give you 1.4 + 0.12 = 1.52W/m^2. That does of course make it a larger proportional contributor to forced change over the reference period.
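The adjustment described in this reply can be written out explicitly (using -0.9 W/m^2 for the cloud albedo effect instead of -0.7, with all other AR4 figures unchanged):

```python
# Substitute a -0.9 W/m^2 cloud-albedo estimate for the AR4's -0.7
net_anthro = 1.6 - (0.9 - 0.7)   # 1.4 W/m^2 net anthropogenic forcing
total = net_anthro + 0.12        # 1.52 W/m^2 including solar
solar_share = 0.12 / total       # ≈ 0.079, so solar rises to roughly 8% of the total
```

The solar estimate itself is untouched; only its proportional contribution changes.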
Are you telling me that the TOTAL forcing from 1750 to 2005 as cited by IPCC is NOT based on empirical data backed by physical observations of the temperature, but solely on model estimates?
This would raise huge question marks in my mind regarding the validity of the whole tabulation of radiative forcing since 1750 as presented by IPCC.
I had assumed that the TOTAL was based on actual observations and the individual pieces were model-based estimates, based largely on theoretical deliberations, with the “natural” part being the difference (since we have no empirical method of measuring all the many known plus unknown “natural” factors – as conceded by IPCC in its statement that its “level of scientific understanding” of natural factors is “low”).
I hope you are wrong on this, Paul S, because if not, this makes the IPCC estimates even more dicey in my opinion.
What could you observe that would tell you the total radiative forcing change since 1750? You can read through AR4 chapter 2 to understand the derivation for the individual forcing factor best estimates and ranges. Use of models is inescapable in calculating estimates for these values, though empirical data is used as a constraint where possible, as Richard Betts notes below regarding aerosol forcing estimates.
I know this is a frequent complaint of yours so I thought you might be interested in this recent interview with Nathan Urban:
‘Analyses that are often passed off as empirical, such as direct comparisons of global average temperature and radiative forcing data, implicitly use climate models, even if the “model” is nothing more than a ratio or linear regression. This usually amounts to using a zero-dimensional linear energy balance climate model in disguise, whose physical assumptions are much cruder than the model we use.‘
Regarding natural forcings the only one in the SPM table is solar irradiance. The Low LOSU indicated basically reflects uncertainty regarding changes in solar activity before the satellite era. There are a number of proxies available (sunspots, 10Be cosmogenic isotope records etc.) but results obtained when attempting to reconstruct TSI evolution from these have varied with methodology, and it’s not completely accepted which are the most reasonable.
There’s a brief summary of forcings in section 2.9.1, noting the key points of uncertainty for each element. It also includes some not featured in the SPM table because LOSU is too low to reasonably assign a value.
Thanks for post.
Computer models are certainly very useful tools (actually they are simply superfast pre-programmable slide rules as far as I am concerned), and – of course – like the old slide rule, they are only as good as what is programmed in.
I had assumed (incorrectly, it appears from your post) that the temperature record from ~1750 to 2005 was essentially the basis for the total radiative forcing estimate of IPCC for this period, and that the individual anthropogenic forcings were estimated based on model simulations (this second part appears to be right).
Your post tells me that even the total forcing is not based on the empirical temperature record but on more model estimates constrained by physical evidence, “where this is available”.
As a rational skeptic, this worries me, as it tells me that it is very likely to be much more uncertain.
Going through AR4 WG1 Ch.2, the observed increase in GHG concentrations is covered in detail, but the derivation of the individual radiative forcing (RF) estimates cited in FAQ 2.1, Figure 2 appears to be based on model simulations alone, rather than real-time empirical data.
TOA measurements are cited, but these do not relate directly to RF.
Estimated RF from changes in solar irradiance (SI) is covered in some detail, but does not explain the below-average temperatures of the 17th century:
Inasmuch as this very small RF could not possibly have caused the significant pre-industrial warming following the Maunder Minimum, there must have been something else at play.
Aerosol influence on clouds (both natural and anthropogenic) is mentioned, but naturally caused changes in cloud cover as a separate forcing are not discussed.
Strangely, the cosmic ray / cloud cover hypothesis of Svensmark et al. is essentially written off as meaningless.
At the end of Section 2.7.1, IPCC does concede that its “level of scientific understanding” is “very low for cosmic ray influences”.
The first CLOUD experiment results at CERN are beginning to show that there is a good correlation and a plausible mechanism; further results may hopefully help IPCC improve its “very low level of scientific understanding of cosmic ray influences” in its new AR5 report.
Several solar studies have attributed ~50% of the total warming to the unusually high level of 20th century solar activity (highest in several thousand years), rather than only 7% (IPCC model-based estimate, which considers only SI). Most of these are not even mentioned by IPCC.
In addition to serious questions about the real total solar impact (rather than just the impact from SI), what appears to be missing in the RF estimations is a hard look at the empirical data at hand. By this I mean the full 150+ year surface temperature record, with its multi-decadal warming/cooling cycles of around 60 years total cycle time. It appears that the primary emphasis has been on the latest warming half-cycle, starting around 1976. The statistically indistinguishable warming cycle 1910-1940 is hardly mentioned and an almost equivalent earlier late 19th century warming cycle is not mentioned at all.
Attributing RF to various anthropogenic forcing factors on the basis of one half-cycle without fully understanding the rest of these multi-decadal cycles appears to me to be quite risky.
Now that you tell me that even the total natural plus anthropogenic RF estimate is not based on empirical data, but on model simulations, this raises even more doubt in my mind.
So, all-in-all, while you have answered my specific question, it seems that the level of uncertainty on all the RF estimates is very likely significantly higher than has been expressed by IPCC.
Chapter 2 was not concerned with detection and attribution of climate changes. The purpose was to set out what is known about factors which exert a ‘radiative forcing’ – essentially a change in the energy flux at the top of the atmosphere. How these things manifest as effects on climate would require a completely different line of questioning, found in other AR4 chapters. For example, glaciation cycles are not necessarily initiated by a net change in radiative forcing. It is the changes in spatial and seasonal distribution of incoming sunlight, caused by the Earth’s axial and orbital mechanics, that carry the largest effects. Of course, as ice melts, albedos change and GHGs are released, there is a net radiative forcing increase which drives global average temperature changes.
The models used to estimate radiative forcings are all based on physical evidence in the sense that they incorporate known physical properties of the elements they model. For example, the radiative properties of greenhouse gases are well known and the fact that some of them are well-mixed up to the top of the atmosphere (i.e. their concentrations are relatively constant horizontally and vertically in the atmosphere) can be directly measured. These physical properties are built into models and then fed with a scenario to obtain a forcing change estimate. The confidence we have in the results is partly based on our confidence concerning knowledge of relevant physical properties and the level of consensus in the model results. LLGHGs are high on both counts so we can have very strong confidence in the RF values.
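To illustrate why the LLGHG forcings score so well: the CO2 value can be reproduced from measured concentrations alone, using the simplified expression of Myhre et al. (1998) that underlies the IPCC figures. The sketch below assumes approximate pre-industrial and 2005 concentrations of 278 and 379 ppm:

```python
import math

# Simplified CO2 forcing expression used in the IPCC reports (Myhre et al. 1998):
# delta_F = 5.35 * ln(C / C0), in W/m^2
C0 = 278.0   # ppm, approximate pre-industrial (1750) concentration
C = 379.0    # ppm, approximate 2005 concentration

delta_F = 5.35 * math.log(C / C0)
print(round(delta_F, 2))  # 1.66, matching the AR4 best estimate for CO2
```

Aerosol forcings admit no such back-of-envelope check, which is the point being made about relative confidence.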
Aerosol effects carry less confidence on both counts. Their short lifetimes in the atmosphere (days to weeks) mean their effects are highly divergent geographically. Their effects are also highly dependent on particle size (particularly when it comes to affecting clouds) and vertical distribution (there are absorbing aerosols and reflective aerosols – the sum of the effects is obviously dependent on which type is above the other). This is probably why the chapter 2 authors thought it wise to give greater weight to studies which incorporated empirical constraints on energy fluxes.
The very low LOSU for GCRs is based on the fact that nobody has yet provided a real quantification of the effect. The CLOUD result is an important first step but the aerosol particles produced by GCRs in the experiment are very small. Hopefully the CLOUD team can follow up with more results in time for AR5. There’s an interesting write-up on GCRs here. One thing that becomes clear from reading this is that chemistry in the atmosphere (chiefly interactions with other aerosol species) plays a fundamental role in the final effect, which means any quantification will be highly dependent on modelling (sorry!).
It is the estimation of the anthropogenic and natural radiative forcings per Ch.2 in which I was interested, not the other topics you mention, which are covered in other chapters.
I had assumed that the total RF from 1750 to 2005 per IPCC was based on actual physical observations (i.e. supported by the temperature record), and that, therefore, an underestimation by the models of the negative cloud albedo effect (as suggested by the post of Paul Matthews) would mean that some other (positive) forcing was underestimated.
You told me that the total RF was a “bottom-up” model-based estimate, so that my assumption was incorrect. After going through AR4 Ch.2 I see that this is correct.
This tells me that, if Paul M is right, the net anthropogenic RF should be lower by 0.285 W/m^2, i.e. 1.315 W/m^2 rather than 1.6 W/m^2 (assuming all other forcings remain equal).
And, assuming that IPCC’s estimate of natural RF (assumed by IPCC to be limited to direct solar irradiance alone) remains the same at 0.12 W/m^2, we would have net anthropogenic RF at 1.315, natural RF at 0.12 and the total at 1.435 W/m^2.
IOW the natural component as estimated by IPCC would represent
0.12 / 1.435 = 8.4% of the total.
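The natural share can be recomputed from the components (a minimal sketch; the variable names are mine):

```python
# Recomputing the natural share from the components quoted above (W/m^2)
anthropogenic_revised = 1.6 - 0.285   # net anthropogenic RF if cloud albedo is -0.985
natural = 0.12                        # IPCC solar irradiance estimate
total = anthropogenic_revised + natural

share = natural / total
print(f"{share:.1%}")  # 8.4%
```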
This, of course, does not include any indirect solar RF, for example from cosmic rays / clouds or any other naturally induced cloud forcing.
So, in light of more recent findings (CLOUD, etc.), I think we can safely state that, if Paul M is right, it is virtually certain that natural forcing (1750-2005) is more than 8.4% of the total natural plus anthropogenic forcing.
That is basically what I wanted to establish, and I think you have helped me confirm it.
OT, but the US Dept of Justice has contacted several leading bloggers, including Steve Mc, in connection with climategate e-mail “theft.” tallbloke in England has been raided in conjunction with this, and had his main computers taken into custody.
From Air Vent:
An interesting post at WUWT on the matter.
UPDATE: Tallbloke and I both received the following notification from the U.S. Department of Justice Criminal Division, forwarded by Ryan at WordPress. ClimateAudit is also mentioned, yet I’m not certain that Steve received notice. It seems that the larger paid blogs may not have received any notice. The PDF: WordPress Preservation Request-1
U.S. Department of Justice
1301 New York Avenue, NW, 6th floor
Washington, DC 20005
December 9, 2011
VIA ELECTRONIC MAIL
60 29th Street #343
San Francisco, CA 94110
Re: Request for Preservation of Records
Dear Automattic Inc.:
Pursuant to Title 18, United States Code, Section 2703(f), this letter is a formal request for the preservation of all stored communications, records, and other evidence in your possession
regarding the following domain name(s) pending further legal process: http://tallbloke.wordpress.com, http://noconsensus.wordpress.com, and http://climateaudit.org (“the Accounts”) from 00:01 GMT Monday 21 November 2011 to 23:59 GMT Wednesday 23 November 2011.
I request that you not disclose the existence of this request to the subscriber or any other person, other than as necessary to comply with this request. If compliance with this request might result in a permanent or temporary termination of service to the Accounts, or otherwise alert any user of the Accounts as to your actions to preserve the information described below, please contact me as soon as possible and before taking action.
READ THE LINK ABOVE FOR THE REST
I think all the blogs are through wordpress. Judith, hopefully you are not on the DoJ’s bloggers-of-interest list!
Sorry, the WUWT link didn’t appear: http://wattsupwiththat.com/2011/12/14/uk-police-seize-computers-of-skeptic-in-england/
Where’s the “whistleblower protection law” when we really need it?
Wow, yes. Dr. Curry, maybe you should open another thread for this subject to be discussed, so that this interesting thread won’t get swamped by discussion of this new (obviously also interesting) development.
Interesting how cloud cover movement coincides with the velocity and size difference to move from equatorial regions to the poles.
They have a difficult time of coming back due to the “downhill effect” and velocity differences.
The main point here is the denial by the IPCC of a straightforward error.
What would the effect be? Well if they had used a more accurate value for the average of the albedo effect, and included the lifetime effect, it would have been clear to anyone glancing at Fig SPM2 that the combined aerosol effect would be as large as the CO2 effect or larger, making a nonsense of the IPCC’s obsession with CO2. As Bill says, the right way to think about it is not that this implies higher sensitivity but that they would have to acknowledge more natural fluctuation.
While checking through the numbers from the references I found
two more errors:
(a) Kristjansson 2002 is missing from the references. The missing paper is “Studies of the aerosol indirect effect from sulfate and black carbon aerosols” J. Geophys. Res., 107, 4246, (2002).
(b) Menon et al 2002b should be 2002a in Fig 2.14.
The first of these was reported by a SOD reviewer (comment 2-1103, Kristjansson himself) and marked as ‘accept’, but this was not done. Another failure of the IPCC process.
That makes four typos in this short section.
A major weakness of pseudoskeptical rhetoric is your habit of overselling. You haven’t found a straightforward error denied by the IPCC. You found 2 straightforward errors they corrected, and sent you a kind thank you for pointing out. You have one number that comes up different in your calculations, and they in their response gave you a couple of reasons why that might be. Have you investigated those? It would appear not. Do you know how they arrived at the figure they used? If you don’t, it may or may not be an error, but it’s obviously not straightforward.
The overselling continues. Now you jump from a difference of 0.2 W/m^2 in the aerosol forcing to “it would have been clear to anyone glancing at Fig SPM2 that the combined aerosol effect would be as large as the CO2 effect or larger, making a nonsense of the IPCC’s obsession with CO2.”
There are at least three “straightforward” errors with this assertion, to use your terminology. You’re way off on the forcings, haven’t accounted for the different residence times of aerosols and CO2, and ignore the fact that reducing the net forcing actually implies a greater climate sensitivity (and no, a difference in the man-made aerosol forcing would not somehow show greater “natural fluctuation.”)
Dumb mistakes like that make you less compelling as a supposed scourge of error. And the quick and unwarranted retreat to a narrative of victimhood despite a positive and positively friendly response to your complaint highlights in a very ugly way your partisanship. That makes it less effective even as partisanship. Remember your Chekhov: “In order to move your reader, write more coldly.”
Great comment Paul – natural climate variability is important and the IPCC errs, when it does err, on underplaying it in favour of their CO2 thesis. The IPCC response in this case and the snippy Hegerl Solomon response to Prof Curry on the other recent post show the IPCC folks just don’t like it up ’em.
the right way to think about it is not that this implies higher sensitivity but that they would have to acknowledge more natural fluctuation.
Why is the latter the right way and not the former?
Could you explain where you think there is an undue obsession with CO2?
By the way, you can find RF estimates including all indirect aerosol effects in Section 220.127.116.11. You can see it doesn’t actually change the picture that much – the global mean is still around -1.2 W/m^2. This is because some of the further indirect effects exert positive forcing.
I believe your statement:
has answered my question above to PaulS.
Some simple arithmetic (as I understand it).
– CO2 radiative forcing remains at 1.66 W/m^2
– mean human aerosol forcing increased from -0.7 to -0.985 W/m^2
– all other anthropogenic forcing factors remain “as is”
– total net anthropogenic forcing is reduced by 0.285 W/m^2
– total natural plus anthropogenic forcing remains at 1.72 W/m^2
– so net natural forcing is increased from 0.12 to 0.405 W/m^2, to make up the difference.
Does this make sense?
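It is at least internally consistent; a quick sketch encoding the bullets above (including the assumption, questioned in the replies, that the 1.72 W/m^2 total stays fixed):

```python
# Checking the bullet arithmetic above, which holds the 1.72 W/m^2 total fixed
total_fixed = 1.72            # total natural + anthropogenic RF, assumed unchanged
aerosol_ar4 = -0.7
aerosol_revised = -0.985
natural_ar4 = 0.12
anthropogenic_ar4 = 1.6

# A 0.285 W/m^2 more negative aerosol forcing, with the total held fixed,
# pushes the natural component up by the same amount
delta = aerosol_ar4 - aerosol_revised               # 0.285
natural_revised = natural_ar4 + delta               # 0.405
anthropogenic_revised = anthropogenic_ar4 - delta   # 1.315

# The revised components still sum to the fixed total
assert round(anthropogenic_revised + natural_revised, 3) == total_fixed
print(round(delta, 3), round(natural_revised, 3))  # 0.285 0.405
```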
Thanks for a response (to straighten me out on this).
Why would you hold the estimate of the total forcing constant?
Because I assume (maybe incorrectly) that this total number is based on empirical data backed by physical warming observations and not just simply more model-based gobbledygook.
You seem to have forgotten that we went through all this over on Bishop Hill a few months ago :-)
Remember I was a lead author on the RF chapter? (I did the land albedo forcing so the cloud albedo estimate was not directly my responsibility, but I did give you as good a response as I was able to).
Also you did get a reply by email from Piers Forster (the RF chapter CLA) which, as I understand it, pretty much agreed with what I told you on BH.
So you *have* had more of a reply from IPCC than you are saying here. Indeed you had 2 replies directly from the relevant chapter authors!
Please read the BH discussion thread for details, but the bottom line is that the “best estimate” for the cloud albedo RF was *not* merely the median of all the studies listed in the table – the observationally-based studies were given more weight than the modelling studies, as one would hope! :-)
BTW the First Order Draft of the AR5 Working Group 1 volume will be out tomorrow. If you want to review the new RF chapter (or indeed any other chapter) you can sign up here:
You can self-declare your expertise, and if you have no publications you can put “none” and still be considered. My understanding is that as long as someone genuinely wants to make well-posed, evidence-based comments (such as that which you have made here) then these will be welcome.
NB This time I am not on the WG1 RF chapter – I’m in WG2 this time, on the terrestrial ecosystems chapter. Our FOD will be out in June 2012, and a similar open call for reviewers will be made.
Richard, thanks for your comment. This seems to be another example of a judgment by the IPCC that is not easily traceable.
Wow, this guy really sounds like a monster. No wonder these IPCC folks inspire such loathing. I had no idea.
Don’t fret Robert. Richard tends to keep his ire for idiots who think 2 degrees spells disaster ;)
If you can prove that it is safe to make the world hotter than it has been since the dawn of human civilization, please do so.
I don’t think you’ve made the case for that, do you?
“If you can prove” that human CO2 emissions will “make the world hotter than it has been since the dawn of human civilization”, please do so.
Otherwise, don’t bring up such hare-brained notions, Robert.
“You seem to have forgotten that we went through all this over on Bishop Hill a few months ago … Remember I was a lead author on the RF chapter?”
Probably worth reviewing this BH thread as well,
The Lean 2000 versus Svalgaard thing has always interested me. Lean 2000 has been used by both sides to prove points, which to me indicates that Lean 2000 may be showing some part of a non-linear response to TSI (that may be perverse logic, but a non-linear response can be interpreted in various ways). Leif disagrees; while he is an expert on the sun, his point that the correlation is not perfect enough is just another characteristic of a non-linear response.
Hopefully, Douglass and Tsonis can combine or modify their approaches to get a better handle on things.
I think Paul’s comment here has it right (“As Bill says…”):-
It doesn’t seem to matter what the argument is, both sides will find a way of using it as support. Richard does this as well as anyone.
Richard, I had not forgotten our previous discussion, but I did not want to make this look like a personal criticism of you or Piers, so here I relied on the official IPCC response, which was not available at that time.
You seem to be agreeing with me that it is not just the median of all the values in the table and that therefore the text is wrong. If the IPCC had just accepted this and agreed to make a slight change in the wording, there would not be much of a story.
The definition of ‘median’ is quite clear and does not include arbitrary undisclosed weightings as you now seem to be suggesting.
The IPCC Principles include the words “open and transparent” in the second sentence. We are talking here about the derivation of the second biggest number in the IPCC’s important RF diagram!
I have signed up as an AR5 reviewer, so I now have the text of the AR5 FOD aerosol chapter in front of me – but I am not allowed to say anything about it! However, the flaw in the process pointed out above has not been fixed so the process may not be very worthwhile.
Thanks Paul. I agree with you and Judith that the text could and should have been clearer. If you think similar issues are present in the AR5 FOD then do say so in your review comments. Apart from anything else, I’m sure the authors will appreciate an argument to get an increased page length for their chapter :-) But seriously, yes, this kind of thing should be explained clearly. At the WG2 lead author meeting which finished yesterday, we were reminded that traceability is important. One of the important roles of the reviewers is to help make sure this happens.
Speaking of traceability, I stumbled upon the sexiest authoring tool I’ve seen for a while:
While it is not a real tracing tool, it does help create documentation based on any kind of document, including LaTeX. For those who might feel lukewarm about yet another new tool, it is based on Sweave.
This is only a suggestion, of course. While we all should welcome constructive proposals, we all should beware of feature requests that sound like “we should formally specify models” or “we should formally derive”, which are not as obvious as one might wishfully think.
Any methodological change must bear its cost. Some people still use Word.
This page length (and/or variants thereof, specifically “page limit” and “space limit”) constraint was the “reason” given for rejecting no fewer than 320 reviewer comments on the SOD for AR4.
Pardon my skepticism, but – apart from the fact that in this day and age “space” should not be an issue* – it seems to me that this “reason for rejection” might have been (and, evidently, might continue to be) used as a very convenient excuse for exclusion of important details.
* I recognize that this constraint no doubt dates back to the early days when the dead-tree version of the IPCC reports predominated. But surely it is time to reconsider this apparently overriding priority. If this cannot be done, then perhaps some consideration should be given to amalgamating the References [which for AR4 came to a total of 1,000+ pages] for each chapter into a single bibliography for each of the three reports. Since many references are cited (not always consistently) in many chapters, this would surely free up some “space” for greater “page length” so that important content does not fall by the wayside, don’t you think?!
After reading Richard Betts’s evolving explanations for the use of “-0.7 W/m2 as the median” at Bishop Hill, this outside observer concluded that Paul Matthews’s complaint was more than justified.
@ Paul Matthews
this looks like the sort of thing that Lord Monckton is looking for as evidence of fraud
he can be contacted here
The Viscount Monckton of Brenchley [email@example.com]
Are you referring to the lack of peer review as a fraud or the apparent initial mistake -which may or may not be important?
It is the refusal on the part of IPCC to correct the record which Monckton regards as potentially deceptive. If you read his article, his point is that one act taken in isolation does not appear all that important, but when they are all added up the scale of the potential deception is huge. That is why he is asking for examples of this sort of behaviour. Well, at least that is where I think he is coming from.
Gary, I don’t think I would call this fraud. I would call it spin and distortion as they appear to be trying to push the numbers in a certain direction.
What is the difference between distortions to push numbers in a certain direction and fraud? It seems that you are making an accusation of deliberate and knowing distortion for the purpose of misleading. Why wouldn’t that be fraud?
Joshua is right – “pushing numbers” is to “fraud”, what “being economical with the truth” is to “lying”.
I think you may be rather proving Paul’s point. Perhaps a more plausible interpretation is that the IPCC process encourages the construction of a coherent story.
So Paul’s basic point is this:
(1) First everything is scientfically reviewed and corrected
(2) Then everything is politically reviewed and corrected
(3) Then it is published
I’m slightly confused about why there is so much suspicion about this. The usual accusation (eg: from Christopher Monckton) is that the IPCC is deliberately exaggerating climate sensitivity. If this were true then surely the aim would be to go for a larger estimate of the negative radiative forcing from aerosols, in order to give a smaller net anthropogenic RF and hence require a larger climate sensitivity in order to explain the observed warming? But actually, the evidence-based (but admittedly not clearly explained!) smaller estimate of negative aerosol RF leads to a higher net RF and hence a smaller climate sensitivity. (Judith, do you agree with that reasoning?)
So, if it’s a conspiracy, it’s a pretty disorganised one, as the RF chapter were clearly working against the D&A guys! :-)
There is no conspiracy. The IPCC tries to construct a plausible scientific story that supports the idea of anthropogenic warming, and in part this requires the diminishment of the role of natural variations. The thing that truly puzzles me is why the IPCC believes it is so important to build such a story, when in fact it is irrelevant. CO2 is a GHG and therefore rising emission levels need to be addressed; the scientific questions will be resolved by building a better understanding of natural variability.
Read chapters 3 and 6 of AR4 and I challenge you to maintain the idea that natural variability is being downplayed.
It is downplayed, and all but completely ignored in Ch 9, on attribution.
Paul S: If it were not downplayed the IPCC could not possibly come to the conclusions they do.
Read http://www.sepp.org/publications/NIPCC_final.pdf. To anyone familiar with the skeptical arguments for natural variability your claim is preposterous.
I thought a reasonable question about natural vs anthropogenic was raised at the head of the following BH thread,
AR4 says that “most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.” What is your best evidence in support of this conclusion? [paraphrased slightly]
I’m not sure if this particular question has already been discussed here, but it might benefit from the wider range of contributors, and also be of maximum benefit to lay people like myself.
Philip, Wow, you are new to this blog aren’t you! This is one of Judith’s main themes. Search for ‘attribution’ and ‘uncertainty’ and read the paper by Curry and Webster, ‘Climate Science and the Uncertainty Monster’.
Paul, Not entirely new, although I’ve certainly been trying hard to catch up. I think I do understand that this is one of the themes of the blog, and yet I haven’t noticed a discussion that could be described as a defense of the IPCC’s position captured in that statement. Perhaps you are right and it’s scattered around the ‘attribution’ and ‘uncertainty’ posts – I’ll search around a bit more, thanks.
Philip, the next thread, Hegerl et al react…, is an attempt to defend the IPCC position against Judith’s criticism. You may not find it very convincing.
Thanks Paul – Again you’re right.
Philip, I do not understand the “therefore” in your claim that “…therefore rising emission levels need to be addressed;…” Whether they have to be addressed (if by that you mean controlled or reduced) is precisely the subject of the climate debate. My position is that they do not have to be addressed.
David, My reasoning on this one is very simple (possibly too much so). More CO2 will have an impact because of its radiative properties. CO2 emissions are rising sharply and it is plausible that this will cause atmospheric levels to reach 1000 ppm, if nothing else changes. Although no-one knows what the result of this would be, even optimistic assessments suggest that such levels may cause some problems. I don’t think it is feasible (or desirable) to control emissions without practical alternatives. Therefore, I think that more funds should be directed towards energy generation R&D and towards vulnerability-reducing civil engineering projects. It doesn’t sound likely to me that the UN approach will be successful, and I’d feel happiest if the IPCC were discontinued.
Philip, the optimistic assessments are that there is no problem, which I agree with. So I see no reason to build civil engineering projects on speculation, such as dams and irrigation systems where none are presently needed. If you merely want to build resilience to natural events then that might make sense. The US flood control program is only half built.
The US gov’t alone already does perhaps $8 billion a year in energy generation R&D, which is quite enough. I could support research into actual climate change but I see no prospect of that happening as the US climate research organization is dominated by the IPCC model.
Agree with you that the IPCC should be disbanded and replaced with a smaller body of objective scientists, engineers and economists, who do not have a hidden agenda. But to your other statement on CO2 curtailment:
Human CO2 emissions have risen almost exponentially, as have atmospheric CO2 levels, currently at around 30 GtCO2/year and 390 ppmv, respectively.
Whether or not “rising emission levels need to be addressed” is an open question, as no one can show empirical evidence that these have any negative impact on human society or our environment.
As the WEC has estimated in 2010, the carbon contained in the “total inferred possible fossil fuel resources” of our planet would constrain the “maximum ever possible CO2 level from human emissions” at around 1,060 ppmv. This is based on the rather optimistic estimates that to date we have used up only around 12% of all the fossil fuels that were ever on our planet, and still have 88% to go; these would last us 300+ years at current usage levels. [Other projections, such as Rutledge/Hubbert, estimate the total fossil fuel availability to be much lower, constraining “maximum ever possible” atmospheric CO2 concentrations to around 600 ppmv.]
It is clear, however, that we owe our current standard of living, quality of life and high life expectancy largely to the Industrial Revolution and the availability of low-cost energy, based on fossil fuels.
Some very populous nations of this world, such as China and India, are going through this industrial development process today, thereby lifting their populations out of poverty, much as we did over the past one hundred fifty years, again based largely on the ready availability of low-cost fossil fuel based energy.
Other nations are still at the very beginning of this process; while many of these possess fossil fuel deposits, they have still not developed a low-cost energy infrastructure with general availability for their populations. As a result, the WHO estimates that some 4 million people die annually from the combined impacts of having no access to clean drinking water or to clean energy for domestic heating and cooking.
“Addressing” the purported CO2 “problem” entails shutting down the existing use of fossil fuels for the developed nations and blocking the increased use in the developing nations.
If this means simply replacing fossil fuels with nuclear fission (as could be envisioned in the developed world plus China and India) this would present no technical problem (albeit a significant political hurdle in a post-Fukushima world).
But many of the underdeveloped nations do not have nuclear technology (and, as many of these have unstable governments, it is not likely that it will be desirable for these nations to develop this technology, in view of the nuclear proliferation risks involved).
So we are left with a dilemma: deny the developing nations a better quality of life and reduce our own, in order to fight a model-based, virtual future “problem” for which there is no empirical evidence that it will ever occur.
On top of all this, we can easily calculate the maximum theoretical impact on our future climate of reducing our carbon emissions (and quality of life) by a certain percentage.
If all the nations currently “signed up” for Kyoto reductions (EU plus Australia and New Zealand) would shut down completely, this would have an impact of averting a few hundredths of a degree of greenhouse warming by the year 2100; in other words, an imperceptible impact.
This seems like a total no-brainer to me.
Until we have clearly defined alternate energy sources, which are politically acceptable and economically fully competitive with fossil fuels (and not artificially so, by simply taxing the fossil fuels) we should not undertake any drastic mitigation steps whose climate impact we cannot clearly quantify and whose unintended consequences we are unable to estimate today.
My opinion, of course.
Possibly nobody really knows how to calculate future CO2 levels any better than a back-of-the-envelope projection from where we are today. Over the last 10 years, levels have risen by 2 ppm/year. We can hope and expect that energy provision for everybody in the world will rise over the next few decades to European or US levels, i.e. world energy use will increase between 2 and 4 times. If rises in CO2 levels are proportional to emissions, this means levels will increase by 4-8 ppm/year, or 400-800 ppm/century, all else being equal. Hence, 1000 ppm is likely sometime in the next century.
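The back-of-the-envelope projection above can be checked in a few lines. This is only a sketch of the comment’s own reasoning: the 2-4x energy multiplier and the proportionality of concentration rise to emissions are the comment’s assumptions, not established facts.

```python
# Sketch of the back-of-the-envelope CO2 projection above.
# Assumptions (from the comment, not established facts): the CO2 rise
# stays proportional to emissions, and world energy use grows 2-4x.
current_rate = 2.0       # ppm/year, observed rise over the last decade
multipliers = (2, 4)     # assumed growth factors in world energy use

future_rates = [current_rate * m for m in multipliers]   # ppm/year
per_century = [r * 100 for r in future_rates]            # ppm/century

print(future_rates, per_century)  # [4.0, 8.0] [400.0, 800.0]
```

With an extra 400-800 ppm per century on top of today’s ~390 ppm, 1000 ppm sometime in the next century follows directly from these assumptions.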
I agree that the most optimistic assessments are sanguine about this possibility, but others, regarded by many as also being optimistic, are not. If the fossil fuels hold out long enough for 1000 ppm, and this level does turn out to be a problem, then the suggestions I mentioned will help to reduce the human impact. If the fossil fuels don’t hold out that long, then this is still a serious problem in itself, which the suggestions will help to solve. Even if there is no problem at all (and of course I hope this turns out to be the case), then the suggestions are still likely to be useful.
Regarding energy-related R&D expenditure, there is currently a gap between technical know-how and requirements. This gap has been known about for at least 30 years, and still remains. The UK DECC’s Chief Scientist (a reputable and reasonably credible fellow) believes that UK expenditure in this area is currently too small. If there are other reliable assessments that argue current US or global expenditure is enough to close the gap over the next couple of decades, then I’d be very pleased, but would also like to see a reference!
Regarding civil engineering, yes, I mean building resilience to natural events.
The answers to these questions all depend upon climate sensitivity to CO2 and the future course of natural variation in temperature, neither of which we know. If the Holocene continues as a gradual descent into the next ice age, then we may sooner be praying for a powerful greenhouse warming effect from CO2 than expecting to sauté ourselves in our own guilt over anthroCO2. If we are cooling, anthroCO2 may be the safest geoengineering method to keep us warm.
The centennial- and millennial-scale climate variations must be understood before we can sensibly decide whether anthroCO2 is a danger or a boon. As someone else has pointed out, ‘no regrets’ is a hollow facade of the Precautionary Principle, that paean to ignorance. ‘No regrets’ ignores lost opportunity costs in a foolish and willful attempt to avoid regrets by refusing to regret.
It’s been pointed out now repeatedly, that warmer is better than colder in terms of social security, species diversity, carrying capacity of the earth, and general sustenance of all life on earth. It will be easier to adapt to a warming earth than a cooling one.
If I were really interested in minimizing future regrets with the degree of ignorance we have, I would err on the side of anthropogenically warming the earth. It’s a better bet for all involved.
We regret we cannot allow regrets …
Your “back of the envelope” calculation of future atmospheric CO2 levels, i.e. “1000 ppm likely sometime in the next century”, has three basic problems.
The first is logical.
Over the period 1960 to today, human population grew from 3 billion to 7 billion, a compound annual growth rate (CAGR) of 1.7%/year. Over the same period the atmospheric CO2 concentration (which determines greenhouse warming via a logarithmic relationship) rose from 316 to 390 ppmv, a CAGR of 0.42%/year.
The UN estimates that human population growth will slow down to a CAGR of around 0.3 to 0.4%/year over the remainder of this century (it has already started slowing down), reaching between 9 and 10 billion by 2100.
If we assume that, despite this dramatic slowdown in population growth, the atmospheric CO2 level will continue to grow at the same exponential rate of the recent past, we will end up with a projected CO2 level of 584 ppmv by 2100.
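The growth rates quoted above are easy to verify. A minimal sketch, taking 1960-2010 as a round 50-year span (the exact percentage shifts slightly with the base year chosen):

```python
# Compound annual growth rate (CAGR) check for the figures above.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

pop_cagr = cagr(3e9, 7e9, 50)   # population: 3 -> 7 billion, 1960-2010
co2_cagr = cagr(316, 390, 50)   # Mauna Loa CO2: 316 -> 390 ppmv

# Prints roughly 1.71% and 0.42%, matching the figures quoted above.
print(f"population: {pop_cagr:.2%}, CO2: {co2_cagr:.2%}")
```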
This is IPCC’s model-based “scenario and storyline B1”: population growth slowing down as indicated, moderate economic growth and no “climate initiatives”.
IPCC’s worst-case “scenarios and storylines” assume that the exponential rate will almost double and CO2 levels will reach 700 to 850 ppmv. This appears totally unrealistic, in view of the dramatic decrease in population growth.
So much for the first problem.
The second problem is physical. All the optimistically estimated remaining fossil fuels on our planet contain just enough carbon to get us to 1,065 ppmv CO2 when they are ALL GONE (I can show you the numbers, if you are really interested).
Other studies (Rutledge/Hubbert) have estimated the remaining fossil fuel reserves at much lower levels, constraining the absolute maximum ever physically possible CO2 level at around 600 ppmv.
At present consumption rates, the optimistically estimated remaining fossil fuels would be used up in 300+ years.
Finally, there is absolutely no question in my mind that fossil fuels will become more and more expensive as the remaining reserves become increasingly difficult to extract (tar sands, shales, Arctic or deep offshore locations, etc.), so that the world will gradually move toward lower cost energy alternates, which will undoubtedly be developed in the future.
Fossil fuels will then be used as feedstocks for higher added-value end uses (petrochemicals, fertilizers, pharmaceuticals, etc.), which do not generate the CO2 from combustion.
So I believe, based on the above logic, that we may reach 500 or 600 ppmv over the next 100-150 years, but it is very unlikely that we will ever reach 1,000 ppmv.
If you have any data that would indicate otherwise, I would be interested in seeing it.
I’m sorry, my last comment crossed with your last one, and I ended up missing your criticism. I should say that I’m generally pretty skeptical about this kind of calculation, but have simply not come up with a killer reason for it to be wildly wrong. I’d be pleased if someone would point out the fatal fault, and if you like I’m happy to do a few more rounds about it!
Here are the figures I’m using for current energy consumption, which include fuel, electricity, everything (rounded figures from World Bank):
World: 512 EJ/year
US: 96 EJ/year
UK: 10 EJ/year
World: 6000 M
US: 300 M
UK: 60 M
Projected future world population: 9000 M and thereafter roughly constant.
Therefore, current per capita energy consumption:
World: 512×10^18 / 6×10^9 = 85 GJ/year
US: 96×10^18 / 3×10^8 = 320 GJ/year
UK: 10×10^18 / 6×10^7 = 170 GJ/year
If in the future all 9000 M of us have either US or UK energy consumption, then world energy consumption reaches somewhere in the range 1530-2880 EJ, i.e. between approximately 3 and 6 times today’s consumption (I did indeed forget to take account of the population increase last time).
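As a quick check of this arithmetic, using the rounded figures from the comment (the 9 billion population and UK/US per-capita consumption levels are the comment’s assumptions):

```python
# Reproducing the energy arithmetic above (rounded World Bank figures).
EJ, GJ = 1e18, 1e9  # joules

# Current per-capita consumption:
world_pc = 512 * EJ / 6e9 / GJ   # ~85 GJ/year
us_pc    = 96 * EJ / 3e8 / GJ    # 320 GJ/year
uk_pc    = 10 * EJ / 6e7 / GJ    # ~167 GJ/year (rounded to 170 above)

# If 9 billion people all consume at the rounded UK..US per-capita rates:
future_low  = 9e9 * 170 * GJ / EJ   # 1530 EJ/year
future_high = 9e9 * 320 * GJ / EJ   # 2880 EJ/year
print(future_low / 512, future_high / 512)  # ~3x to ~5.6x today's 512 EJ
```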
Apart from the population projections, the main criticism I can see is that it may not be possible to raise everyone up to these levels, but it’s a goal that lots of people aspire to (including me), and this looks like a consequence of that aspiration.
Rate of change of CO2 atmospheric levels (as per ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_annmean_mlo.txt):
1960s: ~ 1 ppm/year
2000s: ~ 2 ppm/year
CO2 emissions levels:
1960s: ~3.5 GT/year
2000s: ~7 GT/year
From this, assume proportionality between emission levels and the rate of change in atmospheric levels. Possible criticisms I can think of: the data may be flaky, the apparent proportionality may be an illusion, or Salby’s argument (that the rise is largely natural rather than emissions-driven) may hold.
Also assume emission levels are proportional to energy usage, which will be true I think provided the energy mix remains unchanged.
Therefore, the projected rate of change of CO2 atmospheric levels once population and energy consumption have peaked will be between 3-6 times 2 ppm/year i.e. between 6-12 ppm/year.
If the population and energy consumption peak 100 years from now, then why not say the rate gets there linearly? If so, then by 2112 CO2 levels are between 700-1000 ppm (400 + 0.5*[6-12]*100), and 50 years later between 1000-1600 ppm (+ [6-12]*50).
As you say, the main additional criticism of this is that the fossil fuels may run out before then. On the other hand, known fossil fuel reserves have only ever risen in the past, and as far as I know it is quite possible that this will continue long enough for the scenario to work.
Before you decide the IPCC is acting on professional motives, you should consider the evidence.
Read Donna Laframboise’ book on the topic.
It is extremely well documented and has not been credibly disputed by IPCC defenders.
Thanks everyone. I agree that the most interesting science issue in all this is natural variability, and will keep on reading here to try to improve my understanding of that issue.
It may be outside your area of expertise, but do you think the extra weighting on observational studies was fully justified? I’ve read a few comments in the chapter 2 second-order draft review which were pointing out large structural uncertainties in the observational approaches. For example, Graham Feingold says ‘The satellite results that I am familiar with suggest to me that it is far too premature to use them as constraints to GCMs.’
The final AR4 text even has a section which suggests studies constraining cloud albedo changes using satellite data may lean towards underestimation of the effect.
You’re right, it is beyond my expertise to be able to comment in detail I’m afraid – I’m a land surface guy! However it will be interesting to see how the cloud albedo forcing estimate shapes up in AR5. We shouldn’t discuss it here, but I would definitely encourage people to review AR5.
Incidentally, Judith, please may I encourage you to change your mind and accept the invitation to review AR5 too? I think your input would be extremely valuable.