# CO2 no-feedback sensitivity

by Judith Curry

The IPCC defines climate sensitivity as “a metric used to characterise the response of the global climate system to a given forcing. It is broadly defined as the equilibrium global mean surface temperature change following a doubling of atmospheric CO2 concentration.”

The no-feedback sensitivity is the direct response of the surface temperature to radiative forcing by the increased CO2, without any feedbacks.  Why is this interesting and important?  The no-feedback sensitivity is in principle much easier to calculate (and can presumably be calculated with certainty), and it provides a reference point for assessing the sensitivities associated with climate feedbacks within the overall climate sensitivity to CO2.

The CO2 no-feedback sensitivity is an idealized concept; we cannot observe it or conduct such an experiment in the atmosphere.  Hence, the CO2 no-feedback sensitivity can only be calculated using models. Determination of the no-feedback sensitivity has two parts:

• calculation of the direct radiative forcing associated with doubling CO2
• determination of the equilibrium change of global mean surface temperature in response to the CO2 forcing

The IPCC TAR adopted the value of 3.7 W/m2 for the direct CO2 forcing, and I could not find an updated value in the AR4.   This forcing translates into 1C of surface temperature change.  These numbers do not seem to be disputed, even by most skeptics.  Well, perhaps they should be disputed.

On the previous greenhouse thread, Jeff Id wrote:

I also wouldn’t be surprised to find out that warming from doubling of CO2 even with no feedback has been incorrectly estimated. I haven’t found a good reference for the 1C calculation as yet. The only stuff out there is too basic to correctly capture the magnitude IMO.

Fred Moolten replies:

Jeff, the 1C value for a forcing of 3.7 W/m^2 (the canonical value for doubled CO2 based on radiative transfer equations and spectroscopic data) is derived by differentiating the Stefan-Boltzmann equation that equates flux (F) to a constant (sigma) x the fourth power of temperature. We get dF/dT = 4 sigma T^3, and by inversion, dT/dF = 1/(4 sigma T^3). Substituting 3.7 for dF and 255 K for Earth’s radiating temperature, and assuming a linear lapse rate, dT becomes almost exactly 1 deg C. In fact, however, the models almost uniformly yield a slightly different value of about 1.2 C, based on variations that occur with latitude and season.
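As a sanity check, Fred’s back-of-envelope number is easy to reproduce in a few lines (a minimal sketch; it only restates the Stefan-Boltzmann arithmetic above, not a full radiative transfer calculation):

```python
# Differentiate the Stefan-Boltzmann law F = sigma * T**4:
# dF/dT = 4*sigma*T**3, so dT ~ dF / (4*sigma*T**3).
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0     # Earth's effective radiating temperature, K
D_F = 3.7         # canonical forcing for doubled CO2, W m^-2

dT = D_F / (4 * SIGMA * T_EFF**3)
print(f"no-feedback dT = {dT:.2f} K")   # ~0.98 K, i.e. almost exactly 1 C
```

This recovers the ~1C figure, but only under exactly the assumptions Fred lists (a single radiating temperature, a fixed lapse rate, and pure Planck response).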

Peter317 then writes:

Yes, a body will radiate ~3.7 W/m2 more energy if its temperature is increased by ~1C. However, that does not mean that increasing the energy flow to the body by that amount is going to increase its temperature by ~1C. Any increase in the temperature has to increase the energy flow away from the body by all means it can, i.e. conduction, convection, and evaporation as well as radiation. When the outflow of energy equals the inflow, then thermal equilibrium exists with the body at a higher temperature. But that temperature cannot be as much as ~1C higher, as the radiative outflow has to be less than the total outflow.

I am siding with Jeff Id and Peter317 on this one: there is too much that is not clear on this subject, and translating 3.7 W/m2 into a 1C temperature increase seems incorrect to me.  To try to sort out this issue, I committed to spending one day preparing this post.  Perhaps I am missing some crucial references.  In any event, some rethinking and re-evaluation of this topic seems to be in order.

Direct CO2 radiative forcing

The concept of radiative forcing, as it is used by the IPCC, is clearly described in this Appendix to the TAR.

The direct CO2 radiative forcing is the change in infrared radiative fluxes for a doubling of CO2 (typically from 287 to 574 ppm), without any feedback processes (e.g. from changing atmospheric water vapor amount or cloud characteristics).

The IPCC defines radiative forcing as ‘the change in net (down minus up) irradiance (solar plus longwave; in W m–2) at the tropopause after allowing for stratospheric temperatures to readjust to radiative equilibrium, but with surface and tropospheric temperatures and state held fixed at the unperturbed values’.

When reading a paper about radiative forcing, be sure to check which level in the atmosphere it refers to; in addition to the tropopause level, it might refer to the top of the atmosphere (or top of model; this definition is used by Soden and Held) or to surface forcing.  Note that Collins et al. consider radiative forcing at all three levels.

Modeling the direct CO2 forcing

In the Best of the Greenhouse thread, we argued that the basic mechanism of the so-called Tyndall gas effect (e.g. the infrared radiative emission of gases) is well understood.  In the thread on Confidence in Radiative Transfer Models, we argued that line-by-line radiative transfer codes and the best band models can accurately simulate clear sky (no clouds, aerosols) infrared radiation fluxes at the surface provided that the vertical profiles of atmospheric temperature and trace gas concentrations are specified accurately.

Can we believe calculations from line-by-line radiative transfer codes for an atmosphere with doubled CO2?  Well, the line-by-line codes and the RRTM band model have been shown to do a good job of capturing radiative flux variations in response to substantial (more than an order of magnitude) variation in water vapor content for a polar versus tropical atmosphere. Since the radiative properties of the CO2 molecule are simpler than those of the H2O molecule, we infer that the line-by-line codes and RRTM should perform accurate calculations for an atmosphere with doubled CO2. The best of the radiative transfer band models used in climate models (e.g. RRTM, GISS) should also be doing this correctly, although Collins et al. do point out that many of the climate models’ radiative transfer codes do not compare well with the line-by-line models for CO2 doubling.

One of the nicest analyses I’ve seen that explains the latitudinal variations of CO2 forcing is given by Pielke Sr (see also this additional post), which provides total, troposphere, and surface forcing for tropical and subarctic summer and winter. The impacts of doubling CO2 are seen to vary strongly with latitude and season, owing to the blocking effects of clouds and water vapor on the CO2 forcing. For example, the surface forcing is much larger in the subarctic winter than in the tropics, since the subarctic winter air is very dry and clouds are thin, and hence CO2 plays a bigger role.

Calculating the global CO2 forcing taking into account the geographical and seasonal variations has been done by Myhre and Stordal 1997 (anyone locate a free online copy of this?) and also by Soden and Held. To account for the variations in clouds, humidity and temperature, Myhre and Stordal took the approach of using temperature and water vapor from the ECMWF analyses, climatological ozone data, and ISCCP cloud data; if I were designing this experiment, I would have made the same choices. The Myhre and Stordal calculations are dated owing to deficiencies in the band model, and their line-by-line model was almost certainly not treating the far infrared correctly.

Note that Myhre and Stordal found a 12.7% difference between CO2 radiative forcing with clouds versus clear sky. Radiative forcing with clouds (and aerosols) is a more complicated problem with greater uncertainties. But the calculations made by Stephens and Woods cited at Pielke Sr’s site are probably definitive in terms of radiative transfer in a cloudy atmosphere.

While the definitive global CO2 forcing calculations might not yet have been made with the currently best available radiative transfer codes, this is certainly something that should be feasible to do with fairly high accuracy.  And it is certainly something that should be done, rather than continuing to cite the 3.7 W/m2 value that was determined ca. 2000 and does not use the latest radiative transfer codes, which have since been improved and validated.  Note that the relatively recent Collins et al. study only considers a midlatitude summer atmospheric profile.

Equilibrium surface temperature change

So once we have calculated the direct no-feedback forcing (for a doubling of CO2), it is often stated that all scientists agree that the Earth’s temperature would respond by increasing 1C.

Drawing on its heritage in 1D radiative-convective models of the earth’s atmosphere, the IPCC AR4 WGI Section 2.2 describes the concept of determining the change in equilibrium surface temperature from radiative forcing (RF):

Radiative forcing can be related through a linear relationship to the global mean equilibrium temperature change at the surface (ΔTs): ΔTs = λRF, where λ is the climate sensitivity parameter.  This equation, developed from these early climate studies, represents a linear view of global mean climate change between two equilibrium climate states. The TAR and other assessments have concluded that RF is a useful tool for estimating, to a first order, the relative global climate impacts of differing climate change mechanisms (Ramaswamy et al., 2001; Jacob et al., 2005). In particular, RF can be used to estimate the relative equilibrium globally averaged surface temperature change due to different forcing agents.

According to this simple model that relates radiative forcing at the tropopause to a surface temperature change, there is an equilibrium relationship between these two variables.  The physical relationship between these two variables requires many, many assumptions, including zero heat capacity of the surface and a convective link between the surface and the tropopause.  Well, these kinds of assumptions were arguably useful in 1967, at the time of the famous Manabe and Wetherald 1967 paper, but why are we still using such a vastly oversimplified model to assess climate sensitivity?
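One way to see how much rides on the buried assumptions in ΔTs = λRF: the no-feedback value λ = 1/(4σT³) depends on which temperature you linearize about (a sketch; the choice between the 255 K emission temperature and a 288 K surface temperature is exactly the kind of hidden assumption at issue here):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
RF = 3.7          # doubled-CO2 forcing, W m^-2

dTs = {}
for T in (255.0, 288.0):   # effective emission temperature vs. mean surface temperature
    lam = 1.0 / (4 * SIGMA * T**3)   # no-feedback sensitivity parameter, K/(W m^-2)
    dTs[T] = lam * RF
    print(f"T = {T:.0f} K: lambda = {lam:.3f} K/(W m^-2), dTs = {dTs[T]:.2f} K")
# linearizing about 255 K gives ~0.98 K; about 288 K gives ~0.68 K
```

Same forcing, same equation, a ~30% different answer, purely from the choice of base-state temperature.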

Surface radiative forcing – link to surface temperature

If our primary variable of interest is the surface temperature, it seems more reasonable to use the surface radiative forcing in a direct calculation of the surface temperature change, accounting for the different surface types (e.g. snow/ice, bare land, ocean) and different surface energy balance regimes.

If the energy available at the earth’s surface increases by 1 W/m2, how does the surface temperature change?  The simple Planck calculation works for a surface absorbing/emitting as a black body, with zero heat capacity and no change in the surface turbulent heat loss.   Such a situation does not occur on planet earth, although land surfaces are a closer approximation to this ideal than the ocean or snow/ice.  Consider the Arctic Ocean sea ice during summer when it is melting: the surface temperature remains near 0C as the ice melts, with the heating (including solar heating) melting the ice rather than raising the surface temperature.  Or consider the tropical ocean, where the surface heating is mixed over a layer of order 100 m in depth, and likely contributes to an increase in evaporative cooling.
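The ocean heat capacity point can be made concrete: a mixed layer of order 100 m warms only very slowly under a sustained 1 W/m2 imbalance (a rough sketch; the 100 m depth and the neglect of any change in heat loss as the water warms are illustrative assumptions):

```python
RHO_W = 1000.0    # water density, kg m^-3 (rounded)
CP_W = 4186.0     # specific heat of water, J kg^-1 K^-1
DEPTH = 100.0     # assumed mixed-layer depth, m
SECONDS_PER_YEAR = 3.156e7

heat_capacity = RHO_W * CP_W * DEPTH   # ~4.2e8 J m^-2 K^-1
warming_rate = 1.0 / heat_capacity * SECONDS_PER_YEAR   # K per year under 1 W m^-2
print(f"~{warming_rate:.3f} K/yr")   # ~0.075 K/yr: decades to approach equilibrium
```

So the zero-heat-capacity Planck picture fails badly for the ocean: the surface does not simply jump to a new radiative equilibrium.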

So understanding how the earth’s surface accommodates an increased flux of heat is key to understanding climate sensitivity IMO, and using the surface forcing seems the logical way to approach this.  The IPCC cautions that:

It should be noted that a perturbation to the surface energy budget involves sensible and latent heat fluxes besides solar and longwave irradiance; therefore, it can quantitatively be very different from the RF[tropopause], which is calculated at the tropopause, and thus is not representative of the energy balance perturbation to the surface-troposphere (climate) system. While the surface forcing adds to the overall description of the total perturbation brought about by an agent, the RF and surface forcing should not be directly compared nor should the surface forcing be considered in isolation for evaluating the climate response (see, e.g., the caveats expressed in Manabe and Wetherald, 1967; Ramanathan, 1981).

I checked the “caveats” in Manabe and Wetherald 1967, p. 250: “The basic shortcoming of this line of argument may be that it is based upon the heat balance only of the earth’s surface, instead of that of the atmosphere as a whole.” This argument didn’t make much sense to me, but then I didn’t spend much time on it.  The Ramanathan 1981 paper is interesting (I don’t recall reading it before).   Ramanathan emphasizes that it is not just the radiative forcing at the surface, but also the atmospheric warming from CO2, that amplifies the surface warming.  This is simple enough to include.  Ramanathan concludes that it is possible to formulate the climate sensitivity problem from the viewpoint of the surface energy balance approach.

Redesigning the CO2 no-feedback sensitivity analysis

I think the correct way to do this problem is to use the surface energy balance approach, as broadly outlined by Ramanathan.  I would design the analysis in the following way:

1.  Compute the surface radiative forcing and its amplification by the atmospheric warming in a manner following Myhre and Stordal 1997, using gridded global fields of the input variables obtained from observations (e.g. the ECMWF reanalysis, ISCCP clouds, satellite ozone, some sort of aerosol optical depth from satellite).  Conduct the calculations daily over two different annual cycles (say one El Nino and one La Nina year).  These two different years provide an estimate of the uncertainty in the sensitivity associated with the base state of the atmosphere. Note that each annual forcing dataset will need to be run repetitively, for maybe up to a decade, to reach equilibrium for the ocean and sea ice models.  A grid resolution of 2.5 degrees should be fine.

2.  Use the calculated fluxes  to force the surface component of a climate model (without the atmosphere), including the ocean, sea ice, and land subsystem models, for the baseline (preindustrial) and the doubled CO2 forcing.  Conduct two calculations for both the baseline and perturbed cases:

1. keep the (turbulent) sensible and latent heat fluxes for the perturbed case the same as for the baseline case
2. determine the perturbed surface temperatures by calculating the turbulent sensible and latent heat fluxes using the perturbed surface temperatures

Note that these two different ways of treating the sensible and latent heat fluxes tell you different things about the sensitivity (without allowing the evaporative flux in #2 to change the radiative flux).
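The difference between cases #1 and #2 can be sketched with a linearized surface energy balance, ΔF = (4σT³ + dH/dT)ΔT, where dH/dT is the combined sensitivity of the turbulent fluxes to surface temperature; case #1 corresponds to dH/dT = 0 (the forcing value and the 10 W/m2 per K turbulent-flux sensitivity below are purely illustrative numbers, not results from the proposed calculation):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_S = 288.0       # illustrative mean surface temperature, K
D_F = 3.7         # illustrative surface forcing, W m^-2 (the true surface value differs)

planck = 4 * SIGMA * T_S**3   # radiative restoring strength, ~5.4 W m^-2 K^-1

# Case 1: turbulent (sensible + latent) fluxes held fixed; radiation alone balances
dT_fixed = D_F / planck

# Case 2: turbulent fluxes also increase as the surface warms
dH_dT = 10.0                  # illustrative turbulent-flux sensitivity, W m^-2 K^-1
dT_responding = D_F / (planck + dH_dT)

print(f"fluxes fixed:      dT = {dT_fixed:.2f} K")
print(f"fluxes responding: dT = {dT_responding:.2f} K")
```

With any nonzero turbulent response the equilibrium warming falls well below the radiation-only value, which is the substance of Peter317’s objection above.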

This is how I would do the analysis to determine the CO2 no-feedback sensitivity.  The number would almost certainly be less than 1C.

Conclusions

My one-day time limit is up. I feel this needs more thought, but this is what I came up with; I look forward to your feedback.  The simplified radiative forcing model that relates the tropopause radiative flux to the surface temperature requires far too many unwarranted assumptions (my concerns with the equation ΔTs = λRF will be the subject of a future post).

I’ve mentioned these general ideas a number of times before, including my opinion that the equation ΔTs = λRF was not carved in stone on Mount Sinai.   I’m not sure that I have any solutions here, but there are a whole lot of questions on this subject that require much more substantive answers.  I look forward to your ideas on this.

### 377 responses to “CO2 no-feedback sensitivity”

1. sharper00

“I’ve mentioned these general ideas a number of times before, and the “mainstream” has declared me to be dotty, and not understanding that the equation ΔTs = λRF was carved in stone on Mount Sinai.”

Can you point to where you’ve mentioned these general ideas and you’ve been declared “dotty” for doing so or is this another “It’s in my personal email” sort of thing?

• How about Colose’s opening statement below? Colose says: “With all due respect, Dr. Curry is getting caught up in a fallacy that was popular in the early half of the 20th century, particularly in early calculations done by Plass, Moller, and others. I think some of this confusion carries over to her quibbles with the equilibrium climate sensitivity equation as well.”
http://judithcurry.com/2010/12/11/co2-no-feedback-sensitivity/#comment-20872

Is Old Fallacy + Confusion + Quibbles sufficiently derisive? Note by the way that Colose never perfects the insult by saying what the fallacy, confusion and quibbles are. In fact he never addresses Curry’s specific arguments, but rather presents a cross argument instead of a counter argument, mostly a view from outer space.

• I did reply to Chris below. I know what he is talking about, I am trying to provoke some broader thinking on the subject.

• sharperoo: Perhaps I am just more sensitive to insults than you are. Where does Colose point out (1) the fallacy and (2) the quibbles? I do not see those words used in the body of his response.

Moreover, Curry responds as follows: “Chris, thanks for your nice summary, I understand all this. My issue is that all this seems to have nothing to do with surface temperature, which is the issue of concern.”

I agree with Curry. Colose’s response does not seem to address Curry’s argument. Rather he is arguing for something that is not at issue, which is a cross argument (as in cross purposes), rather than a counter argument.

• Jim

See comment #21 on Climate Progress

I don’t know who Adam R. is, but here is one instance.

November 11, 2010 at 8:40 pm

Curry has created for herself the persona of dotty aunt of climate science. I am sure it is not what she intended, but what was her goal, Grand Dame?

Now she is irretrievably lost in the wilderness of climate disinformation and obfuscation. There is no going back from here. Look for her to be collecting speaking fees at the Heartland Institute’s next pseudo-conference. Perhaps that’s been her goal, all along.

• sharper00

I saw that comment but remember the statement that’s being supported here

“I’ve mentioned these general ideas a number of times before, and the “mainstream” has declared me to be dotty”

Are we really meant to interpret that as “An anonymous commenter on climateprogress said I had created a dotty persona”?

I also see no indication that “Adam R” is reacting to Dr Curry’s views on no-feedback sensitivity, which is apparently the issue that has caused mainstream science to declare her “dotty”.

• Mark F

And yours as an attempt to legitimize the slime while further marginalizing the target? Um, “tacky”.

• sharper00

” legitimize the slime “

Well, my question is actually “Where is the slime?” and unless I’m supposed to accept that “Adam R” on climateprogress is now the spokesman for mainstream science, it appears to be absent.

“while further marginalizing the target?”

So in your view if someone makes claims concerning their treatment on an issue a request to support those claims is “further marginalizing the target” and “tacky”. That’s certainly an interesting way of looking at things.

• I could provide quotes from the sites Curry mentioned that are pejorative to the point of libel–and so can you. If you haven’t already seen them (and I think you have) a simple look through the archives of the sites she mentioned can provide you with enough soft porn for alarmists of any age.

• sharper00

“I could provide quotes from the sites Curry mentioned that are pejorative to the point of libel”

Are these statements directly relatable to Dr Curry’s view “that the equation ΔTs = λRF [is] not carved in stone “?

• RobB

You are well aware the topic is ‘CO2 No-feedback Sensitivity’ as opposed to Dr Curry’s dottiness. Seriously, please take this to the last open thread if it’s important to you.

• I am removing the “dotty” sentence from my original post, no further discussion on this topic on this thread, pick it up on one of the other threads if you like. If I have time (and I am insanely busy today preparing to leave for the AGU meeting), I will cite instances where my statements on this subject have been criticized.

• Shub

Dr C
These types of “show me the evidence” questions for absolute non-sequiturs get us nowhere.

“Show me where the mainstream has said disparaging things about you Dr Curry” – as though “the mainstream” has done anything else to date! They expect you to carry a log of their insults so they can prove from your records that their hearts are pure as the driven snow. Heh.

This, of course, brings us to the question of who this “mainstream” is. This browbeating, moralizing bunch – they know who they are – is not the ‘mainstream’, but a small claque that has been attempting their coup by being the most brazen, the loudest, so that all other voices are drowned.

• sharper00

All in relation to the “Italian Flag” model of representing uncertainty, not C02 sensitivity.

• *****the previous responses to this have been moved to open thread, continue the discussion over there if you want to

2. Excellent column; you have raised a number of questions, and I will be thinking all night now…
Your comment that there are too many unwarranted assumptions in some of the current work raises other questions. First would be: how can we remove or answer for those assumptions? A decade-long study to determine baseline forcings is a laudable goal from a science point of view. Unfortunately the current “discussion” has little to do with science any more, and more to do with politics and agendas.
Please keep up the questions; the answers are out there, it is just required that people look for them.

3. Brian H

Edit: Incomplete sentence and thought:
” Note the relatively recent Collins et al. study only considers ….”

And nothing relevant follows.

• thanks for spotting it, have fixed it: “only considers a mid latitude summer atmospheric profile.”

4. Steve Fitzpatrick

Judith,

A very nice and thoughtful post, even if the effort was limited to one day. You are absolutely correct that there are multiple issues not completely addressed, even in the simplest no-feedback case. Certainly if there are more robust methods to treat the no-feedback case, then it would make sense to adopt them.

But I do not see any simple way to verify the no-feedback sensitivity; if you include evaporative cooling from the ocean surface (for example), then you immediately get into the feedback issues of water vapor and cloud cover. My bet is that the no-feedback sensitivity will never be accurately quantified, because the no-feedback case is a physically unreal construct, even if intellectually convenient, and any proposed method of calculating it can’t be tested with data from the Earth… since this is always going to be ‘contaminated’ with feedbacks. Constraining important factors like aerosol OD, aerosol influence on cloud albedo, and ocean heat accumulation is much more likely to accurately determine a net (including feedbacks) sensitivity, even if the no-feedback case remains uncertain.

• actually, with surface evaporation you do not need to feed back onto the radiative fluxes; you can just turn that off. A number of things are turned off in the no-feedback response. I agree there is no way to verify the no-feedback sensitivity; the best we can do is try to understand it and decide whether it is a useful concept, and if it is, calculate it carefully and thoroughly.

• Brian H

Judith;
Somewhat late edit suggestion: use “no-feedback” instead of “no feedback”; it makes the grammar of the topic and many statements etc. clearer, requiring less of a lagged mental hitch to decipher.

• TomFP

Strongly agree. I often see cases where a well-placed hyphen would significantly improve the readability of a text.

• TimR

Very true.

• point taken, fixed (in the title, anyways)

5. ianl8888

I asked this exact same question several threads back. Chris Colose answered it. Here is his answer:

>”The temperature value of “1.2 C” is determined by the product of two things. First, it is dependent on the radiative forcing (RF) for CO2 (or the net change in irradiance at the TOA, or tropopause, depending on definition). It’s also dependent on the magnitude of the no-feedback sensitivity parameter (call it λo to be consistent with several literature sources). Therefore, dTo=RF*λo. So we can re-write the Dessler equation (although this goes back a while, Hansen, 1984 one of the earliest papers I know of to talk about things in this context, Lindzen has some other early ones) as dT=(RF*λo)/(1-f). (I will use the “o” when we are referring to no feedbacks). It turns out virtually all uncertainty in dTo is due to the radiative forcing for CO2 (~+/- 10%, the best method to calculate this is from line by line radiative transfer codes which gives ~3.7 W/m2)”<

This is why I had assumed RF(CO2) had been done through spectroscopy. According to Lacis, the spectroscopy on a line-by-line basis for mixing of a range of concentrations of CO2 through the atmosphere is used to validate RF as calculated through Stefan-Boltzmann. The calculated values are those used in GCMs.

[Hope I have that correct]
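Chris’s relation dT = (RF*λo)/(1−f) is straightforward to evaluate (a sketch; the feedback factors f below are illustrative round numbers, not estimates from the literature):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
RF = 3.7          # doubled-CO2 forcing, W m^-2
lam0 = 1.0 / (4 * SIGMA * 255.0**3)   # no-feedback parameter, ~0.27 K/(W m^-2)
dT0 = RF * lam0                        # ~1 K with no feedbacks

# dT = dT0 / (1 - f): f = 0 recovers the no-feedback value,
# positive f amplifies it
results = {f: dT0 / (1 - f) for f in (0.0, 0.3, 0.65)}  # illustrative f values
for f, dT in sorted(results.items()):
    print(f"f = {f:.2f}: dT = {dT:.2f} K")
```

Note how all of the leverage is in f: the same ~1 K no-feedback response becomes nearly 3 K for f around 0.65.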

• Yes, the RF part is OK, with line-by-line calculations supporting this. The issue is how you translate this to surface temperature change. Note that GCMs calculate radiative fluxes and surface temperature change at each grid box, at each model time step.

• But this answer is wrong. It ignores the point made by Peter317 and others.

6. Jim D

I will summarize where I am on this after a long discussion on the previous thread. The only clear no-feedback sensitivity is that at the top of the atmosphere. When you double CO2, the average effective radiative temperature drops from about 255 K to about 254 K. This is out of balance because now the earth-atmosphere system is emitting less than it is taking in, so it restores this temperature to 255 K. The big question is how it does that, especially if you don’t allow feedback. A simple-minded approach is that the air column everywhere could warm by about 1 K to achieve this, which implies the surface also warms by the same amount, but this is too simple. When you try to consider the effect at the surface, a global average temperature change no longer means much. The tropics may have a much reduced effect at the surface because of the column being dominated by H2O, while it is the opposite for the polar regions. So I baulk when it comes to trying to specify a surface no-feedback sensitivity, because it is very spatially dependent, and you would need to do a global integral, which is an orders-of-magnitude bigger problem than at the top of the atmosphere. Instead of doing such a global integral, I would just sample the atmosphere. It is probably best illustrated by using single soundings for sensitivity tests with a code like MODTRAN. For example, for the US Standard Atmosphere, you can cancel out the effect of doubling CO2 on the outgoing longwave by warming the whole profile and surface by 0.9 C, keeping water vapor constant. This is probably going to be typical using other profiles, and such an exercise will quickly give a sense of the effect and variability without having to resort to a full climate model.

• The only clear no-feedback sensitivity is that at the top of the atmosphere. When you double CO2, the average effective radiative temperature drops from about 255 K to about 254 K. This is out of balance because now the earth-atmosphere system is emitting less than it is taking in, so it restores this temperature to 255 K. The big question is how it does that, especially if you don’t allow feedback.

While I’d prefer to hear expert commentary on this very interesting question, I’ll throw in my amateur two cents worth.

The opacity of the atmosphere depends strongly on wavelength. Where it is transparent, heat is radiated from the surface. Where opaque, it is radiated from higher up.

As the opaque portions get more opaque with increasing CO2, heat at those wavelengths is blocked more, and moreover blocked at lower altitudes so that less heating of the CO2 in the stratosphere occurs, and the stratosphere thus cools. To keep things in balance Earth must increase its heat output in the transparent wavelengths, which it does by warming up. But this warming is at all wavelengths including the blocked ones, so that the troposphere gets hotter.

Even though the surface temperature continues to rise, the stratosphere continues to cool because the heat passing from the surface straight to space affects neither the troposphere nor the stratosphere, while the heat being trapped by the troposphere has an increasingly harder time reaching the stratosphere.

The effect is analogous to a bathtub with two drains and a running tap. It fills to the level at which the pressure of water pushes out an amount through the drains equal to the incoming water. If you block one of the drains the level then rises until the pressure becomes sufficient to push the same amount of water through the open drain.

The open and closed drains correspond to the transparent and opaque portions of the spectrum respectively.
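The two-drain bathtub can be written down directly (a toy sketch; outflow is taken as linearly proportional to the water level, and all the constants are arbitrary):

```python
# Tub with constant inflow and two drains whose outflow grows with the water
# level. Partially blocking one drain forces the level to rise until outflow
# again matches inflow -- the analogue of the surface warming until the
# transparent ("open") spectral window carries the extra heat.
def equilibrium_level(inflow, k_open, k_blocked):
    """Level at which (k_open + k_blocked) * level == inflow."""
    return inflow / (k_open + k_blocked)

INFLOW = 10.0   # arbitrary units (stands in for absorbed solar radiation)
level_before = equilibrium_level(INFLOW, k_open=3.0, k_blocked=2.0)
level_after = equilibrium_level(INFLOW, k_open=3.0, k_blocked=1.0)
print(level_before, level_after)   # partially blocking a drain raises the level
```

The equilibrium level (temperature) rises just enough that the open drain (transparent window) carries the flow the blocked drain no longer can.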

• Jim D

Vaughan, OK, except I don’t quite agree with your stratosphere interpretation. I see it is a locally determined system there, not relying on radiation (nor obviously convection) from below. Increasing CO2 increases its own radiation to space, while it continues to be warmed by ozone. I think the cooling is a direct response to the altering of the CO2 amount.

• Increasing CO2 increases its own radiation to space, while it continues to be warmed by ozone. I think the cooling is a direct response to the altering of the CO2 amount.

How about we split the difference and say it’s both? ;)

I suspect the two effects are equal. But it would be good to do the math.

• Jim D

Yes, I expect there is a contribution from the CO2 below.
Note that as the stratosphere gets warmer with height, increasing CO2 in it will cause it to radiate at a warmer temperature, which is why it has to cool to get back to equilibrium.

• Yes, I agreed with that. The only question is the ratio of the two effects in cooling the stratosphere, more loss to space due to more CO2 radiating vs. less heat from Earth due to more CO2 below blocking the CO2-relevant heat coming up. I can’t even guess at whether the ratio varies with altitude.

• Now that is interesting.
North Atlantic currents are the opposite, there are two taps with water flowing in ( 9 +1 Sverdrups ) and one drain hole out ( 10 Sv ). Get some relevant data, do a bit of calculation and you get this incredible close match for the trend lines.

• Brian H

OMG! The CET is driving the NAP! How could we have missed it? It even reaches back slightly in time …

;)

• Brian H
“The truth is incontrovertible, malice may attack it, ignorance may deride it, but in the end; there it is. ” W.C.

• Brian H
Here is another. Chose your weapon : consider, attack or deride !
http://www.vukcevic.talktalk.net/NPG.htm
On the other hand you may prefer to flee.

• Brian H

Uh, you DID notice the ‘wink’ smiley at the end? A clue 4U, perhaps?
Let me explificate. Since it is nonsensical that the CET would drive the NAP, I was sharing amusement at the confusion of effect and cause so characteristic of Warmists. Sorry you didn’t comprehend.

• I agree that the concept of no-feedback sensitivity makes most sense in the context of the top-of-the-atmosphere radiation fluxes. The issue at hand is relating this to surface temperature, in terms of the radiation fluxes causing a surface temperature change. The approximation is then made to look at the tropopause radiation fluxes (along with assumptions about what is going on in the stratosphere) and then infer something about how the surface temperatures are connected to tropopause fluxes. It is this inference in the middle that is unsound. Which brings us back to the main issue of trying to infer a change in surface temperature from the increase in radiative flux.

• It is this inference in the middle that is unsound.

Exactly so. I believe you and Jim D are expressing the same concern about the logic here, which does not make sense as customarily explained. It’s a genuine paradox, and as I said a very interesting one.

So what did you think of my proposed resolution of the paradox in terms of the dependence of opacity on wavelength? Do you find the analogy with the two-drain bathtub plausible?

So far I’ve been limiting my role in these discussions to supporting the science and debunking the pseudoscience. This is why I go with Hofmann’s formula for dependence of CO2 on time rather than mine, for example, since my 260 ppmv natural base in place of the accepted 280 ppmv is heretical. Advocating new climate science is heresy, regardless of whether it’s right or wrong, and should be left to the authorities in the field to judge. I’m just an amicus curiae with enough background to offer clarifications supportive of the established science, and suggestions for improvements, but without the authority to declare the latter factual.

As an expert, Judy, you’re in a better position to do so. However even that doesn’t make it scientific fact. New ideas do not get a free pass out of the inquisition until they have met the litmus test of scientific consensus, which takes time, discussion, and adoption not just by the experts themselves but as reflected in their literature.

• Jim

New ideas do not get a free pass out of the inquisition until they have met the litmus test of scientific consensus, which takes time, discussion, and adoption not just by the experts themselves but as reflected in their literature.

I thought the litmus test in science was empirical falsification.

• I thought the litmus test in science was empirical falsification.

Sorry, Pop, but Popper’s not popular in philosophy.

Science observes, infers, rationalizes, and predicts. Anything else is in the realm of esoteric philosophy.

• Jim

No wonder we aren’t getting anywhere here.

• You mean if we all adopted Popper’s peculiar philosophy on this blog we’d make more progress? :)

• Jim

I mean that if there is no way to validate climate science via empirical measurement, especially the predictions of temperature rise as a function of CO2 concentration, then we are getting nowhere. I can create a climate model, say it’s just as good as anyone else’s, and no one can prove me wrong. You can only appeal to authority if there are no empirical tests used to validate the hypotheses.

• Brian H

Popping the adjective “peculiar” into each mention of his philosophy of science is not persuasive, merely a resort to propaganda and implicit ad hom efforts to win without fighting.

As Jim says, validation eventually requires matching of prediction and observation, and if that isn’t possible then you’re trapped forever in the universe of hypotheticals.

• As Jim says, validation eventually requires matching of prediction and observation, and if that isn’t possible then you’re trapped forever in the universe of hypotheticals.

How is that empirical falsification? It sounds more like scientism.

Empirical falsification is Popper’s criterion for whether a theory is scientific: if it doesn’t predict a measurable outcome it isn’t a scientific theory. Climate models predict measurable outcomes: if their prediction is false then they’re a scientific theory, if their prediction comes true what’s the complaint?

Or do you feel some climate models make predictions that aren’t measurable? Since they’re based on laws of physics etc. that we know to hold, and limit themselves to measurable quantities, I don’t see how that could be. They’re not like psychological theories which contain unmeasurable quantities like the id and the ego, which is what got Popper off onto empirical falsification.

(I should clarify that I don’t hold with the idea that climate models can predict globally; they should be limited to local predictions of specific variables under well-controlled assumptions. Global climate is even more complex than the global economy, well known for its unpredictability. As far as global climate prediction goes I trust only the past record, both recent and distant past, and plausible rationalizations thereof based on our understanding of the underlying scientific phenomena. I’m not sure what philosophical camp that places me in, but I’m clearly not a rationalist, an empiricist, or an empirical falsificationist.)

• Jim

V.P. – I think you need a good dose of pragmatism :)

• Jim D

The drain-faucet-sink analogy is a traditional one (e.g. see David Archer’s book on The Long Thaw and I am sure he wasn’t the first). I think it works well. There the CO2 just obstructs the drain, but it is that idea. The level of water represents the temperature, and the faucet is the incoming solar energy.

• The drain-faucet-sink analogy is a traditional one

Yes, the one-drain case is well-known and I should have referenced it (I have The Long Thaw at home; so far I’ve only got 20,000 years through it, 80,000 to go).

What I was claiming novelty for was the two-drain bathtub, with the two drains corresponding respectively to the transparent and opaque portions of the spectrum. Has that been suggested before? I think you’ll find it solves your paradox.

• Jim D

Perhaps I am missing some of the subtlety. Two drains and closing one, versus partially blocking one (bigger) drain are the same thing to me.
My main problem is not really a paradox, but that it is an ill-determined problem. You can get the same outflow of IR for various temperature profiles.

• Two drains and closing one, versus partially blocking one (bigger) drain are the same thing to me.

But that’s not what’s going on here. Partially blocking one drain is just adding half as much blockage as it takes to block the drain completely. In this case we have two distinct drains associated with respectively the opaque and transparent portions of the spectrum. No matter how much GHG you add, you can’t block the transparent drain.

But you could argue that this distinction is immaterial, and I might have to agree with you there. I’ll have to think about it some more.

My main problem is not really a paradox

Right, I think I confused only myself there. So let me unparadox things as follows. The planet is in equilibrium, we add CO2, the new equilibrium has to radiate the same amount from Earth as for the old one. So in some sense the “radiating surface” is at the same temperature as before.

The difference is that now the heat from the Earth’s surface to the radiating surface (which can be the same thing for some wavelengths but not all) has to contend with a higher “radiative resistance” in the form of more CO2. This develops a higher temperature differential across that gap, resulting in a hotter Earth’s surface even though the radiating surface did not change temperature (in a suitably averaged sense).

I can live with that if you can.

• Jim D

• Yes, and the fact that the drain is not completely blockable may explain why the Greenhouse (Tyndall Gas) Effect won’t lead to a runaway (overflow) situation however much CO2 is added. However, H2O is slowly blocking these drains too, as a function of depth, and we only need to block them to the extent that the outflow can’t match the faucet flow. It depends a lot on whether the (non-CO2, non-H2O) window region can sustain the faucet-matching flow by itself. Interesting stuff.
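The drain picture can be made concrete with a toy linear model (just a sketch; the inflow and drain conductances below are illustrative numbers, not physical values):

```python
# Toy two-drain bathtub: constant inflow F (the "faucet", standing in for
# absorbed solar energy) and two drains whose outflow is proportional to the
# water level h (standing in for temperature): one drain for the opaque
# (GHG-affected) part of the spectrum, one for the transparent window.
def equilibrium_level(F, k_opaque, k_window):
    # steady state: inflow F balances total outflow (k_opaque + k_window) * h
    return F / (k_opaque + k_window)

F = 1.0
h0 = equilibrium_level(F, k_opaque=0.5, k_window=0.5)    # baseline
h1 = equilibrium_level(F, k_opaque=0.25, k_window=0.5)   # GHGs partly block one drain
h2 = equilibrium_level(F, k_opaque=0.0, k_window=0.5)    # opaque drain fully blocked

print(h0, h1, h2)  # 1.0 1.333... 2.0
```

The level rises as the opaque drain is blocked, but stays finite as long as the window drain is open, which is the no-overflow (no-runaway) point above.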

• It depends a lot on whether the (non-CO2, non-H2O) window region can sustain the faucet-matching flow by itself.

That’s a very interesting point. To within a factor of two Venus’s atmosphere contains about as much carbon as the whole of Earth, yet that’s only enough to get it up something like 500 K, I forget the precise number. It reaches its surface temperature of 740 K by using only trace amounts of carbon monoxide (a strongly absorbing greenhouse gas despite being diatomic, but at different wavelengths from CO2) and sulfur dioxide.

While minute by comparison to the CO2, these suffice to block off enough of the otherwise open drain to raise the surface another one or two hundred degrees. It’s mind-bending.

Interesting stuff.

It’s certainly got my full attention lately. 18 months ago I didn’t know anything about either climate or its politics, nor care. I was focused on building computers.

• Brian H

The Venus analogy is bogus.
No solar flux reaches the surface to be re- and back-radiated.
The surface pressure is almost 100X Earth’s; compute the temperature rise that causes, and there’s not much left to go.
Venus radiates more than it receives from the sun; there’s an unknown heat engine down there, maybe core radioactivity?
Despite months-long rotation speeds, day and night are almost identical temps, to within a degree. There is no surface wind to distribute heat. Totally incompatible with GH assumptions.
Etc., etc.

• Kan

Via pressure broadening, increasing CO2 will encroach on the atmospheric window. Enough CO2 (on the order of what is in the Venusian atmosphere) and the window gets severely clogged. Thus, one drain.

• Brian H

The encroachment is minor, and in any case spreads the total IR absorption over a slightly wider area, without increasing it. Bogus.
As T confessed in his emails, the energy budget as depicted by CRU is a travesty.

• Brian H

Since temperature is not a conserved quantity, but rather the consequence of complex interactions of energy flows, specific heats of materials, pressures, and more, I think the analogy is deeply misleading.

• Jim D

You can multiply T by cp (heat capacity at constant pressure) and integrate over the atmospheric mass to get a total energy (actually enthalpy) in Joules if you prefer, but T is the main variable in this integration.
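A rough sketch of that integration, using the hydrostatic relation dm = dp/g and a made-up two-layer temperature profile (all values illustrative):

```python
# Column enthalpy per unit area: H = integral of cp * T dm, with dm = dp / g.
g = 9.81      # m/s^2, gravitational acceleration
cp = 1004.0   # J/(kg K), dry air at constant pressure

# crude two-layer profile: (pressure at layer bottom, layer top, in Pa; mean T in K)
layers = [(101325.0, 50000.0, 270.0),   # lower troposphere
          (50000.0, 10000.0, 230.0)]    # upper troposphere

H = sum(cp * T * (p_bot - p_top) / g for p_bot, p_top, T in layers)
print(H)  # a few GJ per m^2; T is indeed the dominant variable in the integral
```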

• Right, that’s the same comeback I’ve been using on that one.

• Brian H

Except that other things change the temperature, independent of your neat integration. Your theoretical closed vessel is actually open to the winds …

• Jim D

This is just supposed to explain what CO2 does by itself.

• Brian H

Lag times for re-radiation are, I gather, about 1000X as long as for collision with surrounding molecules, which are about 2500:1 more likely to be other gases. Where does the “by itself” come in, again?

• Jim D

CO2 can do a lot to the atmospheric temperature by itself. What was the question?

• Jim D

The constraints of the problem are that each column is independent, and convective effects should also be ignored because we don’t consider water vapor distributions changing in the column. There are many ways to reconfigure the temperature in the column and at the surface to balance the radiative change due to CO2, but I would submit the only objective one is to change the whole column and surface by the same temperature and find out what that temperature is independently for each global column (as I posted elsewhere here regarding MODTRAN). Then these changes can be averaged into a global mean if desired, but it makes more sense as a map.

• The constraints of the problem are that each column is independent

Given the rates involved (0.1-0.2 °C/decade, convection between Hadley cells), how realistic is that constraint?

• Jim D

Not realistic, but necessary if you are going to solve it without a GCM, which I believe the pre-amble is aiming at.

• So is the goal here to prove that GCMs are needed? Since if without them you’re stuck then basically that’s what you’ve gone and done.

While I don’t feel a strong need for tremendously detailed GCMs, I do feel a strong need to understand the Hadley cells. I think they’re beautiful and they’re on my agenda to investigate in depth after getting a few things off my stack.

• Jim D

I think there are many useful back-of-the-envelope calculations, and one-dimensional radiation calculations that can get most of the way towards understanding forcing and feedbacks quite quantitatively without a GCM. Even this problem, if we accept that we are fixing a lapse rate to solve it, and not considering the general circulation changes,

7. With all due respect, Dr. Curry is getting caught up in a fallacy that was popular in the early half of the 20th century, particularly in early calculations done by Plass, Moller, and others. I think some of this confusion carries over to her quibbles with the equilibrium climate sensitivity equation as well. The quantitative details of the surface vs. TOA budget reasoning are discussed heavily in Ray Pierrehumbert’s upcoming ‘Principles of Planetary Climate’ textbook, and he has also worked with David Archer to produce a historical account of this type of stuff in ‘The Warming Papers’ (though I haven’t looked at this yet, if it’s even available right now).

The reason nobody talks about the direct CO2 effect in terms of the surface budget anymore is because it’s not right (Spencer Weart in his historical writings has articles that discuss this), and it can lead to spurious results. In fact, the radiative forcing for a doubling of CO2 (~ 4 W/m2) is entirely different than the forcing at the surface (~1 W/m2).

Consider a thought experiment: Let’s double CO2 from modern concentrations and see what happens. First, let’s hold the temperature of the atmosphere fixed and only talk about the downwelling IR flux to the surface. In a relatively dry atmosphere, CO2 is not anywhere near saturated and so one will get an increase in the downward IR flux to the surface. To actually say something about the temperature change though, you also need to account for evaporation and sensible heating fluxes. Manabe and Wetherald showed (successfully) that you can’t just compute an equilibrium condition for the Earth’s surface, but rather for the Earth-atmosphere system as a whole, and determination of surface temperature change requires simultaneous satisfaction of both the top-of-atmosphere energy budget and surface energy budget.

Now consider an isolated tropics where the boundary layer is pretty moist. If you add CO2 to the atmosphere, you might not get much increase in the downward IR at all, since the lower atmosphere is already emitting close to a blackbody at its temperature. Any CO2 added to the atmospheric column in the high atmosphere will also not contribute much to the downwelling IR (at the surface), since it will be effectively absorbed by the lower layers, which are sufficiently optically thick. Does this mean the surface temperature can’t rise anymore? Not at all!

In this case, what happens is the whole surface-atmospheric column is now receiving more (absorbed) solar radiation than it is emitting terrestrial radiation to space. Why is this the case? Consider an observer looking down at the planet from space who has the unique ability to see in the IR, asking him or herself ‘how far to the surface do I see infrared light coming up?’

In the case where there is no greenhouse effect (or in wavelengths where the atmosphere is very transparent), the observer can see ‘optically’ very far into the atmosphere. In this sense, they are looking at radiation coming from the surface. Now if you keep adding greenhouse gases, the observer can see less and less deeply into the layer. In other words, the mean free path of a photon is reduced, the sky is becoming more optically thick, and eventually the observer will see the bulk of radiation coming from the very high troposphere, or stratosphere. If you consider an iso-line of constant optical depth (say the TAU = 1 marker), it will shift to higher altitudes as the IR opacity goes up. In that sense, the level at which the effective temperature remains ~255 K (in equilibrium with the absorbed shortwave radiation) shifts to lower pressures (higher altitudes) in the atmosphere.

Now our observer has the clever idea of computing the outgoing radiation by starting off with a simple Planck curve corresponding to the temperature of the Earth’s surface. But now, in wavelengths where there is IR absorption, the radiation isn’t escaping from the surface but from some layer in the atmosphere that is colder than the surface, so one views this as a “bite” taken out of the Planck curve. See this image. The top curve is the new “quasi-Planck” outgoing radiation spectrum, where the low transmittance regions (high absorbance) that Earth emits at show up as ditches in the OLR spectrum. Now we compute the integral of this curve at all wavelengths, which, remember, is physically the total area under the curve. For a blackbody, this integral is (σ/π)T^4 (the 1/π factor is often removed by making an assumption about the angular distribution of radiation). The “real” integral with a wavelength-dependent optical thickness, however, due to the ditches in the spectrum, is now much less. Physically, the planet is not losing as much energy to space as it is receiving from the sun, so it must warm up over time until it gets hot enough for the spectrum to increase the total area under its curve (which is a strong function of T) to satisfy equilibrium.
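The “area under the curve” argument can be checked with a crude numerical integration of the Planck function. This is only a sketch: the 13–17 micron band limits and the 220 K band emission temperature below are illustrative choices, not fitted values.

```python
import math

def planck(lam, T):
    # spectral radiance B(lambda, T) in W / (m^2 sr m)
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    x = h * c / (lam * k * T)
    if x > 700:          # avoid float overflow far into the Wien tail
        return 0.0
    return (2 * h * c**2 / lam**5) / (math.exp(x) - 1.0)

def olr(T_surface, T_band, band=(13e-6, 17e-6), lam_max=100e-6, n=20000):
    # flux = pi * integral of B over wavelength; inside `band` (the "bite")
    # emission comes from a colder layer at T_band, elsewhere from the surface
    dlam = lam_max / n
    total = 0.0
    for i in range(1, n + 1):
        lam = i * dlam
        T = T_band if band[0] <= lam <= band[1] else T_surface
        total += planck(lam, T) * dlam
    return math.pi * total

print(olr(288.0, 288.0))  # ~sigma*288^4 ~ 390 W/m^2 (sanity check, no bite)
print(olr(288.0, 220.0))  # tens of W/m^2 less: the bite shrinks the area
```

The deficit between the two numbers is the imbalance that forces the whole column to warm until the curve’s area recovers.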

The only way to bring the system back into a new equilibrium is for the whole troposphere to warm up, which is pretty well-mixed by convection, except in regions at the poles or mid-latitude winter (see e.g., Manabe and Wetherald 1975, which builds upon their work in the sixties). It is the corresponding warming of the low level air that drags the surface temperature along with it, and in fact the downward IR flux to the surface may be dominated not by the increase in CO2 directly, but by the increase in the tropospheric temperature due to TOA dis-equilibrium. The increased temperature of the whole troposphere increases all the energy fluxes into the surface, not just the radiative ones.

Note that in the back-of-envelope calculation that Fred provides, where he takes the derivative of the Stefan-Boltzmann equation, he is using the emission temperature (where radiative balance is set), not the surface temperature. He’s doing this correctly. There is an assumption here though that the surface and emission temperature are linearly coupled, which is good enough for a back of the envelope, and as Fred notes, more advanced models produce results rather consistent with this value. Moreover, the finite absorptivity of the atmosphere in the longwave band means that the sensitivity parameter is mildly higher than this (Roe 2009 provides a quick calculation as well, and discusses feedbacks in a much broader context).
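Fred’s back-of-envelope can be written out in a couple of lines, using the canonical 3.7 W/m^2 forcing and the 255 K emission temperature from the post:

```python
# No-feedback (Planck) response: differentiate F = sigma * T^4 and invert,
# giving dT = dF / (4 * sigma * T^3), evaluated at the emission temperature.
sigma = 5.67e-8   # W/(m^2 K^4), Stefan-Boltzmann constant
T_emit = 255.0    # K, Earth's effective emission temperature
dF = 3.7          # W/m^2, canonical forcing for doubled CO2

dT = dF / (4 * sigma * T_emit**3)
print(round(dT, 2))  # ~0.98 K, i.e. the "almost exactly 1 C" figure
```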

Finally, there are situations where a detailed computation of the surface energy budget is needed, especially in weakly coupled domains where there is a large discontinuity in lower level temperatures. If you add an evaporation source to the Sahara desert for example, the ground itself will cool even if you add CO2.

8. GaryW

Dr. Curry,
Locating authoritative and accurate articles on this subject is particularly difficult for those of us outside the field. I have no complaint about the 1 degree C per doubling of CO2 value. I would, however, like to be assured that it was derived through some rational means.

Does this calculation take into account:
1. Earth’s rotation – Solar flux and flux angle varies over a 24 hour period.
2. Local albedo – The surface of the earth is not a uniform gray ball.
3. Local heat capacity – Differing surfaces cool and warm at different rates for the same heat flux.
4. Polar winter – Polar winter surface temperature may be well below equatorial tropopause temperature.

If the calculation does, great. I’d just like to know one way or the other.

• Jim D

This kind of calculation is only relevant to a static atmosphere, so it doesn’t need to account for rotation. I think of it as independent individual columns. In fact, this brings up a more general point, beyond this answer, that the response in individual columns is also independent, so each column has to balance its own radiation budget. Things get interesting when you allow dynamics because different columns adjust at different rates to CO2. For example, land adjusts faster than ocean, and icy areas faster than non-icy due to extra ice-albedo feedback. As an aside, I believe the weaker than expected response in the tropics is related to the stronger than expected response in the polar areas, especially Arctic sea-ice loss.

• Bart R

I’ve had (some limited) exposure to icy oceans.

While albedo increase due to ice cover is often mentioned, where the ice is relatively solid and stable, it’s been my observation that where there is slushy ice, the sea is visibly darker.

I’m sure there are other ocean-darkening events, like blooms, some types of waves, etc. that have some impact.

Does anyone know if the level of such changes is significant?

• GaryW

Jim,
I suppose I was hoping the answer would not be something like yours.

“This kind of calculation is only relevant to a static atmosphere, so it doesn’t need to account for rotation. ”

My concern about Earth rotation is that, in considering an individual column of gases with Earth at the bottom and open space at the top, the temperature of the ground should matter. Over the course of a day, most areas on Earth experience a significant variation in surface temperature. In the desert belt of this planet, day-to-night ground temperature swings sometimes exceed 35 degrees C. (I know I have experienced that much temperature swing while camping in the desert.) That will ‘likely’ produce a significant difference in surface IR radiation (perhaps by more than a factor of two?). Also, the reflectivity of some surfaces, such as water, varies with sun angle. That should produce a diurnal variation in local albedo. Papers such as Myhre and Stordal (97) seem to ignore these factors.
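To put rough numbers on the diurnal point (the temperatures here are hypothetical, not measurements; with these values the day/night swing in surface IR comes out near 60% rather than a factor of two, but the non-linearity matters either way):

```python
# Surface IR emission scales as T^4, so diurnal swings matter non-linearly.
sigma = 5.67e-8                 # W/(m^2 K^4)
T_day, T_night = 320.0, 285.0   # K: a ~35 C day-night swing, desert-like (illustrative)

flux_day = sigma * T_day**4
flux_night = sigma * T_night**4
print(flux_day / flux_night)    # ~1.6x more surface IR by day

# Averaging flux over the cycle gives more emission than the mean temperature
# alone would suggest, because T^4 is convex.
T_mean = (T_day + T_night) / 2
print((flux_day + flux_night) / 2 - sigma * T_mean**4)  # positive, ~10 W/m^2
```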

• Jim D

Gary W, you have to realize this is a very idealized hypothetical calculation. A full model would take all these things into account, but a radiative transfer model used for this is constrained to change only CO2 and temperature, and nothing else. Even the solar radiation doesn’t come into it, just infrared. In my view, as soon as you start allowing other things to change it is a slippery slope towards full physics.

• Jim D

Actually I misspoke, solar radiation does come into it slightly, I guess, because CO2 has a small effect on it too, maybe a few percent of the longwave effect, but this does not make the diurnal cycle important in this idealized case.

• GaryW

“you have to realize this is a very idealized hypothetical calculation. A full model would take all these things into account, …”

Interesting. Doesn’t that then indicate that the non-feedback sensitivity number discussed here is not intended to be representative of what we would expect for a non-feedback value for the Earth?

• Jim D

It would be relevant if you could hold everything constant except CO2 and temperature, and even constrain the temperature to maintain its lapse rate.

At the other extreme, not being discussed here, you could do a no feedback simulation with a full GCM where you hold atmospheric water vapor constant despite the warming ocean surface. This might cause changes in cloud cover, due to consequent reductions in relative humidity, so you have to stop those too because they are a feedback. It gets messy.

• Jim D

I see that Jianhua Lu has posted a relevant paper that tries to get at the problem with a dry GCM and interactive surface. It certainly is interesting, and shows surface effects that are 1-1.5 C warming for doubling CO2. Since they have no moisture, and don’t want to have the (steep) dry adiabatic lapse rate, they adjust towards a latitude-dependent lapse rate that goes more towards the moist lapse rate in the tropics, which is a method of getting realistic temperature structures without moisture.

• GaryW

As I read that paper, albedo is held at a fixed value for a given latitude (not including ice surfaces), and ground temperature is set to track the bottom-of-atmosphere temperature with a lag of 10 minutes. I don’t think that is what I was looking for.

• Jim D

These papers are looking for equilibrium solutions, so I don’t think adding thermal inertia will affect this except to slow down how quickly it reaches equilibrium, if that is what you are asking for.

• GaryW

Jim, one last comment then I’ll drop this. The equilibrium you describe is valid only if both solar thermal flux and surface temperature values follow sine curves, or at least similar curves. They don’t, and any phase difference between them (thermal lag) can offset the resultant thermal equilibrium value significantly.

• Gary, I have the same problem you do. It seems to be derived from elementary radiative convective models and back of the envelope calculations.

9. Bart R

And again, I am humbled.

I’ve spent a month trying to bring my studies up to the level where this would all make sense to me, in the hope that the mathematics and principles of thermodynamics would come naturally, so that when I read discussions exactly like this one it would be clear and obvious.

It appears I’m barely a fraction of the way to that level of understanding.

Is what is being said here and upthread that the ~1C warming figure applies after all mechanical structural changes (melting/evaporation/equilibrium states (e.g. sea level)/etc.) and energy changes (e.g. related to wind speed, current velocity, ocean pH, etc.) are taken into account, or before?

The former to me indicates ~2C impact on the biosphere per ~1C rise (at the top of the atmosphere? at the surface?) (positing that the climate is so complex and turbulent as to approximate a perfectly efficient engine), the latter .. I’m not sure what that means. My grasp hasn’t budged from where it was the day I started reading Climate Etc.

I’ll look for the book mentioned (Ray Pierrehumbert’s upcoming ‘Principles of Planetary Climate’ textbook), and go back to upgrading my skills. Can anyone recommend other readings helpful to this mad quest?

• Joe Lalonde

Bart,
If it is any consolation, we are all in the same boat of still learning.
Too many bad theories and “experts say” have given us a legacy of bad science.

• RobB

You’re not alone Bart. I also feel that I’ve made very little progress since Day 1.

10. Judy – What you quoted from me in your post was my description of an approximation to the change in temperature from a “no-feedback” response to CO2 doubling. The models generally arrive at a value of about 1.2 C, not 1 C, as can be calculated from the Soden and Held reference I cited in the earlier thread on “confidence”. Unlike the approximation, which assumes a single “mean radiating altitude” and a single lapse rate, the models incorporate heterogeneity involving many variables, including region, seasonality, and other factors. It is rare these days to see the 1 C value cited (except as a rough approximation).

It’s also important to point out, however, that although changes in evaporation, convection and latent heat transport, humidity, water vapor, clouds, and other quantities affect the surface temperature response, they are excluded from the Planck (“no-feedback”) response by definition. Therefore, stating that they matter does not affect the Planck calculation, which is based on the Stefan-Boltzmann law. The effect of these other variables must be combined with the Planck response to yield an estimate of actual temperature change, and thus “climate sensitivity”. The relationship of changes at the tropopause to surface changes does depend on the linearity or non-linearity of lapse rate (but not on changes in lapse rate, which are part of the feedback). I have not seen evidence that globally averaged lapse rates, although differing by region, yield on average great departures from linearity in the altitudes most relevant to radiative equilibrium, but perhaps you have evidence bearing on this. With a linear lapse rate, regardless of its actual value(s), a temperature change at one level will be equalled at other levels, including the surface. What will not necessarily be equalled are the energy changes in W/m^2, since these vary with temperature and with energy transport mechanisms.

It was not immediately obvious to me what aspect of heat capacity you consider to be neglected in estimating the “no-feedback” response to a forcing such as CO2 doubling, and how that affects the estimate. Could you elaborate?

Finally, it’s worth noting that the Planck (“no feedback”) response is actually a strong negative feedback, except that “feedbacks” involved in estimating climate sensitivity are generally taken to mean those that occur in addition to the Planck response.

• Reviewing Dr. Curry’s post, I notice that it would be worth including ice melting, turbulence, and other variables beyond those I mentioned as examples of what should be cited as feedbacks, but not included in the no-feedback response.

• I think we are having semantical issues. I would not regard heat being stored in the ocean, or heat melting snow/ice without completely melting it (and thereby changing the surface albedo), as feedbacks, since these do not directly change the planetary radiative fluxes. Further, if you allow surface evaporation to cool the surface but do not allow this moisture to feed back onto the radiative fluxes, then this is not a feedback either. An inconsistency, but there are many inconsistencies also in the conventional method. So how do we define this problem to make sense? Or can we?

There are a whole host of conventions/semantics that have evolved around the topic of climate feedbacks that confuse people, I would like to clarify these, and also further explore the issue as to whether there might be better (or at least other) ways to approach this issue.

• I agree that there are semantic issues, but I don’t think heat storage in the oceans or warming ice are among them. The heat capacity of the oceans is recognized as a factor of enormous importance in delaying the equilibrium response of climate to a CO2-mediated forcing. It is not a feedback. Semantic issues aside, it’s probably reasonable to view the assessment of climate sensitivity, as typically done, to entail a Planck (“no feedback”) response, in combination with feedbacks that include positive water vapor, ice-albedo, and probably cloud feedbacks, and a negative lapse rate feedback. There are probably other climate responses that different models treat differently, even though they all tend to generate temperature responses to doubled CO2 of about 1.2 C. In my view, an important point is that the Planck response, as the name implies, is a radiative one based on the Stefan-Boltzmann law – in essence, it answers the question, “If the climate couldn’t change evaporation, humidity, clouds, etc., and must rely on radiation to restore a radiative balance at the tropopause (or the TOA), what would be the temperature response to a CO2 doubling?” To estimate the actual temperature response then requires adding in the feedbacks and their interactions with each other and the Planck response.

To return to an earlier point I raised that a linear lapse rate mathematically translates a temperature change at any altitude to other altitudes including the surface, I remain interested in observational data on linearity in terms of a flux-weighted global average. Elsewhere, you’ve cited near-surface temperature inversions as examples of non-linearity, but globally, how close are we to linearity as a means of estimating surface temperature changes from those calculated for the tropopause, even if the energy fluxes are quite different at the different altitudes?

• Richard S Courtney

Fred Moolten:

I strongly agree with you that “these are not semantic issues”.

They go to the heart of the real problem which is that the “no feedback sensitivity” is a hypothetical construct that can only misleads if used in theoretical analyses.

“No feedback” is impossible. Change radiative forcing at the tropopause or at TOA and everything in the climate system changes. Importantly, ocean heating (including the latent heat absorbtion by ice melting), the hydrological cycle (so cloud effects and cloud cover), and lapse rates change.

It is the fact of these changes that confuses any determination of the temperature change at the surface resulting from the “no feedback sensitivity”. The real world includes all the real effects and their contributions to surface temperature. But the magnitudes of their individual contributions are not known and, therefore, the effect of the “no feedback sensitivity” on surface temperature could be estimated in a variety of ways by choosing from a variety of assumptions.

Indeed, this need for unreal assumptions means that any calculations based on an estimate of “no feedback sensitivity” are almost certain to provide unreal results.

Richard

• Indeed, this need for unreal assumptions means that any calculations based on an estimate of “no feedback sensitivity” are almost certain to provide unreal results.

Quite possibly so.

But are there any calculations based on an estimate of “no feedback sensitivity”? It appears that this number is used only as an indicative value in presentations. It is hardly used for anything at all. Even in climate models it is something that can be calculated separately; it is not even an intermediate result.

This is at least my understanding.

• The 1C no feedback sensitivity looms as important as a lower conceptual limit for the CO2 sensitivity (with feedbacks). The number 1C is bandied about; in fact there seems to be little to no disagreement about this number (Lindzen often cites it). I’m trying to figure out if there is any meaning behind this number in terms of actual surface temperature change, other than the trivial derivative of the Planck function. So far I haven’t come up with any real meaning that makes sense to me.

• You seem to refer to the idea that the sign of the feedback would have some special relevance, and thus that no feedback would in some sense be the lower limit. I cannot see any basis for this line of thinking. There is nothing special in 0% feedback compared to +30% or −20%. I tend to believe the common view that it is positive, but I see nothing fundamental in that.

In a couple of my other messages I have tried to explain one possible way, which might give some real meaning to the no feedback sensitivity. Personally I think that the idea is relevant. It may be wrong, but that is something that can be checked at least in the models and possibly also in the real world. I am somewhat disappointed that nobody has picked up those messages and presented either criticism or support.

• This is often how the question is posed, and Lindzen specifically uses the no feedback sensitivity as a reference point; this dominates his thinking on the overall CO2 sensitivity.

• Peter317

Do we know how he calculates the no feedback sensitivity?

• No, he just quotes the 1C value everybody uses.

• David L. Hagen

Judith
I asked Lindsen and he replied:

“This is essentially the Planck blackbody response. It is also what models get when feedbacks are suppressed.”

• David L. Hagen

Correction:
Lindzen replied:
“This is essentially the Planck blackbody response. It is also what models get when feedbacks are suppressed.”

• Richard S Courtney

Judith:

I wonder if you have tried asking Lindzen for his derivation.

I have found him to be very helpful in response to such questions (e.g. concerning his calculation of lapse rate changes in response to altered radiative forcing).

Richard

• Christopher Game

Dear Richard S Courtney,
Please enlighten me about the details of “Lindzen for his derivation …. in response to such questions (e.g. concerning his calculation of lapse rate changes in response to altered radiative forcing).” How does he calculate the lapse rate feedback?
Yours sincerely, Christopher Game

• Richard S Courtney

Christopher Game:

“How does he [i.e. Lindzen] calculate the lapse rate feedback?”

There is not sufficient space to properly discuss “details” of the matter here.

Lindzen stated his understanding of the tropospheric temperature profiles in a paper he co-authored with Hou in 1988.
(ref. Lindzen R.S. and A. V. Hou, 1988: Hadley circulations for zonally averaged heating centered off the equator. J. Atmos. Sci., 45, 2416–2427).

A good explanation of principles defining the lapse rate by Ramaswamy et al. is at
http://www.climatescience.gov/Library/sap/sap1-1/third-draft/sap1-1-draft3-chap1.pdf

In 1994 Lindzen stated his thoughts concerning the constraints on the spatial temperature distribution at the Earth’s surface in a paper he co-authored with Sun. Section 2 of the paper covers the issue and the paper can be read at
http://www.esrl.noaa.gov/psd/people/dezheng.sun/dspapers/Sun-Lindzen-1994.pdf

Later, in considerations of his Iris hypothesis, Lindzen has repeatedly used an AMIP model to determine how the temperature profiles through the atmosphere vary in response to altered radiative forcing at the TOA. An example of his reliance on AMIP is the famous paper he co-authored with Choi that analysed climate data; see
http://WWW.LEIF.ORG/EOS/2009GL039628-PIP.PDF

There are reasons to dispute the use of AMIP (e.g. the model is not ocean coupled) but it seems that Lindzen thinks it to be a useful tool for calculating tropospheric temperature profiles and, therefore, lapse rates.

Please contact me personally if you want to explore these issues further.

Richard

• Christopher Game

Dear Richard S Courtney,
Thank you for this very helpful reply. You have set me some homework. How do I contact you personally? Perhaps I could ask someone we might both know to send you my email address?

Reading your post again, I see it did not actually say that you have seen Lindzen’s derivation. Even perhaps he has not constructed a derivation that we know of?
Yours sincerely, Christopher Game

• Richard S Courtney

Christopher:

My personal email address is
RichardSCourtneyATaol.com
but ‘AT’ needs to be replaced by ‘@’.

Richard

11. Bruce Cunningham

I am curious as to how good of an understanding there is about what exactly is the efficiency of the radiative system being considered. The second law of thermodynamics states that nothing is 100% efficient. Some entropy is always produced. How much is created in this instance will possibly determine what the final temperature increase of the oceans and atmosphere due to increased back radiation will be?

• Bart R

I’m trying to find the source I read (somewhere in the 1980s) that said the atmosphere could be considered perfectly thermomechanically efficient, i.e. approaching 50% of all heat increase would be converted to mechanical effects (with ‘some loss’ to structural changes, like endothermic (bio?)/chemical reactions, state change ice to water/water to vapor, erosion of solid barriers, changing the net length (and therefore total energy) of wind/current formations).
The idea being that the biosphere is large and complex enough that (with some latency rate) heat diverts from raising temperature at every possible point where it is less work to cause some other effect.
This is different from a ‘feedback’ in that it takes up the heat.
Feedbacks would be only cases where some consequence of all this entropy itself also produces more (or causes less) heat independent of the original heating?

• Bruce Cunningham

Some good points! I especially like the bits about chemical reactions and erosion of solid barriers. I bet lots of losses there.

I still think clouds are the big unknown about all this.

• Bart R

Bruce Cunningham

I’m unaware of and unable to find any catalog that attempts an encyclopedic estimation of all contributions and buffers, and clouds strike me as a particularly poor candidate for the major significant diversion of energy, though that is only gut feeling, so worthless.

Indeed, ‘loss’ is misleading; if energy goes into something, it isn’t destroyed, and must come back out again, generally as heat in the long run.

This part of thermodynamics is the one with which I have some of my greatest troubles.

You’ve got energy flowing around, doing work, being transformed, potentially ad infinitum until it can radiate out of the system.

It seems counterintuitive, at least to me, that the heat ought ‘divert’ into mechanical motion, or sound, or vibration, or chemical reaction, but it doesn’t fall into these buffers forever.

We pay for the buffer effect we get now twice: once in the erosion or disruption of the buffers, and once again when they free up their heat.

Only TOA radiation permanently removes energy from Earth, for all practical considerations. Indeed, eroding most materials releases additional heat, if I recall correctly, but that would be a feedback.

• Bruce Cunningham

“Only TOA radiation permanently removes energy from Earth, for all practical considerations.”

Not according to the second law of thermodynamics as I think I understand it. This article might be of help.

http://blogs.discovermagazine.com/cosmicvariance/2009/01/12/where-does-the-entropy-go/

• Brian H

“long run” and “forever” are the kickers, here. If the lag is 1000, or 1,000,000, or 1,000,000,000 years, then from the current POV that’s GONE. It certainly isn’t going to have a “delta” in current climate predictions/projections.

• Bart R

Brian H

Agreed, some of what falls into the “some loss” category is effectively gone for the purposes of most models. And with no real access to estimates of the many guesses we could make of sinks and buffers, diversions and transformations, we’re stuck with assumptions, parameterizations, order-of-magnitude arguments, fudge factors and blind guesswork.

But then, these tools have often proven surprisingly effective when applied to problems cleverly.

If it looks like a Gordian Knot, it may simply be waiting for Alexander.

12. Judith,
The 1997 Myhre & Stordal paper is at

http://folk.uio.no/gunnarmy/paper/myhre_jgr97.pdf

• thx, i’ll provide that link in the main post

13. Geoff Sherrington

Quote from above “If our primary variable of interest is the surface temperature, it seems more reasonable to use the surface radiative forcing in a direct calculation of the surface temperature change, …”

That is a big “IF”. For some people, the variable of interest is the one that has most effect on the human condition sensu lato. This tends to take emphasis away from CO2, which many of us still regard as a largely unexplained factor left over from model reconciliations.

Analysis by vertical column shapes also worries me. It is easy to imagine places at Polar surfaces that are in darkness while the sky above them is lit, not for just a vertical distance towards the sun as can happen in the Tropics, but by a much larger distance involving the light in the approaching as well as the departing tangential atmospheric path. Any analysis that assumes a column normal to the surface will be full of approximations.

A great deal of confusion has been caused by the unwarranted emphasis on CO2. Some of it is alien to the purist spectroscopist. If the emphasis is on CO2 simply because it can be demonised more easily, instilling guilt in people, then that’s where the science says goodbye.

• Peter317

Geoff, the surface temperature might not be the primary variable of interest, but, together with the lower tropospheric temperature, it’s arguably the most important one.
Climate sensitivity at and near the surface is quite literally the billion-dollar question, where every tenth of a degree potentially has billions, if not hundreds of billions of dollars riding on it. Rightly or wrongly, this makes it supremely important.
So it’s important that we attain a good understanding of the mechanisms which affect it so we can make an accurate assessment of its value. Whether or not we can is a different issue.
As far as I can tell, we’re not even entirely sure whether the greenhouse effect causes the surface temperature to be higher than it would otherwise be, or simply allows it to be so.
The sensitivity at TOA, whilst relatively easy to assess, is relatively meaningless, as its effect on the lower troposphere is indirect at best.

14. Is it really useful to discuss at all the concept of “surface warming without feedbacks”? The opening posting, as well as the comments of Chris Colose, Fred Moolten and Vaughan Pratt, all indicate that defining “without feedbacks” cannot be done in a unique way.

It is possible to imagine a sudden change in the greenhouse gas concentrations of the atmosphere and calculate the resulting radiative forcing keeping everything else fixed, as there are no internal contradictions in that setting.

Calculating no feedback temperature change in the same way is not possible, because the no feedback temperature change cannot be defined similarly without contradictions. Stating what is part of the no feedback change and what is feedback is always to some extent arbitrary. Everybody seems to agree that the lapse rate should be kept within its stable limits, but how far should one proceed in keeping the details in agreement with physical principles when one considers factors affecting the vertical profile of the atmosphere?

Active modelers have their own definitions, which are built into the models. The modelers may have reached among themselves a consensus on where to draw the line. This does not, however, imply that the choice has a unique logical basis. I think that the recent discussion here has proven that the whole concept is confusing, perhaps to the extent that it should not be used at all.

• Actually, exactly these thoughts were going through my head as I wrote this; I’m glad you made them explicit. You can’t do this exercise without running into contradictions. Is this concept even useful? If it is used, it should be discussed more and calculated carefully, not just by reference to stuff from the 1980s or whatever.

• It appears to be an attempt to do physics in the original way. That is, isolate and explore a simple core process. For example, positing that a feather and a cannon ball will fall at the same rate in a vacuum was a breakthrough of enormous methodological importance. That it does not work here is a problem.

• As I write a few messages further on in this chain, I have some possibly (but not certainly) contradictory thoughts. There I emphasize that the no feedback warming of the earth’s surface may after all be largely determined by radiative constraints alone. By this I do not mean that radiative considerations could describe well all the processes involved, but that it is possible that the other processes are constrained to operate in such a way that the warming of the earth’s surface is, to a reasonable accuracy, determined by the radiative constraint.

Many details have certainly some influence on the relationship between radiative forcing at the TOA (or tropopause) and the average temperature of the earth surface. Still the result could be rather insensitive to these details.

All this is speculation as I am not a climate scientist but a physicist turned later to system analysis and energy economics (and now retired). This has led me to be interested in climate policy and to use my physicist’s background to learn also about climate science.

• will need your systems analysis expertise to discuss some coming threads on feedbacks

• Ken

It is my understanding that one wavelength of the entire IR spectrum is exciting the CO2 molecules in the atmosphere. The molecules then re-emit the energy to return to their base state. Thereby causing the greenhouse effect by diffusing the energy.

When I read most of the comments here I get the impression that people believe the whole spectrum is being diffused not just a single wavelength.

Am I wrong?

• Jim D

There are a few bands of wavelengths, but the main one by far is within 10% of 15 microns.

• Brian H

Which brings up one of my core queries, one that I haven’t seen dealt with persuasively: what prevents all this extra “stored” heat from escaping very rapidly out the atmospheric window? The flux value posited in the Trenberth energy diagram is not a fixed or independent variable, after all. It will respond to increased heat energy without limit (i.e., it’s an open drain hole).

• IR emissions are a function of temperature, via the Stefan-Boltzmann equation. The Earth can’t radiate more IR in “window” wavelengths except to the extent it heats up.
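To put a number on that reply (my illustration, with a nominal 288 K surface): the Stefan-Boltzmann flux grows only about 1.4% per kelvin near surface temperatures, so “window” emission increases only as fast as the surface warms.

```python
# Illustrative numbers only: blackbody flux rises as T^4, so emission through
# the atmospheric "window" grows only to the extent the surface heats up.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def sb_flux(T):
    """Total blackbody flux (W/m^2) at temperature T (K)."""
    return SIGMA * T**4

f288 = sb_flux(288.0)               # ~390 W/m^2 at a nominal surface temperature
extra_per_K = sb_flux(289.0) - f288
print(round(f288, 1), round(extra_per_K, 2))  # ~390 W/m^2, ~5.4 W/m^2 more per 1 K of warming
```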

• Christopher Game

Dear Pekka Pirilä,
As I read things, the first step here is to calculate a rate of energy retention. The second step is to calculate a temperature increment.
Surely the second step would be to calculate a rate of temperature increase, or to allow the energy retention to operate at the calculated rate for unit time, and calculate the temperature increment at the end of that time, which would have to be short in comparison with the characteristic times of all “feedback” mechanisms?
To arrive at a straight temperature increment seems to imply that one has waited infinitely long to get the steady state response? Meanwhile the only cooling process considered is radiation.

In another approach, one calculates the increment in radiative emission rate that must by postulation equal the radiative forcing, and thence one calculates the temperature increment needed to provide that increment in radiative emission rate. And one posits a baseline negative feedback item called the “Planck response”. One doesn’t actually need to wait at all to get such a reference or baseline negative feedback.

In this line of reasoning, the answer is not really a no-feedback response: it is a response with the reference or baseline feedback.

The desire to talk about a “no-feedback” response is based on a desire to talk about “feedback” in terms of the Bode (1945) theory, which is based on the presence of a “forward or µ circuit” (Bode, Chapter IV, page 44). Bode 1945 is cited by Hansen et al 1985 and by Schlesinger 1985 and by Bony et al 2006 as the justification for talking about “feedback”. And its wonderful child “positive feedback”. We all learnt about feedback and positive feedback at university, didn’t we? It’s well established standard theory. Ok?

The Bode forward or µ circuit is supplied with arbitrary auxiliary power (from a battery or somesuch) so as to allow true power gain in the signal path. In the Bode case, some moiety of the power added to the signal was diverted from the load back to the input of the µ circuit. This diverted moiety of power was carried by Bode’s feedback circuit β . This allows negative feedback to be used to improve on the raw properties of the µ circuit, or positive feedback to do various things. The carrot was the talk of “positive feedback” with “amplification”. In the climate system there is no arbitrary auxiliary power supply to feed the carrot. Schlesinger cites Chapter III of Bode but shows no evidence of having read Chapter IV.

Climate study talk of “amplification” is thus mere spin, and you are right to say that “the no feedback temperature change cannot be defined similarly without contradictions”. Nick Stokes is right to avoid taking seriously the IPCC’s “forcings and feedbacks” formalism, by saying “No, it isn’t a baseline. The model calculations make no use of this. The no-feedback sensitivity just a number that’s easily calculated and can be used as a reality check.”

It lends an air of verisimilitude and makes the calculations seem real. And that is the desired appearance. Who cares about a rational account when an irrational one creates the desired appearance?

Yours sincerely, Christopher Game
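For readers unfamiliar with the formalism Christopher is criticizing, here is the conventional linear feedback bookkeeping as it usually appears in climate papers, sketched in code. The feedback factors are illustrative, not measured values, and the sketch is not an endorsement of the formalism:

```python
# The conventional linear feedback algebra: the no-feedback response dT0 is
# scaled by the gain 1 / (1 - f), where f sums the dimensionless feedback factors.
def equilibrium_response(dT0, feedback_factors):
    """Equilibrium warming (K) given a no-feedback response and feedback factors."""
    f = sum(feedback_factors)
    if f >= 1.0:
        # in this linear formalism f >= 1 would imply a runaway response
        raise ValueError("net feedback factor must be below 1")
    return dT0 / (1.0 - f)

# Illustrative factors (e.g. water vapor, ice-albedo, lapse rate) summing to 0.6:
print(round(equilibrium_response(1.2, [0.5, 0.2, -0.1]), 1))  # prints 3.0
```

With a no-feedback 1.2 C and a net positive factor of 0.6, the familiar ~3 C equilibrium sensitivity emerges; whether that algebra is physically meaningful is exactly what this sub-thread disputes.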

• Christopher Game

To continue about feedback in climate studies.

In the case of the IPCC “forcings and feedbacks” formalism, the energy that is fed back cannot be a moiety of the added power from the arbitrary auxiliary power supply, because that does not exist in the climate system. In the climate system forced by added CO2, the feedback power is simply bled from the output to the “load” and fed back as if it were coming from the auxiliary power supply. It is not a true signal from the signal source.

It is as if the load resistance were split and one part of it reconnected to the input of the µ circuit of Bode, in parallel with the β circuit of Bode. The µ circuit of Bode is, however, running on a flat battery. There is no real amplification; there is just a redistribution of pre-existing energy flows. We are looking at a passive circuit, and the talk of “amplification” is entirely specious. A runaway effect, which can happen when there is a true arbitrary auxiliary power supply, is in the climate model case utterly impossible.
Christopher Game

• Brian H

Thank you! This is what has always made ‘back-radiation’ smell like bootstrapping free energy into the system for me. The radiating back cools the low-mass atmosphere far more than it warms the high-mass surface.

15. Bruce Cunningham said on December 11, 2010 at 11:47 pm :

I am curious as to how good of an understanding there is about what exactly is the efficiency of the radiative system being considered. The second law of thermodynamics states that nothing is 100% efficient. Some entropy is always produced.

The first law of thermodynamics tells us that energy is always conserved.

The temperatures of the sources and sinks tell us the maximum theoretical efficiency of a heat engine.

“How much [entropy] is created in this instance will possibly determine what the final temperature increase of the oceans and atmosphere due to increased back radiation will be?”

It’s easier to use the conservation of energy to see what happens to temperature. The interested student can always calculate the change in entropy – which is always positive (or zero in an isentropic process).

• Bruce Cunningham

“The first law of thermodynamics tell us that energy is always conserved.”

Well yes, but entropy losses are part of that. That is why you can never go back to the same state you were in before without adding more energy. Energy was lost in the process due to inefficiencies. Where it is, nobody knows. It is just another case of increasing entropy. As my thermo prof said in class (jokingly), “one of these days we’re all going to be buried by all that entropy that is building up somewhere.” He added that if you ever invent a reverse entropy process, don’t go to anyone else, come straight to him and “we’ll get rich!” Before scientists can calculate the temperature rise due to back-radiation they need to know just how big or small the amount of entropy created is, yes? As Dr. Curry has said, climate science is in its infancy, and uncertainties about such things as this are not as well understood as, say, the steam tables. As we skeptics know all too well, the science is not settled. This is possibly one of the things that had Dr. Lindzen saying before Congress a couple of weeks ago that the calculated value of 1.2 deg C rise due to a doubling of CO2 is beginning to look more like only half that much. The process involves many things that are not as well understood as other topics in thermodynamics, which is why no one has yet accurately predicted future temperatures.

“It’s easier to use the conservation of energy to see what happens to temperature.”

Provided we know all the processes and losses involved, which we don’t, as Bart R stated above.

• suricat

Well said Bruce!

Engineering disciplines consider energy consumed within a system that produces ‘work’ as ‘enthalpy’. Any energy loss that doesn’t produce ‘work’ within that system is known as ‘entropy’!

This leads the engineering sector to a ‘cut off’ dichotomy where ‘enthalpy’ is understood, but ‘entropy’ (the rest of the 100% efficiency) needs to be explained by science.

If only science could explain where the rest of the energy goes there would be no need to evaluate the ‘attractor’ quality of a ‘subject under scrutiny’!

Best regards, Ray Dart.

• Energy is not going to any unknown place. The efficiency losses mean only that other forms of energy are converted to heat. This is an efficiency loss because the other forms are capable of producing more work than one can get from the resulting heat. One related concept is exergy, which tells how much work the system can produce using ideal heat engines such as the Carnot engine.
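Pekka’s exergy point can be made concrete with a small sketch (my numbers, purely illustrative): the maximum work obtainable from heat Q at temperature T, relative to an environment at T0, is Q times the Carnot efficiency.

```python
# Sketch of the exergy concept mentioned above: the work obtainable from heat
# via an ideal (Carnot) engine is limited by the source and sink temperatures.
def carnot_efficiency(T_hot, T_cold):
    """Maximum theoretical efficiency of a heat engine between two reservoirs (K)."""
    return 1.0 - T_cold / T_hot

def exergy_of_heat(Q, T, T0):
    """Maximum work (J) extractable from heat Q (J) at temperature T (K), environment at T0 (K)."""
    return Q * carnot_efficiency(T, T0)

# 100 J of heat at 500 K, with surroundings at 288 K:
print(round(exergy_of_heat(100.0, 500.0, 288.0), 1))  # prints 42.4
```

The remaining 57.6 J is not lost; it is simply heat that no engine, however ideal, can convert to work at those temperatures.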

• Bruce Cunningham

“Energy is not going to any unknown place. ”

Really? I am not a thermodynamics expert, but as I understand it the energy lost to entropy will not show up as temperature somewhere else. This link I posted below tries to figure out where all the entropy that my thermo prof said is going to bury us all someday, is going.

Regards

• Bruce Cunningham

I started reading this about entropy, but after a while I decided I had more important things to suffer through!

http://blogs.discovermagazine.com/cosmicvariance/2009/01/12/where-does-the-entropy-go/

• Bruce Cunningham

Thanks for the kind remarks. Here is what appears to be a good discussion and an attempt to arrive at a better understanding of the subject. I haven’t read too far into it (I’m a little slow) but maybe I will try to understand when he talks of such things as a “miracle of curved spacetime” and Euclidean Quantum Gravity at a later date.

http://blogs.discovermagazine.com/cosmicvariance/2009/01/12/where-does-the-entropy-go/

16. snide

How many times can you jump the shark, Judith?

“Any increase in the temperature has to increase the energy flow away from the body by all means it can, i.e. conduction, convection, evaporation as well as radiation.”

Energy flow away from the earth can only be done via radiation.

• Steeptown

The “body” is the earth’s surface, not the TOA. Doh

• Bart R

But then, there are no hurricanes in space.

One believes Dr. Curry must be familiar with the top-of-the-atmosphere radiation methods and conclusions, and would know that the energy flow of the Earth as a whole is easier to calculate when treating the boundary as the edge of the vacuum of space.

The far harder math is in what happens under that boundary, all the way down to the point that the surface can be considered to not conduct heat away underground at any meaningful rate.

Isn’t the distinction important?

To my philosophy, it isn’t so impressive: changes at the top of the atmosphere due to our actions ought be reduced as a matter of our obligation to conserve for our heirs what our ancestors passed on to us. To others’ point of view, I have a hole in my head for thinking so, apparently.

So, for them, the distinction informs what are the consequences of this change. A rise in temperature of a handful of degrees absent any other effects is harmless, no?

A smaller rise of temperature accompanied by the sea pack ice and glaciers of the more temperate parts of the world (including the North Pole) melting, the ice of the most extremely frigid parts of the world (Antarctica) increasing in volume until the temperature rise there passes some tipping point, the salinity and acidity of the oceans shifting, the distribution of water vapor in the atmosphere changing, wind and ocean current velocity skyrocketing and dominant patterns wildly changing or disappearing… that’s the stuff that gets accused of being alarmism by the complacent contrarian.

Isn’t what Dr. Curry’s doing attempting to find a mathematical presentation of the facts of what we are doing to us?

I mean, if I have an accountant and all they can tell me is ‘at the end of the day, you have as much money going out as coming in, but your business has grown larger’, then what’s the point?

• Actually, the “hurricanes in space” (turbulence in the solar wind) is what changes Earth’s climate via changes in the arctic and antarctic oscillation leading to changes in ozone over the mid latitudes. I doubt if the radiative physics experts are ready and capable to recognise this though.

• Bart R

tallbloke

I doubt if the radiative physics experts are ready and capable to recognise this though.

My reading may be limited, and my math skills yet too poor to keep up with the most abstracted calculations, but even casual search of radiative physics experts published statements on this topic show a lot of readiness and capability specific to your point.

It just all indicates an effect two or more orders of magnitude too small to account for observations or to be considered significant compared to CO2 sensitivity.

• Jim

Bart R said:
To my philosophy, it isn’t so impressive: changes at the top of the atmosphere due to our actions ought be reduced as a matter of our obligation to conserve for our heirs what our ancestors passed on to us. To others’ point of view, I have a hole in my head for thinking so, apparently.

If humans worked by your philosophy, we would still be living in caves, burning wood or dung for heat, and chasing game with rocks.

• Bart R

Jim

Huh.

Doesn’t sound likely to me, unless one confuses ‘paralyze’ with ‘conserve’.

(Heck, we burn more wood and dung for heat — or whatever — as a species today than at any time in the past.)

We haven’t succeeded even at conserving for the current generation as little as the level of health of the previous one; demographers opine that people born today have a life expectancy shorter than our own.

Conserving what is valuable from the past takes more advanced technology and methods than does lazy consumption to extinction.

Certainly, those who seek to use up the Earth on the promise of transport to a better world are at odds with my point of view.

Is this your stance?

• Jim

I’m all for conservation, just not to extremes. We aren’t “using up the Earth.” We are just part of a natural progression of life on a habitable planet. We are natural, not synthetic. We belong here. We aren’t doing anything wrong by existing and making a good life for ourselves. So, obviously, the devil is in the details. The carbon in the ground once was not, so we aren’t really destroying anything, just transforming it. New species will spring up to fill new niches as other species go extinct. If you believe in evolution, you have to admit this is the way DNA-based life functions on Earth with or without humans. In fact, I would go so far as to say that species are but waves on the ocean of DNA. DNA is the prime actor, species are simply manifestations of it.

• Bart R

Jim

..all for conservation, just not to extremes.

What could conceivably be more extreme than asserting, “We aren’t doing anything wrong by existing and making a good life for ourselves”?

Can no one do any wrong, ever, then, on that argument? Pirates and military dictators exist and make good lives for themselves.

New species take many centuries or millennia to spring up, and as diamonds are produced, because of heat and pressure deforming what had been until the stress forces transformation. If you’re enticed by the opportunity to be responsible for new species, one suggests genetic engineering is more timely, and less costly than inducing catastrophe to create new niches or eliminate old denizens of current ones.

I can’t talk to DNA, can’t watch it fly from branch to branch, can’t pet it or take it for walks or enjoy direct experience of it.

Your argument is very abstract, and in its abstraction sounds far prettier than the stark and dangerous malady it propounds.

• Jim

Species take centuries to develop? Really? It is our biological imperative to survive. That isn’t extreme. It is the way nature works.

• Bart R

If as you say our biological imperative is to survive (and Darwin might add pass on our traits to our progeny), then it seems you are agreeing with my point; after all, your argument, “The carbon in the ground once was not, so we aren’t really destroying anything, just transforming it,” (which by the way didn’t hold much water when my college room mate applied it to the beer in the fridge) tells us we’re dependent as a species for our existence and good lives on the circumstances we’ve inherited from those who came before.

However, I’m not interested in enforcing my view of the world on others, or persuading that belief in stewardship, obligation, duty and giri are valuable.

I’m quite willing to participate in tracing the logic of this puzzle so far as reason can take us in science, in the belief that the best of evidence and inference will lead people of good intention to act based on their traits’ survival interest.

• Brian H

>:(
Any argument that invokes “tipping points” is instantly rendered into hand-waving. None such have been coherently or logically identified, and indeed geological history is rife with long periods when pretty much any variable or combination thereof you care to name has been above, below, or similar to the present. No runaways have occurred; such rapid transitions as have occurred are sprinkled across a huge range of circumstances.

E.g.; the end of the last interglacial apparently had wild up/down temperature swings, and storms violent beyond all modern experience (multi-ton rocks tossed miles inland, seacoast reshaping-in-a-hurry, etc.)

If the Magic Thermostat actually were as effective as fantasized, its deployment as planned and demanded would be more likely to terminate the interglacial than anything else.

Be careful what you ask for.

• Bart R

Brian H

Proof, one supposes, that if one says enough words one will find the tipping point that sets off a pet peeve in someone in the audience. ;)

Tipping points are of course the stuff of handwaving, or Chaos Theory as it is more formally called. In the case I invoked, the existence of a tipping point is not key to the argument. At some temperature, ice stops increasing and starts decreasing. Tipping point. What of it?

Maybe you’re thinking of a Slippery Slope?

• Energy flow away from the earth can, indeed, only occur via radiation. From this we can conclude that the change measured by radiative forcing must be compensated by changes in some temperatures.

Increased absorption in the atmosphere lowers the emissions at some specific wavelengths – those wavelengths where the optical depth of the atmosphere is neither very small nor firmly close to one, but somewhere between these extremes. The radiation is not likely to change much at wavelengths where the optical depth is very close to unity. Thus the compensating change must come from direct radiation from the earth’s surface and the lower layers of the troposphere.

Due to the fourth power in the Stefan-Boltzmann law, a modest increase of the highest temperatures occurring in the earth system has a large effect on the outgoing radiation (a 1 K change at 288 K corresponds to 5.4 W/m^2 in black body radiation). This counteracts the fact that most of the wavelengths are trapped effectively by the atmosphere.
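The 5.4 W/m^2 figure, and the canonical ~1 C no-feedback response discussed in the post, both follow from linearizing the Stefan-Boltzmann law, dF = 4σT³ dT. A minimal sketch in Python, using only the temperatures already quoted in this thread:

```python
# Linearized Stefan-Boltzmann response: dF = 4 * sigma * T^3 * dT
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux_change_per_kelvin(T):
    """Black-body flux change (W/m^2) per 1 K temperature change at T."""
    return 4.0 * SIGMA * T**3

# At the ~288 K surface temperature quoted above:
print(flux_change_per_kelvin(288.0))  # ~5.4 W/m^2 per K

# Inverting at the 255 K effective radiating temperature, a 3.7 W/m^2
# forcing gives the canonical ~1 C no-feedback response:
print(3.7 / flux_change_per_kelvin(255.0))  # ~0.98 K
```

The same two lines reproduce both numbers in the comment, which is the whole content of the usual back-of-envelope derivation.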

• After posting my previous message I realized that it may contradict my earlier thinking.

The statement made in that message implies that the temperature change at the earth surface is the one temperature change that is most firmly related to radiative forcing. It implies also that the calculation of this temperature change is controlled by radiative processes. The dominant condition is that the total radiation directly from the earth surface and the lowest troposphere to space has to remain unchanged. The direct radiation is reduced at the limiting wavelengths where the increase in the optical depth of the full atmosphere is largest, and it is increased at wavelengths where the optical depth remains small.

The reduction is controlled by the change in CO2 concentration, while the increase is controlled by the temperature change at earth surface. Requiring that the changes compensate each other is the dominant factor determining the no feedback CO2 sensitivity.

• Joe Lalonde

Pekka,
You have to think outside the box of current parameters.
We need water vapour in the air so our bodies would not be permanently attached to water.
If you go to another planet like Mars, the atmosphere would suck the water out of the body in seconds. As seen by the Mars rover finding a speck of ice: it did not melt, it just evaporated.

• Joe,
The idea of my message is to question whether we can determine the no feedback CO2 sensitivity without discussing all the changes that happen in the atmosphere.

The approach would be to look at the radiation from the earth to space and analyze what is behind the changes that we observe.

The first assumption that I make is that the temperature at the top of the troposphere does not change. This means that the higher temperature of the earth surface results in a larger temperature difference between the earth surface and the top of the troposphere. If the lapse rate does not change, this implies that the altitude of the tropopause increases.

The second assumption is that most of the radiation comes either from the top of the troposphere or from the earth surface and low altitudes with a temperature following closely the temperature at earth surface.

If these two assumptions are valid to a sufficient degree, one can conclude that the principal changes resulting from an increased CO2 concentration in the outgoing radiation are the following:

– A larger fraction of the range of wavelengths is radiating from low temperatures close to the top of troposphere. This follows from the increased optical depth of the atmosphere.
– The remaining fraction of wavelengths has an increased intensity due to the higher temperature of the earth surface.
Requiring that the total change in outgoing radiation is zero, as it must be, relates the temperature at earth surface directly to the CO2-concentration through calculation involving only radiation.

It is clear that the two assumptions are not precise, but they may be good enough to serve as a starting point for a more accurate calculation.

The most important consequence of this approach is that one would, after all, be able to relate radiative forcing and the no feedback CO2 sensitivity accurately enough without the need to go through all the changes in the atmosphere that follow from an increased surface temperature.

Whether the approach is valid or not has to be studied by analyzing the real atmosphere. Whether it is valid in a particular climate model can be studied by analyzing that climate model.
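The balancing argument above can be made concrete with a toy two-component model (all numbers below are illustrative assumptions of mine, not values from the comment): a fraction f of the spectrum radiates from near the tropopause at a fixed temperature, the rest from near the surface, and the surface warms just enough to keep total outgoing radiation unchanged when f grows.

```python
# Toy two-component OLR model for the balancing argument above.
# Assumed illustrative values: f = 0.6 of the spectrum radiates from
# near the tropopause at T_top = 220 K, the rest from near the
# surface at T_s = 288 K.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def olr(f, t_top, t_s):
    """Outgoing longwave radiation of the two-component model."""
    return f * SIGMA * t_top**4 + (1.0 - f) * SIGMA * t_s**4

f, t_top, t_s = 0.6, 220.0, 288.0
df = 0.01  # assumed small increase in the 'trapped' fraction from added CO2

# Hold total OLR fixed and solve analytically for the new surface temperature:
# (1 - f - df) * T_s_new^4 = (1 - f) * T_s^4 - df * T_top^4
t_s_new = (((1 - f) * t_s**4 - df * t_top**4) / (1 - f - df)) ** 0.25

print(t_s_new - t_s)  # ~1.2 K of surface warming with these illustrative numbers
```

This is only a sketch of the bookkeeping, not a radiative transfer calculation; its virtue is that it shows how the no feedback surface warming falls out of a pure radiation-balance constraint once the two assumptions are granted.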

• Joe Lalonde

Pekka,
Current science flew before it could walk.
Lumping all parameters together globally is a huge mistake in science.
What happens at the equator, even in the troposphere, is not evenly happening at the poles. AT THIS PRECISE TIME.
Even the hemispheres are too unbalanced to get a global all-around understanding of the energies and forces that are in place. Science should have studied region by region at different latitudes, incorporating the differences in land mass to water volume mass.
Planetary rotation is 2-dimensional in energy distribution (unlike an atom, with a 3-dimensional rotation).

Just watch a satellite image of the globe turning with cloud cover. Clouds DO NOT cross the equator.

Climate models are nowhere near to being accurate or precise until they can incorporate or understand that this is a rotational body and not a stationary global map. AT THIS PRECISE TIME.
(I will use this phrase “AT THIS PRECISE TIME” as this planet is constantly changing due to factors of planetary slowdown. This constantly changes many other applications in science. Current science of physics fails when time is included.)

• Bart R

Joe Lalonde

Uh, what?

We need water vapour in the air so our bodies would not be permanently attached to water.

While it’s not comfortable, many people routinely encounter effectively zero humidity. We’re best adapted to an atmosphere with water vapor because it’s there, but .. uh.. what?

Sublimation on Mars dominates because Mars is cold and has low atmospheric pressure; the same effect happens in the polar extremes of Earth (where again, practically zero humidity dominates). Watch a snowflake on a dark surface, and it vanishes.

• Joe Lalonde

That snowflake melts first then evaporates.
Where there is constantly no humidity, there is no sustainable life on this planet as water is needed.

• Bart R

Your contention about the behavior of snowflakes does not match my direct observations under the circumstances I’ve described, at -40C or colder.

Indeed, getting water liquid without losing much of it to sublimation at altitudes in extremes of cold is quite the trick, experienced climbers have told me.

The Saharan subspecies of breviceps is well-adapted to constant near-zero or zero humidity, to name but one counterexample to your claim.

• Brian H

There is persuasive evidence (e.g., optimal wing loading design for pterosaurs, etc.) that the atmosphere was once 2-4X as dense. It was certainly not always at the present level, probably much higher at some stage of geological development.

The loss of that atmosphere and much accompanying energy did not occur radiatively. Some form of evaporation or stripping (by solar wind, or whatever) occurred, and is presumably always occurring. Brownian motion assures at least a little escape of gas from TOA.

So that’s another hole or fudge in the “radiative balance” approximation. How serious it is I don’t know. Do you? Does anyone?

• if we are talking about surface temperature (which is what I am talking about), there are many ways energy can leave the surface.

17. kai

A simple question, which can have implication on energy balance at surface level:
Is the assumption that the earth surface, even the solid part of it, acts as a black body in the LW region? I ask because none of the basic derivations of CO2 sensitivity, nor the illustrations of energy fluxes involved in the greenhouse effect, show any ground reflection in the LW.

It is always shown in SW – surface albedo is considered important – but not in LW.

Why? No reflection for those wavelength?

Not important because it is recaptured back immediately by the atmosphere? (This would make no sense, as the same GE effect of absorbing from one direction and emitting in all directions should be at play, so I cannot believe it would have no effect.)

Or are all the computations done at the top of the atmosphere, with the lapse rate used to translate to surface temperature? If this is the explanation, then the simple figures explaining energy flow are misleading…

18. John M.

Any increase in the temperature has to increase the energy flow away from the body by all means it can, i.e., conduction, convection, and evaporation as well as radiation. When the outflow of energy equals the inflow, thermal equilibrium exists with the body at a higher temperature. But that temperature cannot be as much as ~1C higher, as the radiative outflow has to be less than the total outflow.

As radiation is to all intents and purposes the only possible egress to space for energy, I cannot see how you justify that last sentence. If it is there because you think conduction and convection within the atmosphere somehow reduce it, then you are deluded. Conduction and convection cease at the boundary of the atmosphere, however nebulous that boundary is.

• jp.morlet

Why should there be a radiative equilibrium at the earth surface?
And why compute the radiative balance using the mean global temperature, when what (approximately) has an influence is the fourth root of the average fourth power of temperatures?
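The distinction raised here can be shown numerically (the two temperatures below are arbitrary assumptions, chosen only to exhibit the inequality): because T⁴ is convex, the flux computed from a mean temperature always underestimates the mean of the fluxes.

```python
# Flux of the mean temperature vs. mean of the fluxes. T^4 is convex,
# so averaging temperatures first always underestimates (Jensen's inequality).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

temps = [250.0, 300.0]  # arbitrary illustrative temperatures (K)

mean_temp = sum(temps) / len(temps)
flux_of_mean = SIGMA * mean_temp**4
mean_of_flux = sum(SIGMA * t**4 for t in temps) / len(temps)

print(flux_of_mean, mean_of_flux)  # flux_of_mean is the smaller of the two
```

The gap grows with the spread of temperatures, which is why averaging over day/night and pole/equator before applying Stefan-Boltzmann is more than a cosmetic choice.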

• Peter317

As I took pains to point out in my original posting on the subject on an earlier thread, this is all about outflow from the surface and not from the TOA.

19. Irrelevance of the CO2 global warming hypothesis?
The theory of CO2 heat absorption and re-radiation is well understood. What appears to be the bone of contention is the extent of it. One thing is clear: on any relevant time scale the CO2’s heat retention (storage capacity) is insignificant.
If 1368 W/m2 was more or less constant during the last 150 years, the question is:
Is the CO2 effect capable of causing the observed temperature variation either regionally or globally?
I think the answer has to be a clear NO.
It has been said by some pro-CO2-agenda scientists that if the AMO, PDO, and SOI are excluded, then the temperature rise tracks the CO2 emissions increase during the last 150+ years.
That is not only misleading but fundamentally wrong. The above indices are de-trended by the process of calculation; pressure differentials will automatically eliminate any common positive or negative gradient.
Urban heat islands are another matter.
Solid surface (ground and ice) heat absorption and radiation can be calculated taking into account seasonal effects on ice and snow coverage.
The oceans’ heat absorption and retention capacity in comparison with the atmosphere are enormous. The first two are calculable; re-radiation, however, is not, as it is affected to a significant degree by the variability of current circulation from one year to the next.
Any change in the circulation parameters will affect heat release and consequently temperature and pressure differentials.
A number of critical factors related to the ocean currents are considered in the Pacific and Atlantic, with a long-term rising trend present as shown here.
The results are an important foundation for understanding climate change, rendering the CO2 effect insignificant.

20. Labmunkey

BLAST!

Another really interesting thread – unfortunately I do not have the time to go into the depth required for such an interesting subject, as my beautiful wife has just given us a beautiful new baby boy.

Hopefully the thread will still be going in a week’s time.

I must say though, Dr Curry, I’m glad you are now beginning to look at the ‘assumptions’ with a critical eye. It’s what started me worrying about the theory. I’d be very interested to see where it leads you – and I’d encourage you to look at any other assumptions you can find too.

regards to all
LM

Lab

congrats on the sun and air – shoot -that should be son and heir!

Will look out for your return both here and at Mr Black’s BBC circus. Your posts are always worth a read.

Gary

• Baa Humbug

Congrats mate and all the best of good health to Mrs and little Labmunkey.
p.s waddidya name the little one?

• Congrats on your new son!

• Labmunkey

we opted for Kieron.

Thanks for the kind messages – I’ll be back to my belligerent posting-self in no time -)

• Labmunkey

Wow. You can tell I’m tired – spelt my own son’s name wrong…. Kieran, not Kieron lol

• Peter317

Congrats and best wishes, LM.

Hope you don’t have too many sleepless nights

• Conger rats to you and Mrs Labmunkey!

• Bart R

Congratulations!

• Labmunkey

many thanks all – anyone tell me what sleep is like? I forget…. :-)

• Baa Humbug

Sure mate, I can prescribe something much much more effective than sleeping pills or exhaustion.

Just read a few of the recycled alarmist comments here and you’ll nod off in no time. e.g.

hottest year evva…seas rising…glaciers meltizzzzzzzzzzzzzzzzz

• Bart R

You already sound too well-rested for a man in your happy situation. ;)

21. Joe Lalonde

Ah, Judith,

I hit this same stone wall for physics and other areas where you MUST follow the LAWS created to contain “man’s concept” of science.
Climate science to many people is totally foreign and does not make common sense of what the weather is doing. Classing ALL science under one category, in too broad terms, is also a problem.
Many common people say that the theories and concepts I have been following and looking at just make too much sense, and they CAN understand what the planet is doing and trying to achieve.

The planet is trying to tell the story of its evolution, but too much over-educated and incorrectly understood science has been our basis of knowledge and has prejudiced the whole process. Simple measurements are far easier to follow than complex and most often incorrect formulas.
Why do these fail?
They fail to incorporate the constant changes that have been occurring since this planet’s birth.

22. Climate Sensitivity. Warmth melts Arctic ice and other ice. That lowers albedo. That causes more warming, and this continues until enough Arctic ice is melted to allow Arctic Ocean Effect Snow to raise albedo and cool the Earth. This is the major climate sensitivity. Take the false carbon feedback parameters out of the theory and models, put the albedo and Ocean Effect Snow in, and improve the models. I have been reading books and websites, especially NOAA’s, for almost two years, and the only two places I have seen reference to Ocean Effect Snow having a climate effect were in Tom Wysmuller’s talk, “The Colder Side of Global Warming”, and in reading the theory of Maurice Ewing and William Donn. Consensus climate science does not acknowledge Arctic Ocean Effect Snow and will never be correct until it is included. Michael Mann’s book “Dire Predictions” hardly mentions snow. NOAA has a recent article about Lake Effect Snow, but I can find no reference to Arctic Ocean Effect Snow being considered in their climate models.

23. Chris, thanks for your nice summary, I understand all this. My issue is that all this seems to have nothing to do with surface temperature, which is the issue of concern. I understand the history of all this, Moller etc. Ramanathan’s paper showed how to use the surface energy balance approach with an enhancement to the downwelling IR flux from the atmosphere, which is part of my proposed strategy. Do you have a critique of the Ramanathan paper? This is key to my argument. I am proposing to extend his idea to increase our understanding of how the actual surface temperature would change in response to CO2 forcing. And if this idea doesn’t make sense, does the broader concept of CO2 no feedback sensitivity make sense? To me, the 1C number doesn’t make physical sense in the way that it has been determined.

The IPCC defines climate sensitivity as “a metric used to characterise the response of the global climate system to a given forcing. It is broadly defined as the equilibrium global mean surface temperature change following a doubling of atmospheric CO2 concentration.”

As a layman I look at this statement in terms of its policy implications. IPCC (whose remit is to assess the science and inform the policy) must regard it as policy relevant to define climate sensitivity. So what is the policy relevance?

Well it appears that we are interested in ‘surface temperature’. The assumption being that we humans live on the surface and therefore we have an interest in what the temperature of the surface might be. It would seem to follow from this that if there was warming (or cooling) in other parts of the climate system, this would not be relevant for policy. Is this right?

We then have reference to ‘global mean surface temperature change’. Here I have to make an assumption that we are to read ‘temperature’ as a proxy for heat energy. From the point of view of policy, the issue is whether the surface of the earth will be heated to an extent that it becomes dangerous to mankind. Is this right?

If we get to this point, we then have to ask if the system currently used to calculate global surface temperatures actually tells us anything of value about the heat energy at the surface. I know Judith has promised us a post on surface temperatures and I would not wish to pre-empt that. However, the point I am making is that the efforts of the IPCC to define climate sensitivity will have no policy value if that which we measure (and the way in which we measure and calculate it) to achieve our records of global mean surface temperature is not in fact a true reflection of the heat energy at the surface.

Does that make any kind of sense?

• Brian H

Lotsa sense. What doesn’t make sense is averaging temperatures, even on one site, much less across huge grids of the planet’s surface. Phase changes and the fourth-power law render all such efforts egregiously inane.

25. I’d have thought that the best place to measure the no feedback case was the Arctic, where there is no water vapour feedback, and the place to measure the feedback case is the tropics, which is where positive water vapour feedback, if it exists at all, should be seen. I could be wrong, but I thought this had all been done already by Lindzen, and the community at large either yawned and said it was interesting or didn’t even bother to read it, citing tobacco-denier/oil-lobby* smears; something, Judith, that you have also been guilty of, sad to say…

*[Not that the oil lobby argument is implausible but it seems that most of them are in the forefront of activism for carbon trading. Common sense tells us of course that big energy companies would likely control any nascent alternative energies too, and that indeed seems to be the case…]

Also, regarding the simplistic 1D theory, has anyone considered a ramp loading rather than a step loading for the doubling of CO2? It’s only a little more complicated, involving some additional numerical iteration, but it would be a darn sight more realistic and hence less pessimistic than instantaneous doubling.
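For concreteness (a sketch of my own, not anything from the thread): with the commonly used simplified forcing expression F = 5.35 ln(C/C0) from Myhre et al. (1998), an exponentially growing CO2 ramp gives a forcing that grows roughly linearly in time toward the step-doubling value, because ln(C/C0) is linear in t when C grows exponentially.

```python
import math

# Ramp vs. step CO2 loading, using the common simplified forcing
# expression F = 5.35 * ln(C / C0) W/m^2 (Myhre et al. 1998).
def forcing(c, c0=280.0):
    return 5.35 * math.log(c / c0)

# Step loading: CO2 doubles instantly.
step = forcing(560.0)  # ~3.71 W/m^2, the canonical doubling figure

# Ramp loading: CO2 grows at an assumed 1%/yr, doubling in ~70 years;
# the forcing then grows linearly in time.
ramp = [forcing(280.0 * 1.01**year) for year in range(71)]

print(step)
print(ramp[-1])  # ends near the step value as the ramp completes
```

The baseline 280 ppm and 1%/yr growth rate are illustrative assumptions; the point is only the shape of the forcing history, which is what distinguishes the two loading scenarios.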

26. While the conventional view might be applicable to the tropics, it is definitely not applicable to the Arctic, which is convectively decoupled from the tropopause. Lindzen focuses on the tropics. Climate models do transient loading, although the 1D radiative convective equilibrium models do the ramp loading approach.

27. Judith, you write “I’ve mentioned these general ideas a number of times before, and the “mainstream” has declared me to be dotty, and not understanding that the equation ΔTs = λRF was carved in stone on Mount Sinai. I’m not sure if I have any solutions here, but there are a whole lot of questions on this subject that require much more substantive answers.”

I am not so much concerned with the science of what you have written as I am with its political implications. The basis of CAGW is in 3 steps.
1. Estimate the change in radiative forcing for a doubling of CO2.
2. With this number as input, estimate the change in global temperature, without feedbacks (deltaT), for a doubling of CO2.
3. With this number as input, estimate the magnitude of the feedbacks.

We have established that the change in radiative forcing cannot be measured; there is, however, observed data which suggests that the number obtained is about right. Since this number has not been measured, it casts doubt as to how accurately we can estimate deltaT. We have also established that deltaT cannot be measured. But here, there is no experimental data to support any particular value. I have also noted that no one has established an accuracy for deltaT; your write-up, Judith, casts no light on this issue. There could also be unknown factors in its estimation that could render any particular number completely wrong.

So my claim is that deltaT is a hypothetical and meaningless number. This, together with your words, has enormous political implications.

Ever since I have been aware of CAGW, my intuition told me that it was either, at best, a myth or, at worst, a hoax. I only have a bachelor’s degree in physics, and here are eminent people, led by the Royal Society, stating in unequivocal terms that I was wrong. How can I reconcile this dilemma? The expression “the equation ΔTs = λRF was carved in stone on Mount Sinai” provides the answer. Many eminent people have nailed their colors to the mast on the basis that they believe that deltaT is gospel. Unfortunately people like Richard Lindzen don’t help on this issue.

What I hope you have shown is that there is no scientific basis to believe in CAGW. After all, deltaT is an input to the third stage of the estimations, and, from what I can see, climate sensitivity is roughly linearly proportional to deltaT in these calculations.

So, what I think you have demonstrated with your day’s work is that we don’t know whether CAGW exists, and maybe that CAGW is just plain wrong.
Surely it is impossible to overestimate the political implications of this idea.

• Michael

Ever since I’ve seen people on blogs write ‘CAGW’, my intuition has told me that they, at best, have come to believe in a myth, or at worst, are actively engaged in a hoax.

Either way, such people, by writing ‘CAGW’ , have nailed their colours to the mast in that they take ‘CAGW’ as gospel. Furthermore, they demonstrate that when they look at the science, they can only see it through their own political lens.

• Jim

Catastrophic effects from global warming, the “myth,” are all over the mainstream media. And don’t say this hooey isn’t peer reviewed; that does not matter. The MSM shape the public’s image of global warming. That is why some refer to AGW as CAGW. It just makes sense. The fact that you (supposedly) don’t “get it” shows your own colors on your mast. They don’t see it through a “political lens,” they see it on TV, the movies, newspapers, magazines, and the internet. If you are serious about changing this perception, come out publicly against these purported catastrophes. Then you will have some credibility.

• I am not questioning whether increasing CO2 will warm the earth’s surface. The question is how much, and how the energy gets distributed in the atmosphere, ocean and melting/freezing of ice.

• Judith, you write “I am not questioning whether increasing CO2 will warm the earth’s surface. The question is how much, and how the energy gets distributed in the atmosphere, ocean and melting/freezing of ice.”

I agree completely. What I am trying to say is that the way people have tried to estimate how much global temperatures will rise as a result of doubling CO2, without feedbacks, does not work. The whole process is so uncertain and so little understood that any number that is obtained is hypothetical and meaningless.

28. Jim

This calculation, in its various forms, does not appear to take into account latitude. If one is trying to gauge the global impact, shouldn’t one integrate from pole to pole? Maybe once each for the four seasons, and take an average?

• Jim

Shouldn’t it also include, in addition to pole to pole integration, a 360 degree integration so as to include night as well as day? It seems this could be done better even for the “radiation only” case with no feedbacks. Although if a realistic atmospheric profile were used for each column of the atmosphere, some feedbacks would be built in. But that wouldn’t really take away from the meaning of the calculation, it would add to it.
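A pole-to-pole average of the kind suggested above has to weight each latitude band by its area, which scales as cos(latitude). A minimal sketch (the zonal temperature profile below is a made-up illustration, not real data):

```python
import math

# Area-weighted global mean over latitude bands: each band's area is
# proportional to cos(latitude), so polar bands count for much less
# than equatorial ones.
def global_mean(values_by_lat):
    """values_by_lat: list of (latitude_deg, value) pairs on a uniform grid."""
    weights = [math.cos(math.radians(lat)) for lat, _ in values_by_lat]
    total_w = sum(weights)
    return sum(w * v for w, (_, v) in zip(weights, values_by_lat)) / total_w

# Made-up zonal-mean temperatures (K): warm equator, cold poles.
profile = [(lat, 300.0 - 0.5 * abs(lat)) for lat in range(-90, 91, 10)]

print(global_mean(profile))  # closer to the equatorial value than a flat mean
```

The diurnal (360-degree) integration mentioned above would add a longitude loop in the same spirit; the cosine weighting is the part that a naive average silently gets wrong.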

29. Judith

I’ll gladly take your word for it that the Arctic is convectively decoupled from the tropopause but surely that means it is even more the place to look for a no feedback radiative effect. If not, why not? Lindzen has also argued before that the Arctic is probably the best place to view the effect of a manmade increase in greenhouse gases, not in his latest paper, but certainly in his talks, some of which are on youtube. He concludes that the signal just isn’t apparent.

So the ramp loading and the transient loading all come up with the same number as the step loading…Interesting, and a somewhat unlikely result I’d have thought. It all rests on other more dominant assumptions perhaps.

I am given to understand that the original argument for a higher feedback number was mainly because the models could not emulate the 20th century temperature rise without it. Then, because the cooling periods could not be emulated, they turned up the aerosol knob. Of course all that was predicated on the false assumption that natural variation could be emulated rather well. The wheel fell off that assumption with the latest “pause” in warming, which wasn’t supposed to happen, regardless of the belated excuses for it now involving unidentified “noise” (an explanation even Trenberth in the CRUgate emails said was insufficient).

Now there were two papers put out by a Swiss team (you should know who) on consideration of European warming where they argued that natural effects could be ruled out; the first paper argued for strong water vapour feedback causing the 1980 to 1998 temperature rise and the later paper, using exactly the same data, argued for a reduction in aerosols causing a recovery in temperatures over the same period. A hoot really…but I’m sure both of these contradictory conclusions were “robust”.

The point is, there is no real scientific justification in assuming a strong positive vapour feedback in the first place as it all comes down to your initial assumptions about a) natural variation, and b) aerosols, both of which are acknowledged to be too uncertain to make such assumptions. Except, of course, for the fact that a single degree just isn’t scary enough to both influence policy and keep the scientific funding at the same high level. Would you agree?

Following on from that, I wonder how many skeptics about catastrophic warming would be totally unskeptical about such moderate, possibly beneficial (if you believe the IPCC, that is) warming.

30. Jonathan Gilligan

A simple, technical question that demonstrates my ignorance, but it seems to me that a lot of the argument is not about science but about the definition of “no feedback.”

How does one really define a “no feedback” response? Following Manabe and Wetherald, you pretty much hold the lapse rate fixed, figure the temperature at the TOA, and use the lapse rate to infer the surface temperature. Judith criticises this for failing to take into account how the lapse rate might change (due to changes in latent and sensible heat flux), but why doesn’t that change in lapse rate count as a feedback?

The whole notion of calculating the surface warming in a condition where the atmosphere would be manifestly out of equilibrium (e.g., relative humidity would be unnaturally low as temperatures rise but specific humidity is not allowed to adjust) may be great as a homework exercise to simplify things to the point that a student can solve it, but has no physical significance, and if we raise it as a scientific dispute, it all comes down to nonscientific quibbles about which adjustments to the atmosphere count as feedbacks (and are thus excluded from the calculation) and which are not counted as feedbacks (and thus included).

As Judith said in the OP: there is no physical significance to a no-feedback sensitivity because we can’t turn the feedbacks off, so we’re talking about an imaginary theoretical construct and all the arguments I’m seeing look more like arguments over how to define the construct than how to analyze it once it’s clearly defined.

Given all the energy that goes into this discussion from people who understand the issues better than I do, I figure I’m missing something, but I don’t see what it is; perhaps someone could explain to me where I go wrong in my understanding of this argument.

Alternately, if there is a clear and widely agreed-to precise definition of which processes are allowed to change (re-equilibrate) and which must remain fixed under “no feedback” conditions, could someone point me to it?

• Jonathan – among the processes generally required to remain fixed in a “no-feedback” analysis are atmospheric water vapor, lapse rate, clouds, and albedo, because these are all recognized as probably the most important variables that respond during timescales of interest to a temperature change induced by the forcing agent alone (e.g., increased CO2), and consequently mediate temperature changes of their own that enhance or diminish the no-feedback response. Individuals who model the no-feedback response might comment on what is allowed to change other than the variable responsible for the forcing. For example, are convective adjustments to the lapse rate permitted?

Your question about the value of a no-feedback calculation is provocative. It seems to me that one use is to permit empirical data on specific feedbacks (e.g., the water vapor response to the cooling induced by Mt. Pinatubo in 1991) to be combined with the Planck response as well as information about other feedbacks to estimate a total climate sensitivity response to a forcing. This can complement totally empirical estimates derived simply by correlating known forcing changes over time with observed temperature changes, a method of value but subject to uncertainties about the exact magnitude and nature of all relevant forcings.

• Peter317

Fred, I’m starting to have misgivings about the value of a ‘no feedback’ analysis. It’s one thing to hold other processes and variables fixed, but the results could be deeply misleading – especially if the variables we’re holding fixed are in any way interdependent.

Consider this thought experiment:

I’m trying to determine the relationship between the voltage across a resistor and the power dissipated in it, with no prior knowledge of the relationship except the equation: Power = Voltage * Current.
So, using this equation, I calculate the Power for several values of Voltage while keeping Current at a fixed value (and Resistance as well, although I don’t use it). And, lo and behold, I conclude that Power is directly proportional to Voltage.
Now had I done an actual experiment, with a real resistance and real meters, I would have come to a very different conclusion.
The reason being, I was wrong to hold Current at a fixed value, as it’s an interdependent variable, and increasing the voltage will also increase the current through the resistance.

That, incidentally, is one of the dangers of models. Models will happily allow you to create constructs and associations which cannot possibly exist in the real world, so we have to be very careful to avoid these traps.

• The models recognize and incorporate interdependence into their construction. Exactly what inputs they use, and with what outcomes, is something they can address better than I. Let me give two presumed examples, however.

First, and very simplistically, the Planck radiative response to a temperature change is applied to temperature changes induced by feedbacks as well as those induced by forcings. (Otherwise, many feedbacks would lead to a runaway climate).

Second, cloud feedbacks and albedo feedbacks interact. For example, if a cloud forms over a dark ocean area, it will scatter more light back to space, with a cooling effect. On the other hand, if a cloud forms in response to temperature change over a region that has become snowy due to an ice-albedo feedback, the cloud albedo will tend to cancel the snow albedo, with little change in outgoing flux.

The modelers should comment about additional details.

• the thermodynamic feedbacks (water vapor, lapse rate, clouds, snow/ice) will be the topic of a series of posts that will hopefully start before the end of the year.

• Jonathan Gilligan

Fred,

I’m not meaning to be provocative. Just trying to clear up my understanding. Modeling ain’t my expertise, but I understand the basic physics well (a dangerous combination). I see an argument between Curry and Colose (and others) here that seems to me to be more about defining what no-feedback means than about scientific calculations under a well-defined set of assumptions. However, since I’m not remotely expert on the details here, I may well be misunderstanding their argument.

So I’m trying to understand better whether the issue is people arguing about how to solve the same problem or about what problem they’re solving.

If everyone agrees on the problem, then I would agree that solving it as a baseline case is useful for the reasons you specify. But if there’s no clear and precise agreement what the problem is, then I see little value in people solving different problems and arguing that their results differ without explicitly discussing their different definitions of what the problem is.

31. Roger Andrews

Permit me a dumb question.

Climate models that are used to hindcast 20th century temperatures show a climate sensitivity of around 1.2C for a doubling of CO2. (GISS Model E, net forcing = 1.75 watts/sq m, warming 0.57C, is an example.)

So where does the currently-accepted model-derived estimate of 3C come from?

• sharper00

Roger,

Is 1.2C versus 3.0C not the difference between non-feedback response and feedback response in those models?

• Roger Andrews

Sharperoo

Doesn’t seem to be. The GISS Model E forcings are referred to as “net” forcings and presumably include feedbacks. Besides, if a 3C climate sensitivity were applied to the GISS forcings then Model E would show a surface temperature increase of about 1.4C between 1880 and 2003 – far more than actually occurred.

In other words, climate models can replicate 20th century warming only when they assume no significant feedbacks.

• Wikipedia for once gives a good account of the history of this number range and more recent estimates.

http://en.wikipedia.org/wiki/Climate_sensitivity

Of course, unsurprisingly, I count at least 3 recent empirical estimates missing: Spencer, Lindzen and Christy. Of course, empirical estimates are all lower than theoretical or inferential ones (from paleo data assumptions). Doesn’t nature know best?

• Roger Andrews

I guess my point is that if climate models predict (project?) 21st century warming using the 1.2C climate sensitivity they have to use to replicate 20th century warming, then they will show only about a third as much 21st century warming as they actually do.

I’m not claiming that the model predictions are necessarily wrong. I’m just hoping that someone can explain to me what’s going on. (I’m a geophysicist, not an atmospheric physicist.)

• Of course, unsurprisingly, I count at least 3 recent empirical estimates missing: Spencer, Lindzen and Christy.

Oh them wicked wikiguards. ;)

Of course, empirical estimates are all lower than theoretical or inferential (from paleo data assumptions). Doesn’t nature know best?

Part of the problem might be that the empirical estimates aren’t building in the time constants intrinsic to the inferential ones. The IPCC definition of “transient climate response” specifies a gratuitous 20 year delay along with a 1% annual CO2 rise. Since we won’t see the latter until 2060 at the earliest (or never if CO2 mitigation has kicked in by then), one has to wonder how reasonable the former is.

A 20-year delay is equivalent to sliding the Keeling curve 20 years to the right before making an empirical estimate. Since the Keeling curve is in the denominator of the estimate, doing so increases the estimated climate sensitivity (since CO2 was lower 20 years ago). While this might not seem like a big deal, if you empirically estimate climate sensitivity at say 1.78 (which is what I’ve been getting), when you take the 20-year delay into account it jumps dramatically to 2.61. Make it 30 years and you’re up to 3.18, even more dramatic.
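To see the size of this effect, here is a toy calculation (a sketch only: the Hofmann-style CO2 curve and the assumed 0.8 °C of warming over 1910–2010 are illustrative assumptions, not anyone’s actual fit):

```python
import math

# Toy illustration of how an assumed lag inflates an empirical
# sensitivity estimate. CO2 follows a Hofmann-style curve
# C(y) = 280 + 2**((y - 1790)/32.5) ppmv, and the observed warming
# over 1910-2010 is taken to be a hypothetical 0.8 C.
def co2(y):
    return 280.0 + 2 ** ((y - 1790) / 32.5)

def sensitivity(delta_t, y1, y2, lag):
    """Degrees C per CO2 doubling, attributing delta_t to lagged CO2."""
    return delta_t / math.log2(co2(y2 - lag) / co2(y1 - lag))

for lag in (0, 20, 30):
    print(lag, round(sensitivity(0.8, 1910, 2010, lag), 2))
# the estimate climbs with the assumed lag, as described above
```

The direction of the effect is the point here: sliding the CO2 curve to the right shrinks the log2 denominator, so the same observed warming implies a larger sensitivity.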

• Roger Andrews

I agree that a 20-year delay would make a large difference to climate sensitivity, but the problem is that I got my 1.2C estimate from GISS model E, and GISS doesn’t build in a 20 year delay. Its GHG forcings move in lockstep with CO2 concentrations between 1880 and 2003. They don’t shift the Keeling curve to the right at all. You can check this by comparing atmospheric CO2 against the “well-mixed GHG” forcings shown in http://data.giss.nasa.gov/modelforce/RadF.txt

• GISS doesn’t build in a 20 year delay.

That explains everything. Delay makes a huge difference in estimating climate sensitivity.

The name of the game is not climate sensitivity; it is the time to allow for CO2 to have an impact on the global climate.

I determined empirically based on a least squares fit that the delay that best fits the HADCRUT record since 1850 is 32.5 years (but 31 years and 34 years are almost as good). The climate sensitivity with that delay is 3.3 °C per doubling of CO2.

Independently there is the data point that nature has been absorbing around 50% of anthropogenic CO2 for at least a century. Plugging this into the Hofmann formula 2^((y-1790)/32.5) for anthropogenic CO2 (to which one adds 280 to get total CO2 in the year y), we have to solve

2^((y-1790+d)/32.5) = 2^((y-1790)/32.5).

The solution is d = 32.5 * lb(2) = 32.5 years.

I find that an incredible coincidence. (Actually I had been using 60% rather than 50% which gives d = 24 years, but since David Archer uses 50% in his online lectures and the match-up is so beautiful, to within 6 months even, I decided once again that it was better not to play the heretic and to go with someone else’s number. Perhaps the truth lies somewhere in between 24 years and 32.5 years.)

• Oops, forgot to include the factor of 1/.5 = 2 in what we have to solve:

2^((y-1790+d)/32.5) = 2 * 2^((y-1790)/32.5).

But since we know that d = 32.5 years doubles anthropogenic CO2 according to Hofmann, you don’t really need to do any algebra or even arithmetic to solve this, it’s staring you in the face.

Blows me away. While I knew about solving this equation earlier, I didn’t notice that 32.5 years also gave the optimal least squares fit until I tried it just now.
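For what it’s worth, the algebra can be checked mechanically (a sketch; `anthro_co2` is just the Hofmann expression quoted above, and the factor of 2 comes from the 50% airborne fraction):

```python
# Sketch checking that d = 32.5 yr solves the corrected equation
# 2**((y - 1790 + d)/32.5) == 2 * 2**((y - 1790)/32.5) for any y.
def anthro_co2(y, d=0.0):
    # Hofmann-style anthropogenic CO2 (ppmv above the 280 baseline)
    return 2 ** ((y - 1790 + d) / 32.5)

d = 32.5
for y in (1850, 1950, 2010):
    print(y, anthro_co2(y, d), 2 * anthro_co2(y))  # the two sides agree
```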

• Apropos of my

Perhaps the truth lies somewhere in between 24 years and 32.5 years,

if the “consensus number” of 3.00000 °C per doubling holds some particular fascination for you, you’ll be interested to know that this is achieved with a 27 year delay using the same methodology by which I got 32.5 years and 3.3 °C per doubling.

As a very big caveat, getting these numbers requires some attention to detail in eliminating noise from the HADCRUT record. The long-term biggie is the 65.5-year Atlantic Multidecadal Oscillation, which I think I’ve been pretty much able to eliminate. That’s the one that everyone jumps on when pointing out that temperature fluctuated wildly in the 19th century; it’s huge relative to the influence of CO2 back then. Only since 1995 has CO2 got the better of the AMO.

However the 9.5-11 year solar cycle correlates are pretty erratic and strong and therefore hard to eliminate. Increasing the moving-average window from 12 years (what I used to get a least-squares fit of 32.5 years) tends to drive down the best-fit delay, with 14 year averaging lowering it from 32.5 years to 19 years and 16 year averaging to 15 years.

So without further investigation you should definitely take all this with a grain of salt. Maybe 12-year averaging (the kind that gave d = 32.5 years, and which is pretty good for “killing” the solar cycles) will turn out to be the optimum for removing solar cycles, maybe not. 11-year averaging gives 36.5 years, showing how sensitive things are to how one removes solar cycles.

All very up in the air for now. I need a better way of removing solar cycles.
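The window-length sensitivity can be illustrated with a toy boxcar filter (a sketch: monthly sampling of a pure 11-year sinusoid is assumed; nothing here touches the HADCRUT data):

```python
import math

# Sketch: how well an N-year moving average (boxcar) suppresses an
# 11-year solar-cycle sinusoid of unit amplitude, sampled monthly.
def residual_amplitude(window_years, period_years=11.0, months=2400):
    series = [math.sin(2 * math.pi * m / (12 * period_years))
              for m in range(months)]
    w = int(12 * window_years)            # window length in months
    smoothed = [sum(series[m:m + w]) / w for m in range(months - w)]
    return max(abs(x) for x in smoothed)  # residual cycle amplitude

for win in (11, 12, 14, 16):
    print(win, round(residual_amplitude(win), 3))
# an 11-year window nulls the 11-year cycle exactly; a 12-year window
# leaves roughly 8%, and the residual grows again for wider windows
```

This is only about the filter, of course; with the erratic 9.5–11 year cycles in the real record no single window nulls everything, which is presumably why the best-fit delay moves around.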

• Vaughan, “Only since 1995 has CO2 got the better of the AMO.” Note, the AMO switched to the warm phase in 1995, by most reckonings.

• Roger Andrews

Vaughan: Thank you for your efforts to bring me up to speed. I’m also pleased to note that someone else recognizes the importance of the AMO as a climate driver.

Sorry about the delay in responding, but I’ve been working with the GISS Ocean-Atmosphere model to get a second opinion. GISS OA hindcasts temperatures back to 1850 and forecasts them through 2100 using two different IPCC SRES scenarios (A1B and B1). Using the Model E forcings through 2000 and projecting them through 2100 based on the IPCC atmospheric CO2 concentration estimates for these two scenarios gave the following empirical climate sensitivities for GISS OA:

A1B & B1 1900-2000 1.5C
A1B 2000-2100 1.4C
B1 2000-2100 1.4C

These numbers are slightly higher than the 1.2C number I got from GISS E but they are still well below the 3C “accepted” estimate. Unless I am misunderstanding something you said, I guess we must conclude that climate models don’t allow for the Hofmann formula. (Do you have a link to it, incidentally?)

Another interesting thing I found was that when I plotted ten-year means of GISS OA temperatures against the SRES watts/sq m CO2 forcings for the two scenarios I got an almost perfect straight-line fit (R=0.998) with a gradient of 0.55 degrees/watt. (Note that the 2C climate sensitivity this result implies doesn’t allow for any forcings other than CO2, so it will be too high). The question this raises is why we even bother to run expensive and complex models like GISS OA when we can duplicate its temperature predictions almost exactly simply by multiplying the CO2 forcings by 0.55 and adding a constant.

• Dr. Pratt
I would respectfully disagree. AMO does not drive the climate; AMO is the climate, and so are the PDO, ENSO, etc. Climate is driven by: Sun, CO2, UHI, volcanoes, tectonics, etc.

• Roger Andrews

Thank you. I stand corrected.

• sharper00

Roger,

Do you have a link for the Model E hindcasts?

• Roger Andrews

sharperoo:

GISS Model E is at:
http://data.giss.nasa.gov/modelE/transient/climsim.html

Also of use is the table of annual forcings at:
http://data.giss.nasa.gov/modelforce/

This table gives slightly higher total forcings than those used in Model E, so I prorated the annual values to match the lower Model E total forcings. Then I constructed an XY plot of temperature vs. forcings by year (with volcanic eruptions removed), fitted a line and got a strong linear relationship (R=0.98) with a gradient of 0.32C per watt/sq m and no sign of any change with time. I wish I could attach it but there doesn’t seem to be any way of doing that.

• steven mosher

If you search on CA, some time ago I gave instructions on how to access ModelE results. Steve did a post on it. Go search.

• Pure guesswork! Annan and Hargreaves attempted to wrap it up scientifically by assuming these guesses, being from prominent researchers, qualified as expert opinions, as opposed to just guesses, so that Bayesian stats were justified; then they applied a slightly skewed distribution model over it to give bounds, and hence rule out as unlikely the higher and lower numbers. If they’d included more skeptical, empirical estimates, or even just used common sense to assume that any number over 1 degree must be weighted as less likely since it relies on profoundly greater guesswork than those close to 1 degree, then the skew (imo) would have centred on a 1 degree rise. Again it’s about what you assume as being plausible. A new estimate is long overdue, given the fact that the models are all overestimating things so far.

Did you run that analysis yourself or do you have a source for it?

32. There are some inconsistencies in the standard “modeling” of the zero-feedback situation. The equation used basically models the Earth as a perfect black body suspended in space with no radiatively-interacting material surrounding the Earth. Under these conditions, the radiative-equilibrium temperature of the Earth, with the solar constant taken to be about 1370 W/m2, is about 278.8 K. If an emissivity of about 0.87 is assigned to that Earth, the radiative-equilibrium temperature increases to about 288.7 K. Each of these base temperatures gives a different value for the zero-feedback sensitivity: 0.75 C and 0.78 C, respectively.

The classic calculation, as in this post, uses a base-temperature value of 255 K. That value is obtained by using a rough accounting for the reflectivity of an Earth which is surrounded by a radiatively-interacting material. That calculation, while introducing a material surrounding the model earth, omits accounting for the longwave out-going energy contribution to the energy balance; the greenhouse effect, let’s say. It’s half an energy balance. If the latter is accounted for in the energy balance, the Earth temperature is then about 288 K. The no-feedback sensitivity is then about 0.78.

These inconsistencies are minor relative to the other real-world effects noted in the post and comments. In particular, these zero-D box-like models introduce a false concept relative to the physical material that is to be assigned to the radiative surface, which is an important part of the radiative-equilibrium concept. It is not the surface of the Earth. It is not in any way represented by the temperature that is measured at about 2 m above the Earth’s surface. And, very importantly, the Earth’s climate systems have never been in radiative equilibrium and never will be.

I have some additional information here, and there are some basic references listed here.

I think all my calculations are correct. Verification will be appreciated.
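A quick numerical check of these figures (a sketch; σ and the solar constant are standard values, while the 0.87 emissivity and 3.7 W/m2 forcing are the figures assumed above):

```python
# Sketch reproducing the zero-feedback numbers in the comment above.
sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1370.0        # solar constant, W/m^2
dF = 3.7          # canonical 2xCO2 forcing, W/m^2

def t_eq(absorbed, emissivity=1.0):
    """Radiative-equilibrium temperature for a given absorbed flux."""
    return (absorbed / (emissivity * sigma)) ** 0.25

def no_feedback_dt(T, emissivity=1.0):
    """dT = dF / (4*eps*sigma*T^3), from differentiating F = eps*sigma*T^4."""
    return dF / (4 * emissivity * sigma * T ** 3)

T_bb = t_eq(S / 4)          # perfect black body: ~278.8 K
T_em = t_eq(S / 4, 0.87)    # emissivity 0.87:    ~288.7 K
print(round(T_bb, 1), round(no_feedback_dt(T_bb), 2))        # ~278.8, ~0.75
print(round(T_em, 1), round(no_feedback_dt(T_em, 0.87), 2))  # ~288.7, ~0.78
print(255.0, round(no_feedback_dt(255.0), 2))                # classic base: ~0.98
```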

• All the links in the second ‘here’ above were broken. I’ve fixed them.

• Dan thanks for these links.

• To expand a bit.
How can it be called no-feedback when the base temperature used in the calculation is for a planet surrounded by material that reflects part of the incoming SW radiative energy, and in the case of Earth that material is phases of water?

33. DeWitt Payne

According to MODTRAN, the change in surface flux for clear sky conditions for a tropical atmosphere is 1.57 W/m2 when going from 280 to 560 ppmv CO2. One side of the CO2 band is buried in the water vapor spectrum, but the other side isn’t. Unless cloud cover in the tropics averages 88%, I don’t see how you get Woods surface flux change of only 0.195 W/m2.

34. Jiri Moudry

“The CO2 no feedback sensitivity is an idealized concept; we cannot observe it or conduct such an experiment in the atmosphere.” Can we conduct such an experiment on the surface? Like two adjacent greenhouses, one filled with CO2, one with the air; switch them after a year or so. Surely this could be done for a fraction of a billion?

• steven mosher

That would tell you very little about how C02 functions in the atmosphere.

• Jiri Moudry

It may tell me very little, but it could be confirmed experimentally. It is an extremely simplified configuration which should make it easy to verify the predicted amplitude of CO2 warming. Could you please supply me with a list of experiments which verify the greenhouse effect using a real solar illumination? Do models take into account the CO2 backscattering of solar IR energy (about 0.5% of total, if I remember correctly)?

35. Alexander Harvey

A Layman’s Guide to Soden and Held (2005) and its reference to Held and Soden (2000).

I find that there are a number of things that need to be understood. I can but give my understanding and where I err, I stand to be corrected.

In S&H (2005):

R is the net radiative flux at TOA using the down minus up convention
R = SW – LW.

This convention has pros and cons. A pro being that feedbacks have the correct sign, a con being that viewed as conductances they have the wrong sign.

In H&S (2000) R = LW and S = SW

The λ (lambda) in S&H (2005) have the units of thermal conductance per unit area, and are referred to as feedbacks. This is not the same as the f factors, which are unitless. In H&S (2000) the feedbacks are the unitless ones.

One λeff (lambda effective) is described as the effective sensitivity; this is not the sensitivity in the climate sensitivity sense, as it has the wrong sign and units and its behaviour is reciprocal.

If you cannot quite see how λ0 (lambda Planck) is calculated, nor can I.

They show λT (lambda T) both in the text and in a diagram of its kernel (Fig 2 top), but nothing for the lapse rate part; this may be understandable as it might have used up a lot of space, but perhaps space should have been made. So the paper does not clearly show the method of calculating the Planck value, which would have been really handy just now.

If you find the “w”, as in lambda w, a bit of a puzzle, so do I. It seems to imply atmospheric wetness, but it is not clear what the wetness scale is. This gets more puzzling before it gets better, if it gets better.

You might think that the water vapour feedback kernel (Fig 2) would relate radiative flux to “wetness”, but it doesn’t. This is not the kernel giving the partial derivative of R by wetness (wk) mentioned in the text. Sadly there is probably no meaningful way of varying wk without varying T. But the kernel shown is not the kernel of R with varying T and constant RH either. It may be the difference between the kernels with RH constant and Vapour Pressure constant as T varies. Whatever it is, it is not clear how it is used without knowing the “wetness” scale (a guess would be w=0 means constant pressure and w=1 means constant RH, but I don’t know this and it does not seem to be the case, “see below”). How it is applied in practice is really not at all clear to me. A problem is that the constant-RH, varying-T method does not change the amount of water vapour in each cell by the same amount, so if w is water vapour amount (they just define it as water vapour) I cannot see how their kernel is applied.

In the Results section there is the following:

“Interestingly, the true feedback is consistently weaker than the constant relative humidity value, implying a small but robust reduction in relative humidity in all models on average, as weighted by the water vapor kernel.”

This may give some insight as to the nature of wk. It may indicate that it is the water content, but I cannot see how that could be used directly in the calculation of the water feedback given the units of the kernel. To be honest I am just guessing.

It would seem quite incorrect to view the lambda values for T (0 and L), C, w, and alpha as IPCC model properties; they are rather attributed properties. The paper quite clearly illustrates that kernels are derived separately using a different radiative model and used in an attribution process. This is necessary because the IPCC models do not make such kernels available to the authors (they are not a necessary, perhaps not even meaningful, model component). The effective sensitivity does seem to be an IPCC model value; well, almost, as a constant value for the tropopause forcing is assumed for all models. It may be important to bear this in mind, as in order to calculate the residue (assumed cloud feedback value) it is necessary to subtract values “attributed” to each model from a value “derived” from that model. Which is a tiny bit apples and oranges.

In summary, this paper is (or was at its time of publication) an authority on the method of attributing feedback values to models. But I see it as that, an attribution scheme. It does, however, give some concrete meaning to the notion of feedback values through demonstrating a practical way of calculating their attribution.

You may have spotted that Fig 3 (left) only has 13 dots, not the promised 14 (one for each model); the 14th would be just outside the bottom right-hand corner.

Alex

36. Frank

Figure 4 of Collins et al (and Chris Colose) shows us that the radiative forcing for 2X CO2 is dramatically different at the surface (ca. 1 W/m2) compared with a 200 mb “tropopause” (ca. 4 W/m2). In addition, there must be an altitude at which radiative forcing for 2X CO2 is a maximum, because we know that the forcing is negative in the stratosphere (as well as greater in magnitude). Consequently, there appear to be many possible answers for the “no-feedbacks climate sensitivity” that can be calculated from radiative forcing. From a skeptical point of view, the IPCC could have chosen the tropopause because that location maximizes radiative forcing and the associated no-feedbacks climate sensitivity.

If climate scientists want to determine a no-feedbacks climate sensitivity from changes in radiative forcing, the result should be as relevant as possible to surface temperature change. Since surface (and lower troposphere) temperature is determined by both radiation and convection, it doesn’t make sense to add 1 W/m2 of downward radiative forcing at the surface to the existing 168 W/m2 of downward SWR and 324 W/m2 of downward LWR and calculate a new equilibrium surface temperature. Higher in the atmosphere, convection is unimportant, radiative equilibrium develops reasonably quickly, and such calculations make sense. If radiative forcing were calculated at the altitude of the top end of the constant lapse rate (g/Cp), we can expect the no-feedbacks temperature rise to be transmitted to the surface by that “constant” lapse rate. (We account for changes in Cp with lapse-rate feedback.)

Unfortunately, the IPCC has chosen to define radiative forcing as the change in flux at the tropopause, a location where additional complications may obscure relevance to surface temperature change. These complications include the effects at the tropopause of stratospheric cooling (due to both changes in CO2 and ozone) and changes in the varying lapse rate between the altitudes where convection ends and the lapse rate is zero. Myhre’s results show that the radiative forcing for 2X CO2 already varies by about 10% depending how the changing altitude of the “zero-lapse-rate” tropopause is handled.

Does anyone know how radiative forcing actually varies with altitude? Has someone tried to identify an altitude where the no-feedbacks temperature change has the greatest relevance to the surface?

• Frank, see the Pielke Sr references in my original post; the variation with altitude depends on the background state of the atmosphere (primarily as it varies with latitude/season, as per the calculations Pielke Sr. provides)

• Jim

Is it possible to put together a global climate sensitivity from Dr. Pielke’s data? I am guessing not since he didn’t do it, but thought I would ask anyway.

• not really, but these calculations illustrate aspects of the global variability

• Frank

I did read Pielke, but must not have retained something important. It is clear that radiative forcing depends on the altitude and the state of the atmosphere through which the radiation is moving.

Here is a thought experiment: Suppose you had a database containing the average monthly radiative forcing calculated at all altitudes in a high resolution set of 3D grid cells covering the whole atmosphere. The forcing was calculated from the observed monthly average state of the atmosphere in each grid cell. If you wanted to obtain the radiative forcing and no-feedbacks climate sensitivity most relevant to the surface, which cells would you pick and why? (I gather that you are considering how to approach this problem working directly at the surface, as suggested by Ramanathan, so you’d need to assume that approach wasn’t practical.)

[For each month, I’d choose the set of grid cells immediately above the top of the fixed lapse rate, expecting this to be the lowest altitude where convection doesn’t disturb radiative equilibrium. In polar regions in the winter, some of these cells presumably could be at the surface. With a fixed lapse rate between each selected cell and the surface, the surface temperature rise should be the same as in each selected cell.]

37. Now that I’ve read Chris Colose’s account above I take back what I said about my suggestions being heretical, along with the insufferable philosophy. It looks like I’m merely covering already understood ground. Although I put things slightly differently, unless I’ve misunderstood Chris it’s essentially the same story. The bathtub analogy that goes with it might be new though, I’d be inclined to stick with that to make things crystal clear.

Judy and Jim, you could do worse than to follow up on Chris’s pointers to the literature.

• Vaughan, I have read all these papers (not Pierrehumbert’s in-press book tho). Chris does not address the main point I am making. The equation ΔTs = λRF does not tell us how surface temperature will respond to TOA or tropopause radiative forcing. If surface temperature is what we care about, and the surface forcing in the tropics is very small, and the tropical ocean surface temperature is more dominated by evaporation than longwave flux, well, isn’t this more relevant to the problem at hand than the tropospheric radiation balance?

• isn’t this more relevant to the problem at hand than the tropospheric radiation balance?

No because you’re not asking about magnitudes but about responses. Evaporation indeed cools the planet more than does radiation flux, but those are magnitudes. The reason this is not relevant to the problem at hand is that the latter is changing faster than the former and responses have to do with the first derivative, not the magnitude. We’re talking climate change here, not climate.

• ok, but if we are referring to surface climate (e.g. temperature), if the surface temperature in the tropics isn’t sensitive to a change in IR radiative forcing, the surface temperature isn’t going to change much, no matter what is going on at the tropopause. (In the sentence you responded to, I should have said radiation balance at the tropopause, not tropospheric radiation balance.)

• if the surface temperature in the tropics isn’t sensitive to a change in IR radiative forcing,

So is your concern that ΔTs = λRF is unhelpful because λ is smaller (or zero?) in the tropics? Is the latter generally agreed on? And (whether or not it is) is there a reason not to work with a globally averaged λ, or some other way of integrating over the variations in λ?

• my concern is that there is no physical way to link a change in tropopause RF to a change in surface temperature. the assumptions that go into that expression are mind boggling (this will be the subject of a future post).

• my concern is that there is no physical way to link a change in tropopause RF to a change in surface temperature. the assumptions that go into that expression are mind boggling

Your argument seems to be that the tropics are already so well blanketed (by the greenhouse action of water vapor?) that additional blanketing can’t make a difference. Isn’t that just a variant of Ångström’s objection to Arrhenius?

But even if the objection held water (pun unintended), if global warming is happening at 0.1-0.2 °C per decade, how hard could it be for the temperate zones to warm up the tropics in a decade?

This does not violate the second law of thermodynamics because the Hadley cells were in equilibrium before global warming, so what matters is not whether the temperate zones are colder than the tropics but whether the pre-existing equilibrium is shifted by additional warming of the temperate zones.

• Further to this question of whether tropical CO2 can have any influence, at the tropopause or anywhere else in the tropical troposphere, on account of the greater water vapor in the tropics, there is also the point that CO2 and water vapor block separate portions of the spectrum. So even if one accepts that the portion of the spectrum allocated (by the FCC?) to H2O is blocked to further increases, the photons blocked by CO2 pass straight through the water vapor. Absent CO2, tons of heat can pour out of the atmosphere through the part outside water vapor’s jurisdiction. Adding CO2 to that part should block a lot of heat otherwise lost to space, just as in other parts of the globe.

But I’m just theorizing about that. What do the actual measurements show? Or can my theorizing be overridden by other theorizing?

• DeWitt Payne

I don’t understand the concentration on forcing at the surface while apparently ignoring the total forcing. The total radiative imbalance of 5.332 W/m2 for a doubling of CO2 for a tropical atmosphere is a large imbalance. Does it really matter whether the atmosphere warms the surface or the surface warms the atmosphere?

There is a real question about the balance of convective and radiative heat transfer from the surface to the atmosphere, though. Is the current balance of ~100 W/m2 convective and 66 radiative at 15 C average temperature the only allowed value or could there be a range of surface temperatures and corresponding different ratios of convective to radiative heat transfer? What about albedo? If I remember correctly, models spin up to stable values at different average surface temperatures over a range of ~4 C.

• I’m trying to focus on the actual surface temperature, which seems to be the decision relevant variable. As you point out, there are many different combinations of heat transfer processes and states of the atmosphere and surface that can provide that same value of tropopause radiative fluxes.

• DeWitt Payne

But when the troposphere warms, the surface forcing goes up too. The forcing at the surface from an instantaneous doubling of CO2 is only a starting point. It can’t possibly be the sole determinant of the increase in equilibrium surface temperature. An increase in LW flux from the atmosphere means SST has to go up as well because the net radiative heat loss decreases. You’re not going to get 100% negative evaporative feedback to keep the SST constant.

• Agreed, the amplification of the forcing by tropospheric radiative heating is the step 2 correction in Ramanathan’s method.

• Is the current balance of ~100 W/m2 convective

I wonder if that’s an underestimate. Trenberth, Fasullo and Kiehl calculate 80 on the basis of observed precipitation, but this only considers cycles that touch the ground. I can easily imagine up to another 100 W/m2 involving cycles that don’t. Falling raindrops are way more susceptible to evaporation than after they start collecting in puddles on the ground that are thicker than raindrops. It would be nice to have some way of observing these non-grounding cycles. Even observing one rainstorm at altitude might help quantify this more concretely.

• Jim D

I had replied on your bathtub analogy back at another branch (search for faucet), as I have seen that before and certainly it is a good one. On Colose’s post, a minor disagreement I have is that he says the only way to get back the energy lost by doubling CO2 is to warm the whole atmosphere, when, while that is one way, it is not the only one. Conceivably you could just warm the regions where CO2 is cooling most to space, which would limit it to the upper troposphere in the tropics. There is definitely arbitrariness involved to get at a surface effect. I have posted some alternatives dotted around this set of threads.

• Ah, thanks, I got lost in the blog and didn’t see your reply, just replied to it now. The novelty I was claiming was not for one drain but two, with one blocked. Two explains your paradox, one doesn’t.

Conceivably you could just warm the regions where CO2 is cooling most to space, which would limit it to the upper troposphere in the tropics.

That would be nice if you could figure out how to engineer it. Until you do nature makes that decision, and decides to warm the lower regions more. There is currently negligible global warming at the tropopause (as a function of latitude), and global cooling in the stratosphere on account of antigreenhouse gases (primarily ozone).

One way to engineer it would be to increase evaporation in a way that condenses the vapor as close to the tropopause as possible. With enough evaporation you could warm the upper troposphere significantly enough to solve global warming, and then there’d only be ocean acidification to worry about.

• Jim D

We can’t change the vapor, because that is outside the rules of this no-feedback exercise. Warming the whole troposphere is a valid choice, as it implies surface warming and convection to maintain the lapse rate, so it is the most natural in some way, though still an idealization. That is why I suggested somewhere on these threads that the most objective choice is to uniformly warm the whole profile (easily done in the online MODTRAN). But this is not what the stratosphere does in reality, so that is not perfect either.

• Warming the whole troposphere is a valid choice, as it implies surface warming and convection to maintain the lapse rate,

This is not perfect either, since it’s not what the troposphere does in reality. Radiosonde data over a few decades show it pivoting about the tropopause, or somewhere near there: 1 km above the surface warms much more than the tropopause, while air closer to the surface is convectively coupled to the relatively stable ground.

• (“Relatively stable” at a decadal scale, that is. Diurnally it’s the other way round: the atmosphere is remarkably stable 2 km up between day and night while the ground fluctuates mightily. Greenhouse gases can’t trap enough radiation to have much impact on the atmosphere in 24 hours; global warming is a much slower process.)

• Jim D

I’d say that after we eliminated the water cycle anything goes regarding lapse rates, so let’s just keep that fixed, and it is only about 1 degree we are considering.

• I’d say that after we eliminated the water cycle anything goes regarding lapse rates

Again you don’t have much say in this as nature dictates lapse rates as a function of pressure, temperature, and moisture. With dry air the lapse rate increases slightly with temperature and decreases slightly with altitude. With moist air it’s the opposite, see Emagram.

Whether your idealization of a constant lapse rate is reasonable depends on what argument you’re basing it on. If there are grounds for tearing out hair I’d be interested in your reasoning, which I’d thought I was following but now realize I’m not; sorry about jumping to conclusions there.
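The pressure/temperature/moisture dependence of the lapse rate mentioned above can be made concrete with the textbook dry and saturated adiabatic formulas. A sketch, using standard constant values and a common Clausius-Clapeyron fit (Bolton 1980) for the saturation vapor pressure; the specific profile points evaluated at the end are just illustrative choices:

```python
import math

g = 9.81        # gravitational acceleration, m/s^2
cp = 1004.0     # specific heat of dry air at constant pressure, J/(kg K)
Rd = 287.0      # gas constant for dry air, J/(kg K)
Rv = 461.0      # gas constant for water vapor, J/(kg K)
Lv = 2.5e6      # latent heat of vaporization, J/kg
eps = Rd / Rv   # ~0.622

def sat_mixing_ratio(T, p):
    """Saturation mixing ratio (kg/kg); T in kelvin, p in Pa.
    Uses the Bolton (1980) fit for saturation vapor pressure."""
    es = 611.2 * math.exp(17.67 * (T - 273.15) / (T - 29.65))
    return eps * es / (p - es)

def dry_lapse_rate():
    """Dry adiabatic lapse rate, K/m (~9.8 K/km, essentially constant)."""
    return g / cp

def moist_lapse_rate(T, p):
    """Saturated (pseudo-)adiabatic lapse rate, K/m: smaller when warm
    and moist, approaching the dry rate when cold and dry."""
    r = sat_mixing_ratio(T, p)
    num = 1.0 + Lv * r / (Rd * T)
    den = cp + Lv**2 * r * eps / (Rd * T**2)
    return g * num / den

print(moist_lapse_rate(300.0, 1000e2) * 1000)  # ~3.7 K/km: warm, moist surface air
print(moist_lapse_rate(250.0, 300e2) * 1000)   # ~7 K/km: cold upper troposphere
print(dry_lapse_rate() * 1000)                 # ~9.8 K/km
```

The spread between these values is exactly why "keep the lapse rate fixed" is a nontrivial idealization: the stability limit itself moves with temperature and moisture.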

• Jim D

This is far from a natural system. We doubled CO2 instantly, and now want to see what temperature profile results to balance the flux again, but without any vapor changes, but maybe allowing the surface temperature to change. I don’t think this has a unique solution unless you give it a lapse rate constraint too. Maybe there is a more clever way to do that than assume it is fixed, but I don’t know it.

• We doubled CO2 instantly, and now want to see what temperature profile results to balance the flux again, but without any vapor changes, but maybe allowing the surface temperature to change.

Ok, well give it three decades or so to catch up. As per my recent posts there seems to be some inertia in the global climate. I don’t think it’s realistic for nature to respond instantly to a doubling of CO2. That’s a lot of CO2 to absorb in one session.

• Jim D

It is not so much the time scale of the response, just that it is not constrained or balanced by anything since we don’t have convection in this radiation model. Specifying a lapse rate approximates a convection constraint.

• I agree on all that you write.

I would just like to add that after fixing the lapse rate one also has to decide whether the temperature at the tropopause is increased by the same amount as the ground temperature, or the thickness of the troposphere is increased to allow the minimum temperature at the tropopause to remain constant.

• This seems to be a problem in state estimation/inverse modeling. Given RF, what is delta T surface and the atmospheric T profile? There is not a unique solution. For example, if there are clouds in the atmosphere, they could be doing most of the warming. The other issue in a control system is that you have to define the system as to what is gain and what is feedback; there are different ways you can do this.

• Jim D

These complications are why I just suggest adding the same temperature at all levels to find the balance. It is equivalent to maintaining the previous lapse rate in the troposphere and stratosphere.

• Jim D,
What about the minimum temperature at tropopause?

My feeling is that adding 1K to that temperature would lead to a seriously erroneous outgoing LW radiation spectrum at TOA. The radiation from the strongly absorbed wavelengths would increase too much and consequently the warming of earth surface would be too low.

38. AnyColourYouLike

I’m not very up on this stuff, but if you’ll indulge my dumbness for a moment, the consensus here seems to be saying that something that can never be measured directly is used as a baseline for the scary model forecasts (questionable feedbacks added) that the IPCC has 95% confidence in? Is this correct?

• AnyColourYouLike writes “Is this correct?” In a word, yes.

• I should have added that one stage back is the same thing. Change of radiative forcing cannot be measured. This is used as an input to an estimation of change of global surface temperature, without feedbacks, which can never be measured. This in turn is used as an input to estimating the effect of feedbacks, which can never be measured. Sarc on. Sort of gives you great confidence in the results. Sarc off.

• No, it isn’t a baseline. The model calculations make no use of this. The no-feedback sensitivity is just a number that’s easily calculated and can be used as a reality check.

• AnyColourYouLike, you have had two contradictory responses. I am afraid you will have to do your own research to find out that I am correct.

• AnyColourYouLike

Thanks Jim (and Nick), I didn’t think it was worth replying just to tell you both I was now even more confused. ;-)

I don’t know enough to know who’s right, (I’m trying to read up about ocean warming at the moment). On the face of it though, Nick’s “simple calculation” doesn’t seem to square too well with Judith pulling her hair out!

A calculation isn’t a direct measurement, though obviously some quantities can only be defined this way. Whether 1C is a trivial stat that happens to be a useful quality check, as Nick asserts, with no direct bearing on models, is beyond me at this point. The number of replies on this thread would seem to indicate that there is a lot of confusion/disagreement around this, which makes me feel a little more comforted in my ignorance. :-)

39. Jianhua Lu

Dr. Curry,

In the Figure 1 of http://www.springerlink.com/content/6677gr5lx8421105/fulltext.pdf ,

the left panel shows the 3D structure of the 2xCO2 forcing, and the right shows the atmosphere-surface temperature response to the forcing if the long-wave radiation change due to the temperature change is the only way to balance the forcing, i.e., with no feedbacks. In fact the no-feedback sensitivity is close to the IPCC value, though our calculation is for a very idealized coupled climate system; the simulated temperature profiles are nevertheless similar to the profile of the observed Earth climate.

• Jianhua, thanks very much for providing the link to this paper, I am reading it now, looks very interesting and relevant.

40. Jianhua Lu

Sorry, it is Figure 2 of the linked paper.

• Your main conclusions regarding the latitudinal/vertical distribution of radiative forcing are generally consistent with the values cited by Pielke Sr. One mechanism that you missed is a hyper water vapor feedback in the polar regions, where the relative humidity with respect to ice is the main control (which varies with temperature); see my paper on this. This effect is not included in Pierrehumbert’s analysis either. Also, the Fu and Liou (1992) radiative transfer model does not correctly treat the water vapor rotation band (the so-called dirty window), which is critical for getting the IR forcing correct in polar regions and also near the tropopause (more recent Fu radiation codes have fixed this; not sure which version you actually used). These two aspects (dirty window, ice relative humidity) further contribute to the polar amplification.

So I like your approach and generally agree with your conclusions, but these neglected factors are actually quite important in the polar amplification, beyond what you found.

• Jianhua Lu

Judith,
Thanks for your reply and the comment about the water vapor feedback in high latitudes. I’ve downloaded and am reading your paper. The model in our paper was quite idealized, but the approach (CFRAM), which takes the vertical structures of climate forcing and climate feedbacks into consideration, is general to the real climate system (global or regional) and applicable to full GCMs.

Possibly off the topic, I like very much your 1983 paper on the formation of continental polar air, which I knew from Kerry Emanuel’s essay ” Back to Norway”.

• Jianhua, CFRAM is quite interesting, I will be thinking about this some more.

• I guess that answered my question too :)

41. Raving

The direct CO2 radiative forcing is the change in infrared radiative fluxes for a doubling of CO2 (typically from 287 to 574 ppm), without any feedback processes

The term ‘forcing’ is DELIBERATELY FORCED.

No thank you.

42. manacker

Judith

Without going into the calculation details of the 2xCO2 radiative forcing itself, you mentioned a different climate response in Arctic winter (few water droplet clouds) than over the tropics year-round (lots of water droplet clouds).

This makes good sense.

The tropics represent around 40% of our planet’s surface while the Arctic plus Antarctic are only around 8%. In addition, “winter” is only half of the year at most.

Does this mean that the response over the tropics is five to ten times as important as that over the Arctic plus Antarctic?

(I realize this may be an oversimplification plus I’m ignoring what happens over the temperate zones, but I’m just interested in an “order-of-magnitude” answer.)
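For what it’s worth, those percentages can be checked with the spherical-zone formula: the area between two latitudes is proportional to the difference in sin(latitude). A quick sketch, where taking the tropics as ±23.5° and the polar caps as poleward of 66.5° are just conventional choices for where to draw the lines:

```python
import math

def band_fraction(lat_lo, lat_hi):
    """Fraction of a sphere's surface area between two latitudes (degrees).
    Zone area on a sphere is proportional to the difference in sin(latitude)."""
    return (math.sin(math.radians(lat_hi)) - math.sin(math.radians(lat_lo))) / 2.0

tropics = band_fraction(-23.5, 23.5)                           # tropical belt
polar = band_fraction(66.5, 90.0) + band_fraction(-90.0, -66.5)  # both polar caps

print(round(tropics * 100, 1))  # ~39.9% of the surface
print(round(polar * 100, 1))    # ~8.3%
```

So the "around 40%" and "around 8%" figures hold up, before any weighting by insolation angle or season.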

Thanks,

Max

• In an areal-averaged sense, the tropics are more important. But the polar regions can trigger ice albedo feedbacks, amplifying their importance. Also, most of the projected surface warming is in the high latitudes of the northern hemisphere, along with the direst impacts (melting of Arctic sea ice and Greenland, release of methane from permafrost, etc.)

• manacker

Judith

Thanks for your response. The “ice albedo” effect is, of course a major factor in the Arctic and Antarctic, as you write.

Ignoring the possible increase of “methane from permafrost” with warming for now, it appears that NSIDC data tell us a) that northern hemisphere snow cover has not shown any statistical change since the 1980s, b) that Arctic sea ice has shrunk since measurements started in 1979, and c) that Antarctic sea ice has grown gradually over this period. The Arctic and Antarctic sea ice are exposed to sunlight during half of the year, and the statistical change in surface area at the lowest point (end of summer) since the record started is a finite number (loss of around 2.1 million square km in the Arctic and gain of around 0.8 million square km in the Antarctic).

There has been no reduction in the surface area of grounded ice in the Greenland and Antarctic Ice Sheets, although the mass appears to have declined recently, at least in Greenland, if we can believe the GRACE results, which show more mass loss than earlier satellite altimetry measurements by Johannessen/Zwally (GRL) and Davis/Wingham (Antarctica), which showed net growth over the period 1993-2003.

So we are left with a relatively small delta in the ice albedo effect over the northern summer months, with an even smaller delta in the other direction during the southern summer months (when albedo plays a role). And, even in summer, the angle of the sun’s rays is far from directly overhead (the sun’s rays deliver around twice as many W/m^2 in the tropics as in the Arctic due to the angle of the sun).

It is estimated that half the Earth is covered by clouds on any given day (this represents 125 million square km on the sun-exposed side). Cumulus and stratus clouds apparently have roughly the same albedo effect as snow and ice. Yet these cover a far greater surface area than sea ice, and it is unclear how they will change with increased surface temperature. Ramanathan and Inamdar estimated that clouds reflect 48 W/m^2 on average (or 14% of the incoming solar energy of 342 W/m^2). Studies over the tropics (Spencer et al.) point to an increase in outgoing radiation (LW + SW) with increased tropospheric temperature. Is this due to an increase in reflected SW radiation from more “water droplet” clouds? If so, is it not reasonable to assume that this effect would occur over the entire year in the tropics, with the angle of the sun’s rays far greater than that in the arctic summer?

Soden and Held estimated the total future ice albedo feedback at 0.07 to 0.34 W/m^2K. If the low-altitude cloud cover increased by only 1% per degree of warming, this would represent -0.48 W/m^2K, more than canceling out the ice-albedo effect (for comparison, Spencer et al. measured –6.1 W/m^2K, or over ten times this impact).

Since “cloud feedbacks remain the largest source of uncertainty” (as IPCC has put it), and since the magnitude of this uncertainty is so great that it changes everything, shouldn’t this great source of uncertainty be cleared up before any long-term temperature projections are made?

And then there is the suggestion by Spencer that clouds act as a natural forcing factor, themselves, driven by a yet unclear mechanism, which may be tied to the PDO or other natural variations in ocean currents (which, themselves, could be related to solar activity by some as yet unexplained mechanism). From what I have been able to read (and I think I have read that you agree), today’s level of scientific understanding of natural climate variability (or forcing) is still very low (as IPCC has also conceded).

I guess all I am saying is that until “climate science” has a good handle on the uncertainties surrounding the climate impact of clouds as well as possibly related other natural variability or forcing, it does not know enough to make any meaningful projections for the future, and should, therefore, refrain from doing so.

My opinion, of course, but I would appreciate your comment.

Max

• Agreed that clouds are the biggest issue, and that overall uncertainties are large. Scenario simulations of future climate states are useful for testing ideas about how the climate system works. These scenario simulations have some value for decision making, if placed in the appropriate context of the uncertainties.

• Mike Jonas

manacker has explained very clearly one possible specific instance (clouds) of what I argued for: “we are arguing about something [CO2] that appears to be of rather low importance. Wouldn’t we do better to look at what causes the surface temperature changes in the first place?“. (http://judithcurry.com/2010/12/11/co2-no-feedback-sensitivity/#comment-21514)
I happen to think that clouds are a very good place to look.

Judith – you say “clouds are the biggest issue, and that overall uncertainties are large“. You don’t make it clear what uncertainties you are talking about, but I suspect that you are referring to uncertainties the same way that the IPCC does, namely the uncertainties in the “climate model” view. In other words, uncertainties in the CO2-driven view.

As always, the emphasis is on CO2 or the CO2-driven view.

We desperately need a paradigm shift in the way climate is discussed. Until we start addressing clouds explicitly and in their own right, we are never going to break away from the all-pervasive myopic concentration on CO2 and the perception in most people’s minds that CO2 has an overriding effect on climate. [NB. The real-world observation I cited shows that it does not].

As long as every post on climate is about, or refers to, CO2, it is virtually impossible to break people away from the notion that CO2 is of overriding importance. After all, if CO2 dominates the discussion it is hard to conceive that it does not dominate the climate. What we need is a post or series of posts about clouds (say): what we know about them and what we still don’t know. How they may or may not be playing a role at each of the timescales. How we could test the various hypotheses, and the tests that have already been conducted or are still in progress. There is a vast body of information already available, and putting it together here even at a fairly basic level could help to swing the climate debate in a meaningful way.

So here is a very specific request: please please can you do a post or series of posts on clouds. Just clouds, no mention of CO2 (though other natural factors would inevitably get a mention). Might I also suggest that you could do what you did for your reply to the House of Reps: throw it open to your readers to put forward suggestions, papers, etc, and use those as an aid to synthesis.
One of the interesting things that could come from this is an idea of just how much of the climate might be driven by clouds, or at least have clouds involved. I think the result might surprise you, but we’ll never know unless you take the plunge …..

• Brian H

Interesting that a decade or two ago dino fossils were discovered along the shores of Antarctica. Big eyes for seeing in the winter, and specialized adaptations, but they persisted in that extreme day/night environment for millions of years. Then the ice came …

43. Judith is quite right to suggest that these figures “should be disputed”.
On the previous post she wrote
trying to nail down what we actually understand; currently tearing my hair out since I can’t even make sense of how people are coming up with 1C and how to interpret it. So everyone seems to believe this number, i’m trying to figure out why people have confidence in this (I’m having less and less).
Well, Steve Mc has been asking on his blog for years for an up-to-date, detailed paper explaining where these numbers come from, with no satisfactory response.

Judith also hints at the lack of substance in IPCC AR4 on this issue – given its crucial importance, you might think that AR4 would discuss it in detail, but no. There is just the brief statement (chapter 2, page 140) that “The simple formulae for RF of the LLGHG quoted in Ramaswamy et al. (2001) are still valid”, with no justification. Ramaswamy et al. (2001) is of course the IPCC TAR. What makes it even worse is that there are three different formulas for CO2 in the TAR! So they can’t all be valid. Which is it? Presumably the log one, but this isn’t actually stated.
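For reference, the log formula in question is the simplified expression from Myhre et al. (1998) that the TAR tabulates, ΔF = 5.35 ln(C/C0) W/m2. A quick check shows where the canonical 3.7 comes from; the 280 ppm baseline below is just the conventional pre-industrial choice:

```python
import math

def co2_forcing(C, C0=280.0, alpha=5.35):
    """IPCC TAR simplified expression (Myhre et al. 1998) for CO2
    radiative forcing at the tropopause, W/m^2. C, C0 in ppm."""
    return alpha * math.log(C / C0)

print(round(co2_forcing(560.0), 2))         # doubling from 280: ~3.71 W/m^2
print(round(co2_forcing(574.0, 287.0), 2))  # any doubling gives the same: ~3.71
```

Note that because the expression depends only on the ratio C/C0, every doubling yields the same forcing regardless of the starting concentration.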
The IPCC AR4 then cites Collins et al 2006. But Collins et al 2006 suggest a surface LW forcing for doubling CO2 of around 1 – 1.5 W/m^2 (Table 8, fig 4, as pointed out by Frank) – nowhere near 3.7. In fact Collins et al also have a SW surface forcing, which is around minus 1, making the net surface forcing due to doubling of CO2 approximately zero!
Sorry for all the exclamation marks, but the emperor’s lack of clothes on this crucial issue is astounding.

• Brian H

It’s almost as if the significance attributed to the cites is disconnected from their actual content.

Nice work if you can get it.

44. Barry Moore

As always, a very thoughtful and insightful post by Dr. Curry. I think it is legitimate to examine many special cases in isolation in order to build a comprehensive picture of an incredibly complicated system that we are far from understanding; how, then, can we even begin to formulate a computer model to analyze such a system? I think a great deal more basic analysis in accordance with the laws of physics is required, which Dr. Curry has attempted here.
Slightly off topic but a subject that has puzzled me for many years is how do we determine the increase in radiation absorbed by the earth’s surface caused by an increase in CO2 in the atmosphere?
Clearly this cannot be done empirically, so we turn to basic physics. We know the resonant frequencies of the CO2 molecule, and we know the radiation spectrum of the earth’s surface at any temperature. So we take a surface temperature of 20 deg C as an example. The shorter resonant wavelengths, in the bands 2.3 to 2.6 micrometers and 4 to 4.4 micrometers, are well into the wings of the spectrum, so they are insignificant; the wavelengths in the band from 14 micrometers up are the only ones of significance. Even at these wavelengths the CO2 has to compete with H2O, which has resonant wavelengths in the same range, and except for the cold regions of the world water outnumbers CO2 by up to 100:1.
A Beer’s law calculation at any resonant frequency, under current conditions of concentration and pressure at the surface, indicates that radiation at the resonant frequencies will be reduced by 50% for each 2 meters of atmosphere. The “close neighbors” of the resonant frequencies will travel higher into the atmosphere since they are not absorbed so readily. When a CO2 molecule is energized by absorbing outgoing IR radiation it can lose this energy in one of two ways: by reradiation or by collision with other molecules. The time it takes to reemit energy is well known, as is the mean time between collisions. A molecule will collide about 1000 times before it has time to reemit. Consequently the vast majority of the energy is transferred by collision.
With respect to reemitted energy, the CO2 molecule will reemit exactly at one of its resonant frequencies, and as the Beer’s law calculation shows, the mean path at a resonant frequency is only 2 meters, so how can this energy return to the earth’s surface?
The IR will be absorbed in the downward direction just as easily as in the upward direction so the more CO2 in the atmosphere the harder it is for IR to return to the surface.
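The 50%-per-2-meters figure above is a one-line Beer-Lambert calculation. A sketch; the line-center cross-section used here is an illustrative assumed value (strong lines in the 15 micrometer band are of this order of magnitude), not a number taken from a spectroscopic database, so the result only shows the form of the calculation:

```python
import math

# Assumed illustrative values -- not from a spectroscopic database
sigma = 3.5e-23      # absorption cross-section at line center, m^2 (assumed)
n_air = 2.5e25       # air molecules per m^3 near surface conditions
x_co2 = 390e-6       # CO2 mole fraction (~390 ppm)
n_co2 = x_co2 * n_air

def transmitted_fraction(path_m):
    """Beer-Lambert: fraction of line-center radiation surviving a path."""
    return math.exp(-sigma * n_co2 * path_m)

def half_absorption_path():
    """Path length over which half the line-center radiation is absorbed."""
    return math.log(2) / (sigma * n_co2)

print(round(half_absorption_path(), 1))    # ~2.0 m with the assumed cross-section
print(round(transmitted_fraction(10), 3))  # ~0.033: only ~3% survives 10 m
```

The short mean path applies only at line center; in the line wings the cross-section is orders of magnitude smaller, which is why the "close neighbors" travel much higher before absorption.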

• Brian H

Given that collision frequency, it seems utterly implausible that ANY measurable re-(back-)radiation would occur. Even one collision knocks the energy level of the CO2 out of the radiative box, I would think.

45. steven mosher

PaulM | December 12, 2010 at 5:43 pm | Reply

Judith is quite right to suggest that these figures “should be disputed”.
On the previous post she wrote
“trying to nail down what we actually understand; currently tearing my hair out since I can’t even make sense of how people are coming up with 1C and how to interpret it. So everyone seems to believe this number, i’m trying to figure out why people have confidence in this (I’m having less and less).”
Well, Steve Mc has been asking on his blog for years for an up-to-date, detailed paper explaining where these numbers come from, with no satisfactory response.

#####

a little while back I pointed Steve at SoD’s site, He wrote the following:

http://climateaudit.org/2010/05/02/scienceofdoom-com/

FWIW

• Jim

More and more it’s looking like the CO2-doubling-radiative-forcing number is next to meaningless.

• Jim D

It is well defined at the top of the atmosphere, and my opinion is that that is where it matters most. So far, from the various ways of translating it to the surface, I have not seen anything that says the same number is not a good order-of-magnitude estimate there either, but it can’t be as exact, for sure.

• Jim

That makes some sense, I think. So you are saying it could be somewhere between 0.1 and 9?

• Jim D

Maybe I meant base-2 order of magnitude 0.5-2.0.

46. While everyone is pulling their hair out, it still seems a fundamental source of confusion is how the energy budget at the TOA can determine the surface temperature response. I’ve tried to make it clear in my previous post that the imbalance of the whole planet is eventually communicated to the surface, and the surface temperature cannot be properly determined without invoking satisfaction of the planetary energy budget (and in fact you can’t even complete the discussion of the surface budget without talking about the primacy of TOA fluxes). This is discussed in Ch. 3 of the NRC 2010 emissions report, and Ray also spends a whole section in his upcoming textbook on the surface energy budget fallacy, with plenty of work-it-out exercises and figures for people to chew on. Various other texts, like Houghton’s ‘Physics of Atmospheres’, also emphasize the imbalance at the TOA.

In this sense, the popular media explanation of the greenhouse effect, that more IR absorbers warm the planet by redirecting more IR back to the surface, is just incomplete, and may not even be true in some instances.

There are suggestions that one could rectify the surface budget fallacy by computing not only the surface budget (with all of the terms in the surface energy budget) but also the atmospheric budget. I’ll think a bit on how well this can work out; in equilibrium, keep in mind, both the TOA and BOA budgets must be satisfied (Jianhua Lu above has argued that the TOA forcing is oversimplified, but also acknowledges the surface budget fallacy). The best way to think about the role of the surface budget is that it helps determine the gradient between the ground and lower air temperatures, but the air temperature itself is determined by different processes. For example, evaporation does not act as a buffering mechanism to regulate tropical temperatures. Rather, it is more or less dragged along to accommodate whatever cooling is necessary to keep the ratio of low-level air temperature to surface temperature near unity. A landmark paper that helped make this issue clear was Pierrehumbert’s 1995 paper on Radiator Fins, where, among other things, he refutes Ramanathan’s work on a ‘tropical thermostat’ by clouds.

OTOH, where the surface budget seems to play a significant role is in diagnosing hydrologic responses to global warming. In warm climates for example, the net IR heating at the surface (up minus down) goes to nearly zero and the maximum evaporation/precipitation is constrained by the absorbed solar radiation.

• Jim

I get that incoming and outgoing radiation energy must balance and that radiation is the only way the Earth could come to equilibrium with the Sun and space. What I don’t understand is how convection comes into play WRT the atmosphere/Sun-space interface. Is this correct?: With no convection whatsoever, the change in temperature with a doubling of CO2 would be different than if parcels of air are allowed to rise, thus allowing the various forms of lapse rates to take effect. If that statement is true, then the mechanics of the atmosphere do matter, so the 1 C number is in fact subject to adjustment due to same. It just seems there are more underlying assumptions than are being made clear. At least to me.

• The 1.2 C rise (a better estimate than 1 C) is calculated based on an unchanged lapse rate as well as an absence of other feedbacks. Feedbacks are then applied. The lapse rate feedback is negative, but others are positive, and the estimate that climate sensitivity to doubled CO2 is probably in the range of 2 to 4.5 C rather than 1.2 C is based on the combined effect of all feedbacks.
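The arithmetic behind the 1 C figure (the Stefan-Boltzmann differentiation quoted in the head post) is a one-liner. A minimal sketch, which also shows how sensitive the answer is to the arbitrary choice of where to evaluate it, e.g. at the 288 K surface temperature instead of the 255 K effective radiating temperature:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def no_feedback_dT(dF, T=255.0):
    """Invert dF/dT = 4*sigma*T^3 (the no-feedback 'Planck response'):
    dT = dF / (4*sigma*T^3)."""
    return dF / (4.0 * SIGMA * T**3)

print(round(no_feedback_dT(3.7), 2))         # ~0.98 K at T = 255 K
print(round(no_feedback_dT(3.7, 288.0), 2))  # ~0.68 K at T = 288 K
```

The spread between these two numbers is a compact illustration of the thread's point: "no-feedback sensitivity" is only well defined once you also specify the temperature (i.e. the level and lapse-rate assumptions) at which the differential is taken.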

• hunter

Don’t forget that important little word, ‘if’.

• Jim

I may not understand the lapse rate well enough. My understanding is that it is derived basically from the ideal gas law. But in order for it to actually work, a parcel of air must rise. That is convection. So it seems to me convection is built in as an assumption when one utilizes the lapse rate. Or are you saying the lapse rate is purely a radiatively defined concept?

• Without convection the lapse rate would be larger, as IR radiation alone would not transfer all incoming energy back out without a steeper temperature gradient. Such a lapse rate is, however, not stable, but leads to some parcels of air going up and others down. This means that convection occurs until the lapse rate has been reduced to the highest value that keeps the atmosphere stable. Some convection is then needed continuously to maintain the stable lapse rate. The stability limit depends on the thermodynamic properties of the gas, with moisture content having a big role.

Smaller lapse rates than the stability limit are possible and do occur in polar regions. In these conditions the convection is not needed due to the low surface temperatures.
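The process described here, convection relaxing a super-critical lapse rate back to the stability limit, is essentially the convective adjustment used in classic Manabe-style radiative-convective models. A toy sketch; the layer spacing, the 6.5 K/km critical value, and the equal-mass-layer energy accounting are all simplifying assumptions:

```python
CRITICAL = 6.5e-3  # critical lapse rate, K/m (a commonly used tuned value)
DZ = 1000.0        # layer spacing, m (uniform, equal-mass layers assumed)

def convective_adjustment(T, critical=CRITICAL, dz=DZ, n_sweeps=50):
    """Relax any super-critical lapse rate back to the critical value,
    conserving the column's total heat (equal-mass layers).
    T[0] is the lowest layer."""
    T = list(T)
    for _ in range(n_sweeps):            # iterate until the profile settles
        for i in range(len(T) - 1):
            lapse = (T[i] - T[i + 1]) / dz
            if lapse > critical:
                excess = (lapse - critical) * dz / 2.0
                T[i] -= excess           # cool the lower layer...
                T[i + 1] += excess       # ...warm the upper: energy conserved
    return T

# A radiatively driven, convectively unstable profile (14 K/km at the bottom):
profile = [302.0, 288.0, 279.0, 272.0]
adjusted = convective_adjustment(profile)
print([round(t, 1) for t in adjusted])          # [295.0, 288.5, 282.0, 275.5]
print(sum(profile), round(sum(adjusted), 6))    # both 1141.0: heat conserved
```

The adjusted column sits exactly at the critical lapse rate throughout, which is the sense in which "specifying a lapse rate approximates a convection constraint" in the no-feedback calculation.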

• Jim

So for the purposes of the discussion of delta T with a 2XCO2 increase, there is an assumption of convection. So this isn’t really a number that depends only on radiation when it comes to the calculation of the surface temperature.

• I’ve been curious about how the modelers address this. A convective adjustment may (or may not) be implicit in the estimation of delta T. The no-feedback calculation requires lapse rates to remain unchanged, but also a variety of other artificial constraints, such as no change in evaporation, atmospheric water vapor, clouds, or albedo despite a change in temperature. These latter events are not ignored but addressed as feedbacks, as a means of artificially separating different responses to a forcing into different components to be analyzed individually and eventually recombined into a climate sensitivity value.

• As far as I have understood Judith started this thread with the same concern. Many people (including me) have also brought up the issue that there is no way of defining a warmer atmosphere without some arbitrary choices. Thus no unique “no-feedback CO2 sensitivity” exists, but one has a choice of several alternative somewhat different definitions.

It is possible that all “almost natural” ways of defining the concept lead to very similar numeric values, but this is by no means certain.

• Rob Starkey

Chris
You wrote “that the imbalance of the whole planet is eventually communicated to the surface”. This does not seem to make logical sense. Could you “estimate” over what time period this “eventually” happens, and could you expand on the cause of that specific time period?

• Chris, my point is that there are many, many different ways for this heat to get distributed in the climate system. If you think about the surface temperature as the primary focus, there is no simple, unique way to quantify the surface temperature increase based on tropopause radiative forcing. This is the fallacy: thinking that you can directly relate surface temperature change to a tropopause radiative forcing. The surface energy budget determines the surface temperature.

I don’t have pierrehumbert’s book. I’ve looked again at ch3 in NRC emissions. I see the same old stuff. Yes, the radiative balance depends on the surface temperature. But the reverse is not true: there is no direct relation whereby tropopause RF changes the surface temperature. Fig 3.1 (presumably from Pierrehumbert’s book) shows that the radiation balance at TOA depends on surface temperature. The slope of the line does not give a surface temperature from the RF, as a causative mechanism. This is the fallacy, to assume that it does.

• Judith:
Chris Colose keeps talking about the standard (simplified) account of climate sensitivity, where some TOA “surface” determines the emission temperature, which “determines surface temperature” via an “average lapse rate”. He thinks that this surface is simple and easy to deal with. In fact, that very TOA surface looks like this function:

This is about the actual shape that must be convolved/integrated down to sub-nanometer resolution with actual temperature profiles in order to obtain the outgoing irradiance. Unlike most simplifications (which are also used by Ray), the real atmosphere is not isothermal above the tropopause; it has the opposite gradient to the troposphere. Now please let Chris convince us (and yourself) that elevating this function by a factor of 2 produces exactly 3.7 W/m2 less OLR.

• DeWitt Payne

While it’s not trivial, it’s not that hard either. It’s the sort of thing computers do best. In a full bore line-by-line model you also have to deal with pressure and Doppler broadening. But a modern multi-core desktop should be able to calculate irradiance for a given temperature and concentration profile in a few hours at most and probably less. A band model like MODTRAN runs orders of magnitude faster and produces results almost as good.

• DeWitt, here is a little problem. MODTRAN has a spectral resolution of 2 cm-1. The actual spectrum has peak features as narrow as 0.01 cm-1, with amplitude differences of three orders of magnitude. Therefore the 2 cm-1 flat averaging cannot possibly give the same result as a fully resolved algorithm when the convolution is done with a function that changes its sign in the middle of the optical path. You probably can have a good correction for this band-averaging at one reference concentration, but I have real concerns about how this method handles a CHANGE in concentration.

One more thing. There is a not-so-widely disseminated fact that the actual contribution to radiative imbalance is produced by the side skirts of strong absorption lines, in particular by those areas where the optical thickness is marginal, of the order of 1. As I understand it, the LBL algorithms are based on the Rosseland (diffusive) approximation. However, it follows from the RTE literature that this approximation does not work well for an intermediate optical thickness of 1. So we seem to have another problem here: the effect is produced by an area of the spectrum where the employed approximation does not work. I can’t believe this combination can produce accurate results, unless I am missing something here.
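The band-averaging worry can be caricatured with a toy calculation: transmittance exp(-tau) is nonlinear in optical depth, so averaging optical depths across a band before applying Beer’s law gives a very different answer from resolving the lines first and averaging the transmittances. The two optical depths below are invented purely for illustration; real band models like MODTRAN use far more sophisticated band statistics than a flat average:

```python
import math

# Two invented spectral points inside one 2 cm-1 band:
# a strong line core (tau = 10) next to a near-window (tau = 0.01).
tau_core, tau_window = 10.0, 0.01

# Line-by-line style: transmittance at each point, then average.
t_resolved = (math.exp(-tau_core) + math.exp(-tau_window)) / 2   # ~0.495

# Crude band average: mean optical depth first, then Beer's law.
t_band = math.exp(-(tau_core + tau_window) / 2)                  # ~0.0067

print(f"resolved: {t_resolved:.3f}, flat band average: {t_band:.4f}")
```

The two numbers differ by almost two orders of magnitude, which is the commenter’s concern in miniature; whether MODTRAN’s actual band statistics handle a concentration change accurately is a separate question.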

• “there is no direct relation whereby tropopause RF changes the surface temperature. “

I’m puzzled by that statement. I recognize that quantifying the relationship involves some uncertainties, but are you claiming that there is no known physical means by which a tropopause RF is transmitted to the surface?

I think we can acknowledge heterogeneity depending on meridional, zonal, seasonal, diurnal differences, etc., involving temperatures, lapse rates, humidity, ground-air coupling, and other variables, and affecting both the magnitude of surface changes and the distribution among different energy transport mechanisms. This influences quantitation, but for simplicity in considering mechanisms, we can start with a purely radiative model and a global averaging. Is it not then fair to state that the link between tropopause and surface is given by the radiative transfer equations, along with the proper input in terms of ghg concentrations and spectroscopic properties, and starting atmospheric temperatures? From an intuitive perspective, this mechanism can also be perceived as a consequence of increased IR opacity at a relevant high altitude, so that IR sufficient for radiative balance no longer escapes, because a greater fraction is transmitted downward. At each lower level, the increased downwelling IR results in a warming and an increase in downward IR from that level, eventually including the surface in the warming effect. With sufficient warming, the same radiative transfer equations show that upward IR will rise enough for sufficient quantities to escape to space, albeit at a higher altitude than before, warmed sufficiently so that its IR emissivity allows OLR to balance incoming absorbed radiation.

This is a qualitative rather than a quantitative description, but the equations render it quantitative. My point is that it does in fact describe a mechanism linking tropopause RF to surface temperature change. We also know that a purely radiative response is not the way our climate reacts, mainly because of the presence of liquid water on the surface and its capacity to vaporize, and also because of instabilities that can lead to convective adjustments. These too are mechanisms, and subject to quantitation.

All this makes a quantitative description of the tropopause/surface link difficult, but I don’t see it as implying the absence of physical mechanisms. Also, not to belabor the point, but to the extent we can quantify lapse rates, we should be able to get a reasonable approximation of how much a tropopause radiative imbalance will affect surface temperature. It won’t be exact, but it should give us a rough idea, should it not?

• Jianhua Lu

Chris,

Nice to read your comments both here and at realclimate under Andy’s post.

You said that ” There’s suggestions that one could rectify the surface budget fallacy by not only computing the surface budget (with all of the terms in the surface energy budget), but also the atmospheric budget. I’ll think a bit on how well this can work out; ”

I would mention that the TOA radiative balance equation is the vertical integration (summation) of the total energy conservation equation (as in Peixoto and Oort, 1994, Chapter 13) and the land/ocean surface energy budget equation (for the ocean, the surface energy budget equation is the vertical integration of the total energy conservation equation in the ocean; Wunsch and Ferrari, 2004?, Annual Review of Fluid Mechanics). My approach in the paper (the application example in http://www.springerlink.com/content/6677gr5lx8421105/fulltext.pdf ) is that we can directly use the energy conservation equation to analyze the climate feedbacks, which essentially are the changes in the energy cycle of the climate system, including both the radiative feedbacks and also dynamic feedbacks (surface heat fluxes and atmospheric/oceanic energy transport feedbacks). Note that both climate forcing and climate feedbacks have vertical and horizontal structures. Figure 1 in our paper shows that the sum of the decomposition can recover almost exactly the simulated climate sensitivity (here it means changes in both atmosphere and surface temperatures).

• Chris’s explanations are just a combination of waffle and spin. The argument that heating the TOA must heat up the surface by the same amount just doesn’t make sense.
Imagine putting a load of electric heaters high up in the earth’s atmosphere – would they warm the surface?

47. Dr Judith,

I don’t think there is any such thing as “no feedback” sensitivity. The temperature rise and fall with perturbation of CO2 concentration can be characterised as a weak negative feedback keeping the absorbed and emitted radiation equal, or radiation emission constant where the absorbed radiation is constant.

48. Alexander Harvey

A Question of Attribution

It strikes me that contention is great wherever there arises a question of attribution. The feedback issue is just such a question, as is the allocation of the late twentieth century temperature rise between its plausible causes.

I think that the fount of vexation that spouts whenever attribution needs be made is something worthy of investigation for its own sake.

Alex

49. Geoff Sherrington

A thought experiment. If all people on Earth covered themselves with blankets and so felt warmer, would this affect the TOA input-output relation?

It’s not trivial, because the people are mostly at the surface and much of the emotion in the topic is about the human condition.

Now for the reverse. If the TOA input-output relation changed by some unstated mechanism, so that comparatively less energy was incoming, would this lower the surface temperature? (And, if you like, cause our toy people to put on blankets).

We’re not talking magnitudes in the toy, we’re talking processes including dynamics over time spans long enough to complete the equations.

50. Alan D McIntire

I’ll admit that I don’ t know anything about CO2 sensitivity, but I’d like to put in my 2 cents just to see if I understand the question being raised by listing what I understand to be the main points:

The solar flux at earth is about 1368 W/m2. When you consider the fact that the earth is a sphere, with a surface area of 4 pi r^2, while the face presented to the sun is a circle with an area of pi r^2, the average flux is (pi r^2 / 4 pi r^2) * 1368 = 1/4 * 1368 = 342 W/m2.

For a blackbody, climate sensitivity would be dT/dS = (1/4)(T/S). For a surface temperature of 288 K, this amounts to (1/4)(288 K / 342 W/m2) = 0.21 K per W/m2.

Earth is not a blackbody; the albedo is about 0.3, so 342 W/m2 * (1 - 0.3) = 239.4 W/m2.

For a “graybody” earth, dT/dS = (1/4)(288 K / 239.4 W/m2) = 0.3 K per W/m2.

That’s where Colose gets his figures, by using the Stefan-Boltzmann equation, which gives a climate sensitivity of 0.21 K to 0.3 K per W/m2.

Chris Colose is figuring that 0.3 K effect from a 4 W/m2 increase to a 240 W/m2 flux. That 240 W/m2 is what the earth receives and radiates at the top of the atmosphere, not what the earth’s surface receives, which is closer to 490 W/m2 thanks to back radiation. That 490 W/m2 includes about 100 W/m2 in convection and latent heat of vaporization. That 0.3 K sensitivity is the result we’d get if the flux received from the sun increased by a small factor from that 240 W/m2, and the relative greenhouse effect stayed the same as it is now. It’s not computing what the no-feedback effect would be if the flux received from the sun stayed at 240 W/m2, and the surface flux increased from 390 W/m2 in sensible heat plus 100 in latent heat to 390 sensible plus 100 latent plus 3.7 of some unknown mixture of latent and sensible heat.

• Jim D

Alan, close; it is OK up to the point of dT/dS.
S = sigma * T^4, where sigma is 5.67e-8 W m-2 K-4. This is the black-body equation.
From this you can get dS/dT, and I leave that as an exercise worth going through. The result for dS/dT depends on T itself. For the top of the atmosphere this would be 255 K, which you get from equating 342 * 0.7 W/m2 to a temperature using the above formula for S.
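Jim D’s exercise is easy to check numerically. A minimal Python sketch, using the 255 K radiating temperature and the canonical 3.7 W/m2 forcing quoted upthread (variable names are mine):

```python
# Differentiate the Stefan-Boltzmann law S = sigma * T^4:
# dS/dT = 4 * sigma * T^3, so dT/dS = 1 / (4 * sigma * T^3).
sigma = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
T_eff = 255.0            # Earth's effective radiating temperature, K

dS_dT = 4 * sigma * T_eff**3      # ~3.76 W/m^2 per K at 255 K
dT_per_watt = 1 / dS_dT           # ~0.27 K per W/m^2

forcing_2xCO2 = 3.7               # canonical doubled-CO2 forcing, W/m^2
delta_T = forcing_2xCO2 * dT_per_watt

print(f"dS/dT at 255 K: {dS_dT:.2f} W/m^2/K")
print(f"No-feedback warming for 3.7 W/m^2: {delta_T:.2f} K")
```

This reproduces Fred Moolten’s derivation quoted in the post: roughly 1 C per 3.7 W/m2 before any feedbacks (the models’ 1.2 C comes from latitudinal and seasonal variations not captured by this single global number).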

51. ge0050

This article references a peer reviewed work that proposes the underlying equation is wrong because it ignores the boundary condition.

“Miskolczi’s story reads like a book. Looking at a series of differential equations for the greenhouse effect, he noticed the solution — originally done in 1922 by Arthur Milne, but still used by climate researchers today — ignored boundary conditions by assuming an “infinitely thick” atmosphere.”

• but still used by climate researchers today — ignored boundary conditions by assuming an “infinitely thick” atmosphere.”

He didn’t so much ignore it as find he had a singularity at the surface, which eventually was “coped with” by having an unprecedented two boundary conditions for a first-order differential equation, as can be seen (Eqns. 1 & 2) in this paper by Lorenz and McKay, which suggests an unobserved temperature discontinuity at the surface.

52. ge0050

Greenhouse effect in semi-transparent planetary atmospheres, by Ferenc M. Miskolczi.
IDŐJÁRÁS, Quarterly Journal of the Hungarian Meteorological Service. Vol. 111, No. 1, January–March 2007,

http://met.hu/doc/idojaras/vol111001_01.pdf

53. Joe Lalonde

Judith,
You are just hitting the tip of the problem and slowly understanding it is much bigger.
What is correct science and what is “Hollywood Science” or “Religious Science”?
Many theories are currently being used as science, what is correct and what is fiction?

I have made great strides in these areas by just simple measurements or time frames.

54. dp

I’m going to ask the same question I asked at Anthony’s site: did more energy arrive on earth in 2010 than left earth in 2010? And the next question I’ll ask regards each past year for the previous billion years. What is the norm?

And then I’ll have to ask what will happen in 2011 regarding the balance of energy. I think the answer is nobody knows. And the thing is, this is what climate change is all about. What the hell is the energy balance and show me the proof.

Warmists think more energy arrived than left. Fine – where is it? All the really smart people are saying there’s been no important heating (or cooling) since 1998. Watt’s up wif dat?

Don’t show me temperature of the air – that can change because of energy that arrived decades ago and ended up in the oceans and was, “for causes unknown”, released to the air which it heated and which heat is being lost to space, there being no other place for it to go.

So who can answer this fundamental question? Is more heat arriving than is leaving? What is your proof? If you can’t answer that then don’t ask me to authorize congress to spend billions of dollars on your nutter dream.

• Joe Lalonde

dp,
Did more energy get trapped in the atmosphere would be what you are looking for.
Tough question.
If you are looking at plant and animal life converting energy into growth and expansion: yes, we have bloomed and blossomed more, giving more gases to the atmosphere.
The atmosphere only bleeds off so much gas all the time, due to the planet’s rotation and its holding of the atmosphere as we travel through space and around the sun.
We have too much gas built up in the atmosphere, which has produced physical changes in ocean salinity. This in turn is reflecting more sunlight.

55. Most skeptics take the pragmatic attitude. For any practical purposes, a sensitivity of 1.2 or 1.0 °C is as irrelevant as a sensitivity of zero. It’s really only the much higher figures that make it “interesting”.

My guess is that despite all the possible doubts, a sensible no-feedback sensitivity – and even the total one – will be in the ballpark of 1 Celsius degree. I have done approximate estimates that end with this figure. But it’s true that I don’t know any genuinely convincing papers that calculate the 1.2 deg C figure that you could actually follow.

And this is surely a computation that shouldn’t require supercomputers. It is a pretty straightforward system of partial differential equations that you may calculate on your PC, if not on paper.

It’s questionable what’s the right “no-feedback” problem when it comes to water. In particular, water is the key greenhouse gas that determines that the troposphere has some thickness at all. So one shouldn’t completely neglect the existence of water in the atmosphere: the height of the tropopause should be realistic even in the no-feedback calculations.

On the other hand, once water is present, its IR spectral lines overlap with those of CO2 and make CO2 less potent a greenhouse gas – for the same reason why the temperature dependence on CO2 concentration becomes logarithmic: the previous molecules have already done much of the effect, anyway.

That’s one of the many reasons why I believe that when taking the natural dynamics of the climate – and especially water and its circulation – properly into account, we obtain a lower result than the no-water no-feedback sensitivity, whatever the latter exactly is.

• Lubos wrote:
“… a sensible no-feedback sensitivity – and even the total one – will be in the ballpark of 1 Celsius degree.”

Could you please define what the “sensible no-feedback sensitivity” is?

Judith/IPCC defines it as:
“The no feedback sensitivity is the direct response of the surface temperature to radiative forcing by the increased CO2, without any feedbacks.”

Could you define what the “direct response” is? On which time scale? In my experience, if radiative influx changes (either by 3.7W/m2 from a small cloud, or by 200W/m2 when Sun begins to shine in the morning), surface temperature responds by noon. Is this a sensible response?

I doubt that a sensible definition exists.

56. alistairmcd

Do the calculation with the no-convection value of GHG heating of 60K. I make the answer about 0.7K.

57. Is there any way to turn what has been learned here into a journal article? A Review of Uncertainties, or something like that? What journal would publish it?

• A better goal might be to write a comprehensible book chapter on the subject that people can actually understand. Too many questions IMO to actually do that at this point.

• RobB

Yes – It’s a bit hard to see where the consensus lies at the moment. Very little seems to be agreed.

• Uncertainty is not about consensus. It is about unresolved issues and the comments on this thread reveal several quite clearly. Every active area of scientific inquiry has big fights at the frontier. These need to be summarized and publicized. In many ways the fights are as important as the research results.

• David L. Hagen

Even if all the issues were “resolved”, there is still uncertainty from both statistical variations in measurement (Type A), and bias in measurement (Type B).

• The questions would be the topic. Science is all about questions.

• agreed, not enough questions are being asked about all this

• RobB

Yes – that’s fine for those seeking to solve the ‘wicked problem’ but very tricky for those trying to learn a bit more about the subject….who to believe!!

• “Who to believe” is not the problem I am trying to solve. I see this as a problem in science communication, namely communicating issues in addition to results. Thanks to Web 2.0 this is the Age of Issues. Journalism has already been transformed. It used to be that if you disagreed with a newspaper article you could write a letter to the editor and hope it got published in a few weeks, but only a few did, due to space constraints. Now we get 100 real time comments so the issues become clear very quickly. This leads to articles about issues, etc.

By the same token the hundreds of comments on this thread are important raw data on a major scientific issue that is not well recognized (and is policy relevant). There must be some way to convert all this data into a useful result, an issue analysis if you like.

(Of course I just happen to have a 1975 textbook on issue analysis, but it is free so I am not selling anything here.
http://www.stemed.info/reports/Wojick_Issue_Analysis_txt.pdf It is just that I have been at this for a long time.)

Web 2.0 raises issue analysis to a whole new level. How should science respond? This blog is clearly at the forefront of this question.

58. hunter

When airplanes are designed, they apply the known physics of aeronautics, yet mistakes are still made.
Why is climate science so sold on the idea that this theory of climate that involves large positive feedbacks is so good?
The answer seems to be that the large feedbacks are proper because the models say so. For those of us who are mere proles, this seems circular. The response of those heavily invested in the current theory (let’s call it what Holdren is calling it today, ‘global climate disruption’) only strengthens that impression.
Dr. Curry is at least willing to look at some of the assumptions of this theory and for that many of her colleagues have turned on her. Which frankly only confirms the impression of circular and group think dynamics.
I look forward to where this is going and am grateful to Dr. Curry for having the guts to go there.

59. mkelly

http://www.scribd.com/doc/34962513/Elsasser1942

Dr. Curry please read above paper. It is about radiative transfer in the atmosphere.

60. Oslo

Warning: potentially embarrassing question from a layman:

Let’s say CO2 is a fur coat.

If it is worn in winter, it keeps temperature close to body temperature by preventing the escape of heat to the outside.

If it is worn in extreme summer heat, it also keeps temperature close to body temperature by preventing incoming heat.

CO2 would have both these effects on earth. Both preventing escape and blocking of heat.

But earth does not produce heat. It gets heat from outside by radiation.

So imagine we put the fur coat around a rock and expose it to sunlight.

Would not a thicker fur coat prevent heating, rather than increase it, since the blocking of incoming would always be stronger than the blocking of outgoing heat?

• Oslo – CO2 is almost completely transparent to incoming sunlight, and the other major greenhouse gas, water vapor, is relatively transparent. Both, however, are efficient at intercepting outgoing infrared radiation from the Earth’s surface and atmosphere. The disparity is due to the different wavelengths of incoming solar energy and outgoing infrared energy. The greenhouse effect reflects this difference – i.e., those gases let sunlight pass through, but impede the escape outward of the Earth’s heat.

• Oslo

Thank you Fred,

I am slowly getting wiser.

Since sunlight is broad-spectrum and therefore also contains longwave, I guess there must be at least some blocking effect, even though most of it is shortwave and therefore passes through.

And I understand that warming is caused more by longwave, since this is more easily absorbed by molecules (f. ex CO2) than incoming shortwave. (which implies that warming would be strongest in the lowest part of the atmosphere, closest to the main source of longwave radiation – earth?)

The last piece in the puzzle for me, to get a rough idea of how it works, would be how the shortwave is transformed into longwave in the atmosphere and on earth..

• Oslo – Even the longer solar wavelengths are not absorbed to any extent by CO2, although there is modest absorption by water.

Longwave infrared emitted from the Earth’s surface (and also some from the atmosphere) is absorbed more strongly where there are more greenhouse gas molecules (nearer the surface), but the energy is also emitted more strongly there. Ultimately, the warming effect is distributed throughout the various altitudes.

Solar shortwave (SW) radiation is absorbed by the Earth, warming the Earth. The reason it is SW is that SW radiation is the main type of energy emitted by an object at the sun’s temperature. However, the Earth, at the temperature it is warmed to, emits mainly in the LW range. In other words, the wavelength spectrum is dependent on temperature. For more on this, you can look up the Stefan-Boltzmann equation as well as the topics of absorptivity and emissivity.
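The temperature dependence of the emission spectrum described above can be made concrete with Wien’s displacement law (peak wavelength ≈ 2898 µm·K / T). A small sketch, taking 5778 K as a standard value for the sun’s surface temperature:

```python
# Wien's displacement law: lambda_peak = b / T, with b ~ 2898 um*K.
b = 2898.0   # Wien's displacement constant, um * K

def peak_wavelength_um(T_kelvin):
    """Wavelength (micrometers) at which a blackbody at T emits most strongly."""
    return b / T_kelvin

sun_peak = peak_wavelength_um(5778)   # ~0.5 um: visible light (shortwave)
earth_peak = peak_wavelength_um(288)  # ~10 um: thermal infrared (longwave)

print(f"Sun peak:   {sun_peak:.2f} um")
print(f"Earth peak: {earth_peak:.1f} um")
```

The factor-of-twenty difference in peak wavelength is why the atmosphere can be nearly transparent to incoming sunlight yet opaque in parts of the outgoing infrared.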

• Oslo

Thank you Fred and Pekka. Great answers.

• Michael Larkin

Oslo,

I would like to second your thanks.

• Oslo, I’m afraid the answers you have been given, although broadly correct, are a bit misleading. If you look at the paper by Collins et al, 2006 (link at the top), you will see that the surface shortwave forcing from doubling CO2 is about -1 W/m2, which counters the surface longwave forcing of about +1 W/m2 (see Fig. 4b and Table 9, 2nd column of numbers, in the paper).
The corresponding text says:

The LBL shortwave forcings are positive or zero at the TOM and uniformly negative at the surface. This implies that WMGHGs and H2O enhance the absorption of solar radiation by the surface-atmosphere column. However, since the additional absorption occurs in the atmosphere and primarily affects the downwelling direct-beam radiation, the higher absorption also reduces the net surface insolation. This does not necessarily imply a cooling effect on the surface climate because of coupling between the surface and troposphere.

The last sentence illustrates the intrinsic bias of the climate scientists. Whenever they find a cooling effect, they have to try to downplay it. Can you imagine a climate scientist, on finding a warming effect, saying that it will not necessarily induce warming because of some coupling?

• Brian H

SW shadowing! Would that help explain some peculiar results some claim from home experiments with CO2-filled bottles, which seem to warm about 1°F less in direct sunlight than ordinary air-filled ones?

In any case, a vivid reminder that isolating one phenomenon (or slice of the EM spectrum) is not as helpful as it sounds. And that a “no-feedback” figure has hidden assumptions that make it much wobblier than is claimed.

• You’re welcome, Oslo, and Michael. The answers Pekka and I gave are correct, and they are not misleading, despite a claim made below. SW radiation, as we stated, is absorbed by the surface and to a lesser extent by water, mainly in lower parts of the atmosphere. LW radiation is emitted by the surface and by the atmosphere. Although we didn’t discuss CO2 increases, their net effect is a warming one – both the atmosphere and the surface warm. Many details are discussed elsewhere in this thread and in the threads on the greenhouse effect.

• About 25% of the energy of solar radiation is absorbed in the atmosphere and about 30% reflected back, but this includes reflection from the surface as well as from clouds. Thus less than half is absorbed by the oceans and continents. Almost all energy of solar radiation comes from either visible light or near infrared (less than 2 um wavelength), a small fraction also from UV. CO2 has practically no part in stopping solar radiation, but water has.

When shortwave radiation is absorbed, its energy is converted to heat. The temperature determines how much longwave IR radiation is emitted by the surface and greenhouse gases. (Oxygen and nitrogen cannot emit IR.) In the case of gases the emissivity limits the radiation to some wavelengths, while solid and liquid matter emits IR at all wavelengths, within limits set by the temperature.

• Kevoka

Oxygen can absorb/emit in the LW IR spectrum (one band is around 6.6–6.7 um), albeit weakly. Please refer to the HITRAN database. This is attributed to collision-induced absorption.

61. Heikki

Excuse one more dumb layman question:
In the discussions it is mentioned that all energy that affects the surface energy budget originates from the sun. How does the hot, slowly cooling core of our planet affect the surface energy budget? There is a lot of energy left in there, so there would still be some heating even if the sun didn’t rise tomorrow. Would changes in the greenhouse effect alter the rate at which energy from the core is transferred to the surface? Forcing or feedback?

I read somewhere that our earth would be 30C colder if there was no water (it could have meant water vapour only). Is there any evidence, or papers, regarding this? Has anyone tried to figure out what our climate would be without any water: no oceans, no ice, no clouds? A purely theoretical question, I know and I hope, but could it help us understand the complexity of feedbacks and warming at the surface?
( that was more than one silly question, sorry…).

• Heikki – regarding your first question, geothermal heat is not zero, but on a relative basis contributes only negligibly to the climate’s energy budget, mainly because the thermal conductivity of the Earth’s crust is very low.

As to a planet without water, the analysis is complicated by countervailing effects – the absence of water vapor and cloud water/ice reduces the greenhouse warming effect but the absence of clouds, snow and ice reduces the cooling effect of the Earth’s albedo. The calculations have been done, but I don’t have the results at my fingertips. It’s my impression that we would end up slightly cooler than we are now, but not the 33 deg C cooler that comes from assuming the absence of greenhouse gases but with an unchanged albedo.
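The 33 deg C figure mentioned here is the standard comparison of the effective emission temperature (greenhouse gases absent, albedo unchanged) with the 288 K surface. A quick sketch of that textbook calculation, using the 1368 W/m2 solar constant and 0.3 albedo cited elsewhere in the thread:

```python
sigma = 5.67e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1368.0           # solar constant, W m^-2
albedo = 0.3

# Absorbed flux averaged over the sphere: S0 * (1 - albedo) / 4
absorbed = S0 * (1 - albedo) / 4          # ~239.4 W/m^2

# Effective emission temperature with no greenhouse gases, albedo unchanged
T_eff = (absorbed / sigma) ** 0.25        # ~255 K

greenhouse_warming = 288.0 - T_eff        # ~33 K
print(f"T_eff = {T_eff:.1f} K, greenhouse effect = {greenhouse_warming:.1f} K")
```

As Fred notes, this is an accounting device rather than a prediction: removing water would also remove clouds, snow and ice, so the albedo would not actually stay at 0.3.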

• One convenience of a planet without water is that it would be possible to measure forcing directly with reasonable accuracy, because most of the major feedbacks depend on the presence of water.

• Fred Moolten writes “to measure forcing directly with reasonable accuracy,”

I thought that we had established that radiative forcing cannot be measured.

• It can’t be measured on a planet with water, but it’s certainly theoretically measurable.

• Heikki

I remembered where I had read the claim that a no-H2O environment would be 30C cooler. It was a newspaper article, and looking at the source it referred to, it was evident that the writer had somehow confused the 33C heating of the atmosphere with the no-H2O case, as Fred assumed.

62. Howard

Judith:

Based on this thread, my conceptual understanding is that the basic physics is pretty well undisputed: doubled CO2 yields an increase in radiative forcing of 3.7 W/m2. What is disputed is how to calculate the resulting surface temperature increase given a no-feedback (and no-water, as Lubos asserts) scenario.

Conceptually, I’m thinking that any resulting temperature increase from a doubling of CO2 is a priori a feedback. Figuring out the multiple processes and time dependent positive and negative feedbacks from the 3.7-W/m2 forcing is the elephant in the room (I know I’m over-stating the obvious here).

Therefore, it now seems to me that competing calculations of the CO2 no-feedback sensitivity are a meaningless intellectual exercise in the grand scheme of things.

I look forward to hearing how this conclusion is wrong. Thanks.

• You certainly make a good point. The challenge is to make sense of this whole thing in some sort of a conceptual way, that isn’t just a bunch of numbers crunched by a climate model

• Howard

OK Judith, I’ll bite.

You ask for concepts, I got a million of ’em ;)

Conceptually, my SWAG is that the feedbacks from increased radiative forcing create and/or change weather. How these weather changes affect climate is unknown. However, since the changes in CO2-induced radiative forcing are glacially slow and very well mixed, my conceptual guess is that these changes are subtle and very hard to detect, because they don’t meaningfully alter the dynamic equilibrium of the pure virgin atmosphere/hydrosphere/biosphere.

Geology 3 (Hysterical Geology) presents undisputed solid textbook science that we are in the middle (or perhaps nearer the end of the middle) of a relatively cool, yet long interglacial warm and cozy respite from the stark frozen hell of 1-mile thick ice over Chicago. I think if we are discussing concepts, this is a very important one to consider.

As an aside and slight conceptual digression with a tip of the cap to Roger Pielke Sr, perhaps this enjoyable and profitable Holocene climate is due, in part, to Anthropogenic Climate Moderation from 10K-years of agricultural and landuse alteration. It’s interesting that the fruit of knowledge may have resulted in a prolonged life in the garden of Eden. Conceptually, I always believed that Mother Gaia loves us, even when we are bad or having too much fun.

The feedbacks from the slowly increasing, well-mixed, homogeneous, isotropic, non-condensible CO2 are quite boring. Instead, let’s conceptualize the short-lived, heterogeneous, anisotropic human and earthly garbage like carbon black soot, silicic volcanism, sulfate aerosols, concrete cities, asphalt jungles, irrigated agriculture, dammed rivers, solar storms, meteor showers, etc.

In the real world, what works in physics, business and seduction is leverage. Conceptually, the so-called second- and third-order forcings listed above are leveraged to first order by the fact that they are concentrated. Soot on snow melts next to clean snow, and temperatures drop like a rock after Mt Pinatubo blows its top. Yet there is little published on the weather feedback effects of irrigated agriculture spewing the #1 atmospheric greenhouse gas via evapotranspiration, on the order of 100% of its total concentration in the atmosphere (dihydrogen monoxide). For now, let’s ignore the potential climatological effects of nutrients discharging to the oceans from the gargantuan industrial footprint.

Judith says we need to evaluate concepts for the feedbacks due to a doubling of the well mixed and slowly increasing CO2. From my conceptual perspective, this question is not a very high priority until we get a conceptual and quantitative understanding of all the other acute and concentrated forcings first.

In the meantime, perhaps as a society we should look for ways to mitigate the environmental and human-health harm that is occurring right now, instead of making energy (and, as an inconvenient byproduct, environmental and human-health protection) incredibly expensive based on some numerically modeled future scenario calculated with an understated level of uncertainty.

63. David L. Hagen

Judith

Conceptually, the simplest approach seems to me to be to:
1) scale the atmospheric temperature lapse rate (and thus the surface temperature) to accommodate
2) the change in absorption/emission from the doubling of CO2 sufficient to
3) retain the TOA boundary conditions and
4) TOA thermal equilibrium.

i.e.,
3.1) holding solar insolation constant,
3.2) no incoming long wave radiation,
3.3) outgoing short wave radiation plus long wave radiation equals the incoming solar insolation.

5) Hold all other parameters constant, including
5.1) atmospheric CO2 concentration
5.2) atmospheric humidity profile.
5.3) surface albedo,
5.4) clouds (including cosmic rays)
5.5) convection, and
5.6) conduction.

1.1) Scaling the lapse rate may require both a slope and a pivot altitude.
(See Vaughan Pratt’s 2 tap bathtub.)

1.2) Alternatively, scaling the lapse rate may involve the assumption of holding the atmospheric profile constant above a given altitude.
e.g. US-ST76 standard atmosphere assumes a constant temperature above about 11 km.

Spencer points out that the assumed base CO2 concentration makes a big difference, e.g. adding 300 ppm starting from 0 ppm vs. 300 ppm vs. 1200 ppm.

A major question is what column moisture to use. The global average precipitable moisture that Miskolczi reports from averaging TIGR is about twice that of US-ST76. See slides 67 and 68.
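As a minimal numerical sketch of how steps 2–4 reduce in the simplest textbook treatment, one can invert the Stefan-Boltzmann law at the effective emission temperature and carry the warming down the unchanged lapse rate. This assumes the canonical 3.7 W/m² forcing and a 255 K emission temperature, with everything else held fixed as in step 5:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def no_feedback_dT(dF, T_emit=255.0):
    """Warming of the emission level needed to restore TOA balance:
    dF = 4 * sigma * T^3 * dT  =>  dT = dF / (4 * sigma * T^3)."""
    return dF / (4 * SIGMA * T_emit**3)

dT = no_feedback_dT(3.7)  # ~0.98 K
# With the lapse rate held fixed (step 1), the surface warms by the same dT.
print(round(dT, 2))
```

This is why the canonical answer is "about 1 C"; the ~1.2 C from the models comes from the latitudinal and seasonal variations the sketch ignores.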

• David L. Hagen

See Gerard Roe, “Feedbacks, Timescales, and Seeing Red”, where Roe treats the lapse rate as a feedback distinct from the CO2 doubling sensitivity.
So “no-feedback” includes no lapse rate change?
If so, change my comment to lapse-rate-only feedback!

64. Michael Larkin

As a bozo on the bus, I have been following this discussion (as best I can) with a mixture of interest and amazement. I wonder how many ordinary folk realise how artificial all these speculations are. I mean – come on – stopping the world, holding everything constant, but immediately doubling the amount of CO2 to come up with a figure of no-feedback sensitivity. Are people listening to themselves?

We’ve got the world we’ve got. The earth and its atmosphere is what it is. CO2 isn’t going to double in one fell swoop, and its effect isn’t going to be instantaneous. We can’t arbitrarily rule out actual phenomena such as convection, or the interdependency of variables, some of which may not even be known.

The more I read, the more I become aware of the ignorance of experts; I’m not saying they aren’t clever or sincere or doing the best they can, but if only more had the humility to recognise that.

It’s been pointed out that the primary usefulness of this thread might be in raising questions that may not have been asked before, or if they have, not seriously: and how salutary is that? Dr. Curry has emphasised uncertainty in the past, but underlying that is a lot of ignorance. It’s only at this blog, and particularly with this thread, that I have come to realise how flaky and shaky all this stuff is (don’t get me wrong: it’s a good realisation).

Take a few laws of physics; add a soupçon of well-known and well-validated equations applicable to simple and sometimes theoretical situations; stir in judicious amounts of SWAG (GCMs perhaps being the most egregious example); then have at interpreting mostly manipulated empirical data in light of that. Meanwhile, let the movers and shakers who can’t perceive the ignorance and uncertainty press for action with who-knows-what unforeseen consequences.

I’ve said it before – it’s FUBAR, but there’s sweet Fanny Adams ordinary people can do about it. If only they could see and grasp even the flavour of the discussion here: it’s downright mediaeval. Seriously – I’ve read discussions about parapsychology that are based on appreciably less speculation and ignorance.

• Arfur Bryant

Michael Larkin,

Shame on you, Sir. How dare you attempt to introduce the concept of ‘the real world’ into an esoteric discussion on CO2 sensitivity?

Ps, Well said, mon brave… :)

• The discussions on no-feedback sensitivity tell more about the difficulties of presenting our understanding of atmospheric behaviour than about the understanding itself. The question is almost semantic: what is the meaning of the expression “no-feedback”, when the changes considered are themselves at least partially feedbacks?

For the real CO2 sensitivity with feedbacks these problems are not present, but then we naturally face the serious gaps in detailed knowledge of atmospheric processes.

• Michael Larkin you write “I’ve said it before – it’s FUBAR, ”

I agree with you 100%. I wrote at December 12, 2010 at 9:53 am

“So my claim is that deltaT (change in surface temperature for a doubling of CO2 with no feedbacks) is a hypothetical and meaningless number. This, together with your words, has enormous political implications.”

I still think I am correct. Would you agree with me? And if you do, does not this completely negate the idea that the IPCC has shown that CAGW is real? I cannot prove that CAGW is wrong, but the IPCC has not proved that it is right.

• Michael Larkin

I certainly agree that no one has shown CAGW to be real, whether it is or not. But it’s worse than that, methinks. Underneath it all, no one understands how the climate system works and people are playing with concepts that have limited relation to the real world.

65. Christopher Game

Am I jumping the gun by asking the following question?
As I read for example Bony et al. 2006, Soden and Held 2006, there are in the IPCC dogma four “feedbacks”: increased atmospheric optical thickness due to increased water vapour column amount due to sustained relative humidity; cloud radiative effects; albedo effects; lapse rate effects.

It seems to me that it is imaginable that there would also be a change in evaporative-circulatory cooling of the land-sea surface, with an effect on the land-sea surface temperature by change in rate of evaporation.

Would some kind person be willing to spare a little time to tell me: is this possibility represented in the four “feedbacks” of the dogma? How? Why?

Christopher Game

• surface evaporative cooling seems to be left out in these analyses (of course it is included in the global climate models)

66. Mike Jonas

I am not convinced that this conversation is meaningful. Judith, you ask “So how do we define this problem to make sense? Or can we?”. I don’t think we can (as I attempt to explain below), and even if we can, I doubt it can be meaningful. Unsurprisingly, most of the arguments in the comments are based on what is or is not in the definition.

We are trying to talk about CO2 and temperature only, the “no-feedback” equilibrium sensitivity: “The no feedback sensitivity is the direct response of the surface temperature to radiative forcing by the increased CO2, without any feedbacks.”
Peter317 says “Any increase in the temperature has to increase the energy flow away from the body by all means it can, ie conduction, convection, evaporation as well as radiation. When the outflow of energy equals the inflow then thermal equilibrium exists with the body at a higher temperature. But that temperature cannot be as much as ~1C higher, as the radiative outflow has to be less than the total outflow.”
Are conduction, convection and evaporation themselves feedbacks, or are they part of the no-feedback equation?
If they are feedbacks, then surely Fred Moolten is correct (no-feedback sensitivity is ~1C).
If conduction, convection and evaporation are not feedbacks, it is still the case that they cannot of themselves leave the system (ie. into space). Therefore when equilibrium is reached they will have zero net effect. So we still have ~1C.
But the heat carried from the surface by conduction, convection and evaporation can then be radiated from the atmosphere. If this is not feedback, but part of the no-feedback equation, then Jeff Id and Peter317 are correct, ie. <1C.
But in the case of evaporation, with higher temperatures there will then be more water vapour (and more CO2) in the atmosphere. Thus more outgoing radiation will be trapped. If this is a feedback then it doesn’t count, but if not then it does. And so on, and so on.

So unless you are absolutely specific as to what is a feedback and what isn't, the no-feedback sensitivity cannot be calculated. Whatever you decide on, the calculation is in any case a highly artificial construct and IMHO unlikely to be meaningful.

To my simple mind, the key statement here is by Chris Colose talking about the troposphere: “It is the corresponding warming of the low level air that drags the surface temperature along with it”. Since the planet is showing obvious signs that it is the other way round in practice – surface temperature drags the low level air temperature along with it – we are arguing about something that appears to be of rather low importance. Wouldn’t we do better to look at what causes the surface temperature changes in the first place?

• Mike, I agree, I am asking the question of what causes the surface temperature to change in the first place.

• Mike Jonas

Actually, you aren’t asking what causes the surface temperature to change in the first place. You are asking about the effect of doubling CO2 : “the direct response of the surface temperature to radiative forcing by the increased CO2, without any feedbacks“.
If you think you are asking “what causes the surface temperature to change in the first place“, then, like the IPCC, you have assumed the cause is CO2.
In the real world, the fact that surface temperature drags the low level air temperature along with it shows this assumption to be false.

• Brian H

Indeed. As was commented elsewhere, getting cause and effect inverted in understanding control circuits is a common mistake.

For some reason I am suddenly reminded of Skinner’s seminal study, “Superstition and the Pigeon”, in which he first found that randomly giving food to starving pigeons caused them to develop all sorts of elaborate dances and gyrations which they went through repetitively. He described it as “operant conditioning” rather than hopeful goal-directed behavior, though, and thus was born the multi-decadal charge down the Behaviorist cul-de-sac.

Perhaps it’s the feeding of the CAGW pushers with billion$$ and vast political clout that has so locked-in their conditioned responses? Nawww …

67. snide

http://scienceofdoom.com/2010/12/07/things-climate-science-has-totally-missed-convection/

With apologies to my many readers who understand the basics of heat transfer in the atmosphere and really want to hear more about feedback, uncertainty, real science..

Clearing up basic misconceptions is also necessary. It turns out that many people read this blog and comment on it elsewhere and a common claim about climate science generally (and about this site) is that climate science (and this site) doesn’t understand/ignores convection.
The Anti-World Where Convection Is Misunderstood

Suppose – for a minute – that convection was a totally misunderstood subject. Suppose basic results from convective heat transfer were ridiculed and many dodgy papers were written that claimed that convection moved 1/10 of the heat from the surface or 100x the heat from the surface. Suppose as well that everyone was pretty much “on the money” on radiation because it was taught from kindergarten up.

It would be a strange world – although no stranger than the one we live in where many champions of convection decry the sad state of climate science because it ignores convection, and anyway doesn’t understand radiation..

In this strange world, people like myself would open up shop writing about convection, picking up on misconceptions from readers and other blogs, and generally trying to explain what convection was all about.

No doubt, in that strange world, commenters and bloggers would decry the resulting over-emphasis on convection..
First Misconception – Radiation Results are All Wrong Because Convection Dominates…

68. Tomas Milanovic

Judith, just look at how bungled this “sensitivity” concept is.

(1) F = ε σ T⁴ (definition of emissivity)
dF = 4 ε σ T³ dT + σ T⁴ dε ⇒ dT = [1/(4 ε σ T³)] (dF − σ T⁴ dε)

Ta = (1/S) ∫ T dS (definition of the average temperature at time t over a surface S)

Now if we differentiate under the integral sign, even though it is mathematically illegal because the temperature field is not continuous, we get:

dTa = (1/S) ∫ dT dS

Substituting for dT:
dTa = (1/S) ∫ [1/(4 ε σ T³)] (dF − σ T⁴ dε) dS

Now this is a sum of two terms:

dTa = −{(1/S) ∫ [T/(4 ε)] dε dS} + {(1/S) ∫ [dF/(4 ε σ T³)] dS}

The first term is due to the spatial variation of emissivity.
It can of course not be neglected, because even if liquid and solid water have an emissivity reasonably constant and near 1, this is not the case for rocks, sand, vegetation etc. It is then necessary to compute the integral, which depends on the temperature distribution.
E.g., the same emissivity distribution with different temperature distributions gives different values of the integral, and hence different “sensitivities”.

The second term is more problematic.
Indeed, the causality in the differentiated relation goes from T to F. If we change the temperature by dT, the emitted radiation changes by dF.
However, what we want to know is what happens to T when the incident radiation changes by dF.
This is a dynamical question whose answer cannot be given by the Stefan-Boltzmann law, but by Navier-Stokes (for convection), the heat equation (for conduction), and thermodynamics for phase changes and bio-chemical energy.

OK, as we can’t answer this one, let’s just consider the final state, postulated as being an equilibrium.
Not enough; we must also postulate that the initial and final “equilibrium” states have EXACTLY the same energy partition among the conduction, convection, phase-change and chemical energy modes.
In other words, the radiation mode must be completely decoupled from the other energy transfer modes.
Under those (clearly unrealistic) assumptions we will have in the final state dF emitted = dF absorbed.

Now comes the even harder part. The fundamental equation (1) is only valid for a solid or some liquids, so the temperatures and fluxes considered are necessarily evaluated at the Earth’s surface.
If we took any other surface (sphere) passing through the atmosphere, all of the above would be gibberish.

Unfortunately the only place where we know something about the fluxes is the TOA, because it is there that we postulate that radiation in = radiation out.
This is also wrong (just look at the difference between the night half, the day half, and the sum of both), but it is the basic assumption of all climate models so far.
So what we postulate at some height R where the atmosphere is supposed to “stop” is:
F_TOA = g(R, θ, φ), with g some function.
From there, via a radiative transfer model and an assumed known lapse rate, we get to the surface and obtain F = h(R, θ, φ), with h some other function depending on g (note that h, and hence F, also depends on the choice of R, i.e. the choice of where the atmosphere “stops”).
The last step is just to differentiate F, because we need dF in the second integral.
dF = (∂h/∂θ) dθ + (∂h/∂φ) dφ

Now substitute dF and compute the second integral. The sum of both gives dTa, i.e. the variation of the average surface temperature.
We can also define the average surface flux variation:
dFa = (1/S) ∫ dF dS

It appears obvious that {∫ [1/(4 ε σ T³)] (dF − σ T⁴ dε) dS} / ∫ dF dS
(i.e. dTa/dFa) will depend on the spatial distribution of the temperatures and emissivities on the surface, as well as on the particular form of the h function which transforms TOA fluxes into surface fluxes.
It will of course also change with time, but this dynamical question has been set aside by considering only initial and final equilibrium states, even though there actually never is equilibrium.

A careful reader will have noted and concluded by now that it is impossible to evaluate these two integrals, because they require knowledge of the surface temperature field, which is precisely the unknown we want to identify.
The parameter dTa/dFa is nonsense which can have only limited use for black bodies in radiative equilibrium without other energy transfer modes.
The Earth is neither the former nor the latter.
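The dependence of dTa/dFa on the spatial temperature distribution can be illustrated numerically. This is a hedged sketch, not the full calculation above: emissivity is held at 1 (so the dε term vanishes) and the temperature values are invented purely for illustration:

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
dF = 3.7          # W/m^2, canonical doubled-CO2 forcing

# Two surfaces with the SAME 255 K mean temperature but different spatial
# distributions (the varied numbers are invented for illustration only).
uniform = np.full(4, 255.0)
varied = np.array([220.0, 240.0, 270.0, 290.0])  # mean is also 255 K

# Area-average of the local response dT = dF / (4 sigma T^3)
dTa_uniform = np.mean(dF / (4 * SIGMA * uniform**3))  # ~0.98 K
dTa_varied = np.mean(dF / (4 * SIGMA * varied**3))    # ~1.05 K

print(dTa_uniform, dTa_varied)
```

Same mean temperature, same forcing, different average response: the "sensitivity" is not a property of the mean state alone.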

69. manacker

Judith

I believe that Mike Jonas has raised a really basic question here.

we are arguing about something [CO2] that appears to be of rather low importance. Wouldn’t we do better to look at what causes the surface temperature changes in the first place?

I happen to think that clouds are a very good place to look.

[Roy Spencer has voiced a similar opinion on “clouds” as a natural forcing factor.]

If we are myopically looking at basically all climate forcing as responses to GHGs (primarily CO2) or feedbacks to these responses, we are missing this very basic question.

It would appear to be a topic that is separate from that of this thread (which already presupposes CO2 as a cause), so it should be covered separately – maybe with an opening comment by Mike Jonas, similar to the ones he has posted here.

Should open up some interesting discussions.

What do you think?

Max

• clouds will be a big topic here right after the first of the year.

• manacker

Great!

Thanks.

Max

70. Unwashed

You people are awesome. Really. I’ve followed this thread all the way down and it would be difficult to fully express my personal gratitude to have been witness to an adult conversation by real experts on a topic fundamental to climate research. No appreciable snark, lots of alternate explanations offered and explained in detail (some frankly over my head), and enormous mutual tolerance for differing perspectives and preferred methodologies.

Thank you.

Venturing a naive observation based on experience in other fields, it seems to me there are two primary uses for models:

1. To improve knowledge of how complex processes work in theory, and….
2. To forecast how those complex processes will eventuate in the real world.

It seems to me that’s a vital distinction of purpose. We may develop a very high degree of confidence that theories tested in models produce sensible outcomes, which may be instructive. Achieving that minimum goal, however, offers absolutely no assurance that a generally sensible model result justifies confidently projecting changes of only a few degrees in surface temperature over a future century, when the forecast error of the model is neither known nor knowable.

Many of us non-experts who remain stuck in the “sceptics” camp are not happy to be regarded as ignorant AND stupid, though perhaps we are both and just don’t “get it”. Some of us simply cannot accept as remotely credible a claim to 95% confidence in distant-future forecasts generated by models which are inherently incapable of faithfully reproducing the full complexity of the real world climate, particularly when they rest on elementary assumptions which are neither proven nor provable.

• Brian H

I would like to point out here that 95% confidence is an absurdly low bar to clear for a subject as wide open to error and bias sources as “Climate Science”. It is specifically such things as data selection and confirmation bias that a high level of confidence of rejection of the null hypothesis (temperature is not controlled by CO2 fluctuations, in this case) is supposed to prevent. Given the stakes, a very high level of confidence is warranted!

The lack of ab initio participation by professional statisticians is one of many travesties in the field.

• Brian H

edit: “demanding a very high level…”

72. novandilcosid

When I look at the temperature and sea level (Fairbridge curve) history of the planet over the last 20,000 years, I am struck by the similarity to the response of a second-order under-damped system to a step function, possibly with additional third- and higher-order factors which account for the 30-odd-year sinusoidal minor variation and other as yet undetected responses.
The Bode equation does not describe the planetary temperature/sea-level responses, nor could it generate them; it is far too simplistic a representation of climate. The bottom line of that equation also implies unconditional instability, which does not appear to be a feature of our planet.

I think Dr Curry is correct to be concentrating on the Surface. It receives energy from the sun and transfers it to space (some through the atmosphere). In a very real sense the surface determines the temperature of the atmosphere, NOT the other way round: the atmosphere transfers no net energy to the surface; all the net energy flow is in the other direction.

Suppose that the temperature of the surface dropped by 10 DegC. Is there any doubt that the atmosphere would respond by also falling in temperature? It has to.
On the other hand it is NOT possible for the tropopause to heat the surface. All that might happen is that an increase in atmospheric temperature might decrease conduction from the surface and also decrease the net radiation from the surface into the atmosphere (i.e. increase the back radiation from the atmosphere). This change in the surface conditions will tend to heat up the surface, but its temperature response to these changes is markedly less than the response of the upper atmosphere, as there will necessarily be a change in the evaporation rate. (This is NOT a feedback. Evaporation is a determinant of the surface temperature. The rate of evaporation is a major assumption in the models – if you assume a high rate of change with temperature you get virtually no temperature increase. If you assume a really low rate, as the models do, then you get scary temperature changes.)

The transfer of energy to the atmosphere from the surface is effected in 3 ways:
a. Conduction, about one fifth.
b. Radiation, temperature-dependent and greenhouse-affected, about one fifth.
c. Evaporated water, temperature-dependent, about three fifths.
All this energy, equal to the surface insolation, enters the atmosphere as kinetic energy.

That energy leaves the atmosphere by radiation from IR-active molecules. The vast bulk is from water, mostly from the region around the tops of the clouds. A small amount is radiated by CO2, mostly from near the tropopause. (Don’t think it’s small? Take a dekko at the spectra of outgoing radiation. The CO2 contribution is not great.)

I dislike very much the notion that one could suddenly double the amount of CO2, get a “radiative forcing” at the Tropopause, and then have this heat up that area, and “work its way down” to the surface (IPCC AR4 WG1 Fig2-2). That doesn’t happen.

If there is a doubling of CO2 we expect a DECREASE in the amount of energy radiated from the CO2 layer balanced by an INCREASE in the amount of energy radiated from the water layer. ie the upper atmosphere warms sufficiently for the NET outgoing radiation to balance. There is no reason to suppose that the temperature change will be uniform.

If one wishes to invoke a constant lapse rate to require the surface to respond by the same amount, it is necessary to demonstrate the mechanism by which the elevated surface temperature is maintained. How does the back-radiation increase sufficiently for both the Planck radiation and the elevated evaporation rate to be maintained?

• Jim D

A brief form of the constant-lapse-rate argument is that the troposphere primarily gets its heat via transfer from the surface, which in turn is primarily heated by the sun. Taking these net transfers into account, the troposphere’s temperature is governed by the surface temperature and therefore has to warm from the surface upwards. The lapse rate is due to the convection that supplies this transfer of heat from the surface to troposphere.
I deliberately refer to the troposphere here, as that is the part of the atmosphere that is influenced by the surface.
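The constant-lapse-rate argument can be put as one line of arithmetic. The emission height and lapse rate below are round illustrative numbers, not measured values:

```python
GAMMA = 6.5    # K/km, typical mean tropospheric lapse rate (assumed)
z_eff = 5.0    # km, effective emission height (illustrative round number)
T_eff = 255.0  # K, effective emission temperature

# Surface temperature implied by a fixed lapse rate below the emission level:
T_surf = T_eff + GAMMA * z_eff  # 287.5 K, roughly the observed ~288 K

# If the emission level warms by dT while the lapse rate is held fixed,
# the whole profile shifts and the surface warms by the same dT:
dT = 1.0
T_surf_new = (T_eff + dT) + GAMMA * z_eff
print(T_surf_new - T_surf)  # 1.0
```

The disputed physics is whether the lapse rate really stays fixed; if it changes, that change is counted as the lapse-rate feedback.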

73. novandilcosid

Oops! Error in the above. I stated:
“All this energy, equal to the surface insolation, enters the atmosphere as kinetic energy.”

I forgot about the window direct to Space – again! It should read:
“All this energy enters the atmosphere as kinetic energy.”

74. Geoff Sherrington

Another thought experiment. If all weather station thermometers/thermistors were raised an extra meter above the ground, the net effect would be a cooler global temperature measurement. Would this affect the balance of output/input at the TOA?

75. philc

“a metric used to characterise the response of the global climate system to a given forcing. It is broadly defined as the equilibrium global mean surface temperature change following a doubling of atmospheric CO2 concentration.”

This formulation seems rather simplistic and biased. What about other sensitivities, such as the climate response to changes in surface use/configuration? Or the climate response to perturbations from point heat sources (UHI)? Or climate changes in response to changes in the sun’s magnetosphere?

The formulation of the statement assumes its answer.