CO2 no-feedback sensitivity: Part II

by Judith Curry

So how do we define this problem so that it makes sense?  Or can we?  To focus the discussion started on the previous thread, I am highlighting some of its defining or thought-provoking statements:

Mike Jonas states:

So unless you are absolutely specific as to what is a feedback and what isn’t, the no-feedback sensitivity cannot be calculated. Whatever you decide on, the calculation is in any case a highly artificial construct and IMHO unlikely to be meaningful.

The essence of the challenge is described by Pekka Pirila:

The discussions on no-feedback sensitivity tell more about the difficulties in presenting the understanding of atmospheric behaviour than about the understanding itself. The question is almost semantic: what is the meaning of the expression “no-feedback”, when the changes being considered are themselves at least partially feedbacks?

For the real CO2 sensitivity with feedbacks these problems are not present, but then we naturally face the serious gaps in detailed knowledge of atmospheric processes.

Tomas Milanovic lays it all out:

Judith, just look at how bungled this “sensitivity” concept is.

(1) F = ε·σ·T⁴ (definition of emissivity)
dF = 4·ε·σ·T³·dT + σ·T⁴·dε  =>  dT = [1/(4·ε·σ·T³)]·(dF − σ·T⁴·dε)
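
For orientation, here is a minimal numerical sketch of what that differential gives with the textbook inputs (a black body, an effective emission temperature of 255 K, and the commonly quoted 3.7 W/m² forcing for doubled CO2; these round figures are illustrative, not derived here):

    sigma = 5.670374e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    eps   = 1.0           # emissivity of an idealised black body
    T     = 255.0         # effective emission temperature, K (illustrative)
    dF    = 3.7           # commonly quoted forcing for doubled CO2, W m^-2
    d_eps = 0.0           # emissivity held fixed

    # dT = (dF - sigma*T**4*d_eps) / (4*eps*sigma*T**3)
    dT = (dF - sigma * T**4 * d_eps) / (4.0 * eps * sigma * T**3)
    print(round(dT, 2))   # ~0.98 K, the familiar ~1 K no-feedback figure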

Ta = (1/S)·∫ T·dS (definition of the average temperature at time t over the surface S)

Now if we differentiate under the integral sign, even though it is mathematically illegal because the temperature field is not continuous, we get:

dTa = (1/S)·∫ dT·dS

Substituting for dT:
dTa = (1/S)·∫ [1/(4·ε·σ·T³)]·(dF − σ·T⁴·dε)·dS

This splits into two terms (note the minus sign on the emissivity term):

dTa = −{ (1/S)·∫ [T/(4·ε)]·dε·dS } + { (1/S)·∫ [dF/(4·ε·σ·T³)]·dS }

The first term is due to the spatial variation of emissivity.
It cannot of course be neglected, because even if liquid and solid water have a reasonably constant emissivity near 1, this is not the case for rock, sand, vegetation etc. It is then necessary to compute the integral, which depends on the temperature distribution: the same emissivity distribution with different temperature distributions gives different values of the integral, and therefore different “sensitivities”.

The second term is more problematic. Indeed, the causality in the differentiated relation goes from T to F: if we change the temperature by dT, the emitted radiation changes by dF. However, what we want to know is what happens to T when the incident radiation changes by dF. This is a dynamical question whose answer cannot be given by the Stefan-Boltzmann law, but only by Navier-Stokes (for convection), the heat equation (for conduction) and thermodynamics (for phase changes and bio-chemical energy).

OK, as we can’t answer this one, let’s just consider the final state, postulated to be an equilibrium. Not enough: we must also postulate that the initial and final “equilibrium” states have EXACTLY the same distribution of energy among the conduction, convection, phase-change and chemical energy modes. In other words, the radiation mode must be completely decoupled from the other energy transfer modes. Under those (clearly unrealistic) assumptions we will have, in the final state, dF emitted = dF absorbed.

Now comes the even harder part. The fundamental equation (1) is only valid for a solid or some liquids, so the temperatures and fluxes considered are necessarily evaluated at the Earth’s surface. If we took any other surface (a sphere passing through the atmosphere), all of the above would be gibberish.

Unfortunately the only place where we know something about the fluxes is the TOA, because it is there that we postulate that radiation in = radiation out.
This is also wrong (just look at the difference between the night half, the day half and the sum of both), but it is the basic assumption of any and all climate models so far. So what we postulate at some height R where the atmosphere is supposed to “stop” is:
F_TOA = g(R,θ,φ), with g some function.
From there, via a radiative transfer model and an assumed known lapse rate, we get to the surface and obtain F = h(R,θ,φ), with h some other function depending on g (note that h, and therefore F, also depends on the choice of R, i.e. on where the atmosphere is taken to “stop”).  The last step is just to differentiate F, because we need dF in the second integral.
dF = (∂h/∂θ)·dθ + (∂h/∂φ)·dφ

Now substitute dF and compute the second integral. The sum of both terms gives dTa, i.e. the variation of the average surface temperature. We can also define the average surface flux variation:
dFa = (1/S)·∫ dF·dS

It appears obvious that { (1/S)·∫ [1/(4·ε·σ·T³)]·(dF − σ·T⁴·dε)·dS } / { (1/S)·∫ dF·dS }
(i.e. dTa/dFa) will depend on the spatial distribution of temperatures and emissivities over the surface, as well as on the particular form of the function h which transforms TOA fluxes into surface fluxes. It will of course also change with time, but this dynamical question has been sidestepped by considering only initial and final equilibrium states, even though there never actually is equilibrium.
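
To make the spatial-distribution point concrete, here is a minimal two-box sketch (toy numbers: uniform emissivity, and the same uniform flux perturbation dF applied to two temperature fields with the same mean):

    import numpy as np

    sigma, eps = 5.670374e-8, 1.0
    dF = 3.7                              # same uniform flux perturbation in both cases, W m^-2

    def dTa(T_field):
        # area-mean of dF / (4*eps*sigma*T^3), i.e. (1/S) times the integral of dT over the surface
        return np.mean(dF / (4.0 * eps * sigma * T_field**3))

    T_uniform = np.array([288.0, 288.0])  # mean 288 K, no spread
    T_spread  = np.array([258.0, 318.0])  # mean 288 K, large spread

    print(dTa(T_uniform), dTa(T_spread))  # ~0.68 K vs ~0.73 K: same dFa, different dTa/dFa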

A careful reader will have noted and concluded by now that it is impossible to evaluate these 2 integrals because they necessitate the knowledge of the surface temperature field which is precisely the unknown we want to identify.
The parameter dTa/dFa is a nonsense which can only have a limited use for black bodies in radiative equilibriums without other energy transfer modes.
The Earth is neither the former nor the latter.

So can we salvage anything from this concept?  Does the method proposed by Jinhua Lu help conceptualize this problem in a better way?

236 responses to “CO2 no-feedback sensitivity: Part II”

  1. I was actually becoming less skeptical. Now, not so much.

  2. If anyone needs a physics resource, Hyperphysics has a lot of information presented in a useful and interesting format.

  3. The more I read these threads – the more it becomes clear just how very little is known about CO2’s effect on climate.

    How such supposed ‘certainty’ is derived from such poor empirical evidence, i.e. computer models, which in turn are based on the poorly understood radiative properties of all the climate components, let alone CO2, is simply laughable.

  4. Richard S Courtney

    Dr Curry:

    You ask:
    “So can we salvage anything from this concept? ”

    With respect, I want to go back a further step by asking another question; i.e.
    Why would we want to salvage anything from the concept of ‘no-feedback sensitivity’ when no philosophical or practical uses of the concept have been evinced?

    I remain convinced that the concept is an intellectual cul-de-sac.

    As I tried to imply on the previous thread, I think discussion of the concept is akin to discussing how many photonic angels can dance on the head of a climatological pin. And the previous thread included little that was not discussion of such ‘dancing’.

    Richard

    • Richard – dancing on the head of a pin it may be, but IMHO the answer seems to be crucial. By my (meagre) understanding, it seems that the whole basis of the consensus argument/methodology/subsequent modelling is critically undermined if the radiative transfer (no feedback) science is found to be unsound. Like Jim, my skepticism is increasing the more we discuss this.

      • Richard S Courtney

        RobB:

        Thank you for your comment.

        First, I need to state that I am an AGW-skeptic and, therefore, my views originate from that position.

        My point is that ‘no-feedback’ sensitivity is a hypothetical concept. There has always been CO2 in the atmosphere and consideration of its absence leads nowhere in understanding how the climate system responds to changes.

        We need to know how the climate system behaves and how it responds to changes, but consideration of a hypothetical, different climate system tells us nothing about those things. (Pekka Pirila says this in different words in the quotation from him in the article above, although I am not clear that this was his intention.)

        Furthermore, I think the concept of ‘feedbacks’ is a similar dead-end and for the same reasons; i.e. the entire system responds to changes and any attempts to decide which of those changes are – or are not – feedbacks is pointless. But that is a subject for a different thread.

        Richard

      • Tomas Milanovic

        RobB
        … the whole basis of the consensus argument/methodology/subsequent modelling is critically undermined if the radiative transfer (no feedback) science is found to be unsound.

        I wouldn’t necessarily say that it is unsound or has been proven unsound.
        I am much more with R. Courtney in saying that it is unfeasible (look at the integrals; they are impossible to compute even with those unrealistic assumptions like equilibrium, etc.) and useless even if the equations are sound.

        Once somebody has admitted that the real Earth is neither a black body nor in equilibrium, which clearly doesn’t take a superhuman intellectual effort, it is just a small step to conclude that this dTa/dFa (constant averaged climate “sensitivity”) business is just a waste of time.

      • Tomas – I am near the limit of my understanding, but from my reading of the various threads, I am led to ask whether these doubts about the utility of radiative transfer science call into question the mechanization of the climate models? Do assumptions about CO2 sensitivity (no feedback) underpin some of the later science that is used in the consensus argument or is this just a hypothetical discussion? In other words, why are we discussing this thread? What is its significance?

      • The discussion here doesn’t particularly relate back to the global climate models. Owing to the complexity, large number of degrees of freedom, and uncertainty of the climate models, it is deemed useful to develop some sort of conceptual understanding of the broad energetics of the climate from a systems perspective. I think this is a worthy goal, but linear control theory (the basis for the simplistic feedback/no-feedback analyses) doesn’t seem up to the task, at least in the way it has been formulated in the context of the climate system. I am planning a post to dig back into the history of how it came to be formulated in this way. Even if this method turns out to be fatally flawed or merely not useful, it is worthwhile to ponder an alternative systems-type approach for analyzing the climate system’s sensitivity to external perturbations.
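
        For readers unfamiliar with the bookkeeping being criticised here, the linear feedback framework in its simplest form amounts to something like the following sketch (the feedback factors below are placeholders for illustration only, not assessed values):

            # Simplest linear feedback bookkeeping: dT = dT0 / (1 - sum of feedback factors)
            dT0 = 1.2                      # no-feedback (Planck-only) response to 2xCO2, K (commonly quoted)
            feedback_factors = {           # placeholder values, purely for illustration
                "water_vapour": 0.4,
                "lapse_rate":  -0.1,
                "albedo":       0.1,
            }
            f = sum(feedback_factors.values())
            dT = dT0 / (1.0 - f)           # blows up as f approaches 1; meaningless if the system is not linear
            print(round(dT, 2))            # 2.0 K with these made-up numbers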

      • Richard S Courtney

        Dr Curry:

        Thank you. Yes. That summarises what I was trying to say.

        Richard

      • Alexander Harvey

        Judith,

        Taken in its narrow realm, namely the relationship between global temperature anomalies, forcing, noise flux, and thermal and radiative responses, I think that you might struggle to show ways in which a complex non-linear deterministic world produces effects that can readily be distinguished from realisations of such a simple, linear, stochastic model of the LTI system type.

        By definition the simple model does not do dynamics; it deals in the what, not the how. It can be informed by the actual dynamical modes of the world or of an AOGCM, but not vice versa. It could be argued that the simple model has to learn everything it knows from data, but then the AOGCMs can hardly be said to be unschooled in such matters.

        The problem with the simple models is that they learn only what they are taught, but they learn it well, and they can project the future, arguably with skill and with calculable uncertainty.

        In principle they can be inverted to translate temperature records into flux forcing (signal plus noise), either for the real world or perhaps just as instructively for model output.

        They can be used as a basis for attribution between underlying trend and implied forcing and stochastic variation.

        Given their restrictive definition I am not sure what more could be asked of them.

        They do not do non-linearity, but I should like to know where the global temperature record contains evidence that quantifies a non-linearity in the response.

        They don’t do dynamics, they don’t inform as to why the days are warm and the nights are cold or why the Sahara is rather dry.

        They get some things wrong. For instance, they overestimate the effect of volcanism given the published forcings; here they are in good company along with the AR4 ensemble mean.

        They do what they do, and I think they do it well. They may not do what people want or what some may think they are claimed to do. They are but what they are.

        Alex

      • John F. Pittman

        Don’t these comments address the need to look at the models from the TOA, the application of hyperviscosity in PDE differencing, and Dr. Browning’s work on initial value problems, rather than a non-existent frame of reference?

      • Tomas wrote:
        “it is just a small step to conclude that this dTa/dFa (constant averaged climate “sensitivity”) business is just a waste of time.”

        Even a bigger waste of time is to try to compare these “structurally unstable” calculations with available measurements of reality. It is obvious that the function dTa/dFa is not analytic; it has singularities at nearly every point. Due to the non-uniform (spotty) spatial distribution of T(s), there is a whole space of functions with different Ta but the same Fa, Fa = constant. For the mathematically challenged, I have provided examples of this odd feature of the “globally averaged temperature” on several blog occasions. Therefore the change in Fa is zero (dFa = 0) on an uncountable set of functions T(s) for which the changes dTa are non-zero (dTa ≠ 0), such that the climatological “sensitivity” dTa/dFa becomes infinite. Therefore there are an infinite number of “angels” on the climatological pin, which makes all these exercises formally nonsense.
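
        A two-box illustration of that “same Fa, different Ta” point (toy numbers; the second box temperature is solved so that the area-mean emitted flux is exactly unchanged):

            import numpy as np

            sigma = 5.670374e-8

            # Case A: uniform 288 K over two equal-area boxes
            TA = np.array([288.0, 288.0])

            # Case B: cool one box to 268 K, then choose the other box so that mean(sigma*T^4) is unchanged
            T1 = 268.0
            T2 = (2.0 * np.mean(TA**4) - T1**4) ** 0.25
            TB = np.array([T1, T2])

            print(sigma * np.mean(TB**4) - sigma * np.mean(TA**4))  # ~0: dFa = 0 by construction
            print(np.mean(TB) - np.mean(TA))                        # ~ -1.7 K: yet dTa is not zero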

      • My verbal understanding of that is this:
        There are many loosely interconnected surface regimes on the planet, almost all of which are in serious disequilibrium thermally and radiatively with the TOA and the space around the planet. Some are emitting strongly and cooling, some are absorbing strongly and heating, and others are swinging back and forth. Reasons for these differences are many and varied. Valid and reliable mathematical prediction requires knowing all of them in some detail.

        Is that about right?

      • I think it is a good description. I can only add that “knowing all of them in some detail” requires a proper solution of the Navier-Stokes equations coupled with the convective diffusion of quantities such as temperature, water vapor, and GH gases. More importantly, it requires a coupled equation (and boundary conditions) for the transport of dust, nanoparticles and other nucleation inhibitors, about which much remains to be learned. Without them the system cannot produce clouds on its own, and the weather-climate system will lack its major regulating agent.

      • Tomas Milanovic

        There are many loosely interconnected surface regimes on the planet, almost all of which are in serious disequilibrium thermally and radiatively with the TOA and the space around the planet.

        Yes, that is a pretty correct understanding, to which I would only add that those regions are in disequilibrium not only with space but with everything, including neighbouring regions in different regimes.
        What you just described is a typical example of dynamics in spatio-temporal chaos.

    • The reason this is being discussed is that it has raised its ugly head many times in discussions of global warming. I remember it being trotted out by Lindzen in one of his presentations, and I have seen it stated many times that the no-feedback temperature increase with a doubling of CO2 is 1 C or 1.2 C. I’m kind of surprised you even have to ask the question. A more interesting observation is that climate scientists let these kinds of statements stand unchallenged until they are shown to be flawed. Then, all of a sudden, there is a great backing down.

      • Richard S Courtney

        Jim:

        You assert in response to me:
        “A more interesting observation is that climate scientists let these kinds of statements stand unchallenged until they are shown to be flawed. Then, all of a sudden, there is a great backing down.”

        Your comment is presented as a response to my post.

        I do not know what you mean by “climate scientists” but, with respect, I have not backed down on this, although I have repeatedly said that few if any would claim more than a 1.2 deg C rise from a doubling of atmospheric CO2 without feedbacks. That is simply true.

        And I have repeatedly said on this blog and elsewhere that I doubt an increase in atmospheric CO2 could have a discernible effect on mean global temperature. Indeed, I have repeatedly explained that view on this blog and elsewhere.

        My point which you are answering is completely consistent with all I have said about this in the past.

        Richard

      • Richard. I didn’t intend to suggest that you are a climate scientist. However, I do recall how climate scientists began to dismiss the importance of the hockey stick chart after the statistics were shown to be shaky. After that they could be found stating that the hockey stick chart wasn’t that important after all. I’m wondering if any of them will abandon this concept as well.

      • Calculation of feedback in the engineering sciences (electronics, mechanics, acoustics, etc.), where all components and parameters are known and measurable and the process is repeatable and verifiable, can be very precise. It is no surprise that contributors with the above background are sceptical of both the CO2 amplification and the feedback calculations.

      • I share the same thoughts about this, from similar background.

        Although we can argue about forcings and feedbacks forever at the theoretical level, the current state of the science is nowhere close to being able to separate them from measurements. This is exactly where people from, e.g., an engineering background become suspicious; measuring is everything, and basically if your theoretical assumptions are not backed by measurements, or are even by and large contradicted by them (which is my current understanding as regards GCMs), you are sent back to the drawing board, or, in the case of e.g. bridge building, to jail…

        As another point, if we consider the inherent properties of the modelling techniques and equations (for example, the ones presented in the opening post) and the fact that we cannot even determine the initial state of the system with good accuracy, I’m becoming more and more sceptical about whether this modelling effort will ever give us anything more than a good issue to debate. All this debate about climate not being weather, and classifying it as a boundary problem, just reminds me of Mandelbrot, who once stated that you can never separate the two. Yes, I know, it is good to have common terminology, but in the end everything affects everything in climate, in both the time and spatial domains.

      • Tomas Milanovic

        All this debate about climate not being weather, and classifying it as a boundary problem, just reminds me of Mandelbrot, who once stated that you can never separate the two. Yes, I know, it is good to have common terminology, but in the end everything affects everything in climate, in both the time and spatial domains.

        This very relevant and correct comment made me find at least one use for this discussion.
        It clearly shows why and how initial conditions matter.
        It is impossible to compute quantities like dTa/dFa without knowing the initial distribution of temperatures, TOA fluxes and emissivities.
        It is like Reynolds averaging in fluid dynamics: you throw the fluctuation distributions out the door and they come back in through the window.

        I suspect that this “classification” as a boundary problem says more about our ignorance of the relevant initial-value/boundary-value solution of the dynamics than about the real physical behaviour of the system.
        A kind of alibi.

      • Actually, with all this positive feedback plus 300% gain and no apparent damping, I’m surprised it’s so cold outside.

    • And you got my sympathy; personal insults are unacceptable whoever they come from.

  5. Surely this discussion clearly demonstrates that the IPCC has come nowhere close to showing that CAGW is real (I use CAGW, since we all agree AGW is real; we just don’t know how much more CO2 affects global temperatures).

    Again, surely, this message needs to be forcefully taken to our politicians, before they completely ruin the world economy by chasing overly expensive “green technologies”, in an effort to solve a problem, CAGW, that has not been shown to exist.

    • Whoa Jim! If by “we all agree that AGW is real” you mean we all accept that most of the warming in the last 50 years is due to human activity then by no means do we all agree. I have difficulty imagining any version of AGW that we all agree on.

      • David Wojick writes ” I have difficulty imagining any version of AGW that we all agree on.”

        Sorry. I have been criticized for using the term CAGW. What I am trying to say is what Judith consistently says. We agree that adding CO2 to the atmosphere will cause a warming. We do not know how much. Hence, we know AGW exists. What we disagree with is CAGW.

      • “We agree that adding CO2 to the atmosphere will cause a warming.”

        What we can probably agree on is that, for a perfect black body with all other things being equal, doubling CO2 will cause a warming of approximately 1.2 degrees C, and that the earth is not a perfect black body and all other things will certainly not remain equal.

      • What this thread has highlighted is that the question isn’t if doubling CO2 will capture more heat (or energy for purists) and tend to increase the temperature. It will. The question is how much will it warm the surface where we live. The answer to this question depends on how the extra heat gets spread around. I know this is a bit redundant at this point, but stating it in plain terms helps solidify my understanding.

      • And mine, Jim. Thanks.

      • Accepting AGW means a great deal more than accepting greenhouse gas theory. If AGW means anything it means that human emissions (A) have actually caused warming (GW). I am not convinced that the CO2 increase has caused any warming, nor that the CO2 increase is due to human emissions. So I certainly do not agree with AGW, in any form.

        Note by the way that I have started using the term CAGW, which I think makes a very important distinction with AGW. One of the greatest confusions in the debate is the supposition that accepting that humans have caused some warming (AGW) implies that there is a likelihood of dangerous future warming (CAGW). AGW in no way implies CAGW, which is far more speculative. (Then there is the other central fallacy of assuming that GW implies AGW, but that is another issue.)

        AGW (and GW) per se are irrelevant to the policy debate, which is only about CAGW. As Article 2 of the UNFCCC makes clear, we are only interested in preventing dangerous warming. Perhaps it should be DAGW, but that is a quibble at this point.

      • Would you say a doubling of CO2 would lead to a cooling?
        Or would you say you have no idea whatsoever?

        Try to make sense of the entire history of the planet with a theory that suggests no impact whatsoever, or a cooling impact.

      • A typical Warmist response, I’m afraid. Having failed to make any coherent and validated predictions (and begging off by calling them “projections” and “scenarios”), the Warmist responds to any demand for validation by demanding a complete ab initio justification and derivation and documentation of non-AGW assertions.

        Bogus. The null hypothesis is that all variations experienced since at least the end of the last Ice Age are entirely within the bounds of natural variation. Disprove that, and you have some AGW to talk about. Otherwise, not.

    • Seems to me this is a bit of a hasty conclusion Jim. It’s a bit of a slippery slope to argue that because we don’t know how to accurately and quantitatively model this sensitivity then we must not know anything at all about climate change past, present, or future.

      This is admittedly a very important question that, apparently, has been handled somewhat carelessly to date and I’m finding this topic very interesting.

      • FiveString writes “Seems to me this is a bit of a hasty conclusion Jim. It’s a bit of a slippery slope to argue that because we don’t know how to accurately and quantitatively model this sensitivity then we must not know anything at all about climate change past, present, or future.”

        That is not my argument. The IPCC has produced a logic that claims to show that CAGW is real. What this thread shows is that one of the key steps in this logic is wrong from the point of view of the science, the physics. Therefore, I argue, the IPCC process has not proven that CAGW is real.

  6. Quote, L. MARK BERLINER, “What should we make of results of any analyses that seek to use high-temperature, high-CO2-level temperatures to back-cast temperatures with no adjustment for CO2? To me, not much, given that I do not believe the principal components can account for all the known and unknown sources of variation and nonstationarity.”

  7. I would argue that if the concept of no-feedback sensitivity is incoherent then the statement that the no-feedback sensitivity is 1 degree must be false. But at the more scientifically literate levels of policy discourse this statement is the core of CAGW. To many people this statement is the end of the debate. Thus from a policy perspective the negative results of this thread loom quite large. That the incoherence of this core concept is scientifically irrelevant is hard to believe. But even if it is true, this incoherence is an important finding, because this is fundamentally a policy debate.

  8. Craig Goodrich

    “Does the method proposed by Jinhua Lu help conceptualize this problem in a better way?”

    I don’t know about conceptualization, but in the abstract I find:

    The enhanced vertical moist convection in the tropics acts to amplify the warming in the upper troposphere at an expense of reducing the warming in the lower troposphere and surface warming in the tropics. As a result, the final warming pattern shows the co-existence of a reduction of the meridional temperature gradient at the surface and in the lower troposphere with an increase of the meridional temperature gradient in the upper troposphere. In the tropics, the total warming in the upper troposphere is stronger than the surface warming.

    As I read that, Lu & Cai are proposing another model with an upper-troposphere tropical “hot spot” which, as Christy et al have demonstrated, simply ain’t there. Or do I misunderstand L & C?

    • Craig,

      No, you didn’t misunderstand us. Based on my understanding of the dynamics of the tropical atmosphere, I believe in the existence of the tropical “hot spot”, though the dynamics may be more complicated than what is mentioned in the abstract of Lu & Cai (2010).

      One reference by Fu et al. may be relevant to your comments.
      http://www.ncdc.noaa.gov/oa/climate/research/2005/nature02524-UW-MSU.pdf

      • Oh, goody! A falsifiable prediction!
        I hope there are hard numbers attached, and that it can thus be speedily disposed of.

      • Jianhua, it’s good to see you are considering other models of the processes and responding to discussions such as this.

        How do you reconcile your prediction of an upper troposphere hotspot with the data that we have from satellites and radiosondes that suggest no such hot spot? Do you have data that suggests that there is a hotspot?

        Eddie

  9. Maybe this question was answered in the first post on this subject and if so, I’ll rehash it for argument’s sake.

    What is the point in calculating a parameter of a physical system that one cannot measure?

    I could understand calculating simplistic formulations of the physical response of an ideal system in class work for the sake of pedagogy, but what utility does the climate science community find in producing calculations of the ‘no-feedback sensitivity’ to a doubling of CO2 if it is an understood oversimplification we have no hope of measuring?

    Are we not lost to never know if such a calculation is correct, especially given the seemingly innumerable ways the climate can produce a 1 to 1.2 degree warming over some given period of time?

    It seems to me the larger issue in the context of the ‘sensitivity question’ is how does one infer the correct feedbacks from the observational data? No doubt model results help with this, assuming those results are close enough to correct to be useful. This seems to be the case, by and large for atmospheric dynamics.

    How does knowing how an ideal climate system would respond, which we still can’t control for experimental purposes, shed light on the real climate?

    I’m still, obviously, confused here.

    • Maxwell writes “What is the point in calculating a parameter of a physical system that one cannot measure? ”

      30 years ago, the IPCC tried to prove that adding CO2 to the atmosphere would produce a catastrophic rise in global temperatures. They could not use the “scientific method”, because, as has been noted many times, the “scientific method” does not work if you cannot do controlled experiments. So they invented a hypothetical way to try and end run this fundamental difficulty. Politicians, eminent scientists and scientific institutions have believed this IPCC “proof”. One of the elements in this “proof” is how much doubling CO2 will increase global temperatures without feedbacks. This is an essential step in the IPCC process. That is why it was done.

      And if it is wrong, then the IPCC “proof” is wrong.

      • Jim,

        thanks for your take.

        I do think that ‘proof’ in this context is the observational data showing a clear warming trend since the combustion engine took off, nearly 200 years ago. This is coupled with the fact that CO2 is a significant greenhouse gas and the fact that an increased greenhouse effect will increase the surface temps of the planet.

        Statistical inference tells us that all of these facts are not unrelated.

        I’m also fairly certain that calculations of the ‘no-feedback sensitivity’ to a doubling of CO2 existed well before the founding of the IPCC in the late 1980s. Washington and Parkinson devote an entire section to it, in fact, in the second edition of ‘Introduction to Three-Dimensional Climate Modeling’, published in the mid-1980s. Most of the references therein were written well before the publication of that edition.

        Lastly, if we cannot measure the calculated parameter of interest, then we cannot say one way or another whether it is right OR IF IT IS WRONG. So to make a conclusion based on incorrectness of the calculation itself because we cannot measure the parameter in the real world shows a lack of use of the scientific method as much as claiming its correctness. All we can say is that we simply do not know to what extent it is either correct or incorrect. That’s all the scientific method tells us about this situation.

        So while I understand your distaste with the political process that is now surrounding climate and environmental policy, I think that you’re mischaracterizing the situation slightly.

      • Maxwell writes: ‘I do think that ‘proof’ in this context is the observational data showing a clear warming trend since the combustion engine took off, nearly 200 years ago.’

        The clear warming trend is proof of nothing other than a warming trend (following the Little Ice Age)! Correlation does not by itself imply causation.

        Even the most stringent advocates of CAGW believe it only really kicked in during the last 50 years.

      • RF,

        ‘The clear warming trend is proof of nothing other than a warming trend…’

        Right, that’s why I also included the caveats that we know that CO2 is a greenhouse gas and that an increased greenhouse effect increases surface temps. Using inference and induction, the relationship between T and CO2 concentrations is more than a simple ‘correlation’. We have a working physical model that coarsely predicts the observations we see. It certainly doesn’t predict everything we see, but that’s not a sufficient reason to reject the physical model in this case.

        Also note that I have no where made the claim that these points substantiate the most dire predictions associated with future climate. So I don’t understand why you have brought that up in your comment. It’s important to stay on point if a conversation is going to be meaningful.

      • You are right that we should not throw out the model because it does not predict the details; however, that is also not a good reason to spend all of your money on the predictions it does make.

        Entire scientific theories that modeled the coarse details correctly have been thrown out after further study.

      • Again: the null hypothesis is that all swings since the Ice Age are within the bounds of natural variation. Until that is disproven (at far better than a highly dirty-data contaminated “95%” level), then NOTHING should be based on any competing hypotheses.

        The Precautionary Principle rules strongly against acting to cause vast certain harm to preclude dubious minor to moderate harm.

      • Stilgar,

        I agree that climate science should not become modeling science. I think the place where there is most room is more effectively measuring the important observables in the climate system that allow researchers to more effectively assess feedbacks, if possible. I think that necessitates getting researchers not just in the climate science field involved in designing and carrying out experiments to a greater extent than we have seen so far.

        Science takes lots of creativity. Especially science as difficult as the one we’re discussing here.

      • “…the ‘scientific method’ does not work if you cannot do controlled experiments”.

        I often encounter statements like this, which I categorically reject. As an astrophysicist I rather have to, because in my field one cannot perform any controlled experiments. The scientific method depends on having falsifiable hypotheses and predictions. That process, facilitated and spurred by the self-correcting nature of peer-review, does not rest upon the ability to perform active experiments.

        Passive remote sensing can reveal a lot, for instance everything we know about the origin of the Universe itself, which is, in fact, a remarkable amount.

      • Are you saying that no hypothesis in astrophysics has EVER been disproved by observation? I know there will be some hypotheses that you will never be able to falsify in your field, but certainly not all?

      • That is not what he is saying at all. The point is the distinction between observation and “controlled experiment.” Science only requires the former, not the latter. A controlled experiment is just a special case of an observation, one in which we create the conditions to be observed.

      • I realize there are areas of science that aren’t amenable to a controlled experiment. But in those cases, I think it is safe to say the conclusions are less certain, generally speaking. We frequently hear climate scientists say this model or this concept is the “best we have,” so we have to go with it. It is entirely possible that the best we have is totally inadequate science-wise, but even more so policy-wise. Likewise, we may have the smartest humans working on whatever problem is in question. That does not guarantee that a true solution will be found. Just look at Long Term Capital. Two of the principals were to go on to win a Nobel prize. After it was all said and done, the government had to bail them out. So having the smartest humans participate in an endeavor does not guarantee success. Personally, I don’t want to take down my and my children’s standard of living based on tea-leaf readings. All the philosophizing in the world won’t convince me to do that. Only a solid argument that gives me confidence that catastrophe is highly likely would do that, and even then there might have to be some trade-offs. In the final analysis, there may be no means but war to force the major countries to sharply curtail use of carbon fuels. That’s one reason I think nuclear power should be accepted by just about everyone and a moon-shot effort made to build out the power base.

      • FiveString,

        I think that’s an excellent point, but it’s also a matter of semantics.

        I think astrophysics gets something closer to control than climate science because the number of systems being studied is ‘astronomically’ larger…sorry I couldn’t resist myself.

        Studying such a large number of systems produces a great deal of variation in the phenomena being observed. That variation in observations then feeds back into the theory to improve calculations and theoretical understanding of what is happening in our universe.

        Climate scientists have one system to study and are at the mercy of its ‘preferred’ dynamics with no other system to compare to. I think that’s the biggest reason why modeling has become as large a part of their research design as it has. It’s more systems to study under ‘controlled’ regimes.

        So while I agree that science can be done without direct control over the system of interest, as is done by you and your colleagues, I don’t think that astrophysics is as analogous to climate science as you present. If one is interested in supernovae, there are thousands happening all the time we can observe to get the variation necessary to make informed theoretical progress on the possible ways in which they can occur. We have no such luck, at this point, in the context of climate science.

        Plus, astronomy and astrophysics have a 400 year head start on climate science…

      • I don’t disagree, but feel compelled to point out that it’s not always the case that there are large numbers of systems to work with in astronomy. One case would be heliophysics, which involves studying the only star whose surface we can resolve and whose atmosphere envelops the Earth. And I already alluded to another, namely cosmology, where remote sensing via missions like COBE and WMAP, combined with what we know about atomic physics, has revealed fundamental facts about the early Universe, nucleosynthesis, etc.

        But that’s not the topic here, so please forgive the digression. I simply wanted to point out that the scientific method does not depend on the ability to perform controlled experiments. As someone who has spent a good deal of time working with and evaluating models, I have confidence in their potential to make solid predictions. I also know how easy it is for them to be wrong and for one to be led astray by the results. My visits here are an open-minded attempt to understand where climate modeling and climate science as a whole stand today.

      • Five String: So you have confidence in models. I don’t. Orrin Pilkey and Linda Pilkey-Jarvis have surveyed numerous futile attempts to model natural processes, from predicting cod fishery yields to environmental impact statements, climate forecasts, beach erosion problems, Yucca Mountain drainage, and more. Such models often involve approximations or guesstimates. They also may depend strongly on initial values which are poorly known but have a strong influence on the outcome. Plus, “adjustments” may be required to make them correspond to a reality that evades calculation, and these “adjustments” are nothing more than fudge factors. In addition, since these adjustments are opaque to outsiders, political pressure can be, and has been, exerted to get the “right” answer, which is then passed off as a “scientific” fact. Use of supercomputers enhances the ability to make these adjustments because of the additional degrees of freedom they provide. One result of this is that we have no more codfish in the North Atlantic Ocean. The authors have come to the conclusion that none of these models can be trusted to give quantitative results. To them, modeling of natural processes is simply “Useless Arithmetic”, which is the title they gave their book. And they did not even get into the financial modeling that screwed up our economy.

      • In brief, models systematize the opinions of the modelers. In fact this makes the requirement for “falsification testing” urgent and essential. Opinions are highly resistant to such testing. Models must not be.

    • randomengineer

      What is the point in calculating a parameter of a physical system that one cannot measure?

      I don’t know, I’m not a climate scientist, but it should be obvious that they’re reaching for an understanding of the baseline condition. Feedback (aggregate) magnitude can then be calculated from there. Without a baseline there’s no way to determine magnitude.

      • ‘Without a baseline there’s no way to determine magnitude.’

        I totally agree, but if you cannot substantiate that your ‘baseline understanding’ is correct via observational data, why are you using that baseline?

        I would imagine that the no-feedback sensitivity is ‘consistent’ with other idealized aspects of the climate system, maybe simplified circulation patterns or something along those lines. But adding each new level of complexity necessary to get to a changing temperature might make the assumptions implicit in the case of other idealized behavior insufficient for production of the observed climatic behavior.

        I don’t know.

        When this number (no-feedback sensitivity) is calculated in a computer simulation of a climate model, what benchmarks are used to establish ‘good agreement’ with observational data?

  10. Several years ago Steve McIntyre began asking where one might find an engineering-quality exposition of the CO2 forcing/feedback effects presented in the IPCC reports. Commenters repeatedly made fun of him at CA but were unable to provide one. Everything they pointed to was so simplified or idealized (i.e. not a real earth system at all: no heterogeneity, no oceans, no clouds) that it was simply elegant hand-waving (or “theoretical” if you prefer). These two threads suggest that still no one can provide such a calculation that is not Swiss cheese. And yet, angry words when one asks for it? Really…

    • Commenters repeatedly made fun of him at CA but were unable to provide one ……And yet, angry words when one asks for it?
      The truth is incontrovertible, malice may attack it, ignorance may deride it, but in the end; there it is. – Winston Churchill in the House of Commons, May 17, 1916

    • Christopher Game

      Not knowing of Steve McIntyre’s questions, some time ago I asked the same questions and was not answered. Some of my objection is at
      Christopher Game | December 14, 2010 at 12:26 pm
      http://judithcurry.com/2010/12/11/co2-no-feedback-sensitivity/#comment-21617.

    • Craig, yes, I mentioned Steve’s repeated unanswered request for a detailed scientific up-to-date paper explaining CO2 forcing on the previous thread (and the IPCC AR4’s amazing lack of substance on this question). Steve Mosher replied, linking to Steve Mc’s post about the scienceofdoom blog, which is a small step in the right direction, though it still glosses over the details and is anonymous and not peer-reviewed.

  11. Given that human caused CO2 is only a small percentage of total CO2 produced each year, is it reasonable to conclude that the increase in CO2 in the atmosphere is caused by humans?

    The entire theory of AGW rests on this assumption, which seems extremely unrealistic, given the vast quantities absorbed, stored and released naturally.

    It seems highly unlikely to me that CO2 levels are so delicately balanced. Pre-industrial levels were at the point where plant production shuts down. This suggests that CO2 levels were about as low as they could go naturally, and that any small percentage increase in CO2 production by humans would be absorbed by increased plant growth.

    There is very strong historical evidence that warming causes CO2 increases. Therefore, the increased CO2 we are seeing may not be human in origin; it may simply be a result of the natural warming since the Little Ice Age.

    I realize there has been some indication this is not the case, based on carbon isotope ratios, which suggest the increase is human in origin. I would find that more convincing if pre-industrial CO2 levels were not so close to the point at which plant production shuts down.

    To me, this “coincidence” that pre-industrial CO2 levels so closely match the minimum required for photosynthesis in many plants strongly suggests that we have missed something really fundamental in our understanding of CO2.

    • randomengineer

      Given that human caused CO2 is only a small percentage of total CO2 produced each year, is it reasonable to conclude that the increase in CO2 in the atmosphere is caused by humans?

      As per Prof Vaughan Pratt — the claim seems to be that nature provides 280 ppmv and humans provide the rest; he cites a paper by David Hofmann (NOAA) showing this. Go to the radiative model confidence thread and search for HOFMANN for the link to the paper.

      If Hofmann is even in the correct ballpark, how we got an MWP or LIA is even more mysterious than ever.

      I reckon he’s not in the correct ballpark. Nevertheless this notion seems to be one of the core assertions/assumptions of the “A” part.

      • This issue would make a good thread, but it is off topic here. The fact is that the annual CO2 increase is less than human emissions, so on a simple-minded, steady-state reservoir model we make the difference. But the increase is not made up of human-emitted CO2, so we are at best the cause of the increase, not the source. Causality is one of the most difficult concepts humans have. When you try to pin down the human causality of the CO2 increase, the case quickly gets as murky as the no-feedback sensitivity case.

    • “Chiefio” (E.M. Smith) suggested in a fine posting some time ago that the biosphere tends to drive CO2 down to levels at which it is starving itself, and then the system jitters around that point, absent any massive global outpouring, like that from the flood basalts, etc.

      So we would do the plants and animals of the world a great favour by hiking CO2 as high as we can without subjecting it to such drastic “recycling” events.
      In light of this, I suggest subsidizing coal production and offering electricity generated from coal plants at steeply discounted rates, perhaps even free.

  12. Richard S Courtney

    This thread is about ‘no feedback’ climate sensitivity.

    People have pointed out three aspects of this issue; i.e.
    1. The demonstrable existence – or non-existence – of ‘no feedback’ climate sensitivity.
    2. The usefulness of a determination of ‘no feedback’ climate sensitivity to understanding of climate behaviour.
    3. The significance for political policies of an ability – or inability – to determine the existence and usefulness of ‘no feedback’ climate sensitivity.

    I respectfully suggest that it would be most useful to all if we were to consider these matters sequentially.

    Richard

    • randomengineer

      Why sequentially? Purpose?

      The premise of a figure for sensitivity is an exercise in establishing a baseline. No baseline, no way of determining much else; you have to have a reference standard.

      • Richard S Courtney

        randomengineer:

        You ask me:
        “Why sequentially? Purpose?”

        Sorry, I thought it was obvious. My bad.

        The purpose is to avoid the same mistake as IPCC made by having WG1, WG2 and WG3 conducting their considerations simultaneously.

        If ‘no feedback’ climate sensitivity does not exist then there is no purpose in considering its usefulness. Efforts to determine its usefulness are wasted and could have been spent on assessment of the possibility of determining its existence.

        If ‘no feedback’ climate sensitivity does not exist or it adds nothing to understanding then that can be fed into the political considerations.

        It is a ‘cart before the horse’ thing.

        Richard

    • Alexander Harvey

      Richard:

      “1. The demonstrable existence – or non-existance – of ‘no feedback’ climate sensitivity.”

      It exists in that it answers an if-what question.

      If a radiative forcing equivalent to a doubling of CO2 concentrations were imposed on the earth system, what would be the rise in average global temperatures (where the warming proceeds according to a mode in which there are no changes in various properties such as lapse rate, water vapour content, cloud amount and distribution, albedo amount and distribution, etc.) required to produce an equivalent but opposite change in the flux balance of the earth system?

      It exists in that sense, it answers the question as posed.

      Alex

      • Richard S Courtney

        Alexander Harvey:

        I think we are having a semantic argument.

        I do not agree that “It exists in that sense” because the “It” I perceive is a condition state of the atmosphere (e.g. with an assumed 280 ppmv pre-industrial atmospheric CO2 concentration) which includes all the associated feedbacks. That “It” is not a ‘no-feedback’ situation.

        Double the atmospheric CO2 concentration and the condition state changes so all the feedback values change. Again, that is not a ‘no-feedback’ situation.

        The ‘no-feedback sensitivity’ is not relevant to either condition state.

        Anyway, that is my opinion. And – as always – my opinion can be changed by new information.

        Richard

      • Alexander Harvey

        Richard,

        You may be right, it is about a definition.

        You may find that it is defined as something that you do not find meaningful or in anyway useful.

        At its heart is the concept that one can calculate the change in radiation balance at the TOA between an atmosphere as it is and one in which the whole vertical column is 1 C higher in temperature but nothing else has changed. Further to that, one can do similar calculations all over the globe and calculate a globally averaged response in the radiation balance. This can be done provided that we have sufficient knowledge of the atmospheric state. It is done in practice in all the GCMs. In principle it could be done for the real world given sufficient data.

        Once you have that figure you can divide it into the figure derived for the forcing due to a doubling of CO2.

        This gives the increment by which the whole world would have to increase in temperature to offset the forcing due to CO2 (if nothing else changed, not one thing but the temperature).
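
        A minimal sketch of that procedure (a toy blackbody surrogate standing in for the real radiative calculation, with an illustrative effective-emission-temperature field rather than actual kernel data):

            import numpy as np

            sigma = 5.670374e-8
            lats = np.deg2rad(np.arange(-85.0, 90.0, 10.0))   # coarse latitude bands
            weights = np.cos(lats) / np.cos(lats).sum()       # area weights

            # Illustrative effective emission temperatures per band (made-up numbers, not data)
            T_eff = 255.0 - 30.0 * (np.abs(np.sin(lats)) - 0.5)

            # Change in outgoing flux per band if the whole column warms by 1 K (blackbody stand-in)
            dR_per_K = 4.0 * sigma * T_eff**3                 # W m^-2 K^-1

            planck_response = np.sum(weights * dR_per_K)      # globally averaged, W m^-2 K^-1
            F_2xCO2 = 3.7                                     # commonly quoted forcing, W m^-2
            print(F_2xCO2 / planck_response)                  # ~1 K: the no-feedback increment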

        That is all that it is. It does not do anything or imply anything, it doesn’t inform one about how the dynamics would change, or how the new energy balance would be attained.

        It is a value with a definition; in itself it is not very useful, except that it allows models to be compared on the narrow point of seeing whether they express similar values, and it forms a basis for defining feedback factors.

        It may be one of the least informative pieces of information out there but it does have a value that is thought to be known with a good deal of accuracy.

        Alex

      • Richard S Courtney

        Alex:

        Sincere thanks for your reply. It clarifies matters for me in a manner that solidifies my opinion.

        I think we agree that this is “a matter of definition”.

        You say;
        ” It is done in practice in all the GCMs.”
        And
        “This gives the increment by which the whole world would have to increase in temperature to offset the forcing due to CO2 (if nothing else changed, not one thing but the temperature).”

        Right!
        That definition specifies ” the increment by which the whole world would have to increase in temperature”, but such a specification is completely unreal.

        The “whole world” will change. At issue is how it changes and the net effect of all the changes. The definition specifies a reductionist methodology for assessment of each change.

        But such a reductionist method is not possible when each change affects every other change. The entire system adjustment needs to be assessed to determine how the “whole world” will change. Indeed, this is why GCMs use finite element methods that iterate to stability: it is not possible to assess the effects of the changes sequentially.

        But you say;
        “This gives the increment by which the whole world would have to increase in temperature to offset the forcing due to CO2 (if nothing else changed, not one thing but the temperature).”

        I think it is a strange methodology that decides a reductionist approach will not work, so adopts the principles of GCMs, and then uses those GCMs to determine the datum for starting a reductionist analysis of global temperature change. The only reason to do this is an assumption that the system will experience a global temperature change.

        But the system change may negate any such global temperature change. Hence, the ‘no-feedback’ sensitivity is a statement of a prejudice (i.e. a belief not supported by evidence) and nothing else.

        I prefer to agree the entire systems approach and to consider what it indicates.

        Richard

      • Alexander Harvey

        Richard,

        You seem to be stating that the CO2 no-feedback sensitivity doesn’t answer certain questions. That is correct. It provides a value for an effect that is extremely unlikely to be realised.

        That is what it is. Stating that it doesn’t answer the questions you might have, questions for which it does not claim to supply a complete answer, only invites the counter-question of why you are criticising it for not answering questions it is not capable of answering.

        It is what it is, and isn’t what it is not.

        Alex

      • Alexander Harvey: “If a radiative forcing equivalent to a doubling of CO2 concentrations were imposed on the earth system, what would be the rise in average global temperatures … required to produce an equivalent but opposite change in the flux balance of the earth system?” A convoluted question, if ever there was one. The answer is zero. That is because the global average annual infrared optical thickness of the atmosphere has been unchanged for 61 years. Ferenc Miskolczi determined this from the NOAA database of weather balloon observations going back to 1948. This is an empirical observation, not derived from theory, and it overrides any contrary results from theory.

      • You mean, it should override any results from theory. Theory-lovers, however, have an infinite capacity to ignore inconvenient observations, evidently.

  13. Given that agricultural land use in pre-industrial times was 5% of the land surface, and is now almost 40% of the land surface, how much of the change in atmospheric gasses, including H2O and CO2 isotopes, might be a result? How much of a change in temperature?

    From personal experience, there is quite a bit of difference in temperature and humidity walking through a forest or jungle, as compared to the same land after it has been converted to agricultural or urban use. Has this been accounted for in the climate models?

  14. Alexander Harvey

    The CO2 no-feedback concept is being criticised for failing to explain things that it does not claim to explain.

    It does not claim anything more than to quantify how the radiative balance at TOA would change if the surface and the atmosphere warmed by a temperature increment evenly throughout the column height with everything else being unchanged.

    It does not attempt to explain how this is to be achieved, or if it could be achieved.

    It is based on the radiative properties of the surface and atmosphere without reference to any dynamics whatsoever. It pays no heed to the energy balances, or to how they are achieved. The radiation is not a function of the how, but just of the atmospheric state in terms of temperature and composition.

    Viewed in this way, the macroscopic effects of the real dynamics enter into the other equations in the form of the feedback coefficients, which give the radiative effect of changes in the temperature (lapse rate) and water vapour profiles, amongst others. Again these changes are not explained, but they are evaluated in terms of their implications for TOA flux.

    In this way an attempt is made to attribute the TOA flux changes to various modes, the first of which is the no-feedback mode.

    The set of modes is far from complete, e.g the no-feedback mode is not decomposed into the primary spherical harmonic and modes representing polar amplification, land-sea, etc., but this could be done.

    It is a scheme for attributing changes in the TOA flux to these various radiation modes. It is just that: an attribution scheme, a means of allocating the flux changes to changes in the surface and atmosphere. It neither informs nor is informed by the constraints and dynamics of the system.

    If you are looking for it to explain itself or anything else you will be disappointed as it doesn’t.

    It has to be taken in its own terms; to analyse it in terms other than those it is stated in would be fruitless. To query how it might produce a rise in surface temperatures is futile. It is an attribution scheme, it doesn’t do anything.

    Alex

    • Alex – is it not true that the lapse rate is determined by the dynamics of the atmosphere? If so, dynamics plays a part in even this over-simplified model.

      • Alexander Harvey

        Jim,

        I think you get to the point.

        This simple radiative model only covers one aspect, the radiation. This is the easy part. You can calculate the radiation given the state of the atmosphere. The radiation is only dependent on the state, not how it got to that state. If a volume of the atmosphere has a particular temperature, height, a known quantity of radiatively active gasses including water vapour it is possible to calculate how that volume contributes to the TOA radiation balance.

        This simple radiative model cannot do the dynamics it has to be supplied with the atmospheric state.

        I think people are trying to see where the simple model does something magical, which is a problem as it doesn’t do anything magical. It is a dumb process that asks to be supplied with the atmospheric state before an applied forcing and after that forcing. It also asks for the change in water content, before and after, and how the lapse rate and the ice cover change.

        Then it chugs through a few calculations and outputs some values that represent its attribution of the changes in TOA radiation to the various components (Planck, Lapse Rate, Water Content, Ice Cover).

        It uses matrices (kernels) that are representations of how, at each box (latitude, longitude, month, height), a change in temperature (or water content) would change the radiative balance at TOA. The changes in temperature and water vapour have to come from a model that has the dynamics that can calculate such things. If we had such historic data for the real world we could decompose the radiative balance in the same way, but we don’t.
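
        To make the bookkeeping concrete, here is a minimal Python sketch of the kernel-style attribution described above. Every number in it is invented for illustration; real kernels are four-dimensional (latitude, longitude, month, height) and come from a radiative transfer code.

        ```python
        # Minimal sketch of the kernel bookkeeping described above (all values invented).
        # A kernel entry gives the change in TOA flux (W/m^2) per unit change of a state
        # variable in one box; multiplying by the modelled change and summing with area
        # weights attributes the total TOA flux change to each component.
        import numpy as np

        K_temp = np.array([-3.2, -3.4, -3.1, -2.9])   # W/m^2 per K in each box (hypothetical)
        K_wv   = np.array([ 1.1,  1.5,  0.9,  0.4])   # W/m^2 per unit humidity change (hypothetical)

        dT_uniform = np.full(4, 1.0)                  # "Planck" part: the same warming everywhere
        dT_actual  = np.array([0.8, 1.0, 1.2, 1.6])   # what a dynamical model might supply
        dq         = np.array([0.3, 0.5, 0.4, 0.2])   # modelled humidity change (hypothetical units)

        w = np.full(4, 0.25)                          # area weights summing to one

        planck     = np.sum(w * K_temp * dT_uniform)
        lapse_rate = np.sum(w * K_temp * (dT_actual - dT_uniform))
        water_vap  = np.sum(w * K_wv * dq)

        print(planck, lapse_rate, water_vap)          # W/m^2 attributed to each mode
        ```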

        Alex

      • Alex – I appreciate your willingness to discuss these topics.

      • Alexander Harvey

        Jim,

        That’s OK.

  15. If we start with 0.7C / century as the observed warming. Subtract warming caused by recovery from LIA as part of natural cycle. Subtract warming due to land use changes. Subtract warming measurement bias due to urbanization. Allowing for the uncertainty, how much are we left with to confirm the no feedback CO2 sensitivity?

    • You forgot to include a sun which became more active as the C20th progressed. And the terrestrial amplification of the solar signal identified and quantified by Nir Shaviv in his JGR paper ‘Using the Oceans as a Calorimeter’.

      The accumulation of solar energy in the oceans along with the other factors you mention doesn’t leave much for co2 IMO.

    • You missed: “subtract warming due to unexplained “adjustments” to the temperature record” which always increase warming.

  16. Let me try to put this in a broader context. The IPCC approach starts with an estimate of change in radiative forcing for a doubling of CO2. It ends with an estimate of how much surface temperatures will rise. Somewhere in this process, one must have a way of estimating change in temperature from change in forcing. If this cannot be done scientifically, then the IPCC process cannot prove that CAGW is real.

  17. I am no mathematician but I would be interested to read some informed comment on Tomas Milanovic’s analysis in the main post.

  18. Judith,

    Based on experience with the single column model (Manabe-Wetherald 1967 model) and idealized GCM, I tend to think that CO2 no-feedback sensitivity (1~1.2 K) is quite robust, even if we use different definitions about the feedbacks.

    In your reply to Christopher Game, you said ” surface evaporative cooling seems to be left out in these analyses (of course it is included in the global climate models)”

    In fact, the surface evaporative cooling effect was included in the TOA-based climate feedback as part of the lapse rate feedback. In a physically sensible climate system, the evaporative cooling must be accompanied by the same amount of atmospheric condensational heating. In fact, almost all of the dynamic feedbacks (evaporation, sensible heat, energy transport) and the vertical inhomogeneity of the radiative feedback have been lumped into the lapse rate feedback in TOA-based feedback analysis.

    Given the limitations of TOA-based analysis, I still think it’s a useful framework, maybe better than surface-based analysis. Possibly it was Ming Cai ( http://www.agu.org/pubs/crossref/2005…/2005GL024481.shtml ) who, in his 4-box model, first used both the atmosphere and surface energy budgets.

    When we proposed the CFRAM, one reviewer thought our concept of climate feedback was not right, so we constructed a Manabe-Wetherald single-column model, and made a detailed comparison between the PRP method, our CFRAM, and the online suppression method (Ed Schneider et al.) by applying them to the same doubled-CO2 simulation. The comparison may be of interest to the readers here http://www.springerlink.com/content/q8265w78n98480w8/ . We also included an analysis of how all of the dynamic feedbacks, including evaporative cooling (and surface sensible heating), are lumped into the lapse rate feedback in the PRP method (see the tables in the last link).

    • Jianhu,

      what other climate parameters are you monitoring in your model results that give you confidence that the no-feedback sensitivity number is ‘robust’? How are these other parameters measured in the real climate system?

      Thanks.

      • Maxwell,

        In fact, in our idealized models, almost no parameter ( as used in full GCMs to parameterize the sub-grid processes) is used.

        The basic picture of the energy cycle especially the energy transport in the atmosphere is consistent with the existing picture based on decades of observations.

        Also, I trust the long-wave radiation calculations (related to 2CO2, and temperature) in recent generation of radiative transfer models.

      • Does your study account for the ~1% cooling associated with SW shadowing by CO2 doubling?

    • Christopher Game

      Dear Jianhua Lu,
      Thank you for your kind reply, in which you tell me that the land-sea surface cooling effect (what I called the evaporative-circulatory cooling) is included in the lapse rate feedback. Would you be willing to expand with some details on that, and tell me how that is done? Does the rate of evaporation increase, or does the rate of precipitation decrease, so as to keep up the relative humidity? Does the rate of circulation increase or decrease? When the Planck-feedback-only sensitivity is calculated in the models, do you mean that the rates of circulation, evaporation, and precipitation are held constant?
      Yours sincerely, Christopher Game

      • Hi Chris,

        Yes, both evaporation and precipitation increase. In terms of the rate of circulation, it (the mass circulation) may slow down in the time-mean sense, but the variance may become larger. Anyway, uncertainties still exist about how the circulation changes.

        A good reference about the evaporation and precipitation is Held and Soden (2006): http://journals.ametsoc.org/doi/abs/10.1175/JCLI3990.1

        I also did some research on hydrological cycle change; the results are consistent with Held and Soden (2006), though perhaps from a different perspective.

      • Christopher Game

        Dear Jianhua Lu,
        Thank you for this reply.
        It seems to me very surprising that the rates of evaporation and precipitation would both increase and yet the rate of circulation would decrease, all of course supposing the final steady state after doubling CO2. Would you tell me why you say the rate of circulation would decrease? And just checking: for the no-feedback case, the rates of all three are held constant?
        Yours sincerely, Christopher

    • Hello,

      please forgive the dumb question, but you say:

      “the evaporative cooling must be accompanied by the same amount of atmospheric condensational heating”

      What about the work done lifting billions of tons of water thousands of metres up into the atmosphere. Surely it doesn’t get there “for free”? Is there an energy “loss” here?

      Apologies again for the daft question; just interested.

      Thanks.

  19. Of those who have commented within the thread, Jianhu may be the only modeler, and although I’m not one, I believe he is correct in stating that the no-feedback sensitivity is robust. That is not to say there is no uncertainty, but only that the uncertainty is limited. One can use the standard definitions of “no-feedback” to calculate a surface temperature change of 1 C as an approximation simply via Stefan-Boltzmann (see the earlier thread), and the modeled values of 1.2 C are not radically different.

    The no-feedback case is defined as one in which there is no change in albedo, clouds, humidity or lapse rate (and hence no change in evaporation). Given a forcing at the tropopause (or TOA depending on definition), and an unchanged lapse rate, the climate response at the surface is relatively constrained, because we can simply calculate downwards from the altitude dominated by radiation to the surface. The 1 deg C approximation assumes a single linear lapse rate and a single flux-weighted mean radiating altitude. The models attempt to consider spatial and temporal variations in both the magnitude and linearity of the lapse rate, as well as different altitudes of radiation as a function of wavelength, and the modelers should comment further on the details. I don’t know how important it is to incorporate spatial variation in emissivity as opposed to assumed values close to unity, but they should comment on that as well – I tend to doubt it makes a huge difference.

    Feedbacks add a further layer of complexity, of course, and deserve a thread of their own (or many threads), but to date, the decomposition of climate responses into forcing-only and feedback responses, followed by recombining them after appropriate attention to their interactions has been useful in generating outputs that have performed relatively well on a long term global scale. Attention to remaining uncertainties is warranted, but an implication that the forcing plus feedback approach is futile is not.
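
    As a quick check of the ~1 C figure mentioned above, here is a back-of-the-envelope sketch in Python. It assumes a single flux-weighted emission temperature of about 255 K, a doubled-CO2 forcing of 3.7 W/m², and an unchanged lapse rate that carries the same increment down to the surface; it is the differentiated Stefan-Boltzmann estimate, not the model calculation.

    ```python
    # Back-of-the-envelope version of the ~1 C no-feedback estimate discussed above.
    # Assumes one flux-weighted emission temperature and an unchanged lapse rate,
    # so the warming at the emission level maps one-to-one onto the surface.
    SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
    T_EMIT = 255.0           # effective emission temperature, K (typical textbook value)
    FORCING = 3.7            # doubled-CO2 forcing, W/m^2

    dT_emission = FORCING / (4.0 * SIGMA * T_EMIT**3)   # from differentiating F = sigma*T^4
    dT_surface = dT_emission                            # fixed lapse rate: same increment at the surface
    print(round(dT_surface, 2))                         # about 1.0 K
    ```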

    • Aaaaargh…this is doing my head in! Just when I think I’ve got it along comes a comment like Fred’s to confound me. What seems to be coming out in this discussion is the difference between the explanations provided by the modellers and those who develop an opposing argument based on theory and mathematics such as Tomas in the main post. Both sides of the argument seem plausible and it leaves me confused and mightily discouraged!

    • Fred can you be specific about “modeled values of 1.2 C”: which papers are you referring to, e.g. RC models such as Hansen et al. 1981?

      • Judy – Here is one I referenced before, but I’ll try to do more homework to find additional sources – Soden and Held

        This paper divides responses into the Planck response (the radiative response to a TOA forcing without additional feedbacks) plus water vapor, lapse rate, albedo, and cloud feedbacks. The Planck response assumes that “the temperature change is uniform throughout the troposphere”, and incorporates deviations from that assumption into the lapse rate feedback.

        If we take the forcing from doubled CO2 to be 3.7 W/m^2, we can divide that quantity by the various values of the Planck response to yield a value averaging about a 1.2 C change in surface air temperature per doubling.

        I’ll try to pursue this further.

      • thx, the soden and held seems to be the main one that is recent.

  20. Is this analysis sound? If not, perhaps an explanation of its error/s would illuminate the “no feedback” question. If so, what is there to worry about?
    http://www.palisad.com/co2/eb/eb.html

  21. We can obtain empirical estimates of climate sensitivity by comparing the temperature increases shown by climate models with the increases in radiative forcing that generate these temperature increases, assuming 3.7 watts/sq m for a doubling of CO2 and a linear forcing vs. temperature relationship. I did this for the GISS model E and Ocean-Atmosphere models and got the following results. (The delta forcings for 2000-2100 are estimates but probably aren’t too far off):

    GISS E, 1880-2003, all forcings combined: Delta forcing 1.75 watts/sq m, Delta T 0.53C, climate sensitivity = 1.1C

    GISS E, 1880-2003, well-mixed GHGs only: Delta forcing 2.72 watts/sq m, Delta T 0.97C, climate sensitivity = 1.3C

    GISS O-A, 1900-2000: Delta forcing 1.73 watts/sq m, delta T 0.69C, climate sensitivity = 1.5C

    GISS O-A, 2000-2100, IPCC SRES A1B: Delta forcing 5.2 watts/sq m, delta T 1.9C, climate sensitivity = 1.4C

    GISS O-A, 2000-2100, IPCC SRES B1: Delta forcing 3.1 watts/sq m, delta T 1.2C, climate sensitivity = 1.4C

    Climate sensitivity may or may not be a theoretically meaningful value, but it’s what this thread is all about, and if we want quantitative estimates I submit that these numbers are about as good as we are going to get (the models can’t replicate observed 20th century global warming with higher or lower sensitivities). Interestingly, however, they are much closer to the 1 to 1.2C no-feedback estimates than they are to the 3.0C the IPCC claims to have obtained from climate models.
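
    For readers who want to reproduce the arithmetic, here is a short Python sketch using the forcing and temperature pairs quoted in the comment above and the same assumptions (3.7 W/m² per doubling, linear forcing-temperature relationship):

    ```python
    # Reproduces the arithmetic above: sensitivity per doubling = (dT/dF) * 3.7 W/m^2,
    # assuming a linear forcing-temperature relationship. Values copied from the comment.
    F2X = 3.7
    cases = {
        "GISS E 1880-2003, all forcings": (1.75, 0.53),
        "GISS E 1880-2003, GHGs only":    (2.72, 0.97),
        "GISS O-A 1900-2000":             (1.73, 0.69),
        "GISS O-A 2000-2100, SRES A1B":   (5.2, 1.9),
        "GISS O-A 2000-2100, SRES B1":    (3.1, 1.2),
    }
    for name, (dF, dT) in cases.items():
        print(f"{name}: {dT / dF * F2X:.1f} C per doubling")
    ```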

    • R. Andrews wrote: “We can obtain empirical estimates of climate sensitivity by comparing the temperature increases shown by climate models with the increases in radiative forcing”

      ??? How is it that comparing an output of a model with a model of “forcing” can be qualified as an “empirical estimate” ???

      • The estimates are empirical in the sense that they are derived by matching model output to observations rather than theoretically. But whatever you call them you get the same results.

      • I’m sorry, I was under the impression that you’ve been comparing an assumed 3.7 W/m2 unobservable “forcing” with computer model output. I wouldn’t doubt that you get the same results out of these exercises. But which observations enter this picture? Observations of display outputs?

      • The observations are the historic surface temperature record for the 20th century, which the models must be able to replicate before they can be used with any confidence at all to predict future temperatures. The forcings are from GISS and IPCC tabulations. Hope this answers your question.

      • Richard S Courtney

        Roger Andrews:

        You say;
        “The observations are the historic surface temperature record for the 20th century, which the models must be able to replicate before they can be used with any confidence at all to predict future temperatures.”

        Sorry, but that does not make the indications empirical derivations from the real climate. Each model is ‘tuned’ to match “the historic surface temperature record for the 20th century” by use of unique values of climate sensitivity and aerosol cooling. So, the agreement between the model results you are stating provides an indication that the ‘tuning’ provides each of them with a correct match to “the historic surface temperature record for the 20th century” which provides them with similar outputs for ‘no feedback’ sensitivity.

        These similar outputs are not surprising. And it does not follow that the values you report indicates anything empirical concerning the real climate.

        Richard

      • Richard;

        The adjective “empirical” evidently means different things to different people, but Webster’s Dictionary defines it as “making use of, or based on … experiment rather than theory”. I think I used it appropriately in this case because tuning climate models to match observations is experimentation, not theory. However, in the light of hindsight I clearly shouldn’t have used it at all.

        I agree with you that my estimates probably don’t tell us much if anything about the “real climate”. My point, however, was that my climate model-based estimates are much lower than the 3C the IPCC says it gets from climate models. I guess I’m still hoping that eventually someone will explain the reasons for this difference to me, but so far no one has.

        Could you also please enlarge on your comment to the effect that we shouldn’t be surprised when we get ‘no feedback’ climate sensitivities when we tune climate models to match observed temperatures? I’m not saying you’re wrong, but man-made GHGs supposedly generate large positive feedbacks and the climate models supposedly simulate these feedbacks .

        Roger

      • Richard S Courtney

        Roger Andrews:

        In response to my having said;
        “Each model is ‘tuned’ to match “the historic surface temperature record for the 20th century” by use of unique values of climate sensitivity and aerosol cooling. So, the agreement between the model results you are stating provides an indication that the ‘tuning’ provides each of them with a correct match to “the historic surface temperature record for the 20th century” which provides them with similar outputs for ‘no feedback’ sensitivity.

        These similar outputs are not surprising.”

        You ask me:
        “Could you also please enlarge on your comment to the effect that we shouldn’t be surprised when we get ‘no feedback’ climate sensitivities when we tune climate models to match observed temperatures? I’m not saying you’re wrong, but man-made GHGs supposedly generate large positive feedbacks and the climate models supposedly simulate these feedbacks .”

        Sorry, I thought it was obvious. The issue is as follows.

        The models are fitted to the observed temperature change. The temperature change (delta T) is matched by combining the no-feedback GHG sensitivity (S), the aerosol cooling effect (A), the effect of the feedbacks (F), and the time period (t): i.e.

        S * F = (t * A) /T
        Or
        F = (t * A) / (T * S)
        Or
        F = (t/T)*( A/S)
        But (t/T) is fixed by the matching of the model to historical data so is a constant, k1.
        And (S/A) is adjusted to obtain the matching of the model to historical data so is a constant, k2.
        i.e.
        F= k1 * k2.

        In other words, the indicated effect of the feedbacks (F) is a constant defined by the matching of the model output to the historical data by adjusting the no-feedback GHG sensitivity (S), the aerosol cooling effect (A).

        Richard

      • Richard

        Thank you. I think we agree that fitting output from a climate model to observations gives a constant climate sensitivity that is close to the no-feedback sensitivity, i.e. 1C or a little higher.

        However, this doesn’t explain why the IPCC, having performed the same basic exercise using similar models, comes up with 3C. I must still be missing something.

      • Richard S Courtney

        Roger:

        With respect, it is not correct to say the IPCC has a no-feedback sensitivity of 3C: that is their approx. best estimate for sensitivity including the feedbacks.

        Richard

      • Indeed I am. Back shortly

      • I should have realized this sooner, but the reason I can’t replicate the IPCC’s 3C climate sensitivity estimate by comparing model temperatures with radiative forcings during the 20th and 21st centuries is simply that the IPCC’s estimate is derived from equilibrium temperatures that aren’t reached for hundreds of years after CO2 concentrations stabilize. As a result the IPCC’s equilibrium climate sensitivities are about 80% higher than its transient sensitivities (average 3.2C vs. 1.8C according to AR4 Table 8.2). The transient sensitivities are much closer to my numbers, but still on the high side relative to observations.

  22. Well, I’m not going to comment on the comments. I’m just going to thank Mike Jonas and Judith Curry for putting out the math where I can actually see the work and problems. This little exercise fills many of the gaps in my knowledge.

  23. Judith,
    One day you’ll understand that climate models of temperatures or CO2 feeds are garbage science.
    Medicine should be the framework for understanding planetary energies and actions, by understanding all the chemical players and all the energy players that work differently at each time frame.

  24. You folks have been working too hard, how about a relaxing listen to the new and improved Hockey Stick Blues?

    http://www.gather.com/viewVideo.action?id=11821949021918437

  25. Judy: I appreciate the hard work you have put in with this and the radiative transfer models. I do not question the logic or math but I do think that this is not the whole story. And why do I think that? Because it is a fine theory but it does not explain observed facts. The history of global warming since the start of the twentieth century can in no way be explained by these theoretical scribblings. And Miskolczi has shown that not even the actual existence of the carbon dioxide greenhouse effect can be verified. There is much more to this and requires graphics so I will just leave it, compose a longer story, and communicate it to you directly. The website address above is to my book on the Amazon.com site. If you don’t have it wait until the end of the month and get a revised edition. And if you do have it, get it anyway for the extra ten figures I have added.

  26. RobB December 14, 2010 at 12:58 pm I am no mathematician but I would be interested to read some informed comment on Tomas Milanovic’s analysis in the main post.

    So would I. It is all very well Fred Moolton and Roger Andrews re-stating how the estimations have been made before, and declaring them to be “robust”; whatever that means. But what we surely need is someone who supports CAGW showing why the case made by Tomas Milanovic is wrong.

    Because if Tomas is NOT wrong, then the IPCC IS wrong.

    • Jim – I did more than state how “estimations” have been made, but offered a theoretical basis for their validity. It is based on the principle that the TOA or tropopause forcing can be mathematically coupled to the surface air temperature via lapse rates, and that the models incorporate the spatial and temporal heterogeneities needed to use this principle to refine the 1 deg C approximation derived from differentiating the Stefan-Boltzmann equation into a more accurate 1.2 deg C value.

      Regarding Tomas’s analysis, I believe the modelers are better qualified than I to judge, which is why comments by Judith Curry and Jianhua Lu will be more informative than my opinions or those of others here. However, I have reservations about a number of his points. For example, I suspect that the models would not approach surface changes by trying to integrate dTa to get a surface temperature change averaged over a large surface. Rather, they would be more likely to calculate temperature anomalies and average them. Anomalies in individual grid locations will influence those elsewhere, and the models will address this, but it is not the same as trying to estimate mean temperatures.

      I’m not convinced of the need to treat emissivities as a variable, but would welcome empirical data on this.

      I believe it is incorrect to state that emissivities and the corresponding fluxes based on Stefan-Boltzmann are valid only for liquids and solids. In fact, their use in the radiative transfer equations, thoroughly supported by spectroscopic data, is a critical element of climate change estimations.

      I see no reason why initial and equilibrium states should be required to exhibit the same energy partitionings. For a no-feedback element of sensitivity, certain items are fixed – e.g., latent heat transfer – but there is no reason I know of to fix conduction (a small contributor in any case), and I expect that convective adjustments may be implicit in the requirement for an unchanged lapse rate.

      I don’t believe the question of where the atmosphere “stops” is as critical as Tomas implies. TOA and tropopause estimates (at different chosen tropopause heights) yield somewhat differing values, but as long as the chosen altitudes are stated, the model outputs can be acknowledged to be based on a different definition of forcings. In any case, the tropopause is characterized by lapse rates close to zero (positive below and turning negative as one ascends into the stratosphere), and slightly different altitudes will probably not result in major discrepancies.

      On an overarching level, I think some of these concerns get to the question of where to focus for a best estimate of no-feedback sensitivity. If the focus is limited to the surface, many complexities encumber the calculations. However, if the TOA energy budget is brought into the mix (as Chris Colose has suggested elsewhere), some of the difficult estimations become unnecessary. In that sense, the use of lapse rates (plural) becomes an important tool for linking the altitudes where radiation is essentially the only important component of radiative balance to the surface, where multiple energy transport mechanisms exist. At this point, the problem becomes more tractable, at least for good approximations.

      • Alexander Harvey

        Fred,

        You have on a couple or more occasions given the good advice of reading Soden & Held 2005; I might add Held & Soden 2000. Between them they give an illustration of how these feedbacks are calculated. From that it can be deduced what they are. In particular it might be clear how they are connected to surface temperatures, what they can be used for and what they cannot.

        They are mostly calculated from atmospheric temperatures and atmospheric compositions. Surface emissivity is only relevant to the small proportion of the radiation that passes from the surface to outer space (about 10%), and the emissivities of most materials (in the window band) only vary by about 10%, so a tenth of a tenth is not going to make a lot of difference. Relating them back to just a surface radiance effect is not possible as they are not calculated in that way. Much of what Tomas has written seems beside the point; it argues that the concept doesn’t explain something that it doesn’t claim to explain.

        I am not sure where the idea has come from that the 1.2C value in itself explains anything. It does however allow the other components that affect the radiative balance to be turned into feedback factors and summed into the (1 – f) term to get a value that is meaningful.

        The Planck feedback as calculated by Soden & Held is well defined and it doesn’t vary much between the models and is in that sense robust.

        Alex

      • Judith, Jim, Fred, and Alex,

        So far I’ve not commented on Tomas’ arguments included in the main text of this blog article.

        In my opinion, Tomas’ argument is wrong. He is wrong mainly because the F in his argument is not well defined and has been changing its meaning along with his argument, and therefore is confusing. Let us have a check on F (and also Fa) in Tomas’ argument.

        Equation (1) F = ε.σ.T⁴ is used as a definition of emissivity, so F should be the infrared radiation. But we know that we cannot get anything new just by differentiating a definition. Also, if there were no energy “balance” for the planet Earth, we would not be talking about temperature at all. Equation (1) is meaningful for the change of temperature (the planetary Te as a whole, or the surface temperature Ta – in the two cases ε has different meanings) only when F means the shortwave radiation absorbed by the planet. That is, Equation (1) represents the balance between the absorbed (incoming) energy flux and the outgoing energy flux, which basically determines the temperature of this blue planet. Main points here: (a) F is shortwave radiation in the following derivation, otherwise from a mere definition we will not be able to get a meaningful sensitivity or anything else; (b) Equation (1) only holds at TOA and globally. Locally at TOA, or both locally and globally at any other height (including at the surface), Equation (1) does not hold because of the dynamical (horizontal or vertical) energy transport. The local energy balance at any location (x,y,z) is determined by the total energy conservation equation, in which the radiative heating, other diabatic heating, and adiabatic energy transport must balance over a time period (note: the mass-weighted integration of the energy balance over the globe, ∫ ( ) dz dS, reduces to Equation (1)).

        Then Tomas’s following Equation :

        dF = 4.ε.σ.T³.dT + σ.T⁴. dε => dT = (1/4. ε.σ.T³).(dF – σ.T⁴. dε)

        should mean a perturbation to the energy-balance Equation (1), given that the d terms are small relative to the mean F, T and ε. It is not the differential of the definition of emissivity.

        Again, dF is the perturbation in SHORTWAVE radiation absorbed by the planet, and can be an external forcing caused by solar activity or volcanoes, or so-called feedbacks in ice-snow albedo and water vapor – raised by the change of other external forcing.

        dF can be zero (no short-wave radiative feedback); dε, the change in emissivity, can be caused by changes in greenhouse gases, for example 2CO2, or the water vapor feedback, etc. Then we see the two terms in dT and dε are balanced, which enables us to estimate the climate sensitivity (dT) from dε.

        Then we come to this equation:

        dTa = { 1/S . ∫ [(T/ 4.ε) . dε].dS } + { 1/S . ∫ [dF/4. ε.σ.T³].dS }

        Tomas said ” The first term is due to the spatial variation of emissivity.”
        he is confusing the dε ( perturbation of emissivity due to external forcing or feedbacks) with the spatial variation. No, the first term just means the spatial average of the dε, and d in dε does not mean spatial variation.

        Tomas said: ”The second term is more problematic. Indeed the causality in the differentiated relation goes from T to F.” Again he is not right here, because he is using F and Equation (1) as a definition, but all of climate change is about the perturbation of the energy balance. F should be shortwave radiation here; it can be an external forcing (causality from F to T), or a feedback (from T to a change in snow/ice to F, or directly from cloud to F) – this also shows the weakness of the PRP method, because the change of cloud does not have to depend on the change of T.

        Tomas said : ” The fundamental equation (1) is only valid for a solid or some liquids so the temperatures and fluxes considered are necessarily evaluated at the Earth surface. ” We have seen this is not right. Then how about the surface?

        The surface energy balance should be:

        Fa (net short-wave) + R_down = ε_s . σ.T⁴ + LE + H
        where R_down is downward long-wave radiation, ε_s is the surface (ocean, soil, or rocks) emissivity; LE is the surface latent fluxes; H surface sensible heat fluxes. Of course, we also could add the ocean heat storage term in the above equation.
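
        As an illustration of the relative sizes of these terms, here is a small Python sketch of the surface balance. The numbers are round values of the right order of magnitude only (assumptions for illustration, not a citation), and the residual can be read as storage or simply as the imprecision of the round numbers.

        ```python
        # Illustrative surface energy balance: Fa + R_down ~ eps_s*sigma*T^4 + LE + H.
        # All numbers are assumed, round values of the right order only.
        SIGMA = 5.67e-8
        eps_s, T_s = 0.98, 290.0      # surface emissivity and temperature (assumed)
        Fa, R_down = 161.0, 333.0     # net shortwave and downward longwave, W/m^2 (illustrative)
        LE, H = 80.0, 17.0            # latent and sensible heat fluxes, W/m^2 (illustrative)

        lw_up = eps_s * SIGMA * T_s**4
        residual = (Fa + R_down) - (lw_up + LE + H)
        print(round(lw_up, 1), round(residual, 1))   # small residual: storage / imprecision of round numbers
        ```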

        Note that Fa (net short-wave) is not the same as the F at the TOA, so in the relation between FTOA = g(R,θ,φ) at TOA and the fluxes at the surface all of the dynamics of the climate system are involved, and we are not able to get an h(R,θ,φ) just at the surface.

        Finally, we know from the above analysis that the climate sensitivity is not dTa/dFa.

        A small point about Tomas’s argument: when we talk about equilibrium in climate, it means a statistical average. For an equilibrium climate state, the diurnal cycle, seasonal cycle, and even inter-annual variability still exist.

        Alex recommended an excellent literature by Held and Soden (2000).

        Welcome comments on my opinion. I am not a real modeler yet; as a surviving young scientist, I study climate just for beauty and fun – it is a wonderful feeling to understand how Mother Nature is working. Sometimes I prefer it not being so important.

  27. JC,
    Is there anywhere in climate physics, the concept of an anti-greenhouse gas?

    • N2 and Argon. They just float around getting beat up by the hyperactive CO2, CH4 and other bullies. :)

  28. Dr. Curry, I thank you for your work on this blog and I am happy you are investigating the ugly cousin of uncertainty, and that is assumption.

    While assumptions are necessary in science, I see hardly anyone discussing them. An assumption that has been used as the foundation for a theory for a long time can come to be seen by some as fact.

    I see this a lot in climate science. After a few assumptions have been made and built upon, no one seems to want to go back and make sure that the assumptions are correct; they just assume that they are (it has been published for 20 years, someone would have found something wrong with it by now if it were not really correct). In fact, some will even think you a fool for questioning a long-standing assumption.

    Some people get excited over the dramatic things a study concludes. I can’t seem to get worked up over the conclusions because I can’t get past the assumptions made (if A is this way, then X and Y will produce Z… well, good on you for finding Z, but what if A is not that way?). Then you get the self-referential logic: well, of course A is right because so-and-so found it was that way, but so-and-so assumed something else to get A in the first place.

    Something I think climate scientists and those interested in the topic should remember is this: “Assumption is the mother of all f’ups”.

  29. Christopher Game

    Dear Alexander Harvey,
    Surely the IPCC “forcings and feedbacks” formalism is an attempt to solve a physical problem by use of Procrustean pseudo-mathematical back-of-the-envelope argument – one size fits all. Physics isn’t like that.
    Yours sincerely, Christopher

    • Alexander Harvey

      Dear Christopher,

      I think that it is “not to solve a problem” but to explain it in other terms.

      The problem remains.

      If one accepts that the radiative calculation that gives us the CO2 forcing factor is correct we still need to know the real world’s radiative response to warming even in the simple model.

      Sadly we don’t, not even close.

      We may be able to calculate the “no-feedback” sensitivity with some accuracy but without the knowledge of all the other factors, that is no help either.

      We do not know if we need to multiply it by 1 or 6 or any other value but values between 2 and 4 seem more likely to me.

      There are strands of evidence that point in various directions but nothing that one might stake ones life on.

      The small model approach is useful; without it people here might not have any figures to argue about. The debate is largely cast in terms of small models. In a sense the big models are agnostic in such matters: they plough their own furrow and come up with projections without any further insight unless it is teased out in small-model terms.

      Well that’s my view.

      Alex

      • Christopher Game

        Dear Alexander Harvey,
        My criticism stands. I am complaining that what you are talking about is pseudo-physical mathematical formalism, not physics. Physics understands the problem and then chooses the mathematics to fit the understanding. Here you are choosing the mathematics and hoping that you will find a physical understanding that will fit it. It is not a priori obvious that surface temperature is the most appropriate variable on which to build a simple model. Perhaps some energy flux would be more suitable from a physical viewpoint, but the arbitrary choice of surface temperature for the Procrustean model will mould thinking away from the physical understanding. As you say, the “debate” is largely cast in terms of small models. But I say, not a variety of small physical models. No, in terms of a single authoritative and orthodox model which is chiselled in stone on Mount Sinai. The IPCC’s chosen model emphasizes the very emotive and at least partly misleading “positive feedback” story without a proper theory of feedback to support it; it is a propaganda trick, and you seem to be defending it tenaciously instead of seeing it for what it is.
        Yours sincerely, Christopher

      • Alexander Harvey

        Christopher,

        I am not choosing anything; this is not my formalism. I am just trying to show how these quantities are defined and hence how we can think about them.

        The small models have very little physical understanding beyond bulk thermal properties, they have no dynamics, they are a means of thinking about the system in simple terms.

        Small models are simple but not necessarily ridiculously simplistic; perhaps their most important use is to allow generalised reasoning to take place.

        They do capture many effects at intermediate to long timescales, and there may be good reasons for this. There are known effects whereby locally chaotic behaviour informs the larger scales as stochastic noise.

        I am not defending the IPCC; I am not that fond of the way they handle the equations. The transient behaviour of the small models is not dealt with. That is, I think that models reduced to the point where they only give equilibrium conditions are not very helpful.

        You seem to think that the feedback formulation is inherently wrong or malicious. It is merely a recasting of the equations in different form to facilitate certain types of reasoning. I think I can see it for what it is, and it does not amount to much. If you are saying that this has been spun in some way to take on more significance than is justified, I am afraid I could not comment beyond saying that I do not attach a lot of significance to the feedback form of the equations as it neither solves nor adds to the solution of the fundamental problem.

        Alex

      • Christopher Game

        Dear Alexander Harvey,
        You are right, I do find the IPCC “forcings and feedbacks” formalism “inherently wrong and malicious”. I am sorry I have not been articulate enough to effectively explain why to you. It is not merely a “recasting of the equations”, but is a sustaining and exploitation of wrong or irrelevant equations.
        Thank goodness for Lu and Cai (2009) at http://dx.doi.org/10.1007/s00382-008-0425-3, and in other papers, who explain what I find wrong far more articulately than I did, and tell how to do it better. When you read Jianhua Lu and Ming Cai you will understand why I get so upset by the IPCC “forcings and feedbacks” formalism.
        Yours sincerely, Christopher

      • Christopher,

        Thank you for your appreciation of the Lu and Cai papers. I would say that our papers (Lu and Cai; Cai and Lu) did not imply that the IPCC “forcings and feedbacks” formalism is “inherently wrong and malicious”. We only suggested that there may be weaknesses in the formalism and that the forcings and feedbacks could be measured from a different perspective.

        I believe this is normal because science is not about final truth.

      • Christopher Game

        Dear Jianhua Lu,
        Certainly you are not the source or cause of my personal opinion that the IPCC “forcings and feedbacks” formalism is “inherently wrong and malicious”. I do not mean to attribute opinions like that to you. That opinion is purely mine. You are in a position from your great work where you are not forming emotional opinions like that. My strong emotions about this arise because I have not seen people take an objective scientific and critical approach to this formalism, when I think they ought to have done so. Your work, combined with Stephens and with Bates (which sad to say I did not read before now) will probably lead to others taking an objective and critical approach, and then my emotions will probably settle. The only serious criticism I had read before now was Aires and Rossow 2003, and I was upset that that did not get more and wider recognition. I will continue to study your and Cai’s papers and eagerly await more of them.
        Yours sincerely, Christopher Game

      • Alexander Harvey

        Dear Christopher,

        Thanks for the link.

        This is a chalk and cheese moment. Lu and Cai represents a redefinition, so there can be no direct comparison.

        The “standard” version of feedbacks deliberately excludes the dynamics. In that way it creates a separation between the purely radiative aspects, which depend only on the state (not how it got there or how it is maintained), and the state part, which is determined in some other way, typically by an AOGCM.

        It should be noted that the CMIP3 archived output from AOGCMs lacks the information necessary to make radiative attributions without reference to a radiative scheme, as that level of detail is not stored by the AOGCMs. This in itself enforces a separation into radiative and state parts.

        But there is a good reason to try to exclude dynamics from the definitions. It allows models to be compared in a neutral and systematic way.

        At present I cannot see how a scheme that attempts to account for dynamical effects such as internal energy balances can be sufficiently neutral to allow intermodel comparisons to be performed in the same way. Further, I cannot at present see the advantage in running AOGCMs and then using definitions of feedback that would require either emulating the AOGCM dynamics or having the AOGCMs supply details of the dynamics, which I am pretty certain they currently do not.

        There may be merit in such a redefinition, but it seems to go far beyond what is required of the feedbacks and may not facilitate intermodel comparison such as we have.

        I think there are sound reasons for defining a feedback scheme that is as far as possible free of dynamics. It makes it clear what it is.

        Unfortunately it seems that people, and I mean a lot of people, are not clear as to what it is not. It is not a climate model. It does not and cannot answer questions regarding how much the temperature might change as the result of a forcing without being informed of the state changes resulting from that forcing (typically supplied by AOGCMs).

        Alex

      • Christopher Game

        Dear Alexander Harvey,
        Thank you for your reply.
        Yours sincerely, Christopher

  30. Having read through this thread, I certainly think Alex Harvey and Fred Moolton have said things along the same lines I would have commented, and I agree with their general views on this subject, so I won’t say more than to recommend reading their comments in particular.
    I only add, because I think it hasn’t been said, that the no-feedback sensitivity, while being a purely academic exercise, is of value if only to know what a positive or negative feedback is when we see it. Since no estimates of no-feedback sensitivity give a surface warming rate of 3 degrees per doubling, we can say that climate models producing that have a positive feedback, and Lindzen can’t claim there is no feedback without the concept of a no-feedback sensitivity, so it is of value and can’t just be dismissed as a side issue.
    For the record, I prefer the fixed-lapse-rate definition, which is a clear one-answer definition with no free parameters for a given atmospheric temperature and composition profile.

    • Jim D,
      “Since no estimates of no-feedback sensitivity give a surface warming rate of 3 degrees per doubling, we can say that climate models producing that have a positive feedback, and Lindzen can’t claim there is no feedback without the concept of a no-feedback sensitivity”
      ??Are you confusing models with reality??

      • I think there’s a bit of circular reasoning there. This is confirmed by the fact that the reasoning is circular….

      • The point is that nobody, not even Lindzen, can say there is no feedback unless that is defined first, same for positive feedback. Kind of obvious, but it has to be said.

  31. BLAH BLAH BLAH

    The temperature resulting from a +3.7 W/m2 forcing without any other atmospheric response is a useless calculation. It means nothing in the real world.

    It’s pure bullcrap timewasting drivel.

    It can’t possibly help “gauge” the feedbacks. All of the responses to CO2 radiative forcing are feedbacks, including increased surface temperature.

    The only important number is what the Fudge happens to the atmospheric temperature with the +3.7-W/m2 input.

    Yeah, that and a bunch of other natural and unnatural forcings and processes currently being swept under the rug. These disappeared forcings and their associated undocumented feedbacks muddy the water and bury the tiny and isolated CO2 feedback signal in a storm of noise.

    Welcome to Geology!

  32. Alexander Harvey

    I am not sure why the Planck feedback or its associated no-feedback sensitivity is so contentious.

    How does it affect the equations?

    The calculation of the temperature change in a small model approximation

    T = F/L

    T is the change in temperature, F the change in forcing and L (lambda) has the form of a thermal conductance.

    Now we could partition L into a number of parallel conductances, in particular Lf and Ls.

    T = F/(Lf+Ls)

    Lf for fast (water vapour, clouds, etc.) and Ls for slow (things delayed by thermal inertia, ice albedo, tundra emissions, etc.)

    Now Lf determines a real flux that one might measure. But we can go further and divide Lf into some abstract parts.

    Lf = Lp+Ll+Lw+Lc (Planck, lapse rate, water vapour and clouds).

    Now one of these has to be dependent, the one chosen might be the one we have no idea how to calculate e.g. Lc (clouds) so

    Lc = Lf -(Lp+Ll+Lw) but more of that later.

    Given this abstract decomposition we get

    T = F/(Lp+Ll+Lw+Lc+Ls)

    Now we have a fondness for Lp so we couch the equation thus by dividing top and bottom by it. Note there is a sign convention for F at TOA of down minus up which I must use for the next stage.

    T = (F/Lp) / (1 + Ll/Lp + Lw/Lp + Lc/Lp + Ls/Lp)

    we select the CO2 doubling forcing for F and so rename -F/Lp as the no-feedback climate sensitivity (CS0) and rename the terms in the denominator as feedbacks factors, (note the minus signs):

    fl = -Ll/Lp, fw = -Lw/Lp, etc.

    Giving

    T = CS0/(1 – (fl + fw + fc +fs))

    sum the feedback factors to get one over all f factor and hey presto

    T = CSo/(1-f)

    We have manipulated a lot of symbols but we haven’t actually done anything. It is the same problem we started with.

    All we have done is divide the top and bottom by the same value. Interestingly, we can pick any value for Lp and the equation still holds; it is in that sense arbitrary (although it is a good idea to try to get a sensible value).

    Now what would happen if we got the value of Lp wrong?

    Well remember that we had a dependent value Lc = Lf -(Lp+Ll+Lw)

    Get Lp wrong and in this case Lc takes up the slack.

    My point is that it is Lf and Ls that along with the thermal inertia determine the short and long run development of the system in the small model approximation.

    Transforming T = F/(Lf+Ls) into T = CS0/(1-f) doesn’t solve the problem, it merely restates it in another guise.
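
    A quick numerical check of that identity, with invented conductance values and the sign conventions simplified, is sketched below; the two forms give the same temperature change by construction.

    ```python
    # Numerical check of the algebra above, with invented conductances (W/m^2 per K)
    # and simplified signs: dividing through by Lp only re-expresses the same answer.
    F = 3.7                                              # forcing, W/m^2
    Lp, Ll, Lw, Lc, Ls = 3.2, -0.6, -1.2, -0.5, -0.1     # Planck, lapse rate, water vapour, cloud, slow

    T_direct = F / (Lp + Ll + Lw + Lc + Ls)

    CS0 = F / Lp                                 # "no-feedback" sensitivity
    f = -(Ll + Lw + Lc + Ls) / Lp                # summed feedback factors
    T_feedback_form = CS0 / (1.0 - f)

    print(round(T_direct, 2), round(T_feedback_form, 2))   # identical by construction
    ```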

    I apologise in advance for any typos as I rarely do so many equations without a mistake or two.

    Now this is one of those moments when either I am nuts or most of you are nuts. Whatever my sanity, I fail to see how the strange abstraction that is the no-feedback sensitivity managed to create such a fuss.

    Alex

  33. Well congrats Dr. Curry! I think you just made history. This is the first blog post that I have read that has well over a hundred comments, and I could not find ONE ad-hom! Just well mannered discussions, explanations and arguments. Hopefully others are watching and realizing that what you set out to do is actually feasible.

    Good on ya!

  34. Judith C

    >I am planning a post to dig back into the history of how it was formulated in this way. Even if this method turns out to be fatally flawed or merely not useful, it is worthwhile to ponder an alternative system type approach for analyzing the climate system sensitivity to external perturbations<

    Yes, please

  35. Judith,

    After the response by Fred, I argued why Tomas’s argument – which is included in your article – is misleading. Welcome your comments.

    Jianhua

  36. This is going back to the earlier rather more philosophical discussion about what these models of CO2 forcings all might mean. It occurs to me that the whole grand attempt to build a scientific model of how CO2 might act in the atmosphere (with or without forcings) rather begs the question “Why are we doing this?”

    By looking at the question we need to answer, my sense is that we constrain the problem more satisfactorily from the top down than by making simplifying assumptions from the bottom up (e.g. CO2 with no forcings).

    This is more by way of a sketch of what I mean, but crudely we want to know: if we keep pumping CO2 into the atmosphere at the current rate, will the temp go up a couple of degrees by 2050, or say only 0.5 degree (and what’s the probability distribution)? Also we want to know if mitigation strategies will make any difference, but let’s put that aside for the time being.

    Now given the nature of this problem we can argue that we are interested in short-term impacts of CO2 (~50 years seems to be agreed as the amount of time we’ve really been having a significant impact) and we are interested in the next 30–50 years going forward (more weather than climate perhaps?), because if we have a high probability of 2 degrees warming on that time scale (and mitigation looks like it might work) then we’d probably have the basis for making some political choices.

    Given this problem it isn’t clear to me that a GCM is the answer, nor all the studious endeavour that is going on here. I think I would be trying to isolate the short-term impact of short-term changes to CO2 levels in the atmosphere from the historic record (while partialling out other factors).

    And what I’d be looking to the science to tell me is the likely form of the relationship between CO2 concentrations and temperature (in all its feedback glory), rather than specifying exactly what it will be to the nth degree.

    The point is that this information on the form of the relationship will give greater confidence in the models used to forecast the future impact of CO2. In addition, for this problem these top-down models are likely to give a more accurate assessment of the impact of CO2 than complex deterministic bottom-up models (for starters you’ll have a more robust assessment of the uncertainties involved).

  37. It would be good if somebody could explain how this average value of 1.0 or 1.2 C has anything to do with the real world of night and day, tropical and polar regions and oceans and land. Can anybody explain?

  38. To consider whether the CO2 GW hypothesis is viable, we need to know the ratio of CO2 molecules re-radiating absorbed energy against those transferring that energy by collisions to the non-GH gases (oxygen and nitrogen; 3,000+ of these to each CO2 molecule).
    From Dr. Roy Spencer:
    1) there are 26,900,000,000,000,000,000,000,000 molecules in 1 cubic meter of air at sea level.
    2) at room temperature, each molecule is travelling at a very high speed, averaging 1,000 mph for heavier molecules like nitrogen, over 3,000 mph for the lightest molecule, hydrogen, etc.
    3) the average distance a molecule travels before hitting another molecule (called the “mean free path”) is only 0.000067 of a millimetre.

    Question:
    What is ratio of the CO2 molecules re-radiating the absorbed energy against those transferring the absorbed energy by collisions to the non GH gases ?

    • No takers then?
      – Either the above doesn’t matter; if so, please explain why it doesn’t.
      – If it matters, then ‘models’ must take it into account; what are the numbers?
      Avoiding a difficult question (if it is difficult at all) is no credit to the participants of this blog, among them some of the highest competence.

    • Tomas Milanovic

      Well it is not very difficult.
      You get the power in the 15µ band emitted at some temperature, e.g. 270 K (use a black body calculator, units W/m²), measured within 1 m^3 of air.
      Call it P.
      You compute N=P/h.nu where h is the Planck’s constant and nu the frequency of the 15µ radiation.
      N is the number of photons emitted by the CO2 molecules in this 1 m^3 in 1 second.
      Compute the number NM of CO2 molecules in this m^3.
      N/NM is the proportion of CO2 molecules that reradiated.

      Now on the other hand you know that every CO2 molecule experiences some 10^10 collisions in a second.
      So without doing detailed calculation you can already suspect that the ratio you defined will be rather small.

      However I am puzzled why you are interested in this ratio because it is only half of the story.
      Of course, as only some 5% of the CO2 molecules are in an excited state at the considered temperatures (this is a constant for constant temperature), only 5% of the collisions will be able to transfer the absorbed energy to the non-GH gases.
      What is much more important are the 95% of the collisions where the CO2 is NOT excited (i.e. didn’t absorb) and BECOMES excited by energy transfer from the non-GH gases to CO2.

      In LTE (local thermodynamic equilibrium) these two energy transfers are equal, i.e. there is as much energy transferred from excited CO2 to non-GHG as there is energy transferred from non-GHG to CO2.
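
      For anyone who wants to follow the recipe numerically, here is a rough Python sketch. The band limits (13–17 µm), the 400 ppm CO2 fraction, and treating the blackbody flux as the band emission are all assumptions for illustration; as in the recipe above, it compares a per-square-metre photon rate with a per-cubic-metre molecule count, so it is an order-of-magnitude comparison only.

      ```python
      # Rough sketch of the recipe above: photons emitted per second in the 15-micron
      # band (blackbody upper bound at 270 K) versus CO2 molecules in 1 m^3 of air.
      # Band limits, CO2 fraction and collision rate are assumed, illustrative only.
      import numpy as np

      h, c, k = 6.626e-34, 2.998e8, 1.381e-23      # SI constants
      T = 270.0                                    # K
      lam = np.linspace(13e-6, 17e-6, 2000)        # assumed "15 micron band", metres

      # Planck spectral exitance (W m^-2 m^-1), summed over the band
      B = (2 * np.pi * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))
      P = np.sum(B) * (lam[1] - lam[0])            # roughly 60 W/m^2 in the band

      N_photons = P / (h * c / 15e-6)              # photons per second per m^2
      n_co2 = 400e-6 * 2.69e25                     # CO2 molecules per m^3 at ~400 ppm (assumed)
      collisions = 1e10                            # collisions per molecule per second (order of magnitude)

      print(f"emissions per CO2 molecule ~ {N_photons / n_co2:.1e} per second")
      print(f"versus ~ {collisions:.0e} collisions per molecule per second")
      ```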

  39. Tomas Milanovic

    Jianhua Lu I am afraid that you misunderstood the whole argument. It may be that immoderate use of computers leads to a loss of ability to do correct mathematics but most of your comments are irrelevant or simply wrong.

    In more detail.

    He is wrong mainly because the F in his argument is not well defined and has been changing its meaning along with his argument, and therefore is confusing.
    It is clear that you have been confused but that has more to do with the understanding of the mathematics than with the argument itself.
    F is perfectly defined by F = ε.σ.T⁴. It cannot be more precisely defined, right? Given ε , σ and T it can be locally computed everywhere.

    Equation (1) F = ε.σ.T⁴ is used as definition of emissivity, then F should be the infrared radiation.
    No. F is total power flux emitted by a body at temperature T (unit W/m²). It is integrated over all frequencies (Planck’s law). This should really be trivial.

    But we know that we cannot get any thing new just by differentiating a definition.
    And the point is? I have not said that I get something new. However computing dF where d is the differential operator is perfectly legal and necessary in this case.

    Equation (1) is meaningful for the change of temperature (planetary Te as a whole, or surface temperature Ta – in the two cases ε has different meanings) only when F means shortwave radiation absorbed by the planet.
    Sorry, but this is just meaningless rambling.
    Equation (1) is the Stefan–Boltzmann law (Planck’s law integrated over all frequencies) and has been known for more than a century already.
    Also, ε (emissivity) has only one meaning – it is the ratio between the emitted power and the power that would be emitted by the same body if it were a black body at the same temperature. That is why it is always < 1. This has nothing to do with averages like Te or Ta.

    That is, Equation (1) represents the balance between absorbed (incoming) energy flux and outgoing energy flux which basically determines the temperature of this blue planet.
    Of course it represents nothing of the sort.
    The law only gives the total emitted power for a body in equilibrium with radiation.

    (b) Equation (1) only holds at the TOA and globally. Locally at the TOA, or both locally and globally at any other height (including at the surface), Equation (1) does not hold because of the dynamical (horizontal or vertical) energy transport.
    Ad nauseam. No. The law holds everywhere for grey bodies, which is what we consider here. It depends on no transports, only on radiative equilibrium.

    Then Tomas’s following Equation :
    dF = 4.ε.σ.T³.dT + σ.T⁴. dε => dT = (1/4. ε.σ.T³).(dF – σ.T⁴. dε)
    should mean a perturbation to the energy-balance Equation (1), given the d terms are small relative to the mean F, T and ε. It is not the differential of the definition of emissivity.

    Repeating again the same misunderstandings. Equation (1) is not an “energy balance” and the differentials are NOT perturbations. In the equation above you have the correct expression of the differential of F.
    If F is a function f of T and ε, you have its differential dF = ∂f/∂T.dT + ∂f/∂ε.dε. This is also trivial.
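
    For readers who want to check the partial derivatives, a minimal sympy verification (nothing here beyond the differential already stated):

```python
# Verify that dF of F = eps*sigma*T^4 has coefficients 4*eps*sigma*T^3 (for dT)
# and sigma*T^4 (for d(eps)).
import sympy as sp

eps, sigma, T = sp.symbols('epsilon sigma T', positive=True)
F = eps * sigma * T**4

print(sp.diff(F, T))    # 4*T**3*epsilon*sigma
print(sp.diff(F, eps))  # T**4*sigma
```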

    Again, dF is the perturbation in SHORTWAVE radiation absorbed by the planet …
    Again not at all! dF is the differential of F which is the EMITTED TOTAL POWER integrated over all frequencies.

    Then we come to this equation:
    dTa = { 1/S . ∫ [(T/ 4.ε) . dε].dS } + { 1/S . ∫ [dF/4. ε.σ.T³].dS }
    Tomas said ” The first term is due to the spatial variation of emissivity.”
    he is confusing the dε (perturbation of emissivity due to external forcing or feedbacks) with the spatial variation. No, the first term just means the spatial average of dε, and the d in dε does not mean spatial variation.

    Here we touch a mind-boggling mathematical abyss. The first term is the contribution of emissivity variations to the variation (differential) of the average temperature (dTa). The d in dε again means a differential.
    I assumed sufficient knowledge of basic differential calculus in my post for those who would comment.
    As this is not the case, let’s be more detailed.
    We have ε as a function of r, θ, φ in spherical coordinates. As we consider surface fluxes, r is constant, and as we consider grey bodies, ε is independent of frequency.
    Then dε = ∂ε/∂θ.dθ + ∂ε/∂φ.dφ.
    So now you substitute dε in the first term of the equation above and, of course, you express the differential surface element dS in spherical coordinates too. What you obtain, without surprise, is a double integral over the sphere’s surface with variables θ and φ.
    This has nothing at all to do with a “perturbation of emissivity due to external forcing or feedbacks”, which would be total nonsense.
    I apologize to other readers for this mathematical tedium, but I didn’t expect that I’d have to go to such basics.
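
    As an illustration of the mechanics only, here is a minimal sketch of evaluating such a surface average numerically, with dS = R².sinθ.dθ.dφ (R cancels in the average); the emissivity field used is an arbitrary hypothetical choice, not a real distribution:

```python
# Numerical surface average 1/S * integral of eps over a sphere,
# with dS = R^2 sin(theta) dtheta dphi.
import math

def eps(theta, phi):
    # hypothetical smooth emissivity field on the sphere (illustrative only)
    return 0.95 + 0.03 * math.cos(theta) + 0.01 * math.sin(phi)

n_theta, n_phi = 200, 400
dtheta = math.pi / n_theta
dphi = 2 * math.pi / n_phi

num = 0.0   # integral of eps dS, with the common factor R^2 dropped
den = 0.0   # integral of dS = 4*pi, with R^2 dropped
for i in range(n_theta):
    theta = (i + 0.5) * dtheta
    w = math.sin(theta) * dtheta * dphi
    for j in range(n_phi):
        phi = (j + 0.5) * dphi
        num += eps(theta, phi) * w
        den += w

print(f"surface area of unit sphere ~ {den:.4f} (exact: {4 * math.pi:.4f})")
print(f"area-average emissivity ~ {num / den:.4f}")
```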

    Again he is not right here, because he is using F and Equation (1) as a definition, but all of climate change is about the perturbation of the energy balance. F should be shortwave radiation here …

    This is again completely confused.
    Might it be that you don’t know what Planck’s law is about?
    Planck’s law, which is what I use, tells how the total emitted power changes when the temperature changes.
    Of course it can’t be used to compute how the temperature changes when the incident radiation changes!
    This was precisely the most important point in my post, which I explicitly treated but you seem to have missed.
    If you change the incoming radiation flux, the body doesn’t react only radiatively. It will transform and transfer the incoming energy in many manners – store part of it, transmit elsewhere by conduction and convection and/or change its state. Radiation laws alone say nothing about how the energy is distributed among the different channels.

    The surface energy balance should be:
    Fa (net shortwave) + R_down = ε_s.σ.T⁴ + LE + H
    where R_down is the downward long-wave radiation, ε_s is the surface (ocean, soil, or rock) emissivity, LE is the surface latent heat flux, and H the surface sensible heat flux. Of course, we could also add an ocean heat-storage term to the above equation.

    What has that to do with anything? This is just approximate energy conservation for an arbitrary surface/volume. If you want to say that energy is conserved, I think we all agree.

    A small point about Tomas’s argument: when we are talking about equilibrium in climate, it means a statistical average
    A very ill-defined and unjustified use of the equilibrium concept. When a physical law is valid only in equilibrium, it can’t be extended to some arbitrary “statistical averages”. But I did want to evaluate the averages in my post; that’s why I rigorously computed what the integrals 1/S.∫T.dS are.

    Welcome comments on my opinion.

    I have basically 2.
    The first is that you hopelessly mix up and confuse global and local, which led you to misunderstand my post. The point was to evaluate dTa/dFa by using Planck’s law for grey bodies and to show that this parameter is ill-defined because it depends on the local temperature, radiation and emissivity fields.
    In other words, an infinity of different fields will give the same dTa/dFa, which makes it useless for evaluating temperature responses.
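
    A minimal sketch of that dependence on the local field – uniform emissivity is assumed, and the two temperature fields are hypothetical, chosen only to share the same area mean:

```python
# Two temperature fields with the same area mean give different area means of the
# Planck response dT/dF = 1/(4*eps*sigma*T^3), i.e. different "sensitivities".
sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
eps = 1.0         # uniform emissivity (assumption)

field_uniform = [255.0, 255.0, 255.0, 255.0]      # hypothetical local temperatures, K
field_contrasted = [210.0, 240.0, 270.0, 300.0]   # same 255 K mean, larger contrast

def mean_sensitivity(temps):
    """Area mean of 1/(4*eps*sigma*T^3) over equal-area cells."""
    return sum(1.0 / (4 * eps * sigma * T**3) for T in temps) / len(temps)

for field in (field_uniform, field_contrasted):
    print(f"mean T = {sum(field) / len(field):.0f} K, "
          f"mean dT/dF = {mean_sensitivity(field):.3f} K per W/m^2")
```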

    The second is that you really need more mathematical training; your knowledge of differential calculus appears sketchy at best.

    • Thanks for your reply, Tomas.

      Yes, I know what Planck’s law is, but no working scientist tries to use just Planck’s law and its differential to understand climate change and climate sensitivity.

      • Tomas Milanovic

        Jianhua Lu
        Then just 2 remarks :

        1)
        This and variations on the same theme abound:

        the 1C value for a forcing of 3.7 W/m^2 (the canonical value for doubled CO2 based on radiative transfer equations and spectroscopic data) is derived by differentiating the Stefan–Boltzmann equation that equates flux (F) to a constant (sigma) × the fourth power of temperature. We get dF/dT = 4 sigma T^3, and by inversion, dT/dF = 1/(4 sigma T^3). Substituting 3.7 for dF and 255 K for Earth’s radiating temperature, and assuming a linear lapse rate, dT becomes almost exactly 1 deg C.
        Guess what law is used in this argument?
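
        For reference, a quick numerical check of the figure quoted above, using only the values it states:

```python
# dT = dF / (4*sigma*T^3) with the quoted values.
sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
dF = 3.7          # canonical forcing for doubled CO2, W/m^2
T = 255.0         # effective radiating temperature, K

print(f"dT = {dF / (4 * sigma * T**3):.2f} K")   # about 0.98 K
```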

        2)
        Every definition of climate sensitivity I have seen is equivalent to dTa/dFa, where d is the differential, Ta is the average surface temperature and Fa is the average total radiative power.
        What I did was just to compute dTa/dFa according to this definition, using among other things Planck’s law for grey bodies, simply because it is correct.
        Do you consider that it is forbidden to use Planck’s law or that it is wrong?

      • Tomas,

        Using your equation (1), dF = 4.ε.σ.T³.dT + σ.T⁴.dε =>

        dT = (1/4.ε.σ.T³).(dF – σ.T⁴.dε)

        let dF = 0, –σ.T⁴.dε = 3.7, T = 255 K ==> dT ≈ 1

        (2) Planck’s law (which I believe is right) alone cannot tell us how climate changes if there is no information about the “balance”. Or if there is no “balance”, maybe no climate exists.

      • Tomas Milanovic

        Jianhua Lu

        It is not “my” equation, it’s Planck’s and Boltzmann’s.

        let dF =0 , – σ.T⁴. dε =3.7, T=255K, ==> dT= 1

        Why should that be interesting? You chose an exotic case in which the emissivity variation just compensates the temperature variation. And?

  40. We have learned that the no-feedback CO2 sensitivity is not something that can be observed in nature, as nature has all the feedbacks. On the other hand it is possible to define the concept in models, and we have been told that the models find rather consistently that the no-feedback sensitivity is 1 – 1.2 K. Robustness of the observation is promising, as it may indicate that there is a good reason for this robustness. If there is indeed a good reason, it might be possible to explain the reason more directly without going through all the processes involved in the models. The reason might essentially be a conservation law valid for all models.

    Basically, the postulated conservation law should say that models which are very different in many details lead, when feedbacks are suppressed, to the same result: the average temperature of the earth’s surface increases approximately as much as the climate forcing would indicate for the change of the effective radiative temperature according to the Stefan–Boltzmann law.

    If we are able to give a good and understandable justification for such an approximate conservation law, we may have found a way to salvage quite a lot from this concept.

    My idea for the possible reason that such a conservation law might be found is based on postulating that the non-feedback changes in the outgoing radiation at the TOA would be dominated by changes in radiation originating at the earth’s surface and in the lowest atmosphere. Whether this postulate is valid in the existing climate models can be checked by their users.

    • Especially in the tropics, much of the outgoing IR emanates from cold cloud tops in the upper troposphere. It is only in the polar and subpolar regions where IR emanates primarily from the surface and lower atmosphere.

      • Judith,

        I agree with you that cloud is a big issue. Soden et al. (2004?) mentioned the difference between cloud forcing and cloud feedback in the PRP sense. I’d been wondering whether we can estimate the climate sensitivity and climate feedbacks with respect to clear-sky conditions, for which the calculation of long-wave radiation may be more robust. The existence of clouds and their changes can be lumped into cloud forcing. I prefer using cloud forcing because it is measurable, while the cloud feedback as usually calculated is not.

        The atmosphere of discussion here is wonderful.


      • The Soden et al reference is Cloud Forcing and Feedback

        It was the first paper I saw that clearly explained the distinction to me.

      • Yes, I do understand that much of the IR leaves from the upper troposphere. My postulate is more limited: the changes in the IR are related to the radiation from the surface and the lower troposphere.

        Thus I postulate that the radiation that already originates from the top of the troposphere at the lower concentration of CO2 would not change much.

        Stated in a third way, I postulate that the minimum temperature of the tropopause would remain essentially constant, and that the change in the surface temperature would affect the altitude of this minimum more than its value.

        There are more assumptions involved in my proposal, but this is a central one of the assumptions.

    • Richard S Courtney

      Pekka Pirilä:

      You suggest:
      “we have been told that the models find rather consistently that the no-feedback sensitivity is 1 – 1.2 K. Robustness of the observation is promising, as it may indicate that there is a good reason for this robustness.”

      Yes, and the “good reason” is probably that the models utilise similar assumptions and then are tuned to match the same historical climate data (i.e. rise in global temperature over the past century).

      So, what does this “robustness” tell us about the real climate of the real world?

      Richard

      • Richard: “is probably that the models utilise similar assumptions and then are tuned to match the same historical climate data (i.e. rise in global temperature over the past century).”

        A common misconception is that global climate models are tuned to match the real world change they are attempting to simulate. That is incorrect. Models are tuned, via appropriate climate data plus parametrizations, to match a starting climate. If they are then forced with a perturbation – e.g., a rise in CO2 – their output in terms of temperature rise will be what it is. If it matches, fine. If it doesn’t, there is no retuning to make it “come out right”. As models have improved with time, they have performed better, at least on a global long term basis, although less well regionally or for short intervals.

        The 1.2 C rise for a no-feedback scenario is robust for all the reasons we have cited above – i.e., it exploits, among other things, the relationship between TOA or tropopause forcing and surface response. The mathematics linking the two is found in the lapse rates. The physical principles involve radiative transfer within the atmosphere, as quantified by the transfer equations, and entail changes in absorptivity and emissivity in relation to greenhouse gas concentrations and temperature. (Note – of interest re some comments above, emissivity can’t change at the surface in a no-feedback calculation, because albedo is fixed, but critical emissivity changes occur in the gas phase – the atmosphere – because the absorptivity and emissivity of atmospheric layers are a function of the concentration of absorbers/emitters such as CO2.)

      • Fred,

        Given all of the limitations and uncertainties that exist in the models, I believe your description of how the models work is right.

        I have to go to work now. BTW, I am amazed at how deep you can go into climate science as a professional cancer scientist.

        Judith’s blog is too attractive, and I will try hard to visit less, otherwise my research papers will be in danger : )

      • I agree about this blog being “too attractive” hence a threat to ongoing research. I am way behind on my funded projects. What higher compliment can there be in science?

      • Leonard Weinstein

        Fred,
        Your comment “A common misconception is that global climate models are tuned to match the real world change they are attempting to simulate. That is incorrect. Models are tuned, via appropriate climate data plus parametrizations, to match a starting climate. If they are then forced with a perturbation – e.g., a rise in CO2 – their output in terms of temperature rise will be what it is. If it matches, fine. If it doesn’t, there is no retuning to make it “come out right”. As models have improved with time, they have performed better, at least on a global long term basis, although less well regionally or for short intervals.”
        is not correct. In fact, there are several “knobs” (what you call parametrizations) – crude physical models of less-understood physics (clouds, ocean currents, aerosols, etc.) – that are selected when examining results forced with a perturbation (e.g., a rise in CO2), and then continually adjusted in a back-and-forth process of looking at data downstream of the initial condition until the best fit to the data is obtained. This approach can seem to show the result of the CO2 rise, but so far NONE of the models has shown real skill when data beyond the range used to adjust the knobs are examined.

      • Models are tuned to initial conditions, but not retuned to make output (e.g., temperature response to CO2) match real world observations. If they were, they would match better than they do, which is fairly well for long term global projections, but less well for regional or short term intervals, and poorly for individual phenomena such as ENSO.

      • Alexander Harvey

        Fred and all:

        This is an important matter, and it suffers much from the black box AOGCM view, due to lack of information as to what a parameterisation scheme is and is not.

        This lack of information leads to assumptions, many of which could probably be cleared up by the modellers.

        It is likely that it would be a curate’s egg, as there may be real issues that remain and canards that could be dispelled.

        The issue of how information flows between data and models, between models, and between modellers is of interest.

        Obviously data informs models but models also inform data (ERA 40). It is a complex web of information flows.

        The model-tuning issue really needs addressing, at least at the “art of the possible” level. At a practical level I believe that many of the IPCC runs are so expensive that very little of the models’ parameter space is explored, and hence opportunities to tweak them are slim. But that doesn’t in itself mean that the models have not been educated in more subtle, inadvertent ways, or exclude the possibility that just getting some basics right, informed by neutral information, leads to surprisingly good results as judged by the ability to reproduce warming.

        Some perceptions could be dispelled quite readily – for example, that AOGCMs are fed with the IPCC CO2 forcing value and with a preselectable climate sensitivity. For one thing, it is not clear how such figures could be plugged in; for another, I believe analysis of the models indicates that they come up with their own values for both, which differ from the IPCC CO2 forcing by a small degree and give a range of values for the sensitivity. The last point is why the IPCC sensitivity has a range.

        Some of the confusion comes from the vocabulary: parametrisation sounds like parameter. If they had thought to name sub-grid-scale parameterisation schemes as “sub-grid-scale solutions of dynamical equations, informed by and informing grid-scale dynamics”, perhaps it might be clearer that they contain as much physics as there is time to calculate.

        Alex

        Alex – As usual, you approach this issue with a greater degree of sophistication than typifies exchanges on the subject. I am not qualified to construct climate models, but my information comes from individuals who do this for a living. They would probably agree with you that when new models are developed (a heroic undertaking), they try to learn from deficiencies of the older models. On the other hand, this learning process entails attempts to better address mismatches between modeled and observed climate phenomena in a system unperturbed by a forcing of interest. They would also, I believe, tell you that once a model is forced with a perturbation, the model is not retuned to adjust the output – i.e., new models are not tested against the results of their simulations in order to perfect them. This is particularly of interest regarding climate sensitivity. It is an emergent property of GCMs used in the manner we’re discussing – i.e., it is not an input but comes out of the basic equations (momentum, energy, mass, heat capacity, etc.) plus the input observational climate data and the parametrizations that have been used in the original tuning prior to running simulations.

        What all this means, as far as I can tell, is that the evolution of models toward improved performance is based mainly on their improved ability to match a starting climate, and that given the complexity of model construction and testing, after-the-fact modifications based on output have no role in the operation of existing models, and in terms of guiding new model development, are likely to play only a very indirect and subordinate role in improving their performance.

        Interestingly, climate sensitivity is in fact an input in a particular type of model operation, applied mainly to paleoclimatic reconstructions. In some cases, different sensitivity values are entered in order to see which one best reproduces observational data. Here, the question is not how the climate will change based on a set of forcings, because that is already known. Rather, the models are used to determine which sensitivity value reproduces the observed data. It is one of several complementary methods used to assess climate sensitivity.

        Determining which results from a model have been introduced by the modelers’ conscious or unconscious wish to get those results, and which are unbiased results from the model, is often extremely difficult. In building models the modelers make many choices whose consequences are to some extent known to them. It is practically impossible to make these choices in a perfectly objective manner. From what I have learned about climate modeling, the field is certainly not free of these problems.

        I believe that most modelers are well aware of this issue. Being careful and making a lot of effort to avoid bias helps, but does not solve the issue completely.

      • Pekka – I very much agree with you about the risk of bias- it’s a constant danger, and I’m sure it has caused distortions. At the same time, I would also recommend the video Alex Harvey linked to below on parametrization (or if you’re British, parametrisation). It’s enlightening and humbling. It illustrates how difficult it is to introduce even minor adjustments to solve one problem without causing a model to misbehave conspicuously in multiple other ways. That suggests the impracticality of applying arbitrary corrections with the hope of making entire model simulations that are run over decades yield a desired result while preserving their ability to reproduce basic climate behavior. For example, one could tweak a model to make it warm up more via an arbitrary humidity adjustment, but then it would be difficult to explain why the adjusted model is causing the climate to rain 24 hours per day in the Sahara Desert.

        Here is the link again – parametrisation

      • Alexander Harvey

        Fred,

        as ever you are generous with your time.

        The science is so exciting.

        Re below: I am pleased that you liked the video, there is a wealth of relevant material on that site.

        Alex

      • Richard S Courtney

        Fred Moolten:

        You assert:
        “Models are tuned to initial conditions, but not retuned to make output (e.g., temperature response to CO2) match real world observations.”

        Sorry, but NO! That is not true.

        I have twice covered this matter in threads on this blog.

        My comment was in the thread titled “What can we learn from climate models” that is at
        http://judithcurry.com/2010/10/03/what-can-we-learn-from-climate-models/

        It was as follows:

        “Richard S Courtney | October 6, 2010 at 6:07 am

        Dr Curry:

        Thank you for your thoughtful and informative post.

        In my opinion, your most cogent point is:
        “Particularly for a model of a complex system, the notion of a correct or incorrect model is not well defined, and falsification is not a relevant issue. The relevant issue is how well the model reproduces reality, i.e. whether the model “works” and is fit for its intended purpose.”

        However, in the case of climate models it is certain that they do not reproduce reality and are totally unsuitable for the purposes of future prediction (or “projection”) and attribution of the causes of climate change.

        All the global climate models and energy balance models are known to provide indications which are based on an assumed degree of forcing resulting from human activity, with anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature. This ‘fiddle factor’ is wrongly asserted to be parametrisation.

        A decade ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
        And my paper demonstrated that the assumption of anthropogenic aerosol effects being responsible for the model’s failure was incorrect.
        (ref. Courtney RS ‘An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre’ Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).

        More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
        (ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).

        Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model.

        He says in his paper:
        ”One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.

        The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy.

        Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at http://www.nature.com/reports/climatechange ) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.”

        And Kiehl’s paper says:
        ”These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.”

        And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.

        Kiehl’s Figure 2 can be seen at http://img36.imageshack.us/img36/8167/kiehl2007figure2.png
        Please note that it is for 9 GCMs and 2 energy balance models, and its title is:
        ”Figure 2. Total anthropogenic forcing (Wm2) versus aerosol forcing (Wm2) from nine fully coupled climate models and two energy balance models used to simulate the 20th century.”

        The graph shows that the anthropogenic forcings used by the models span a large range of total anthropogenic forcing, from 1.22 W/m^2 to 2.02 W/m^2, with each of these values compensated to agree with observations by use of assumed anthropogenic aerosol forcing in the range -0.6 W/m^2 to -1.42 W/m^2. In other words, the total anthropogenic forcings used by the models vary by a factor of almost 2, and this difference is compensated by assuming values of anthropogenic aerosol forcing that vary by a factor of almost 2.4.

        Anything can be adjusted to hindcast observations by permitting that range of assumptions. But there is only one Earth, so at most one of the models can approximate the climate system which exists in reality.
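
        A toy energy-balance sketch of the compensation described here – the sensitivity and forcing numbers are hypothetical, not taken from Kiehl or from any actual model:

```python
# With a simple linear response dT = lambda * F_total, a higher-sensitivity model
# combined with a stronger (more negative) aerosol forcing can hindcast the same
# warming as a lower-sensitivity model with weaker aerosol forcing.
models = [
    # (label, sensitivity in K per W/m^2, GHG forcing W/m^2, aerosol forcing W/m^2)
    ("low-sensitivity model",  0.4, 2.6, -0.6),
    ("high-sensitivity model", 0.8, 2.6, -1.6),
]

for name, lam, f_ghg, f_aer in models:
    f_total = f_ghg + f_aer
    print(f"{name}: total forcing {f_total:.1f} W/m^2, "
          f"hindcast warming {lam * f_total:.2f} K")
```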

        The underlying problem is that the modellers assume additional energy content in the atmosphere will result in an increase of temperature, but that assumption is very, very unlikely to be true.

        Radiation physics tells us that additional greenhouse gases will increase the energy content of the atmosphere. But energy content is not necessarily sensible heat.

        An adequate climate physics (n.b. not radiation physics) would tell us how that increased energy content will be distributed among all the climate modes of the Earth. Additional atmospheric greenhouse gases may heat the atmosphere, they may have an undetectable effect on heat content, or they may cause the atmosphere to cool.

        The latter could happen, for example, if the extra energy went into a more vigorous hydrological cycle with resulting increase to low cloudiness. Low clouds reflect incoming solar energy (as every sunbather has noticed when a cloud passed in front of the Sun) and have a negative feedback on surface temperature.

        Alternatively, there could be an oscillation in cloudiness (in a feedback cycle) between atmospheric energy and hydrology: as the energy content cycles up and down with cloudiness, then the cloudiness cycles up and down with energy with their cycles not quite 180 degrees out of phase (this is analogous to the observed phase relationship of insolation and atmospheric temperature). The net result of such an oscillation process could be no detectable change in sensible heat, but a marginally observable change in cloud dynamics.

        However, nobody understands cloud dynamics so the reality of climate response to increased GHGs cannot be known.

        So, the climate models are known to be wrong, and it is known why they are wrong: i.e.
        1. they each emulate a different climate system and are each differently adjusted by use of ‘fiddle factors’ to get them to match past climate change,
        2. and the ‘fiddle factors’ are assumed (n.b. not “estimated”) forcings resulting from human activity,
        3. but there is only one climate system of the Earth so at most only one of the models can be right,
        4. and there is no reason to suppose any one of them is right,
        5. but there is good reason to suppose that they are all wrong because they cannot emulate cloud processes which are not understood.

        Hence, use of the models is very, very likely to provide misleading indications of future prediction (or “projection”) of climate change and is not appropriate for attribution of the causes of climate change.

        Richard”

        Richard

      • Richard – see my reply to your later short comment on this topic.

      • Richard S Courtney

        Fred Moolten:

        You say for me to look for your reply to my “later short comment”.

        Sorry, I fail to find it. This must be my fault, but I would be grateful if you were to point me to it.

        Richard

      • Richard S Courtney

        Ooops! It has now appeared. Sorry.

        Richard

      • My fault. I didn’t finish writing it until a few minutes after I referred you to it.

      • I think you may be right Leonard.
        I recall (though vaguely) Vincent Gray saying models are ‘tuned’ to a period (of about 10 years) with no natural variations, usually a period post-1940s.
        The ‘knobs’ are then tweaked until the model matches the chosen period.

        I could well be wrong as this is a recollection only.

      • Models are tuned to reproduce a climate state, including its temporal variations. What does not happen is for them to be tuned after they have been asked to simulate a changing climate in order for the simulation to match observations.

      • Richard S Courtney

        Fred Moolten:

        In response to my accurate statement that said;
        “the models utilise similar assumptions and then are tuned to match the same historical climate data ”

        You have restated my statement but implied my statement is a “misconception” by writing;

        “A common misconception is that global climate models are tuned to match the real world change they are attempting to simulate. That is incorrect. Models are tuned, via appropriate climate data plus parametrizations, to match a starting climate. If they are then forced with a perturbation – e.g., a rise in CO2 – their output in terms of temperature rise will be what it is. If it matches, fine. If it doesn’t, there is no retuning to make it “come out right”. As models have improved with time, they have performed better, at least on a global long term basis, although less well regionally or for short intervals.”

        Please see my comment (at December 15, 2010 at 5:44) in response to a comment you make below for an explanation of why my correct statement is important.

        Richard

      • Richard – My statement that models are not retuned to make them coincide with observed data is correct. However, its disagreement with the points you make is only partial, as you can see by revisiting my reply above to Pekka Pirila’s concern about bias in model construction. Basically, new models are developed in ways that attempt to correct deficiencies in older ones. Once developed and forced with a perturbation (e.g., increasing CO2), they are not retuned. However, the model development itself is influenced by the experience with older models.

        As I pointed out, the complexity of model development makes it almost impossible to construct a new model specifically designed to yield a desired outcome without forfeiting the ability to simulate existing climate states. Therefore, even initial tuning to produce an outcome is impracticable. Models do, however, as you point out, evaluate possible reasons for past deviations. The aerosol forcing you mention is an example, but accurate modeling of aerosols is definitely not a “fiddle factor” but a legitimate attempt to account for an important factor in climate change. One can’t parametrize aerosols simply to yield a result without yielding obvious disparities with known aerosol behavior.

        In my further comments replying to Alex Harvey, I discussed this in terms of the video he linked to, which I commend to your attention. It illustrates more graphically than I can explain in words why adjustments are strongly constrained by real world data and established physics principles, and can’t simply be made at will. The link, again, is Parametrisation

        I don’t believe one can view that lecture without appreciating the constraints that prevent modelers from simply willing their models to do their bidding. It is for that reason that the fairly good correspondence between model results and long term global temperature trends is robust. The fact that different models will achieve similar results with different parameters is not paradoxical, because consistency requires models that are tuned to existing climate states to adjust individual parameters so that they deviate from some hypothetical mean in the same direction. It is not a symptom of bias.

      • A good example of my last point is found in model assessments of water vapor and lapse rate feedbacks. The water vapor feedback is positive and the lapse rate feedback negative, partially canceling out the water vapor effect. Different models exhibit different sensitivities to both feedbacks, but models more sensitive to water vapor are also more sensitive to lapse rate. As a consequence, the difference between the two (which is a moderate net positive feedback) is less variable than either feedback alone, and is a more reliable indicator of true climate behavior.
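
        A tiny numerical illustration of that statistical point – the per-model feedback values below are made up for illustration, not taken from any model ensemble:

```python
# When two feedbacks are anti-correlated across models, their combination spreads
# less across the ensemble than either feedback does on its own.
import statistics as st

water_vapor = [1.6, 1.8, 2.0, 2.2, 2.4]       # hypothetical positive feedbacks, W m^-2 K^-1
lapse_rate  = [-0.5, -0.6, -0.9, -1.2, -1.3]  # hypothetical negative feedbacks, anti-correlated

combined = [wv + lr for wv, lr in zip(water_vapor, lapse_rate)]

print("spread of water vapor:", round(st.pstdev(water_vapor), 3))
print("spread of lapse rate: ", round(st.pstdev(lapse_rate), 3))
print("spread of combined:   ", round(st.pstdev(combined), 3))   # smallest of the three
```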

      • Fred,

        While I agree with your statement on the joint water vapor and lapse rate feedback in the models, I would say it is not the whole story, but just part of it. I will submit a paper before Christmas on the untold other part of the story. The paper also uses CMIP3 model outputs, but from a different perspective. I hope I will be able to convince my peers.

      • Christopher Game

        Dear Jianhua Lu,
        I am just starting to read your papers, with particular note of your use of feedback gain matrices; this is something that looks like physics – a welcome contrast to the hopelessly flawed IPCC “forcings and feedbacks” formalism that is the starting point of the present blog about “no-feedback sensitivity”, which is just a propaganda trick. Now I will press on with reading your papers.
        Yours sincerely, Christopher Game

      • Richard S Courtney

        Fred Moolten:

        Thank you for your response.

        I agree with your statements saying:
        “As I pointed out, the complexity of model development makes it almost impossible to construct a new model specifically designed to yield a desired outcome without forfeiting the ability to simulate existing climate states. ”
        And
        “adjustments are strongly constrained by real world data and established physics principles, and can’t simply be made at will.”
        And
        “consistency requires models that are tuned to existing climate states to adjust individual parameters so that they deviate from some hypothetical mean in the same direction. It is not a symptom of bias.”
        (Assuming that by “bias” you mean the modellers are fixing an outcome they want.)

        But we part company – by a long way – when you assert;
        ” initial tuning to produce an outcome is impracticable”.

        The ‘aerosol cooling’ used to obtain a match with past average global temperatures is very practical; each model uses it, and each model uses a unique value of the assumed ‘aerosol cooling’ to produce that desired outcome.

        Please note that this use of the ‘aerosol cooling’ is a certain fact.

        And you state a demonstrable error of fact when you assert;
        “The aerosol forcing you mention is an example, but accurate modeling of aerosols is definitely not a “fiddle factor” but a legitimate attempt to account for an important factor in climate change.”

        No! The Hadley Centre published that it adopted the assumption of ‘aerosol cooling’ as a method to stop their model from ‘running away’. The true magnitude of the ‘aerosol cooling’ is not known, and they say they chose the value which obtained a fit to mean global temperature. Either they lied or you are mistaken, and I choose to believe you are mistaken.

        As I explained, Kiehl’s study determined that each model uses the same adjustment but each model uses a different and unique value of assumed ‘aerosol cooling’. As I said, his Figure 2 shows this and can be seen at http://img36.imageshack.us/img36/8167/kiehl2007figure2.png

        That range of values is not, as you assert, “accurate modeling of aerosols”. How wide does a range have to be before it is inaccurate?

        Given that range of assumed values of the ‘aerosol cooling’ to obtain a fit in each case, the input values can only be a ‘fiddle factor’. And I do not think it reasonable to accept your claim that it is “a legitimate attempt to account for an important factor in climate change”.

        Furthermore, you did not query my statement that said of the Hadley Centre’s model
        “And my paper demonstrated that the assumption of anthropogenic aerosol effects being responsible for the model’s failure was incorrect.”

        If you had, I could have explained the matter. Simply, the initial use of the ‘aerosol cooling’ input was to determine if that cooling was responsible for the model indicating too much warming over the twentieth century. The required degree of cooling to adjust the model was known, but the actual degree of cooling that had happened was not known. However, the spatial distribution of the cooling that had happened in reality was known. So, the required degree of ‘aerosol cooling’ was input to the model and distributed spatially according to its known distribution. This was a reasonable test because if the ‘aerosol cooling’ were responsible for the model ‘overheating’ then the adjusted model should indicate a spatial distribution of surface warming that had a reasonable match to the observations. But there was no such match. For example, the model indicated regions of cooling which failed to agree with the regions where cooling had been observed: indeed, it indicated the greatest cooling in the region where most warming had been observed. Hence, it was demonstrated that absence of ‘aerosol cooling’ in the model was not the cause – at least, not the sole cause – of the model ‘overheating’.

        This finding is pretty damning of the model’s performance, so they chose to proclaim the match of the model’s indications to mean global temperature which – of course – they had fixed by their choice of the degree of ‘aerosol cooling’.

        Each model is similarly adjusted, but each model uses a different adjustment.

        Richard

      • Richard – In seeking areas of agreement with your last comment, I suppose I can say that I agree with your statement that we disagree.

        Before making my main point, a few prefatory ones.

        First, I acknowledge that aerosol forcing is associated with some uncertainty, but there is no plausible evidence to suggest that the range threatens major conclusions about the magnitude of anthropogenic effects. To understand why, it’s important to realize that abundant evidence indicates that aerosols have a net cooling effect, and the question at hand is how much. If it is small, that implies climate sensitivity to perturbations is small, and the CO2 sensitivity may be slightly less than typically estimated, but not very much less, because current aerosol forcing estimates are already in the low range. The more uncertain end involves the possibility that aerosols cool far more than generally estimated, because the upper end is less constrained. This makes climate sensitivity to perturbations larger than generally estimated, and implies not only a larger than estimated CO2 potency but also a greater danger of unmasking CO2-mediated warming as a consequence of pollution controls.

        2. A small point – you claim the Hadley center “assumed” (your word) aerosol cooling to make their model consistent with data. I wonder whether you can quote from Hadley the passage where they refer to cooling as an assumption, as opposed to a real phenomenon whose potency needed to be evaluated.

        3. My main point has little to do with the history of Hadley’s climate assumptions, and much to do with my statement that aerosol forcing and its effects in current and recent model simulations are strongly constrained by real world data – they are not made up to fit a model to the data. For the sake of expediency, I will simply refer you to chapter 2 of AR4 WG1, and more importantly, the many dozens of references on the subject. The latter, and not IPCC interpretations, are what one needs to review for an accurate perspective. What they show (along with the text) is that aerosol optical depth and its variations – the primary components of forcing – are now thoroughly matched against multiple observational datasets, derived principally from satellite monitoring.

        These data leave room for some uncertainty, but not infinite room, and the issue is addressed in the text and in the acknowledged range of possible direct and indirect aerosol effects I mentioned above. It leaves little room, in my view, for a conclusion that models do what they want with aerosols to bring their output into line with mid-century and late 20th century temperature trends. I would also recommend the same references to others interested in this topic.

      • First, I want to emphasize that I am definitely not against the use of models. My whole career, starting from the early 1970s, has involved using models in physics, engineering and techno-economic systems. While I have used models and followed their use by other scientists, I have remained convinced that they are very useful, as they are often the only way of putting together all relevant information. On the other hand, I have seen time after time that many people have far too much trust in straightforward interpretation of the results. All complicated models are influenced by subjective choices of their authors, and their results are typically most valid when they are considered in relation to these choices.

        Often the model builders learn most, and what can be given to outsiders is finally their knowledge, which they can visualize by means of the model results. At the same time one could pick from the model runs many results which are obviously misleading. These results appear wrong unless it is understood that a model variable of a certain name is not even supposed to correspond to the variable of the same name in the real world.

        Large climate models appear to have all the same advantages and problems of interpretation I have described above. One can build a model with different basic choices concerning basic processes – the choice of how to include aerosol forcing is one example; parameterization of cloud formation, or of those features of ocean oscillations that cannot (yet) be modeled from basic principles, are others. When one has made such choices, one can continue with the details and tune them to attain acceptable agreement with present empirical knowledge. Experience shows that similar success can be obtained from different starting points, but the resulting models are not equal. They give different results when applied to situations outside those used in tuning.

        The modeler cannot decide where this process leads, but he has some prior knowledge of the likely general lines. He knows that a certain choice at an early stage of the process is likely to lead in a specific direction. That includes influences on the climate sensitivity in the model.

        Comparing different models is one of the common approaches to estimating their skill in making forecasts. Similarity of the results increases trust, but it is not a proof, as no modeler works in isolation from the others. Many basic ideas are known to all of them, and it is certain that they have all made similar choices, also on issues where other choices could have been made with equal justification. All are also influenced by the common limitations in making models solvable and stable enough to give any results. All this means that each modeler builds his or her own subjective view of the reliability of the results. The collection of these inside subjective views is the best knowledge of their value. I believe that no one who has not worked closely with the models can really judge their reliability. Unfortunately we are faced with the situation that the best knowledge is shared only by people who cannot be fully objective, whatever they try.

      • Richard S Courtney

        Pekka Pirilä:

        Well said! Excellent! Thank you.

        Richard

      • Richard S Courtney

        Fred:

        It saddens me that this discussion is following the typical route of AGW discussions. That route is:

        1.
        An AGW-supporter makes an outrageously false assertion;
        e.g. “Models are tuned to initial conditions, but not retuned to make output (e.g., temperature response to CO2) match real world observations.”
        2.
        A climate realist provides evidence that the assertion is false;
        e.g. http://img36.imageshack.us/img36/8167/kiehl2007figure2.png
        3.
        The AGW-supporter ignores the evidence, downplays the issue, and repeats the untrue assertion; e.g. “initial tuning to produce an outcome is impracticable”.
        4.
        The climate realist stands his/her ground and provides additional evidence;
        e.g. “The Hadley Centre published that it adopted the assumption of ‘aerosol cooling’ as a method to stop their model from ‘running away’. The true magnitude of the ‘aerosol cooling’ is not known, and they say they chose the value which obtained a fit to mean global temperature.”
        5.
        The AGW-supporter downplays the issue and tries to change the subject:
        e.g. “First, I acknowledge that aerosol forcing is associated with some uncertainty, but there is no plausible evidence to suggest that the range threatens major conclusions about the magnitude of anthropogenic effects. To understand why, it’s important to realize that abundant evidence indicates that aerosols have a net cooling effect, and the question at hand is how much. If it is small, that implies climate sensitivity to perturbations is small, and the CO2 sensitivity may be slightly less than typically estimated, but not very much less, because current aerosol forcing estimates are already in the low range.”

        And that is where we are.

        The fact is that the models are ‘tuned’ to get an output.

        As I said;
        “And Kiehl’s paper says:
        ”These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.”

        And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.”

        No amount of weasel words changes the fact that models are ‘tuned’ to get an output.

        So, I will reject your attempted change of subject, which is equally false. It consists of a set of assertions.

        (a)
        You assert;
        “but there is no plausible evidence to suggest that the range threatens major conclusions about the magnitude of anthropogenic effects.”

        Assertion (a) is wrong on two counts.

        Firstly, without use of the assumed ‘aerosol cooling’ the models drift and, therefore, they are inherently incapable of quantifying the “magnitude of anthropogenic effects”. The adoption of assumed ‘aerosol cooling’ masks the fact that each model is inherently incapable of quantifying the “magnitude of anthropogenic effects”. My summary points 1 to 5 in my post at December 15, 2010 at 5:44 pm explained this.

        Secondly, it is a blatant falsehood to suggest this does not “threaten major conclusions about the magnitude of anthropogenic effects”. I quoted Kiehl’s succinct statements that it does; i.e.
        “The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy.”
        and
        “These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.”

        So, the effect of the ‘aerosol cooling’ is to enable the models to “differ by a factor of 2 to 3 in their climate sensitivity”.

        A factor of 2 to 3 is a significant difference in “the magnitude of anthropogenic effects”.

        (b)
        You assert;
        “To understand why, it’s important to realize that abundant evidence indicates that aerosols have a net cooling effect, and the question at hand is how much.”

        But nobody disputes that “aerosols have a net cooling effect”. Indeed, that is why they are used as an excuse for tuning the models. And nobody knows “how much” they cool, which is why they are a useful excuse for providing the different degree of tuning that each model requires to fit its output to past average global temperature.

        (c)
        You assert;
        “If it [i.e. aerosol cooling] is small, that implies climate sensitivity to perturbations is small, and the CO2 sensitivity may be slightly less than typically estimated”

        “Slightly less”!?
        The effect of the cooling is to adjust climate sensitivity by ~50%. By no stretch of the imagination can that be called “slightly less”.

        As I said;
        “The graph shows that the anthropogenic forcings used by the models span a large range of total anthropogenic forcing, from 1.22 W/m^2 to 2.02 W/m^2, with each of these values compensated to agree with observations by use of assumed anthropogenic aerosol forcing in the range -0.6 W/m^2 to -1.42 W/m^2.”

        (d)
        You assert that the ‘aerosol cooling’ makes;
        “the CO2 sensitivity may be slightly less than typically estimated, but not very much less, because current aerosol forcing estimates are already in the low range.”

        That is complete blather.

        I have repeatedly explained that the used values of ‘aerosol cooling’ are NOT “estimates”. Each model adopts a different value of assumed ‘aerosol cooling’ as a ‘fiddle factor’ to obtain a fit of the model’s output with global temperature change over the twentieth century.

        I have repeatedly explained that the values of total anthropogenic forcing used in the models vary from 1.22 W/m^2 to 2.02 W/m^2, and that this large range is enabled by use of assumed ‘aerosol cooling’ values as a ‘fiddle factor’. The difference between these values is 56% and that is probably much less than the total error introduced by the ‘aerosol cooling’ fix. I wonder what error you would consider to be large when you assert that more than 50% error is “not very much”.

        Clearly, you like computer models. And I suggest that if you want a good, realistic computer model then I recommend ‘F1 Racing’. It is based on fundamental physics (it has to be, or the cars would not perform realistically), it is a much better representation of reality than any climate model, it is much cheaper than any climate model, and you can play with it yourself. But I warn you that winning the Monte Carlo Grand Prix using that model is no indication of how you would do in a real racing car in the real Monte Carlo Grand Prix, because models are not reality.

        In conclusion, I can do no more than repeat my earlier summary.

        So, the climate models are known to be wrong, and it is known why they are wrong: i.e.

        1. they each emulate a different climate system and are each differently adjusted by use of ‘fiddle factors’ to get them to match past climate change,

        2. and the ‘fiddle factors’ are assumed (n.b. not “estimated”) forcings resulting from human activity,

        3. but there is only one climate system of the Earth so at most only one of the models can be right,

        4. and there is no reason to suppose any one of them is right,

        5. but there is good reason to suppose that they are all wrong because they cannot emulate cloud processes which are not understood.

        Richard

      • Really, REALLY excellent post, sir.

        Your very final point 5 is my main sticking point, though your other points too are very good (I hadn’t even considered your #3! What an oversight on my part!).

      • I don’t want to dwell excessively on this issue, Richard, because it’s tangential to the main theme of this thread. However, it remains true that models are not retuned as a result of simulations of forcings to make the output match observations. Once initially tuned, they must accept the output of their simulations. A few other points that might be responsible for some confusion on this point:

        1. Models are instruments, not data. When the input data are known accurately, it is illegitimate to tweak the model (the instrument) to make it yield a desired output. For example, one can’t take an accurately established value for CO2, run simulations, obtain undesired results, and then retune the model to make the results come out better. On the other hand, when there is uncertainty regarding the input data (e.g., for aerosols some years ago), it is legitimate to ask the model which input values would yield a result matching observations.

        2. Over the past ten years, uncertainty regarding aerosol inputs has declined significantly. It would not be possible today to enter a wild guess about aerosol optical depth, because the results would be poorly matched to satellite data. Again, I would refer readers to AR4, WG1, chapter 2 and its references.

        3. As you have correctly pointed out (and repeated above), uncertainties remain, although with aerosols, the uncertainties regarding sensitivity are greater on the high than on the low sensitivity side. Models do in fact yield different results on aerosols and other variables, but that is reflected in the rather broad range for climate sensitivity estimates, and current aerosol data are too uncertain to narrow the range but do not widen it either. The remaining uncertainties are also addressed in the article you referenced by Stephen Schwartz et al. in Nature Reports Climate Change, who, like you, emphasized the importance of reducing the uncertainty. However, in the same article, he and co-authors also stated:

        “The century-long lifetime of atmospheric CO2 and the anticipated future decline in atmospheric aerosols mean that greenhouse gases will inevitably emerge as the dominant forcing of climate change, and in the absence of a draconian reduction in emissions, this forcing will be large.”

        I’ll probably refrain from further discussions on this issue unless important new evidence emerges, but I would ask readers to review the various referenced sources so that they can make their own judgments.

      • Richard S Courtney

        Fred:

        You say:
        “I’ll probably refrain from further discussions on this issue unless important new evidence emerges, but I would ask readers to review the various referenced sources so that they can make their own judgments.”

        OK. I’ll comply with that so you have had the last word on the subject which seems fair because you raised it.

        Richard

      • By “good reason” I mean something which is not dependent on the details of the models but common to any model, where no-feedback sensitivity can be defined in essentially the same way, and which is not badly in disagreement with our knowledge about the atmosphere.

  41. Tomas Milanovic

    Basically the postulated conservation law should tell that models, which are very different in many details, lead, when feedbacks are suppressed, to the same result: the average temperature of the earth’s surface increases approximately as much as the climate forcing would indicate for the change of the effective radiative temperature according to the Stefan-Boltzmann law.

    Well this is certainly an absurdity.
    It can very easily be shown (and actually was) that the Stefan-Boltzmann law doesn’t apply to temperature field averages.
    If anything, this precise fact should show us that the models have something unphysical in common if they come to a result which is known to be wrong.

    • You misunderstand my proposal. It is in a way rather simple, but I understand that it is still not very easy to get it understood.

      I am a physicist by education and for a fair part of my career. In physics it is quite common that some complicated processes obey rather simple rules. When that happens, one basic approach is to search for a conservation law, which applies to the processes. Sometimes the conservation law gives a simple explanation for the observed rule while getting the same result analyzing the processes in detail would be much more complicated. My starting point was to try to apply this approach to the present problem.

      • Tomas Milanovic

        When that happens, one basic approach is to search for a conservation law, which applies to the processes. Sometimes the conservation law gives a simple explanation for the observed rule while getting the same result analyzing the processes in detail would be much more complicated. My starting point was to try to apply this approach to the present problem

        Yes , that is exactly what I understood. And that’s why I said that this particular “explanation” has already been proven wrong.

      • How can one prove something so general wrong?

        I didn’t go into any details in this subchain, but I have written a bit more elsewhere (including a short message of December 15, 2010 at 10:50 am higher up in this chain). Your original comment did not describe my idea correctly, but was explicitly different. Thus I do not accept this proof, but neither do I claim that my proposal would necessarily be correct even when it is understood correctly.

      • Perhaps I should add that I know enough mathematics to avoid that particular error that you mention in your original message. Therefore I know that you have not presented proof against my actual idea.

        If somebody can tell where I can find in numerical form the intensity spectrum of the outgoing LW radiation at TOA integrated over the whole earth, then I can present a simplified version of what my hypothesis tells. That would be a preliminary test of the validity of the hypothesis.

  42. Alexander Harvey

    Tomas:

    I can try and talk about these matters with reference to how the calculations appear to be performed in practice by Soden & Held 2006 (An Assessment of Climate Feedbacks in Coupled Ocean–Atmosphere Models).

    Now you might expect that they would need to calculate values based on the longitude, latitude, height, temperature, season, water content, clouds, etc., of a vast array of grid boxes. You would be correct, that is what they do.

    It appears that they split the problem into two parts: kernels that give the effectiveness of each grid box’s ability to change the radiative balance at TOA according to variation in the atmospheric state parameters (temperature, water content, etc.), and a matching set of grid boxes that contain data derived from the model output giving the state of that grid box (temperature, water content, etc.). They form the product and the global summation.

    They construct kernels so that they can make separate attributions based on certain predetermined modes of system development: notably if the lapse rate and water content were fixed, if the lapse rate alone varied, if the water content varied, etc. This is just a view of the data according to a choice to look at that data in certain ways. They could have chosen additional modes, or perhaps different modes. They probably have chosen modes that would capture the majority of the development of the model between initial and final states on an areal and height basis.

    The process leads to an attribution of the effect to their chosen modes, and hence allows the various models to be compared according to that basis.

    That is what I think they have done, and from that we can talk about what the values indicate and what they can be used for.
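
    For readers who want that bookkeeping spelled out, here is a minimal sketch (in Python) of the kernel-times-state-change summation described above. It is purely schematic: the array shapes, kernel values and state changes are random placeholders, not Soden & Held’s actual kernels or model output.

    import numpy as np

    # Schematic sketch of the bookkeeping described above: per-grid-box kernels
    # multiplied by per-grid-box state changes, then a global summation.
    # All numbers are illustrative placeholders, not Soden & Held data.
    nlat, nlon, nlev = 4, 8, 5
    rng = np.random.default_rng(0)

    K_T = -rng.uniform(0.1, 0.3, (nlat, nlon, nlev))  # d(TOA flux)/dT per box [W m-2 K-1]
    K_q = rng.uniform(0.0, 0.2, (nlat, nlon, nlev))   # d(TOA flux)/dq per box [W m-2 per g/kg]
    dT  = rng.uniform(0.5, 1.5, (nlat, nlon, nlev))   # modelled temperature change [K]
    dq  = rng.uniform(0.0, 0.5, (nlat, nlon, nlev))   # modelled humidity change [g/kg]

    # Horizontal area weights (cosine of latitude), normalised over the lat-lon grid.
    area = np.cos(np.radians(np.linspace(-60, 60, nlat)))[:, None]
    area = area / (area.sum() * nlon)

    # "Form the product and the global summation": one number per chosen mode.
    dR_temperature  = np.sum(area[..., None] * K_T * dT)   # attributed to temperature change
    dR_water_vapour = np.sum(area[..., None] * K_q * dq)   # attributed to humidity change
    print(dR_temperature, dR_water_vapour)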

    The process does not seem to answer any particularly deep questions as it does not inform us as to the likely dynamics. The dynamics are an input to the process. The process does inform us as to the implications of those dynamics for the TOA radiative balance.

    I feel that you are arguing that the produced values cannot be related back to the basic science, and that is correct as the basic science that informed them is not present in those values in a way that permits this. That information is there in small part but only as a handful of global values, where it is distilled into the attribution to the chosen modes. Without knowing the actual data and actual kernels used that insight is meagre.

    The radiative values are coupled back to the surface temperatures in a globally averaged sense, but only in terms of two basic temperature modes (with and without lapse rate changes) plus a mode that distinguishes between constant water vapour and constant RH effects.

    You may be trying to say that these figures can’t be used to directly inform us as to how the temperature would change given a change in radiative forcings. That would be correct as the variation in such matters as changes in lapse rate, polar amplification, water content, clouds, etc., are required inputs to the process from a source that contains the dynamics necessary to inform us as to how the atmospheric state would change.

    I think that this can be debated, at least with me, without a lot of mathematics, in ways that relate to the ways these quantities are defined and derived, the information that they contain and their deficiencies. There would not be a lot of point for you to argue with many equations what I would agree with conceptually and readily.

    Alex

  43. Alexander Harvey

    There has been some discussion above concerning model parameters.

    For those that are interested in “Model Parameterisation” here is a recent presentation intended to illustrate the process of building parameterisation schemes, including the motivation, the physics to be captured, and the adoption process.

    “How we build a parameterisation scheme: illustrated by Langmuir turbulence in the ocean mixed layer”

    http://sms.cam.ac.uk/media/1072055

    It is over an hour long but it could just inform a few preconceptions. It is a workshop presentation so expect a lot of assumed background knowledge. In essence, the Navier-Stokes equation is solved at the subscale level, statistics are produced and a parameterisation scheme is derived that emulates the N-S solutions. It has meat, it is not an arm-waving overview.

    Alex

    • Leonard Weinstein

      Alexander,
      There is a limit to the ability to solve time-varying high Reynolds number flows with N-S. When you include low-resolution, low-accuracy initial and boundary conditions, it can only give a relatively crude and short-time gross statistical representation. When you include whole areas of climate-related pieces such as aerosols, clouds, and long period ocean currents where the physics is not even fully understood, you have to make very crude pieces for the model. Add to this a varying Sun (small insolation changes, but also possible spectral shift, and magnetic field effects on cosmic ray/cloud interaction), and you really can’t make a model with any realistic long-range capability. This is effectively a chaotic process with limiting boundaries. This doesn’t mean you can’t pick values and plugs for all known terms, and study the apparent effect of varying a single parameter such as CO2 content, but the results will in no way represent a real predictable solution for long time periods.

  44. Tomas & Judith: I thought this discussion was supposed to be about the no-feedbacks climate sensitivity. In Tomas’s second equation (dF = 4.ε.σ.T³.dT + σ.T⁴. dε), isn’t the dε term zero when considering a no-feedbacks scenario? Tomas seems to be considering overall climate sensitivity.

    Further down Tomas says: “OK as we can’t answer this one , let’s just consider the final state postulated as being an equilibrium. Not enough , we must also postulate that the initial and final “equilibrium” states have EXACTLY the same energy repartition in the conduction , convection , phase and chemical energy modes. In other words the radiation mode must be completely decoupled from other energy transfer modes. Under those (clearly unrealistic) assumptions we will have in the final state dF emitted = dF absorbed.” When we calculate radiative forcing high in the atmosphere, we eliminate the complications caused by convection and phase change. So the concept of radiative forcing is relevant at altitudes where radiation is the only means of energy flux through the atmosphere. Furthermore, Tomas seems to be saying that we are seeking an equilibrium result, whereas I understood radiative forcing to be an energy imbalance that will be corrected by a temperature rise.

    The main problem with the concept of radiative forcing is that one arrives at different answers when one does the calculations at different altitudes, even when those altitudes are high enough that radiation is the only form of energy flux. Which altitude is most relevant to surface temperature change, the only change of real interest? It seems to me that we should be calculating radiative forcing at the boundary between the convective- and radiative-dominated regions of the atmosphere; the location where a constant lapse rate will transfer warming to the surface. (The tropopause is too high.) We have good reasons to believe that this lapse rate won’t vary significantly with anthropogenic GHGs and we have some idea of how water vapor will produce lapse rate feedback.

    Another complication with the IPCC’s version of radiative forcing is their decision to include radiative equilibration of the stratosphere, but not the troposphere.

    • Frank,

      “So the concept of radiative forcing is relevant at altitudes where radiation is the only means of energy flux through the atmosphere”

      If radiation is the only method of energy transport, what you describe cannot be a gaseous atmosphere.

  45. I continue to be baffled by Judith Curry’s comments on this issue. There is a very clear definition of no-feedbacks response. If you have a different definition, state it, and state why it would be more useful. The usefulness of the no-feedbacks response as defined by IPCC and climate modelers is as a reference change in surface temperature to compare with changes once the various “feedbacks” are allowed, and it provides a clear partitioning between “positive” and “negative” feedbacks as has been outlined by numerous people on this and the previous comment thread. It has no practical meaning on its own and never directly factors into any actual climate model calculation; rather it allows a rational partitioning of the problem of climate response into separate relatively understandable components (water vapor change, lapse rate change, cloud change, etc.)

    As I wrote here:

    “we need to be clear on the partitioning between various different “feedbacks” and a “bare” response, without feedbacks. The feedbacks represent different responses of Earth’s climate system to energy flow changes – like changing ice, changing water vapor levels in the atmosphere, changing cloud patterns, etc. But what does “no feedbacks” mean?

    There is a standard definition of the response “without feedbacks”, referenced by the IPCC which is provided by Bony et al (Journal of Climate 19:3445 (2006)), appendix A. It is a physical quantity defined by the atmospheric temperature profile and radiative properties:

    The most fundamental feedback in the climate system is the temperature dependence of LW emission through the Stefan-Boltzmann law of blackbody emission (Planck response). For this reason, the surface temperature response of the climate system is often compared to the response that would be obtained […] if the temperature was the only variable to respond to the radiative forcing, and if the temperature change was horizontally and vertically uniform […]

    I.e. the response without feedbacks is quite strictly defined as the response under which “temperature [is] the only variable” that changes, and it changes by the same amount throughout the surface and troposphere (changes in temperature gradient, by contrast, are referred to as the lapse rate feedback).”

    Under this definition, the no-feedbacks temperature change is very tightly constrained, and most of the uncertainty in no-feedbacks sensitivity is actually in the radiative forcing number, not that response value. The best numbers give a response to doubling CO2 of 1.15 K, plus or minus 10% (with only about 1% of that uncertainty coming from the no-feedbacks response portion, 10% from the radiative forcing uncertainty).
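
    As a rough sanity check on the scale of that number (not the Bony et al. calculation itself), one can linearise the Stefan-Boltzmann law at an assumed effective emission temperature of about 255 K; the single-temperature, grey-body simplification is part of why this crude estimate comes out somewhat below the 1.15 K value quoted above.

    # Back-of-envelope Planck response under a single effective emission temperature.
    # Assumes Te ~ 255 K and a 2xCO2 forcing of ~3.7 W/m2; this is a zero-dimensional
    # approximation, not the profile-based calculation referenced above.
    sigma = 5.670e-8                         # Stefan-Boltzmann constant [W m-2 K-4]
    T_e, dF = 255.0, 3.7                     # effective emission temperature [K], forcing [W m-2]

    # Linearise F = sigma*T^4:  dF ~ 4*sigma*T^3*dT  =>  dT ~ dF/(4*sigma*T^3)
    planck_parameter = 4.0 * sigma * T_e**3  # ~3.8 W m-2 K-1
    dT_no_feedback = dF / planck_parameter   # ~1.0 K
    print(planck_parameter, dT_no_feedback)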

    • I replied to apsmith’s confusion on his website, but apparently my reply was lost in moderation.

      [Your comment has been queued for moderation by site administrators and will be published after approval.]

      So I re-post it here:

      IPCC says: “The most fundamental feedback in the climate system is the temperature dependence […] if the temperature was the only variable to respond to the radiative forcing, and if the temperature change was horizontally and vertically uniform […]”.

      apsmith derives: ” I.e. the response without feedbacks is quite strictly defined as the response under which “temperature [is] the only variable” that changes, and it changes by the same amount throughout the surface and troposphere […] And this response is relatively straightforward to calculate for a given collection of atmospheric temperature profiles. The strict calculation is easily done in models…”

      Apparently apsmith does not recognize the importance of the last condition (emphasis added) in the IPCC definition, and the importance of the logical operation “AND”. Unfortunately, reality shows that temperature changes are far from being “horizontally uniform”. As a result, the reaction of the real “globally averaged temperature” to ongoing radiative imbalances does not satisfy the smart IPCC definition. The average can drift in any direction even if dFa is zero, making the “sensitivity” dT/dFa infinite, which makes no sense and is of no use. Obviously, because of this, comparing a virtual forcing with real[ly crippled] observations of a global temperature index is utter nonsense, and leads nowhere. Which shows.

      One must wonder why so many people are confused about this stuff…
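
      The mathematical core of this objection can be illustrated with a toy two-region, grey-body surface (illustrative numbers only): a horizontally non-uniform change can shift the average temperature while leaving the average emitted flux unchanged.

      # Toy two-region surface: the same area-average emitted flux can correspond
      # to different area-average temperatures. All numbers are arbitrary.
      sigma = 5.670e-8

      def T_from_flux(F):
          return (F / sigma) ** 0.25

      uniform = [288.0, 288.0]
      F_target = sum(sigma * T**4 for T in uniform) / 2    # area-average flux, ~390 W/m2

      T_warm = 308.0                                       # warm one half by 20 K
      F_cold = 2 * F_target - sigma * T_warm**4            # cool the other half to keep the average flux fixed
      contrast = [T_warm, T_from_flux(F_cold)]

      print(sum(uniform) / 2, sum(contrast) / 2)           # 288.0 K vs ~285.4 K at the same average flux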

      • Al, you seem to have missed my introductory statement which I will re-emphasize:

        “It [no-feedbacks sensitivity] has no practical meaning on its own and never directly factors into any actual climate model calculation; rather it allows a rational partitioning of the problem of climate response into separate relatively understandable components”

        Yes, in reality “temperature changes are far from being uniform”. So the IPCC definition of no-feedbacks sensitivity does not bear directly on that reality. As I said, it “has no practical meaning on its own”. It is a definition, a reference point.

        If you can come up with a better easy reference point for the response of Earth’s climate to a change in radiative forcing, feel free to propose it. The IPCC one is clear and easily calculated, which is where its merit lies. It is a theoretical number, with, once again, no practical meaning on its own.

      • There must be some logical confusion. A reference point has a practical meaning as a reference point, to allow “rational partitioning”, as you said. This sounds like a very “practical meaning” to me, which, however, you deny.

        The IPCC correctly specified two additional conditions under which the “no-feedbacks sensitivity” can be calculated and will give a definite “theoretical number”. Now, as Tomas Milanovic has demonstrated, for any other realistic planet conditions this number is undefined. As I illustrated, this number can be anything from zero to plus-minus infinity. Now here is the question: to what extent is this “clear and easily calculated IPCC number” a reference for plus-minus infinity?

      • That makes it ideal! It’s totally flexible, so just pick your preferred value and the climate will fall right into line …

        Or not.

  46. Christopher Game

    Judith Curry asks:
    “Does the method proposed by Jinhua Lu help conceptualize this problem in a better way?”
    Christopher Game answers:
    Yes. A world better.

  47. Christopher Game

    Christopher Game belatedly reading J.R. Bates (2007) Some considerations of the concept of climate feedback, Q.J.R. Met. Soc. 133: 545-560, the reference to which he got from http://dx.doi.org/10.1007/s00382-008-0425-3. We should have started this blog thread by reading this, and Stephens (2005) at http://journals.ametsoc.org/doi/pdf/10.1175/JCLI-3243.1
    Sorry I didn’t read these before now.

  48. Tomas Milanovic

    Frank

    In Tomas’s second equation (dF = 4.ε.σ.T³.dT + σ.T⁴. dε), isn’t the dε term zero when considering a no-feedbacks scenario? Tomas seems to be considering overall climate sensitivity.

    dε would be 0 only if ε was a constant. As it obviously is not constant on the Earth, dε is not 0 either. It doesn’t matter whether there are feedbacks, responses, couplings or however you want to call it.

    I don’t know about different definitions of “climate sensitivities” – e.g. local, global, semi-global, etc.
    I only know of one which is, in my notation, dTa/dFa.
    Then, trivially, because the averaging (subscript a) necessitates integrating a function by definition, I am looking at what this function is.
    Note that while it is not only usual but necessary to go from local to global because the laws of nature are local (e.g. expressed by differential equations), it is impossible to go from global to local.
    This is, again trivially, due to the fact that an uncountable infinity of functions have the same integral.

    That’s why the detailed knowledge of what the field functions (energy flows, temperatures etc) are is necessary to say something about their integrals.

    There is one notable exception – the thermodynamics. It is amusing to note that despite the word dynamics in thermodynamics, the latter is anything but dynamics.
    Thermodynamics makes statements about the global (macroscopic) without knowledge of the local (microscopic).
    This is due to the fact that the unpredictable local (microscopic) has statistical properties which are invariant and independent of the local details.
    This particular and quite unexpected property is studied by the ergodic theory of dynamical systems and finds important applications in the chaos theory.

    To come back to climate science, there is no indication that the climate might have the same property, which would allow us to formulate a statistical theory of climate that I would call “weather thermodynamics”.
    In any case one thing is sure, climate is not strictly deterministically predictable.
    If we are lucky and mother Nature made us an ergodicity gift, a statistical interpretation is possible.
    If we are not, then we have a really hard problem to solve which dwarfs the Navier Stokes difficulty.

    • In your case dε is 0, if ε is constant in time. Certainly it is not constant when vegetation varies and for many other reasons, but in considering no-feedback sensitivity it seems natural to keep ε constant.

  49. Tomas Milanovic

    In your case dε is 0, if ε is constant in time. Certainly it is not constant when vegetation varies and for many other reasons, but in considering no-feedback sensitivity it seems natural to keep ε constant.

    I beg your pardon????
    One more person who doesn’t know what the differential of a function is, yet makes comments on differentials?

    1) We are not considering time-dependent regimes, so the time dependence of ε is irrelevant. You are completely confusing dε and ∂ε/∂t.
    2) Nature is not interested in what seems natural to you. You have to take ε as it is, and it is NOT constant. Perhaps you live in another Universe, but in ours it is like that.

    So one more time : dε is NOT 0.

    With comments like the one quoted, I am not surprised that there are a few readers who find some aspects of the discussion confusing.

    • The original task that you set out to calculate is to compare two different states of the earth to each other. The differential refers to the difference between these states. The integrals that you calculate contain changes at fixed locations, not differences between locations.

      In this calculation dε is to be calculated as the change at each point separately and at fixed coordinates. Of course we can argue whether two different states refer to two different times or to some other way of choosing two different states for the whole earth.

      The main point that I had in mind is that dε as used in your formulae refers to two different values of ε at the same location without any contribution from the spatial variations of ε.

      Mathematically, your calculation does not have dx, dy or dz (or other spatial differentials) that would contribute to dε through ∂ε/∂x dx + ∂ε/∂y dy + ∂ε/∂z dz, because x, y and z appear only as integration variables, held constant for each point inside the integral.

      • Alexander Harvey

        Pekka,

        Forgive me if you have covered this.

        My understanding based on the definition is as follows.

        The Planck feedback and hence the no-feedback sensitivity is defined for a change in T uniform in the atmospheric column where nothing else changes.

        That would include the nature of the surface remaining unchanged.

        Changes in TOA radiation balance due to surface emissivity changes could appear as a separate feedback, in as much as they are a response to changing T.

        Changes not due to changes in T could be included as a separate forcing.

        Best Wishes

        Alex

      • Tomas Milanovic

        The original task that you set out to calculate is to compare two different states of the earth to each other. The differential refers to the difference between these states. The integrals that you calculate contain changes at fixed locations, not differences between locations.

        God, Pekka Pirila, I am not here to give you lessons in mathematics, so please stop this nonsense that only wastes our time.
        I am often amazed by the lack of mathematical culture that reigns in climate science, but this thread could be used as an example.
        It’s the last time I will repeat something relatively trivial, so get it right this time.

        I want to compute dTa. Ta is an average, right?
        What is an average?
        Well Ta = 1/S . ∫ T.dS. It is a FUNCTIONAL, a number!
        So what might possibly mean dTa?
        Of course as Ta depends on the function T(R,θ,φ), dTa is obtained by integrating over the whole sphere 1/S . ∫ δy.dS where δy is an arbitrary function of (R,θ,φ) such that T(R,θ,φ) + δy(R,θ,φ) is very near to T(R,θ,φ) at every point of the sphere.
        dT is such a function δy(R,θ,φ) defined over the whole sphere.
        dTa/dT is a functional derivative, a generalisation of the notion of gradient!

        That’s why I wrote dTa = 1/S . ∫ dT(R,θ,φ) .dS.
        So now you must INTEGRATE this dT which depends on T and ε which both trivially vary over the sphere too.
        So it is easy to see that dε is NOT zero, and that the spatial variation of ε contributes to the value of the integral, and hence to dTa, just as the spatial variation of T does.

        Before posting you should have read my comprehensive answer to Jianhua Lu, where I have already dealt with these confusions between global (integrals) and local (fields), as it would have avoided writing the same thing twice.

      • I am sorry, but I am also fully certain that your argument is nonsense.

        When you calculate it correctly you get no contribution from dε related to spatial variation of ε.

        The dS that you integrate over is over the same elements in the two situations whose difference is considered in dTa. The differential of the integrand dT(R,θ,φ) is a differential at fixed R,θ,φ. Therefore also dε is a differential at fixed R,θ,φ.

      • Tomas Milanovic

        Ok Pekka, you have already shown that you have no notion of functional derivatives and confused notions of differential calculus.
        If you want to dig the hole a bit deeper, would you stop hand waving and writing confused statements?
        What about some solid mathematics for a change?
        Unless of course, mathematics is just a bother in climate science because it is not necessary.

        So here is an exercise.
        We have a relation F = G.H where F, G, H are scalar functions of x and y.
        The spatial average of F over some domain D, denoted Fa, is:
        Fa = 1/S . ∫ F(x,y).dS
        1) Define dFa
        2) Compute dFa as a function of G and H.

        No handwaving accepted.

      • I can only turn your accusations concerning knowledge of mathematics back. This is not a place for a course of mathematics even if you are in need of one.

      • Pekka – I admire your equanimity in the face of undeserved insults. Tomas promises above that this is “the last time” he will repeat his claims, so we can be grateful for that, as long as he keeps his promise.

        In regard to surface dε, it’s close to zero for practical purposes both spatially (as Tomas intends) and temporally. The value of surface emissivity in the infrared range of Earth’s emissions is so close to 1.0 that small variations will make little difference to estimates. Of course, atmospheric emissivity is strongly correlated with CO2 and water vapor increases, and the emissivity values in the atmospheric (i.e., gas) phase of the climate system are critical to understanding climate change. One cannot claim to understand climate without an appreciation of gas-phase emissivity.

        I believe that it is also somewhat irrelevant to discuss dTa as it applies to very large areas because that is not how climate estimates are performed (although I defer to the modelers for their expert opinion on this). What is actually computed are changes localized to grid cells rather than on a more extensive scale. These anomalies can then be averaged rather than attempting to arrive at an integrated value of dT over large areas. To be sure, within the grid cells, I expect that parametrizations may be used (modelers please comment), and the anomalies within any one grid cell must be assessed for their interactions with neighboring cells, but this is a different process from trying to integrate dTa over large areas.

        I find your comments on these issues well-informed in the many instances when you have agreed with me, and also on the occasions when you haven’t. I also admire the civil and polite nature of your commentary.

        Fred

  50. Dr. Strangelove

    Tomas,
    Is it possible to compute global temp. directly from radiation fluxes without calculating local averages? Your main argument of the futility of dTa/dFa is similar to the argument of Gerlich & T paper. Does it mean for every value of Fa there are many possible values of Ta even if we’re dealing with global not local averages?

  51. Tomas Milanovic

    Dr Strangelove

    Is it possible to compute global temp. directly from radiation fluxes without calculating local averages? Your main argument of the futility of dTa/dFa is similar to the argument of Gerlich & T paper. Does it mean for every value of Fa there are many possible values of Ta even if we’re dealing with global not local averages?

    I am not familiar with the Gerlich & T paper so cannot draw some parallels between what they say and what I say.
    But to your questions.
    1) I am not sure I understand the first one correctly. If by “global temperature” you mean the average spatial (surface) temperature, then, as I have shown, it is absolutely impossible to compute it by knowing only the global (e.g. spatial surface average) radiation flux.
    It is only possible to compute it if you know BOTH the local temperature and energy fields, under different simplifying assumptions like local radiative equilibrium, etc. Please don’t use the term “local average” when you probably mean the local value of the field; the former is an integral, the latter a value of a function. Confusing the two leads to errors like those Lu or Pekka Pirila made.

    2) Here the answer is a clear yes if by “for every value of Fa” you mean “for different radiation fields”. I find that using the concept of a field is very useful and contributes to better understanding, because this is what we are talking about.
    Temperature is a scalar field, which means that every point in space has a well-defined value T(r,θ,φ); the same holds for emissivity.
    Velocity is a vector field. Etc.
    If you know a field in every point, you can of course compute any kind of surface or volume averages but if you know the average of a field, you can never reconstruct the field itself.
    There is an infinity of fields having the same average, and the averages are generally not similarly related even when the fields themselves are related locally.

    This is trivial: let’s take, for example, 2 scalar fields f and g such that there is a simple local relation f(r,θ,φ).g(r,θ,φ) = 1.
    One does not, of course, have (average f).(average g) = constant, because ∫ f.g.dS is not equal to (∫ f.dS).(∫ g.dS).
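
    A quick numerical check of this point, using an arbitrary positive field on the unit interval, makes it concrete:

    import numpy as np

    # Two fields with the pointwise relation f*g = 1 everywhere; the product of
    # their averages is not 1. The field chosen here is arbitrary.
    x = np.linspace(0.0, 1.0, 1001)
    f = 1.0 + 0.8 * np.sin(2 * np.pi * x)   # arbitrary positive field
    g = 1.0 / f                             # so f*g = 1 at every point

    print((f * g).mean())                   # 1.0: the local relation, trivially
    print(f.mean() * g.mean())              # ~1.67: the averages do not obey it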

  52. I seem to be missing something.
    [In what follows, I assume a Kiehl-Trenberth 1997 diagram, ie a stable balanced “average” planet. I fully understand that this is a poor representation of the real world, and that we cannot describe the chaotic system we live in so simply. But my problem is that I cannot even get the KT diagram to work in the no-feedback case.]
    A TOA calculation gives a 1DegC temperature rise due to a doubling of CO2. Many here have stated this means a constant temperature increase down to the bottom of the atmosphere due to the constant lapse rate.
    So this means the surface has increased by 1DegC. What does the new KT Diagram surface balance look like?
    1. There is an increase of 5.5W/m^2 in Radiation from the surface.
    2. There is an increase of between 2 and 5.5W/m^2 in latent heat transport from the surface (note: this is NOT a feedback. It is consequent on having a 1DegC temperature rise, in exactly the same manner as the radiation increase).
    3. There is no change in conduction (same temperature differential between surface and atmosphere).
    4. Insolation remains the same.
    5. So back-radiation must increase by between 7.5 and 11 W/m^2. [If back-radiation does not increase to balance the increases in outgoing radiation and evaporation, the surface temperature cannot be maintained at the new higher temperature.]

    It is this last quantity which I cannot reconcile. The temperature of the atmosphere has increased by 1 DegC, so the approximate increase in back radiation due to increased temperature is 4.5W/m^2. The back-radiation from CO2 is already coming from really low down in the atmosphere. {Nicol has calculated that over half the photons at wavenumber 670 come from the lowest 2m. Using standard absorption tables, I find that over half the photons in the 650 band come from the lowest 20m, and in the 600-700 region come from below 500m.} Also the proportion of back-radiation emanating from CO2 is in any case rather small. And the widening of the wings won’t have much effect. So the increase from increased concentration of CO2 is small. {Perhaps someone is able to do the calculation.} Allowing 1W/m^2 for this, we get:
    1. Required increase in back-radiation = 7.5 to 11 W/m^2
    2. Increase due to increased atmospheric temperature = 4.5 W/m^2
    3. Increase due to increased concentration of CO2 = 1W/m^2

    This means a huge deficit in the back-radiation at the surface, a deficit which would drive the system into a different balance where the surface temperature does not increase by as much as the upper atmosphere. In short, the environmental lapse rate of 6.5DegC/km would have to be different in a hotter world.
    Unless someone can find the missing back-radiation…
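
    For what it is worth, the budget above can be written out explicitly. The sketch below simply reproduces the numbers assumed in the comment (a ~288 K surface, the 2–5.5 W/m^2 latent-heat range, and the 4.5 + 1 W/m^2 back-radiation estimate), so the “deficit” it prints is only as good as those assumptions, which the replies below question.

    # Reproducing the surface-budget arithmetic above. The latent-heat range and the
    # back-radiation estimates are the comment's own rough numbers, not derived here.
    sigma, T_s, dT = 5.670e-8, 288.0, 1.0

    d_surface_emission = 4 * sigma * T_s**3 * dT     # ~5.4 W/m2 (the "5.5" in item 1)
    d_latent_low, d_latent_high = 2.0, 5.5           # assumed extra latent heat (item 2)
    # Items 3 and 4: conduction and insolation unchanged.

    need_low  = d_surface_emission + d_latent_low    # ~7.4 W/m2 extra back-radiation needed
    need_high = d_surface_emission + d_latent_high   # ~10.9 W/m2
    supply    = 4.5 + 1.0                            # warmer atmosphere + extra CO2, as assumed above

    print(need_low - supply, need_high - supply)     # the claimed deficit, ~2 to ~5.5 W/m2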

    • Good question, but you’re missing the point that much of the back-radiation reaching the surface comes from the lower atmosphere, where the temperature is closer to the surface temperature.

      Here’s a question for you – where did your “4.5 W/m^2” “due to increased atmospheric temperature” come from? The radiative forcing due to GHG’s at the TOA is not the same as the increase in back-radiation at the surface because much of the infrared spectrum is absorbed and reradiated several times between TOA and surface.

      • Never mind, I misread your comment – not sure what I was thinking just now.

        The piece you’re missing is the radiative forcing itself, most of which carries through from TOA to the surface – add the 4.5 W/m^2 or so due to atmospheric warming to the 3.7 W/m^2 of forcing due to the GHG’s to get the total (about 8.2) which is in your ballpark.

        I discussed this issue (with somewhat different numbers – again very rough) here:

        http://arthur.shumwaysmith.com/life/content/more_climate_change_basics_part_2

      • The forcing from doubling CO2 results in a reduction of emission at the tropopause. The signs are opposite, unlike an increase in incident solar radiation. Worse, the atmosphere will absorb slightly more SW radiation so part of the increase in radiation toward the surface from increased CO2 is canceled. I do think he’s overestimated the increase in precipitation. Figures I’ve seen are for a ten percent increase for a 4 C change. So a 1 C change would be 2.5% or less, since it’s probably not linear.

        A higher rate of evaporation/precipitation would almost certainly be accompanied by an increase in specific humidity, which would increase DLR at the same temperature as well. If you hold relative humidity constant in MODTRAN, the DLR increases faster than the upward radiation (and the balancing temperature is higher), leaving room for an increase in convection. But that’s two feedbacks and the thread is about no feedback (other than the Planck feedback).

      • Some numbers:

        MODTRAN mid-latitude summer profile, clear sky, all other settings default.

        280 ppmv CO2
        0 km looking down 389.674 W/m2
        0 km looking up 309.604
        net up 80.07 W/m2 (clear sky so it’s higher than the average with cloud cover)
        15 km looking down 283.479 W/m2

        560 ppmv no change in temp or humidity
        0 km looking down, 389.674 W/m2
        0 km looking up 311.582
        net up 78.092 W/m2 forcing 1.978 W/m2
        15 km looking down 279.743 W/m2
        forcing 3.736 W/m2
        Changing only the surface temperature offset to 1.055 C
        0 km looking down 394.698
        0 km looking up 314.942
        net up 79.756 W/m2 or fairly close
        15 km looking down 283.479 W/m2

        Change to constant RH
        The numbers for 280 ppm and 560 ppm for no temperature offset don’t change. But the temperature offset to balance emission at 15 km becomes 1.56 C instead of 1.055 C
        0 km looking down 397.21 W/m2
        0 km looking up 321.85 W/m2
        net 75.36 W/m2
        15 km looking down 283.479
        So now you have to add 4.71 W/m2 convective transport from the surface to bring up and down to the same net value as before.
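
        The forcings quoted are just differences of the listed fluxes; a small sketch of that arithmetic (the numbers are copied from above, not recomputed with MODTRAN):

        # Differences of the MODTRAN outputs listed above (arithmetic only).
        flux_280 = {"surf_up": 389.674, "surf_down": 309.604, "up_15km": 283.479}
        flux_560 = {"surf_up": 389.674, "surf_down": 311.582, "up_15km": 279.743}

        surface_forcing = flux_560["surf_down"] - flux_280["surf_down"]   # 1.978 W/m2
        tropopause_forcing = flux_280["up_15km"] - flux_560["up_15km"]    # 3.736 W/m2

        net_up_280 = flux_280["surf_up"] - flux_280["surf_down"]          # 80.070 W/m2
        net_up_560 = flux_560["surf_up"] - flux_560["surf_down"]          # 78.092 W/m2
        print(surface_forcing, tropopause_forcing, net_up_280, net_up_560)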

  53. I’ve come to this discussion quite late, I know. But I think a word of caution is in order.

    This post and the following comment thread are essentially attacking the application of linear control theory to climate systems on the grounds that the detailed dynamics are poorly understood. Attempts to determine the “no-feedback sensitivity,” what I would call the open-loop system gain, are decried as meaningless because there is no solution of the equations of flow or thermodynamics.

    The problem with this line of attack is that linear control theory is useful precisely because it can be applied to large-scale systems where the small-scale behaviour is not well understood. Remember that the theory was developed to help design electrical and mechanical control systems, at a time when electrical systems were still only understood in terms of Ohm’s law and mechanical systems were understood in terms of Newtonian mechanics. And indeed those concepts are the only ones necessary to formulate and apply usable control theory.

    Now it turns out that electrical and mechanical systems are hideously complex once you start digging around. If you push far enough, not only are they non-analytic and insoluble, but they are also non-deterministic. Both involve random events which are provably unpredictable. But there is some level at which you can say, “I don’t care about all that stuff, I’m just going to assume that at the scales I’m interested in the behaviour is linear.” Once you’ve made that assumption, you can apply linear control theory gaily. Even if you know the system is non-linear, you can still successfully argue that linear is close enough about a particular operating point.

    Any application of linear control theory necessarily assumes that the climate response in the variables of interest is linear at the scale of interest and about the operating point of interest. To apply that theory, it is not necessary to prove from first principles that the response is always linear, or even that we have any idea how the underlying system behaves; it is only necessary to show that the assumption is reasonable, either empirically or theoretically.
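
    To make the control-theory framing concrete, here is a minimal sketch of the linear bookkeeping being assumed: an open-loop (Planck) restoring term plus additive feedback terms. The numerical values are purely illustrative, not measured climate parameters; the whole construction stands or falls with the linearity assumption discussed above.

    # Linear feedback bookkeeping in the control-theory sense: open-loop restoring
    # term lambda_0, additive feedback terms f_i, closed-loop response dT.
    # All values are illustrative placeholders.
    lambda_0 = 3.8                      # W m-2 K-1, no-feedback (open-loop) term
    feedbacks = {"water vapour": 1.8,   # positive values amplify the response
                 "lapse rate": -0.8,
                 "clouds": 0.7}
    dF = 3.7                            # forcing [W m-2]

    dT_open_loop   = dF / lambda_0                               # the reference (no-feedback) response
    dT_closed_loop = dF / (lambda_0 - sum(feedbacks.values()))   # response with feedbacks included
    print(dT_open_loop, dT_closed_loop, dT_closed_loop / dT_open_loop)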

    Attack the assumption, by all means. I don’t know if it’s reasonable or not. But the attack must be more than, “There exists a level of detail at which you are unable to model the system, so therefore you don’t know anything about it.” You need to either show from theory that the system is necessarily non-linear, or show empirically that observations are inconsistent with linearity. Someone making the assumption should also either show theoretically that the system is necessarily linear or show empirically that observations are linear to a reasonable approximation, of course. But since we don’t understand the system at any level of detail (as shown adequately in this thread) the justification is likely to be empirical. And there we are into “who has the best hand-wavy justification” territory.

    • Any such “empirical justification” had better be wide open to validation by actual empirical observation. The linearity of the GCMs has generated so many disconnects from empirical observation that it has, as a Japanese scientist famously observed, all the credibility of astrology.

  54. Tom, thanks for your comment. Never too late; your comment has generated over 120 hits on this thread today.
