Lessons from the ‘Irreducibly Simple’ kerfuffle

by Rud Istvan

UPDATE:  Response from Christopher Monckton

The Monckton, Soon, Legates, and Briggs paper “Why models run hot, results from an irreducibly simple climate model” appeared in the January 2015 Science Bulletin of the Chinese Academy of Sciences (CAS). Hereinafter MSLB.

The paper discusses the divergence between climate models and observed temperatures, and develops the implications for climate sensitivity.

MSLB has created quite a kerfuffle. There was initial dismissal: it was claimed that Science Bulletin is an obscure journal with lax review standards, so the paper is no good. Science Bulletin turned out to be the Chinese equivalent of Science or Nature. Then came MSM efforts (NYT, Boston Globe, WaPo, even BarackObama.com) to discredit the authors. This has escalated into a more general attack on prominent skeptics like Christy, Pielke, and our gracious hostess [link]. These attacks are growing ugly, for example from BarackObama.com on 2/23/2015: "Bad things are coming for these boys and girls. (Name list) Keep your eye on the media. Several stories."

Dr. Trenberth of NCAR provided NYTimes reporter Gillis an MSLB rebuttal, posted at Matt Briggs' blog [link]. Gillis did not report Briggs' reply. Trenberth dismissed the simple model simply because it is simple, and said the 'pause' is insignificant natural variation. Yes, but the now 18+ year pause/hiatus is in very serious disagreement with CMIP5 climate model simulations, using criteria set out by climate modelers themselves in 2011 [link]. Trenberth's comments to the NYTimes are indefensibly misleading in my opinion, and provide a vivid object lesson about consensus climate 'science' and its reporting.

There was a saying among WWII Army Air Force bomber pilots: “If you are taking heavy flak, you are over the target”. What is it about this target?

CMIP5 climate model simulations continue to diverge from observed temperatures because of the ‘pause’. This suggests the GCMs are oversensitive to increases in CO2 – CO2 continues to increase ‘business as usual’ per RCP8.5, while temperature hasn’t. MSLB discusses this divergence shown by their Figure 2, and then offers a non-GCM way to understand why this is happening and what it means for climate sensitivity.

[Figure: the model/observation divergence]

MSLB proceeds in three steps: (1) derives the 'irreducibly simple' climate sensitivity equation, (2) estimates the 5 required parameters from mainly IPCC sources, and then (3) uses both to debunk IPCC sensitivity estimates. The first two steps are useful and instructive. The last step is unfortunately partly wrong. This post covers that last step first.

What’s wrong with MSLB

Background. There is no reasonable doubt that radiative physics says 'greenhouse gases' (GHGs) warm the atmosphere. (If SkyDragons need a layman's explanation, read essay Sensitive Uncertainty in ebook Blowing Smoke.) 'Grey earth' modifications of the canonical Stefan-Boltzmann black body radiation equation (1.0C) produce either 1.1C or 1.2C per CO2 doubling. MIT Prof. emeritus Lindzen uses 1.2C. The climate sensitivity question is whether that physics is attenuated or amplified by Earth's feedbacks, most importantly water vapor (a potent GHG) and clouds. Earth must have feedbacks of some sign that operate on the simple 1.2C physics to produce the final equilibrium temperature response to doubled atmospheric CO2.

Both IPCC AR4 (CMIP3) and AR5 (CMIP5) GCMs estimate the resulting equilibrium climate sensitivity (ECS) at +3.2C on average. There is substantial GCM model-to-model variance, for example in AR4 WG1 Table 8.2. This gives a 'simple average' multiplier of (3.2/1.2) ~ +2.67. AR4 gave a point (mode) estimate of +3.0C (a multiplier of ~2.5). AR5 only gave a likely range from +1.5C to +4.5C, and explicitly did not give a midpoint value, owing to the discrepancy between climate model and observationally derived values. GCMs derive feedback by doubling (or quadrupling) CO2, then simulating at least 150 years into the future. Most of us do not have the supercomputer resources to do so. Hence the utility of simpler sensitivity models for the rest of us. MSLB said, "The simple climate model outlined here is not intended as a substitute for the general-circulation models. … rather, it is intended to illuminate them. … The simple model provides a benchmark against which to measure the soundness of the more complex models' predictions."

Bode Model. The Bode feedback model is adapted from electronic circuit design, as MSLB points out. It is very simple. The multiplier over the 'grey' SB 1.2C is just 1/(1-f), where f is the sum total of positive and negative feedbacks. As Lindzen explained to the British Parliament (www.rlindzen@MIT.edu) [the red IPCC curve is my addition]:

[Figure: Lindzen's Bode feedback curve, with the IPCC estimate added in red]

The Bode curve translates an IPCC estimate of ECS ~3 – 3.2C into a Bode feedback f of ~0.6 – 0.65 for an f0 (no feedback, f=0) of 1.2C. It is the simplest possible feedback model, even simpler than the 'irreducibly simple' MSLB equation. We shall return to the IPCC's f ~0.65, because it contains additional lessons about what is probably wrong with the IPCC GCMs.

Erroneous MSLB Assertion  

JC note:  I am having trouble formatting subscript/superscripts. To anyone that wants to follow the equations carefully, see this pdf file equations with the formatted equations and sub/superscripts

MSLB used a complicated method, involving additional math not directly related to the irreducibly simple equation, to calculate the equivalent of Bode f. Confusing, although apparently rigorous. The resulting Bode f equivalent is the closed-loop gain gt = λ0 * ft, where λ0 is the zero-feedback (Planck) sensitivity parameter (see below) and ft is the sum of all feedback radiative forcings like water vapor, lapse rate, clouds… MSLB estimates the IPCC ft from their Figure 3 (which reproduces AR5 WG1 9.43): "The feedback sum ft (right-hand column) falls on 1.5 [1.0, 2.2] W/(m2 K) for AR5, compared with 1.9 [1.5, 2.4] W/(m2 K) for AR4." MSLB Table 1 provides calculated gt (IPCC Bode f) values ranging from 0.31 to 0.75 for AR4 and AR5.

The IPCC Bode f could have been derived more simply. An ECS of 3.2C over the Planck 1.2C yields a simple Bode multiplier of 2.67 ≈ 2.7 for 1/(1-f). The IPCC feedback f is therefore ~0.63, since 1/(1-0.63) = 1/0.37 = 2.7. That is all MSLB had to do to be irreducibly simple in this part of the paper.
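For readers who want to check the arithmetic, here is a minimal sketch (mine, not the paper's) of that inversion; the 1.2C value is the 'grey earth' no-feedback response used above.

```python
# Invert the simple Bode relation ECS = F0/(1 - f) to recover the
# feedback fraction f implied by a given ECS estimate.
F0 = 1.2  # no-feedback ('grey earth' Planck) response, C per CO2 doubling

def implied_bode_f(ecs):
    """Bode feedback fraction f implied by an ECS, since ECS = F0/(1 - f)."""
    return 1.0 - F0 / ecs

print(round(implied_bode_f(3.2), 2))  # 0.62, i.e. the ~0.63 above in round numbers
print(round(implied_bode_f(1.7), 2))  # 0.29, the f implied by a Lewis & Curry-like ECS
```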

MSLB Figure 5 asserts that the more complexly derived values would result in unstable climates (the figure's x axis should have been labeled g∞ per the accompanying text, a confusing chart/text discrepancy). MSLB therefore argues the GCMs are not trustworthy, because they result in unstable response regimes. But it is evident by inspection that any Bode f under or around the inflection (below ~0.75) is stable. The IPCC's f ≈ 0.65 does not lead to an unstable climate. MSLB's contrary conclusion rests on nothing more than an unsupported "process engineer's design limit ≤ +0.1" assertion.

[Figure: the expanded Bode curve for MSLB's equivalent gt]

As is clear from either Lindzen's f plot or MSLB's expanded equivalent gt plot, all of MSLB's calculated IPCC feedback f values are well behaved.
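Tabulating the Bode multiplier makes the point concrete (a throwaway sketch with my numbers, again assuming the 1.2C no-feedback value): the response only runs away as f approaches 1.

```python
# Bode multiplier 1/(1-f), and the resulting ECS for a 1.2C no-feedback response.
for f in (0.10, 0.25, 0.50, 0.65, 0.75, 0.90, 0.99):
    print(f"f = {f:.2f}   multiplier = {1/(1 - f):6.1f}   ECS = {1.2/(1 - f):6.1f} C")
```

At f = 0.65 the multiplier is under 3; even at 0.75 it is only 4. Nothing blows up until f gets very close to 1.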

None of the paper’s long ‘step 3’ ‘closed loop gain’ / Bode f discussion relates directly to the ‘irreducibly simple equation’ itself, or to its evaluation. It muddles the rest of the paper and obscures the equation’s utility, in my opinion.

What’s Right with MSLB

The mathematical derivation of the ‘irreducibly simple’ equation is impeccable. The ‘simple’ result (rearranged here for expository convenience) is:

ECS = (λ0/qt) · ĸ · ln(Ct/C0) · rt / (1 − λ0·ft)

For those with math allergies, here is a translation into sort-of English. ECS (equilibrium or 'effective' climate sensitivity) equals λ0 (the traditional radiative greenhouse effect with zero feedback, equivalent to f0 = 1.2C in the Bode model, with the same result) times the following terms:

  • qt is the proportion of total GHG forcing that is CO2. Dividing by it just scales up from the CO2 portion to the whole of anthropogenic GHGs.
  • ĸ is the CO2 GHG forcing constant (~5.35 W/m2).
  • ln(Ct/C0) is the log of the CO2 concentration at time t over its value at t=0. This is just the expected rise in CO2. For sensitivity, the traditional test is doubled CO2, so this term is just ln(2) = 0.69.
  • rt ('transience fraction') is the proportion of the eventual total climate response reached at time t. This lag arises mainly from ocean thermal inertia.
  • 1/(1-λ0·ft) is where all the bodies are buried: the closed-loop gain gt = λ0·ft is none other than the simple (now familiar) Bode f over some feedback time t. (A code sketch of the whole equation follows this list.)
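Putting the pieces together, a minimal Python sketch of the rearranged equation (my code, not the authors'; the argument names follow the bullets above):

```python
import math

def simple_ecs(lam0, qt, kappa, c_ratio, rt, ft):
    """The MSLB 'irreducibly simple' response, as rearranged above:
    ECS = (lam0/qt) * kappa * ln(Ct/C0) * rt / (1 - lam0*ft)."""
    return (lam0 / qt) * kappa * math.log(c_ratio) * rt / (1.0 - lam0 * ft)
```

The parameter values to feed it are estimated below.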

Let's rephrase this 'irreducibly simple' equation yet again, in even simpler, more commonsensical English, using no mathy stuff at all: climate sensitivity equals the radiative forcing from all anthropogenic greenhouse gases including CO2, times the (known since Guy Callendar in 1938) logarithmically declining impact of increasing CO2, times the transient lag to climate equilibrium, times some feedback f changing this direct CO2 effect.

We are left with the same old Bode feedback f, wrapped in some interesting additional parameters. The vicious consensus attack on MSLB cannot be motivated by such an innocuous, uncontroversial equation/statement. It has to have been motivated by the conclusions derived from it. Let's look deeper.

MSLB derives values for the equation’s five ‘tunable’ parameters using mostly IPCC AR4 and AR5, and some mathematical cross checking on λ0.

  • λ0 ~0.31 K/(W/m2), the IPCC AR4/AR5 value.
  • qt ~0.83, the IPCC AR5 value (slightly dependent on RCP scenario).
  • ĸ ~5.35 W/m2, the IPCC TAR/AR4/AR5 value.

rt is potentially more complicated. MSLB Table 2 and discussion develop a plausible range of values for various assumed ft. A reasonable range is perhaps 0.55 to 0.85. There is a simpler way to derive this ‘transience fraction’ (climate response lag) since rt is tantamount to the ratio of TCR to ECS. The AR4 models (WG1 8.6.2.3) have rt from 0.56-0.76. The newish Lewis and Curry estimate http://www.judithcurry.com/2014/09/24/lewis-and-curry-climate-sensitivity-uncertainty/ gives (TCR 1.3/ ECS 1.7) ~0.76, used here. Professor Lindzen points out that the higher a climate model’s sensitivity, the slower its total response. Higher sensitivity IPCC models should have a smaller rt … and they do [link].

That leaves the feedback forcings ft. But it is easier to substitute f: anyone can plug any f they think suitable into either the MSLB equation (via gt = λ0 * ft) or into the simpler Bode equation 1.2C*[1/(1-f)]. Or, just use Lindzen's graph. We found earlier that IPCC AR5 implicitly has an f of about 0.65. This can be decomposed along MSLB lines. Water vapor feedback (including lapse rate) is about 65-80% of the total ft (AR5 WG1 Figure 9.43, or MSLB Figure 3), so Bode fw ~0.45 to 0.5. The rest, about fc ~0.15 to 0.20, is mainly clouds, since the other stuff is smallish and mostly offsetting. This is rule-of-thumb thinking; simple models do not need precision to provide instructive lessons.

Water vapor feedback fw

Climate models produce a tropical troposphere hotspot that does not exist in reality. See Prof. John Christy's presentation to the APS [link]. That is because modeled upper troposphere specific humidity is too high. The probable reason is the physical inability of coarse-resolution GCMs to simulate tropical convection cells (thunderstorms), which must be parameterized instead. Essays Models all the way Down and Humidity is still Wet (in Blowing Smoke) provide details. CMIP3/5 climate models also underestimate the precipitation that removes atmospheric water vapor, by about half (citations in the essays). Given these model/observation differences, perhaps the water vapor feedback is on the order of half that of the climate models, notionally something like fw ~0.25.

Cloud feedback fc

Dessler’s 2010 clear/cloudy sky satellite paper actually suggests zero cloud feedback, contrary to what he claimed and what is on the NASA website. Essay Cloudy Clouds (in Blowing Smoke) provides much more detail. To a first order very rough observational approximation, cloud fc ~0. AR5 WG1 7.2 says it is uncertain, but certainly positive. (Phrased that crisply, the IPCC non sequitur is obvious). This provides a lesson on the consensus belief in models over observations, a belief deeply unsettled by the undeniable pause/divergence.

Combined, these observations suggest an f on the order of 0.25, a little less than half the implicit IPCC f. Plug an f like that into the irreducibly simple MSLB equation (via gt = λ0·ft) using the foregoing parameters, and out pops an ECS of about ~1.75. Or, just spot your preferred f on the x axis and read ECS off the y axis of the Bode plot above. Use your own 'red lines'. An f of about 0.25 provides an ECS remarkably similar to the Lewis and Curry result, which was derived using completely different and much more sophisticated methods.
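As a hedged numeric check (my arithmetic, not the paper's), feed these values to the simple_ecs sketch defined after the parameter list above. With rt = 1 (full equilibrium response) the equation gives ~1.85C and the bare Bode curve gives ~1.6C, bracketing the ~1.75C figure; using rt = TCR/ECS ~0.76 instead gives the correspondingly lower transient response.

```python
f = 0.25          # observationally suggested Bode f (fw ~0.25, fc ~0)
ft = f / 0.31     # back out MSLB's ft from gt = lam0 * ft = Bode f
print(simple_ecs(lam0=0.31, qt=0.83, kappa=5.35,
                 c_ratio=2.0, rt=1.0, ft=ft))  # ~1.85 C at equilibrium (rt = 1)
print(1.2 / (1.0 - f))                         # ~1.60 C from the bare Bode curve
```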

Bottom line

The simple non-GCM models Trenberth dismisses have great utility. Observations now suggest Earth’s ECS is a bit more than half of what the IPCC has proclaimed as settled ‘GCM science’. Simple models like those discussed here can deconstruct f to suggest how and why. That unsettles the IPCC science. MSLB’s simple model has vociferously unsettled the ‘consensus’. Lower ECS silences the urgent, loud alarm to mitigate at COP21 in Paris—unless the consensus’ increasingly nasty public attacks succeed in silencing all the skeptics who point out simple stuff.

Bio notes:  Rud Istvan has published three books

Rud has also authored frequent guest posts at Climate Etc. [link]

JC note:   I appreciate Rud starting a technical dialogue on this controversial paper. As with all guest posts, keep your comments civil and relevant.  Treat this as a technical thread; keep your comments/questions related to the Monckton et al paper; general comments about the controversy surrounding the paper should be made on the Conflict of Interest post.

***UPDATE:  I have received a response from Christopher Monckton.  Since it is longer than the original post, I am providing it as a pdf file that you can download [monckton]

435 responses to "Lessons from the 'Irreducibly Simple' kerfuffle"

  1. A quick comment of my own on the Monckton et al. paper. There is a debate regarding the utility of simple models such as described here; see Tomas Milanovic’s previous post How simple is simple? http://judithcurry.com/2014/05/23/how-simple-is-simple/

    I personally agree with Tomas on this issue. Simple energy balance or feedback models are useful for back of the envelope arguments. I don’t see that they can provide any insights into why GCM climate models disagree with observations.

    The useful argument made by Monckton et al. is that fast thermodynamic feedbacks (e.g. clouds, water vapor) in climate models are too large. However, this isn’t a novel argument, and the paper doesn’t provide any new insights into the underlying source of the problem.

    Should the paper have been published? Yes. Does it deserve all the hype it has received? No. Have worse papers been published in prestige U.S./EU journals? Yes.

    • Considering an invariant attractor is already a first important simplification. Indeed the necessary and sufficient condition to predict the system's dynamics is to know the large number of Fourier modes defining the global attractor. Yet the global attractor is only invariant if the coefficients (weightings), called control parameters, are constant. If they vary with time because the forcings and/or the boundary conditions vary with time, then the topology and dimension of the attractor may change dramatically regardless of whether the variation of the control parameters is small or large.

      IOW chopping down a tree in the Andes might change the TCR by a significant amount. Right?

      • AK,

        My assumption is that you are right. Lorenz was being serious by proposing that something as apparently minor as the flap of a butterfly’s wing might lead to extremely large physical disturbances in a chaotic system such as the atmosphere seems to be.

        Although it’s comforting to believe in reversion to the mean, averages, and all the rest, it doesn’t seem that Nature operates that way. Climate modelling no doubt provides rest, relaxation, and a source of income for the modellers, but provides no noticeable benefit to the rest of us, as far as I can see.

        Live well and prosper,

        Mike Flynn.

      • the butterfly effect does not mean the whole system will go unstable, it means you can have emergent phenomena triggered by small random fluctuations.

        This amplification is caused by local +ve feedbacks, like those found in the building of tropical storms.

        These will still be bound by the overpowering negative f/b that ensure overall system stability.

        From Monkton’s reply:

        We have some reason to suspect that the shrieking fury to which my co-author Willie Soon was subjected once the usual suspects found they could not fault the paper scientifically stems chiefly from our revelation that the Bode equation cannot be applied to the climate without heavy modification. Those behind the climate scare know this quite well. Now others know it too.

        No, I think what has really frightened them is that it shows that using the Bode approach to represent their simplistic notion of climate as CO2+noise does not give the answer they thought it did.

        Rather than rejoicing that the world is safe from imminent catastrophe, they see this as an existential threat to their gravy train. That explains the vitriolic attacks.

        That they wish to cut off the funding of those who disagree reveals where they feel they are threatened by all this.

      • the butterfly effect does not mean the whole system will go unstable, it means you can have emergent phenomena triggered by small random fluctuations.

        A butterfly isn’t a boundary. And anyway, my point was that there’s no good reason to suppose a singular TCR or ECS exists. The mechanisms that produce these mythical numbers could well be wandering around from year to year, with no predictability. Thus, efforts to produce a reliable measure of the effect of adding fossil CO2 cannot be assumed to even be workable. (Though, of course, they might be.)

      • the complete talk on butterflies
        http://eaps4.mit.edu/research/Lorenz/Butterfly_1972.pdf

        We must therefore leave our original question unanswered for a few more years, even while affirming our faith in the instability of the atmosphere. Meanwhile, today's errors in weather forecasting cannot be blamed entirely nor even primarily upon the finer structure of weather patterns. They arise mainly from our failure to observe even the coarser structure with near completeness, our somewhat incomplete knowledge of the governing physical principles, and the inevitable approximations which must be introduced in formulating these principles as procedures which the human mind or the computer can carry out.

      • Greg:
        “…simplistic notion of climate as CO2+noise…”
        Solar plus noise
        Orbit plus noise
        Each of the above could be argued to be not untrue. In each case we have a theory, and everything not explained by it is the randomness of everything else. When solar is argued to have a small effect, then its contribution or power as noise would seem to also be minor. The more you minimize a thing that is not CO2, the more CO2 must accurately and with success explain things.

      • michael hart

        +1 AK

    • Dr. Curry, ” I don’t see that they can provide any insights into why GCM climate models disagree with observations.”

      I dunno; if you apply simple models to the regions the models miss the worst, they can do a lot more. Where they miss the most in terms of energy is the tropics, and since convective triggering is a parameterization, i.e. a SWAG, that looks like a prime candidate region for someone who might be into atmospheric thermodynamics.

      BTW, convective triggering temperature seems to be a magic >28C, but would more realistically be a range from about 27C with a significant non-linear increase above that and a more linear fade off below that. Amplifier analogies are great, but if you have a Tmax similar to V+, you can only get so much gain.

      • Precisely, the system is always bounded by -ve feedbacks, and the effect of TS is regionally a -ve f/b. The internal mechanics of individual storms are +ve f/b, which makes the regional effect a stronger, non-linear -ve f/b.

        As well as increasing convective f/b (which should lead to the hot-spot), this also leads to a f/b directly controlling incoming solar (cloud cover). If they expect a hotspot and don't find one, they have the guesstimated parameters wrong.

      • Greg, "As well as increasing convective f/b (which should lead to the hot-spot), this also leads to a f/b directly controlling incoming solar (cloud cover). If they expect a hotspot and don't find one, they have the guesstimated parameters wrong."

        Yep, it’s more than just clouds though, deep convection changes ozone, stratospheric water vapor, stratospheric advection (the Brewer-Dobson circulation) which can set up Sudden Stratospheric warming and Polar Winter Warming events. Lots of dynamics involved.

    • Steven Mosher

      ” I don’t see that they can provide any insights into why GCM climate models disagree with observations.”

      Yup. It's the weirdest approach to diagnosing problems.

      from a different angle. It’s questionable that the mean of models has anything but a pragmatic meaning. If you want to diagnose what is wrong with ‘models’ it’s more sensible to examine them individually.

      hmm. I haven't read widely in the GCM work, so Judith are you aware of any work where individual models are compared to each other.. you know where one team says "those guys suck, we are better". or do folks pay deference to each other's work?

      • On Rud and his comments on the Soon paper.
        Noticed Zeke and co out in tandem very early to knock it.
        Not sure why
        “There was a saying among WWII Army Air Force bomber pilots: “If you are taking heavy flak, you are over the target”. What is it about this target?”

        Steven Mosher agreed with JC saying
        ” I don’t see that they can provide any insights into why GCM climate models disagree with observations.”
        “Yup. its the weirdest approach to diagnosing problems.
        It’s questionable that the mean of models has anything but a pragmatic meaning. If you want to diagnose what is wrong with ‘models’ it’s more sensible to examine them individually.”
        hmm.
        From the man who says any model is better than none at all?
        On an article entitled “Why models run hot, results from an irreducibly simple climate model”
        He says it's the weirdest approach? Seems an eminently sensible approach when discussing model means to examine them together, Steven, especially when they are all running hot.

        "I haven't read widely in the GCM work"
        Now he tells us???

        Full quote was ” Simple energy balance or feedback models are useful for back of the envelope arguments. I don’t see that they can provide any insights into why GCM climate models disagree with observations.”

        Why put the boot in so hard instead of discussing his argument? Is that so hard?

      • Tend to agree, angech2014. No comments at all, apart from Rud's critique, have contributed to my understanding of the Monckton et al paper. I find that the comments tend to be more concerned with supporting their own worldview on AGW than with seriously taking apart what Monckton et al had said.

      • “If you want to diagnose what is wrong with ‘models’ it’s more sensible to examine them individually.”

        Yes and no. They have such large internal variability, not linked chronologically to real internal variability, that some averaging is needed to see what the underlying behaviour is.

        Averaging by group can be useful. Here high and low TCS models:
        http://climategrog.files.wordpress.com/2015/02/cmip5_hi_grp_low_grp.png?w=800

        We see that the recent divergence is linked to model TCS. We also see bigger problems elsewhere common to both groups. That is useful diagnostic information.

        We can then compare high TCS models to, say, SSN and see that their divergences are similar to inverted SSN. This suggests under-estimation of solar effects on climate and a likely oversensitivity to volcanic forcing.
        http://climategrog.files.wordpress.com/2015/03/cmip5_xs_hi_ssn.png?w=800

        That is at least a start. I don’t think that would be as clear without some averaging to remove the random variability.

      • Don’t expect the modelers to critique each other’s work. They all share the same goal and they are all getting funded. Why rock the gravy train?

      • This is where ‘madness of the herd’ comes in. Wouldn’t you expect competition to fire some modelers into producing a better product than others? Yes, so why hasn’t it happened?
        ========================

      • catweazle666

        “If you want to diagnose what is wrong with ‘models’ it’s more sensible to examine them individually.”

        Not if the whole concept is pretending that a computer game climate model can, by any stretch of imagination, project/predict the future state of a non-linear set of processes as complex and chaotic as the Earth's climate.

        None of them work, none of them can possibly work. Period.

        You might as well claim it is necessary to examine each individual yersinia pestis bacillus to determine whether it is infectious.

      • Steve Mosher wrote: “are you aware of any work where individual models are compared to each other.. you know where one team says “those guys suck” we are better. or do folks pay deference to each others work?”

        So far as I can tell, they all pay deference to each other, all models are created equal. The scientific method would require that each model be subjected to strenuous attempts to prove it wrong, and only models that survive all such attempts be deemed potentially reliable. But models that give unrealistic distributions of clouds, or are way off on changes in ocean heat content, or use aerosol forcing that is incompatible with observations are included in the multi-model mean. Of course it may be that there are no models that can survive attempts to prove them wrong.

        35 years ago the estimate of equilibrium climate sensitivity was 1.5 to 4.5 K. Billions of dollars later, it is the same. I suspect that lack of ruthless criticism is part of the problem.

      • Mosh says: “If you want to diagnose what is wrong with ‘models’ it’s more sensible to examine them individually.”

        Perhaps equally pertinent is the idea that you should examine different regions separately. Averaging the response of tropical and polar regions is an aberration.

        Tropics have strong, fast acting feedbacks and are very resistant to changes in radiative forcing. Temperate zones less so. Land “even more less so”. ;)

        Polar regions, particularly the Arctic, have massive interchange of sensible and latent heat acting on decadal scales.

        How can you hope to "average" all that and have something physically meaningful?

        Cowtan & Way simply succeed in further muddying this non-physical mix. But since the adjustment goes "the right way" it gets lauded.

      • davideisenstadt

        Steven Mosher, thank you for noting the meaninglessness of the multi-model mean…
        I think that even for pragmatic purposes, it is ill-suited for consideration.
        Of course, taken one at a time, most of the current GCMs don't look so good.

    • The first half of their paper is reasonable, though a simple zero-dimensional model doesn't give you the dynamics that a box or slab ocean model would provide, so it's not immensely useful.

      The second part is where it really goes off the rails, with an a-priori assumption of a negative feedback parameter (justified based on some handwaving about circuit design and homeostatic temperatures over the last 800,000 years, ignoring the fact that almost all of the literature on the subject suggests that glacial cycles could not occur without some positive feedback). There is also the bizarre assertion (via misinterpreting a paper from Gerald Roe) that if feedbacks are negative then the Earth system has no thermal inertia and thus transient and equilibrium sensitivity are the same. I’d argue that in its current form the paper should not have been published, and would not have been had it been subject to competent peer review.

      • “The first half of their paper is reasonable, though a simple zero-dimensional model doesn’t give you the dynamics that a box or slab ocean model would provide, so its not immensely useful.”
        MSLB said, “The simple climate model outlined here is not intended as a substitute for the general-circulation models. … rather, it is intended to illuminate them. …The simple model provides a benchmark against which to measure the soundness of the more complex models’ predictions.”
        CM “when we ran our own model for the first time with parameters that we thought reasonable, it faithfully reproduced the observed temperature trend since 1990.”

        The simple model works, Zeke.
        It also measures the soundness of the predictions, Zeke, something immensely useful if you have unsound predictions.

        Rud concluded that "The simple non-GCM models Trenberth/Zeke dismiss have great utility".

        Perhaps you should add the caveat
        "I haven't read widely in the GCM work"
        like your fellow commentator.

      • I won’t comment on the thermal inertia assertion but I think there is some misunderstanding regarding feedbacks.
        First I’ll start with broader definition of feedbacks and forcings than what’s usually taken into account. Let’s start with phase space of the climate – simply speaking, all possible values of temperature, humidity, albedo, and other factors. Each point in this space corresponds to one certain state of the climate. When climate is put into certain state, it follows certain trajectory in the phase space, with its state changing according to laws of physics. Let’s call the act of following this trajectory a “feedback”. Of course this feedback is sum of all factors that affect state of climate and are considered separate feedbacks when analysing things in detail. Ultimately, if nothing changes, the climate will end up following a “closed” path within certain limited region of the phase space.
        Now we can define positive and negative feedback: positive feedback is any trajectory leading outside from given part of phase space. Negative feedback is any trajectory leading from outside in.
        Note that “positive” and “negative” do not mean “towards higher/lower temperatures” in this context.
        For completeness we can define forcing as any influence that is independent on the climate state and affects the state differently than feedbacks. Earth rotation and orbit, TOA insolation, cosmic rays and human CO2 emissions are such forcings. Forcings perturb the state and put it on a different feedback trajectory. That’s where “positive” and “negative” feedbacks play role because after a perturbation, positive feedbacks move the system out of the stable region, while negative feedbacks are sending it back.
        When looked at from this point of view it is becoming obvious that any stable state of the climate is necessarily governed by negative feedbacks. If there were any substantial positive feedbacks, random perturbations would easily set it on positive feedback trajectory to the next attractor.
        It is also obvious that feedbacks are entirely state-dependent. Each trajectory starts as a positive feedback in some state and ends as negative feedback in a different state.
        When applying that to glacial cycles, there are two distincs states of the climate – glacial periods with low temperatures, high albedo and low humidity, and interglacial periods with high temperatures, low albedo and high humidity. The switch between them is caused by perturbations that set the system on positive feedback trajectory, and then the system following that trajectory. However as long as we’re currently in the interglacial phase, the positive feedbacks we have out of the current stable state are towards glaciation. And by increasing CO2 in the atmosphere and increasing our temperature, we are closing doors for them.

      • Kasuha:
        That was a good explanation. With positive feedback we may see an attractor swap. Failing to see them provides information about how the system operates most of the time.

      • Good explanation, Kasuha. The ice-core variance supports that — high variability during the mid-glacial periods and low variability during the interglacials and at the glacial maximum (LGM). Right now there is little available positive feedback — glaciers are minimal and situated at high latitudes. Milankovitch parameters, though, have been slowly edging the N polar regions into cooler regimes, approaching the threshold of glacier-building positive feedback.

      • TimTheToolMan

        Zeke writes “an a-priori assumption of a negative feedback parameter (justified based on some handwaving about circuit design and homeostatic temperatures over the last 800,000 years, ignoring the fact that almost all of the literature on the subject suggests that glacial cycles could not occur without some positive feedback). ”

        One handwave vs another, Zeke.

    • Dr. C, that seems an odd way to formulate a criticism of this paper. GCM modelers have spent millions, if not billions, to create a dog's breakfast. And now the touchstone of significance is the understanding of this mess!?

    • On a browse through the comments, the issue of whether simple models of GCMs are useful is a bit of a recurring theme.

      The point has to be made that if it is acceptable to model the climate using GCMs, it is equally acceptable to model GCMs using other simplifying models.

      The idea of models of models to gain understandings about them (in the same way GCMs help us understand the climate) is a pretty obvious thing to do, and there is in principle no problem with limiting the model-model to subsets of behaviour in the model.

    • russellseitz

      It’s a good thing irony doesn’t have mass, or Monckton’s defense of Soon would have an event horizon .

      • Yessirrah, m’lord do appeer a bit peculiar up there high on his warhorse, but wait’ll you see the characters on the other side. And wait’ll they see the morass between here and there.
        =================

  2. There are a number of internal diminishing feedbacks (both positive and negative), so it may not be possible to calculate sensitivity for a single forcing to any degree of certainty. This applies in particular to CO2, a gas which is released into the atmosphere both independently and as a feedback.

  3. A quick correction before dinner, caused by the website's inability to do subscripts and superscripts translated from a Word .docx, although we really tried for a couple of days. MSLB f sub t is NOT Bode f. Rather, g sub t = (lambda sub 0 times f sub t) is. Got that?

  4. Second quick correction, to as posted rather than as written. In the second-to-last paragraph, f sub t should read f. These are not the same. Took me weeks to figure out the difference. g sub t = lambda sub 0 times f sub t = Bode f. Confusing notation.

  5. Rob Bradley

    And so many mainstream scientists remain silent, letting the ‘outliers’ like Trenberth and the Obama funding machine have the microphone…. There is a happy middle to the debate, global lukewarming, that recasts the whole climate science debate.

    • They will jest keep telllin’ the peeple it’s hot hot hot!
      Yer don’t need ter take yr sheep skin coat … Hey,
      switch off that heater!

    • LOL, Beth … that frozen corpse must had died from the heat!

  6. An error in the equation presented in the post is that the formula is for ΔT, not ECS; it only becomes ECS if you use ln(2) and rt=1. The "model" is also nothing new, because we have seen such sensitivity formulas before, such as in Held and Soden in 2000, or more recently in Marotzke and Forster and in previous papers by Lindzen and Choi. All they have done is said rt=TCR/ECS and qt is the fraction of the forcing due to CO2, and called this packaging of known variations a new model. Given that the model is not new, the only potentially new part of the paper would have been their new parameters for ft and rt. Rud is correct that ft came out of a mistaken view of stable feedbacks, and he doesn't mention that their range is actually -0.5 to +0.1, so it is mostly negative feedback. They pull this out of nowhere, saying something about the Ice Ages proving the climate is stable(!?) when actually the Ice Ages require very large feedbacks to account for the 5-10C temperature range (mostly albedo with some CO2). Meanwhile they have also taken rt=1, meaning that TCR=ECS, and then claim that there is no warming in the pipeline. This implies no imbalance in the current climate and that the OHC can't have been rising. The denial of the imbalance is opposed to observations and is the main thing Trenberth picked up on. In conclusion, the paper used a known equation as their "model" and invented values of two parameters from heuristics, neither of which actually agrees with observations (the Ice Age temperature range and the current energy imbalance). An extremely wrongheaded joint effort by M, S, L and B.

    • Also regarding the hot spot, this is part of the negative lapse rate feedback. If it is absent as Rud says, it really means the sensitivity is higher. It depends on the warming of the tropical oceans, and its possible overprediction by models may be due to the tropical oceans warming at a lower rate than they predicted. In fact the land and Arctic are warming much faster than the tropical oceans.

      • JimD, ” It depends on the warming of the tropical oceans, and its possible overprediction by models may be due to the tropical oceans warming at a lower rate than they predicted.”

        http://climexp.knmi.nl/data/iersstv4_0-360E_-20-20N_n_1980:2014.png

        Ya think? The models also are supposed to somewhat replicate tropical convection, and Susan Solomon dug deep into this issue and announced that whatever the cause of the problem, it wasn't SST. The first place I look, after someone is positive about what it is not, is there. What do you suppose should be the convective triggering temperature included in the models?

      • Jim D, you have the significance of the missing modeled tropical troposphere hot spot BACKWARDS. Upside down backwards. Not a good omen for your fundamental climate comprehension.
        And, it really is missing despite warmunist protestations to the contrary. Read up on this stuff. It is not that complicated. Even about how the ‘hotspot’ is observationally measured. What, radiosonde thermometers are biased like their humidity sensors were, since corrected by calibrations?

      • Rud, the lapse rate feedback comes from the hot spot and is negative. A warmer surface temperature results in an even greater warming in the upper troposphere. This hot spot radiates at a greater temperature than if it wasn’t there, which means it is a negative feedback, because it helps the earth radiate the heat to space. Read papers by Held and Soden on how the lapse rate feedback helps to partially offset the water vapor feedback. If we don’t have the hot spot, it is worse than we thought.

      • Something to explain the lapse rate feedback.
        http://www.astr.ucl.ac.be/textbook/chapter4_node7.html

      • Captain

        You commented back in 2013 on the Oppo et al (2009) paper. I don’t fully understand your comment, but it appears related to this thread where we are discussing feedbacks. Do you believe the Oppo paper is consistent with a feedback of ~0.8?

        http://redneckphysics.blogspot.com/

      • “Also regarding the hot spot, this is part of the negative lapse rate feedback. If it is absent as Rud says, it really means the sensitivity is higher. “

        No, because it means the forcing is being multiplied by the steeper lapse rate, but even with that boost, actual temperature changes are less than low end projections, so energy is either unaccounted for or finding its way out of the troposphere in another way.

        Regardless, it means the fluid dynamics of the gcms as run are missing a lot of energy transfer which may ultimately determine radiative balance.

      • Jim D “This hot spot radiates at a greater temperature than if it wasn’t there, which means it is a negative feedback, because it helps the earth radiate the heat to space.”
        Jim D, you get the hot spot if and only if the feedbacks are very positive and create more heat in the atmosphere.
        More CO2, more water vapor, more heat.
        A hot spot will always radiate more heat because it is hot.
        But how could it get hot in the first place?
        By having more heat put into it in the first place!
        It is not a negative feedback, it is simply losing more heat the hotter it gets.
        A dampening effect, a diminished effect, but not negative, Jim D.

        “Read papers by Held and Soden on how the lapse rate feedback helps to partially offset the water vapor feedback. If we don’t have the hot spot, it is worse than we thought.”

        Cart before the horse again. First it gets hot from the water vapor feedback. That should cause a hot spot.
        If you do not have a hot spot, it's worse than you thought, because your thinking is wrong; something else [albedo?] is negating the water vapor CO2 effect.

      • angech, if the warming is occurring mostly over the land or in colder areas, you don’t get a hot spot because that relies on a warm saturated surface warming up. The lack of a hot spot is consistent with the heating being twice as fast over land than water as it has been for the last 30 years. It is consistent with the tropical oceans being the slowest subsystem to warm up, which delays part of the water vapor feedback too. Just because it is slower doesn’t mean it won’t happen. This is what the imbalance eventually achieves through the raising of the ocean heat content. The transient response is a lot more complex than many think. It is far from linear. See the work by Kyle Armour.

      • rls, “You commented back in 2013 on the Oppo et al (2009) paper. I don’t fully understand your comment, but it appears related to this thread where we are discussing feedbacks. Do you believe the Oppo paper is consistent with a feedback of ~0.8?”

        Yes indeed. Deep convection in the tropics can set up a whole series of negative feedbacks, and the Oppo 2009 paper indicates 28C is a pretty hard convective limit. The problem is that hockey sticks still have too much influence on paleo, when newer reconstructions of volcanic forcing indicate a much more variable past, in keeping with the Oppo 2009 reconstruction.

        0.8 C btw is based on satellite era temperatures. “Sensitivity” isn’t very linear.

      • Jim D, “This hot spot radiates at a greater temperature than if it wasn’t there, which means it is a negative feedback, because it helps the earth radiate the heat to space.”

        No. The stratosphere is warmer and drier than it is supposed to be. The GHE effect that is supposed to trap the heat in the upper troposphere is likely not as large as predicted and being circumvented by convective processes… and hydro-thermal.

      • Captain

        The Oppo et al (2009) paper appears related to the Rosenthal et al (2013) paper, based on an interview between Andy Revkin and Rosenthal. Also: common authors and common sediment location.

        One topic of the 2013 paper is the conclusion that the MWP and LIA were global events. I have found little discussion on that topic. Do you know why they made this conclusion? They, and Oppo 2009, have written that both the northern Pacific and the southern Pacific flow toward the equator. I see at least two possible results from these observations:

        1. The temperatures of the Pacific tend to homogenize, and
        2. The foram shells migrate and settle in the mud of the IPWP; the sediments of the IPWP are therefore representative of the entire Pacific.

        I believe, in the Revkin interview, Rosenthal also mentioned cores taken at other locations of the globe; he didn’t discuss details. Also, are the Oppo and Rosenthal papers consistent with your Oceans Lag Law?

        Regards,

        Richard

      • One topic of the 2013 paper is the conclusion that the MWP and LIA were global events.

        That does not mean the MWP was warmer than present, and Rosenthal makes no such claim.

        Why people who believe the MWP was warmer don’t simply write the paper is a complete mystery. The other side has tried to write the paper. Why not your side. Maybe you guys can carry the ball over the goal. Go team.

      • If the hot spot predicted by the theory is missing, it doesn’t mean that the theory has a big hole in it. It means that it’s worse than we thought. This is according to huffpo. We are not supposed to see that this is disingenuous, as usual.

      • JCH, does this mean that we are now warmer than we otherwise would be in a declining Holocene? Well, that would be nice.
        ===========

      • Paleontology shows that we can’t burn up, but that we can freeze.
        =================

      • rls, “One topic of the 2013 paper is the conclusion that the MWP and LIA were global events. I have found little discussion on that topic. Do you know why they made this conclusion?”

        It should be based on IPWP correlation with global SST and “surface” air temperatures and hydrology.

        ” The IPWP is the largest body of warm water in the world, and, as a result, it is the largest source of heat and moisture to the global atmosphere, and an important component of the planet’s climate. Climate models suggest that global mean temperatures are particularly sensitive to sea surface temperatures in the IPWP. Understanding the past history of the region is of great importance for placing current warming trends in a global context.”

        http://www.whoi.edu/main/news-releases/2009?tid=3622&cid=59106

        So it is a battle between alpine trees and the largest source of convection as to which might be a better proxy for climate.

      • rls, here is a quick example.

        https://lh3.googleusercontent.com/-s8dyehGYSEs/VPTbqznmfsI/AAAAAAAAM6U/Ww1mrea28T0/w719-h481-no/ipwp%2Bversus%2Bglobal%2Bsst.png

        That is how the IPWP compares to 70% of the surface of the Earth based on the instrumental record. It pretty clearly is not a bad proxy for “global” temperatures.

        When you use land reconstructions there is a good deal of anti-phase response (things start cancelling out) and many land reconstruction proxies, tree rings, varves etc. have confounding factors that are more difficult to deal with than ocean temperature proxies, but have a higher frequency resolution. So do you go with higher frequency or better accuracy?

        Not that marine proxies are perfect by any means, but the IPWP has fewer issues with currents than other regions. Mg/Ca is also a more consistent proxy than most; Uk'37, for example, has an issue with the desired depth range of the "bugs" and current upwelling. Nothing is all that definitive in paleo, but Mg/Ca seems to be a better method.

      • JCH

        The Rosenthal abstract says “We show that water masses linked to North Pacific and Antarctic intermediate waters were warmer by 2.1 ± 0.4°C and 1.5 ± 0.4°C, respectively, during the middle Holocene Thermal Maximum than over the past century. Both water masses were ~0.9°C warmer during the Medieval Warm period than during the Little Ice Age and ~0.65° warmer than in recent decades.”

        And here is an excerpt: "The findings support the view that the Holocene Thermal Maximum, the Medieval Warm Period, and the Little Ice Age were global events, and they provide a long-term perspective for evaluating the role of ocean heat content in various warming scenarios for the future."

        Regards,

        Richard

      • Captain

        Thank you for enlightening me on this beautiful sunny, snow-covered Michigan Monday.

        Richard

    • Bottom line: M, S, L and B make two assumptions, both wrong. First, that the Ice Ages prove the climate can’t have much of a positive feedback, as if swings of 5-10 C are not proof of positive feedbacks in themselves. Second, that there is no heat in the pipeline, equivalent to the ocean having no heat capacity, also wrong. Their two parameters come out of these assumptions and make for a clown-car trip into a land of their own beliefs. Worse is that people who call themselves skeptics are falling for their arguments.

      • JimD, I can't believe that we are reading the same paper. Please show where in the Monckton et al paper these two assumptions were explicitly stated. I read through it several times looking for where the two assumptions had been made.

      • Peter Davies asks: “Please show where in the Monckton et al paper that these two assumptions were explicitly stated.”
        The problem I had with the paper is that they don’t seem to explicitly state their assumptions or their method. So it is hardly surprising that people come to different conclusions as to what they were doing. It did seem to me that they were implicitly, but not explicitly, assuming no ocean heat capacity and/or ECS = TCR. But then I quickly came to the conclusion that the paper was not worth deciphering.

      • Peter Davies, the Ice Ages have thermal swings of 5-10 C, which Monckton calls "near thermostasis". These types of variations only come from positive feedbacks, such as ice sheets melting for example, but he limits positive feedbacks to his gt < +0.1 on the basis of the Ice Ages. If he calls 5-10 C "near thermostasis", why can't such "thermostasis" also happen in the future? He defeats his own purpose by using the Ice Ages as an argument against positive feedback. The other assumption, that rt=1, meaning TCR=ECS, is more subtle and not explicitly stated as a lack of ocean heat capacity. But by setting these equal, he is saying the climate responds immediately and fully to any forcing change, which only happens with no thermal capacity. This is another nutty idea counter to everything known about the earth's heat capacity, especially the delaying effect of the oceans, which is why everyone else has TCR < ECS. That is everyone else, including all the skeptics. No one has made Monckton's odd claim before, except perhaps himself. So, yes, their gt and rt are their own invented values with no basis in anything but what they wish was true. It's their own little world.

      • “It did seem to me that they were implicitly, but not explicitly, assuming no ocean heat capacity and/or ECS = TCR”

        I must say I don't understand this at all. Do you think any of the top 50 meters of the world's oceans do not have an annual max-min greater than the average warming that you expect from back radiation, which must heat the top few micrometers?

      • DocMartyn, the warming penetrates kilometers, as measured. How long does it take to warm that much ocean? This is not an immediate process because the ocean is continually recirculating different water to the surface. The heat capacity is the reason the oceans don’t warm as fast as the land surface, whether you are talking about seasonal changes or climate changes. There is an ocean lag.

    • Particularly albedo: some feedbacks decrease as temperatures increase or are high over time. Initial conditions…

  7. Matthew R Marler

    Good essay. Thank you.

    None of the paper’s long ‘step 3’ ‘closed loop gain’ / Bode f discussion relates directly to the ‘irreducibly simple equation’ itself, or to its evaluation. It muddles the rest of the paper and obscures the equation’s utility, in my opinion.

    I agree. At first reading I thought the main message there was that any positive feedback could not be very large. On rereading, I saw that they meant something more specific and you were one of the people who pointed that out clearly.

  8. “To a first order very rough observational approximation, cloud fc ~0. AR5 WG1 7.2 says it is uncertain, but certainly positive. (Phrased that crisply, the IPCC non sequitur is obvious). ”

    It wasn’t clear to me where that first sentence came from. Are you saying only that AR5 disagrees with Dessler–or with what you think Dessler’s reasoning implied even if he didn’t say it?

    In other words, that confused me. Can you explain further?

    • he means the latter of your two guesses.

    • Joe, Dessler's paper claimed to find statistically significant positive cloud feedback. No doubt, because NASA touts it. In fact, the data is a perfect shotgun pattern with an R^2 of 0.02! He found nothing.
      The essay Cloudy Clouds goes through this and a lot of other cloud evidence as well. The AR5 SOD (not the final) literally said there must be positive cloud feedback because that is what the GCM models have, through as yet unknown mechanisms. Umm, maybe not.

  9. Aren’t GCM’s also simple models with their simplifying assumptions masked by glitz?

    • This outsider’s impression is that GCMs are compiled from a sizable number of simple (if not irreducibly so) sub-models with computations compounded until they are maximally complex.

    • Any complicated model with a smooth result can be linearised with a simple model (the von Neumann five-parameter quote).

  10. Relying on GCM's without recourse to "simple models" is like relying on your car's navigation system without being able to read a map.

  11. It appears that fw and fc depend on geographical location. How do you then determine the ECS? Also, are there any ongoing efforts to determine, through observations, actual feedback parameters?

    Thank you,

    Richard

    • Two points. Yes they do depend regionally, and the tropics are the most important. But just like there is an ‘average’ GMST anomaly, the notion is that there is some global average for each feedback.
      There are serious efforts to measure both. Water vapor (specific humidity) is measured by radiosondes (corrected for various instrument dry biases), by satellite microwave sensors, and most recently by GPS 'diffraction'. All three methods show upper troposphere relative humidity declining when models have it roughly constant. See essay Humidity is still Wet. Clouds have been studied by ICOADS, and more recently by ISCCP. Discussed in essay Cloudy Clouds.

      • Rob Starkey

        Rud
        Isn't the residence time of CO2 at least as important as temperature sensitivity when determining whether the output of GCMs should lead one to believe that there is a concern requiring immediate action?

        Judith—there is a potential thread topic. What do people believe it is and why?

      • Thank you, Rud, for that answer.

  12. Rud, nice explanation, however;

    If I understand it correctly, none of the authors is an electrical engineer or an optical engineer.

    The whole concept of applying electrical engineering theories of properly introducing feedback into a system to explain the climate is frankly, silly.

    Nobody working in climate science really understands the whole concept behind feedback in electrical systems. Using Bode’s name as some sort of “stamp of approval” does not help the case.

    In all electrical circuits using feedback there is some gain involved: voltage, current or power, but never energy gain (a clear violation of LOT). In the absence of feedback this gain will normally tend to the creation of oscillations. An old electrical engineer's joke states that if you wish to design an oscillator you should begin by designing an amplifier. Likewise, if you want to end up with a stable amplifier you should begin by designing an oscillator. The difference is often as simple as the value of a single electrical component like a capacitor.

    The climate system has no physical properties that correspond to voltage, current or power. There is only thermal energy and EMR radiation which alternate back and forth as energy gets absorbed and emitted. There is no mechanism whereby thermal energy or IR radiation is amplified. Without amplification the invocation of “feedback” as a mechanism that determines the result is incorrect.

    The climate is simply the electrical equivalent of a big jumbled pile of passive components like resistors, capacitors and inductors interconnected in a bewildering concoction of parallel and series circuits. There are also what an electrical engineer would consider “converters”, for example the ocean. The ocean absorbs SW radiation (destroying/consuming the radiation as it does so) and warms, thus converting the SW radiation into thermal energy. Then, in parallel, the ocean consumes thermal energy and emits LW radiation.

    Again; there is no gain involved.

    Applying any of the classic electrical engineering theories regarding feedback to the climate is incorrect, sorry. The consensus climate scientists that have done it were incorrect and the recent paper you reference in this post is just as incorrect. Without a gain mechanism feedback cannot be invoked, and the fact that some gases in the atmosphere act as converters that absorb/emit IR radiation does not change this simple fact.

    Oh, and regarding flak as an indication that you are “over the target”: yes, bomber pilots correctly observed that, as did fighter pilots. My father was a P-51 pilot who flew combat missions over Europe escorting the bomber pilots (“The Mighty 8th” USAAF, go team…). Yes, flak is one indicator that you “might” be over the target, but the Germans were known to throw up “fake flak” a few miles away from the actual target to try to confuse the bombardiers who were aiming the bombs. This was somewhat effective in overcast weather and before radar-controlled bombing became common. The Germans actually built a “fake” replica of Hamburg (I believe) to confuse the USAAF and RAF crews, and covered over one section of a very uniquely shaped downtown lake to “hide” it. And the English made replica cities out in nearby farm fields, lit by purposely set fires, to confuse the German bombers.

    Real Flak, Fake Flak, Real Cities, Fake Cities, all is fair in romance and war.

    Not as an “argument from authority,” but I have graduate degrees and decades of professional experience in both electrical engineering and optical engineering (aka applied radiation physics). And I am prepared to defend my understanding of this “feedback” and “forcing” issue.

    In summary, the consensus climate science community is wrong, and the MSLB paper is simply less wrong, but still incorrect.

    Cheers, KevinK.

    • Roger C. Lowry

      Nice one, Kevin,
      Couldn’t have said it better myself.
      These people have no idea about NFB and how useful it is in a system with high but ill-defined open-loop gain…….
      I seem to recall, however, that MSLB said that the basic Ac = Ao / [1 + AoB] is totally out of place in the area of terrestrial thermostatics.
      Regards, Rog.
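
      To see why NFB is prized when the open-loop gain is high but ill-defined, here is a minimal R sketch of that Ac = Ao / [1 + AoB] formula (all values arbitrary, purely illustrative):

        # Closed-loop gain for negative feedback: Ac = Ao / (1 + Ao*B).
        # With large open-loop gain Ao, Ac converges to ~1/B regardless of Ao.
        closed_loop <- function(Ao, B) Ao / (1 + Ao * B)
        B  <- 0.01                  # feedback fraction (arbitrary)
        Ao <- c(1e4, 5e4, 2e5)      # open-loop gain varying by a factor of 20
        closed_loop(Ao, B)          # 99.01, 99.80, 99.95 -- pinned near 1/B = 100

      A twenty-fold spread in the ill-defined open-loop gain moves the closed-loop gain by under one percent; the climate argument is over whether the analogous loop is negative like this, or positive.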

    • Matthew R Marler

      Kevin K: The whole concept of applying electrical engineering theories of properly introducing feedback into a system to explain the climate is frankly, silly.

      That may be so, but Monckton et al comment on a particular application, they did not introduce it into climate science.

  13. simple feedback, be it positive or negative, cannot explain why the earth appears to have a very stable minimum average global temp around 10C and a very stable maximum average global temp around 22C, and tends to be relatively unstable between the upper and lower limit.

    The earth average temperature looks more like the walk of a drunk down a hallway. Sometimes the drunk moves down the center of the hallway, but quite often the drunk leans on one wall or the other, and lurches between them quite unpredictably. All the while moving down the hallway.

    What we are trying to model is the very short term path of the drunk. The path appears predictable because it has inertia, and it is bounded, but not even the drunk can predict what will happen next. Yes, you can add forcings, maybe the hallway curves to the right, but does that mean the drunk will always end up leaning on the left wall?

    • maksimovich

      Historical behavior does occur for random walks in random environments (Sinai 1982); the constraint (a proposition by Ruelle 2000) is that a system may exhibit historical behavior instead of recurrence, a limitation envisioned by Lorenz’s notion of almost-intransitive systems.

  14. no physical properties that correspond to voltage, current or power.
    ================
    power is the sun
    voltage is the temp difference between the equator and the poles.
    current is flow of heat from the equator to the poles.

    • The Sun is ENERGY, not power

      And yes, voltage is a potential difference akin to voltage, but not voltage

      Current and heat flow are not the same thing.

      There is no gain in the system, so “feedback” theory as originally derived and applied by electrical engineers does not apply to the climate.

      • whoops, that should be;

        “temperature is a potential difference akin to voltage, but not voltage.”

        Sorry for the “quick fingered” typing.

      • Planning Engineer

        Thinking more on the bigger argument, but will disagree with the description of the sun as energy but not power. Power is the rate of energy transfer over time, and the sun certainly can be called a power source.

      • Matthew R Marler

        Kevin K: The Sun is ENERGY, not power

        The measurement unit used is the watt, and at the Earth surface watts/m^2, because the sun is power.

      • Alan McIntire

        The sun may be a source of energy, but what earth GETS from the sun is
        ergs per square meter per second: that’s power (per unit area). Incidentally, that power has been increasing from about 70% of current levels 4.6 billion years ago to current levels, and will reach about double current levels 5 billion years from now.

      • Vaughan Pratt

        The Sun is ENERGY, not power

        Earth receives about 1.36 kW of insolation on each square meter of Earth’s cross-section. Are you saying that kilowatts is not power but energy?

        Current and heat flow are not the same thing.

        Of course, but your original statement was that they don’t even correspond, which as PE and MM point out is false. For starters consider thermal conduction, for which “Ohm’s law is the electrical analogue of Fourier’s law” (end of 2nd paragraph). Voltage, current, and electrical resistance correspond to respectively temperature, flux, and thermal resistance. For both electricity and thermal conduction, conductance is the reciprocal of resistance.

        There is a further correspondence, namely between capacitance and heat capacity (C is standardly used for both), and between charge and heat content (Q is standardly used for both). The counterpart of the electrical law C = Q/V (shorthand for C = Q/ΔV) is the heat content law C = Q/ΔT.

        Yet another correspondence, up to sign, is between Shannon information and thermodynamic entropy S. In this correspondence noise corresponds to temperature T. The thermodynamic principle dQ = TdS admits the information-theoretic interpretation that the additional energy dQ required to convey additional information dS in the presence of noise T is proportional to both the information and the noise. Dividing both sides by dt converts the energy differential dQ to power dQ/dt and the entropy/information differential dS to dS/dt, baud rate in symbols/second (for a suitable alphabet, e.g. {0,1}).

        One benefit of these correspondences is that they make it easier to explain the relevant parts (though obviously not all) of thermodynamics to electrical engineers. Another benefit is that some heat flux and storage processes can be diagrammed using familiar electrical circuits involving resistors and capacitors, with the corresponding laws remaining applicable. Heat flux from the Sun is modeled as a constant current source rather than as a constant voltage source. Various bits of EE software can be easily adapted to model thermal processes. Signal processing theory including the notions of impulse response, transfer function, Fourier transform, etc. carry over directly to thermal systems that filter heat flux such as the upward flow of both heat and magma through the Earth’s crust, particularly well suited to modeling the ocean floor in the neighborhood of mid-ocean ridges.
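
        As a concrete (and deliberately minimal) R sketch of that correspondence, a lumped thermal mass driven by a constant heat flux behaves exactly like an RC circuit driven by a constant current source; the values below are arbitrary illustration, not climate numbers:

          # Thermal RC analogue: flux P ~ current source, thermal resistance R ~ resistor,
          # heat capacity C ~ capacitor, temperature above ambient ~ voltage.
          thermal_rc <- function(P = 1, R = 2, C = 5, dt = 0.01, t_end = 30) {
            t <- seq(0, t_end, by = dt)
            temp <- numeric(length(t))   # temperature above ambient, starts at 0
            for (i in 2:length(t)) {
              # d(temp)/dt = (P - temp/R) / C, the twin of dV/dt = (I - V/R) / C
              temp[i] <- temp[i - 1] + dt * (P - temp[i - 1] / R) / C
            }
            data.frame(t = t, temp = temp)
          }
          tail(thermal_rc(), 1)   # approaches P * R = 2 with time constant R * C = 10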

        As for amplifiers, a small increase of CO2 as a heat-trapping agent results in a large increase in water vapor, another heat-trapping agent. That is an amplifier, powered by insolation. Granted there is no cathode-grid-anode or base-emitter-collector structure, and the slew rate is measured in years rather than nanoseconds, but that doesn’t make it any less of an amplifier.

        @KDG: BTW, are greenhouses (the ones you put plants in) amplifiers or not?

        Interesting question, but certainly not via the mechanism by which water vapor amplifies the greenhouse effect. Ordinary window glass of thickness 1 cm has about the same optical depth (defined as log(I/O) where I is the incident or input intensity and O the output intensity) at far infrared wavelengths as 1000 ppmv of CO2, so today’s 400 ppmv of CO2 corresponds to 4 mm of glass. However even if the added warmth of a greenhouse (however caused) raised the humidity, even a 20-foot layer of water vapor at 100% relative humidity would have an optical depth of essentially zero (log(I/O) = 0 when O = I) and therefore have no discernible heat-trapping capability. A layer the thickness of the atmosphere is a very different matter.
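
        The optical-depth bookkeeping above fits in a few lines of R (the glass/CO2 equivalence is the figure asserted above, taken at face value):

          # Optical depth as defined above: tau = log(I/O); transmitted fraction = exp(-tau).
          optical_depth <- function(I, O) log(I / O)
          transmitted   <- function(tau) exp(-tau)
          # If 1 cm of glass ~ 1000 ppmv CO2 and optical depth scales linearly
          # with absorber amount, then 400 ppmv ~ 400/1000 * 10 mm = 4 mm of glass.
          400 / 1000 * 10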

        (The reason I went into physics instead of EE as an undergraduate is that I’d spent my high school years building radios, oscilloscopes, transmitters, etc., worked as a TV repairman between high school and university, and got my ham radio license at the earliest age allowed in Australia. I decided I’d like to learn something new like quantum mechanics, relativity, plasma physics, particle physics, optics, thermodynamics, etc. Ironically the first decade of my teaching career was in MIT’s EE department.)

  15. Reblogged this on JunkScience.com.

  16. JC SNIP. off topic for this thread; fine for open thread

  17. Bevan Dockery

    JC SNIP off topic for this thread

    • Bevan Dockery

      Gymnosperm, the linear regression analysis shows that there is no significant correlation between changes in atmospheric CO2 concentration and changes in satellite lower tropospheric temperature, so CO2 changes clearly do not cause temperature changes. They are independent variables.

      However the significant correlation between temperature and rate of change of CO2 concentration indicates that there could be a dependent relationship by some as yet unknown physical cause. Note that the correlation coefficients are least in the Arctic – Alert station, and in the Antarctic – South Pole station, which fits with temperature driving the rate of change of CO2.
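
      The two regressions being described take only a few lines of R; this is a template on synthetic placeholder series, not the actual station data:

        # Template for the two regressions described above (synthetic data only).
        # Replace temp and co2 with the monthly station series to reproduce the analysis.
        set.seed(1)
        n    <- 360                           # e.g. 30 years of monthly values
        temp <- arima.sim(list(ar = 0.8), n)  # placeholder temperature anomalies
        co2  <- cumsum(rnorm(n, mean = 0.1))  # placeholder CO2 concentrations
        summary(lm(diff(temp) ~ diff(co2)))   # changes vs. changes: the claimed null result
        summary(lm(diff(co2) ~ temp[-n]))     # rate of change of CO2 vs. temperature level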

      • I totally agree. My own suspicion is that the unknown “physical cause” is biological activity in the oceans, which has likely caused isotopic excursions that dwarf human efforts in the geological past. This activity is supported by the recent CO2 measurements by NASA and is consistent with the isotopic profile of the current atmospheric carbon increase.

  18. Science does not pit models against models. (Not like a Tinker Toy model for an atom, but a model in prose and equations for gravity, or an epidemic, or especially climate.) Nor is the test whether a model with an uncountable number of parameters can be tuned to selected data, especially where the data themselves often prove to have been tuned by postmodern scientists to the model (an ethical problem). Nor is it expert opinion. Nor voting.

    Academics several decades back bought into Post Modern Science (PMS) where the decisive tests for a proposition being scientifically accepted are all subjective: peer-review, publication in a conforming journal, and an established communal consensus. To be sure, PMS has a couple of other tenets, but in the end they are both wrong and non-determinative.

    Industrial science outstrips academic science by a good order of magnitude. But only when forced by government regulations, as in pharmaceuticals and courtroom trials, does it rely on peer-review, publication, or consensus. To the contrary, industrial science relies on secrecy and material success. Science is a mapping of facts onto facts, from data base to validated predictions. It is the objective branch of knowledge with zero tolerance for the subjective.

    THE scientific test is the predictive power of a model. Period. That’s an if and only if criterion. AGW has failed — not in comparison to some other model. And not in the ultimate catastrophe it predicts with the GCMs (for the also scientifically illiterate policy makers), and which no one alive today will live to observe, but in the subtle Climate Sensitivity. The IPCC reports set lower bounds for the ECS version, and the immediate data lie below that ultimate figure.

    To the postmodern scientist this is patently unfair. After all, weren’t the models peer-reviewed, published appropriately, and supported by a consensus of like-minded individuals? It’s not fair holding them to a traditional scientific criterion. What has happened is that reality set in. Even the scientifically challenged sensed something was wrong.

    The test for the GCM validity was set by the IPCC: were they enough to convince Policymakers to shut down Western technology and pass the key to the treasuries to the modelers?

    The scientific question to be asked is not how the GCMs might be redesigned from the ground up, but specifically why did AGW fail? The question has a half to a whole dozen answers, but the easiest one for policymakers is that the GCMs omit cloud feedback, the most powerful feedback in all of climate, and which is both positive and negative.

    • The 3rd para is not clear, Jeff. The practice of industrial science implies that commercial considerations and protection of IP would require that the peer review process and publication of code and data be enforced by Govt regulation and the courts.

      Academic science of a post normal kind as practiced by some of the earlier climate scientists also failed to provide data and code, but there was no enforcement by Govt regulation and the courts.

      The foregoing implies that there was little difference between industrial science and the early practice of climate science.

      • In industry, commercial considerations and the protection of Intellectual Property are assured by trade secrets, first and foremost. Industry seeks patent protection only for mature IP, typically after it is exposed by contracts won or products on the market.

        Academic science today has been reduced to publish or perish. In industry, if you publish you will perish. The physical sciences in academia are postnormal, possibly without exception. It is less frequent in medicine, and rare in engineering schools.

        In industry, peer review is limited to fellow employees with a strict need-to-know. It’s called the design review process. Industry guards against publication. In my experience, corporations will have a board to clear papers for publication, assuring protection of trade secrets. New employees will sign an agreement not to reveal what they learn on the job.

        These considerations point out the huge gulf between academic science and industrial science.

    • I wouldn’t be so harsh myself.

      The test of a model is utility, not necessarily predictive power. And GCMs have utility, just not in decadal projections of global temperatures.

      Unfortunately that is what they have become poster children for.

      On the other hand, having just written that, perhaps they do have utility there :)

      • What on earth, so to speak, is the value to industry of a model that can’t make reliable predictions?

        GCMs have utility — political utility.

        JC SNIP: off topic for this thread

      • PS with apologies. Please imagine that the bold ended after human-induced and before climate change.

      • In the sense of “predictions” being “out of domain propositions”, a lot of models with no utility “out of domain” still have utility for investigating “in domain” phenomena. For a most trivial example, I went to my grandson’s 5th birthday party yesterday and all his presents were models being happily used in domain, but obviously useless outside of it. Likewise GCMs have utility in domain; it’s when you take them out of domain, e.g. to project future states, that they fall over.

    • Matthew R Marler

      Jeff Glassman: Science does not pit models against models.

      Where did you get that?

      • “A beautiful theory slain by an ugly fact.” Being prettier than another model is not the test.

    • Great insights about post modernism and post modern science, Jeff.

    • Thankyewverrahmuch. Quoted, cited, excerpted, referenced..

  19. JC SNIP off topic for this thread

  20. The pause after 1998 was not the only pause. What about the pause (actually a slight decrease) from 1941 to 1976? How to explain that? Note that the periods 1910 to 1941 and 1976 to 1998 were El Nino dominant, and the pauses were either balanced or slightly pro-La Nina.

  21. Has anyone ever dissected CMIP5 code?

    • Well, that is a multipart question. CMIP5 is an archive of many models and their variants. There is no single code to examine. I read, and have dissected in my second book, NCAR CAM3, whose several-hundred-page documentation is free online. Doc. chapters 4 and 7 go into the absolutely necessary tuned ‘parameterization’.
      BTW, some of this stuff is still originally coded in Fortran, which I used as an undergrad decades ago. NOT confidence inspiring.

      • Steven Mosher

        Actually it IS confidence inspiring.

        http://arstechnica.com/science/2014/05/scientific-computings-future-can-any-coding-language-top-a-1950s-behemoth/

        “Wherever you see giant simulations of the type that run for days on the world’s most massive supercomputers, you are likely to see Fortran code. Some examples are atmospheric modeling and weather prediction carried out by the National Center for Atmospheric Research; classified nuclear weapons and laser fusion codes at Los Alamos and Lawrence Livermore National Labs; NASA models of global climate change; and an international consortium of Quantum Chromodynamics researchers, calculating the behavior of quarks, the constituents of protons and neutrons. These projects are just a few random examples from a large computational universe, but all use some version of Fortran as the main language.”

      • Coding in modern Fortran can be both effective and a pleasure.

      • Thanks for the reply.

        I suppose my more direct question would relate to the models in general. There seems to be an inherent warming bias in virtually all of them. Why?

        Where is the shared coding that produces this warming bias across the board, and more importantly, why has it not been corrected to reflect the current reality of observation?

        Is it true that these models just get additional layers of code placed on top in perpetuity?

        Perhaps nobody has a reader for the original Fortran cards any longer?

        I just find it puzzling that there is such a divergence between model output and reality. Frankly, that is not acceptable for much of anything, let alone global energy policy.

        Perhaps I will try a “model” approach with my taxes next month now that I have a precedent to reference.

        Regards Ed

      • Perhaps nobody has a reader for the original Fortran cards any longer?
        Not needed since about 1985; I witnessed the last reader standing idle with the advent of magnetic tape and the floppy disk.

  22. Planning Engineer

    Thanks Rud. I’m halfway through Blowing Smoke. It’s helping me with some knowledge gaps.

  23. I don’t really have any special training in this field. But I’d like to try and understand the nature of the Monckton et al paper. Here is how I understand it:

    Instead of chasing a wild goose and trying to predict its path through the woods, the Monckton paper ignores the known unknowns and focuses on what is known, that being the limits of the goose, such as its size, speed, lifespan etc.

    So, is that a fair analogy?

    This is my understanding of chaotic systems: once they form, their long-term behaviour cannot be predicted, because cause and effect merge together and cannot be separated. If that is right, then CO2 and temperature are like Yin and Yang, and we cannot say that one causes the other.

    If that is right, then chaotic systems behave as if they have existed since an infinite past. That is unlike linear systems which have an obvious start point, such as a motor race. With a linear system, you certainly can determine cause and effect.

  24. To my way of thinking, Monckton et al.’s central argument is based on their “transience fraction” r_t. They say that at a given time after application of the stimulus the output will reach a ratio r_t of its equilibrium value, where r_t is a function of their feedback f and the time t since the stimulus was applied: the greater the f and the lower the t, the lower the r_t.

    They argue that the r_t(f, t) implied by the AR5 equilibrium-climate-sensitivity range differs from the function that Monckton (somewhat opaquely) inferred from other papers. Mr. Istvan somewhat more clearly inferred a (single-t) r_t(f, t) function from the IPCC-asserted ratios of transient to equilibrium climate sensitivities.

    The problem is that Monckton et al. seem to have obtained their r_t(f, t) function from a step-function response depicted in the Gerald Roe paper, whereas Mr. Istvan has obtained his (single-t) function from something like a response to a ramp stimulus (implied by the definition of transient sensitivity). It’s not clear that anyone has recognized the distinction.
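
    The distinction is quantitative, not just verbal. For a first-order system with time constant \tau, the fraction of the equilibrium response realized at time t is 1 - e^{-t/\tau} after a step in forcing, but 1 - (\tau/t)(1 - e^{-t/\tau}) under a steadily ramping forcing. A quick R comparison (an illustration of the general point, not Monckton et al.’s calculation):

      # Realized fraction of equilibrium response for a first-order system:
      step_frac <- function(t, tau) 1 - exp(-t / tau)                    # step forcing
      ramp_frac <- function(t, tau) 1 - (tau / t) * (1 - exp(-t / tau))  # ramp forcing
      tau <- 25
      t   <- c(25, 50, 100)
      rbind(step = step_frac(t, tau), ramp = ramp_frac(t, tau))
      # step: 0.632 0.865 0.982
      # ramp: 0.368 0.568 0.755  -- the ramp-derived fraction settles far more slowly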

    Given that r_t is so central, I think its derivation should be discussed more deeply and clearly, both by Mr. Istvan and by Monckton et al.

    • Joe, that is one of the two main points on which I originally disagreed with Lord Monckton when this paper originally appeared. See the original WUWT post. I gave mine, having found theirs less than convincing. Plus gave the Lindzen caveat.

      • You are correct in implying that I had not reviewed all the comments there in detail. I hope to address that omission in due course–but not tonight.

  25. Mike, just think of it as a change in albedo of the object when the object includes the atmosphere around it. Does that make you see it?

    • Yes Mike, and a decrease in albedo (which is effectively what happens) causes an increase in temperature. CO2 is black paint for infrared.

    • Or deep red paint if you wish. Earth and its atmosphere are a big ball. CO2 changes the color and therefore the amount of energy absorbed. It’s really not nearly as complicated as people make it out to be. The details are complicated, but not the big picture.

    • John Smith (it's my real name)

      and 500 ppm would alter the reflective quality of the paint to a measurable degree?
      the observations of the last 18 years seem to raise at least a slight objection

  26. UPDATE: Christopher Monckton has responded
    https://curryja.files.wordpress.com/2015/03/monckton.pdf

    • Matthew R Marler

      From Christopher Monckton’s response: Curiously, a climate modeller at NASA GISS made a similar mistake to Dr Istvan, even going so far as to say the Bode equation was not used in the climate models at all. I referred him to not one but two papers by James Hansen, the creator of the GISS model, each of which discussed the applicability of the Bode equation. One paper even derived it from first principles not inelegantly–but without taking into account the constraints on its applicability that I have set forth here.

      Personally, I find it always worthwhile to read his rebuttals.

      • The equation is used as a diagnostic to understand the output, not as a basic equation. Monckton has this wrong. Likewise, the feedback is an output, not an input.

      • If it’s a feedback, it has to feed BACK into the input. Otherwise, it’s just a simple output.

      • Feedback is both!

      • I mean that a term like Monckton’s ‘f’ (or g) is not an input to a GCM. However, by looking at the forcing which is input and the global temperature response which is output, you can calculate an effective ‘f’.
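
        In MSLB’s own notation that back-calculation is a one-liner: with λ∞ = ΔT/ΔF realized by the model, the paper’s closed-loop relation λ∞ = λ0/(1 − λ0·f) inverts to f = 1/λ0 − 1/λ∞. A minimal R sketch (λ0 = 0.3125 and k = 5.35 are the values used in the routine posted further down this thread; the 3.2 K response is just an illustrative GCM output):

          # Effective feedback sum f diagnosed from forcing in and temperature out,
          # inverting MSLB's closed-loop relation lambda_inf = lambda_0 / (1 - lambda_0 * f).
          effective_f <- function(dF, dT, lambda_0 = 0.3125) {
            lambda_inf <- dT / dF            # realized sensitivity parameter, K per W/m^2
            1 / lambda_0 - 1 / lambda_inf    # inversion of the closed-loop relation
          }
          effective_f(dF = 5.35 * log(2), dT = 3.2)   # ECS of 3.2 K implies f ~ 2.04

        In practice the diagnosis must also allow for the transience fraction r_t, since a GCM’s output at any finite time is still short of equilibrium.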

      • Matthew R Marler

        Jim D:However, by looking at the forcing which is input and the global temperature response which is output, you can calculate an effective ‘f’.

        What would the feedback parameter have to be to force a 4C increase in temperature by 2100?

      • Insufficient information. How much CO2 has been added by then?

      • Matthew R Marler

        Jim D: Insufficient information. How much CO2 has been added by then?

        Use a figure that you think is highly likely. Or go with 600ppm at 2100.

    • Thanks for the heads up on Monckton’s response to Rud’s critique. The original paper, Rud’s critique and Monckton’s response have been saved to my hard drive for further reading and study. I haven’t found any other comments to date to be helpful. The use of the Bode equation in the GCMs has been suggested as a reason for their running hot, and this suggestion is worthy of further consideration by all commenters.

    • “Interestingly, though, because the climate behaves as a chaotic object a simple model is not inherently less likely to be able to reach a respectable projection of climate sensitivity than a far more complex model.”

      Yup. Trenberth’s quite correct criticism of this simple model also applies to the GCMs he bases his own prophecies of doom on.

      “The exasperating unpredictability of objects that behave chaotically is now a major focus of mathematical enquiry, though it is too little considered in climatological physics.”

      “But the chaoticity of the climate tends powerfully to level the playing-field as between a very simple model such as ours and the vastly more complex general-circulation models that are the cause of the current panic pandemic.”

      Thus the failing paradigm. But the fact that simplistic models aren’t any worse than slightly less simplistic GCMs doesn’t make either of them right.

      “The striking singularity as the loop gain of unity approaches is simply not consistent with how the climate has behaved over the past 810,000 years.”

      What about the past 810,000,000 years?

    • Feedback, by definition, is both.

    • Matthew R Marler

      Jim D: The equation is used as a diagnostic to understand the output, not as a basic equation.

      Either it is applicable, accurate, and informative or it is not. Monckton et al show that if it is, then the corresponding positive feedback must be much lower than previously claimed. Istvan showed that the model and data do not preclude a higher value than that used by Monckton et al, but he did not support a value sufficient to produce the kind of temperature response that Hansen and others have warned of.

  27. Thanks for this post, Professor Curry, and for having the courage to post it.

    The Bulletin of the Chinese Academy of Sciences is superior to current issues of Nature & Science

  28. I heartily agree with Mr. Istvan regarding Monckton et al.’s bland assertion of an “unsupported ‘Process engineer’s design limit ≤+0.1.’” To a point, I also agree with his suggested simplification of the feedback theory. But that discussion is too elliptical in my view. More specifically, it, like Monckton et al.’s, ignores the point at which delays enter the analysis.

    In the Monckton et al. paper a so-called forcing \Delta F results from, e.g., increasing atmospheric carbon-dioxide concentration. An increment in carbon-dioxide concentration so increases optical depth as to raise the effective altitude at which the earth radiates. Lapse rate being what it is, that reduces outgoing radiation by some value \Delta F of power per unit area. In a (too-simple but illustrative) lumped-parameter model, the resultant radiation imbalance would cause the temperature T to increase in accordance with

    c_p\frac{dT}{dt}=\Delta F-a_0T,

    where c_p is an effective heat capacity per unit area and a_0 is a coefficient relating a body’s temperature to its radiant power output per unit area.

    That equation tells us that in the absence of other effects such as lapse-rate feedback, clouds, etc., the temperature change \Delta T would in the fullness of time settle on the value \Delta F/a_0. In some circles, the form of that equation would cause a_0 to be looked upon as a feedback coefficient. But Monckton et al. eschewed that interpretation. They called this CO2-alone ratio of temperature change the “Planck climate-sensitivity parameter” \lambda_0, a sort of open-loop gain: \lambda_0 = 1/a_0.

    It is only other feedbacks that Monckton et al. do call feedbacks. In the (again, too-simple) lumped-parameter model represented by the differential equation above, effects such as the reduction in lapse rate caused by increased evaporation, albedo changes resulting from more cloud formation, etc. would add further feedbacks a_n to the equation so that a_0 would be replaced by a_{tot} = \sum\limits_{N}a_n. This means that Monckton et al.’s f = 1/\lambda_0 - a_{tot}, i.e., f = a_0 - a_{tot}, as spelled out just below.
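
    Spelling that out, with \lambda_\infty = 1/a_{tot} and the paper’s closed-loop relation \lambda_\infty = \lambda_0/(1-\lambda_0 f) (the same relation coded as g = L_0*f, G = 1/(1-g), L_inf = L_0*G in the routine posted further down this thread):

    \frac{1}{a_{tot}} = \frac{\lambda_0}{1-\lambda_0 f} \;\Longrightarrow\; 1-\lambda_0 f = \lambda_0 a_{tot} \;\Longrightarrow\; f = \frac{1}{\lambda_0} - a_{tot} = a_0 - a_{tot},

    so positive feedbacks, which reduce a_{tot} below a_0, give f > 0.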

    Now, the above (first-order) equation tacitly implies that the entire heated body undergoes the same temperature change in unison. If the world did work that way, a slower initial temperature response and a faster settling would be observed than is exhibited by the more-complicated IPCC-backed models, at least as Monckton et al.’s Table 2 represents them.

    For example, an f value that results from such a (first-order) lumped-parameter model whose “transience fraction” r_t reaches 0.63 at 25 years (see Monckton et al.’s Table 2) would reach 0.86 at 50 years, whereas Table 2 shows significantly slower settling. The settling shown there is more like what one would observe if the radiating surface were (for the sake of simplicity) that of a semi-infinite slab through which heat diffuses.
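
    For reference, those numbers are just the first-order solution T(t) = (\Delta F/a_0)(1-e^{-t/\tau}) with \tau = c_p/a_0, checked in two lines of R:

      # First-order transience fraction r_t = 1 - exp(-t/tau)
      r_t <- function(t, tau) 1 - exp(-t / tau)
      tau <- 25 / -log(1 - 0.63)   # tau implied by r_t = 0.63 at t = 25 yr (~25.1 yr)
      r_t(50, tau)                 # ~0.863, the 0.86-at-50-years figure quoted above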

    Even if the (lumped-parameter) body were replaced by a (distributed-parameter) semi-infinite slab, though, the feedbacks still would be associated with the same delay. In the IPCC models, no doubt, some feedbacks would be associated with different delays, as they would be in real life. For example, the albedo change resulting from melting polar ice may have a relatively long delay that depends less on the global-average temperature than on, say, the length of time the temperature has been elevated and causing conversion of sensible heat to latent heat.

    In any event, Monckton et al.’s and Mr. Istvan’s discussions would both benefit from better explanations of what they mean by their various feedbacks and in particular of the fact that Monckton et al.’s \lambda_0 itself occurs only through feedback involving a (probably small in its case) delay, just as the other feedbacks do.

    Now, in a sense this is a side issue. My current impression is that Monckton et al. probably have a good point: there has been no good explanation for changing from (what I take it is) the previously adhered-to r_t function to the one implied by the equilibrium-climate-sensitivity range the IPCC now asserts. But that point has not yet been made well enough for me to have confidence in it. And this lack of precision in discussion of feedback further undermines my confidence.

  29. Monckton of Brenchley: “In particular, Bode mandates that at a closed-loop gain >1 feedbacks will act to reverse the system’s output. Thus, in a circuit, the output (the voltage) becomes negative at loop gains >1.”

    I’ve always wondered what Lord Monckton meant when he has said in similar contexts things like “the voltage transits from the positive to the negative rail.” Now I see that such statements were based on the above-quoted impression. That impression is inconsistent with what guys who know this stuff have told me. What they say is that loop gains that exceed unity are possible, but they do not prevail throughout the amplifier’s operating range; as the output increases beyond a certain point, the loop gain falls.

    I’m told that what this means as a practical matter depends on inevitable energy-storage elements. But I understand that it can mean that the output changes monotonically until it reaches one rail, at which it then remains: there is no “transit from the positive to the negative rail.”

    • “But I understand that it can mean” means it can mean anything.
      So this means it means nothing, which means everything.
      Can I join you on the loop of the rails? Would either of us gain anything?
      Not without transit anyway. Remember it might already have been on the negative rail, hence no translation needed.

    • In the probably vain hope that Lord Monckton will actually take constructive criticism in the spirit in which it is offered:

      Please consult the first block diagram here, which illustrates a Schmitt trigger. Such a circuit is typically considered a digital element, but its constituent elements illustrate use of a loop gain that exceeds unity.

      When the constituent amplifier’s input goes positive, the output goes more positive, and that output increase makes the input go more positive, etc., until the output hits the positive rail, where the amplifier holds it until the composite circuit receives an input negative enough to overcome the erstwhile-positive feedback.

      The point here is that it is erroneous to say, as you have, “Thus, in a circuit, the output (the voltage) becomes negative at loop gains >1.” The proper interpretation is that in the y = A(x + fy) equation from which one obtains y = Ax/(1 – Af) the A value will have decreased when the output hits the rail so that Af, although initially greater than unity, no longer exceeds it.

      This does not mean that there isn’t a good reason to question significantly positive net feedback. But please, please spare your more-knowledgeable well-wishers the fremdschaemen they’ve heretofore suffered whenever you discuss electronic circuits.
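
      A toy numerical version of that rail-latching behaviour makes the point (made-up gains and rail limits, a sketch of the mechanism only):

        # Positive feedback with loop gain Af = 2 > 1, clipped at supply rails.
        # The effective gain collapses once the output saturates, so the output
        # runs to one rail and holds there -- no transit to the opposite rail.
        latch <- function(x, A = 10, f = 0.2, rail = 1, steps = 50) {
          y <- 0
          for (i in seq_len(steps)) {
            y <- A * (x + f * y)            # iterate y = A(x + f*y)
            y <- max(min(y, rail), -rail)   # clip at the rails
          }
          y
        }
        latch(0.01)    # small positive input: output runs up and holds at +1
        latch(-0.01)   # small negative input: symmetric, latches at -1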

  30. Yes, the “GHGs” simply affect the response time of the climate. Much like a simple capacitor in an electrical circuit, they delay the time when an “equilibrium” temperature might be achieved. Since the climate is very much like a hybrid “AC/DC” circuit it is never ever in a real equilibrium state. It simply warms up a bit “faster” and then cools down a bit “faster”.

    I have attempted to explain this “response time” phenomenon on several web blogs and been called all kinds of nasty names, but the “climate” is very much like an “AC circuit”, we used to study those in “shop class” in high school, but then the “really smart folks” told us we were fools…

  31. Just in case it may be of help to anyone, here’s the routine I used to study Monckton et al.’s model:

    MoncktonModel = function(dF = 1, L_inf = 1, G = 1, g = 1, L_0 = 0.3125,
                             f = 0, CRat = 2, C_0 = 1, C_t = 1, q = 1, r = 1, k = 5.35){
      # A stab at coding the simple model described in Monckton et al.,
      # "Why models run hot: results from an irreducibly simple climate model"
      # dF: Monckton et al.'s delta F, i.e., forcing due only to increased CO2 concentration
      # L_inf: Monckton et al.'s lambda sub infinity
      # L_0: Monckton et al.'s lambda sub zero
      # G: Monckton et al.'s gain factor 1/(1 - g); g: their loop gain L_0 * f; f: their feedback sum
      # CRat: Ratio of Monckton et al.'s final CO2 concentration to their initial CO2 concentration
      # C_0: Monckton et al.'s initial CO2 concentration
      # C_t: Monckton et al.'s final (time t to infinity) CO2 concentration
      # q, r: Monckton et al.'s q_t and r_t (both default to 1 here)
      # k: CO2 forcing coefficient, in W/m^2 per natural log of the concentration ratio
      #
      # Caution: This routine permits the user to supply inconsistent argument sets,
      # and it assigns priority among inconsistent arguments somewhat arbitrarily.
      # A typical call supplies only the feedback argument f, possibly as a vector,
      # in response to which the routine returns, inter alia, the so-called
      # equilibrium climate sensitivity as dT.

      if(missing(dF)){
        if(missing(CRat) & !missing(C_0) & !missing(C_t)){
          CRat = C_t / C_0;
        }
        dF = k * log(CRat);
      }
      if(missing(L_inf)){
        if(missing(G)){
          if(missing(g)){
            if(missing(f)) return("f is missing");
            g = L_0 * f;
          }
          G = 1 / (1 - g);
        }
        L_inf = L_0 * G;
        return(list(dF = dF, dT = dF * r * L_inf / q, L_0 = L_0, L_inf = L_inf));
      }
      list(dF = dF, dT = dF * r * L_inf / q, L_inf = L_inf);
    }

    MoncktonModel(f = c(1.0, 1.5, 2.2))
    # $dF
    # [1] 3.708337
    #
    # $dT
    # [1] 1.685608 2.181375 3.708337
    #
    # $L_0
    # [1] 0.3125
    #
    # $L_inf
    # [1] 0.4545455 0.5882353 1.0000000

    I have not tested it extensively, so I would be grateful for any bug reports.

  32. Still can’t figure out why nobody wants to confront saturation. Certainly a poorly understood factor, but one with long legs. Saturation is approximately, but not convincingly, logarithmic.

    We all know the general shape of a logarithmic curve, yet we really don’t know where CO2 falls on that curve at current level. It makes all the difference…

    • Please read Sensitive Uncertainty in ebook Blowing Smoke.
      Saturation fails as an argument because the ‘top of the GHG fog’ has no limit. It can always go higher into the atmosphere, especially for CO2, since H2O falls off rapidly with the temperature-dependent lapse rate (Clausius–Clapeyron and all that not-quite-right stuff).

      • Does it matter how much of the “GHG fog” is above (the majority of) negative-feedback clouds?

      • In other words, when latent heat is released during cloud formation, is CO2 saturation above the cloud zone less important (compared to clear-sky situations) since the clouds block much of the CO2-enhanced longwave radiation from reaching the surface?

      • Opluso, sorry for the late reply. Day job interference. Cloud formation via WV condensation certainly releases the latent heat of evaporation into the atmosphere at whatever altitude the cloud forms. But usually, this latent heat is still in the ‘GHG IR fog’. The WV ‘fog clearing’ altitude is about 10 km in the tropics, but much lower in the colder, dryer high latitudes. The CO2 ‘fog clearing’ altitude is about 8 km everywhere, since it is a well-mixed gas whose concentration depends only on pressure, not also on temperature. Towering cumulonimbus reach to maybe 45,000 feet, so about 13 km. That is why tropical convection cells have such an enormous influence, which the GCMs cannot capture. See essay Models all the way Down. The ‘top of fog’ is only a layman’s metaphor for the zone in which the mean free path of an IR photon increases enough that its escape to space probability is significantly enhanced. See footnotes to essay Sensitive Uncertainty. Regards. Very good questions, ‘simple’ answers.
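
        The ‘top of fog’ metaphor can be put in toy numbers: for a well-mixed absorber in an exponential atmosphere, the optical depth remaining above altitude z falls off as e^(-z/H), so the escape probability e^(-tau(z)) climbs steeply near the altitude where tau drops through about 1. A deliberately crude R illustration (the scale height and total optical depth are round made-up numbers, not HITRAN values):

          # Toy escape probability for upwelling IR emitted at altitude z (km):
          # column of a well-mixed absorber above z scales as exp(-z/H).
          escape_prob <- function(z_km, tau_surface = 5, H = 8) {
            tau_above <- tau_surface * exp(-z_km / H)   # optical depth remaining above z
            exp(-tau_above)                             # Beer-Lambert escape probability
          }
          round(escape_prob(c(0, 4, 8, 12, 16)), 3)
          # 0.007 0.048 0.159 0.328 0.508 -- escape turns on near the 'fog top'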

      • Rud, I disagree that significant amounts of human CO2 are above the HITRAN measurements that clearly show saturation.

      • CO2 plumes and holes seen by new satellites challenge the “well-mixed” assumption. There are lumps and dynamics to be found and accounted for and accommodated.

      • Rud, I’m researching a post on saturation. It strikes me that your position follows a realclimate post by Spencer Weart, in collaboration with Raymond T. Pierrehumbert: http://www.realclimate.org/index.php/archives/2007/06/a-saturated-gassy-argument/#sthash.UNT1FrdZ.dpuf.

        I completely disagree. Firstly, the massive HITRAN database was compiled by aircraft at the highest reliable lift altitude. These guys were all about ICBM trajectory and mutually assured destruction. They were not climate scientists. They were warriors. They didn’t even trust God; even He had to bring data.

        Secondly, if higher altitude CO2 were raising the net radiative altitude, some warming should register there. Instead, we see high altitude cooling.

        Thirdly, although water’s absorption is relatively weak in the 15 micron CO2 sweetspot, this weakness is tempered by the 90/10% concentration ratio. It doesn’t matter that water is concentrated lower in altitude if it catches all the upwelling IR. As well mixed as it is, CO2 is also concentrated low in altitude and it may be effectively sequestered statistically.

        Fourthly, IMO, about half of the greenhouse effect is from the “top down” near IR solar incoming spectra where water rules.

  33. “A planet is not a glass receptacle”.
    I see you are on the same page as Zeke.
    Nor is it a “a simple zero-dimensional model”
    You need the ” dynamics that a box or slab ocean model would provide, so immensely useful.”

    • A planet is, well…a planet.

      A few centuries of penetration, observation, exploration and contemplation might produce models that are merely inadequate…as opposed to risible.

      • mosomoso,

        Probably not.

        On a long term basis:

        they can’t model clouds with any accuracy;

        they can’t model ocean heat content with any accuracy;

        they can’t model el Ninos with any accuracy;

        they can’t model la Ninas with any accuracy;

        they can’t model water vapor with any accuracy;

        they can’t model solar with any accuracy;

        and most of all they are helpless in trying to model the interactions of all of the above.

        The most likely plain and simple fact is that the global climate is too complex for man to ever model on a long term basis, and certainly not with the tenths of a degree accuracy in temperature trends currently claimed.

        Climate models in their current iteration, like so much of ‘climate science’, are politics dressed up as science.

      • This idle hobbyism is no science. It just costs like a science.

    • KevinK,

      Probably a silly question, but would it be worth building a climate model with analogue computers? :-/

      • Heh, why we attempt understanding with these pitiful digital simulacrums of the great analog computer that is the earth.
        ====================

      • Oh, well, threading adds crypticism to my remarks. Silver lining.
        ============

      • Adding crypticism to kim: what salesmen do when they retire from selling snow to the Eskimos?

  34. Why models run hot. Because they input parameters to give a heightened climate sensitivity.
    Thank you Rud,
    Hopefully one or two people at a time will see the sense in these arguments.

    • Steven Mosher

      1. Not all models run hot
      2. some models run hotter than others.
      3. Diagnosing this is a MODEL by MODEL affair.

      Models can run hot for the following reasons.

      1. bad observations.
      2. Chance
      3. Inaccurate, incomplete forcings during the forecast period.
      4. Inaccurate, incomplete physics.

      #1 can be diagnosed by re-evaluating temperature series. See Cowtan and Way. They may still be missing excess warmth in the Arctic.

      #2. Can never be eliminated, it can only be narrowed by collecting more data. waiting.

      #3 can be addressed by re-examining the inputs: for example, overlooked
      issues such as volcanic forcing.
      #4. Once you address 1-3, then you can start to investigate #4. Although this work can be parallelized, don’t mess with the hard stuff until you’ve eliminated the easy stuff.

      which is it? what combination? there’s an uncertainty monster for you.

      • Mosh leaves out that it may be a poorly designed model with basic assumptions wrong

      • Steven Mosher

        Mosh leaves out that it may be a poorly designed model with basis assumptions wrong?

        HUH, what part of number 4 is Greek to you?

        If your model doesn’t match the observations, you know

        1. The observations may be wrong ( duh, repeat the test)
        2. The input data could be wrong
        3. The model mathy stuff could be wrong/incomplete, poorly designed
        with faulty assumptions and Halitosis.
        4. Shit happened.

      • catweazle666

        You missed:

        5. Attempting to model effectively infinite non-linear coupled chaotic systems is a total waste of computer time.

      • Models fail because they fail in the modeling. That can be for the reasons Mosher suggests, but in this case one would have to say the key problem is in the structure assumed for the models.

        This is not “the physics” or “the math” as people would normally understand those terms. It is because the models have been built at a resolution that apparently can’t adequately describe the phenomena of interest.

        To the extent that all GCMs share an underlying structure that is seen to be problematic, they can be evaluated as a group rather than individually.

      • David in TX

        Steven Mosher | March 2, 2015 at 2:50 am | Reply

        “1. Not all models run hot”

        In a reasonable world we’d select those models for further refinement and toss the others into the circular file. Survival of the fittest. But on Planet IPCC Darwinian evolution doesn’t seem to be in operation. Special creation as written in The Church of Carbon Sin gives the otherwise failed models a new lease on life.

        Climate Science is a religion.

      • Steven Mosher

        “In a reasonable world we’d select those models for further refinement and toss the others into the circular file. ”

        sounds simple.
        but, models that are good on one metric (temperature) may be poor on another, say precipitation.

        It’s an active area of research. Publish something.

      • Yes, they’re all wrong. Start over.
        =============

      • Steven Mosher

        Models fail because they fail in the modeling. That can be for the reasons Mosher suggests, but in this case one would have to say the key problem is in the structure assumed for the models.

        1. one would not HAVE to say that.
        2. if you choose to say it you need an argument, not mere assertion

        This is not “the physics” or “the math” as people would normally understand those terms. It is because the models have been built at a resolution that apparently can’t adequately describe the phenomena of interest.

        1. This assumes that the cause of “hotness” is resolution.
        2. low resolution could also cause coolness.
        3. You would need to test your hypothesis that resolution was the key.

        To the extent that all GCMs share an underlying structure that is seen to be problematic, they can be evaluated as a group rather than individually.

        Huh.

      • “1. one would not HAVE to say that”.
        Regrettably one does have to say this, because it isn’t covered by the reasons that are stated and its omission seems to lead to an argument that there aren’t issues that relate to GCMs as a collective.

        “2. if you choose to say it you need an argument, not mere assertion”
        All the reasons need arguments, but we weren’t doing arguments we were doing lists.

        “1. This assumes that the cause of “hotness” is resolution.
        “2. low resolution could also cause coolness.
        “3. You would need to test your hypothesis that resolution was the key..”
        You are perfectly correct, too hot or too cold is neither here nor there. As Goldilocks knows, it isn’t just right.

        But the risk is that just relying on your list-making, one ends up closing one’s mind to the potential that, as a class, GCMs might be inadequate to the task of usefully forecasting/projecting future climates.

      • blueice2hotsea

        Steven Mosher –
        1. Not all models run hot

        True. 3% of models do not run hot.
        But, 3/97 has the feel of systemic defect, whereas 50/50 is more indicative of random error.

      • 1. Bad observations- corrected by adjustments…maybe, but much disagreement.
        2. Chance- yes, like the 10 years of rising CO2/rising global temperature correlation that fooled Dr. James Hansen. Or 20 years of correlation that created the consensus. How long is enough? With multiple oscillations/cycles of years to 100,000 years- how long is enough?
        3. Inaccurate, incomplete forcings… Not sure what inaccurate forcings are: mistaken known knowns? Are incomplete forcings related to those oscillations/cycles of up to 100,000 years? How about the unknown unknowns?
        No chance of getting to #4, is there!!!!?

      • David in TX

        Since when are people concerned with global average precipitation, Mosher? The selection pressure is for global average temperature. I repeat, in a reasonable world we’d toss out the models which are running hot and retain those that aren’t for further refinement.

    • 4. Inaccurate, incomplete physics.

      Well, the first thing you might do is stop calling it “physics.”

      To use an analogy I’m much more familiar with, the behavior of a small section of the outer membrane of a nerve cell might (barely) be called electro-chemistry. The behavior of a brain, a ganglion, even many types of nerve cell can’t. It depends far more on information than chemistry or electrical currents.

      • Steven Mosher

        really? Physics is the perfect description.
        But, rather than nitpick we will just say
        4. Inaccurate, incomplete, math stuff.

        Notice how you haven’t addressed the point I made, but rather quibbled about semantics. bad faith.

        so when your code determines whether a patch of water should be frozen or not, would you call that physics? Again you can quibble about what to call it, call it “murf”; the fact remains that there are multiple potential causes of an apparent mismatch. And diagnosing this is a model by model affair

      • My thought on first hearing Anders’ ‘And then there’s physics’ was ‘And then there’s everything else.’ Amusingly, physics is among the ‘everything else’, since simple radiative physics is only the beginning of understanding of climate.
        =============

    • Belated by day job entry into this subthread. As posted elsewhere on this thread, most of the CMIP5 models are running hot. The one that is not, WE #31, has a low sensitivity. In poker terms, that is a big tell. You all work out some of the rest. My post succeeded in provoking the question. Seek your own answers.

      • The higher the climate sensitivity to CO2 the colder we would now be without man’s efforts. You’d better hope that the recovery from the coldest depths of the Holocene has been predominantly natural, ‘cuz if man’s done the heavy lifting of warming, we can’t keep it up much longer.
        =====================

      • Steven Mosher

        huh, provoked?
        been looking at it since 2008

      • WE #31 ?? Can’t identify, nor can Google?

      • “hmm. I haven’t read widely in the GCM work”, only “been looking at it since 2008”
        A lot longer than me. And a much better attention span and knowledge
        “1. Not all models run hot” SM backed up by
        “The one that is not, WE #31, has a low sensitivity.” RI
        Flaw in the second statement is that climate models do not have a climate sensitivity in them, Rud, much as you and I would like to believe this.
        Apparently it is an “emergent” phenomenon that can only be detected after the model is run.

      • “HAS | March 2, 2015 at 7:56 pm | WE #31 ?? Can’t identify, nor can Google?”
        Data adjustment HAS, It is done all the time with outliers I am reliably informed by my Greenie sons.

      • angech2014, that went flying past me.

        What I’d like to know is what CMIP5 model is referred to as “WE #31”.

      • http://judithcurry.com/2013/02/22/spinning-the-climate-model-observation-comparison/

        Fig 1 from the above post seems to show #31 but it doesn’t say which model, so maybe no real help. The Fig suggests to me that there are maybe 5 models that seem to have a reasonable match to the instrumental record.

        (I’m not prepared to define reasonable.)

      • ta

      • If WE #31 is referring to series 31 of the dataset supplied by Willis, then he has identified that series as output from the inmcm4 model

  35. I finished going through the paper a few days ago, and found two serious (and several minor) flaws in it.

    The first is Figure 6, which I suspect the authors intended to be the ‘money shot’ that would convince laymen that the paper had somehow refuted dangerous anthropogenic global warming. It purports to show that the ‘irreducible model’ is a closer match to observations than anything the IPCC has published. To be frank, the figure is shockingly bad. The graph shows modelled decadal warming rates carried forward to 2050, compared to “the observed temperature trend of recent decades”. Two different trend periods (63 and 17 years) seem to have been selected, with no justification given for either. And on what planet do we expect a linear extrapolation of recent temperatures to be necessarily indicative of future temperatures over the next 35 years? In any case, observed temperatures include the net effect of all climatic processes, not just the response to anthropogenic CO2, which is all that the authors have modelled. So how can the two possibly be directly comparable? Finally, even if it is true that the simple model closely matches observations, unless the net total of non-anthropogenic forcings is zero this can only be accidental.

    Figure 6 is obviously beset with a number of fundamental problems and is exactly the sort of rubbish that sceptics get upset about when their opponents try it. Irrespective of other criticisms of the paper, it should never have been published with Figure 6 included in anything like its current form.

    The other major problem is in the Discussion, where the authors make the claim that “The simple model confirms the hot running [of GCMs] and exposes several of the reasons for it.” In fact, the proposed model itself confirms nothing, as it is just a simple linear mathematical function. It is the value of net global feedbacks selected by the authors, which they then feed into the model, which creates the variance to the GCMs. I’m not sure exactly what the authors intended, but the conclusions of the paper seem to suggest that somehow it offers a mathematical rebuttal of the GCMs, not one based on parameter value selection. I think it would be easy to skim-read the paper and come away with completely the wrong impression. The paper should more accurately say ‘The simple model _can_ be used to confirm the hot running…’, but only if you agree with the authors’ selected feedback value. In this way, the paper more or less assumes its own conclusion.

    I’ve posted up a full review here:
    https://jonathanabbott99.wordpress.com/2015/02/25/review-of-why-models-run-hot/

      • David in TX

        Lovely. I posted exactly the same graph and Curry deleted it. She didn’t actually read the paper. No surprise there I suppose.

    • ==> “Figure 6 is obviously beset with a number of fundamental problems and is exactly the sort of rubbish that sceptics get upset about when their opponents try it.”

      And yet we have so many “skeptics” who aren’t the slightest bit upset in this case.

      Interesting, no?

      What could explain why they get “upset” when their “opponents” try it but aren’t “upset” here?

      Maybe, Jonathan, you need to re-think a bit the mechanism of causality that you use to describe how “skeptics,” as a group, come to their views about climate change?

      • Joshua, it is quite evident that many sceptics exhibit motivated reasoning, one only has to read the comments at WUWT. But then so do very many who support the consensus.

        Of course, you have accused me in the past of motivated reasoning as a matter of routine. In fact you are a stuck record on the subject. Perhaps there is a lesson for you to draw from that.

    • Figure 6 is accompanied by the following text.

      “If, for instance, the observed temperature trend of recent decades were extrapolated several decades into the future, the model’s output would coincident with the observations thus extrapolated.”

      Notice the word “If”? The extension of current observations is not predicted, or posited as a probability. It is what more rational readers would call a hypothetical.

      And this critique is particularly amusing since at the core of the CAGW religion is belief in the linear response of climate to added CO2.

      http://www.carbonbrief.org/media/234811/surface-temp-future.jpg

      From the AR5 Summary for Policy makers. Figure SPM6.

      The IPCC predicts linear increases in temp, and the CAGW acolytes swoon.

      A skeptic suggests, as a hypothetical, a continuation of the current 18+ year linear trend in reported temps, and they get a case of the vapors.

      Poor dears.

      • But Gary, why suggest a hypothetical scenario unless you think it has value?

      • Jonathan,

        I was responding to your question:

        “And, on what planet do we expect a linear extrapolation of recent temperatures to be necessarily indicative of future temperatures over the next 35 years?”

        The answer is the IPCC’s planet.

        My implied question is why you see no problem with their actual linear projections?

        I am not a big fan of situational logic.

      • Gary, I misunderstood. The IPCC may illustrate their expected climate trends in linear fashion if they wish, not that I expect such forecasts to be accurate. However, the authors definitely should not attempt to work forwards from past observations; they should know better.

      • Come off it Steven, you know perfectly well that extrapolating a 17 year (or 63 year for that matter) temperature trend forwards by 35 years means nothing worth saying.

        Anyway, you’ve been dropping enough hints over the last few days. You’ve found other problems: care to share with the rest of the class?

      • Steven Mosher

        “Come off it Steven, you know perfectly well that extrapolating a 17 year (or 63 year for that matter) temperature trend forwards by 35 years means nothing worth saying.”

        On the contrary. It means exactly what it says.
        If you assume a linear trend, this is what you will see.
        That can be useful.

        “Anyway, you’ve been dropping enough hints over the last few days. You’ve found other problems: care to share with the rest of the class?”

        Nope. I already gave Rud a hint to back away from the train wreck.
        Pity Briggs didn’t see the issues. Let’s wait and see who else will run into
        oncoming traffic to defend Monckton.

      • I suspect the 17 year RSS and 63 year HadCRUT4 were meant to reflect two realistic extremes from observations? HadCRUT4 is land-ocean, correct?

    • David in TX

      “And, on what planet do we expect a linear extrapolation of recent temperatures to be necessarily indicative of future temperatures over the next 35 years?”

      Planet IPCC. Granted it has little relation to the earth. But you should have known about it.

      FAIL

      • Thank you for your nicely reasoned, polite reply, which completely missed my point: let the IPCC play their silly games, but don’t join in.

      • David in TX

        When in Rome, do as the Romans do. Monckton et al. were justified in using decadal average temperature increases. The IPCC advertised an expectation of a 0.2C to 0.3C/decade temperature increase in the third assessment report. To their chagrin it has been MIA for nearly two decades since, and the observed trend over the whole satellite period (1979 forward) has ratcheted down to 0.13C/decade as a result.

        Personally I would not have used a 63-year HadCRUT decadal average, as I don’t believe there is a credible global average temperature measurement accurate to hundredths of a degree per decade other than satellite measurements.

  36. Mike Flynn | March 1, 2015 at 10:55 pm |
    Rud,
    I am not so much a skeptic, more an unbeliever.

    • PMHinSC

      As I write this there are 89 comments. I rarely read all of what are usually extraneous comments; this post is the exception, as there is a lot of meat at this feast. I don’t claim to follow all of the technical details but am struck by the fact that this is an excellent academic conversation. What is missing, however, is data. The physics says this should happen; the physics says that should happen; and yes, I believe the physics, at least as much of it as we know. Until there is real-world climate data supporting the physics (and currently, despite claims to the contrary, there isn’t), I agree with Mike Flynn’s comment “I am not so much a skeptic, more an unbeliever.” I appreciate and listen to Christopher Monckton’s and Rud Istvan’s writings and will continue to read and learn from them. But the bottom line is “show me the data.”

    • Danny Thomas

      PMHinSC, Dr. Curry, Rud, et al
      I don’t usually allow myself to interrupt in such technically oriented posts, but I must offer a strong second to
      “This post is the exception as there is a lot of meat at this feast.”

      Thank you. Much is above my pay grade, but it’s a true pleasure to witness the orientation of this thread. Encore! I will commit to “reading more and posting less” should this be the fare and, frankly, the moderation. Should a junior league be started, I’d like to volunteer to play. I come away “feeling” a “need” to learn more, and that there are strong instructors willing to teach. Thanks to ALL!

  37. Steven Mosher

    nice job Rud.
    there’s more.

  38. SkepticalScience dug up old papers from the 1960s (http://skepticalscience.com/LBJ-climate-1965.html). What is really surprising is that, despite m(b)illions spent on climate change research, estimates of climate sensitivity have remained remarkably broad after 50 years of study.

    Eg: Manabe & Wetherald 1967 http://journals.ametsoc.org/doi/pdf/10.1175/1520-0469%281967%29024%3C0241%3ATEOTAW%3E2.0.CO%3B2: “Doubling the existing CO2 content of the atmosphere has the effect of increasing the surface temperature by about 2.3C for the atmosphere with the realistic distribution of relative humidity and by about 1.3C for that with the realistic distribution of absolute humidity. The present model does not have the extreme sensitivity of atmospheric temperature to the CO2 content which Moeller (1963) encountered…”

    Recent results, though, suggest that estimates of high climate sensitivity should now be treated as outliers requiring stringent observation-based justification, and that they shouldn’t be used as policy tools until those justifications are fully met.

  39. Mayor of Venus

    Where does a spectroscopist such as myself begin to clarify what you just wrote? To begin, water vapor and carbon dioxide, as tri-atomic molecules, have strong absorption bands in the near and middle infrared. The di-atomic molecules nitrogen and oxygen do not. Oxygen does have the “A”, “B” and gamma bands in the very near infrared and visible red spectrum. We measured the intensities and pressure-broadening coefficients of these features in the NASA Ames spectroscopy lab in the late 1960’s and early 1970’s. The primary isotope of carbon dioxide is carbon 12 (6 protons and 6 neutrons) with two oxygen 16 (8 protons and 8 neutrons). About 1% has a carbon 13 (6 protons and 7 neutrons), and a smaller percentage a heavy oxygen (17 or 18). The absorption bands of the heavier isotopes are displaced to slightly longer wavelengths than the similar bands of the main isotope.

    • Curious George

      Mayor, please help. What happens AFTER a CO2 molecule absorbs an IR photon? It can re-radiate it later, of course, but can it dispose of the energy in a collision? In a triple collision? How much of it gets converted into local heat? What is the chance that a CO2 molecule gets thermally excited and emits an IR photon?

      • Your questions only emphasize the point that there is no real world climate data supporting the physics of CAGW. Physics without data is just math on a white board.

  40. If you know the answer, why are you wasting my time? Your question is about as silly as asking me how long is a piece of string, and then telling me you know the answer because you have it in your hand!

    I was asking because the comments about radiative processes in the Earth’s atmosphere indicated a lack of understanding of radiative diffusion.

    The answer, in case you want to know, is (roughly) 200,000 years. Or so. That’s because the mean free path of the photons is very, very short, so the energy transfer is dominated by radiative diffusion.
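    For scale, a back-of-envelope random-walk version of that estimate, assuming a photon mean free path of order 0.1–1 mm (an assumed figure; the true value varies enormously through the solar interior, which is why quoted escape times range so widely):

```python
# Random-walk escape time from the solar core, t ~ R^2 / (l * c).
# The mean free path l is an assumption; published estimates vary widely,
# which is why quoted escape times range from ~10^4 to ~10^6 years.
R_sun = 7.0e8      # m, solar radius
c     = 3.0e8      # m/s, speed of light
year  = 3.156e7    # s per year

for l in (1e-4, 1e-3):                  # m, assumed photon mean free paths
    t = R_sun**2 / (l * c)              # diffusion (random-walk) time
    print(f"l = {l*1e3:.1f} mm -> escape time ~ {t/year:,.0f} years")
```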

    Neutrinos, on the other hand, leave the core more or less immediately, and take roughly 8 minutes to reach the Earth.

    None of the math involved is particularly difficult, and all of the processes have been experimentally verified. Indeed, they are crucial to nuclear weapons design.

    My point is this: if you want to make the claim that CO2 does not act as a greenhouse gas, you should have some very basic understanding of the physics and the experimental evidence when making your argument. Otherwise people will treat you (correctly) as a crank.

  41. Thanks to Rud for the post, to Christopher for the reply and to Judith for posting both.

    Cheers.

    • Bob, thanks. I am in awe of Monckton, who wrote a longer reply in about 8 hours than my post, which took a month and two rewrites for stuff I had got wrong initially… I guess I just don’t process that fast unless rehearsed for a debate. And of course I am also in awe of Prof. Judith, who facilitates the debate.
      I must admit, when it comes to ocean stuff I lean very heavily on you. Many favorable references to you in the two most recent books. Highest regards.

      • Good to see expressions of gratitude! I sure appreciate what you, Bob and Rud, have done as teachers – I have your books – and am awed by your knowledge. I’m always awed and amazed by Monckton’s language ability!
        And as for Diogenes: if he were still alive, he would have found an honest woman at the helm of this blog.

  42. Regretfully, I must share the local consensus opinion that MSLB offers no new physical insights into climate modeling and resembles more a game of musical chairs using IPCC numbers in simple models already described in AGW literature. As of this writing, a search of this thread shows only two contributions in which the words perturbation and/or convection are significant, and I feel free to add comments thereon.

    The factor (1-g) appearing in many papers and textbooks is no more than the first two terms in a perturbation expansion of a function of temperature (linearity). As such, it is necessary that g be a fraction of unity, else nonlinear, higher-order terms come into play. It is well known that linearity is enhanced in electrical circuits by negative feedback and, somehow, this has been conflated into justification for truncation to linear terms. But there is no evidence for a power-consuming, op-amp mechanism of high open-loop gain in the atmosphere. MSLB adopt a Planck model as a perturbation basis and all deviations are described by feedbacks. It would surely have been preferable to choose the current atmosphere as a basis, with CO2 doubling then becoming a 1% perturbation. To be sure, there is a tipping point in a homogeneous atmosphere at which structures extremely efficient at transporting energy from surface to tropopause spontaneously form, but strong evidence exists that we passed that point some time ago.

    Scan MSLB for the word ‘convect’. Nada. If one accepts MODTRAN results, net radiative flux is 102 W/m2 at the surface and 247 W/m2 at 15km (US Std. Atm.). The 145 W/m2 difference is generally attributed to convection, i.e. convection is the dominant energy transport mechanism in the lower troposphere where GHGs concentrate. Should we increase the surface temperature 1K, how much would the 247 value increase? Most would guess ca. 3W/m2 due to increased radiative flux. And how much would the convective contribution be? The naive might assume about the same given their existing ratio, doubling the increase to 6W/m2. Climate Science Consensus says zero. What now happens if, hypothetically, feedback mechanisms reduce the radiative contribution to 0.3W/m2? To achieve a 3W/m2 increase, CSC says a 10K increase is necessary. The naive assert 2K suffices.

    Models run hot because they ignore convective compensation for GHG-reduced radiation. Should you seek a simple, steady-state model circumventing these issues which runs on the back of an envelope, try the Carnot Equation (turbulence and a variational minimum included at no extra charge.)

    • Unfortunately, this simple model might not be adequate, but neither are the ones run on supercomputers. So, in a sense, this model is just as good as any other. And it does emphasize the stability of the climate system over the millennia. No tipping points.

      • No tipping points.

        While the real thing might very well have “tipping points.” AFAIK GCM’s have all evolved to exclude instabilities, because when they didn’t they soon crossed them. So, from the first, GCM’s weren’t a good model of the real thing.

    • Convection is the main conveyor of energy into space. One thing I haven’t seen anyone say is that CO2 will make that process more efficient. (Does anyone know of any papers that mention this?) Two points:

      1. CO2 acts as a bucket for energy. This bucket is transported to the TOA to release its energy.

      2. If a CO2 molecule’s internal modes are excited and it is lofted to the TOA by convection, it is then in an environment of lower pressure. This favors emission over de-excitation by collision.

      This means there is a cooling feedback from more CO2.

      • Although those points don’t seem well taken to me, I’ll let the physicists here address them.

        But I seem to recall a recent posting, probably at WUWT, about a Christopher Essex YouTube talk that had a passage dealing with the fact that CO2-caused increases in “resistance” to radiative cooling “squeeze” the heat into the convective-cooling channel. (That’s the highly technical explanation.) Sorry I don’t have the link at my fingertips.

      • Less heat getting out means increased cooling. Makes perfect sense. Just not enough more cooling to offset entirely the even more less heat getting out.

      • I don’t believe JCH or Joe Born understand what I wrote. I would be interested in the WUWT link if it exists.

      • I’ve wondered about this j2. Can’t get my head around it.
        ===============

      • Let me try to break it down. Imagine a convection cell with and without CO2 entrained.

        1. The upward convection will carry CO2 molecules, if present, to the top of the cell. (Any argument with that?)

        2. If a CO2 molecule absorbed IR energy at a lower level, some such molecules will still contain that energy once they arrive at the top of the atmosphere.

        3. As it rises, it will experience lower pressure. At high pressure, CO2 will tend to shed its energy in collisions. At lower pressure, it tends to shed energy via emission. So, since it has risen to a higher altitude, it will have a greater tendency to emit radiation than at lower levels.

        4. If it emits IR radiation at top of atmosphere, a good portion of that will make it into space.

        5. If there is no CO2 in the convection cell, this heat transfer mechanism will be absent and will not contribute to cooling the Earth.

        This is why I posit that CO2 will make convection cells more efficient at cooling the Earth.
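        A rough order-of-magnitude sketch of the competition in step 3; both rate constants below are illustrative assumptions, not measured values from any paper cited here:

```python
# Collisional de-excitation scales with pressure; spontaneous emission does not.
# k_coll and A_rad are illustrative assumptions only.
import math

H      = 7.5     # km, pressure scale height (approximate)
k_coll = 1e5     # 1/s, assumed collisional quenching rate at the surface
A_rad  = 1.0     # 1/s, assumed spontaneous-emission rate, 15-um band

for z_km in (0, 10, 30, 50, 70, 90):
    quench = k_coll * math.exp(-z_km / H)     # collision rate falls with pressure
    p_emit = A_rad / (A_rad + quench)         # chance an excited CO2 radiates first
    print(f"z = {z_km:2d} km: P(emit before quench) ~ {p_emit:.1e}")

# Altitude where the two rates become comparable:
print(f"crossover ~ {H * math.log(k_coll / A_rad):.0f} km")
```

        With those assumed rates the crossover sits far above the troposphere, so how often a convected, still-excited CO2 molecule radiates before being quenched hinges entirely on the real rate constants; the sketch only shows the shape of the pressure dependence being invoked.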

      • Years ago I wondered if something analogous was happening with respect to latitude, instead of altitude. Can more CO2 facilitate the transport of heat poleward, where it is more easily radiated out?

        Either or both could damp the simple no feedback calculation.
        ============

      • jim2,

        Convection is the main conveyor of energy into space. One thing I haven’t seen anyone say is that CO2 will make that process more efficient. (Does anyone know of any papers that mention this?)

        I have run some radiative models on a static global atmosphere and believe I have found effects similar, though not identical to what you are indicating. Hopefully the results will appear here.

        It strikes me a car’s cooling system is an analog.

        The engine receives heat ( combustion ) and doesn’t shed the heat very well.

        The water pump transfers heat from the engine, by convection of radiator fluid, to the ‘radiator’, which transfers heat out of the engine system better than the engine does. (The convection to consider is the internal convection of the fluid, not the external convection of the misnamed ‘radiator’, which really cools by convection of air, but outside the ‘system’.)

        The rate of convective exchange affects the cooling rate of the engine.
        The rate of cooling at the radiator affects the cooling rate of the engine.

        In the earth-atmosphere system, the lower troposphere experiences a surplus of energy ( like the engine ). Tropospheric convection transfers energy to the upper troposphere which radiates more effectively than the lower troposphere ( just as the car radiator also cools more effectively than the engine ).

        Things which increase convective energy exchange ( like the water pump working ) increase the cooling rate.
        Things which increase the removal of heat from the system ( upper radiance ) should increase the cooling of the source region ( through convection ).

        Of course, once we consider these processes, we must include the equations of motion and all the unpredictability they entail.

        If deep convective cores are on the order of 10km but models are 100km resolution, it wouldn’t be surprising that some transfer is being missed.

        If convective cells have a lifetime of 30 minutes, but models have temporal resolution of 1 hour, it wouldn’t be surprising that some transfer is being missed.

        If convection is the result of (largely) the ITCZ, which is a function of the general circulation whose governing equations are highly nonlinear, it wouldn’t be surprising that some transfer is being missed.

      • Of course, in a typical convection cell, the cooling effects of water will overwhelm any cooling effect of CO2, unless the terrain is dry desert.

      • jim2
        I have thought the Connollys’ research with radiosonde data informative on this.
        “It can be seen from the infra-red cooling model of Figure 19 that the greenhouse effect theory predicts a strong influence from the greenhouse gases on the barometric temperature profile. Moreover, the modelled net effect of the greenhouse gases on infra-red cooling varies substantially over the entire atmospheric profile.

        However, when we analysed the barometric temperature profiles of the radiosondes in this paper, we were unable to detect any influence from greenhouse gases. Instead, the profiles were very well described by the thermodynamic properties of the main atmospheric gases, i.e., N 2 and O 2 , in a gravitational field.”

        “While water vapour is a greenhouse gas, the effects of water vapour on the temperature profile did not appear to be related to its radiative properties, but rather its different molecular structure and the latent heat released/gained by water in its gas/liquid/solid phase changes.

        For this reason, our results suggest that the magnitude of the greenhouse effect is very small, perhaps negligible. At any rate, its magnitude appears to be too small to be detected from the archived radiosonde data.”

        Open Peer Rev. J., 2014; 19 (Atm. Sci.), Ver. 0.1. http://oprj.net/articles/atmospheric-science/19 page 18 of 28

      • Thanks Ron C. That’s what I was thinking when I referred to the cooling effectS of water over CO2. Of course, in parts of the atmosphere containing water, it overwhelms the effect of CO2 there also.

      • jim2
        Ironically, CO2, being IR active, is called a “greenhouse gas.” Yet it is actually N2 and O2 that are trapping heat, i.e., delaying its release into space. CO2 absorbs, then transfers energy to N2 or O2 as parcels of air equilibrate according to the lapse rate. Nearer the TOA, it is the CO2 that emits the energy into space.

      • jim2: You’re right; I hadn’t understood what you meant. After your explanation, it seems plausible, although I don’t have the background to assess whether the effect you hypothesize would be significant.

        In any event, I believe this is the Essex talk I recalled.

      • jim2:

        Oops! This is the correct link.

      • Jim2,
        You make an interesting argument for cooling by CO2 at the top of convection cells. I don’t have the expertise to know if your assumptions (particularly the length-of-time and low-pressure scenario) are correct. If your hypothesis is correct, it would be interesting to know whether it represents a significant amount of radiation to space.

      • Danny Thomas

        Jim2,
        You’ve stated it much more eloquently than I could. I’ve asked this question elsewhere without a satisfactory response. (Hope it’s not off topic.) We have an understanding of the energy entering, but do we have a true measurement of that which is leaving? Plus, that which does not enter as a result of man’s changes (ACO2, et al.; and do they function only one way?). It seems this would be important to understanding. And is this modeled?

      • Jim,
        There were a series of papers by Palchetti et al. on REFIR-PAD balloon experiments back in 2006 measuring FIR spectra in both vertical and horizontal directions. The latter show radiance or emission spectra with peaks in regions for water, carbon dioxide and ozone. Reprints are available on line.

        Atmos. Chem. Phys., 6, 5025-5030, 2006, etc.

      • michael hart

        If water vapor cannot significantly emit in the bands exclusive to CO2, then will added CO2 not cause more rapid arrival of saturation, condensation and hence cloud formation? I’ve asked this before, but not seen a response.

      • Quandam & others might be interested in this paper which derives a simple climate model and energy/entropy budget on the basis of the 2nd law/ principle of maximum entropy production & Carnot equation

        http://www.knmi.nl/~laagland/cursus/presentaties_voorjaar11/Ozawa.pdf

        Fig. 5a’s energy budget diagram entirely omits Trenberth’s 333 W/m2 GHG back radiation, but shows the 40 W/m2 direct radiative atmospheric window to space (same as Trenberth) and 102 W/m2 convection of latent & sensible heat from the surface to the TOA (almost the same as Trenberth’s 97 W/m2). Thus, convection greatly dominates heat transport from the surface to the tropopause, and is more than twice the magnitude of radiative transport through the atmospheric window. The authors close their energy & entropy budgets without any determination of IR surface back radiation from GHGs, greenhouse gas concentrations, or radiative forcing from GHGs.

      • Hockeystick, ” but shows the 40 W/m2 direct radiative atmospheric window to space (same as Trenberth) and 102 W/m2 convection of latent & sensible heat from the surface to TOA (almost same as Trenberth’s 97 W/m2).”

        Those are actually two of the biggest mistakes in the Trenberth budgets.

        Latent and sensible (thermals) are closer to 115 Wm-2, and the atmospheric window is more like 20 Wm-2 from the actual surface and ~40 from above the atmospheric boundary layer.
        https://curryja.files.wordpress.com/2012/11/stephens2.gif

        http://scienceofdoom.com/2013/02/02/kiehl-trenberth-and-the-atmospheric-window/

      • Hockeyschtick
        That is an interesting paper, and requires some time and thinking to grasp its implications. Thanks for the link; now to ponder entropy and potential vs. kinetic energy.

      • I’ve got some nice peanut butter packed celery stalks here, RonC, if you can help me out a little.
        ================

      • Thanks kim, I am slogging through it and will need many celery breaks.
        I am reminded of Stephen Hawking’s report from his publisher that each additional equation in a book cuts the readership in half.

      • kim, jim2, hockeyschtick
        I have read over the linked Ozawa paper, and it led me to a more recent paper that builds on the same ideas, but addresses the issue here: are models running hot because they exaggerate the CO2 effect?
        The paper is Herbert et al 2013, Vertical Temperature Profiles at Maximum Entropy Production with a Net Exchange Radiative Formulation.
        The abstract and key finding:

        “Like any fluid heated from below, the atmosphere is subject to vertical instability which triggers convection. Convection occurs on small time and space scales, which makes it a challenging feature to include in climate models. Usually sub-grid parameterizations are required. Here, we develop an alternative view based on a global thermodynamic variational principle. We compute convective flux profiles and temperature profiles at steady-state in an implicit way, by maximizing the associated entropy production rate. Two settings are examined, corresponding respectively to the idealized case of a gray atmosphere, and a realistic case based on a Net Exchange Formulation radiative scheme. In the second case, we are also able to discuss the effect of variations of the atmospheric composition, like a doubling of the carbon dioxide concentration.”

        “The response of the surface temperature to the variation of the carbon dioxide concentration — usually called climate sensitivity — ranges from 0.24 K (for the sub-arctic winter profile) to 0.66 K (for the tropical profile), as shown in table 3. To compare these values with the literature, we need to be careful about the feedbacks included in the model we wish to compare to. Indeed, if the overall climate sensitivity is still a subject of debate, this is mainly due to poorly understood feedbacks, like the cloud feedback (Stephens 2005), which are not accounted for in the present study.”

        So there you have it: convection rules in the lower troposphere. Direct warming from CO2 is quite modest, way less than models project. If Leo Smith, above, is right about feedbacks being negative, I don’t see the alarm.

        http://arxiv.org/pdf/1301.1550.pdf

      • Is not the peak CO2 emission temperature -81°C? What atmospheric height does that correspond to?

  43. If the parameters deduced by Curry, or inferred by the irreducibly simple model, are applied to a full-sophistication GCM, do accurate regional forecasts pop out of the analysis? Have any of the so-called mainstream climate scientists ever actually tried an experiment like this? Or do they only ever execute model runs which contain parameter values in the range they want to find?

    • Eric, two points.
      The GCMs per se do not have these ‘simple’ parameters. For example, a GCM TCR is calculated by increasing CO2 1% per year until it has doubled (about 70 years), then taking the average temperature over years 60 to 80 and differencing it against the unperturbed control. Sensitivity is an emergent property.
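      A minimal two-box energy-balance sketch of that diagnostic; all parameter values are illustrative assumptions, not numbers from any GCM or from MSLB:

```python
# Minimal two-box energy balance: ramp CO2 at 1%/yr, diagnose TCR as the
# 20-year mean warming centred on the year of doubling (~year 70).
# All parameter values are illustrative assumptions.
import numpy as np

F2x   = 3.71          # W/m^2, forcing per CO2 doubling (commonly quoted)
lam   = 1.16          # W/m^2/K, feedback parameter -> implied ECS = F2x/lam ~ 3.2 K
gamma = 0.7           # W/m^2/K, mixed-layer/deep-ocean exchange (assumed)
Cm, Cd = 13.0, 150.0  # heat capacities, W yr m^-2 K^-1 (~100 m mixed layer; deep ocean)

dt = 0.05                          # years
t  = np.arange(0.0, 80.0, dt)
F  = F2x * np.log2(1.01**t)        # 1%/yr ramp reaches F2x near year 69.7

T, Td = 0.0, 0.0
Ts = np.empty_like(t)
for i, Fi in enumerate(F):
    T  += dt * (Fi - lam * T - gamma * (T - Td)) / Cm  # surface/mixed layer
    Td += dt * gamma * (T - Td) / Cd                   # deep-ocean uptake
    Ts[i] = T

tcr = Ts[(t >= 60) & (t <= 80)].mean()
print(f"TCR ~ {tcr:.2f} K   (ECS implied by lam: {F2x/lam:.2f} K)")
```

      With these numbers the printed TCR lands well below the implied ECS, as it should: the deep-ocean box is still taking up heat at year 70.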
      I am aware of no such literature. However, there is a considerable literature on downscaling (applying a ‘magnification’ of a GCM to a particular region) most of which says it doesn’t work well with respect to temperature, clouds, and precipitation. Essays No Bodies and Last Cup Of Coffee have starter downscaling references.

    • Steven Mosher

      one example
      http://www.narccap.ucar.edu/

      There are others. And they are actually used in planning

  44. One thing that bothers me about feedbacks – what evidence is there that the net total feedback is a constant during all climate conditions – in other words could there not be a different value during an ice age compared to an interglacial?

    Does anyone know if any literature exists on this topic?

    • Right.

      Topography alone invalidates comparisons: glacial ice amounted to huge mountain ranges, kilometers high, where now lie plains or Hudson Bay.
      Circulation was greatly impacted. Potential albedo change was much different. Add to that the ‘shape’ of insolation which caused the glacials in the first place. These things make the glacials interesting and important, but poor analogs for CO2 changes.

      • This is what amuses me so much when alarmists cite paleontology. There are many alarming things in paleontology. Most of these have been ignored in the purblind focus on CO2.
        ==============

    • Steven Mosher

      yes.

    • Jonathan – Start here..
      (If a subscription is required to see the article request a PDF from the corresponding author.)

      • Doug Badgero

        Abstract seems to point to a reconciliation of paleo estimates, not an attempt to address differences between paleo and now. More importantly, how about a reconciliation of glacial vs interglacial sensitivity. Not sure sensitivity 65 million years ago is that important.

    • I had the very same thought after watching Professor Andrew Dessler from Texas A&M use the Bode equation to confirm his positive-feedback hypothesis. (Find it about 13 minutes in: https://www.youtube.com/watch?v=l9Sh1B-rV60).

      He also seems to be ignoring Milankovitch cycles. However, it did occur to me that whilst the cloud and water vapour feedbacks may be positive when the Earth’s mean temperature is low, that does not mean that they are positive when the Earth’s mean temperature is higher.

      These two papers seem to address this but I haven’t had a chance to look through them in full yet.
      file:///F:/Images/87780.pdf
      http://www.clidyn.ethz.ch/ese101/Handouts/roe09a.pdf

      Happy reading

  45. J Abbott wrote on March 2 at 2:02

    “And, on what planet do we expect a linear extrapolation of recent temperatures to be necessarily indicative of future temperatures over the next 35 years?”

    Try planets Ehrlich, Hansen, Oppenheimer, Holdren et al when it proves convenient. Warmists/coolists/stormists brought cherry-picked start and endpoints to the party from the beginning of the conversation on man-made climate change. Just pick your points, extrapolate, make claims, and sell books/studies. Now there is a model… for making a personal fortune at the expense of others.

    • Perhaps, but not on Planet Sceptic.

    • 1979 is the Boss Cherry: greatest Arctic ice extent and coldest temps in a century. Cited as the start of the satellite era, but the Nimbus series and others flew as early as the ’60s. Baselines starting there are depressed vs. earlier and later series by only a few years.

  46. I would like to thank everyone that contributes here. Your posts and comments have sent me down paths to knowledge that I wouldn’t have found by my own lights. Thank you!

  47. I have just deleted A LOT of comments that were off topic. In a few instances I snipped lengthy comments that were worthwhile but off topic and should move to another thread. This is not a thread to discuss whether or not the greenhouse effect exists. I am trying to look at the comments from the perspective of a reader who actually wants to learn something, while respecting in particular regular commenters. Thank you for your cooperation.

    • So I go to BarackObama.com and link to a climate name list. It doesn’t get more “on topic” than that.

      And you snipped it. Why?

      • The topic is climate models. Barackobama.com has nothing to do with climate models

      • What is denied more than the output of climate models?
        ==============

      • 50 years of climate models and no improvements.

        ECS from CO2 doubling still ranges from 1.1 to 4.5

        FIFTY YEARS OF CLIMATE SCIENCE WITHOUT PROGRESS

        Long past time to cut off all gov’t funding. Let private industry find some use for this crap or let it die. Enough is enough.

      • stay tuned for a post on this issue, probably early next week, there have been some new developments

      • Dr Curry

        Look forward to the post. You’ve been advocating for improvements that will lead to greater scientific knowledge of the climate system. But, to date, from my limited perspective, that goal has been dampened by the establishment.

        Regards,

        Richard

      • @ rls

        “……improvements that will lead to greater scientific knowledge of the climate system. But, to date, from my limited perspective, that goal has been dampened by the establishment.”

        Any ‘greater knowledge of the climate system’ that could by any stretch cast the slightest doubt that ACO2 is causing the planet to heat at an unprecedented rate AND that the rising temperature will prove catastrophic unless coordinated governmental action is undertaken–immediately–to control or eliminate ACO2 is not merely ‘dampened by the establishment’. Such ‘greater scientific knowledge’ is ruthlessly suppressed by the establishment, and the careers of those naifs who attempt to disseminate it are crushed remorselessly. By the establishment.

      • Kiddo, what foolish behaviour, unless the establishment is right.
        =====================

      • A minor point of disagreement: models are scientific hypotheses. They are poor, but scientific nonetheless. A hypothesis is a great place to start, but a poor place to finish. :-)

      • David L. Hagen

        Karim:
        Models are only scientific if you verify and validate them, and throw them out if they do not match observations. Simpler models, such as Monckton et al.’s, that perform better are preferred over complex models with politically set parameters that are not accurate. See Richard Feynman.

      • Steven Mosher

        Feynman didn’t follow his own rules.
        Models are not thrown out.
        They are changed and improved.
        Sometimes a tension between models and observations exists for decades.
        What counts as a “match” to observations is a PRAGMATIC decision. There is no rule for
        matching.

      • Steve writes- “Sometimes a tension between models and observations exists for decades.”

        Can you give an example that you consider similar? I can’t think of any.

      • David L. Hagen

        Steven Mosher
        Monckton et al. review:

        In 1990, FAR predicted with ‘‘substantial confidence’’ that, in the 35 years 1991–2025, global temperature would rise by 1.0 [0.7, 1.5] K, equivalent to 2.8 [1.9, 4.2] K century⁻¹.

        Following IPCC’s warning I wrote a 330-page report on solar thermal to control global warming.
        Per Fig. 1, after 35 years the actual warming was only 1.37 K/century, 51% less than IPCC’s prediction and OUTSIDE IPCC’s FAR expected range of [1.9, 4.2] K/century.
        Consequently, I find the IPCC’s predictions failed! Time for pragmatic scientific models to replace politically established climate sensitivity. The irreducibly simple MSLB model appears more scientifically accurate.

      • catweazle666

        “Sometimes a tension between models and observations exists for decades.”

        LOL!

      • Fred Harwood

        What a tease!

      • My tuning fork is at the ready (mixing music and distributed version control metaphors).

      • Have to admit I’m pretty pessimistic, for some of the reasons GaryM mentions below, that substantial improvement in climate models is likely any time soon. Still, I am looking forward to the post next week. What bothers me the most isn’t that climate models lack skill (I understand their usefulness) but that they’re “sold” as “the science.”

      • You may want to give particular attention to how well accepted the λ0 and f values are that Monckton et al. quote. They say in their §8.1:

        The simple model indicates that, as a result of the fall in the interval of estimates of f from 1.9 [1.5, 2.4] W m⁻² K⁻¹ in AR4 to 1.5 [1.0, 2.2] W m⁻² K⁻¹ in AR5, the Charney-sensitivity interval in response to a CO2 doubling should have been reduced from 3.26 [2.0, 4.5] K to 2.2 [1.7, 2.7] K. Yet the CMIP5 climate-sensitivity interval given in AR5 is 3.2 [2.1, 4.7] K (AR5).

        The central estimate is near half as high again as it would have been if the method in AR4 had been followed (Table 5). The simple model suggests that the CMIP5 Charney-sensitivity estimates published in AR5 are unduly high and that the central estimate has apparently been overstated by almost half.

        Note that this conclusion does not depend on their (in my view, questionable) r_t function, i.e. on transient response. It depends only on (1) the authors’ having correctly interpreted the feedback values, namely, 1.9 [1.5, 2.4] W m⁻² K⁻¹ supposedly given in AR4 and 1.5 [1.0, 2.2] W m⁻² K⁻¹ supposedly given in AR5, and (2) rough general agreement on a relatively narrow possible range for what they call λ0.

        Recall that their λ0 is the increment in temperature required to redress an imbalance in top-of-the-atmosphere radiation. That is, if the earth’s average temperature had been keeping the net top-of-the-atmosphere radiation equal to zero and then a radiation imbalance ΔF were imposed (e.g., more solar radiation in or less long-wave out), the average-temperature increase needed to increase the outgoing radiation by ΔF and thereby return the system to balance if nothing else changed would be λ0 ΔF.

        Although I have been largely critical of the paper and in particular of its r_t and electronic-circuit discussions, §8.1 may be making a good point, and the simple model may be a good way for us laymen to appreciate it. Obviously, λ0 isn’t really constant and it isn’t known precisely. But maybe everyone agrees that values fall within a relatively small range. If so, and if Monckton et al. did in fact interpret those feedback numbers as their source intended, then Monckton et al.’s simple model may actually be a good tool for us laymen to use in understanding the §8.1 criticism of the AR5 equilibrium-climate-sensitivity range.

        On the other hand, maybe the experts are relying on other feedbacks, or Monckton et al. misinterpreted what those feedback numbers meant. If so, that would be good to know.
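        For readers who want the arithmetic, a minimal sketch of the simple model’s equilibrium form, assuming the commonly quoted Planck response λ0 ≈ 0.3125 K per W m⁻² and F2x ≈ 3.71 W m⁻² (both assumptions here, not values checked against the paper’s tables):

```python
# Equilibrium form of the simple model: ECS = lambda0 * F2x / (1 - lambda0 * f).
# lambda0 and F2x are commonly quoted values, assumed for illustration.
lambda0 = 0.3125   # K per W m^-2, zero-feedback (Planck) response
F2x     = 3.71     # W m^-2, forcing per CO2 doubling

for f in (1.0, 1.5, 1.9, 2.2, 2.4):   # W m^-2 K^-1, net feedback sum
    ecs = lambda0 * F2x / (1.0 - lambda0 * f)
    print(f"f = {f:.1f} -> ECS = {ecs:.2f} K")
```

        The AR5 central value f = 1.5 lands at about 2.2 K, in line with the §8.1 claim quoted above; reproducing the quoted interval endpoints exactly would require the paper’s precise parameter and transient choices.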

    • Might as well delete all of mine as well Dr. Curry. They have no logical context now with the other deletions. Although I did think the simple analogy might get through. Boy was I wrong.

      • Jeez, thanks for your efforts; it seems to be a lost cause, which is why I am trying to snip this discussion in the bud. Periodically we have a greenhouse-related thread which I let be a free-for-all. However, some try to turn every thread into a discussion of this.

      • I’ve only just arrived on the thread, with limited time, and greatly appreciate all efforts made to ‘stick to the knitting’ or I suppose one could say ‘snip back to the knitting”. To any valuable thoughts lost in the process: I bid you bon voyage in fond expectation that in later expression elsewhere you will have matured like fine wine.

  48. Some of the reasons why the models are off (OR RUN HOT) are that they neglect, or assign the wrong relative importance to, the following items:

    Geo-Magnetic Field Strength Of The Earth.

    Solar Variability and Associated Secondary Effects. Way underestimated.

    CO2’s Role – far too much importance.

    Initial State Of The Climate – probably wrong or incomplete.

    Lack Of Understanding Of Feedbacks. Especially clouds.

    Aerosols – The models do not address them properly.

    This is why even in hindsight the models cannot get it right, and why basic atmospheric predictions they made, such as the hot spot in the lower troposphere in the tropics and a more zonal atmospheric circulation pattern in response to global warming, have not come to pass.

    • From the start of the modelling efforts Trenberth was explicit that model runs were not initialized on real observations, but on arbitrary settings to illustrate possible mechanisms and scenarios. So unless initial conditions are irrelevant and attractors take over and end up in the same states no matter what, predictions are impossible. Real observations of sufficient scope and accuracy are far beyond present capability and instrumentation. (Cell grids many orders of magnitude smaller would be needed, for example. Most of the planet is covered by sparse, badly maintained weather stations, most of which are ignored. Etc.)

      So models are fed with mostly made-up stuff. Imagination in, imagination out.

  49. “Why models run hot.”

    I am always surprised that people focus on just this issue when evaluating the performance of GCMs. The simple answer is that the models poorly represent how the system will perform. Many bloggers and scientists alike seem to love to develop their pet hypotheses regarding the root cause of the inaccuracies, but that is generally a fool’s game (imo). The analysis to determine where a model is performing well vs. not performing well must be done by very detailed evaluation of each model.

    The truth is the current GCM’s do not seem to simulate the actual circulation very well at all. Look at the modeled changes of rainfall patterns vs. what has actually occurred over the last decade. Does any reasonable person believe that they can rely on current GCMs for forecasting future rainfall changes?

    • If more time was spent studying the climate instead of models of the climate, we might be getting somewhere.

    • You’ll see from a comment I just posted above that I don’t necessarily agree with the point about taking each model on its merits when evaluating them. The point is that GCMs are a class of models sharing much more commonality than diversity, and it is most likely that their problems derive from this commonality. This can be studied by looking at them as a collective.

      • AFAIK few of the models were developed de novo, but are the same or similar structurally with varying parameter inputs (substitutes for unobtainable or missing data or even data categories). Hoping for a good guess or combination of guesses.

  50. In December 2014, Willis posted GMT series generated by 42 CMIP5 models, along with HADCRUT4 series, all obtained from KNMI.
    http://wattsupwiththat.com/2014/12/22/cmip5-model-temperature-results-in-excel/
    We were able to analyze the temperature estimates of CMIP5 models and compare them with HADCRUT4 (1850 to 2014), as well as UAH (1979 to 2014). The models estimate global mean temperatures (GMT) backwards from 2005 to 1861 and forwards from 2006 to 2101.

    Bottom Line:
    In the real world, temperatures go up and down. This is also true of HADCRUT4.
    In the world of climate models, temperatures only go up. Some variation in rates of warming, but always warming, nonetheless.

    The best of the 42 models according to the tests I applied was Series 31. Here it is compared to HADCRUT4, showing decadal rates in degrees C over periods defined by generally accepted change points.
    Period       HADCRUT4   Series 31   31 minus HADCRUT4
    1850-1878      0.035      0.036           0.001
    1878-1915     -0.052     -0.011           0.041
    1915-1944      0.143      0.099          -0.044
    1944-1976     -0.040      0.056           0.096
    1976-1998      0.194      0.098          -0.096
    1998-2013      0.053      0.125           0.072
    1850-2014      0.049      0.052           0.003
    In contrast with Series 31, the other 41 models typically match the historical warming rate of 0.05C/decade by accelerating warming from 1976 onward and projecting it into the future.
    Over the entire time series, the average model has a warming trend of 1.26C per century. This compares to the UAH global trend of 1.38C per century, measured by satellites since 1979.
    However, the average model over the same period as UAH shows a rate of +2.15C/century. Moreover, for the 30 years from 2006 to 2035, the warming rate is projected at 2.28C/century. These estimates are in contrast to the 145 years of history in the models, where the trend shows as 0.41C per century.
    Clearly, the CMIP5 models are programmed for the future to warm at more than 5 times the rate of the past.

    More discussion at http://climateaudit.org/2015/02/05/marotzke-and-forsters-circular-attribution-of-cmip5-intermodel-warming-differences/#comment-753381
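    For anyone who wants to reproduce the period-by-period rates, a sketch of the calculation; here ‘years’ and ‘temps’ are placeholders for the annual GMT arrays (1850–2014) such as the KNMI exports linked above, which are not bundled here:

```python
# Sketch of the period-by-period decadal-rate comparison above, via OLS
# over fixed change points. 'years' and 'temps' are placeholders for the
# annual GMT arrays (1850-2014); they are not bundled here.
import numpy as np

breaks = [1850, 1878, 1915, 1944, 1976, 1998, 2014]

def decadal_rates(years, temps, breaks):
    """OLS trend in degrees C per decade for each change-point period."""
    rates = {}
    for a, b in zip(breaks[:-1], breaks[1:]):
        m = (years >= a) & (years <= b)
        slope = np.polyfit(years[m], temps[m], 1)[0]   # deg C per year
        rates[f"{a}-{b}"] = 10.0 * slope
    return rates

# Usage with real data would look like:
# years = np.arange(1850, 2015)
# print(decadal_rates(years, hadcrut4, breaks))
```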

  51. Was it a coincidence that the attack on Soon et al. by the green extremists commenced at the same time that Rajendra Pachauri was being investigated for sexual harassment and eventually had to resign as head of the IPCC and from other posts?

    Today we are discussing the burning at the stake of the brilliant scientist Willie Soon instead of Pachauri’s string of professional and personal blunders, culminating in the sexual harassment court case that eventually forced him to abandon the climate-catastrophist headship.

    I seriously believe that this was not a coincidence.

    • Curious George

      barackobama.com started a campaign to “call out deniers” at the same time. One more “coincidence.”

  52. Pingback: A frontier of climate science: the model-temperature divergence | The Fabius Maximus website

  53. BTW, are greenhouses (the ones you put plants in) amplifiers or not? I suspect not, but my understanding of physics isn’t that fine.

  54. David L. Hagen

    Einstein’s Razor Test
    Einstein’s Razor:
    “Everything should be kept as simple as possible, but no simpler.”

    Forecasting Principles Test
    Evidence – based Forecasting for Climate Change Policies
    Kesten C. Green, Willie Soon & J. Scot Armstrong, 2013

    The more sophisticated methods provided inconsequential improvements in forecast accuracy for horizons up to 10 years, beyond which horizon the no-change model provided the most accurate forecasts.

    (This paper directly addresses “ECS from CO2 doubling still ranges from 1.1 to 4.5 … FIFTY YEARS OF CLIMATE SCIENCE WITHOUT PROGRESS” and the upcoming post.)

    The Irreducible Climate Model by Monckton et al. fits both Einstein’s Razor and the Forecasting Principles; GCMs do not.

  55. Rud.
    clouds make ”milder climate” = shock absorbers – cooler days / warmer nights, NOT A WARMER PLANET!!! Because the propaganda takes into consideration only the hottest minute in 24h and ignores the other 1439 minutes = deceiving yourself and the others. Here is the ugly/unpleasant truth: https://globalwarmingdenier.wordpress.com/water-vapor-h2o/

    • By thinking cloud f ~0, I agree! But I lack observational proof. What I had as observational evidence in 3Q14 was written into the clouds essay. Regards.

  56. “If, for instance, the observed temperature trend of recent decades were extrapolated several decades into the future, the model’s output would coincident with the observations thus extrapolated (Fig. 6).”

    Linear thinking. But interesting… Climate is too hard….

  57. I wonder what it takes before the retraction of a paper is warranted, if even the Monckton et al. paper doesn’t fulfill the criteria for it. So-called “skeptics” have demanded the retraction of papers with which they don’t agree for much less than all the things that are outright wrong in the Monckton et al. paper. The paper doesn’t just contain severe methodological mistakes and unscientific assumptions, which render it fatally flawed. It also contains outright falsehoods. I’m speaking about a clear misrepresentation of facts, not just about disagreements over how something should be done. I’m speaking about things like the counterfactual, false statements about the content of the First Assessment Report of the IPCC that are presented in the paper.

    Before anyone complains that I wasn’t specific about what I think is wrong with the paper: I have done this already elsewhere. And others have too.

    • Heh, the models run hot. What’s your explanation?
      ===================

      • Do they “run hot”? What is the evidence for your claim?

      • Regarding GCM climate models running hot, see figure 11.25 in the AR5 report.
        https://curryja.files.wordpress.com/2014/01/1125.jpg

      • Jan P Perlwitz
        “Do they ‘run hot’? What is the evidence for your claim?”

        This is lazy. You could be suggesting a whole lot of things.

        Perhaps you think the models are not running hot relative to how hot they could be running, or perhaps you think the runs are not realistic because the inputs or the environmental assumptions aren’t valid, or perhaps you think it will all come out in the wash, or perhaps you think the global mean surface temperature isn’t the appropriate measure of a hot model, or perhaps you think the models were never meant to be any good until 50 years have passed…

        The conversation isn’t going anywhere if you start off like this.

      • HAS,

        So you think that asking for the information on which a claim is based was the wrong start. I’m supposed to talk about something w/o knowing what data, what assumptions, and what methodology were applied to derive a conclusion.

      • Monckton’s pocket calculator is running cold. If you take his model and apply his numbers to get the warming between 1950 and now you get about half the actual warming. Not much use for projection. Double it and you get a better transient value. Triple it for the equilibrium value.
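        A quick sketch of that arithmetic, assuming CO2-only forcing, roughly 310 ppm in 1950 and 400 ppm now, and treating “his numbers” as about 1 K per doubling (all three are illustrative assumptions):

```python
# Warming implied by a given sensitivity per doubling over 1950 -> now,
# with CO2-only forcing and assumed concentrations (illustrative values).
import math

doublings = math.log2(400.0 / 310.0)          # ~0.37 of a CO2 doubling
for sens in (1.0, 2.0, 3.0):                  # K per doubling
    print(f"{sens:.0f} K/doubling -> {sens * doublings:.2f} K since 1950")
```

        Against an observed rise of roughly 0.6–0.7 K, ~1 K per doubling indeed yields about half, and doubling or tripling it brackets the transient and equilibrium remarks above.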

      • Jan P Perlwitz
        “So you think that asking for the information on which a claim is based was the wrong start.”

        Yep.

        If you don’t disagree they run hot this is easily dealt with. Just say so.

        If you do tell him why.

        As of now I haven’t got the faintest idea what you think about the subject.

      • BTW so you understand my view I’m somewhat further back on GCMs.

        I worry about how they seem to model worlds in which the absolute temperature ranges in-sample over at least a couple of degrees even after tuning, and what that all means for their utility in a world where even basic physical properties are non-linear with temperature. Not to mention other stuff.

        My judgement at this stage would be to consolidate down to a couple of teams internationally working on GCMs, not to fund any science that simply applies the models’ output as if it represents the real world, and to use the funds released elsewhere.

        And I’d move all climate research groups that want to make statistical inferences under their local Prof of Stats.

        If only life were that easy.

      • Thanks for asking, Jan. The evidence of my senses. It doesn’t feel as hot as the models expect it to be. My eyes rest on a thermometer which doesn’t show as hot as the models expect it to be. I hear many learned authorities tell me that it’s not as warm as the models expected. Your conversation tastes funny and there’s something rotten in the state of modeling and I can smell it all the way over here.
        =================

      • David in TX

        Michael | March 3, 2015 at 12:40 am |

        “Opinion is evidence.”

        While that’s of course true in climate science, in this case we have facts as evidence that models are running hot:

        https://curryja.files.wordpress.com/2014/12/hawkins.png

        Thanks for playing. Better luck next time.

      • …….it’s within projections.

        Yawn.

      • It’s within updated/revised projections… yawn.

      • Judith, you write:

        “Regarding GCM climate models running hot, see figure 11.25 in the AR5 report.”

        Although one may suspect something like that from the figure that you show, because the last two data points of the observations have moved outside the 5-95% interval of the simulations, I don’t see how this figure is supposed to be sufficient for the conclusion that the models were “running hot”, for two reasons:

        1. With a 5-95% interval, it is permissible to be below the lower interval boundary 5% of the time (and above the upper boundary 5% of the time), since 10% of the probability distribution lies outside the interval boundaries. Instances when this happened can also be found within the period of the historic simulations from 1880 to 2005. And the last data point of the observations in the graphic is from 2012. The 2014 annual average of the observations has obviously moved back into the 5-95% interval, since it is about the same as the 2010 value or even a little bit warmer. (See the quick numerical check below.)

        2. Even if the projected temperatures for the time period after 2005 are indeed permanently higher than the observed temperatures, such a divergence is not sufficient for the conclusion that the models were “running hot” (meaning that their sensitivity to a forcing increase was too high). Why is it not sufficient? Because the models themselves aren’t the only possible cause for such a divergence. Another possible cause is that the net forcing change from the climate drivers, which are an input into the models, has been larger for the simulations than it actually has been in the real world. In such a case, even a simulation with an assumed perfect model, i.e., a model without any flaws by definition, would show such a divergence. Without taking this into consideration one would also conclude that the perfect model was “running hot”, which is a logical contradiction to the assumption of a perfect model. Thus, one must exclude first that the divergence is caused by a divergence of the forcings, before one can draw a conclusion that the models are the cause.

        In summary, I don’t think that the figure shown provides sufficient evidence for the correctness of the assertion that the models were “running hot”.
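        A small numerical check on point 1, assuming (generously) that years are independent draws; real years are autocorrelated, which makes clustered excursions below the envelope even more likely by chance:

```python
# Chance that at least 1 or 2 of 25 independent years fall below the
# lower bound of a 5-95% envelope purely by chance (p = 0.05 per year).
from scipy.stats import binom

n, p = 25, 0.05
print("P(>=1 year below):", round(1 - binom.cdf(0, n, p), 3))   # ~0.72
print("P(>=2 years below):", round(1 - binom.cdf(1, n, p), 3))  # ~0.36
```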

      • The other line of evidence is sensitivity determination: observation-based methods yield much smaller sensitivity than the climate models.

      • Jan, ” Thus, one must exclude first that the divergence is caused by a divergence of the forcings, before one can draw a conclusion that the models are the cause.

        In summary, I don’t think that the figure shown provides sufficient evidence for the correctness of the assertion that the models were “running hot”.”

        Did you graduate from the Willard School of Logic?

      • I agree with Jan P here, and this is a problem with the IPCC model effort. All the models have to use the same historic to current forcing, which could easily be wrong due to the uncertainty in aerosol forcing changes, for example. This can result in all of them sharing the same bias, that those who don’t understand will assume is the fault of the models’ AGW mechanisms, rather than a forcing error that they have in common. Had they been allowed to more fully explore the forcing uncertainty, we probably would have seen some with reduced trends in the 2000’s, and people would be talking more about the forcing uncertainty component.

      • Yes, I argued this same point in my uncertainty monster paper. They are starting to pay attention to this, courtesy of the pause

      • JimD, if forcings are uncertain, the uncertainty range should include that. 95% confidence should mean just that: there is only a 5% chance the observations would fall below that point. 95% confidence doesn’t mean “I am 95% confident I am right unless I happen to be wrong because I forgot something.”

        Since you are guessing along with Jan that it “might be forcing”, what if it might be feedback, or the dog ate my homework?

      • captd, the IPCC assessment’s range did not just use the models. They extended the model uncertainty range downwards, possibly because of the potentially overestimated forcing put into the models. Note that if this is aerosol growth, that is only a temporary reprieve unless we all plan to become like China.

      • Danny Thomas

        JimD,
        Interesting, and one I’ve wondered about myself. AR5, I believe, shows aerosols at +/-53%, but what were they in WWII, and then again in the 70’s, and also before 1900? This “warming” has an approximately 350 (certainly over 150) year track record.

        Just seems a bit selective to me and involves a lot of “IF’s” and “possiblies”.

      • The interesting question is where the onus of proof lies. I would have thought with the modellers.

        On the face of it they are running hot, and there are other, more credible hypotheses as to why this is than the forcing assumptions being wrong or what we see being an outlier.

      • JimD, and btw, the Crowley and Unterman 2013 version of aerosol optical depth, based on volcanic sulfates, doesn’t really improve things starting in 1850. The problem seems to start around 1700, pretty much in keeping with the Stadium Wave paper.

        https://lh4.googleusercontent.com/-OvQHzJy8gFU/VLF3XHImAqI/AAAAAAAAMGM/X1vv5Tx0kiI/w689-h411-no/oppo%2Bover%2Bmann.png

      • captd, on feedback, it is very hard to account for the 20th century warming, especially the recent ~0.7 C rise after 1950, without a significant positive feedback. This is why the models have that part in common.

      • captd, from your graph, it is impressive what happens after 1900, and this is exactly in line with the CO2 rise. It is very hard for the models to trace that unless they have a significant positive feedback to CO2, which may be why we haven’t yet seen a “skeptic” low-sensitivity model, because they would have failed to capture this well.

      • JimD, ” Note that if this is aerosol growth, that is only a temporary reprieve unless we all plan to become like China.”

        Nope, the arm-waving over the ~1910 to 1940 period is coming back to haunt. Probably the best but most under-utilized proxy for ocean heat content is the sea level reconstructions.

        http://www.psmsl.org/products/reconstructions/figure1.gif

        That looks like a long term persistent increase to me.

      • JimD, “captd, from your graph, it is impressive what happens after 1900, and this is exactly in line with the CO2 rise.”

        Yes, it does, but with a “sensitivity” of 1.6 excluding water vapor, likely due to a long-term persistent increase in OHC due to recovery.

        It looks to me like natural/internal variability is closer to +/- 0.3 C instead of +/- 0.1 C. I believe Dr. Curry has mentioned something along those lines as well.

      • Natural millennial variability (including solar and volcanoes) has a standard deviation of 0.2 C according to Lovejoy and could therefore include about 0.2 C of the 1910-1940 rise. But a rise of 0.8 C is four standard deviations, and very unlikely to occur by chance. You need a type of natural variability that coincidentally correlates in sign with a CO2 forcing change, which is therefore doing a good impression of a positive feedback.
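
        As a quick check of that “four standard deviations” arithmetic, assuming Gaussian natural variability with the 0.2 C standard deviation quoted above (the numbers here are the ones in this comment, not taken independently from Lovejoy):

        ```python
        import math

        sigma = 0.2   # assumed std. dev. of natural variability (deg C)
        rise = 0.8    # warming to be explained (deg C)
        z = rise / sigma   # 4.0 standard deviations

        # one-sided Gaussian tail probability P(X > z * sigma)
        p = 0.5 * math.erfc(z / math.sqrt(2))
        print(f"z = {z:.1f} sigma, P ~ {p:.1e}")   # ~3.2e-05
        ```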

      • JimD, “You need a type of natural variability that coincidentally correlates in sign with a CO2 forcing change, which is therefore doing a good impression of a positive feedback.”

        That is exactly what Dr. Curry mentioned: that natural variability, the 1976 Pacific shift, was amplifying CO2. If you look at Oppo 2009, there are quite a number of oscillations in the +/- 0.2 to 0.3 range in the tropics. Most of them aren’t timed with volcanic events but instead have very inconsistent lags, so it is hard to tell what may have forced them, and when.

        https://lh4.googleusercontent.com/-4NOhaf24yrY/VPaY58vRvUI/AAAAAAAAM9c/Db4o-rwVC40/w843-h401-no/past%2Bclimate%2Boppo%2Band%2Bslr.png

        Sea level rise and the borehole reconstructions provide a smoothed reference for a mean of sorts, but there isn’t any clear cut “atmospheric” forcing that explains things.

      • captd, check your graph. It is a century-long series of variations in one direction, not just since 1976. However, about 0.5 C has come in that period. It’s fast rises and pauses instead of rises and falls because of the background trend.

      • Jan wrote:
        “Because the models themselves aren’t the only possible cause for such a divergence. Another possible cause is that the net forcing change from the climate drivers, which are an input into the models, has been larger for the simulations than it actually has been in the real world.”

        Hold on there.
        IPCC consensus projections = Models + assumed forcings

        It doesn’t matter if the problem is with the models or the forcings. The end result is the same: the IPCC consensus is wrong.

        That is exactly what Dr. Curry mentioned: that natural variability, the 1976 Pacific shift, was amplifying CO2. …

        Except it largely stopped doing that by 1985, and was unable to exert any noticeable negative influence on the SAT until 2006, at which time its negative phase was near the end. So here we are in the midst of an ENSO-neutral heatwave, with an El Nino watch back in place for the NH summer, and the 12 months ending on Feb 28th looking to be .69C to .71C on GOTI. And there is no end in sight to this monthly record setting until at least July.

        But the important thing was to claim 2014 was not the warmest year. LMAO.

      • Danny Thomas

        JCH,

        You probably will have a challenge with the way I view this too: “But the important thing was to claim 2014 was not the warmest year. LMAO.”

        The term was warmest/hottest year EVER! And how was that temperature derived? This, in conjunction with the “pause”/“faux pause”, is not the problem. It’s the propaganda/transparency/honesty. Your perception is that the kerfuffle is over its ranking, and mine is too. What does it matter (except for propaganda) to say it was one of the warmest in a trend of warmest years? For this observer, it would have been more credible to say it was one of the warmest in a series, especially if one considers the confidence levels. Just showing my perspective, and my lack of understanding of why this nitpick has to be won by the CAGW folks. Unless it’s designed to validate models.

      • JimD, “captd, check your graph. It is a century-long series of variations in one direction, not just since 1976. ”

        The rise starting in ~1700 would be about 315 years, or “centuries long.” The increase in temperature is about 0.9 C, which, based on current rates of ocean heat uptake, would take “centuries.” Using SLR as a reference, the IPWP temperature moves above and below the reference with excursions of decreasing amplitude. That would be called a weakly damped response by some. It looks something like this.

        https://lh5.googleusercontent.com/-a7GVlqIRoB8/UIHS525YllI/AAAAAAAAFAo/Ud_UUg-9bdw/w491-h339-no/weakly%2Bdamped%2Bsignal.png

        If you estimate the range of variability near the end of the response, it would be low; at the start, it would be high.

        btw, if you don’t like Oppo 2009, you can use Ljungqvist and Christiansen’s ocean reconstructions and get the same thing. Heck, even the Emile-Geay Nino3.4 reconstruction has close to the same response.

      • I love it. The models are not running hot because the flaw in the models makes them run hot. Well explained. Now I can rest.
        ===============

      • JCH, “Except it largely stopped doing that by 1985, and was unable to exert any noticeable negative influence on the SAT until 2006, at which time its negative phase was near the end. So here we are in the midst of an ENSO-neutral heatwave, with an El Nino watch back in place for the NH summer, and the 12 months ending on Feb 28th looking to be .69C to .71C on GOTI. And there is no end in sight to this monthly record setting until at least July.”

        So you like ENSO,

        https://lh3.googleusercontent.com/-JDDjman7m4I/VHX3peO_mKI/AAAAAAAAL2Y/AHyZCysQqx0/w679-h479-no/nino%2B3.4%2Bemile-geay%2B2012.png

        According to that reconstruction the recovery started around 1750, and once surface temperatures reach a convective trigger temperature, temperatures tend to plateau. Plateauing at the end of a centuries-long rise would tend to produce lots of nearly-warmest years EVAH because you are, wait for it… at the plateau. Quibbling over a few hundredths of a degree is something I will leave to youse guys.

      • kim, it is the input, the forcing, that could be too strong due to a lack of enough aerosol effect. The error bars are very wide on the aerosol effect. All the models use a current total forcing near 2.3 W/m2, but it could easily be rather less than 2 W/m2, which makes all the difference between the models running hot or cold. It is an important first-order consideration, often neglected by the model critics.
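
        To see how much that forcing difference matters, here is a minimal energy-budget sketch; the 0.8 C warming, 0.5 W/m2 ocean uptake, and 3.7 W/m2 per doubling are round illustrative values, not numbers from any particular study:

        ```python
        def ecs_energy_budget(dT, dF, dQ, F2x=3.7):
            """Simple global energy-budget estimate: ECS = F2x * dT / (dF - dQ)."""
            return F2x * dT / (dF - dQ)

        # illustrative round numbers (assumptions for this sketch only)
        dT, dQ = 0.8, 0.5   # warming (K) and ocean heat uptake (W/m2)
        for dF in (2.3, 2.0):
            print(f"dF = {dF} W/m2 -> inferred ECS ~ {ecs_energy_budget(dT, dF, dQ):.2f} K")
        # dF = 2.3 -> ~1.64 K;  dF = 2.0 -> ~1.97 K
        ```

        The same few tenths of a W/m2 of aerosol uncertainty that determine how much the models “should” have warmed also move any observationally inferred sensitivity by roughly 20%.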

      • stay tuned for a guest post from Nic Lewis on this topic

      • Judith, you write:

        “The other line of evidence is sensitivity determination: observation-based methods yield much smaller sensitivity than the climate models.”

        Could you explain this statement, please? Here is Figure 1 of Box 12.2 of the WG I AR5 Report:

        http://www.ipcc.ch/report/graphics/images/Assessment%20Reports/AR5%20-%20WG1/Chapter%2012/FigBox12.2-1.jpg

        I don’t see from the figure what you claim. I only see this being the case for some studies, where the sensitivity estimates based on observational data are lower than the ones from the complex climate models.

        Perhaps you are thinking of only a select few studies, like Lewis (2013) and Lewis and Curry (2014). But more studies are out there, as one can see from the figure I linked above.

        A couple of other new studies:
        Lovejoy (2014), http://dx.doi.org/10.1007/s00382-014-2128-2 estimates an effective climate sensitivity of 3.08+/-0.58 K for CO2-doubling for the instrumental period since 1880.

        And just published in Nature, Martinez-Boti et al. (2015), http://dx.doi.org/10.1038/nature14145 estimate a 68% confidence interval of 3.0-4.4 K for the ECS, based on Pliocene and Pleistocene data.

        As for your own climate sensitivity estimates: you also use a model, even if it is a simple energy balance model. You are making assumptions, like the assumption of a globally uniform forcing and that the climate sensitivity is the same for all climate drivers. You very likely know that this has been challenged in Shindell (2014), http://dx.doi.org/10.1038/nclimate2136. According to this study, the climate sensitivity to aerosol and ozone changes was substantially larger than for CO2, because those changes were concentrated more in the middle and high latitudes of the Northern Hemisphere.

        Your estimates are also sensitive to the assumptions on the strength of the aerosol forcing. The aerosol forcing is not just a problem for the CMIP5 model simulations.

        In summary, I also don’t agree with your statement that the climate sensitivity estimates are evidence for the assertion that the models are “running hot”. The evidence is not conclusive here, either.

      • Two major posts coming on this topic, stay tuned.

      • > Did you graduate from the Willard School of Logic?

        Beware your wishes, Cap’n.

    • I would appreciate links to any other critiques of this paper that you know of

      • Arthur Smith examined the paper:
        http://arthur.shumwaysmith.com/life/content/the_monckton_equation

        I myself wrote some comments on it (in addition to my blog post):
        https://docs.google.com/document/d/1ny6UAs5I9_PBKoIle7tptLXjeAuQYTaznhxXvTrZLJU/edit

        ATTP picked on the absurd appeal to process engineers designing electronic circuits, which the authors used to postulate a very small closed-loop gain factor in the Bode system-gain equation of their model, thereby prescribing the strength of feedbacks:
        https://andthentheresphysics.wordpress.com/2015/01/15/the-designers-of-our-climate/

        There is more in the discussion under the mentioned blog post, like the issue of the instantaneous redistribution of energy throughout the whole model climate system in response to a radiative perturbation, which implies a very small heat capacity in their model. That is not physical, considering the huge heat capacity of the real climate system. Under the blog post, there is also an exhaustive discussion of Monckton’s claim that the Bode system-gain equation is used in the complex climate models. A claim that is false. His claim that this was based on the Hansen (1984, 1988) papers was thoroughly rebutted there. Monckton was also challenged by me to show where the Bode system-gain equation is used in GISS ModelE, since he claimed this and presented himself as someone who knew the code. Nothing. Despite all that, he recycles these assertions here in his reply to Rud Istvan.
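
        For readers who have not followed the Bode argument: the system-gain form at issue is simply G = 1/(1 - g), where g is the closed-loop gain. A short sketch (the 1.2 C no-feedback response is the usual round value, and the g values are merely illustrative) shows why the choice of g dominates everything else:

        ```python
        lambda0 = 1.2   # no-feedback (Planck) response per CO2 doubling, deg C

        # Bode system-gain form: final response = lambda0 / (1 - g)
        for g in (0.0, 0.3, 0.5, 0.64, 0.75):
            print(f"g = {g:.2f} -> response ~ {lambda0 / (1 - g):.2f} C")
        # g = 0.64 gives ~3.3 C; as g approaches 1 the response blows up,
        # which is why circuit designers keep closed-loop gain well below 1.
        ```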

      • Jan, thanks for the links. As far as I can tell, the main potentially substantive critique of Monckton’s model is the interpretation of r_t.

      • Jan P Perlwitz obviously has access to the code used by GCM modellers! This is a breakthrough for Climate Etc readers because he is in a position to tell us what parameters the models are using and indeed, why they are not very good at matching the data record!

      • David in TX

        It appears there are no critiques other than knee-jerk reactions on warmunist blogs.

        It must really suck if you’re a climate priest in the IPCC Church of Carbon Sin who’s devoted his life to “the cause,” and then along comes a simple climate model, ginned up one night in a pub on the back of a napkin, that accurately tracks observed global average temperature.

      • Jan P Perlwitz:

        ATTP picked on the absurd appeal to process engineers designing electronic circuits, which the authors used to postulate a very small closed-loop gain factor in the Bode system-gain equation of their model, thereby prescribing the strength of feedbacks: [etc]

        This argument is wrong. It’s a matter of time scale.

        If we use the electronic-system analogy, higher-frequency poles dominate the short-period response of the system. Suppose you have poles at 3 months, 100 years, and 2000 years.

        If you want to describe the response of the electrical system to a high-frequency driver, and want to restrict yourself to the current time scale of given high-fidelity observations (which really is only since about 1960), you would only need a single-pole system to fully describe the response.

        Anybody who understands system analysis would have recognized this immediately as a problem with Anders’ argument. Which should tell us what we need to know about that blog and what passes for intellectual discourse there, if nobody on that blog post brought it up.

        I’d refer to the single-pole system as a “single-box” model.

        But these get used all over the place in climate science.

        And remarkably nobody till this point thought there was anything wrong with doing that.

        Probably because there isn’t.

        Yes you can describe more phenomena with a higher order model. (NSS.) But there are many things that are amenable to a single-box model too.
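
        For concreteness, this is what a “single-box” (single-pole) model looks like in code; the heat capacity, feedback parameter, and step forcing are illustrative placeholders, not values from MSLB or anyone else:

        ```python
        import math

        # one-box energy balance: C * dT/dt = F - lam * T
        # -> exponential relaxation with a single time constant tau = C / lam
        C = 8.0     # effective heat capacity (W yr m^-2 K^-1), illustrative
        lam = 1.2   # feedback parameter (W m^-2 K^-1), illustrative
        F = 3.7     # step forcing, roughly a CO2 doubling (W m^-2)
        tau = C / lam

        for t in (1, 5, 10, 30, 100):   # years after the step
            T = (F / lam) * (1 - math.exp(-t / tau))
            print(f"t = {t:3d} yr: T = {T:.2f} K of {F / lam:.2f} K equilibrium")
        ```

        On time scales short compared with the slower poles (the deep ocean), this one-pole response is all the data can constrain, which is the point being made above.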

      • Peter Davies,

        “Jan P Perlwitz obviously has access to the code used by GCM modellers!”

        Everyone with an Internet connection has access to the code of various Earth system models.

      • Matthew R Marler

        Jan P Perlwitz: Under the blog post, there is also an exhaustive discussion of Monckton’s claim that the Bode system-gain equation is used in the complex climate models. A claim that is false. His claim that this was based on the Hansen (1984, 1988) papers was thoroughly rebutted there.

        I did not find that rebuttal at that link. I didn’t read the whole thing because what I did read was not very informative.

        Climate science is full of approximations and simplifications, including the use of “equilibrium” simplifications for a process that is dynamic and never even in steady state. Monckton et al. did not claim that the climate system was “designed by an electronics engineer”, or even that it was “designed”. They observed that the climate is “stable” in the sense that, as far as reasonable evidence goes, oscillations have kept the global mean temperature within such bounds that an “explosive” positive feedback is not a reasonable expectation. Why exactly is the use of the Bode equation to model this stability any less reasonable a priori than the many counterfactual assumptions of equilibrium?

      • Waiting for Jan Doe.
        ===============

      • swood1000,

        “Isn’t this why they gave different scenarios?”

        Exactly. Thus, why would the authors of the FAR state that there was “substantial confidence” in this specific near-term warming rate that was explicitly mentioned in the Monckton et al. paper, if this warming rate would be realized only if emission scenario A were realized, and they didn’t know whether this scenario would be realized?

        “And isn’t it appropriate for us to select the scenario closest to the one that actually transpired (Business-as-Usual)?”

        What makes you think that this was the “business-as-usual” scenario of the FAR? Have you checked? Or are you just assuming?

        Because I have checked. It isn’t the one.

        According to the FAR:
        Approximate total forcing (F) increase from 1990 to 2010, computed as F_2000 - F_1990 + (F_2025 - F_2000)/25*10 (from page 54, Table 2.6 and page 57, Table 2.7):
        Scenario A: 1.156 W/m^2
        Scenario B: 0.732 W/m^2
        Scenario C: 0.646 W/m^2
        Scenario D: 0.602 W/m^2

        Approximate real world forcing increase from greenhouse gases from 1990 to 2010, inferred from AR5 report, page 677, Figure 8.6d:
        0.625 W/m^2.
        (http://www.ipcc.ch/report/graphics/images/Assessment%20Reports/AR5%20-%20WG1/Chapter%2008/Fig8-06.jpg)

        Thus, the real world forcing change due to greenhouse gases from 1990 to 2010 rather resembles the ones of Scenario C and D in the FAR. Much lower than the one in Scenario A of the FAR. This is what I get when I compare the actual numbers.
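
        For anyone re-checking that arithmetic: the 1990-2010 increase is obtained by linearly interpolating the FAR table values over the 2000-2025 segment. A sketch of the calculation (the F_1990, F_2000, and F_2025 inputs must be read from FAR Tables 2.6 and 2.7, which are not reproduced here):

        ```python
        def forcing_increase_1990_2010(F_1990, F_2000, F_2025):
            """Approximate 1990-2010 forcing increase from the FAR tables:
            the full 1990-2000 change plus 10/25 of the 2000-2025 change."""
            return (F_2000 - F_1990) + (F_2025 - F_2000) / 25 * 10

        # applying this to each scenario's table row reproduces the four
        # values quoted above (1.156, 0.732, 0.646, 0.602 W/m^2)
        ```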

        “In the first place, this is not a list of the “large scale” features. It is a list of “The Large-Scale Changes On Which The Models Agree.”

        OK.

        “Under “Temperature changes” are discussed warming of the earth’s surface and of the troposphere, along with the global mean warming. So I am perplexed by how you conclude that such warming is not a “broad scale feature.”

        I didn’t say that. I myself mentioned broad scale features of the temperature increase in a comment above as an example, like the larger warming over land than over the oceans. But the whole Chapter 5 is about equilibrium climate change. It’s even stated in the header of every page of the chapter. The assessment of this was indeed done with general circulation models of the atmosphere. Nothing in this specific chapter is about the quantification of the transient climate response for specific years, though, and certainly not for the 35 years from 1990 to 2025, i.e., for the near-term climate change.

        “Don’t you agree that (a) Scenario A was the projection in case no steps were taken to reduce greenhouse gas emissions,…”

        Well, steps were taken (Montreal Protocol). And other factors contributed too, so that this scenario didn’t realize.

        “Why do you keep referring to the other scenarios when it is clear that they are inapplicable?”

        Because they aren’t inapplicable. That’s what the numbers about the forcings tell me, as laid out above.

        “If the projection is a 1°C rise in temperature, along with an uncertainty range, are you saying that it is incorrect to say that the projection is a 1°C rise in temperature?”

        The range of uncertainty should be made clear to make a more precise statement, particularly if the uncertainty is quite large.

        “What then does “substantial confidence” mean? Are you saying that since all model projections have an uncertainty range, “substantial confidence” has no meaning?”

        As it is used in the FAR, I understand it rather as an expert judgement, particularly since a number of those “broad scale features” of climate change are described qualitatively, without real quantification. The statements about those were based on equilibrium simulations of CO2-doubling with GCMs.

        “Your assumption is that a 1°C rise in temperature is not includable as a “broad scale feature of climate change,” despite the fact that global mean temperature changes are specifically mentioned among “The Large-Scale Changes On Which The Models Agree.”

        But those models can’t have agreed on the specific “1°C rise in temperature” (equal to 0.3 deg. C per decade, with a range of 0.2 to 0.5 deg. C/decade) until 2025, since those models weren’t used for this estimate in the first place.

        As for your question in the other comment about the zero-dimensional energy balance model: that a box-diffusion-upwelling model was used for the projections of the transient changes for the four scenarios is stated in the Annex of the FAR under A.5 (page 336). The model used is described in more detail in Section 6.6 (page 187ff.).

        This all has become much too long again.

      • > This all has become much too long again.

        This happens with many questions.

        The acknowledgement of that too long response seems quite short.

        Crickets.

      • Jan P Perlwitz –
        I responded to you back on the original thread: http://judithcurry.com/2015/03/01/lessons-from-the-irreducibly-simple-kerfuffle/#comment-681119

    • Jan,

      I’m speaking about a clear misrepresentation of facts,

      No, you’re not. You’re simply typing aspersions.

      • You can’t mean…Teachers Union? Where did that get U all?

      • Somebody taught somebody a thousand years ago that Rhino make you so…

        They learned it so well that today starving Africans have to slaughter the few remaining just to entertain the rich.
        And the rich keep track of what remains. To some degree…

        Nothing new under the Sun and it’s too bright outside right now to read your…

      • aaron,

        “No, you’re not. You’re simply typing aspersions.”

        Do I? So what about we look at the paper by Monckton, Soon, Legates, and Briggs? I only need to go to the second sentence in the abstract of the paper, which states:

        “In 1990, the First Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) expressed ‘substantial confidence’ that near-term global warming would occur twice as fast as subsequent observation.”

        This is then repeated in the first sentences of Section 2, together with numbers attached for the near-term global warming as projected in the FAR (for one scenario of four), to which “substantial confidence” was allegedly assigned in the FAR.

        However, the fact is that no statement whatsoever was made in the FAR according to which a near-term global warming rate was projected with “substantial confidence”.

        What do you call it, then, when someone makes up statements that cannot be found in the scientific publication being referenced, if not a clear misrepresentation of facts?

        It is particularly relevant in this case, because Monckton et al. put so much emphasis on the near-term projections in the FAR and the statement of “substantial confidence”. They want to say something by doing this, don’t they? Otherwise they wouldn’t have put it in the abstract.

        In one of the comments here, Monckton now says that if he misrepresented something, he didn’t do so knowingly. True, mistakes can happen. Sometimes researchers misinterpret statements from the papers they cite. But what if the misinterpretation is relevant for the conclusions? And how can you misstate something like that just by accident? Why do I have a hard time believing that? Well, perhaps when a strong confirmation bias is in play, one that severely distorts your perception of reality. (And how did the peer reviewers of the paper let something like that pass?)

      • What bugs me, Jan, is the insubstantial confidence that has grown in the last quarter century.
        ===================

      • Jan P Perlwitz –

        However, the fact is that no statement whatsoever was made in the FAR according to which a near-term global warming rate was projected with “substantial confidence”.

        The FAR said the following:

        “Nevertheless, for reasons given in the box overleaf, we have substantial confidence that models can predict at least the broad scale features of climate change.”

        Did the models then predict that in the 35 years 1991–2025, global temperature would rise by 1°C?

      • swood1000,

        “The FAR said the following:

        ‘Nevertheless, for reasons given in the box overleaf, we have substantial confidence that models can predict at least the broad scale features of climate change.'”

        Yes, they said this with respect to “the broad scale features of climate change”, such as larger warming over land than over sea or larger warming in the high latitudes of the Northern Hemisphere in winter. These are “broad scale features”.

        But no such statement was made with respect to any projected specific near-term warming rate.

        Did the models then predict that in the 35 years 1991–2025, global temperature would rise by 1°C?

        For the projections in the FAR? What models do you mean by “the models”? General circulation models, i.e., the “complex” climate models? No, they didn’t.

        Or do you mean “the models” actually used for the projections of the transient climate change in the FAR? The answer depends on the scenario. The projection for scenario A was about 1 deg. C global surface warming until 2025 (also note the statement in the FAR that the rise wouldn’t be “steady because of the influence of other factors”). The projection for scenario B was about 0.2 deg. C per decade, which makes about 0.7 deg. C until 2025. For both Scenarios C and D, 0.1 deg. C per decade. (Page XI)

        No “substantial confidence” was assigned to any of those specific quantifications of the surface warming rates in the FAR. On the other hand, it was pointed out that there were “many uncertainties in our predictions particularly with regard to the timing, magnitude and regional patterns of climate change, due to our incomplete understanding …”. (Page XII) The major uncertainties were elaborated further down in the report.

      • You can split hairs about the language but the observations are clear… global temps have been warming at about half the rate of the ensemble of climate model forecasts.

        I design statistical forecast models and do verifications. To me, it’s obvious the models “run hot”. They always do. Either unknown forcings are bigger than generally claimed, or sensitivity is much lower than in the GCMs. Pick one. If it’s the first, then we need to spend more research resources on non-greenhouse causes of climate change. If it’s the second, which I suspect, then we need to turn down the sensitivity knob, fix the parameterizations, or simply modify the output with statistical bias adjustments.

        The old LFM weather forecast model (circa 1979-1984 ??) had a QPF bias of 2 to 1. I’ve been told that in the waning days of the model that they actually found a coding error that doubled all the precip amounts. They didn’t change it, though, because forecasters and statistical systems were already adjusting for it.

        So, if the GCMs keep on with their over-forecasting… and I suspect they will… then no one will take them seriously, and everyone will simply cut them in half.

      • Jan P Perlwitz –

        The projection for scenario A was about 1 deg. C global surface warming until 2025

        Scenario A was the “Business-as-Usual” scenario. So the FAR said that they had “substantial confidence that models can predict at least the broad scale features of climate change,” and the models predicted a 1°C rise in temperature, and your accusation of a “clear misrepresentation of fact”

        However, the fact is that no statement whatsoever was made in the FAR according to which a near-term global warming rate was projected with “substantial confidence”.

        rests on your assertion that a 1°C rise in temperature is not a “broad scale feature of climate change.” Does your certainty about this definition really rise to the level that allows you to make such accusations without at least disclosing your assumptions?

      • Mary –

        ==> “…global temps have been warming at about half the rate of the ensemble of climate model forecasts.”

        What is the time span that you use to make that determination?

      • I looked at a handful of plots of model projections (as far back as 1979) and forecasts from Hansen (1988) and the IPCC 1990 up to the current date. The back-of-the-envelope ballpark has been: observed warming = 50% of forecast. That’s actually a bit generous. I like to use “Wood for Trees” for observations. It’s the most robust.

        I would welcome a more rigorous analysis but don’t have the time or data. I’m amazed that these simple verification statistics are so rarely produced. I think the models are a great tool. Skeptics say they are worthless, but I argue they have been getting the signal correct. They forecast warming… we’ve had warming. But it’s obvious they have been running hot. After more than a quarter century of data in the books, I will assume they will keep running hot by a factor of 2 unless the sensitivity is adjusted or other model changes are made to correct the bias.

      • swood1000,

        “Scenario A was the ‘Business-as-Usual’ scenario.”

        That’s how they named it. For this scenario, specific assumptions were made about how emissions would develop after 1991. The assumptions were modified for the other three scenarios. The authors of the report couldn’t know how the emissions would actually realize in the world after 1991. And they don’t state anywhere that they would know that. On the other hand, the FAR states:

        “How much confidence do we have in our predictions?
        Uncertainties in the above climate predictions arise from our imperfect knowledge of
        • future rates of human-made emissions
        • how these will change the atmospheric concentrations of greenhouse gases
        • the response of climate to these changed concentrations”

        (Page XVII)

        The first two points are directly related to the uncertainties how real world greenhouse gases in the atmosphere will develop, i.e., to the uncertainty what scenario will be closer to the real world after 1991. The last point is related to the uncertainty in the climate sensitivity.

        “So the FAR said that they had “substantial confidence that models can predict at least the broad scale features of climate change,”

        Why don’t you read Section 5.2.2 in the FAR? The “large-scale” features of the simulated climate change are described there. It’s explicitly stated in the title of this section. (Page 139 ff)

        “and the models predicted a 1°C rise in temperature”

        Only as a conditional projection. The 1 deg. C by 2025 is projected by “the models” only for the case that the emissions of Scenario A are realized in the world. The FAR presents three more possible scenarios. Note that even 0.3 deg. C per decade isn’t a precise statement of what was projected with the models used for Scenario A in the FAR. This projection has an uncertainty range, which spans the interval from 0.2 to 0.5 deg. C / decade (Page XII). Thus, even a global surface warming of 0.2 deg. C/decade wouldn’t contradict the projection for the near-term global surface warming under the conditions of Scenario A.

        “and your accusation of a “clear misrepresentation of fact [quote] rests on your assertion that a 1°C rise in temperature is not a ‘broad scale feature of climate change’.”

        How is it supposed to even make sense that a specific quantification over a specific time interval for a specific scenario was such a “broad scale feature” to which “substantial confidence” was assigned, if there are three more scenarios? Monckton et al. neglected to use the other scenarios for their comparison, or at least to mention them. Instead, they display it in their paper as if there were only the one prediction in the FAR. Which is another misrepresentation of the content of the FAR. They only chose the one scenario from the FAR with the largest positive difference of the forcings relative to the real world forcings. Then the difference between projection and observed trend is presented as “evidence” for a too high climate sensitivity of the models, which is fallacious with respect to logic. They did the same in Figure 6 with the scenarios from the Hansen et al., 1988 paper. Again, the projection using the scenario with the largest difference of the forcings relative to the real world forcings is picked as “evidence” for the alleged too high climate sensitivity of the general circulation model.

        “Does your certainty about this definition really rise to the level that allows you to make such accusations without at least disclosing your assumptions?”

        My “assumption” is that if something isn’t stated in the FAR report then it isn’t meant. But what kind of “assumption” is that? Why would I have to “disclose” such an “assumption”?

        Monckton et al. claim:

        “In 1990, FAR predicted with ‘substantial confidence’ that, in the 35 years 1991–2025, global temperature would rise by 1.0 [0.7, 1.5] K, equivalent to 2.8 [1.9, 4.2] K century-1.”

        I don’t need to make any “assumption” that such a statement as asserted here can’t be found in the FAR. It is not in there. Period. You are the one who is making assumptions, such as that the authors of the FAR meant something they didn’t state. You are also assuming that the authors of the FAR predicted that the emissions of Scenario A would be realized, instead of the ones from the other scenarios. Which they didn’t. Or how would you even claim that the conditional prediction for Scenario A was the one to which “substantial confidence” was assigned, in contrast to the predictions for the other scenarios?

        You didn’t answer my question about what models you mean when you say “the models” with respect to the FAR. I asked you that because here we have another misrepresentation of facts in the Monckton et al. paper. The authors present the projections of the near-term global surface warming of Scenario A from the FAR as projections done with general circulation models and as such as “evidence” for the claim that the general circulation models were “running hot”. However, the fact is that these projections weren’t even done with general circulation models. Instead, they were derived with calculations using a simple Zero-dimensional energy balance model of the atmosphere (i.e., the same type of model as used in the Monckton et al. paper) coupled to a simple ocean model. Or is this not a misrepresentation of facts? If it isn’t what is it?

      • You should be a defense attorney… but all arguing over the phrasing and who said what doesn’t change the simple fact that forecast minus observed is “running hot”.

        Let’s look at it another way. Show me a forecast model or “Zero-dimensional energy balance model of the atmosphere” or forecast from the IPCC or from Hansen with a significant time frame (at least 15 years or so) that has “run cold” . There might be some…but I can’t think of any. They have all run hot all the time.

        As a stat person, I like my forecast errors evenly and randomly scattered without bias. With climate forecasts, they always (almost always?) run hot. That’s a serious problem.

      • Jan
        here are the original figures from FAR

        FAR modeled a BaU of 800 ppm CO2 by 2100 in Fig. A.3:
        https://klimaathype.files.wordpress.com/2015/03/ipcc-far-figa-3a.jpg

        “In 1990, FAR predicted with ‘substantial confidence’ that, in the 35 years 1991–2025, global temperature would rise by 1.0 [0.7, 1.5] K, equivalent to 2.8 [1.9, 4.2] K century-1.”

        Here is FAR Fig. A.9:

        https://klimaathype.files.wordpress.com/2015/03/ipcc-far-figa-9.jpg

        QED

      • Mary Brown,

        ‘You should be a defense attorney… but all arguing over the phrasing and who said what doesn’t change the simple fact that forecast minus observed is “running hot”.’

        I feel like I’m in a time loop. After the exchange with Judith Curry above, where I had already replied to what she said was evidence for the correctness of this claim, the initial assertion is simply repeated by the next person. It’s recursive. And I’m supposed to accept it because this person is doing some hand-waving and claims that she knows it all.

      • “And I’m supposed to accept it because this person is doing some hand-waving and claims that she knows it all.”

        No, you are supposed to accept this because the 1990 forecast was for a temp rise of 0.7 deg by 2015 and the observed was 0.32 deg. When we point out the warming has been half what has been forecast, it is easily verified.

        http://www.woodfortrees.org/plot/wti/from:1990/to:2014/plot/wti/from:1990/to:2014/trend

        It’s simple forecast-observed. My first grader can easily handle the math.
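
        Spelled out, using the two numbers quoted in this sub-thread:

        ```python
        forecast = 0.70   # FAR Scenario A warming 1990-2015, as read off Fig. A.9 (deg C)
        observed = 0.32   # observed trend over the same period, per the comment above (deg C)

        print(f"observed / forecast = {observed / forecast:.2f}")   # ~0.46, i.e. roughly half
        ```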

      • Hans Erren,

        “QED” for what, exactly? I know these figures. What are they supposed to prove or refute? Please explain.

        There were four scenarios resulting in four conditional projections in FAR.

      • We are following a CO2 BaU, see fig A.3.a,

        FAR forecast for BaU

        “that, in the 35 years 1991–2025, global temperature would rise by 1.0 [0.7, 1.5] K, equivalent to 2.8 [1.9, 4.2] K century-1.”

        That’s Figure A.9. So Monckton is correct.

      • Hans Erren,

        “We are following a CO2 BaU, see fig A.3.a,”

        Yes, in the case of the greenhouse gas CO2. And nitrous oxide in the real world seems to be close too, from visual examination. On the other hand, it is not true for methane, CFC-11, CFC-12, or HCFC-22. The scenarios in the FAR didn’t just assume changes in CO2, but also in those other greenhouse gases. Compare Figure A.3 in the FAR with the following recently measured data:
        http://cdiac.ornl.gov/pns/current_ghg.html

        CFC-11 and CFC-12 are even lower today than assumed for scenarios C and D in the FAR. The fact that the other greenhouse gases have been increasing more slowly or even decreasing (in the case of the CFCs due to the Montreal Protocol) is important, since these gases have a much larger Global Warming Potential than CO2.

        Other important forcings hadn’t been taken into consideration at all for the scenarios in FAR yet, like the ones from volcanic and tropospheric aerosols or trends in solar forcing.

        It comes down again to what I said further above. If one wants to do a proper comparison between climate projections and observations to evaluate the skill of the models, one first has to adjust for possible differences between the prescribed forcings (or the inputs from which the forcings are calculated) and the real-world forcings (or the climate drivers that are the real-world equivalents of those inputs). Differences between projections and observations can result either from differences between prescribed and real-world forcings, i.e., from differences in the input, or from model deficiencies. If one doesn’t do this, one can’t draw any valid conclusions from such a comparison about model skill. All one could conclude from a difference between projections and observations is that the projections missed the observations, and that’s it, without being able to say anything about what caused this, or specifically about the models.

        Monckton et al. totally neglected this in their paper (besides all the misrepresentations).

      • Jan P Perlwitz –

        The authors of the report couldn’t know how the emissions would actually realize in the world after 1991. And they don’t state anywhere that they would know that.

        Isn’t this why they gave different scenarios? And isn’t it appropriate for us to select the scenario closest to the one that actually transpired (Business-as-Usual)?

        Why don’t you read Section 5.2.2 in the FAR? The “large-scale” features of the simulated climate change are described there.

        In the first place, this is not a list of the “large scale” features. It is a list of “The Large-Scale Changes On Which The Models Agree.” So, large-scale changes on which the models did not agree are not included. The six categories mentioned are:
        1. Temperature changes
        2. Precipitation changes
        3. Soil moisture changes
        4. Sea ice changes
        5. Changes in mean sea level pressure
        6. Deep ocean circulation changes
        Under “Temperature changes” are discussed warming of the earth’s surface and of the troposphere, along with the global mean warming. So I am perplexed by how you conclude that such warming is not a “broad scale feature.”

        Only as a conditional projection

        Don’t you agree that (a) Scenario A was the projection in case no steps were taken to reduce greenhouse gas emissions, (b) no such steps were taken, therefore (c) Scenario A is the applicable scenario? Why do you keep referring to the other scenarios when it is clear that they are inapplicable?

        This projection has an uncertainty range, which spans the interval from 0.2 to 0.5 deg. C / decade

        If the projection is a 1°C rise in temperature, along with an uncertainty range, are you saying that it is incorrect to say that the projection is a 1°C rise in temperature?
        What then does “substantial confidence” mean? Are you saying that since all model projections have an uncertainty range, “substantial confidence” has no meaning?

        Which is another misrepresentation of the content of the FAR. They only chose the one scenario from the FAR with the largest positive difference of the forcings relative to the real world forcings.

        I just cannot fathom your position. They chose Scenario A because Scenario A was the scenario that did, in fact, occur.

        Then the difference between projection and observed trend is presented as “evidence” for a too high climate sensitivity of the models, which is fallacious with respect to logic.

        Why shouldn’t they compare the observed trend to the “Business-as-Usual” projection, since that is what the observed trend represents?

        But what kind of “assumption” is that? Why would I have to “disclose” such an “assumption”?

        Your assumption is that a 1°C rise in temperature is not includable as a “broad scale feature of climate change,” despite the fact that global mean temperature changes are specifically mentioned among “The Large-Scale Changes On Which The Models Agree.” Isn’t it at least arguable that a 1°C rise in temperature is includable as a “broad scale feature of climate change”? If so, shouldn’t you make it clear to your reader that you are answering that question in the negative?

        I don’t need to make any “assumption” that such a statement as asserted here can’t be found in the FAR. It is not in there.

        If a rise in temperature was includable as a “broad scale feature of climate change,” then would you still deny that “FAR predicted with ‘substantial confidence’ that, in the 35 years 1991-2025, global temperature would rise by 1.0 [0.7, 1.5] K, equivalent to 2.8 [1.9, 4.2] K century-1”?

      • Jan, the other gases don’t matter that much. If I eyeball Fig. A.3 (CO2(1990) = 350, CO2(2100) = 825) and use a sensitivity of 2.5 K/2x CO2, I get:
        2.5*LN(825/350)/LN(2) = 3.1 Celsius, where Fig. A.9 reads 3.25.

        Not bad for a back-of-the-envelope, isn’t it?

        The secret is that the FAR states as the best value for TRANSIENT climate sensitivity a value of 2.5 degrees/2x CO2, as is clearly written in the caption of Fig. A.9.

        That’s hot.
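
        The back-of-the-envelope above, as a one-liner (CO2 values eyeballed from FAR Fig. A.3, per the comment):

        ```python
        import math

        S = 2.5   # FAR best-estimate transient sensitivity (K per CO2 doubling)
        dT = S * math.log(825 / 350) / math.log(2)
        print(f"{dT:.2f} K")   # ~3.09 K, vs ~3.25 K read off Fig. A.9
        ```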

      • Jan saws the woman in the wooden box in half. Well, tries to. I can still hear the screaming.
        ================

      • How I wish Jan Perlwitz could give his explanation for why the models run hot, but I now see that is constitutionally impossible.

        Jan, I told Andy Revkin in early 2008 that it was the water vapor feedback. I certainly didn’t come up with that insight on my own. By then it was common knowledge among skeptics.

        Seven years later, the modelers continue in their blind mad rage, or whatever is the explanation for the paralysis.
        ====================

      • Jan P Perlwitz –

        However, the fact is that these projections weren’t even done with general circulation models. Instead, they were derived with calculations using a simple Zero-dimensional energy balance model of the atmosphere (i.e., the same type of model as used in the Monckton et al. paper) coupled to a simple ocean model. Or is this not a misrepresentation of facts? If it isn’t what is it?

        This is included in the FAR:

        5.7.3 How Have The Equilibrium Experiments Been Assessed

        Over 20 simulations of the equilibrium response to doubling CO2 using general circulation models (GCMs) coupled to mixed-layer oceans have been carried out by 9 modelling groups (Table 3.2a)

        Do you have a reference to the “Zero-dimensional energy balance model” you referred to?

      • Mary Brown,

        “No, you are supposed to accept this because the 1990 forecast was for a temp rise of 0.7 deg by 2015 and the observed was 0.32 deg. When we point out the warming has been half what has been forecast, it is easily verified.”

        The FAR doesn’t contain any projection for the global surface temperature increase between 1990 and 2015.

      • Sure it does. Why would you deny the obvious? It gives you no credibility.

        “In 1990, FAR predicted with ‘substantial confidence’ that, in the 35 years 1991–2025, global temperature would rise by 1.0 [0.7, 1.5] K, equivalent to 2.8 [1.9, 4.2] K century-1.”

        Fig. A.9 shows a substantially linear trend… thus 0.7 deg from 1990 to 2015.

        Why does everyone defend these forecasts? Real scientists would readily acknowledge the shortcomings and work to fix them.

      • It would take massive incuriosity for the modeling community not to explore a lower water vapor feedback. So the default assumption is that they have.

        Jan, take a deep breath and tell me about the results of that exploration.
        ==================

      • swood1000,

        I accidentally put my reply in the wrong subthread. It’s here:
        http://judithcurry.com/2015/03/01/lessons-from-the-irreducibly-simple-kerfuffle/#comment-680998

      • Jan P Perlwitz –

        Exactly. Thus, why would the authors of the FAR state that there was “substantial confidence” in this specific near-term warming rate that was explicitly mentioned in the Monckton et al. paper, if this warming rate would be realized only if emission scenario A were realized, and they didn’t know whether this scenario would be realized?

        You are treating this as if Monckton et al. claimed that the FAR predicted that there is “substantial confidence” that Scenario A would unfold. But they did not claim that. Rather, they are claiming that FAR said, in effect, this: “If Scenario A unfolds, then there is “substantial confidence” that in the 35 years 1991-2025, global temperature will rise by 1°C.” FAR said:

        “Nevertheless, for reasons given in the box overleaf, we have substantial confidence that models can predict at least the broad scale features of climate change.”

        and the broad scale features described in Section 5.2.2.1 include global mean warming.

        What makes you think that this was the “business-as-usual” scenario of the FAR? Have you checked? Or are you just assuming?
        Because I have checked. It isn’t the one.

        “Projections for the present assessment are made using the standard IPCC greenhouse gas forcing scenarios. These consist of a “Business-as-Usual” scenario, and three lower scenarios (B-D) in which greenhouse gas emissions are substantially reduced.” FAR page 275. “The first approximates to a Business-as-Usual (BaU) case. The other three incorporate a progressive penetration of controls on greenhouse gas emissions, and in this report are labelled Scenario B, C and D (The BaU case had earlier been referred to as Scenario A).” FAR page 331. “In the Business-as-Usual Scenario (Scenario A) the energy supply is coal intensive and on the demand side only modest efficiency increases are achieved.” FAR page 341.

        What do you mean “It isn’t the one”?

        So I am perplexed by how you conclude that such warming is not a “broad scale feature.”

        I didn’t say that. …But the whole Chapter 5 is about equilibrium climate change. …Nothing in this specific chapter is about the quantification of the transient climate response for specific years, though, and certainly not for the 35 years from 1990 to 2025, i.e., for the near-term climate change.

        What difference does it make? If we are looking for the meaning of “broad scale feature” and we find it in Chapter 5 then what’s the problem? And if you do not say that warming is not a “broad scale feature” then when the FAR says:

        “Nevertheless, for reasons given in the box overleaf, we have substantial confidence that models can predict at least the broad scale features of climate change.”

        why do you say that they could not be including warming within the meaning of the term “broad scale features”?

        “Don’t you agree that (a) Scenario A was the projection in case no steps were taken to reduce greenhouse gas emissions,…”

        Well, steps were taken (Montreal Protocol). And other factors contributed too, so that this scenario didn’t realize.

        The FAR was released in 1990. The Montreal Protocol was agreed to in 1987 and entered into force on January 1, 1989. Therefore, the Montreal Protocol had to be assumed under the Business-as-Usual scenario.

        Thus, the real world forcing change due to greenhouse gases from 1990 to 2010 rather resembles the ones of Scenario C and D in the FAR. Much lower than the one in Scenario A of the FAR. This is what I get when I compare the actual numbers.

        I have never seen such a remarkable attempt to turn a clear liability into an asset. According to your numbers the FAR said that it was their projection that under Business-as-Usual, there would be a forcing of 1.156 W/m². This would be part 1 of their projection under Business-as-Usual. Part 2 was the temperature increase that this would cause. If the FAR projected forcing of 1.156 W/m² then that was their projection. They didn’t say “We have no idea what forcing to expect and can only guess.” They said “If you don’t heed our warning there is going to be a forcing of 1.156 W/m². If you do heed our warning then the forcing will go down.” Is it your position that the FAR cannot be blamed for being unable to predict the climate under Business-as-Usual conditions, despite saying that they could?

        The authors present the projections of the near-term global surface warming of Scenario A from the FAR as projections done with general circulation models and as such as “evidence” for the claim that the general circulation models were “running hot”. However, the fact is that these projections weren’t even done with general circulation models.

        In the FAR, GCMs were used extensively. Apparently an upwelling-diffusion model was used for the temperature projections because of cost. However, it was stated that this model was used “to help interpret the results from GCMs” and that the results “are apparently not inconsistent with simulations using more detailed coupled ocean-atmosphere GCMs”:

        “It is possible to use an energy-balance atmospheric model coupled to an upwelling-diffusion model of the ocean to estimate changes in the global-mean surface air temperature induced by the different scenarios of radiative forcing and to help interpret the results from GCMs.” FAR page 187. “As discussed in Section 6.1, the detailed physical basis for a model of this type may be questioned, but it yields global average results that are apparently not inconsistent with simulations using more detailed coupled ocean-atmosphere GCMs.” FAR page 190.

        If the FAR says that the model gave the same results as GCMs when used for this specific purpose, then why should the FAR not be taken at its word and the results considered equivalent to results from GCMs?

        You did not answer my “arguable” question. Are you saying that it is not at least arguable that changes in global mean temperature are included within the definition of “broad scale features”?

      • Mary Brown –

        Mary –

        In skimming over this sub-thread, it seems to me that you could legitimately say….based on what we’ve seen from 1990-2015, you see solid reasons to question the science behind the projections for SAT rise from 1990-2025…

        But you said this:

        ==> “… global temps have been warming at about half the rate of the ensemble of climate model forecasts.”

        Seems to me that without qualification, your statement is problematic.

      • https://www.flickr.com/photos/125630565@N05/14662039872/in/set-72157645113383959

        I would welcome a closer look at the verifications by someone who has more knowledge of the data.

        Part of my point is that simple verification statistics are rarely shown anywhere. We are more than a quarter century into serious climate forecasts. Constant pressure is applied to make massive societal changes based on forecasts. The least we can do is verify the forecasts, make that information widely available, and honestly assess the strengths and weaknesses.

        Instead, I constantly see a reluctance to discuss verification and a resistance to modify forecast models despite the widening gap between forecast and observed.

        I realize that it’s a complicated issue. My education is in climate and atmospheric science, but I don’t work actively in the field so there is much I don’t know. But I do know and care a lot about forecast verification.

      • Steven Mosher

        kim

        ‘How I wish Jan Perlwitz could give his explanation for why the models run hot, but I now see that is constitutionally impossible.”

        it’s a stupid question.

        Model A may run hot for reason X, model B for reasons X, Y, Z.
        Model C does not run hot; model D runs hot for reason Q; model …

      • Danny Thomas

        Let’s use model “C”!

      • It’s stupid as a single question, enthralling as a series of them.

        Granted, water vapor feedback may well not be the common error, even if there is one. However, when I examine the ensemble, everything looks cloudy.
        ===========

  58. Greg Goodman

    http://climategrog.files.wordpress.com/2015/03/ssn_volcanism.png?w=800

    There is a curious temporal coincidence between major volcanoes and marked drops in solar activity. Whether there is any causal link between them is a whole other story. However, it does mean there is a strong chance of mis-attribution. There are probably different sensitivities, and hence different time constants, for each as well, making it even more perilous to identify the correct proportion of the effects of each.

    The last two events were not the biggest on record, but both happened very close to solar max. Thus the cooling expected from the volcanoes will happen at about the same time as the drop in solar activity.

    I recently discussed on this site evidence that volcanic forcing is being underestimated and hence sensitivity to it being exaggerated:
    http://judithcurry.com/2015/02/06/on-determination-of-tropical-feedbacks/

    This would imply that the early 20th c. warming that the models roundly fail to produce was due to changes in solar forcing. It would also mean that since 1960 the general downward trend may be masking part of AGW.

    Now how will Monckton et al’s irreducibly simple model, which epitomises the AGW + “internal variation” paradigm, react to that situation?

    Its single explanatory variable will also miss early 20th c. warming and model the *masked* AGW.

    Apart from apparently shaking up some rigid beliefs amongst alarmists (which is surely useful), I don’t think it helps much on attribution.

  59. russellseitz

    For Judith to ask and answer:

    ” Have worse papers been published in prestige U.S/EU journals? Yes.”

    recalls both Sturgeon’s Law, wherein the astute science fiction writer asserted:
    “Ninety per cent of published papers are crap”

    and Minsky’s Corollary, in which the noted information scientist replied:
    “So are 95% of the remainder.”

    This certainly applies to the thousand most cited climate science papers of the present age, but the skeptic’s dilemma is that at the present rate of skeptical publication, several doublings of CO2 may pass before the erstwhile skeptics publish 200 papers under peer review rigorous enough to result in papers generally cited by authors other than themselves.

    In the interim, MSLB exemplifies Sturgeon’s 90%, and, face it, the Science Bulletin in which they chose to publish remains globally as obscure as ever.

    Citation indices and impact factors don’t lie: ask around China and you will find that, in terms of the journals Chinese authors brag about, the Chinese equivalents of Science and Nature are, no surprise, Science and Nature.

    • Yet another paper. If we get enough of these we can light a fire.

      Science in a reproducibility crisis:

      http://phys.org/news/2013-09-science-crisis.html

      So models are simplified representations of reality, and a simplified model is a simplified simplified representation of reality. And then there’s chaos …

      There were, give or take, 1,300 physics PhDs awarded in the US per annum for the last decade, and how many papers were published? And for climate science? And what are we supposed to do with that?

      “Let me take you down, ’cause I’m going to … Strawberry Fields, nothing is real, nothing to get hung about…”

  60. The skeptical thesis is very simply put: it has two prongs.

    1/. Overall feedback cannot be high enough to explain late 20th-century warming without also creating a climate so unstable with respect to real externalities (sun, volcanoes, meteorites, etc.) that we wouldn’t be here to argue about it. In fact the weight of evidence is that the feedback is strongly negative.

    2/. Consideration of climate as not dominated by radiative dissipation, but having in the lower atmosphere a strong turbulent convective and conductive element, especially through such massively lagging elements as ocean currents, shows that climate is a nicely incalculable nonlinear dynamic system, in which the overall negative feedback will almost certainly be complex enough to cause natural aperiodic change without any need to posit externalities to ‘cause’ it.

    In short, the picture that emerges is one of chaotic, quasi-random behaviour with no strong spectral lines at any frequency, and with no particular ‘average value’ having any meaning beyond roughly placing temperature between some extremes or other.

    As such, the theory has not, and cannot have, any predictive power that can be used to justify it, beyond saying in broad terms ‘climate won’t get hotter than X or colder than Y, and all values in between are moderately probable’.
    And this is a fundamentally curious philosophical point. Popper et al. demand that science be both refutable and not refuted as a test of validity.

    However, if we posit chaotic systems, they are in many senses irrefutable. No actual test that may be performed can live up to predictions which embody extreme sensitivity to initial conditions.

    That is, not only do we lack mathematics capable of predicting the output of chaotic systems, we cannot even use Popper’s criteria to distinguish them from pure metaphysical twaddle. At least not in a simple way.

    What is needful is to build complex nonlinear dynamic models with elements that we think correspond to the major feedbacks in the Earth’s climate, and see whether any of them actually generate the sort of bistable attractors and general bounds that we know the Earth’s climate to have exhibited in paleoclimate records (after having removed any CO2-derived confirmation bias).

    That wouldn’t be the ‘strong evidence’ so beloved of physicists, but it would at least demonstrate that models could be constructed to display climate variability of the order of what is in the historical record, without the need to introduce external drivers like Milankovitch cycles or carbon dioxide. Or any other stray unicorns and pixie dust.

    Simplified models could easily be constructed, much simpler than GCMs, on the (untested) assumption that huge swathes of the circulation can be parametrised (as is already done), but with two important distinctions: many lagging negative feedback terms are incorporated, and the transfer functions are assumed to be nonlinear.

    It’s a great little PhD project.

    Stop looking at climate as a linear system; model it as a nonlinear one, and then curve-fit using lagging (negative) feedback terms until the spectral analysis of the output matches the spectrum of observed temperature fluctuations over time. That is based on my intuition that whilst we can’t predict the actual value of a chaotic system, we should be able to predict its spectral energy density.

    And that presumption is very easily tested for at least a sample chaotic system (see the sketch at the end of this comment).

    Once we discard the notion that climate can be modelled by a linear dynamic system, all of the simplistic models fall to pieces, and the GCMs can be shown to be far too crude to do the job. That leaves only, as far as I can tell, the sort of approach outlined above, which is already fraught with deep caveats but might actually shed some light on the whole model controversy if explored.
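
    That test is easy to run. Below is a minimal Python sketch, with the Lorenz-63 system standing in for the sample chaotic system; the forward-Euler integration, run lengths, and smoothing width are illustrative choices, not tuned values. Two trajectories started one part in a million apart decorrelate completely, yet their smoothed power spectra remain nearly identical.

      import numpy as np

      def lorenz_run(n_steps, x0, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
          # Integrate the Lorenz-63 system by forward Euler (crude but
          # adequate for a sketch) and return the x-component series.
          state = np.array(x0, dtype=float)
          out = np.empty(n_steps)
          for i in range(n_steps):
              x, y, z = state
              state = state + dt * np.array([sigma * (y - x),
                                             x * (rho - z) - y,
                                             x * y - beta * z])
              out[i] = state[0]
          return out

      def smoothed_spectrum(series, width=200):
          # Periodogram of the demeaned series, box-smoothed so that we
          # compare spectral *densities* rather than raw periodogram noise.
          s = series - series.mean()
          p = np.abs(np.fft.rfft(s)) ** 2 / len(s)
          return np.convolve(p, np.ones(width) / width, mode="same")

      # Two runs differing by one part in a million in the initial state.
      a = lorenz_run(100_000, (1.0, 1.0, 1.0))
      b = lorenz_run(100_000, (1.000001, 1.0, 1.0))

      # Sensitive dependence: the trajectories decorrelate ...
      print("pointwise correlation:", np.corrcoef(a[10_000:], b[10_000:])[0, 1])
      # ... but the spectral energy density is reproducible.
      sa = smoothed_spectrum(a[10_000:])
      sb = smoothed_spectrum(b[10_000:])
      print("spectral correlation: ", np.corrcoef(sa, sb)[0, 1])

    The first printed number comes out near zero and the second near one: the trajectory is unpredictable, but the spectral energy density is a reproducible statistic, which is exactly the presumption to be tested.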

    • Don’t want ter sound negative but -that Naychure she’s
      stood the test of time. Jest ain’t no linear girl.

    • Leo, do you think the temperature record is accurate enough to do that sort of modeling?
      ============

      • @ kim

        I’ll let Leo answer for himself.

        My answer is another question: does anyone believe that we have had a data collection system in place since 1880 that would allow us to list the subsequent years in rank order of the ‘annual temperature of the earth’, or that the precision and accuracy of that system were sufficient to statistically justify the worldwide headlines that ‘2014 was the warmest year since records began in 1880!’, when the increment over the previous ‘record’ was 0.02 degrees?

    • Me likes! So many cool potential math projects in climate science.
      So little interest.
      Bad on ye climate science!

      • I should mention:
        The mathematician takes a simple model, simple equations and tries to understand things thoroughly and rigorously. Make the model a little more complex, see if you can keep going. Establish proofs, theories, do complex numerical studies.

        I’ve experienced ridicule from climate types for such an approach, claiming ‘that’s too simple to be interesting’. Or, ‘have you tried that on any “real” models’?

        The climate way (yes, surely an over-generalization) is to throw everything, kitchen sink included, into the model and try to subdue the model into behaving.

        There is a middle ground where each group cooperates and helps the others to make progress.

    • +

      However, I think Kim is right. The data doesn’t exist and it would likely take a long time to generate adequate observations.

      It is unlikely that we will maintain a comprehensive and consistent observation system over the needed time scales.

    • Leo, thanks for this concise expression of the skeptical thesis. It could work on an elevator ride.
      I would break your 2nd point into 2 sentences, for easier reading:

      “The climate is not dominated by radiative dissipation, but has in the lower atmosphere a strong turbulent convective and conductive element, especially through such massively lagging elements as ocean currents. Thus climate is a nicely incalculable nonlinear dynamic system, in which the overall negative feedback will almost certainly be complex enough to cause natural aperiodic change without any need to posit externalities to ‘cause’ it.”
      (Sorry, I can’t help it that I do some editing for a living :)

    • Interesting … “the” “skeptical” thesis.

      • Rob Starkey

        Obviously mistitled. “A skeptic’s thesis” is more accurate.

      • What’s interesting is that every alarmist effort to dive through the two horns of Leo’s Dilemma ends tragically, dead in the ditch.
        ==================

    • Matthew R Marler

      Leo Smith: Stop looking at climate as a linear system; model it as a nonlinear one, and then curve-fit using lagging (negative) feedback terms until the spectral analysis of the output matches the spectrum of observed temperature fluctuations over time. That is based on my intuition that whilst we can’t predict the actual value of a chaotic system, we should be able to predict its spectral energy density.

      Other functionals are also of interest, such as annual mean rainfall, and annual mean temperature — and then eventually the annual mean rainfall and annual mean temperature by region.

      As to matching the spectral density, should the apparent 950 year period be treated as “real” or some sort of “artifact”? If it is real, then the current warming is “just about on time” — something bandied about here.

    • Matthew R Marler

      Leo Smith: 2/. Consideration of climate as not dominated by radiative dissipation,

      A good start would be for climate scientists to estimate the changes in the advective/convective and evapotranspirative transfers of energy from the Earth surface to the troposphere that will result from a warming of the Earth surface. A good place to start would be with estimates of rainfall change as surveyed by O’Gorman et al.

  61. Every climate model that is published should contain a footnote explaining that the model explicitly ignores observed climate phenomena that are not well understood and are too complex to be incorporated in the model. Furthermore, these models should carry another footnote explaining that they, like financial models, make no guarantee of future performance.

    Imagine if there were truth in climate model advertising. Imagine if model creators were actually held accountable for their models. Imagine how doing this would change the presentation of what, precisely, is “consensus” and what is essentially propaganda.

  62. Lauri Heimonen

    Judith Curry

    The Monckton, Soon, Legates, and Briggs paper “Why models run hot, results from an irreducibly simple climate model”:

    ”Resolving the discrepancies between the methodology adopted by IPCC in its Fourth and Fifth Assessment Reports that are highlighted in the present paper is vital. Once those discrepancies are taken into account, the impact of anthropogenic global warming over the next century, and even as far as equilibrium many millennia hence, may be no more than one-third to one-half of IPCC’s current projections.”

    As far as I know, even the paper above assumes that all of the increase of CO2 content in the atmosphere is controlled by anthropogenic CO2 emissions. As you well know, there is no proper evidence that anthropogenic CO2 emissions dominate the global increase of CO2 in the atmosphere. Any increase of CO2 content in the atmosphere is controlled jointly by all CO2 emissions from sources to the atmosphere and by all CO2 absorptions from the atmosphere into sinks, where nowadays the share of anthropogenic CO2 emissions is only about 4% at most; see e.g. my comment http://judithcurry.com/2015/02/14/week-in-review-43/#comment-674828 .

  63. Why do models run hot?

    The most obvious answer is: that is what they are designed to do. Provide scientific evidence that ACO2 is driving the temperature of the Earth upward at an unprecedented rate and, most importantly, that catastrophe can ONLY be avoided by governments taking IMMEDIATE action to control the use of fossil fuels worldwide. And I can think of few scientific projects that have been more successful. In fact, if you are a teacher, government scientist, college professor, TV meteorologist, etc., merely QUESTIONING their validity, irrespective of the empirical data, will (not can) cost you your job and professional reputation.

    Why do the model outputs not match observations?

    A classic example of GIGO.

    The first and most obvious is that they are designed under the assumption that ACO2 is the control knob for the planetary temperature and poses an existential threat unless it is drastically reduced or eliminated. The only direct evidence that ACO2 does in fact have a significant impact on global temperatures amounts to an ex cathedra proclamation from the experts: “The global temperature is rising and it (or 50% of it, or some other subset of it) is due to ACO2.” It is therefore unlikely that models based on the ACO2 control-knob axiom will have much relation to observations, other than by accident. Since the ACO2 axiom predicts rising temperature, and temperature empirically has had periods during which it has risen, fallen, and remained relatively stable, it is inevitable that there will be periods during which the modeled trend is highly correlated with the observed trend. It is also inevitable that those periods will be cited as certain confirmation of the ACO2 control-knob axiom.

    Another reason that the models correspond with reality only by accident is that they require that the magnitude, sign, and predicted time history of every physical process that influences climate be made available to the model.

    Does anyone believe that we can list in rank order and predict, with adequate precision for the task, the future time history of every important variable that impacts the climate?

    Can we list everything about the sun that varies with time and influences our climate? Is TSI actually a constant? Is the spectral distribution of the TSI important? Sunspots seem to correlate well with variations in the climate. Why? Pure coincidence? If not, what is the mechanism of influence and is it predictable?

    Solar wind? Cosmic rays? Volcanic activity, land and submarine? Plate tectonics? Earth’s magnetic field, magnitude and configuration? Orbital mechanics? Variations in the structure of the gravitational field experienced by the earth due to changes in the configuration of the other planets? Ad infinitum.

    Are NONE of the above important to the planetary temperature? Is TSI, in the spectral band in which we measure it, the only important external input to the climate system and is ACO2 REALLY the control knob that determines how the climate responds to the measured TSI? Are all other inputs to the climate system insignificant? If not, are the significant ones predictable enough to make climate models meaningful?

  64. pochas, “His Lordship can erase the positive water vapor feedback (the leftmost icon in his fig 3) and deal with the fact that feedback from the water cycle is wholly negative and greenhouse gas sensitivities are well below 1 ºC for a CO2 doubling.”

    It isn’t “wholly negative”, though. Water vapor and the water cycle tend to regulate, so you would have positive and negative “feedbacks” depending on surface temperature and available moisture. Other than the relatively small CO2 impact on moist surfaces, water vapor is an independent function with respect to CO2. Negative surface-temperature “forcings”, volcanic and solar, have a large impact on the sign of the water vapor forcing. Once tropical surface and mid-latitude temperatures reach a certain convective triggering temperature, water vapor tends to become a mainly negative feedback.

    That is pretty much the biggest problem with climate models, even the irreducibly simple ones: water vapor feedbacks are definitely not linear.

    Any model can “get” the average, but water vapor depends on the extremes that stimulate or limit convection, and the deeper the convection, the worse the models will perform.

    http://lmgtfy.com/?q=convective+triggering+parameterization

    • I agree that feedback is nonlinear. I didn’t mention precipitation. When it occurs, feedbacks are massively negative, especially at the equator.

      • pochas, “When it occurs, feedbacks are massively negative, especially at the equator.”

        Right, and the majority of the atmospheric energy is produced in the tropical oceans. This is where some of the younger scientists need to get better press time. Oppo, Rosenthal, Linsley and a growing number of others are fighting an uphill battle with the old guard paleo hockey stickers.

      • Frederick Colbourne has an elegant solution elsewhere on this thread. Repurpose the IPCC to explore natural and man-made effects on the climate. At the very least, the US ought to be able to encourage such a solution, soon, if not now. The easiest way would be to start funding such research. Hmmm.
        =========================

  65. Not sure why my first comment on this thread is gone.
    I pointed to empirical research showing that the vertical temperature profile of the atmosphere as measured by radiosonde balloons contradicts the profile used in climate models.
    This is obviously relevant to the question, Why do climate models run hot?

    • Ron C. you were even thanked for the lost comment, by jim2, who read it.

      Also, Judy, when the threading’s been broken, sometimes going to the end of the thread and using that box still puts the comment several from the end. Mebbe strike through some offenders, make a public show of the indecency.

      Ah, I hesitate to criticize your superb editing. I give thanks daily.
      =========

    • Ron, I restored your comment. But I was deleting posts that question the existence of the greenhouse effect; that is not the topic of this post.

      • Well, you deleted my comment, which does not question the existence of the greenhouse effect. Without greenhouse gases we would have no lapse rate and no convection. It’s just that once the greenhouse effect is fully developed and the lapse rate is in charge, there is no effect from adding more greenhouse gases.

  66. How do you post a comment at the end of the thread?

  67. A few comments on the comments. First, Judith Curry wonders whether a very simple model can really illuminate the complex models’ failings. We found it useful for pointing up the heavy impact of feedbacks on climate sensitivity; for showing that because of feedback mathematics the central estimate should be closer to the lower than the upper bound of the sensitivity interval, and not centered as it is in the IPCC’s reports; for showing that the IPCC’s reduction of the feedback sum from 2 to 1.5 Watts per square meter per Kelvin required climate sensitivity to be cut by a third, but that the cut had not been made; for demonstrating the extravagant absurdity of the RCP 8.5 scenario; and for exposing the implausible effect of the Bode relation, which leads to huge and unwarrantable feedback amplification as the loop gain approaches unity (i.e. as the feedback sum approaches 3.2 Watts per square meter per Kelvin of directly-driven warming, not that far above the 2 W/m2/K that was the IPCC’s implicit central estimate as recently as 2007).
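
    To make that arithmetic concrete, here is a minimal Python sketch of the Bode amplification described above. It assumes (illustratively) a zero-feedback response of 1.2 K per CO2 doubling, with the loop gain g = f/3.2 for a feedback sum f in Watts per square meter per Kelvin; it is not the paper’s model, only the closed-loop formula evaluated at a few values.

      # Bode closed-loop response: ECS = dT0 / (1 - g), with loop gain
      # g = f / 3.2 for a feedback sum f in W/m^2/K. The value dT0 = 1.2 K
      # is an assumed illustrative zero-feedback doubling response.
      dT0 = 1.2      # K per CO2 doubling, before feedbacks
      planck = 3.2   # W/m^2/K; a feedback sum of 3.2 gives loop gain 1

      for f in (0.0, 1.0, 1.5, 2.0, 2.5, 3.0, 3.1, 3.19):
          g = f / planck
          print(f"f = {f:4.2f} W/m2/K  ->  g = {g:5.3f}  ->  ECS = {dT0 / (1 - g):7.2f} K")

    A feedback sum of 2 W/m2/K gives a loop gain near 0.625 and an ECS near 3.2 K; cutting the sum to 1.5 gives about 2.3 K, the one-third reduction referred to above; and pushing the sum toward 3.2 drives the answer toward hundreds of Kelvin, which is the singular behaviour at issue.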

    Another commenter questioned my account of what happens in an electronic circuit when the loop gain exceeds unity. A bloke down the pub told him he didn’t know what “voltage flicking from the positive to the negative rail” meant, and he did not believe I knew anything about electronic circuits. I get that a lot. The presumption seems to be that anyone without a piece of paper certifying he has had Socialist state-approved training cannot know anything. Well, I didn’t get my science from a bloke down the pub. I consulted a process engineer with not one but three doctorates, who checked in the literature to confirm his own knowledge and told me that the Bode graph shown in the paper, like that for an electronic circuit, mandates that the voltage will try to go to positive infinity as the closed-loop gain approaches 1, and that as it exceeds 1 the voltage will try to go to negative infinity.

    Just try searching for “from the negative to the positive rail” in Google. There are plenty of hits. There, you will be able to learn all about relaxation oscillators, for instance. The problem with the Bode equation as it is applied to the climate is real, and research is now being done on it by others better qualified than I. But the misapplication of the unmodified Bode equation to the climate is the main reason for high climate sensitivity, as our model makes ruthlessly plain.

    A number of commenters say they do not like the mention of the process engineers’ design limit of a loop gain of 0.1 on circuits intended not to oscillate. The fact is that the climate has remained near-thermostatic for the past 810,000 years, and has not oscillated (except in the narrow band between glacial and interglacial). It has not behaved like an object whose loop gain exceeds unity and then relaxes back: instead, it has behaved more like an object in which net-negative feedback mandates stability. The very high temperatures that would be expected if the loop gain had risen appreciably above the IPCC’s implicit central estimate of a loop gain of 0.64 have simply not materialized.

    Note that our paper did not state that the process engineers’ maximum design limit of 0.1 for the loop gain was mandatory. Our model is a model. Choose your own feedback sum and hence loop gain, if you don’t like ours. However, the eminent researcher who has now taken up the feedback-amplification problem and hopes to publish shortly has concluded, as we have, that on this ground climate sensitivity cannot exceed 1 K, and may well be significantly less. We shall see.

    Another commenter says I have misrepresented the IPCC’s documents, but fails to provide details. All I can say, therefore, is that I have not knowingly done so.

    Aside from that commenter, inferentially smarting because on his own website he did not win the day in quite a protracted discussion, the quality of the discussion has been high. As far as I can see, no one has landed a mortal blow on our reasoning. Interestingly, our paper has been attacked on the ground that it is not new, and also on the ground that it is. Of course much of it is not new, for our intention was in part pedagogical, but the parts that are new are of more than passing interest, and our detailed criticisms of the defects in the present models’ methodology should not be too lightly dismissed.

    • Monckton:

      The fact is that the climate has remained near-thermostatic for the past 810,000 years, and has not oscillated (except in the narrow band between glacial and interglacial). It has not behaved like an object whose loop gain exceeds unity and then relaxes back: instead, it has behaved more like an object in which net-negative feedback mandates stability. The very high temperatures that would be expected if the loop gain had risen appreciably above the IPCC’s implicit central estimate of a loop gain of 0.64 have simply not materialized.

      This has always struck me as one of the most powerful arguments against imminent catastrophe.

      Interestingly, our paper has been attacked on the ground that it is not new, and also on the ground that it is.

      On the right lines then :)

      Very well done for the challenge you’ve laid out.

    • Monckton,

      I would appreciate a response to my comments on Figure 6, which you have not addressed.

    • Monckton:

      I agree that the criticisms against your use of a simple feedback equation are themselves mostly wrong. I will make a couple of points. The first is that the general form of the feedback equation usually assumes that the feedback is ‘negative’, i.e. A = A0/(1 + g) rather than A = A0/(1 - g). That is mostly a stylistic critique, because the feedback itself is allowed to go positive or negative. The second is that your equation is stable for all values of g not equal to 1. Yes, the gain can become ridiculously large for values of g arbitrarily close to 1, but it is still stable. Instability ONLY occurs at the roots of the denominator, so 0.75 is NOT unstable (it is unphysical).

      The other thing you probably should consider is a slightly more sophisticated model in which g is a complex function representing energy-storing (phase-shifting) effects such as the thermal mass of the oceans. These are analogous to the effects of capacitors and inductors in a circuit and lead to more complex behaviors. In fact, they can lead to instabilities even when the real part of the gain is less than 1.
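
      Both points are easy to demonstrate numerically. A minimal Python sketch follows, with a one-step lag standing in (purely illustratively) for an energy-storing element:

        # Point 1: the static closed-loop gain A = A0 / (1 - g) is finite for
        # every g != 1, so g = 0.75 gives a large multiplier but no instability.
        for g in (0.5, 0.75, 0.9, 0.99, 1.01):
            print(f"g = {g:5.2f}  ->  gain 1/(1-g) = {1.0 / (1.0 - g):8.2f}")

        # Point 2: add the simplest energy-storing (phase-shifting) element,
        # a one-step lag: y[t] = u + g * y[t-1]. This loop diverges whenever
        # |g| > 1, even though the real part of g is below 1.
        def lagged_loop(g, steps=200, u=1.0):
            y = 0.0
            for _ in range(steps):
                y = u + g * y
            return y

        print("g =  0.90:", lagged_loop(0.90))    # settles near 1/(1 - 0.9) = 10
        print("g = -1.05:", lagged_loop(-1.05))   # Re(g) < 1, yet it blows up

      The static formula misbehaves only at exactly g = 1, whereas the lagged loop diverges for any |g| > 1 regardless of the sign of its real part, which is the distinction being drawn.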

    • Monckton of Brenchley

      Dear Mr. Monckton, instead of guessing, as both camps do, you need a different approach: I have proof, beyond any reasonable doubt (it can all be proven now, no need to wait another 85 years), that “global” warming is zero! If a capable person like you can get the truth to the public, the Warmists would be finished before Christmas. Here is the Holy Grail of climatology: https://globalwarmingdenier.wordpress.com/2014/07/12/cooling-earth/

  68. Model Output Statistics (MOS) have been used in weather forecasting since the 1970s. MOS replaced the “Perfect Prog” (PP) method because MOS accounts for time uncertainty and model bias. But even PP is much better than raw weather model output.

    So why, in 2015, do we still hang our hat on raw climate model output? A climate MOS would be an obvious improvement.

    Scafetta from Duke University essentially did a basic climate MOS in his climate forecasts. He found that model warming times 0.5 gave the best fit; thus actual warming was just half the GCM forecasts. This is a remarkably similar conclusion to MSLB’s.
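
    As an illustration of the mechanics only, here is a minimal Python sketch of a climate-MOS regression on synthetic numbers; the 0.2 K/decade model trend and the factor of about 0.5 are chosen to mirror the result quoted above, not taken from Scafetta’s actual analysis:

      import numpy as np

      rng = np.random.default_rng(0)
      years = np.arange(1975, 2015)
      model_anom = 0.02 * (years - 1975)    # raw model output: 0.2 K/decade trend
      obs_anom = (0.01 * (years - 1975)     # synthetic 'observations': half the
                  + 0.05 * rng.standard_normal(years.size))  # trend, plus noise

      # MOS step: regress observations on raw model output (obs ~ a*model + b)
      # and forecast with the fitted relation instead of the raw output.
      a, b = np.polyfit(model_anom, obs_anom, 1)
      print(f"MOS scaling a = {a:.2f}, offset b = {b:+.3f}")   # a comes out near 0.5

      raw_2030 = 0.02 * (2030 - 1975)
      print(f"raw model 2030 anomaly: {raw_2030:.2f} K; "
            f"MOS-corrected: {a * raw_2030 + b:.2f} K")

    The forecast is then the fitted relation applied to new raw model output rather than the raw output itself: the same bias-and-scale correction weather forecasters have applied since the 1970s.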

    • 1/2 the warming of the forecasts – yes.

      I also noticed that the warming rate, which is computed during the warm period ending in about 1998, is about double what it should be, based on observations.

      Also, the hiatus or pause seems to be a period in which the natural variability is netting against the background rate to be about flat.

      It seems the background rate is about 0.1 C/decade and natural variability is about the same, so sometimes you get warming at 0.2 C/decade and sometimes 0 C/decade.

      It certainly won’t surprise me if CS turns out to be about 1/2 of 3C or 1.5C (or even less).

      We will see.

  69. “Pause” and “hiatus” carry with them the implication that the phenomenon in question is but temporary.

    So in this case, since no one knows whether the earlier rise in temperatures has halted only temporarily, it makes no sense to use those terms.

    Instead, use something like “plateau”. Unless, of course, you are a dishonest alarmist who likes the deviousness of using them.

  70. That a challenge to the ‘consensus’ emanated from Chinese science is perhaps no accident.

    Here in the West under democracy, those both in and out of government with totalitarian leanings and motivations have good cause to support and practice the corruption of state-funded climate science in an alarmist direction. This is because such alarm presents an apparently good reason to crack down on a free society by expanding taxation and the politicisation of society in general.

    Not so in China. There the state is already in total control and has no elections to worry about. So, unlike their Western counterparts, state-funded Chinese climate scientists are free from the overriding political imperative that state-funded Western ones face, and can actually be free to seek the truth.

  71. That was a waste of 10 seconds of my life.

  72. Pingback: Weekly Climate and Energy News Roundup #171 | Watts Up With That?