Emergent constraints on climate sensitivity: Part I

by Nic Lewis

Emergent constraints on climate sensitivity: their nature and assessment of validity.

There have been quite a number of papers published in recent years concerning “emergent constraints” on equilibrium climate sensitivity (ECS) in comprehensive global climate models (GCMs), of both the current (CMIP5) and previous (CMIP3) generations. The range of ECS values in GCMs has remained almost unchanged since the early days of climate modelling; in the IPCC 5th Assessment Report (AR5) it was given as 2.1–4.7°C for CMIP5 models.[i]

From the IPCC 1st Assessment Report (FAR) to AR5, the main cause of the large uncertainty as to ECS in GCMs has been the difficulty of simulating clouds and their behaviour.[ii] This has led to cloud feedback differing between GCMs even as to its sign – and to little confidence that the true level of cloud feedback lies within its range in GCMs. Progress in understanding cloud behaviour and related convective dynamics and feedbacks has been painfully slow. We shall see in this 3-part article that emergent constraint approaches have the potential to offer useful insights into cloud behaviour; however, the main focus will be on the extent to which they narrow the uncertainty range of ECS in GCMs.

Emergent constraint studies typically identify a quantitative measure of an aspect of GCMs’ behaviour (a metric) that is well correlated with their ECS values across an ensemble of GCMs. They then compare observational estimates of that metric with its value in each GCM. Often they use regression to fit a linear relationship between the metric and ECS values in the GCM-ensemble, and assert that the range of ECS values spanned by the segment of the regression line consistent with observational estimates of the metric is most credible. Alternatively they derive a constrained ECS range directly from the ECS values of GCMs whose behaviour is consistent with observational estimates of the metric. Sometimes more than one metric is used. In most cases, such emergent constraint studies have favoured ECS values in the upper half (3.4–4.7°C) of the CMIP5 range.
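To make the regression variant concrete, here is a minimal sketch in Python. Everything in it (the function name, the synthetic 20-model ensemble, the uncertainty treatment) is my own illustration, not any particular study’s method; real studies handle regression and observational uncertainties far more carefully.

```python
import numpy as np

def constrain_ecs(metric, ecs, metric_obs, metric_obs_sd,
                  n_draw=100_000, seed=0):
    """Toy emergent-constraint calculation: regress ECS on a metric
    across a model ensemble, then sample the fitted line at the
    observed metric value, adding the regression scatter."""
    rng = np.random.default_rng(seed)
    slope, intercept = np.polyfit(metric, ecs, 1)
    resid_sd = np.std(ecs - (intercept + slope * np.asarray(metric)), ddof=2)
    # Sample observational uncertainty in the metric, plus residual scatter
    m = rng.normal(metric_obs, metric_obs_sd, n_draw)
    draws = intercept + slope * m + rng.normal(0.0, resid_sd, n_draw)
    return np.percentile(draws, [5, 50, 95])   # constrained 5-95% range

# Synthetic 20-model "ensemble" with a linear metric-ECS relation
metric = np.linspace(0.0, 1.0, 20)
ecs = 2.0 + 2.0 * metric
low, mid, high = constrain_ecs(metric, ecs, metric_obs=0.5, metric_obs_sd=0.01)
```

With this contrived, exactly linear ensemble and an observed metric of 0.5 ± 0.01, the constrained median ECS comes out near 3°C, i.e. the regression line evaluated at the observation.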

Probably the best known emergent constraint study is Sherwood et al. (2014).[iii] That study has often been cited in support of arguments that ECS is unlikely to be in the lower half of the IPCC’s 1.5–4.5°C range, contrary to what is suggested by energy-budget studies that relate warming and heat uptake during the instrumental period to the estimated change in radiative forcing. The abstract of Sherwood et al. (2014) says this about the spread of ECS in GCMs:

The spread arises largely from differences in the feedback from low clouds, for reasons not yet understood. Here we show that differences in the simulated strength of convective mixing between the lower and middle tropical troposphere explain about half of the variance in climate sensitivity estimated by 43 climate models. The apparent mechanism is that such mixing dehydrates the low-cloud layer at a rate that increases as the climate warms, and this rate of increase depends on the initial mixing strength, linking the mixing to cloud feedback. The mixing inferred from observations appears to be sufficiently strong to imply a climate sensitivity of more than 3 degrees for a doubling of carbon dioxide.

The Sherwood emergent constraint is linked to feedback from changes in low clouds, which affect reflected shortwave (SW) solar radiation. Differences between GCMs in SW low cloud feedback are known to be the main factor behind their wide spread of ECS values, so it makes sense that an emergent constraint would involve SW cloud feedback. Indeed, a paper published earlier this year (Qu et al 2018)[iv] concluded that all useful emergent constraints on ECS work via SW low cloud feedback, saying:

Here a statistical method (including a backward selection process) is employed to achieve a better statistical understanding of the connections between four recently proposed emergent constraint metrics and individual feedbacks influencing ECS. The relationship between each metric and ECS is largely attributable to a statistical connection with shortwave low cloud feedback, the leading cause of intermodel ECS spread. This result bolsters confidence in some of the metrics, which had assumed such a connection in the first place. Additional analysis is conducted with a few thousand artificial metrics that are randomly generated, but are well correlated with ECS. The relationships between the contrived metrics and ECS can also be linked statistically to shortwave cloud feedback. Thus any proposed or forthcoming ECS constraint based on the current generation of climate models should be viewed as a potential constraint on shortwave cloud feedback, and physical links with that feedback should be investigated to verify that the constraint is real. In addition, any proposed ECS constraint should not be taken at face value, since other factors influencing ECS besides shortwave cloud feedback could be systematically biased in the models.

The key point is that, since SW low cloud feedback accounts for a large part of the overall variation in GCM ECS values, any metric that is well correlated with ECS in GCMs is almost bound to be strongly correlated with SW cloud feedback. But, as Qu et al. go on to say, the reality of the physical links between the emergent constraint metric and SW feedback must be investigated, and even where such links exist, other biases in GCMs may affect the validity of their ECS values. That is a very important point.
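Qu et al.’s random-metric point is easy to reproduce in a toy setting. The sketch below is entirely synthetic (the variable names and numbers are my own): it builds an ensemble in which SW low cloud feedback drives most of the ECS spread, “data-mines” random metrics for correlation with ECS, and checks that the surviving metrics are also correlated with the SW feedback despite having no physical content.

```python
import numpy as np

rng = np.random.default_rng(1)
n_models = 30

# Synthetic ensemble: SW low cloud feedback drives most of the ECS spread
sw_feedback = rng.normal(0.0, 0.5, n_models)
other_terms = rng.normal(0.0, 0.15, n_models)
ecs = 3.0 + 2.0 * sw_feedback + other_terms

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# "Data-mine" random metrics, keeping only those well correlated with ECS
sw_corrs = []
for _ in range(20_000):
    m = rng.normal(0.0, 1.0, n_models)
    if abs(corr(m, ecs)) > 0.5:
        sw_corrs.append(abs(corr(m, sw_feedback)))
# The kept metrics correlate with SW feedback purely by construction of
# the ensemble, exactly the statistical artefact Qu et al. describe.
```

The surviving random metrics inherit a substantial correlation with SW feedback simply because SW feedback dominates the ECS spread, which is the paper’s warning in miniature.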

A recent paper, Caldwell et al. 2018,[v] systematically reviewed emergent constraints on ECS, analysing the 19 previously-proposed constraints detailed in Table 1. They omitted emergent constraints that were impracticable to test or had already been found not to be robust. They also omitted two emergent constraints that targeted high-latitude clouds and were poorly correlated with ECS, which they took to mean that only constraints on tropical clouds had a strong impact on ECS.

The correlations with ECS given in Table 1 are for ensembles of all CMIP5 models for which the data needed to calculate the metric for the emergent constraint involved were available. The ensemble therefore varies between constraints in both size and constituent models. Correlations shown in brackets are not significant at the 90% probability level. Because many models are related, and because data-mining for high-correlation constraints is likely to have taken place, the significance test is weak and (as Caldwell et al. say) should be regarded as merely screening out constraints that are almost certainly not significant. In many cases the original study tested the constraint on CMIP3 models or a combination of CMIP3 and CMIP5 models, and may have obtained a stronger correlation. However, a valid emergent constraint should persist between model generations.

Table 1. Short description of each emergent constraint on ECS tested in Caldwell et al. 2018, per their Table 1, and their calculation of its correlation with ECS in CMIP5 models (from their Table 2). Correlations in brackets are not significant with 90% probability, assuming model independence.
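For illustration, one simple way of attaching a significance level to such a correlation, under the (too generous) assumption of model independence that the caption mentions, is a permutation test. The helper below is my own sketch, not the test Caldwell et al. actually used:

```python
import numpy as np

def corr_pvalue(x, y, n_perm=10_000, seed=0):
    """Two-sided permutation p-value for a Pearson correlation.
    Note: this treats the models as independent, which (as Caldwell
    et al. stress) overstates significance for related CMIP5 models."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    r_obs = np.corrcoef(x, y)[0, 1]
    hits = 0
    for _ in range(n_perm):
        # Shuffling x destroys any real metric-ECS association
        r = np.corrcoef(rng.permutation(x), y)[0, 1]
        if abs(r) >= abs(r_obs):
            hits += 1
    return r_obs, (hits + 1) / (n_perm + 1)   # add-one to avoid p = 0
```

A correlation would pass the 90% screen when the returned p-value is below 0.1; with related models the effective sample size is smaller, so real p-values are larger than this test reports.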

The Caldwell et al. findings are very interesting. The paper’s abstract makes these points:

Several constraints are shown to be closely related, emphasizing the importance for careful understanding of proposed constraints. A new method is presented for decomposing correlation between an emergent constraint and ECS into terms related to physical processes and geographical regions. Using this decomposition, one can determine whether the processes and regions explaining correlation with ECS correspond to the physical explanation offered for the constraint. Shortwave cloud feedback is generally found to be the dominant contributor to correlations with ECS because it is the largest source of inter-model spread in ECS. In all cases, correlation results from interaction between a variety of terms, reflecting the complex nature of ECS and the fact that feedback terms and forcing are themselves correlated with each other. For 4 of the 19 constraints, the originally-proposed explanation for correlation is borne out by our analysis. These 4 constraints all predict relatively high climate sensitivity. The credibility of 6 other constraints is called into question due to correlation with ECS coming mainly from unexpected sources and/or lack of robustness to changes in ensembles. Another 6 constraints lack a testable explanation and hence cannot be confirmed. The fact that this study casts doubt upon more constraints than it confirms highlights the need for caution when identifying emergent constraints from small ensembles.

Caldwell et al. also point out that:

One problem with emergent constraints is that large inter-model correlations between current climate and future-climate quantities are expected by chance in multi-model databases. As a result, emergent constraints without a solid physical basis should be viewed with scepticism. Unfortunately, most emergent constraints in the published literature lack a satisfying physical explanation.

Caldwell et al. therefore regard a proposed emergent constraint as not credible if it lacks an identifiable physical mechanism, is not robust to a change of model ensemble, or if its correlation with ECS is not due to its proposed physical mechanism. For the last point, they examine the decomposition of the correlation by source (types of feedback, and forcing from a doubling of CO2) and by the geographical location of the principal sources of correlation.
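The idea behind such a decomposition can be sketched as follows: if each model’s ECS is approximated as a sum of component contributions (feedback terms and forcing), the covariance of a metric with ECS splits linearly across those components. The toy function below is my own simplification of that idea, not Caldwell et al.’s exact procedure:

```python
import numpy as np

def correlation_decomposition(metric, components):
    """Split corr(metric, ECS) into additive contributions from the
    components whose sum makes up each model's ECS (a simplified
    version of the linear-decomposition idea in Caldwell et al.)."""
    metric = np.asarray(metric, dtype=float)
    ecs = sum(np.asarray(c, dtype=float) for c in components.values())
    n = len(metric)
    denom = np.std(metric) * np.std(ecs) * n
    mdev = metric - metric.mean()
    # Each term is cov(metric, component) / (sd(metric) * sd(ecs));
    # by linearity of covariance the terms sum to corr(metric, ecs).
    return {name: float(np.sum(mdev * (np.asarray(c) - np.mean(c))) / denom)
            for name, c in components.items()}
```

Because the contributions sum exactly to the overall correlation, one can ask which component (e.g. SW cloud feedback) actually carries the correlation, and whether that matches the constraint’s proposed mechanism.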
Caldwell et al.’s eventual assessments of the 19 proposed emergent constraints are shown in Table 2. In two cases they combined a pair of constraints that are similar and well correlated. They note that those two pairs (Volodin/Siler and Zhai/Brient Alb) are similar in that both assume cloud changes track SST in a climate-invariant way, but that although Volodin and Siler are both strongly correlated with Brient Alb, they are poorly correlated with Zhai.

Table 2. Assessment of proposed emergent constraints by Caldwell et al 2018. Reproduction of their Table 4

The fact that only 4 out of the 19 emergent constraints pass Caldwell et al.’s basic tests of credibility is disturbing, and implies that results from emergent constraint studies should be treated with considerable scepticism. Some of the papers proposing failed constraints are highly cited: for example, Trenberth and Fasullo (2010) – where the originally reported strong –0.73 correlation fell to a very weak –0.22 in CMIP5 models – has 219 citations.[vi] Somewhat surprisingly, all three of the proposed emergent constraints that Trenberth and Fasullo formulated had insignificant correlations with ECS when tested on CMIP5 models.

In Parts 2 and 3 of this article I will examine the 4 constraints, all favouring high ECS values, that Caldwell et al. find credible, and formulate conclusions.


Nicholas Lewis                                                                      March 2018


[i] Table 9.5 of AR5. The method used appears to slightly overestimate the ECS of the least sensitive CMIP5 models and to underestimate the ECS of the most sensitive models. A better estimate of the range would be from slightly under 2°C to slightly above 5°C. This is almost identical to the 1.9–5.2°C range in the IPCC 1st Assessment Report (FAR).

[ii] See, e.g., Section 5.2.1 of the FAR. Note that I am including problems in simulating convection as part of the cloud simulation problems, as the two issues are intimately connected.

[iii] Sherwood, S.C., Bony, S. and Dufresne, J.L., 2014. Spread in model climate sensitivity traced to atmospheric convective mixing. Nature, 505(7481), 37–42.

[iv] Qu, X., A. Hall, A. M. DeAngelis, M. D. Zelinka, S. A. Klein, H. Su, B. Tian, and C. Zhai, 2018: On the emergent constraints of climate sensitivity. Journal of Climate, 31(2), 863–875, doi:10.1175/JCLI-D-17-0482.1.

[v] Caldwell, P., M. Zelinka and S. Klein, 2018. Evaluating Emergent Constraints on Equilibrium Climate Sensitivity. J. Climate, doi:10.1175/JCLI-D-17-0631.1, in press.

[vi] Trenberth, K. E., and J. T. Fasullo, 2010: Simulation of present day and 21st century energy budgets of the southern oceans. J. Clim., 23, 440–454.

 

Moderation note:  as with all guest posts, please keep your comments civil and relevant

119 responses to “Emergent constraints on climate sensitivity: Part I”

  1. Emergent constraint studies typically identify a quantitative measure of an aspect of GCMs’ behaviour (a metric) that is well correlated with their ECS values across an ensemble of GCMs.

    1) GCMs are swamped with random (or completely indescribable) numerical error, thus producing results that are meaningless.

    2) The concept of ‘ECS’ is undefined for a complex nonlinear system, or, at best, is useless for prediction, since prediction involves future, and therefore unknown, climate states at which the linearization of sensitivity is unknown.

    They then compare observational estimates of that metric with its value in each GCM. Often they use regression to fit a linear relationship between the metric and ECS values in the GCM-ensemble, and assert that the range of ECS values spanned by the segment of the regression line consistent with observational estimates of the metric is most credible.

    In other words, the metric is curve-fitted to the erroneous results of random numerical noise. Due to the authority of climate modelling and science in general, this fit is transferred into a metaphysical/psychological metric known as ‘credibility’.

    Science is returning fast to its alchemist roots.

    • Caldwell et al. therefore regard a proposed emergent constraint as not credible (sic) if it lacks an identifiable physical mechanism.

      And if that mechanism is found to be non-linear, as it will almost surely be? Then what reason is there in using a curve fit to diagnose it?

    • The very fact that people have allowed the non-provable existence of a phantom quantity like ‘ECS’ to be bandied about is a sign of an unrecoverable retreat.
      Alchemists use language to win their wars, not reason.

    • First one would have to define divergence in the nonlinear system of equations at the core of a GCM. This can only be done with systematically designed model families, using statistics of solutions with trajectories evolving from feasible differences in initial and boundary conditions. Feasible differences are very large for climate models.

      “Schematic of ensemble prediction system on seasonal to decadal time scales based on figure 1, showing (a) the impact of model biases and (b) a changing climate. The uncertainty in the model forecasts arises from both initial condition uncertainty and model uncertainty.” http://rsta.royalsocietypublishing.org/content/369/1956/4751

      What is being discussed here are perturbed physics models – where models are run 100’s to 1000’s of times with different initial states.

      The opportunistic ensembles of the CMIP are rather single solutions of disparate models with no scientifically justifiable rationale for the choice. On such a basis – none of the individual solutions of the CMIP have any credibility – and the statistics of opportunistic ensembles are therefore equally meaningless.

      Poorly modeled climate will of course continue to evolve independently of what happens in models. Just like weather.

      It leaves a large part of the modelling industry – let alone this post – with no basis in mathematical reality.

      • I like the diagram, thx

      • Your Terra and Aqua — a cool blog. I heartily recommend it to everyone here. (click on Robert I. Ellison)

      • It leaves a large part of the modelling industry – let alone this post – with no basis in mathematical reality.

        Correct. And the same with the other 80-90% of climate papers that are based on model results.

        I also take issue with the very notion of an ‘Equilibrium Climate Sensitivity’. The whole use of such language assumes that which it tries to prove. It is a linguistic trick to reduce all the complex dynamics of the full nonlinear system to a simple linear radiative balance.
        Acceptance of the term is surrender to the whole doom and gloom dialectic.

      • There is actually an exact analogue to this sort of ‘fortune telling’ via random manifestation of physical systems (i.e. random solutions via computer modelling with numerical error).

        The Chinese did the same thing via the fracture patterns of shells under heating. They were quite expert at it, like our climate modellers, and enjoyed similar authority.

      • nickels, isn’t the cyclical nature of the ice ages an indicator that x amount of forcing roughly produces x amount of warming?

      • afonzarelli:

        Such a relationship is strictly the result of linear thinking.
        Non-linear systems give no such guarantee.
        And clouds are obviously a massive non-linear feedback which cannot possibly be inferred from linear analysis.
        In more generous moments I tend to think the shoddy shape of climate analysis is a result of this amateurism concerning nonlinearity (among the community, I mean).

      • Non-linear systems give no such guarantee

        And yet there we are every hundred thousand years with roughly the same amount of warming (guaranteed)…

      • I suspect that it is a self limiting boom and bust cycle.


        http://biologyislifeccps.weebly.com/boom-and-bust-population-growth-cycle.html

        When conditions are right – insolation and thermohaline circulation – there is a runaway ice sheet – some 25 W/m2 difference between glacial max and min – and CO2 feedback from biokinetics – about 2 W/m2. At glacial max, CO2 deserts expand and dust blows over ice sheets, melting them and causing feedbacks in the other direction.

      • Ellison, that 25 W/m2 number (versus 2 W/m2 for CO2) is damning. Do you know of any estimates out there of ECS based on the ice ages? (and what might your own estimate be? thanx)…

      • Yeah, right. Scientists know what the temperature was 100,000 years ago, lol.

      • Last time I checked, scientists didn’t even have a clue what the temperature was this century.

      • Once scientists figure out how many genders there are (hint less than 3, more than 1), then we’ll let them move onto more complex things like temperature.

        http://sitn.hms.harvard.edu/flash/2016/gender-lines-science-transgender-identity/

      • “But what actually happens is that when CO2 reaches a minimum and albedo reaches a maximum, the world rapidly warms into an interglacial. A similar effect can be seen at the peak of an interglacial, where high CO2 and low albedo results in cooling. This counterintuitive response of the climate system also remains unexplained, and so a hitherto unaccounted for agent must exist that is strong enough to counter and reverse the classical feedback mechanisms.” https://www.sciencedirect.com/science/article/pii/S1674987116300305

        Sensitivity to changes in TOA flux is variable. It can be low. The response to greenhouse gases, land use change, changes in cloud with centennial changes in ocean and atmosphere circulation, solar variability, etc., has been muted in the modern era.

        But it can be extreme. Regionally – 10’s of degrees in as little as a decade. Megadroughts and megafloods.

        This shows solutions of an energy-balance model (EBM), showing the global-mean temperature (T) vs. the fractional change of insolation (μ) at the top of the atmosphere. (Source: Ghil, 2013)

        The model has two stable states with two points of abrupt climate change – the latter at the transitions from the blue lines to red from above and below. The two axes are normalized solar energy inputs μ (insolation) to the climate system and a global mean temperature. The current day energy input is μ = 1 with a global mean temperature of 287.7 degrees Kelvin. This is a relatively balmy 58.2 degrees Fahrenheit.

        Ghil’s model shows that climate sensitivity (γ) is variable. It is the change in temperature (ΔT) divided by the change in the control variable (Δμ) – the tangent to the curve as shown above. Sensitivity increases moving down the upper curve to the left towards the bifurcation and becomes arbitrarily large at the instability. The problem in a chaotic climate then becomes not one of quantifying climate sensitivity in a smoothly evolving climate but of predicting the onset of abrupt climate shifts and their implications for climate and society.

        The energy content – warmth – of the planet changes in response to changes in energy in and energy out. And this changes for many reasons.
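        A toy zero-dimensional EBM reproduces the bistability and abrupt transition that Ghil’s figure describes. Everything below (the constants, the smooth albedo ramp, the emissivity tuning) is my own illustrative construction, not Ghil’s actual model:

```python
import numpy as np

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m-2 K-4
Q = 1361.0        # solar "constant", W m-2
EPS = 0.61        # effective emissivity, tuned for a warm state near 292 K

def albedo(T):
    """Smooth ramp from an icy planet (0.6) to an ice-free one (0.25)."""
    return 0.6 - 0.35 / (1.0 + np.exp(-(T - 275.0) / 5.0))

def net_flux(T, mu=1.0):
    """Absorbed minus emitted flux; mu scales insolation as in Ghil's model."""
    return mu * Q / 4.0 * (1.0 - albedo(T)) - EPS * SIGMA * T**4

def equilibria(mu=1.0):
    """Temperatures in 200-320 K where the net flux changes sign."""
    T = np.linspace(200.0, 320.0, 24001)
    f = net_flux(T, mu)
    idx = np.where(np.sign(f[:-1]) != np.sign(f[1:]))[0]
    return [0.5 * (T[i] + T[i + 1]) for i in idx]
```

        At mu = 1 this toy model has three equilibria (cold stable, middle unstable, warm stable); cut insolation by ten percent and only the cold state survives – the kind of abrupt transition, with sensitivity diverging near the bifurcation, described above.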

      • Robert I Ellison: It leaves a large part of the modelling industry – let alone this post – with no basis in mathematical reality.

        “mathematical reality” is a beguiling phrase.

  2. The other option (Knutti 2017) is to abandon ECS and use carbon climate response (CCR/TCRE)
    https://ssrn.com/abstract=3142525

  3. “It is a linguistic trick to reduce all the complex dynamics of the full nonlinear system to a simple linear radiative balance.”

    It has to be like that. Otherwise people would be required to think for themselves. What a mess that would be.

  4. We must immediately discredit the notion that ‘having a physical explanation’ for something lends anything but the most ethereal credibility to this method and the result derived from it. Such an appeal to authority is not science, and is actually a use of the method we saw in the Oreskes paper, where post-modernist ‘narratives’ are given the same credibility as testable scientific hypotheses and experiments.

    Such appeals lead to absurd nonsense, as seen in this paper on emergent constraints:

    When combined with the power of human mind to assess the physical plausibility of their predictions, comprehensive climate models are the most powerful tools available to predict future climate and its response to radiative forcings such as the anthropogenic increase in greenhouse gases

    https://link.springer.com/content/pdf/10.1007%2Fs40641-015-0027-1.pdf

    There is also the problem that, if one cannot trust the full model itself and the conclusions derived therefrom, what good are small linear relationships that one divines from the numerical noise produced?

    And there is the obvious issue that such ‘relationships’ between predictor and predictee depend on the starting and ending points of a simulation, and that there are, in fact, no equilibrium conditions existing in climate that would make the reduction of the climate system into some kind of reproducible ‘Poincaré map’ relevant or defined in any way. Choosing a different starting point or ending point reserves every right to give a completely different relationship between predictor and predictee.

    In fact, this sort of ‘windowing’ also hides the very fact that imposing an observational constraint is nonsensical, since future evolution of the system has every chance to visit all regions of the supposed ‘linear relationship’.

    This entire method is just a complex obfuscation which tries to hide the fact that the entire model is inadequate, and attempts to obfuscate this failing by swapping depth for breadth and appealing to non-existent ‘statistics’ over a probability space that exists only in the authors’ imaginations.

    • ‘having a physical explanation’ is a necessary, but not sufficient condition. Without it one has nothing, and that is its usefulness in this case.

      • Agreed that such a rationale has meaning.
        But it isn’t science unless they build something with it and fly the thing.

    • nickels: Such appeals lead to absurd nonsense, as seen in this paper on emergent constraints:

      thank you for the link.

  5. Christopher Monckton says that there is an elementary error in physics that has led to too high an ECS.
    His team has calculated a net positive sensitivity of just 1.2°C, and hopefully this may be tested in court in the US. Who knows?
    What do Nic, Judith and others think? Anyone?

    • Nic and Judith should spend more time debunking guff like Monckton’s than talking about emergent constraints, because that stuff reaches a far larger audience making it more relevant to the skeptics. They won’t, of course.

      • Jim D,
        My being completely ignorant on this, could you explain to me what’s wrong with Monckton? In the mean time I will read the WUWT article. You don’t have to defend your critique I’d just like the info so I understand better. Thanks

      • Really you should ask Judith or Nic rather than me. He defines sensitivity using the absolute temperature, which is wrong to start with. Then when he gets a ridiculously low number, he reverts to the standard definition to plug it in rather than sticking with his previous definition that would have given a stupidly high answer. It’s just a mess, but they don’t notice at WUWT, of course.

      • Well since it is an amicus brief it will be up to the judge to decide and if it’s appealed then that’s the law but not necessarily the science I’d guess. Or necessarily even a reflection of the brief. Thanks for the reply. Hopefully JC or Nic Lewis could shed some light here.

      • Only a couple of skeptics over at WUWT saw through it, including Scottish Skeptic and Roy Spencer. Monckton still doesn’t understand their basically correct critique, and thinks there is a feedback even with no GHGs at all, perhaps even with no atmosphere, but I think he hasn’t quite thought it through yet.

      • Jim

        I am inclined to agree with you. There seem to be flaws in Monckton’s reasoning, but the document is worth analysing more deeply in a place such as this, in order to pick out the good bits from the more questionable aspects.

        Monckton is an interesting character but he has never spoken for me.

        tonyb

      • Jim D, There are loads of people who are willing and able to debunk “guff like Monckton’s” – why don’t you do so, for instance? On the other hand, there are almost no people willing and able to publicly explain emergent constraint studies and debunk unsound ones. So it is a better use of my time to do that than to critique Monckton.

      • I think the ones he convinces are more likely to listen to you and Judith than to me, and besides, that is the level of debate at WUWT which is much more popular than here, so you would reach and perhaps even change the views of more people. Roy Spencer had a go, but it was a bit tame.

      • Jim

        Nic is right. Why don’t YOU debunk Monckton’s stuff if you think that all of it is so bad that its flaws can be exposed so comprehensively that sceptics at places such as WUWT will come to recognise its shortcomings?

        tonyb

      • I have no time for posting at WUWT. If Monckton comes here, sure. Nick Stokes is posting there pointing out the problems. Enough to say that temperature does not feedback on itself. Monckton is inventing a new process to try to make physics fit his naive and wrong equation.

      • JimD
        ECS is an inbred child created within the closed and incestuous climate science community. What Monckton has done is shine a light on ECS from other disciplines such as engineering control systems and electrical system feedback, and found ECS to be a vacuous nonentity.

        It’s an impossible chimera, combining the mutually exclusive – equilibrium and feedbacks. You can’t have equilibrium and feedbacks – not in a system like the climate.

        I agree with you however that CM himself probably doesn’t understand this implication of his work. He might have killed ECS. It is feedbacks that are the subject of the whole debate. Monckton wisely gives the precondition “if the system is predisposed to feedbacks”, then any input, amplified or not, will trigger those feedbacks. This precondition is met in the climate. It is an open, dissipative system containing both negative (friction) and positive (excitability) feedbacks. Therefore there can be no equilibrium or any equilibrium temperature or ECS. Any starting point other than absolute zero is a starting point for change and a trigger of feedback.

        What it boils down to is that if in a dynamic system the result of an effect changes the effect that caused it, then the system will endlessly change and never reach equilibrium. Lorenz showed this in 1963 and the climate community still have not understood it. Lorenzian pseudo random walk is the closest the system would come to “equilibrium” although no plateau or state would ever be normative or a mean. The system and state will change forever.

      • Correction – where I say ECS it would be more precise to say equilibrium temperature.

      • I think many skeptics are yet to understand the concept of a positive feedback in climate, Monckton is proving himself to be one of them, and confuses them even further. It’s the blind leading the blind there. For an external forcing, a positive feedback simply amplifies its effect. The two best known are the water vapor and albedo feedbacks.
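        The point that feedbacks amplify a perturbation, not the absolute temperature, can be put compactly in the standard linear-feedback formula. In this sketch the function is hypothetical, and the 3.7 W/m2 forcing and 3.2 W/m2 K-1 Planck response are commonly used approximate values:

```python
LAMBDA0 = 3.2   # Planck-only climate feedback parameter, W m-2 K-1 (approx.)

def delta_T(dF, f):
    """Equilibrium warming for a forcing perturbation dF (W m-2) given a
    net feedback fraction f (< 1): feedbacks amplify the perturbation
    dT0 = dF / LAMBDA0, not the absolute ~288 K temperature."""
    dT0 = dF / LAMBDA0          # no-feedback (Planck-only) response
    return dT0 / (1.0 - f)      # standard linear-feedback amplification
```

        With dF = 3.7 W/m2 for doubled CO2, f = 0 gives roughly 1.2 K and f = 0.6 roughly 2.9 K; plugging 288 K into the amplification in place of dT0 is exactly the category error Spencer’s one-liner identifies.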

      • JimD should spend more time debunking guff like Monckton’s than making comments at Climate Etc, because that stuff reaches a far larger audience making it more relevant to the skeptics. He won’t, of course.

      • Since you agree Monckton’s post is guff, my job is done here.

      • ” He defines sensitivity using the absolute temperature”

        Wow, what else would you use? Anomaly becomes just a scaling factor. Is “emergent” ECS somehow exempt? Get with Mr. Eschenbach on emergence. He actually gets it. Not sure Nic does.

        Emergence is a meta layer of new and inherently unpredictable (at current human understanding) information that develops in complex systems. This goes all the way back to Boltzmann, before quantum theory, who realized that we deal with nothing but macro (meta) layer detail. We can’t possibly know the positions and momenta of all the particles in a cat or a table or the atmosphere. We gotta deal with the big (emergent) stuff.

        Modelling rests on the belief that we can reduce emergence.

      • Jim D wrote, ” … my job is done here.”
        You will not be missed.

      • “Jim D, There are loads of people who are willing and able to debunk “guff like Monckton’s” – why don’t you do so, for instance?”

        On a previous issue I grasped the nettle myself (https://wattsupwiththat.com/2015/03/12/reflections-on-monckton-et-al-s-transience-fraction/ and https://wattsupwiththat.com/2015/04/01/some-updates-to-the-monckton-et-al-irreducibly-simple-climate-model/).

        The problem is that Lord Monckton’s responses will have enough buzz words to pull the wool over the eyes of most readers–apparently including our hostess. The guest post by Rud Istvan here at Climate Etc. said of Lord Monckton’s previous post that “The mathematical derivation of the ‘irreducibly simple’ equation is impeccable.” That was ludicrous. Lord Monckton’s replacing convolution with multiplication was like substituting addition for multiplication. Yet Dr. Curry spiked my proposed responsive post pointing this out as “too technical.”

        I’m not trying to discourage you from debunking Lord Monckton. But don’t be too optimistic about whether many will recognize that you have succeeded in doing so.

      • Your hostess has ignored Monckton’s new article since I am busy with other things.

      • Thanks for the encouragement. As you say, it can be seen as technical, but Roy Spencer’s one-liner – that the feedback responds to, and is defined in terms of, perturbations, not the absolute temperature – should be sufficient. If someone wants to argue with even their hero Roy Spencer, let them. When even some of the skeptics at WUWT are speaking up, that should be a clue. Judith has tried to control the more outrageous claims, but seems to steer away from Monckton.

      • I won’t personally analyze all Monckton’s errors, so I can’t ask someone else to, either. However, I did email Lord Monckton’s Heartland lawyer to suggest that associating Heartland with Lord Monckton’s incoherence may not help Heartland’s reputation. Perhaps some heavy hitters could do the same. That brief makes us skeptics look like kooks.

    • Jim D, maybe you should file an amicus brief debunking Monckton. That would go a long way in debunking his ‘science’ if it’s obviously wrong.

    • Hesitated to weigh in because OT. But after all these ridiculous comments could not resist. I have sparred with Monckton here before (for example, a guest post on his ‘Irreducibly simple equation’, which was neither irreducible nor simple). But, over on WUWT, I cannot fault this new draft paper. Those who whine here about C v K, or Spencer’s shallow criticism, do not understand math models and how they can be applicable in different physical domains. You want my views on this new effort, read my comments at his post at WUWT. That means especially you, Jim D.
      Now let’s get back to Nic Lewis’s important topic, dissecting the new faux ‘constrained CMIP5 ECS estimates’.

      • Roy Spencer destroys Monckton’s whole line of reasoning with one line “The effective radiating temperature of the atmosphere (~255 K) is not a “forcing” and so cannot have a “feedback”.”
        Monckton asserts it does, and is 24 K – no more, no less – a number he got out of nowhere, together with the 8 K he says is from CO2 alone. If 255 K with no GHGs (perhaps not even an atmosphere) amplifies to 279 K, why doesn’t that 279 K have a further feedback on itself? What is different between the no-forcing 255 K and the no-forcing 279 K? He’s got himself into a logical tangle, and that’s just one of his many problems.

  6. …The mixing inferred from observations appears to be sufficiently strong to imply a climate sensitivity of more than 3 degrees for a doubling of carbon dioxide.

    … especially when taken with the willing suspension of disbelief!

  7. “CO2 climate sensitivity has a component directly due to radiative forcing by CO2, and a further contribution arising from climate feedbacks, both positive and negative. Without any feedbacks, a doubling of CO2 (which amounts to a forcing of 3.7 W/m2) would result in 1 °C global warming, which is easy to calculate and is undisputed.”
    Undisputed?
    Well, first off, a lot of this heat is supposed to go into the oceans, which adjust over thousands of years to the “final” ECS.
    Which means of course that in the short term – the next 200 years – the temperature rise could theoretically be much less than 1 C.
    Secondly, warmists argue that all the other changes that go on in those 200 years, let alone the thousands required, mean that the ECS can change into something different from this “undisputed” figure anyway.
    I am unhappy with assertions that an ECS cannot be worked out to some degree. While there is room for many elephants, there is still a range of probability we can work within, hence the debate. The trouble is that both sides adjust the parameters to fit their narrative.
    I would be interested in Nic’s take on the definitions, if he has them handy, that are in common usage by everyone for a doubling of CO2.
    If we see a figure of 1.4 C, or an average of 3.0 C, is this the figure for a doubling of CO2 immediately, in 10 years, or in 1000?
    Can we avoid the wriggle room the warmists always run to when they are losing their arguments?
    The only consistent message is that 3C will be dangerous and that therefore it “must be” 3 C or greater by hook or by crook or by Jim D.
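The “easy to calculate” 1 °C in the quoted passage is just the no-feedback Planck response. A minimal sketch of the arithmetic, assuming the usual textbook values (255 K effective radiating temperature, 3.7 W/m2 forcing per doubling):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0      # effective radiating temperature of the Earth, K
DF_2XCO2 = 3.7     # radiative forcing from a CO2 doubling, W m^-2

# Planck response: differentiating F = sigma*T^4 gives dF/dT = 4*sigma*T^3
planck_lambda = 4.0 * SIGMA * T_EFF**3   # ~3.76 W m^-2 K^-1

# No-feedback warming for a doubling of CO2
delta_t = DF_2XCO2 / planck_lambda
print(f"{delta_t:.2f} K")  # ~0.98 K, i.e. roughly 1 C before feedbacks
```

All the feedbacks (water vapour, lapse rate, clouds, albedo) then scale this baseline up or down, which is where the disputed ECS range comes from; the timescale questions raised above are about when that equilibrium response is realised, not about this baseline number.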

  8. the 4 constraints, all favouring high ECS values, that Caldwell et al find credible

    Business as usual in the climate kitchen.

    • Geoff Sherrington

      PhilS, Feedback and equilibrium.
      Imagine an early steam engine with one of those classic governors, the rotating balls that throttle back the increasing speed. The engineering tolerances were not so good, so they were forever hunting around the desired set point. Why not look at climate like this? The hunting sometimes beats, so you have repeating patterns like Javier’s cycles, with a bit more grease on the governor now and then to speed up the hunt for an equilibrium speed. The point being: are personal views of effects like feedbacks too theoretical and rigid? This discussion of forcing versus effect etc. gets a lot of attention that might be superfluous if we remember that the governors can wander around in efficiency. Nothing natural in climate is a perfect match to mathematical concepts, because the systems are loose and full of drift and sudden rate changes. All within the broad control constraints of snowball and less-than-boiling seas. Geoff.

      • All within the broad control constraints of snowball and less than boiling seas

        (that really narrows it down there, Geoff… ☺)

      • Good point Geoff
        I remember a while back seeing a palaeo study of the PDO to about ten cycles back which showed its wavelength to vary between about 30-100 years. So it’s a mistake to assume a constant 60 year time period – it just ain’t so. In the studies by Javier the oscillation alignments come and go, sometimes correlating and sometimes not. I think that Robert Ellison is on the right lines saying that many apparent regular cycles are internal chaotic cycles which only from time to time give the appearance of regularity. Your analogy of a shifting regulator constantly changing the feedbacks could well be right.

        However I don’t dismiss the evidence from Javier and others for astrophysical forcing. What I keep saying is that it doesn’t have to be an either-or choice between internal chaotic oscillation and cycles rigidly forced by astrophysical processes. Chaotic oscillation can be periodically forced from outside, either strong forcing (waveform of forcer and system the same) or weak forcing (waveform of the forced system complex and weakly or intermittently resembling that of the forcer).

    • “It was one more defeat in our long and losing battle to keep the Sun perfect, or, if not perfect, constant, and if inconstant, regular. Why we think the Sun should be any of these when other stars are not is more a question for social than for physical science.” Jack Eddy

  9. Part one suggests most of these papers are of poor quality. I look forward to parts 2 and 3 dissecting those few that aren’t.
    An underlying issue is the intrinsically problematic nature of the models themselves. Computational intractability forces large grid sizes. This means important processes like convection cells (thunderstorms) have to be parameterized. The parameters are tuned to best hindcast. This automatically drags in the attribution problem that invalidates the underlying model forcing assumptions. See my two guest posts at WUWT in 2015 and 2017 for details. Constraining results that are questionable from first principles seems a rather useless exercise.

    • Ristvan and Matt, the most telling negative results about GCMs are referred to by Nic in his comprehensive writeup on ECS. These results show that ECS in models is strongly influenced by sub-grid parameterizations of tropical clouds and convection. Further, the paper suggests that there is no adequate observational evidence to constrain the parameter choices, and therefore ECS.

  10. ristvan: Constraining results that are questionable from first principles seems a rather useless exercise.

    It depends on the target audience, and how well the writer understands them. There is a large audience that accepts the results that you declare questionable. Directing their attention relentlessly to the flaws in the derivation of those results reduces the size of the audience. Not everyone in the audience is open-minded, but not everyone is closed-minded either. It also communicates to that audience that critics are well-informed and diligent, not to be mindlessly disparaged and ignored.

    If you think this paper is of poor quality, provide your critiques and perhaps Nic Lewis will elaborate and improve the quality; or show where your own evaluation falls short. The interchange can enlighten the lurkers.

    • (Matthew, i think the papers of poor quality that Istvan is referring to are the papers that Nic referred to in his post)…

      • afonzarelli: (Matthew, i think the papers of poor quality that Istvan is referring to are the papers that Nic referred to in his post)…

        I see what you mean, and you might be right.

    • Mm, afonz got it right – sorry if my comment was unclear. My critique was of the ‘constrained model’ papers, when the underlying models being constrained are junk. I have the highest respect for Nic Lewis on ECS, and very much look forward to parts 2 and 3.

  11. So, in a nutshell, the predictive skill attributed to climate science is all BS. Disagree? OK, I have a challenge then. Create a predictive model that will tell us who will win the World Series in 2025, 2026 and 2027. There are many fewer variables in that than in climate. If you get it right, I will bow down to the God of Science.

    • Jeff Meskan | March 20, 2018
      “Create a predictive model that will tell us who will win the World Series in 2025, 2026 and 2027. There are many fewer variables in that than in climate. If you get it right, I will bow down to the God of Science.”

      Model C do?
      Pick the two sides that finish first and second in the preceding year to win the next year.
      Take the Punxsutawney Phil result in February:
      If he predicts winter continuing, pick the first team; if an early spring, the second.
      Wash, rinse, repeat.
      Bowing will have to wait till 2027 I guess.

      • This is utterly ridiculous.

        At the level of climate, it would be easy to model. The winner of the world series in 2027, assuming there is a world series of MLB teams, will be a team that is a member of MLB. It will be a team of that league that is the first team to win four games in a 7-game series, assuming that is still the format. Done. Climate is that simple.

        On to 0.2 ℃ per decade for the first two decades of the 21st century. A decadal prediction – that was significantly harder; they’re exceptionally close. In baseball, that would be like predicting that home runs will increase if players were allowed to use steroids.

        Nobody is losing any arguments, angech, except you.

      • Regarding the winner of the World Series in 2027: I asked that climate scientists create a predictive model to forecast the winner, not that there will be one. That is obvious. The variables to predict the actual winner include, but are not limited to: the current roster of teams, the skill of management to acquire talent from their farm system or free agency, the probable free agency players, the projected salary cap per team (which depends on the renegotiated price of the TV contracts), the degradation of player skill over time, etc. After running your model, tell me who wins. Simple. Climate scientists now get away with making predictions and never being accountable for the actual results. Remember Paul Ehrlich’s “Population Bomb” written in the 1970’s? He predicted mass world-wide famine and starvation in the 1980’s. It never happened, but he was still hailed as a leading “futurist”. Why is that?

    • Models for that already exist. Called computer games.

      I’d like to challenge our Governor (Inslee) to bet his mortgage payment on the winner of this year’s Super Bowl based on running Madden NFL Live a thousand times.

  12. ” Shortwave cloud feedback is generally found to be the dominant contributor to correlations with ECS because it is the largest source of inter-model spread in ECS.”

    This is the same circular reasoning that happens again and again in modeling logic. It is exactly the same circularity criticized by Qu et al. It is exactly the same reasoning that says if you take 50 different models with different initial conditions, you can somehow parse out the randomness (entropy); when it is actually the disparity of initial conditions that causes the spread.

    • This is a perturbed physics ensemble tuned to the region of instrumental surface temps and projected forward. The same model with different initial conditions – and both high and low sensitivity.

      Constraining models is first of all a matter of more precise observations and finer grids. The development of useful models is much more likely as nested fine to large scale process models – and at time scales of days to years. Even then – there is still a matter of a lack of fundamental knowledge of the climate system and its multiple couplings.

      “The accurate representation of this continuum of variability in numerical models is, consequently, a challenging but essential goal. Fundamental barriers to advancing weather and climate prediction on time scales from days to years, as well as longstanding systematic errors in weather and climate models, are partly attributable to our limited understanding of and capability for simulating the complex, multiscale interactions intrinsic to atmospheric, oceanic, and cryospheric fluid motions.”
      https://journals.ametsoc.org/doi/pdf/10.1175/2009BAMS2752.1

      Before constraining anything – one should decide if long term projections are sufficiently robust to be worth the candle. The CMIP – whatever version – might just be a forlorn cause.

  13. It is wrong to assume that a constraint that worked with one set of models, like CMIP3, and didn’t with another, like CMIP5, is not worth pursuing. For example, maybe there was more spread in the CMIP3 models in some aspect, making that constraint significant; but once some errors were corrected in outlier models, the models may behave more similarly and the constraint can no longer be used to distinguish how good the models are with any significance. For an emergent constraint to work, you have to have a range of bad to good in the models’ performance against that observational constraint. Ironically, the method works best if some of the models are bad.

    For example, they may have a negative cloud feedback in the tropics, but perform poorly against satellite data for the current climate. Later generations of the model would correct their performance against those observations. It’s like in quantum mechanics: the system changes when some aspect of it is observed. In this case observations reveal deficiencies that can be used as emergent constraints, but only until those deficiencies are corrected. Then something else has to be found; the method relies on variable deficiencies across models.
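The regression recipe described in the head post, and argued over in this comment, can be sketched on synthetic numbers. Everything here is invented for illustration – the metric, the assumed metric–ECS relationship, the noise level, and the "observed" value are not real CMIP data:

```python
import random

random.seed(0)

# Synthetic ensemble: each "model" gets a cloud metric, and an ECS value
# partly controlled by that metric (the 2.0 + 2.5*m link and 0.3 noise
# level are arbitrary choices for this sketch)
n_models = 20
metric = [random.uniform(0.0, 1.0) for _ in range(n_models)]
ecs = [2.0 + 2.5 * m + random.gauss(0.0, 0.3) for m in metric]

# Ordinary least-squares fit of ECS against the metric across the ensemble
mx = sum(metric) / n_models
my = sum(ecs) / n_models
slope = (sum((x - mx) * (y - my) for x, y in zip(metric, ecs))
         / sum((x - mx) ** 2 for x in metric))
intercept = my - slope * mx

# An "observed" metric value with uncertainty; the emergent constraint is
# the segment of the regression line consistent with that observation
obs, obs_err = 0.6, 0.1
ecs_lo = intercept + slope * (obs - obs_err)
ecs_hi = intercept + slope * (obs + obs_err)
print(f"constrained ECS range: {ecs_lo:.2f}-{ecs_hi:.2f} C")
```

Note what the comment is pointing at: if the models all agreed on the metric (no spread on the x-axis), the slope would be undefined and no constraint could be extracted – the method needs some models to be "bad".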

    • It’s like quantum mechanics aye?

    • As you read this story just remember the Chinese have bigger computers and are vastly outspending the US in A.I. technology so it will be interesting to see who wins this race. (Hint: the biosphere will crash but CO2 will only be one of many causes.)
      Deep Learning at 15 PFlops Enables Training for Extreme Weather Identification at Scale:
      https://www.hpcwire.com/2018/03/19/deep-learning-at-15-pflops-enables-training-for-extreme-weather-identification-at-scale/
      “The Cori supercomputer has given climate scientists the ability to use machine learning to identify extreme weather events in huge climate simulation datasets. Predictive accuracies ranging from 89.4% to as high as 99.1% show that trained deep learning neural networks (DNNs) can identify weather fronts, tropical cyclones, and long narrow air flows that transport water vapor from the tropics called atmospheric rivers. As with image recognition, Michael Wehner (senior staff scientist, LBNL) noted they found the machine learning output outperforms humans.”

    • Jim D: Then something else has to be found, but the method relies on variable deficiencies across models.

      It also depends on them having common strengths.

      The analogy with quantum mechanics is poor.

      • The more common the strengths, the less well the method works, but the range will also narrow as poor models are weeded out. I think skeptics would be very upset if provably poor models persisted in contributing to future reports. Emergent constraints are a way of pulling them towards better verification – a kind of public shaming, or peer pressure.

      • Jim D: I think skeptics would be very upset if the provably poor models persisted in contributions to future reports.

        Huh?

        I don’t know about anyone else, but I applaud efforts to separate the better and more reliable models from the poorer and less reliable models. From progress to date, I think a reasonably adequate model will come decades from now, but I think the efforts at improvement are worthwhile.

        The more common the strengths, the less the method works,

        The common strengths are what justify computing the ensemble means of the trajectories. I think you are saying that the method works best if the common strengths are neither too common nor too rare. In the extreme of no common strengths, you don’t have an ensemble (different sense of this word!) of models of anything.

  14. JimD
    “It’s like in quantum mechanics, the system changes when some aspect of it is observed.”

    A cack-handed way of putting it.
    The system is the system; it just does what it does. The parameters or rules at the micro level are simply too far removed in scale for us to observe or appreciate.
    As to changing when observed?
    One would have to believe in a god or magic.
    Just think about it.
    Does the atom really know you are observing its behaviour?
    Bunkum.
    The moment you go to observe anything, you are changing the rules of unobserved behaviour.
    It is not, repeat not, the same experiment.
    God should not play dice with men.

  15. The use of correlated predictors in a regression model, when both predictors may well be false, is at the least significant: no one is validating these models and… no one can.

  16. “Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic. The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.” http://rsta.royalsocietypublishing.org/content/369/1956/4751

    Pretending that there is a deterministic solution to any of these models – that indeed share components and development – is utter nonsense.
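For readers who have not met it, the Lorenz result quoted above is easy to reproduce. A minimal sketch, using a crude fixed-step Euler integration of the Lorenz (1963) equations with the classic parameter values – purely illustrative, not a claim about any GCM:

```python
import math

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz (1963) system
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)   # perturb one coordinate by one part in a billion

for _ in range(6000):        # integrate both trajectories to t = 30
    a, b = lorenz_step(a), lorenz_step(b)

# The tiny initial difference has grown by many orders of magnitude:
# the two "forecasts" are now entirely different states on the attractor
separation = math.dist(a, b)
print(separation)
```

This exponential divergence from minute initial differences is exactly why individual model trajectories are non-unique, which is the point being argued in this sub-thread.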

    • Your usual inability to distinguish initial value problems like weather prediction from boundary value problems like climate change induced by external forcing changes.

      • “Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable. They echo other famous limitations on scientist’s expectations, namely the undecidability of some propositions within axiomatic mathematical systems (Gödel’s theorem) and the uncomputability of some algorithms due to excessive size of the calculation (see ref. 26).” http://www.pnas.org/content/104/21/8709


        “Generic behaviors for chaotic dynamical systems with dependent variables ξ(t) and η(t). (Left) Sensitive dependence. Small changes in initial or boundary conditions imply limited predictability with (Lyapunov) exponential growth in phase differences. (Right) Structural instability. Small changes in model formulation alter the long-time probability distribution function (PDF) (i.e., the attractor).”

        We see above in the Rowlands et al 2012 graph – and indeed in the Slingo and Palmer 2011 schematic – the results of sensitive dependence. These are chaotic dynamical systems with nonlinear systems of partial differential equations at the core. The idea of climate model solutions converging rather than diverging from small initial differences is just a completely baseless AGW blog meme.

      • Climate models all rise in response to positive external forcing. That part is predictable and not chaotic. You have to distinguish these otherwise you end up in a hopeless mess when interpreting climate projections.

      • “In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions.” IPC TAR 14.2.2.2

        You waste our time in a hopeless mess of AGW blog memes. Until we have these PDFs from perturbed-physics ensembles I will continue to scoff. Opportunistic ensembles are utter bunkum. As the IPCC said so long ago – they cannot provide meaningful information. And both climate and models can surprise at either end of the cool/warm spectrum – it is a reality of both the climate system seen in data and of models.

      • You have already decided that adding a forcing growing to equivalent to a 1% increase in solar strength over a century or so has completely unknowable effects to you, despite all evidence to the contrary in the observations (not just models).

      • Well it is about 1/2% – but the size of the direct effect is unknown as are implications for abrupt change in the system and where natural variability can shift to. There may in future be a better grasp on these things – certainly than yours. “Doing so is vital, as the future evolution of the global mean temperature may hold surprises on both the warm and cold ends of the spectrum due entirely to internal variability that lie well outside the envelope of a steadily increasing global mean temperature.” https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2008GL037022

        The discussion was on models, where you are definitively incorrect. You then go back to AGW – but the climate picture is a whole lot bigger than that.

      • It’s about ECS, with equilibrium responses being another concept you have little belief in.

      • It is about a quantity we have no idea how to realistically constrain? Yeah right.

      • Turns out the higher end of the range is favored by constraints based on observations. That’s what this is about.

      • OK, why not tell Lewis and Curry that? They’re the ones posting this line of research, not me. Maybe they’ll think highly of your considered opinions, not given yet, on why models with positive cloud feedbacks match observations better and also have higher climate sensitivities.

      • Not so much the words, just the picture. I get it. Anyway, I think you are really very interested in why emergent constraints always seem to point to higher sensitivities because that conflicts with your prior beliefs that you may need to revise with this new information.

      • “Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic.”

        “Schematic of a probabilistic weather forecast using initial condition uncertainties. The blue lines show the trajectories of the individual forecasts that diverge from each other owing to uncertainties in the initial conditions and in the representation of sub-gridscale processes in the model. The dashed, lighter blue envelope represents the range of possible states that the real atmosphere could encompass and the solid, dark blue envelope represents the range of states sampled by the model predictions.”

        The same nonlinearities in weather and climate models of course produce this behaviour (“Uncertainty in weather and climate prediction”, Julia Slingo and Tim Palmer, published 31 October 2011, DOI: 10.1098/rsta.2011.0161).

        It is very difficult to imagine that chaos in models can be constrained – and it is known without any doubt at all that these models are chaotic. Nor do we know what the limit of imprecision of these models is. From Rowlands et al (2012) the imprecision of the HadCM3L perturbed physics ensemble exceeds the range given by CMIP opportunistic ensembles.

        The scientific rationale for choosing any individual trajectory in the solution space is lacking. Yet these are compared to other non-unique solutions from other models, and the pretence is that more or less cloud feedback is the distinguishing feature. Each of these models can give low or high sensitivity from different starting points – with the same structure – because the distinguishing feature is mathematical chaos, pioneered in the Hamiltonian dynamics of an evolving orbital system more than a century ago. And as people from the IPCC to Julia Slingo and Tim Palmer keep saying, the only possible use for these long-range models is to produce many forecasts forming a probability density function. However unlikely that is. The experiment has been done many times, and the hypothesis that Jimmy has not the slightest clue has not been falsified.

        There are potentially scientifically valid uses for models – in validated process level investigations and perhaps in initialized decadal scale forecasting.

        e.g. https://judithcurry.com/2018/03/19/emergent-constraints-on-climate-sensitivity-part-i/#comment-868864

        Apart from the intractable problem of chaos – there is the epistemological problem. And I did quote part of this yesterday. “The global coupled atmosphere-ocean-land-cryosphere system exhibits a wide range of physical and dynamical phenomena with associated physical, biological, and chemical feedbacks that collectively result in a continuum of temporal and spatial variability. The traditional boundaries between weather and climate are, therefore, somewhat artificial. The large-scale climate, for instance, determines the environment for microscale (1 km or less) and mesoscale (from several kilometers to several hundred kilometers) processes that govern weather and local climate, and these small-scale processes likely have significant impacts on the evolution of the large-scale circulation (Fig. 1; derived from Meehl et al. 2001).

        The accurate representation of this continuum of variability in numerical models is, consequently, a challenging but essential goal. Fundamental barriers to advancing weather and climate prediction on time scales from days to years, as well as longstanding systematic errors in weather and climate models, are partly attributable to our limited understanding of and capability for simulating the complex, multiscale interactions intrinsic to atmospheric, oceanic, and cryospheric fluid motions.” https://journals.ametsoc.org/doi/abs/10.1175/2009BAMS2752.1

        JC SNIP

      • You said all that before, but you are not addressing that adding 1% to the forcing changes the climate significantly, and that the remaining ice caps have trouble persisting under those conditions. Tipping points are pushed hard.

      • And you just keep going around in very small circles. You know what you said and what was removed.

        We are never getting to 3.7 W/m2. Realistic emissions trajectories in multiple gases and aerosols, plus sequestration, just won’t allow it. Still, it is not much of a change even if we did.

        We don’t know where tipping points are. They can be cooler or warmer – and cooler seems far more likely.

        “But what actually happens is that when CO2 reaches a minimum and albedo reaches a maximum, the world rapidly warms into an interglacial. A similar effect can be seen at the peak of an interglacial, where high CO2 and low albedo results in cooling. This counterintuitive response of the climate system also remains unexplained, and so a hitherto unaccounted for agent must exist that is strong enough to counter and reverse the classical feedback mechanisms.” https://www.sciencedirect.com/science/article/pii/S1674987116300305

        And opportunistic ensembles are complete BS.

      • Paleoclimate tells us a lot. Below 300 ppm we are subject to Ice Ages. Above 600 ppm there was hardly any ice and 100 foot higher sea levels. Two completely different climates, and our trend is towards the second one.

      • Paleoclimatology tells us nothing so simple minded.

        “Abrupt climate change is not unusual, and in fact many simple physical systems exhibit abrupt changes. Here, we illustrate a few basic points using a mechanical analogy.

        Imagine a balance consisting of a curved track poised on a fulcrum, as shown above. The track is curved so that there are two “cups” where a ball may rest. A ball is placed on the track and is free to roll until it reaches its point of rest. This system has three equilibria denoted (a), (b) and (c) in the top row of the figure. The middle equilibrium (b) is unstable: if the ball is displaced ever so slightly to one side or another, the displacement will accelerate until the system is in a state far from its original position. In contrast, if the ball in state (a) or (c) is displaced, the balance will merely rock a bit back and forth, and the ball will roll slightly within its cup until friction restores it to its original equilibrium.

        Suppose we push down gently on the right arm of the balance causing a slight tilt, as shown in (a1). When we let go, the ball will rattle around for a bit as the balance tilts back and forth. Once things settle down, the system will return to its original state of rest, with the ball in the left cup. As noted above, this position is stable in the face of small perturbations.

        If instead we push the right arm somewhat farther down, as shown in (a2), the ball will eventually roll over the fulcrum and slide down into the right cup. This is an example of a system passing a threshold. When the pressure is relieved, the system does not return to its original state. A temporary influence can have permanent effects; this is what is known as hysteresis .

        This device illustrates other kinds of behavior that are common in the climate system. The equilibria illustrated in the top row are steady, in that the system sits still without moving. But suppose that the ball in state (a) or (c) is given a gentle push. If the friction is low, the ball in either case will rattle around for a long time, but will remain in its original cup. This illustrates the notion of an unsteady regime—the “left cup” regime and the “right cup” regime. A strong enough push at the right time could cause a transition between one regime and the other.

        An unusual application of force could cause unexpected behavior. Hit it hard enough, and the device might do something different from anything seen before. For example, the arm of the balance might bang against the table, and the ball could bounce out of the cup and roll away.

        Now imagine that you have never seen the device and that it is hidden in a box in a dark room. You have no knowledge of the hand that occasionally sets things in motion, and you are trying to figure out the system’s behavior on the basis of some old 78-rpm recordings of the muffled sounds made by the device. Plus, the recordings are badly scratched, so some of what was recorded is lost or garbled beyond recognition. If you can imagine this, you have some appreciation of the difficulties of paleoclimate research and of predicting the results of abrupt changes in the climate system.” https://www.nap.edu/read/10136/chapter/3#13
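The balance-and-ball device in the quoted NAS passage can be mimicked numerically as an overdamped ball on a double-well track, V(x) = x⁴/4 − x²/2 − f·x, where f is the tilt applied by the hand. The potential and the two tilt strengths are my own illustrative choices, not anything from the report:

```python
def settle(x, tilt, steps=4000, dt=0.01):
    # Overdamped dynamics: the ball slides downhill on the tilted track,
    # dx/dt = -dV/dx = -(x**3 - x - tilt), until it comes to rest
    for _ in range(steps):
        x -= dt * (x**3 - x - tilt)
    return x

x = settle(-1.0, 0.0)          # ball at rest in the left cup
x = settle(x, 0.3)             # gentle push: below the threshold (~0.385)
x = settle(x, 0.0)             # let go...
returns_left = x < 0           # ...and the ball rolls back to the left cup

x = settle(-1.0, 0.0)
x = settle(x, 0.6)             # harder push: the left cup disappears
x = settle(x, 0.0)             # relieve the pressure...
stays_right = x > 0            # ...but the ball now rests in the right cup
print(returns_left, stays_right)   # → True True
```

The second sequence is the hysteresis the passage describes: a temporary influence with a permanent effect, because the system crossed a threshold between regimes.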

      • You seem to have written a lot there, so I must have hit a nerve. If you think a 600+ ppm climate is fine and safe, you need to be advocating for it. Otherwise you need to be urging caution about forcing rapid changes in climate.

      • But really – we were talking models where you again completely miss the point.

      • You can run a model at 300 ppm and then again at 600 ppm. Completely different climates will ensue.

  17. Nic: What emergent constraints studies appear to be doing is sometimes known as data-mining, a statistically dubious process. The most notorious example of data-mining came in the 1990s, when a large number of genetic markers were linked to genetic diseases. These are the same kinds of genetic markers used today in forensics, paternity testing and genealogy; they detect random mutations without functional significance in the human genome. Perhaps 100 patients with a genetic disease and 100 controls would be studied to see if there was a connection between the presence of a particular marker and the disease. Then the DNA around the marker would be searched for genes that cause the disease. Such searches were usually a waste of time, and the correlation between the marker and the disease was invariably weaker or non-existent when the experiment was repeated with other groups of patients. These studies played a significant role in Ioannidis’s paper “Why Most Published Research Findings Are False”.

    By analogy, one can search a population of a few dozen AOGCMs for markers that correlate weakly with high ECS and then see if the real world has that marker. Given an infinite number of locations, altitudes and observables, one will find some relationships. Only those relationships linking markers to high ECS are likely to end up being studied thoroughly enough to warrant publication. So Caldwell’s comment is extremely important, but I don’t know if there is any simple way to get climate scientists to pay attention to the follies of data-mining. Could the relationship between a particular marker/constraint and ECS be examined in a thousand perturbed-physics ensembles before being accepted?

    Your Caldwell quote: “One problem with emergent constraints is that large inter-model correlations between current climate and future-climate quantities are expected by chance in multi-model databases. As a result, emergent constraints without a solid physical basis should be viewed with scepticism. Unfortunately, most emergent constraints in the published literature lack a satisfying physical explanation.”
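    The chance-correlation problem Caldwell describes is easy to demonstrate numerically. The sketch below uses invented numbers (a hypothetical 40-model ensemble and 1000 candidate metrics, all generated at random) to show that searching many metrics will turn up some that correlate substantially with ECS by pure chance:

```python
# Illustrative only: these "metrics" are pure noise with no physical link
# to ECS, yet the best of many candidates correlates with it noticeably.
import numpy as np

rng = np.random.default_rng(0)
n_models, n_metrics = 40, 1000              # hypothetical ensemble and search sizes

ecs = rng.uniform(2.1, 4.7, n_models)       # model ECS values spanning the AR5 range
metrics = rng.normal(size=(n_metrics, n_models))  # candidate metrics: random noise

# Pearson correlation of each random metric with ECS across the ensemble
r = np.array([np.corrcoef(m, ecs)[0, 1] for m in metrics])
print(f"largest |r| found by chance: {np.abs(r).max():.2f}")
```

    With settings like these the best chance correlation is typically around |r| ≈ 0.5, which is why a correlation alone, without a physical mechanism, proves little.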

  18. Nic: I read Sherwood with an eye to identifying any features resembling data-mining. I noticed that they used AMIP output, which has a climate feedback parameter associated with low climate sensitivity in EVERY model. These AOGCMs only show high climate sensitivity when forced by GHGs. This suggests many interesting questions: Is the relationship between S, D and LTMI present in historical simulations of today’s climate? Since climate sensitivity appears to increase during simulations of the future, do these markers change with time?

    Look at the [linear] regression between Sherwood’s parameter D (the only one mechanistically linked to ECS) and ECS: ECS = mD + b. Now plug in the observed values for ERAi and MERRA. That linear relationship produces a central estimate for ECS between 5 and 6 K. Furthermore, of the 48 CMIP3+5 models, arguably only 8 (possibly related) models do a decent job of reproducing the D values observed by ERAi and MERRA. The other models are off by a factor of 2. What this data really says is that most models don’t do a good job of reproducing observations of a parameter (D) that is important to climate sensitivity! Most models aren’t fit for purpose. Among the decent-performing models, there is no correlation between D and ECS. (The lack of a key to the symbols makes it impossible to know which models perform well in D.)
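    The regression-then-plug-in step described above can be sketched as follows. All numbers here are invented for illustration; they are not Sherwood et al.’s actual D or ECS values:

```python
# Emergent-constraint style calculation with made-up numbers:
# fit ECS = m*D + b across a model ensemble, then evaluate at an "observed" D.
import numpy as np

D_models = np.array([0.20, 0.30, 0.35, 0.50, 0.55, 0.60, 0.70, 0.80])  # hypothetical metric values
ecs_models = np.array([2.5, 2.8, 3.0, 3.4, 3.6, 3.9, 4.2, 4.5])        # hypothetical ECS values (K)

m, b = np.polyfit(D_models, ecs_models, 1)  # least-squares fit of ECS = m*D + b

D_obs = 0.65                                # stand-in for an observational estimate
ecs_constrained = m * D_obs + b
print(f"constrained ECS estimate: {ecs_constrained:.2f} K")
```

    The weakness the comment identifies is not in this arithmetic but in its inputs: if most models misrepresent D, the fitted line and hence the constrained estimate inherit that bias.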

    The existence of a linear correlation between ECS and Sherwood’s S parameter appears to depend on a few outliers with high climate sensitivity. However, if one believed the linear correlation was meaningful, MERRA’s value for S would imply that climate sensitivity is high and ERAi’s that climate sensitivity is low.
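    That kind of outlier dependence is easy to check numerically. With invented numbers (not the actual S values), a strong ensemble-wide correlation collapses once the two high-ECS models are dropped:

```python
# Illustration with invented data: a strong ensemble-wide correlation
# can be carried entirely by a couple of high-ECS outlier models.
import numpy as np

S = np.array([0.40, 0.42, 0.45, 0.43, 0.41, 0.44, 0.60, 0.65])  # hypothetical metric
ecs = np.array([3.0, 3.2, 2.9, 3.1, 3.3, 3.0, 4.6, 4.8])        # last two models are outliers

r_all = np.corrcoef(S, ecs)[0, 1]             # correlation over the full "ensemble"
r_core = np.corrcoef(S[:-2], ecs[:-2])[0, 1]  # correlation with outliers removed
print(f"r with outliers: {r_all:.2f}; without: {r_core:.2f}")
```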

    It just goes to show that one can get anything published in Nature today that says climate sensitivity is at the high end of the IPCC’s range. It would be fun to re-plot the Sherwood data with the climate feedback parameter on the y-axis and different names for S and D, and show it to an audience of climate scientists. The numerical values for climate sensitivity would be accidentally missing, but everyone knows that they run from 2 K to 5 K. After the audience raises the same objections I have about the conclusion that climate sensitivity is low, tell them that the vertical axis is really the climate feedback parameter and that this is Sherwood’s data.

    • franktoo, thanks for this very relevant and perceptive comment. I give a fairly detailed analysis of the Sherwood D constraint in Part 2 of this series, which I hope will be published within the next day or two.

      I had not considered your point about the use of AMIP rather than historical simulations, but I don’t think the AMIP simulations were used for calculating D. See the first sentence of the (full) Methods section.

      I agree that Sherwood S is pretty uninformative about ECS in GCMs. But as Caldwell et al. (2018) concluded it is not credible, I do not deal with it in my analysis.

  19. Nic: Thanks for the useful replies. I wanted to look in detail at one of these papers, but in this case I should have waited for Part 2.

  20. Pingback: Emergent constraints on climate sensitivity: Part II | Climate Etc.

  21. Competition between global warming and an abrupt collapse of the AMOC in Earth’s energy imbalance

    Abstract

    A collapse of the Atlantic Meridional Overturning Circulation (AMOC) leads to global cooling through fast feedbacks that selectively amplify the response in the Northern Hemisphere (NH). How such cooling competes with global warming has long been a topic for speculation, but was never addressed using a climate model. Here it is shown that global cooling due to a collapsing AMOC obliterates global warming for a period of 15–20 years. Thereafter, the global mean temperature trend is reversed and becomes similar to a simulation without an AMOC collapse. The resulting surface warming hiatus lasts for 40–50 years. Global warming and AMOC-induced NH cooling are governed by similar feedbacks, giving rise to a global net radiative imbalance of similar sign, although the former is associated with surface warming, the latter with cooling. Their footprints in outgoing longwave and absorbed shortwave radiation are very distinct, making attribution possible.

  22. So AMOC collapse doesn’t quite do it? Nor perhaps NH cooling with enhanced meridional flows equivalent to the difference between RCP 4.5 and RCP 8.5? Nor perhaps a cooling Pacific – with more low-level cloud – from these intensified north/south flows with a cooling Sun?

    Nothing quite does it for these people – they are addicted to anthropogenic doom. And the only solution is to make energy expensive in an energy poor world. It most certainly ain’t.

    https://watertechbyrie.com/

    • Like abrupt climate change is not doom. LMAO. Greenland would get walloped. Iceland would get walloped. Great Britain would get walloped. Northern Europe would get walloped.

      And keep deluding yourself with your little graph.

  23. The secret is to build prosperous and resilient communities in vibrant landscapes.

    “My” little HadCRU graph with trend?

  24. Pingback: Weekly Climate and Energy News Roundup #309 | Watts Up With That?