What’s the worst case? Climate sensitivity

by Judith Curry

Are values of equilibrium climate sensitivity > 4.5 °C plausible?

For background, see these previous posts on climate sensitivity [link]

Here are some possibilistic arguments related to climate sensitivity.  I don’t think the ECS example is the best one to illustrate these ideas [see previous post], and I probably won’t include this example in anything I try to publish on this topic (my draft paper is getting too long anyways).  But possibilistic thinking does point you in some different directions when pondering the upper bound of plausible ECS values.

5. Climate sensitivity

Equilibrium climate sensitivity (ECS) is defined as the amount of temperature change in response to a doubling of atmospheric CO2 concentrations, after the climate system has reached equilibrium. The issue with regard to ECS is not scenario discovery; rather, the challenge is to clarify the upper bounds of possible and plausible worst cases.
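
To make the definition concrete, here is a minimal sketch of the zero-dimensional energy balance that underlies it: at equilibrium the top-of-atmosphere imbalance N = F − λΔT vanishes, so ECS = F2x/λ. The forcing and feedback-parameter values below are generic textbook numbers, not estimates from this post.

```python
# Minimal sketch of the zero-dimensional energy balance behind the ECS
# definition. At equilibrium the top-of-atmosphere imbalance N = F - lam*dT
# vanishes, so dT = F / lam. F2X and the lam values are generic textbook
# numbers, not estimates taken from this post.
F2X = 3.7  # W/m^2, canonical radiative forcing from doubled CO2

def ecs(lam):
    """Equilibrium warming (K) for feedback parameter lam (W/m^2/K)."""
    return F2X / lam

for lam in (3.2, 1.23, 0.82):  # no-feedback, mid-range, high-end examples
    print(f"lam = {lam:.2f} W/m^2/K -> ECS = {ecs(lam):.1f} K")
```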

The IPCC assessments of ECS have focused on a ‘likely’ (> 66% probability) range, which has remained mostly unchanged since Charney et al. (1979): between 1.5 and 4.5 °C. The IPCC AR4 (2007) did not provide any insight into a worst-case value of ECS, stating only that values substantially higher than 4.5 °C could not be excluded, with tail values in its Figure 9.20 exceeding 10 °C. The IPCC AR5 (2013) more clearly defined the upper range, with a 10% probability of exceeding 6 °C.

Since the IPCC AR5, there has been considerable debate as to whether ECS is on the lower end of the likely range (e.g., < 3 °C) or the higher end of the likely range (for a summary, see Lewis and Curry, 2018). The analysis here bypasses that particular debate and focuses on the upper extreme values of ECS.

High-end values of ECS are of considerable interest to economists. Weitzman (2009) argued that probability density function (PDF) tails of the equilibrium climate sensitivity, fattened by structural uncertainty using a Bayesian framework, can have a large effect on cost-benefit analysis. Proceeding in the Bayesian paradigm, Weitzman fitted a Pareto distribution to the AR4 ECS values, resulting in a fat tail that produced a probability of 0.05 of ECS exceeding 11 °C, and a probability of 0.01 of exceeding 20 °C.
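
To see how fat such a tail is, the sketch below backs out the Pareto parameters implied by the two quoted probabilities and then evaluates the tail farther out; the fitted parameters are reconstructed here for illustration, not taken from Weitzman’s paper.

```python
import math

# Reconstruct the Pareto tail implied by the two probabilities quoted above
# from Weitzman (2009): P(ECS > 11 C) = 0.05 and P(ECS > 20 C) = 0.01.
# Pareto survival function: S(x) = (xm / x)**alpha for x >= xm.
alpha = math.log(0.05 / 0.01) / math.log(20.0 / 11.0)  # ~2.7
xm = 11.0 * 0.05 ** (1.0 / alpha)                      # ~3.6 C

def tail_prob(x):
    """Probability that ECS exceeds x under the implied Pareto tail."""
    return (xm / x) ** alpha

print(f"alpha = {alpha:.2f}, xm = {xm:.2f} C")
print(f"P(ECS > 11 C) = {tail_prob(11):.3f}")  # recovers 0.05
print(f"P(ECS > 20 C) = {tail_prob(20):.3f}")  # recovers 0.01
print(f"P(ECS > 30 C) = {tail_prob(30):.4f}")  # the power-law tail decays slowly
```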

The range of ECS values derived from global climate models (CMIP5) that were cited by the IPCC AR5 is between 2.1 and 4.7 °C. To better constrain the values of ECS based on observational information available at the time of the AR5, Lewis and Grunwald (2018) combined instrumental period evidence with paleoclimate proxy evidence using objective Bayesian and frequentist likelihood-ratio methods. They identified a 5–95% range for ECS of 1.1–4.05 °C. Using the same analysis methods, Lewis and Curry (2018) updated the analysis for the instrumental period by extending the period and using revised estimates of forcing, determining a 5–95% range of 1.05–2.7 °C. The observationally-based values should be regarded as estimates of effective climate sensitivity, as they reflect feedbacks over too short a period for equilibrium to be reached.
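
For readers unfamiliar with such energy-budget estimates, the sketch below shows the basic estimator these studies use; the inputs are round illustrative values, not the numbers from Lewis and Curry (2018).

```python
# Sketch of the energy-budget estimator used in observationally based
# studies such as Lewis and Curry (2018). The input numbers below are round
# illustrative values, NOT the values used in that paper.
F2X = 3.7  # W/m^2, forcing from doubled CO2
dT = 0.8   # K, change in global mean surface temperature between periods
dF = 2.5   # W/m^2, change in total radiative forcing
dQ = 0.5   # W/m^2, change in planetary heat uptake (mostly ocean)

ecs_eff = F2X * dT / (dF - dQ)  # effective climate sensitivity
tcr = F2X * dT / dF             # transient response omits the uptake term
print(f"effective ECS ~ {ecs_eff:.2f} K, TCR ~ {tcr:.2f} K")
```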

Values of climate sensitivity exceeding 4.5 °C derived from observational analyses are arguably associated with deficiencies in the diagnostics or analysis approach (e.g. Annan and Hargreaves, 2006; Lewis and Curry, 2015). In particular, use of a non-informative prior (e.g. Jeffreys prior), or a frequentist likelihood-ratio method, narrows the upper tail considerably. However, as summarized by Frame et al. (2006), there is no observational constraint on the upper bound of ECS.

The challenges of identifying an upper bound for ECS are summarized by Stevens et al. (2016) and Knutti et al. (2017). Stevens et al. (2016) describe a systematic approach for refuting physical storylines for extreme values. Stevens et al.’s physical storyline for a very high ECS (> 4.5 °C) comprises three conditions: (i) the aerosol cooling influence in recent decades would have to have been strong enough to offset most of the effect of rising greenhouse gases; (ii) tropical sea-surface temperatures at the time of the last glacial maximum would have to have been much cooler than at present; and (iii) cloud feedbacks from warming would have to be strong and positive.

An interesting challenge to identifying the plausible upper bound for ECS has been presented by a newly developed climate model, the DOE E3SM (Golaz et al. 2019), which includes numerous technical and scientific advances. The model’s value of ECS has been determined to be 5.3 °C, higher than any of the CMIP5 model values and outside the IPCC AR5 likely range. This high value of ECS is attributable to very strong shortwave cloud feedback. The DOE E3SM model’s value of shortwave cloud feedback is larger than in any CMIP5 model; however, shortwave cloud feedback is weakly constrained by observations and physical understanding. A stronger argument for placing the DOE E3SM value of climate sensitivity in the ‘borderline impossible’ category is Figure 23 in Golaz et al. (2019), which shows that the global mean surface temperature simulated by the model during the period 1960-2000 is as much as 0.5 °C lower than observed, and that since the mid-1990s the simulated temperature rises far faster than the observed temperature. This case illustrates the challenge, noted by Stevens et al. (2016), of refuting scenarios associated with a complex storyline or model.

An additional issue regarding climate model derived values of ECS was raised in a recent paper by Mauritsen et al. (2019). An intermediate version of the MPI-ESM1.2 global climate model produced an ECS value of ~ 7 °C, caused by the parameterization of low-level clouds in the tropics. Since this version produced substantially more warming than observed in the historical period, it was rejected, and the model cloud parameters were adjusted to target a value of ECS closer to 3 °C, resulting in a final ECS value of 2.77 °C. The strategy employed by Mauritsen et al. (2019) raises the issue as to what extent climate model-derived ECS values are truly emergent, rather than a result of tuning that explicitly or implicitly considers the value of ECS and the match of the model simulations with the historical temperature record.
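
The tuning concern can be made concrete with a toy example: if ECS responds monotonically to an adjustable cloud parameter, a target ECS can simply be dialed in. The toy relation below is invented for illustration and bears no connection to the actual MPI-ESM1.2 parameterizations.

```python
# Toy sketch of the tuning issue raised above: if ECS responds monotonically
# to an adjustable cloud parameter, a target ECS can simply be dialed in.
# toy_ecs is an invented stand-in for a full climate model, with no
# connection to the actual MPI-ESM1.2 parameterizations.
def toy_ecs(cloud_param):
    """Made-up monotonic relation between a cloud parameter and ECS (K)."""
    return 1.5 + 5.5 * cloud_param

def tune(target, lo=0.0, hi=1.0, tol=1e-6):
    """Bisect on the cloud parameter until toy_ecs hits the target ECS."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if toy_ecs(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p = tune(3.0)
print(f"cloud_param = {p:.4f} -> ECS = {toy_ecs(p):.2f} K")
```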

Was Mauritsen et al. (2019) justified in rejecting the model version with an ECS value of ~ 7 °C? Is the DOE E3SM value of ECS of 5.3 °C plausible? Observationally-derived values of ECS (e.g. Lewis and Curry, 2018) are inadequate for defining the upper bounds of ECS. There are two types of constraints that in principle can be used: emergent constraints and the Transient Climate Response.

Emergent constraints in principle can help narrow uncertainties in climate model sensitivity through empirical relationships that relate a model’s response to observable metrics. These analyses have mostly focused on cloud processes. The credibility of an emergent constraint relies upon the strength of the statistical relationship, a clear understanding of the mechanisms underlying the relationship, and the accuracy of observations. Further, the most robust emergent constraints are for model parameters that are driven by a single physical process (e.g. Winsberg, 2018). Investigations of emergent constraints related to cloud processes have mostly concluded that the climate models with ECS values on the high end of the IPCC AR5 likely range show best agreement with the constraints (e.g. Caldwell et al., 2018). However, Caldwell et al. (2018) and Winsberg (2018) caution that additional processes influencing the metric, and other biases in the model, may affect the analysis. While the robustness and utility of these emergent constraints continue to be investigated and debated, this technique is not very helpful in identifying a plausible upper bound or in rejecting high values such as those obtained by Golaz et al. (2019) and Mauritsen et al. (2019).
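
For concreteness, the toy sketch below shows the mechanics of an emergent constraint: regress ECS on an observable metric across an ensemble of models, then read off the constrained ECS at the observed metric value. All numbers are synthetic; the caveats above about mechanisms and observational accuracy are precisely what such a fit cannot address.

```python
import numpy as np

# Toy illustration of the mechanics of an emergent constraint: regress ECS
# on an observable metric across an ensemble, then read off the constrained
# ECS at the observed value of the metric. Every number here is synthetic,
# invented purely for illustration; none are CMIP values.
rng = np.random.default_rng(0)
metric = rng.uniform(0.2, 0.8, 20)                   # per-model observable metric
ecs = 1.5 + 4.0 * metric + rng.normal(0.0, 0.3, 20)  # synthetic ECS-metric relation

slope, intercept = np.polyfit(metric, ecs, 1)        # cross-model regression
observed_metric = 0.6                                # hypothetical observed value
print(f"constrained ECS ~ {intercept + slope * observed_metric:.2f} K")
```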

The Transient Climate Response (TCR) in principle can be of greater utility in providing an observational constraint on climate sensitivity. TCR is the amount of warming at the time CO2 doubles, after increasing gradually by 1% per year over a period of about 70 years. Relative to ECS, observationally-determined values of TCR avoid the problems of uncertainties in ocean heat uptake and the fuzzy boundary in defining equilibrium owing to the range of timescales of the various feedback processes. Further, an upper limit to TCR can in principle be determined from observational analyses.
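
The 70-year figure is simply the compound-growth doubling time of CO2 at 1% per year, as this quick check shows:

```python
import math

# The '70 years' in the TCR definition is just the compound-growth doubling
# time of CO2 at 1% per year: solve (1.01)**t = 2 for t.
t_double = math.log(2.0) / math.log(1.01)
print(f"doubling time at 1%/yr: {t_double:.1f} years")  # ~69.7
```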

TCR values cited by the IPCC AR5 have a likely (> 66%) upper bound of 2.5 °C and < 5% probability of exceeding 3 °C. Knutti et al. (2017; Figure 1) show several relatively recent TCR distributions whose 90th-percentile value exceeds 3 °C. Observationally-derived values of TCR determined by Lewis and Curry (2018) have a 5–95% range of 1.0–1.9 K. As discussed by Lewis and Curry (2015) and Lewis and Grunwald (2018), use of a non-informative prior or a frequentist likelihood-ratio method narrows the upper tail considerably. While the methodological details of determining values of TCR from observations continue to be debated, in principle the upper bound of TCR can be constrained by historical observations.

How does a constraint on the upper bound of TCR help constrain the high-end values of ECS? A TCR value of 2.93 °C was determined by Golaz et al. (2019) for the DOE E3SM model, which is well above the 95% value determined by Lewis and Curry (2018), and also above the IPCC AR5 likely range. Table 9.5 of the IPCC AR5 lists the ECS and TCR values of each of the CMIP5 models. If a TCR value of 2.0 °C is used as the maximum plausible value of TCR, based on the Lewis and Curry (2018) analysis, then it seems reasonable to classify climate model-derived values of ECS associated with TCR values ≤ 2.0 °C as verified possibilities.
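
A minimal sketch of this screening rule, with hypothetical placeholder values rather than numbers transcribed from Table 9.5:

```python
# Minimal sketch of the screening rule described above: treat a model's ECS
# as a 'verified possibility' only if its TCR does not exceed 2.0 C. The
# (model, ECS, TCR) triples are hypothetical placeholders, not values
# transcribed from Table 9.5 of the IPCC AR5.
models = [
    ("model_A", 2.1, 1.3),
    ("model_B", 3.0, 1.6),
    ("model_C", 4.1, 2.0),
    ("model_D", 4.6, 2.5),  # TCR too high: excluded by the rule
]

TCR_MAX = 2.0
verified = [(name, ecs) for name, ecs, tcr in models if tcr <= TCR_MAX]
ecs_values = [ecs for _, ecs in verified]
print("verified possibilities:", verified)
print(f"ECS range: {min(ecs_values)} - {max(ecs_values)} C")
```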

In light of the cited analyses of ECS (which are not exhaustive), consider the following classification of values of equilibrium climate sensitivity relative to the π-based classifications provided in the possibilistic post, which provides the expert judgment of one analyst (moi). Note that overlapping values in the different classifications arise from different scenario generation methods associated with different necessity-judgment rationales:

  • ECS < 0: impossible
  • 0 < ECS < 1 °C: implies negative feedback (unverified possibility)
  • 1.0 ≤ ECS ≤ 1.2 °C: no-feedback climate sensitivity (strongly verified, based on theoretical analysis and empirical observations; see the check after this list)
  • 1.05 ≤ ECS ≤ 2.7 °C: empirically-derived values based on energy balance models from the instrumental period, with verified statistical and uncertainty analysis methods (Lewis and Curry, 2018) (corroborated possibilities)
  • 1.15 ≤ ECS ≤ 4.05 °C: empirically-derived values including paleoclimate estimates (Lewis and Grunwald, 2018) (verified possibilities)
  • 2.1 ≤ ECS ≤ 4.1 °C: derived from climate model simulations whose values of TCR do not exceed 2.0 °C (Table 9.5, IPCC AR5) (verified possibilities)
  • 4.5 < ECS ≤ 6 °C: borderline impossible
  • ECS > 6 °C: impossible
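
As flagged in the list, the no-feedback band can be checked directly from the canonical doubled-CO2 forcing and a Planck-only response; both inputs below are generic textbook values rather than numbers from this post.

```python
# Quick check of the 'no feedback' entry in the list above, using generic
# textbook values (not numbers from this post): the canonical doubled-CO2
# forcing of ~3.7 W/m^2 and a Planck-only response of roughly 3.2-3.7 W/m^2/K.
F2X = 3.7
for lam_planck in (3.7, 3.2):  # W/m^2/K, approximate Planck-response range
    print(f"no-feedback ECS ~ {F2X / lam_planck:.2f} C")
# -> ~1.00 C and ~1.16 C, consistent with the 1.0-1.2 C band in the list.
```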

In evaluating the justification of the high-end values of ECS, it is useful to employ the logic of partial positions for an ordered scale of events. It is rational to believe with high confidence a partial position that equilibrium climate sensitivity is at least 1 °C and between 1 and 2.7 °C, which encompasses the strongly verified and corroborated possibilities. This partial position with a high degree of justification is relatively immune to falsification. It is also rational to provisionally extend one’s position to believe values of equilibrium climate sensitivity up to 4.1 °C – the range simulated by climate models whose TCR values do not exceed 2.0 °C – although these values are vulnerable to improvements in climate models and in our observational estimates of TCR, whereby portions of this extended position may prove to be false. A high degree of justification ensures that a partial position is highly immune to falsification and can be flexibly extended in many different ways when constructing a complete position.

The conceivable worst case for ECS is arguably ill-defined; there is no obvious way to positively infer it, and such inferences are hampered by timescale fuzziness between equilibrium climate sensitivity and the larger earth system sensitivity. However, one can refute estimates of extreme values of ECS > 10 °C from fat-tailed distributions (e.g. Weitzman, 2009) as arguably impossible – these reflect the statistical manufacture of extreme values that are unjustified by either observations or theoretical understanding, and extend well beyond any conceivable uncertainty or possible ignorance about the subject.

The possible worst case for ECS is judged here to be 6.0 °C, although this boundary is weakly justified. The only evidence for very high values of ECS comes from climate model simulations with very strong positive cloud feedback (e.g. Mauritsen et al. 2019; Golaz et al. 2019) and statistical analyses that use informative priors. Further examination of the CMIP6 models is needed to assess the causes, outcomes and plausibility of the parameters and feedbacks in the models with very high values of ECS before rejecting them as impossible.

With regard to the plausible worst case (the lower bound of borderline impossible values) of ECS, consideration was given to the upper bound of verified possibilities (4.1 °C) and also to the time-honored value of 4.5 °C as the upper bound of the ‘likely’ range for ECS. Comparison of a model’s value of TCR with observationally-derived values of TCR seems to be a useful constraint for assessing the plausibility of the model’s ECS value. However, further investigation is needed to understand the methodological differences among the varying estimates of TCR, and the causes of the varying relations between TCR and ECS values in different models. This seems a more fruitful way forward than the emergent constraints approach.

Given that 4.5 °C was specified by the IPCC AR5 as the upper bound of the likely range (> 66% probability), the judgment here that specifies 4.5 °C as the maximum plausible value of ECS will undoubtedly be controversial. Other analysts may make different judgments and draw a different conclusion. Consideration of different rationales for making judgments on the maximum plausible value of ECS would illuminate the underlying issues.

Because the social cost of carbon determined by Integrated Assessment Models is largely driven by tail values of ECS, the issue of clarifying the plausible and possible values of ECS is not without consequence.

95 responses to “What’s the worst case? Climate sensitivity”

  1. “0 ≤ ECS ≤ 1.2 °C: no feedback climate sensitivity (strongly verified, based on theoretical analysis and empirical observations).”

    Is there a typo here? It looks to me that the lower bound of a no feedback sensitivity should not be zero, but higher.

    “It is rational to believe with high confidence a partial position that equilibrium climate sensitivity is at least 1 °C and between 1 and 2.7 °C, which encompasses the strongly verified and corroborated possibilities. This partial position with a high degree of justification is relatively immune to falsification.”

    This requires a strong assumption that natural causes have not contributed in an important way to the observed warming. If, for example, the Modern Solar Maximum has contributed 60% of the observed warming since 1950 – something that is not irrational given how little we know about the effects of solar variability on climate – then ECS values below 1 °C are certainly possible, and feedbacks must be net negative.

    • yes a typo, will fix!

      • double sixsixman

        ECS and TCS are … “No one knows.”

        Any other “answer” is a wild guess.

        One could look at the entire era of man made CO2 emissions, which I’ll define as starting in 1940. If one assumes, with no proof, that 100% of the warming since 1940 was caused only by CO2, as a worst case estimate, the average global temperature increased about +0.6 degrees C. in the 78 years to the end of 2018.

        +0.6 degrees C. in 78 years is +0.077 degrees C. per decade. Let’s round that up to +0.1 degrees C. There is no logical reason to expect, or predict, more than +0.1 degrees C. warming per decade in the future. Therefore, CO2 is harmless, even with a worst case assumption.

        But actual experience with global warming, cause unknown, and actual experience with the growth rate of CO2 levels, at about +2 ppm per year, apparently mean nothing to climate alarmists. They have their wild guesses and computer models (computer games, actually). They have their +3 degrees C. ECS, since 1979, that causes predictions to be triple the actual warming, on average. And they have their “return to coal” RCP8.5 that has no relationship to actual experience with CO2 growth rates.

        In a fantasy world of a coming climate change catastrophe, past experience with climate change (always good news) apparently does not matter. In that fantasy world, the future climate change is always bad news … that never shows up … and we’ve been waiting for 60+ years since Roger Revelle’s predictions started this climate fraud.

        Real climate science is not wild guesses of the future climate, based on wild guesses of ECS, and wild guesses of RCP. Anyone who participates in a debate about ECS and RCP is aiding the climate change alarmists. Real science means you don’t make predictions about things not understood well enough to make accurate predictions of the future.

        In my 21 years of reading climate science as a hobby, the most important lesson I can teach is to STOP MAKING PREDICTIONS OF THE FUTURE CLIMATE, and ignore predictions that others make! They have been consistently wrong for many decades!

        The right answer is no one knows the future climate, not even whether there will be warming or cooling. And no one knows when our current Holocene interglacial will end. Unfortunately, few people will accept “we don’t know” as the right answer for any question.

        My climate science blog:
        http://www.elOnionBloggle.Blogspot.com

    • As a non-expert I wonder whether ECS and TCS are ill-defined notions: the feedback mechanisms (albedo change from deglaciation, humidity change, …) are driven by rising temperatures alone, so any primary cause of global warming would produce the same feedbacks. It therefore seems confusing to attribute these feedbacks to the CO2 concentration alone.

      • That is correct. Even more: most of the greenhouse gas effect should come from water vapor, which is also a byproduct of warming, so fingerprinting anthropogenic warming to the increase in CO2 is pure fantasy.

    • A Freudian typo.

    • Steven Mosher

      “Modern Solar Maximum has contributed to 60% of the observed warming since 1950, something that is not irrational ”

      since there is no modern solar maximum, it is highly irrational

      • since there is no modern solar maximum, it is highly irrational

        Denying the evidence with opinion?

        The Modern Solar Maximum is clearly seen with a gaussian smoothing of the sunspot data:

        https://i.imgur.com/ZiWx0Yk.png

        Or even better just by running a 70-year moving average through the 1750-2018 sunspot data:

        https://i.imgur.com/SXGv5lm.png

        Respected climatologists like Takuro Kobashi, Bo Vinther and Tom Blunier accept the existence of the Modern Solar Maximum, as they see its effects on climate:

        Kobashi, T., Box, J.E., Vinther, B.M., Goto‐Azuma, K., Blunier, T., White, J.W.C., Nakaegawa, T. and Andresen, C.S., 2015. Modern solar maximum forced late twentieth century Greenland cooling. Geophysical Research Letters, 42 (14), pp.5992-5999.

        I guess you are wrong as usual. The Modern Solar Maximum, a one-in-600-years event that exactly coincides with Modern Global Warming, is assigned a near-zero effect on climate by models. Not surprisingly, since it ended, model performance has been abysmally poor.

      • Javier April 2, 2019 at 5:01 am

        Is the time scale along the x-axis in the second graph right? I don’t understand how there can be data points up to 2018 with a running 70-year moving average filter.

      • Teerhuis, it is a trailing average (not centered). The first point in 1820 corresponds to the average of 1750-1820. The last point in 2018 corresponds to the average of 1948-2018. Data are monthly. The Modern Solar Maximum corresponds to 1935-2005.

  2. Pingback: Climate Sensitivity | Transterrestrial Musings

  3. A helpful approach to an upper bound of ECS is also included in Skeie et al. (2018) https://www.earth-syst-dynam.net/9/879/2018/esd-9-879-2018.pdf with an upper bound of 3.1 (most likely value 2.0), with the caveat that this paper unfortunately used the unchanged AR5 forcings.

  4. “Equilibrium climate sensitivity (ECS) is defined as the amount of temperature change in response to a doubling of atmospheric CO2 concentrations, after the climate system has reached equilibrium.”
    ECS is a useless concept – unless you have an agenda to achieve. It contains innumerable unjustifiable assumptions, including that “temperature change” is a determinable response to a change in atmospheric CO2 concentrations. Under what circumstances? All other factors remaining unchanged? When has that happened? And what the heck is climate system equilibrium, but another meaningless construct?
    I am sure that the climate will change as long as I remain on a diet, but that doesn’t mean that climate change results from my eating habits. “Climate science” fails because causality (human action causing measurable changes) cannot be shown.
    Economists like to pose scenarios, ceteris paribus (all other things remaining unchanged), to argue what might result from some change in conditions – because it is always impossible to isolate all the different variables that may influence or change the outcome of what they are studying. The same fatal defect applies to attempts to predict climate change. Accepting the concept of ECS as valid, is starting your reasoning with a false set of premises. No valid conclusions can ever result from such an invalid beginning. ECS is simply a tool invented to justify unjustifiable arguments and predictions.
    Climate change predictions and economic forecasting serve the same purposes – legitimizing political control over people and their property, and enriching and empowering the politicians and the “experts”. Neither are consistent with legitimate science. Or with reality.

    • double sixsixman

      PaulG23: I’ve been writing an economics newsletter, for paid subscribers, as one of my hobbies since 1977. Few people know that, as a group, US economists have NEVER predicted a US recession! Why should climate forecasting be any better?

      In economics, the forecasts are almost always for good news, but 8 or 9 years out of 10 ARE good news (no recession)! So economists seem right 80% to 90% of the time.

      In climate junk science, the forecasts are always for bad news, in spite of the fact that past global warming has been 100% good news, whether you go back 20,000 years, or 300 years, or 78 years! So climate alarmists are wrong 100% of the time!

      Yet they repeat their wrong prediction of a coming climate catastrophe every year, and gullible people actually listen to them!

  5. If CO2-induced ‘runaway’ global warming is not even remotely possible, let’s begin by agreeing on that.

  6. Nicely done.

    Seems we could get a start on policy by asserting the most plausible and least falsifiable range of values. Take the midpoint. Then construct likely physical scenarios for such a value (sea level rise, changes in agricultural zones, etc.) and, finally, derive social costs.

    Once we do all that, we can have a discussion of how to deal with fossil fuels and other human elements of climate forcing. Even here, I suspect final “social cost” uncertainties wide enough to drive a Mack truck through. But as my brother says about gun control, it would be nice to start having a rational discussion rather than pinging back and forth between extreme statements.

  7. Ari Okkonen

    Nobel physicist Richard Feynman has strongly stated: “if measurements do not support predictions of a hypothesis, the hypothesis is simply wrong”. I analyzed North Atlantic sea surface temperatures for 160 years. It seems that the temperature changes are explained by a linear trend for recovery from the “little ice age” and a 63 year sine cycle for AMO so well that nothing is left for an exponential term having time constant of 50 years for logarithm (warming effect) of atmospheric carbon dioxide concentration. North Atlantic sea surface has a large area for temperature exchange with the atmosphere. So, this measurement based analysis does not support predictions of the CO2 based global warming / global climate disruption hypothesis. The result casts serious doubt on the validity of the CO2 induced global warming hypothesis.
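
    A sketch of the kind of fit described here, assuming a linear trend plus a fixed 63-year sinusoid; the data are synthetic stand-ins, not the North Atlantic series analyzed in this comment:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Sketch of the decomposition the commenter describes: a linear 'recovery'
    # trend plus a ~63-year AMO-like sine fitted to an SST series. The data
    # below are synthetic placeholders, not the actual North Atlantic record.
    years = np.arange(1860.0, 2020.0)
    rng = np.random.default_rng(1)
    sst = 0.005 * (years - 1860) + 0.15 * np.sin(2 * np.pi * (years - 1900) / 63.0)
    sst = sst + rng.normal(0.0, 0.05, years.size)

    def model(t, a, b, c, phase):
        """Linear trend plus a fixed-period (63-year) sinusoid."""
        return a + b * (t - 1860) + c * np.sin(2 * np.pi * (t - phase) / 63.0)

    params, _ = curve_fit(model, years, sst, p0=[0.0, 0.004, 0.1, 1900.0])
    print(f"trend: {10 * params[1]:.3f} C/decade, sine amplitude: {params[2]:.2f} C")
    ```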

    • Steven Mosher

      “Nobel physicist Richard Feynman has strongly stated: “if measurements do not support predictions of a hypothesis, the hypothesis is simply wrong”.

      nope. In fact Feynman lectured on a famous case where observations
      had to be corrected
      https://youtu.be/b9F8Wn4vf5Y?t=206

      And here as well
      “The net result of all those calculations was that we believed that the sun should be emitting a huge number of neutrinos all the time. I can illustrate that. [Your] thumbnail’s about roughly a square centimeter. Every second, about a hundred billion of these solar neutrinos according to our calculations would be passing through your thumbnail every second of every day of every year of your life, and you never notice it.”

      “Well, right from the beginning it was apparent that Ray was measuring fewer neutrinos events than I had predicted. He came to Caltech in early 1968 to spend a week with me while he and I wrote our papers up describing for me a refined calculation, for him the first measurement of the rate in his tank. It was clear that the rate that he was getting was a factor of three smaller than I was predicting, and that was a very serious problem.

      There was a famous meeting at Caltech, just a few physicists—Dick Feynman, Murray Gell-Mann, Willie Fowler, Bob Christie, and a couple of others—in a small meeting room, where Ray presented his results and I presented my calculations of what he should have measured. There was some discussion of it afterwards, and it was pretty inconclusive. There was a discrepancy; it looked like one of us was wrong.”

      “I was very visibly depressed, I guess, and Dick Feynman asked me after the meeting if I would like to go for a walk. We just went for a walk, and he talked to me about inconsequential things, personal things, which was very unusual for him, to spend his time in quite idle conversation; it never happened to me in the many years that I knew him that he did that before or afterwards. And only toward the end of the walk, which lasted over an hour, he told me, “Look, I saw that after this talk you were depressed, and I just wanted to tell you that I don’t think you have any reason to be depressed. We’ve heard what you did, and nobody’s found anything wrong with your calculations. I don’t know why Davis’s result doesn’t agree with your calculations, but you shouldn’t be discouraged, because maybe you’ve done something important, we don’t know. I don’t know what the explanation is, but you shouldn’t feel discouraged.”

      What’s the lesson? When data don’t match a hypothesis… we don’t know.
      We don’t know if it’s the data (some correction is necessary), and we don’t know if it’s the instrument, or the theory.

      • Ari Okkonen

        In this missing-neutrinos case there was one missing component in the theory: neutrino oscillation. The neutrino flow was measured on Earth for the flavor created by the nuclear processes in the Sun. However, the neutrinos had time to change to other flavors, which the detector could not see, during their long journey to Earth. This case of conflicting theory and measurements encouraged further neutrino research.
        https://en.wikipedia.org/wiki/Neutrino_oscillation

      • Steven Mosher |
        “Nobel physicist Richard Feynman has strongly stated: “if measurements do not support predictions of a hypothesis, the hypothesis is simply wrong”.nope. In fact Feynman lectured on a famous case where observations had to be corrected”

        I think the lecture was about the changing hypotheses, never correcting the observations.

        There was no need to apply a “Mosher shift” to the data to shift the 8-minute discrepancies in Jupiter’s moons to match a theory; instead they realised both theory and observations were right.
        Einstein did not make “Mosher shifts” in observations either. He accepted the raw data. He just said it would have different values from different observational viewpoints. I do not think he or Feynman would ever accept fiddling with observations (temperatures) in the one time frame.

        You have a neat way of attacking Feynman which raises the question of why you dislike him so much.
        The answer, I guess, is that good science and scientific principles crucify the Climate Alarmism pseudoscience you promote.
        What “Nobel physicist Richard Feynman meant was “if (scientifically sound and appropriate) measurements do not support predictions of a hypothesis, the hypothesis is simply wrong”.
        I guess he assumed, incorrectly, that scientists would take that statement scientifically, not needing caveats for literalists.

        Your misrepresentation of the intent of the quote allows you to justify your practice of correcting data under the pretext that the observation is wrong.
        “we don’t know if it’s the data (some correction is necessary), and we don’t know if it’s the instrument, or the theory”
        Note the observations were not wrong and did not need to be corrected.
        They were just not the right observations for that theory. They did detect neutrinos; they just could not detect all the neutrinos. That factor of 3 was very important – I guess you do understand why?

    • Ari Okkonen

      Feynman’s comment about when the consequences of a hypothesis disagree with experiment:
      https://youtu.be/LIxvQMhttq4

      • Steven Mosher

        Funny, he didn’t practice science that way, now did he?
        And neither did Romer.

      • Steven Mosher

        So Ari, with the solar neutrinos, which turned out to be correct?

        Hypothesis said 3X
        Observation said X

        Which was correct?
        How many years did science live with this conflict? how many decades?

        It would be nice if a hypothesis could merely be rejected when the observations disagree, but that’s not the way it works… if you OBSERVE what scientists actually do!

      • When consilience of evidence contradicts theory, theory is always wrong. Wegener showed it when he presented paleontological, geological, and geographical evidence that the continents had been in a different position. Geologists still clung to their failed theory for 40 years. The generation that Max Planck had observed. What scientists do best is fight anybody who contradicts them.

  8. Steven Mosher

    good tutorial and history

    • The paper linked first below is critical to understanding the problem of using the 4X CO2 feedback parameter to estimate the historical time period forcings for CMIP5 models. It leads to a correlation of feedback parameter with forcing, which is highly unlikely given that forcing should be an input variable without any plausible link to the feedback parameter, which is intrinsic to the individual model. Forcing should also have the same value across all individual models when utilizing the energy budget model. If the observed forcing and change of ocean heat content (OHC), in conjunction with the 4X CO2 derived feedback parameter, are used to calculate an individual model temperature series, it becomes immediately apparent that the temperature trends for the models would run significantly higher than the observed temperature trends if forcing and OHC were not being used in the models to bring the historical period model trends more in line with those observed.

      It can also be shown that there is a good correlation between the model feedback parameter, as derived by the 4X CO2 CMIP5 experiment, and the ratios of ceemdan derived temperature trends for the RCP 8.5 model runs for the 1861-2005 and 2006-2100 time periods. That correlation is further evidence that the CMIP5 model forcing and OHC are being used during the historical period to better align the model and observed temperature trends.

      Finally, it can be shown that the ensemble-mean feedback parameter (the mean is used here because CMIP5 models use different forcings during the historical period), as derived from the energy budget model, is only between 5 and 10% lower (more sensitive to forcing) in the RCP 8.5 2006-2100 period than in the 1861-2005 period.

      The Lewis-Curry 2018 paper linked second below has a good critical analysis of papers that have attempted to put holes into the results coming out of the use of the energy budget model. It also gives evidence for the relatively small change in sensitivity of the models with increasing time periods and forcings.

      In the third link below there is the following reference to potential problems with models emulating the observed temperature series and trends:

      “Thus, the forced surface response can be delayed by the export of heat to the deep ocean whose temperature is not directly tied to that of the overlying atmosphere. This export is not well constrained by observations so a model’s excessive climate sensitivity can be masked during simulation of the historical period by overestimating heat uptake by the deep ocean. In summary, successful simulation of the observed climate is not a guarantee of reliable future projections because the agreement may result spuriously from cancellation of errors in the prescribed forcing, model sensitivity, and heat uptake by the deep ocean.”

      Link 1:

      Evaluating adjusted forcing and model spread for historical and future scenarios in the CMIP5 generation of climate models

      Piers M. Forster Timothy Andrews Peter Good Jonathan M. Gregory Lawrence S. Jackson Mark Zelinka

      https://agupubs.onlinelibrary.wiley.com/doi/10.1002/jgrd.50174

      Link 2:

      https://www.nicholaslewis.org/wp-content/uploads/2018/07/LewisCurry_The-impact-of-recent-forcing-and-ocean-heat-uptake-data-on-estimates-of-climate-sensitivity_JCli2018.pdf

      Link 3:

      https://pubs.giss.nasa.gov/docs/2014/2014_Miller_mi08910y_corrected.pdf

    • Steven,
      I wasted an hour of my life watching your recommendation. The Dessler 2018 approach, which occupies most of the last half of this presentation, advances an absurd, unscientific argument which will leave any reader intellectually poorer for having heard it. Dessler evidently does not understand the concept of decorrelation due to asynchronicity.

      In normal thermodynamics, when a single-body loses heat as a function of its temperature, then a cyclic heating input (forcing) will yield a result where the net flux cycle leads the forcing cycle which leads the temperature cycle. There is a phase difference between the net flux and the temperature cycle of pi/2 radians for the single-body problem and something less than pi/2 if the heated body is exchanging heat with a larger heat sink. In the real world, the mixed layer exchanges heat with the deeper ocean (which reduces the phase difference between net flux and temperature to something less than pi/2 radians) but the problem is rendered more complex by the fact that the net flux is made up of three components out of synch :- (a) a forced SW and LW component (leading temperature), (b) a responsive change in OLR due to Planck, WV and Lapse Rate, which are rapid enough to be considered synchronous with temperature and (c) a lagged SW and OLR response due to cloud formation and redistribution from both the annual seasonal cycle and from the ENSO events which contribute to the period. Any crossplot of monthly flux data and temperature which does not recognise this physical asynchronicity yields uninterpretable nonsense. Interannual data will still exhibit some decorrelation, but not to the same extent. Deseasonalising the monthly data does not solve this problem; it only adds further inestimable error. Equally, moving from surface to tropical 500-hpa temperature does not solve the problem; and makes it even more difficult to assign any meaning to the parameters which Dessler abstracts from his “observed data”. He extracts a bowl of MUSH, and then believes that he can linearly factor his bowl of observed mush into a meaningful CO2 forcing value using a bowl of GCM-abstracted MUSH. I am unable to ascribe any worth to his final pdf.

      • stevefitzpatrick

        Hi Paul,
        Ya well, Dessler doesn’t seem much interested in actually understanding what the response to GHG forcing will be, and it’s unrealistic to expect him to rationally consider the credibility of different lines of evidence. He’s more interested in forcing you (and everyone else) to live the way he thinks you should. Any argument which increases the level of alarm seems good enough for him.

        In any case, it’s good to see you are still following the raging dumpster-fire known as “climate science”.

      • After reading the Dessler 2018 paper I was surprised that Nic Lewis has not yet, to my knowledge, done a critical analysis. There are a number of very weak points in the paper. Why he would attempt to abstract any meaningful information using a short period of very noisy monthly data is beyond me – unless it provided a result that met a predetermined conclusion. Monthly Net R TOA data are very noisy.

        I am in the process of attempting to duplicate Dessler’s methods and results when using GMST and comparing that result with methods of my own using annual data over the same time period.

      • Steven Mosher

        Kri.

        Luckily I only wasted 30 seconds on your incorrect inane response

      • I suspect Nic Lewis has better and more important work to do than showing how weak Dessler’s work is, a fact that I suspect a lot of people already know. I delved a little into some of Dessler’s cloud work and it was quite weak, being totally dependent on GCMs, which we know badly miss patterns of change in precipitation, for example. They badly miss cloud fraction as a function of latitude. Then the final indication of how weak the work was is that Dessler replied that “the result must be correct because it agrees with simple theory”, i.e., in reality simple, vague verbal formulations that have no scientific basis.

      • You delved into it, huh? I delved in the 737 Max. It’s weak.

      • I am also curious about Dessler’s use of the ECMWF ERA-Interim reanalysis temperature data set. That will be ripe for a sensitivity test.

    • I have spoken too soon about a Nic Lewis critical analysis of the Dessler paper being discussed here. He presented his critique at Climate Etc. and has it posted at his web site here: https://www.nicholaslewis.org/why-dessler-et-al-s-critique-of-energy-budget-climate-sensitivity-estimation-is-mistaken/

      I would like to add a few results of my own analysis of that paper here. My first impression of the paper was being puzzled by why such a short period was used for determining the observed feedback parameter (FBP) and ECS. At first I thought perhaps it could be rationalized by assuming that it was limited by the time period for satellite measurements of the net R TOA flux using CERES data and that that measurement was more precise than ocean heat content (OHC) measurements that are available over longer periods of time (1959-present). What I found in the data did not support that assumption.

      I did an estimate of the FBP and ECS using AR5 forcing (For) from Lewis and Curry 2018 (LC 2018), NOAA OHC (which is very nearly the same as that used in LC 2018) and the Cowtan-Way GMST (CW). I used a variation of the Empirical Mode Decomposition method designated ceemdan to extract the trends from these three variables. The decomposition for OHC change was for the period 1959-2016, and for For and CW the periods were 1850-2016. I used ceemdan because it can effectively remove the multi-decadal and short quasi-cyclical components along with red and white noise. It takes longer time periods of data to find the longer cycles, but it allows using shorter time periods for estimating the FBP and ECS. Using this approach with these data I was able to reproduce the ECS values and confidence intervals derived in LC 2018, even for short time periods such as 2001-2016, which is nearly the same as the period used in the Dessler paper (2001-2017). I used annual data whereas Dessler used monthly.

      When comparing these results with the Dessler paper I had assumed that the forcing data are nearly the same (AR5), and thus the difference had to come from the temperature data, the OHC/Net R TOA data, or both. Since the ERA-Interim data used in Dessler are available only for 1979-2017, extracting a trend with ceemdan from longer multi-decadal variations is not possible. Comparing a ceemdan trend on the CW data for the entire period versus 1979-2017 showed this to be the case, and it accounted for some of the difference in results. This result also points to why Nic Lewis talks about the importance of selecting start and end dates that avoid interference from multi-decadal and volcanic effects.

      The Net R TOA trend values from CERES data for 2001-2016 gave a value near 0.63 W/m^2 while that for the NOAA OHC data gave a value close to 0.16 W/m^2. The CERES values would mean that nearly all the net forcing heat was going into the oceans and not affecting the GMST. The Net R TOA values from the CMIP5 RCP 4.5 models gives a mean Net R TOA of 0.20 W/m^2 with a 95% CI of +/- 0.10. The CERES trend is far out of the model range while the OHC data is well within it.

    • I also found that using the zonal temperature series for 30S-30N at 500 hpa, as Dessler did in his paper, gives a temperature trend (using ceemdan) for the 2001-2016 time period very close to that trend for the Cowtan-Way GMST series. That occurred even using the limited available time period from ERA-Interim Reanalysis of 1979-2018 to extract the trend with ceemdan.

      In other words using the 500 hpa tropical zonal temperature series with the NOAA OHC and AR5 Forcing would reproduce the ECS from LC 2018 where the CW GMST temperature series, AR5 forcing and an OHC series nearly the same as NOAA was used.

      Why Dessler would use his very convoluted and untested methods over a very short time period to obtain an observed higher ECS value from the available data is beyond me.
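
      For readers curious about the trend extraction used throughout these comments, here is a minimal sketch on a synthetic series, assuming the PyEMD package as one common CEEMDAN implementation (method names may differ in other libraries):

      ```python
      import numpy as np
      from PyEMD import CEEMDAN  # assumes the PyEMD package is installed

      # Sketch of the trend extraction described in these comments: decompose
      # a series with CEEMDAN and keep the slowly varying residue after
      # removing the quasi-cyclical and noise modes (IMFs). The series here is
      # synthetic, not any of the forcing, OHC or temperature data above.
      t = np.arange(160.0)
      rng = np.random.default_rng(2)
      series = 0.005 * t + 0.2 * np.sin(2 * np.pi * t / 60.0) \
               + rng.normal(0.0, 0.1, t.size)

      decomposer = CEEMDAN()
      decomposer.ceemdan(series)                      # run the decomposition
      imfs, residue = decomposer.get_imfs_and_residue()
      print(f"IMFs: {imfs.shape[0]}, trend rise: {residue[-1] - residue[0]:.2f}")
      ```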

    • Perhaps the easiest way to avoid CAGW is to adjust it away. Poof. Took care of that human tragedy. What goes up must come down.

      • Tony Banton

        Indeed – perhaps nature is “faking it” as well?
        https://www.climate.gov/news-features/featured-images/spring-arriving-earlier-national-parks

      • Indeed, the idea that we have been measuring 0-2000 m ocean heat content globally prior to the 21st century is fake. We couldn’t possibly have measured it.

        Then you convert it from Joules to degrees and it becomes irrelevant.

      • Then you convert it from Joules to degrees and it becomes irrelevant.

        How can a unit conversion make a measurement irrelevant? That’s nonsense. Regardless of units, either the measurement was relevant, or it’s not.

        Either the observed heating supports the claim that the Earth is warming (and roughly in line with predictions), or it doesn’t. Looking at it in different units doesn’t change that, and you can’t say “well, it’s a lot of heat in joules, but it’s a small amount of temperature change in Kelvin.”

        I mean, the only way you can say that kind of nonsense is if you’re saying “well, it feels like a small temperature change to me”, and ignore whether or not it aligns with theory. (It does).

      • Javier took the words out of my mouth. What is fake is the belief that we have any knowledge of the rate of warming to 2000m from 1850-1900 or 1900-1950 or 1950-2000. There is no reliable baseline. ARGO is a start but who knows how it compares to any appropriate period that takes into consideration unknown factors and unknown oscillations.

      • Either the observed heating supports the claim that the Earth is warming (and roughly in line with predictions), or it doesn’t. Looking at it in different units doesn’t change that

        There is no doubt the Earth’s surface has been warming for the past 300 years. I don’t think it is physically possible that the oceans haven’t warmed too.

        The question is that the oceans have warmed by only a minuscule amount: according to Bereiter et al.’s (2018) new noble gas technique, just 0.1 °C in the last 50 years.
        https://phys.org/news/2018-01-thermometer-global-ocean.html
        All the effects from warming come from the temperature increase, not the zettajoules.

        Using the appropriate units gives you the right perspective on whether it constitutes a problem or not. 0.2 °C/century is not a problem.

      • Tony Banton | April 2, 2019. “Is this fake as well?”
        Silly question. As labelled it is fake. Global Ocean Heat Content 0-2000 meters is not 25 x 10 to the 22 joules. It would be somewhere north of 288 x 10 if the former figure equates to a 10th of a degree – say 3000 times as much. Properly labelled it is fake.
        If it was properly labelled as the increase in GOHC over 80 years, it would imply we were capable of accurately measuring an energy change of roughly 1 in 3000 over 80 years, equating to 1 in 24000 per year, or that temperature changes of 1/800th of a degree per year were both reliably measurable and useful. Since water temperature changes seasonally by over a degree, the error in measurement has to be huge and unreliable.
        What they said.

    • Not NASA? So jest who stole the cookies from the cookie jar?

  9. Tony Banton

    “Then you convert it from Joules to degrees and it becomes irrelevant.”

    Converting solar energy stored in water to degrees C is just hiding an “inconvenient” problem away, my friend. À la Monckton style.
    It most certainly is not “irrelevant” (as you well know), in that it is a measure – the greatest one, since ~93% of the climate system’s accumulated solar energy resides there – of Earth’s warming.
    And *deny* all you like – it is increasing.
    Faked?
    To hand-wave it away as irrelevant tenths of a degree and complain “we can’t possibly know” is a favourite Naysayer tactic – the “we don’t know everything, so we know nothing” approach.

    To return to my point.
    It’s not faked.
    As Nature tells us.

    • Yeah, I can imagine these guys going “well, the map says I have to travel 3 miles, and that’s not that far, but when I convert it to feet, that’s almost 16,000 feet! Wow, that’s really far!”

      It’s a dumb, pseudo-scientific attitude.

    • The graph shows an increase of about 25E22 Joules from roughly 1980 to 2019. It takes about 5.8E24 Joules to evenly heat the ocean by 1 C, so the increase shown should be heating at a rate of 0.0011 degrees per year. I’d really like to know what equipment they’re using to measure something so variable and dispersed over such a huge area, so accurately! But if they’re right, in a thousand years we’ll be able to measure the increase in ocean temperature by looking really closely at a really good thermometer.

      I’ll put that on my to-do list for 3019 AD.
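
      A quick check of this arithmetic, using the commenter’s own figures:

      ```python
      # Quick check of the arithmetic above, using the commenter's own figures.
      heat_gain = 25e22    # J, increase read off the graph (~1980 to 2019)
      per_degree = 5.8e24  # J required to warm the whole ocean by 1 C
      years = 2019 - 1980
      rate = heat_gain / per_degree / years
      print(f"~{rate:.4f} C per year")  # ~0.0011 C/yr, as stated
      ```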

    • lol, 30 joules = 0.0158 celsius heat units, less than 2/100th of a degree celsius, oo-ooh, scarey … but clevah that they can measure to such fine detail. Ma Naychur must be impressed.

      • Tony Banton

        ” oo-ooh, scarey”

        Yes it is, my friend … that the presentation of a quantity of energy in a unit of temperature that is 1000x smaller than when that unit of energy is applied to air can elicit such a revealing response.

    • Converting solar energy stored in water to degrees C is just hiding an “inconvenient” problem away my friend.

      Please explain to me how exactly 0.1 °C in 50 years as per Bereiter et al., 2018, constitutes a problem.

      And don’t forget that we don’t measure energy in the ocean, but temperature. So first the temperature is converted to energy, to produce units high enough to look like an “inconvenient” problem to you.

      • Tony Banton

        “Please explain to me how exactly 0.1 °C in 50 years as per Bereiter et al., 2018, constitutes a problem.”

        Wearily…
        For the third time.
        I didn’t say it was a “problem” in itself.
        Though it is a “problem” that the Naysayer community says it isn’t one, with arguments such as “we can’t know”, as well as the “2/100ths”.
        So magically there’s no problem.

        I said “Is this fake as well?”
        Which certain naysayers revert to with the “with one bound” response, at which the sensible among us can only scratch our heads in amazement.
        And of course there can be no response, as that is as far down the rabbit-hole as can be gone.
        Why, indeed “Naysayers” “always win”.

        “And don’t forget that we don’t measure energy in the ocean, but temperature. So first the temperature is converted to energy to produce high enough units as to look an “inconvenient” problem to you.”

        Wearily again….
        “less than 2/100th of a degree celsius, oo-ooh, scarey”

        Mr Javier.
        As you well know, the above would translate to 20 °C IF the same Joules were to (instantaneously) heat the atmosphere.
        93% of the excess energy accumulated by the climate system.
        That is the “inconvenient” problem that the 2/100ths covers conveniently.
        And…. returning to my point that you have diverted from (a well practised “Naysayer” tactic) – it is not faked.

      • the above would translate to 20C IF the same Joules were to (instantaneously) heat the atmosphere.

        I see you rely on physical impossibilities when the evidence is not in your favor. Please explain how something that is colder could instantly heat something that is warmer.

        The words excess and accumulation are interpretations from an artificial reference frame. The oceans were warmer during the Holocene Climatic Optimum and colder during the Last Glacial Maximum. There is no excess or accumulation, just change.

        And returning to your point, nobody is arguing that the planet’s surface hasn’t warmed over the past centuries. From the point of view of most sane persons it is a good thing that it did. Conditions during the LIA were a lot more challenging.

  10. Reblogged this on Climate Collections.

  11. Judith, I have a post in limbo because of 3 harmless links.
    Ken Fritsch

  12. I can’t help but be amazed at the huge efforts to predict the various scenarios, when we are constantly adjusting the computer models, discovering new mechanisms, and have only very short observational periods to verify the models. With these short windows, it is important to remember that what looks like a flat line may be part of a flat-topped mountain, and what looks like an exponential function may just be part of the upward slope of a cyclical function. Likewise, transient, chaotic events cannot be predicted or modeled reliably, although all kinds of scenarios can be played.

    As Richard Heyser from JPL once said to me: “Nature does not solve equations”. She just does what she does, and doesn’t give a hoot about what our computers say. Good science grows out of curiosity and humility.

    Pragmatically, whether the rise in temperature is 2 or 4 or 6 degrees, we will have sufficient time to mitigate the effects at reasonable cost, compared to the draconian costs of focusing only on CO2 emissions. Hence, the pragmatic approach makes sense, especially in light of the lack of genuine willingness to get anywhere near the dramatic recommended reductions.

    Unfortunately, I tend to agree that science has been taken hostage by politics, and science, despite its history of trying to be objective and humble, has effectively learned the vocabulary of politics and religion, obfuscated by a lot of mathematics and statistics. In the EU, science is fed by political goals, including strong emphasis on energy and greenhouse emissions, thus providing scientists who support that agenda, with good employment. This biases research. Thus, science has become more and more goal-oriented, i.e. run on the premises of businesses, which is contrary to free, open-ended research.

    It would behove the politicians to commit a small percentage of research funds to support serious research aimed at poking holes in current climate research. This would be a potentially highly valuable, leveraged investment, considering the huge costs that meaningful CO2 emission reductions would incur, and it would help keep “religion” out of science.

    As a layman, (albeit with a math and physics background), I find it difficult to penetrate the body of literature, pro and con. However, what I can discern is that the language used, by almost all parties, is filled with strong adjectives and hyperbole, and has a strong populist, at times religious tone. That scares me, more than the worst case sea rise scenarios.

    Religion deals with what we don’t know and don’t understand. Perhaps that is why religion’s apocalyptic vocabulary has been adopted in the climate debates.

    • Bravo! Equations are certain. Their inputs are not. And they have no interest in other equations.

    • As Richard Heyser from JPL once said to me: “Nature does not solve equations”. She just does what she does, and doesn’t give a hoot about what our computers say. Good science grows out of curiosity and humility.

      Nature is the perfect analog computer. Nature solves all the natural equations every day, every second. Nature solves the equations and stores the records of what happened in various proxy records. Nature does not give a hoot about what our flawed computer models show or say, but nature does care about the equations, what happens is the results of those equations.

  13. Equilibrium Climate Sensitivity is a pointless concept. The Earth’s climate will never achieve equilibrium because the Sun’s output is continuously changing as is the configuration of the Solar System.

    Mathematical analysis of CO2 data and satellite lower troposphere temperature data (UAH) has shown that CO2 and temperature are independent variables. It further shows that temperature either directly or indirectly determines the rate of generation of atmospheric CO2. For detail see:
    https://www.climateauditor.com

    CO2 data from across the globe shows a regular seasonal variation. CO2 decreases in Spring as temperature increases due to photosynthesis in annual vegetation. Then CO2 increases in Autumn-Winter as temperature falls. This is the complete reverse of the UN IPCC claim that CO2 causes global warming and shows that climate change affects CO2 concentration.

    Further, the rate of generation of CO2 increases with El Niño events and decreases with La Niña events. Again, climate change determining the concentration of CO2.

    The Antarctic Dome C ice core data shows the temperature and CO2 concentration changing in unison during warming and cooling periods over the past 800,000 years, long before homo sapiens evolved. This contradicts the UN IPCC claim that humans have been responsible for changes in CO2 concentration.

    The fact is that CO2 has not caused climate change. It is climate change that has been the cause of the changes to the atmospheric CO2 concentration. The UN has been promulgating a fraud to achieve Maurice Strong’s ambition of destroying capitalism and creating his ‘paradise’ of One World Government with the UN in charge, of course.

    • Lauri Heimonen

      ”The fact is that CO2 has not caused climate change. It is climate change that has been the cause of the changes to the atmospheric CO2 concentration.”

      I agree with the statement above. For instance in my comment https://judithcurry.com/2018/12/28/week-in-review-science-edition-92/#comment-887274 I have earlier written:

      ”Why does even the latest IPCC report give an unreliable view of the cause of the recent climate warming? My view is as follows:
      – First, the representatives of the IPCC – possibly by circular argumentation, without any evidence in reality – seem to assume that the recent increase of the CO2 content of the atmosphere has been caused entirely by human CO2 emissions. Yet, according to natural laws, the share of human CO2 emissions in the recent total increase of atmospheric CO2 is only about 5% at most. We have to understand that all kinds of CO2 emissions to the atmosphere are alike in quality. The CO2 content of the atmosphere is determined by the dynamic balance between all CO2 emissions from sources to the atmosphere and all CO2 absorptions from the atmosphere to sinks. The influence of any CO2 emission on the atmospheric CO2 content therefore depends on its share of total emissions.
      – Secondly, even the role of the sun in the climate models adopted by the IPCC seems to be based on assumption – maybe on circular argumentation, too – without any evidence in reality. The sun is the key factor controlling climate warming, and there are plenty of other factors, dominated by the sun, controlling changes in climate warming. Since the Cretaceous period, 100 million years ago, the activity of the sun has been increasing little by little. The climate temperature over that time has nevertheless decreased so much that the tropical temperature is nowadays about 2 C-degrees, and the temperature of the polar regions 20–40 C-degrees, lower than 100 million years ago; this was caused by continental movements, which changed the ocean currents so that the climate cooled, especially at the higher latitudes. During the last 800,000 years, associated with the glacials and interglacials, the sun has orbitally controlled changes of climate temperature according to the distance of the globe from the sun in its mildly elliptical orbit. During the current interglacial, the Holocene, the changes in climate temperature trends can be explained by changes in solar activity, which has especially controlled the natural El Niño and La Niña phenomena.
      – The third inconsistency with observation is that trends of the CO2 content of the atmosphere have always been found to follow trends of climate temperature, and not vice versa. This can be seen in geological and even the latest observations: during the last 100 million years, during the last 800,000 years of glacials and interglacials, and even during the present interglacial, the Holocene, in which the trends of climate temperature have been dominated by the El Niño and La Niña phenomena.
      – Finally, one has to understand that nobody has managed to forecast or hindcast trends of climate temperature by using the climate models adopted by the IPCC.
      In summary, one can find no evidence in reality according to which human CO2 emissions could make any observable, threatening warming of the climate possible. Read more in the links https://judithcurry.com/2017/05/02/nyes-quadrant/#comment-848558 and https://judithcurry.com/2011/08/04/carbon-cycle-questions/#comment-198992.”

      The term ‘greenhouse gas’ is a symbolic expression for any gas in the atmosphere that absorbs infrared radiation coming from the earth’s surface. The name means symbolically the same as what the glass walls and windows of a greenhouse do: they prevent the greenhouse from cooling. As for CO2 in the atmosphere, there is no evidence in reality that a man-made increase of CO2 could cause any threatening warming of the climate. Instead, increased CO2 in the air improves vegetation, both in greenhouses and in nature.
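      The “dynamic balance” wording above can be made concrete with a one-box budget, dC/dt = all sources minus uptake. A minimal sketch; the rate constant and source strengths below are illustrative assumptions, not fitted carbon-cycle values:

        # One-box CO2 budget: dC/dt = E_natural + E_human - k * C
        # All numbers are illustrative assumptions, not fitted values.
        k = 0.5                        # uptake rate constant, 1/yr (assumed)
        E_nat, E_human = 140.0, 5.0    # source strengths, ppm/yr (assumed)
        C = E_nat / k                  # start at the no-human equilibrium (280 ppm)
        dt = 0.1
        for _ in range(int(100 / dt)): # integrate forward 100 years
            C += dt * (E_nat + E_human - k * C)
        print(round(C, 1))             # settles at (E_nat + E_human) / k = 290.0 ppm

      In such a budget the equilibrium concentration shifts in proportion to each source’s share of the total, which is one way the “share of total emissions” argument above could be examined quantitatively.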

  14. “Are values of equilibrium climate sensitivity > 4.5 C plausible?”

    Asking this question raises the possibility of some unknown constraint setting limits on thermal behavior. The 2nd Law seems a rather obvious choice. To illustrate, imagine a 1-dimensional problem bounded by thermal reservoirs at 1000 K and 200 K, between which an energy flux of 1000 W flows. At some intermediate position a 500 K isotherm exists. How much energy is dissipated in the spans between 1000–500 K and 500–200 K? Were we to substitute amps for watts and volts for degrees, we would dismiss the calculation as trivial. And although you won’t find the answers in your primers (500 W, 300 W), the corresponding thermal problem is an equally trivial consequence of the 2nd Law (Carnot equation).

    Given the flux and boundary temperatures, the dissipative sensitivity (W/K) depends only on a function d ln(J)/d ln(T1-T2). For a sensitivity = 3.7/4.5, a negative slope (-0.5) is needed, implying that raising the temperature differential reduces the flux while dissipation remains positive. For zero slope (equivalent to unlimited ‘positive’ feedbacks), the 2nd Law constraint becomes 1.4K/3.7W.
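    One way to reproduce the (500 W, 300 W) answer above is to treat the flux as cascading through ideal Carnot stages, counting the work potential destroyed in each span as dissipation. A minimal sketch under that reading (a reconstruction, not necessarily the exact construction intended):

      # Carnot-cascade reading of the 1000 K -> 500 K -> 200 K example
      def span_dissipation(q_in, t_hot, t_cold):
          # An ideal Carnot stage passes on q_in * t_cold / t_hot; the balance
          # is work potential, counted here as dissipation.
          q_out = q_in * t_cold / t_hot
          return q_in - q_out, q_out

      d1, q_mid = span_dissipation(1000.0, 1000.0, 500.0)  # 1000 K -> 500 K span
      d2, _     = span_dissipation(q_mid,  500.0,  200.0)  #  500 K -> 200 K span
      print(d1, d2)  # 500.0 300.0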

  15. Dr. Curry, any chance of getting my post with 3 links out of moderation?
    kenfritsch

  16. Climate change is a given, not a problem. CO2 mitigation is a problem, not a solution. Atmospheric CO2 warms us, cools us, and feeds us.
    Global warming is real, and has been since the end of the Little Ice Age (a misnomer – we’re still in an Ice Age, q.v.)
    Global warming is statistically on pause for the last 20 years. When it ends, the world will get either warmer or colder. Warmer is preferable.
    The earth, currently 15°C, has spent half of the previous 600 million years around 22°C, looking like there’s a tight lid at that number. That must entail strong negative feedback and no “tipping point”.
    The end-Ordovician (Hirnantian) Ice Age, 440 million years ago, began when CO2 was around 4,000 ppm and lasted only a couple of million years. At the end of that time, with glaciers reaching almost to the equator, with 85% of marine life extinct, when the frigid oceans had inhaled and reduced atmospheric CO2 to around 3,000 ppm, the globe suddenly began to warm up, getting back to the previous 22°C with astonishing speed. We haven’t the faintest idea why it cooled so fast and so far, or why it warmed so fast and so far. After all, the young sun was still a few percent dimmer than today. The day had 22 hours, and the year was 400 days…
    We don’t know why the P-T extinction warming (to at least 28°C) occurred so fast, nor why it was so brief. What brought the temperature down to 22°C again? Why didn’t it “run away”?
    Why is the lowest temperature around 12°C? When “snowball earth” occurs, with glaciers almost down to the equator, why doesn’t the albedo force more cooling, more ice, more albedo, and more cooling down to the Stefan-Boltzmann equilibrium of 255 K (0°F)?
    In other words, why has the earth’s temperature been so stable, ranging from 285 K to 295 K since the end of the Hadean Age 3 billion years ago? That’s a median of 290 K (62°F), ±2%. Its climate stability needs an answer, not climate change.
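    For reference, the 255 K quoted above is the standard Stefan-Boltzmann effective temperature for a planet with today’s solar constant and albedo; a minimal check (both input values are the usual textbook assumptions):

      # Effective radiating temperature: T = (S * (1 - a) / (4 * sigma)) ** 0.25
      sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
      S = 1361.0        # solar constant, W m^-2 (assumed present-day value)
      a = 0.30          # planetary albedo (assumed)
      T_eff = (S * (1.0 - a) / (4.0 * sigma)) ** 0.25
      print(round(T_eff, 1))  # ~254.6 K, i.e. about 255 K (roughly 0 °F)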

    • Here’s an interesting call to arms – for doctors, yet! – from 2000. In the BMJ, of all places. And a response…
      https://www.bmj.com/rapid-response/2011/10/28/it-isnt-easy-being-green

    • jimmww Wrote:

      We don’t know why the P-T extinction warming (to at least 28°C) occurred so fast, nor why it was so brief. What brought the temperature down to 22°C again? Why didn’t it “run away”?
      Why is the lowest temperature around 12°C? When “snowball earth” occurs, with glaciers almost down to the equator, why doesn’t the albedo force more cooling, more ice, more albedo, and more cooling down to the Stefan-Boltzmann equilibrium of 255 K (0°F)?
      In other words, why has the earth’s temperature been so stable, ranging from 285 K to 295 K since the end of the Hadean Age 3 billion years ago? That’s a median of 290 K (62°F), ±2%. Its climate stability needs an answer, not climate change.

      When anything tries to warm the earth too much, it removes all the sequestered ice, and the sun cannot overheat a water-covered earth that cools by evaporation, precipitation, and IR out that goes as the fourth power of temperature. Any increase in input promotes IR out as the fourth power.

      On the cold side, the earth gets colder when ice volume and extent are maximized. When the area of thawed ocean exposed to the atmosphere gets too small, there is not enough ocean evaporation and snowfall to maintain the ice volume and extent. The extent is maintained until the volume is depleted, and then warming occurs. This bounds the lower limit of temperature and sea level.

      Climate stability has answers that are explained best by the ice core data, and those answers can be applied to climate history before the ice cores and to the climate future, beyond flawed climate model output.

      If you have unanswered questions, look at actual data for answers. Models are based on flawed understanding and are no help.

      • All true, Papa, but the questions as stated remain unanswered. What is “too much” and why?
        IR out to the fourth power pertains to land only. Over 70% of the earth, downwelling IR warms the top several microns of the seas, which evaporate, rise, and deposit their heat higher in the troposphere. Stratospheric CO2 then radiates IR out to space – a natural refrigeration cycle. But that doesn’t explain the almost consistent ceiling to temperature at around 22 C, nor the peak at the P-T to 28 C, nor the rapid descent from there.

        Speculation is good and valuable, but not an explanation.

  17. One thing we have to be careful of is warming outside of ECS — some of the worst-case warming scenarios may be plausible enough to consider as edge cases for planning, simply because we can’t rule out their happening even in the absence of AGW.

  18. “Climate sensitivity is then defined mathematically as the derivative of an appropriate functional or other function of the system’s state with respect to the bifurcation parameter.” http://research.atmos.ucla.edu/tcd/PREPRINTS/Ghil-A_Met_Soc_refs-rev'd_vf-black_only.pdf

    To redefine a foundation. Sensitivity to greenhouse gases in this theory is a fundamental limitation to scientific expectations. Models are chaotic. This BOM ENSO outlook is a stochastically perturbed initial condition GCM with an ENSO module. But any of them can produce a spaghetti map of diverging solution trajectories from stochastically forced differences in initial conditions.

    http://www.bom.gov.au/climate/enso/wrap-up/archive/20190402.sstOutlooks_nino34.png

    After that probabilistic forecasting is a breeze. Although it seems that what is being forecast is chaos.

    Opportunistic ensembles like CMIP are vastly different. Here a single trajectory is chosen arbitrarily from thousands and compared with others from different models, giving a mean and variance of modeler expectations. What possible practical purpose could that have?

    Spatio-temporal chaos in climate is another matter. The best scenario there is surprises on both the warm and cool ends of the global spectrum.
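    The diverging-trajectories point is easy to illustrate with any chaotic toy model; a minimal sketch using the classic Lorenz-63 system (the parameters are the textbook ones, and stand in for a GCM only by analogy):

      # Trajectories of the Lorenz-63 system diverge from tiny initial perturbations.
      import numpy as np
      from scipy.integrate import solve_ivp

      def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
          x, y, z = s
          return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

      rng = np.random.default_rng(0)
      for _ in range(10):  # a small "ensemble" of perturbed runs
          s0 = np.array([1.0, 1.0, 1.0]) + 1e-6 * rng.standard_normal(3)
          sol = solve_ivp(lorenz, (0.0, 25.0), s0, rtol=1e-9, atol=1e-9)
          print(sol.y[0, -1])  # final x values fan out across the attractor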

    • Thanks for the link (though it took me back to stressful days and the ‘dark’ art of dynamic systems).
      In 1936 J.F. Dodwell perceived that the ‘Earth dynamic’ formula consisted of only a secular expression and missed an evident ‘step’ or ‘abrupt’ input, which he had deduced from ancient measurements of obliquity. Through archaeology I stumbled on evidence of such, both negative and positive.

      From this site (and thanks to Javier), the step changes correlated with Eddy cycle peaks (for −ve) and troughs (for +ve). The last step change, in 2345 BCE (with several proxies corroborating Dodwell, and a cycle trough), was positive. The next due is negative, as per the historical pattern. That – a single case known from 3550 BCE – would/should correlate with an Eddy peak.

      The name of the game is not developing a more accurate model but developing better resilience to adverse “changes”. World politics is a eunuch in the face of Mother Nature.

      • Dodwell, as I understand it, thought a comet may have collided with the planet, setting up wobbles in Earth’s obliquity. A wobble caused the little ice age. But internal variability is broader than seen in the modern era. Does vigorous natural variability imply higher sensitivity to greenhouse gases?

      • Dodwell, following earlier others, had sensed from the ancient obliquity measurements that the theory from Stockwell and Newcomb did not tell the whole story. That much was evident, or at least hinted at mathematically. The what and how were speculation. The astro/scientific community ignored it.

        It was a surprise, and at first disbelief, to find the evidence in ancient but very precise calendars. However, all the tell-tales are in many proxies, as I have been finding, including matching of the dates with the Eddy cycle from the ‘Nature Unbound – IX’ thread.

        From there on I see only ‘blind’ questions, except that the evident link to the Eddy cycle means no comet impact. ‘Dynamical systems theory’ can be applied at a different level, as geology clearly illustrates. Then extreme climate change is a collateral effect.

  19. David L. Hagen (HagenDL)

    CurryJA Re: “ECS < 0: impossible"
    Under what assumptions? Do we need to consider associated factors?
    (Yes increasing CO2 does contribute "greenhouse" warming – as does H2O. But clouds etc. are highly uncertain.)
    What about other major natural trends occurring at the same time? e.g.,
    Does that include the 150 years of global cooling following the Chicxulub impact at the Cretaceous–Paleogene boundary? See:
    Vellekoop, J., Sluijs, A., Smit, J., Schouten, S., Weijers, J.W., Damsté, J.S.S. and Brinkhuis, H., 2014. Rapid short-term cooling following the Chicxulub impact at the Cretaceous–Paleogene boundary. Proceedings of the National Academy of Sciences, 111(21), pp.7537-7541.
    https://www.pnas.org/content/pnas/111/21/7537.full.pdf
    What about during the 8C cooling from the last interglacial to the glacial period?
    https://www.ncdc.noaa.gov/abrupt-climate-change/Glacial-Interglacial%20Cycles

    • David L. Hagen, “CurryJA Re: “ECS < 0: impossible””
      1. ECS > 0.
      Being as it is the positive known response to a doubling of CO2 alone.
      2. Positive and negative feedbacks are a very vexed issue.
      This implies there are other factors that should have been considered beyond just CO2.
      3. A negative feedback cannot reduce the effect to 0 or below.
      If it did, then the science that says there is a positive response is wrong in the first place, i.e. CO2 would itself have had no effect or a negative effect.
      Another way to say it: the perturbation of the original effect cannot create enough energy to cancel itself out fully. A negative feedback should always be slightly smaller than the original positive response.
      4. One is still skeptical, but this answer, in a slightly different form from Lucia years ago, is very hard to counter and deserves the respect that Judith has given it.

      • David L. Hagen (HagenDL)

        angech Re: “as it is the positive known response to a doubling of CO2 alone”. Why change the definition based on the presumed positive response? The actual definition given is: “Equilibrium climate sensitivity (ECS) is defined as the amount of temperature change in response to a doubling of atmospheric CO2 concentrations, after the climate system has reached equilibrium.”
        The definition includes the response to ALL possible climate conditions, not just the current climate.
        Why cannot a negative feedback reduce the effect to zero or below? Yes, the direct effect is absorption and re-radiation, but nothing is said about the context. If the response is dependent on other conditions, those need to be stated, e.g. no comet strike, or x centuries after a comet strike, etc.

      • David, thanks
        “ The actual definition given is: “Equilibrium climate sensitivity (ECS) is defined as the amount of temperature change in response to a doubling of atmospheric CO2 concentrations, after the climate system has reached equilibrium.”
        The definition includes the response to ALL possible climate conditions, not just the current climate.”
        I do not see it as quite that simple.
        It is the response of one designated climate system with no new inputs, just a redistribution of the heat until equilibrium is reached. No new volcanoes, clouds, or aerosols; just and only doubled CO2.

  20. Pingback: What’s the worst case? Climate sensitivity | Watts Up With That?

  21. Pingback: What’s the worst case? Climate sensitivity – Enjeux énergies et environnement

  22. Pingback: What’s the worst case? Climate sensitivity – Brojo

  23. Rising CO2 creates conditions that are better – not worse – for growing coffee:

    http://www.co2science.org/articles/V22/apr/a4.php

    Good for all those late night climate conference sessions. (But also pointing to those conferences themselves being futile.)

  24. Perhaps the worst factor in the study of climate science is that the different cliques promote their own theory and do not engage in debate and discussion with anyone who disagrees. The extreme alarmists do this. The lukewarmers do this. Most skeptics do this. No one has tried to bring the different cliques together to discuss and debate the actual facts and data. Each of the cliques has some piece right and does not study any of the other factors. The right answer is a mix of all the correct pieces.

    I do understand why no one has provided an energy balance theory that is correct. The alarmists promote the theory that energy is hidden in the deep oceans; that is actually correct. The oceans are huge positive energy capacitors that store more and more energy in cold times, until it gets warm enough to thaw the polar oceans. Then the evaporation and snowfall in polar and other cold regions store negative energy in sequestered ice, which builds until that capacitor starts discharging ice that thaws and reflects and causes the earth to be cold, while the positive capacitor starts charging. This applies only to the polar climate cycles and is different from the tropical climate control system. The total climate is a mix of the different regions, with mixing by atmosphere and ocean circulations. There is no one balance for the earth; the balance is the proper consideration of the mixing of the regions.
    I know someone who has written a book about energy balance, and I know some who have reviewed his writings. He has some basic facts wrong and does not yet understand. If we could pull together some people who really want to learn about natural climate change causes, we might make some useful progress. There are many who understand parts of this puzzle, but they work against each other and not with each other. Correct understanding is the proper combination of the many correct parts and a proper understanding of other people’s understanding.

    • People say that heat is real, but cold is just an absence of heat.
      That is like saying electricity is positive and negative electricity is just an absence of positive electricity. Capacitors and batteries show us we can have positive and negative energy. In climate, much is determined by cooling that is just the IR out from warmer conditions. In the oceans and sequestered ice, there are huge capacitors that are not considered. The oceans are the positive energy capacitors that store energy until polar ice thaws and turns on evaporation and snowfall and sequestering of ice in cold places. This is charging the negative capacitors with sequestered ice that will cool earth by thawing and reflecting. This concept is not recognized or studied. Climate science will never be correct until the role of sequestered ice and the cycles of warm oceans and sequestered ice are understood.

    • When oceans are cold and polar oceans are covered with sea ice, the energy in the oceans cannot escape from under the ice, so the oceans gain energy while the sequestered ice depletes. The positive energy capacitor is charging and the negative energy capacitor is discharging.
      When oceans are warm and polar oceans are thawed, the energy in the oceans promotes evaporation and ocean cooling, discharging the positive energy capacitor, and that energy is used to produce ice that is sequestered, charging the negative energy capacitor.

  25. Pingback: Javier on sunspot data – Climate Collections

  26. Phil Salmon

    Some candidates for worst case:

    Super volcano
    San Andreas force 10 earthquake
    Meteor impact
    Tsunami at major coastal city
    Glacial inception
    End of transitional glacial-interglacial flicker and start of 20-70 million year deep global glaciation (snowball earth)

    Nothing case:
    Usual chaotic climate fluctuation

    Near to best case:
    Mild warming from CO2 pops us out of a glacial regime, while plant growth is fertilised more and more by increasing atmospheric CO2.

    • Richard Arrett

      I would add Supernova. That would be very bad.

    • Phil Salmon,

      I agree with all that. But I’d emphasise that the worst case climate change scenario is global cooling. We have no way of predicting when an abrupt cooling event may occur. However, we do know, from relatively recent historical evidence (e.g. the past 300 years and the past 2000 years), that even moderate cooling, such as the Little Ice Age, is very damaging.

      On the other hand, the evidence seems to indicate that any global warming we might get this century would very likely be beneficial, not harmful – for both ecosystems and the global economy.

  27. Jacques-Marie Moranne

    The priority should be to verify the feedback, because if temperatures do not rise by more than 1°C, there is no climate problem.
    And this should not be too difficult, since Relative Humidity (I confirm “relative”) decreases from 75% to less than 10% between the earth’s surface and the tropopause.

  28. Pingback: Weekly Climate and Energy News Roundup #355 | Watts Up With That?

  29. Your blog has great value, Judith. It occurs to me that when examining a large non-linear and chaotic system it is useful to look at the system as a whole and test theories based on first principles. My high-level assessment of the CO2-from-fossil-fuels theory has very few points, none of them subjective (a quick numerical check follows the list):
    1. The earth’s atmosphere has a mass of 5.148E18 kg, so 1 ppm (by mass) is 5.148 Gt.
    2. Known oil reserves on earth are 1.7 trillion barrels
    3. Combustion of 1 barrel of oil produces circa 433 Kg of CO2
    4. If all known oil were combusted tomorrow it would add only circa 147 ppm to atmospheric CO2
    5. World known natural gas reserves are approximately 7,123 trillion cubic feet.
    6. Combustion of 1,000 cubic feet of natural gas produces approximately 53 Kg CO2.
    7. If all known reserves of natural gas were combusted tomorrow it would add approximately 27 ppm to atmospheric CO2.
    8. Fossil fuels are not capable of adding more than 174 ppm (mass) to atmospheric CO2.
    9. 174 ppm (mass) corresponds to about 115 ppm (volume), since a CO2 molecule (44 g/mol) is heavier than the average molecule of air (29 g/mol).
    10. The incremental warming effect of CO2 declines as the concentration rises (the forcing grows only logarithmically with concentration).
    11. It is inconceivable that oil & gas can create the level of CO2 predicted by Michael Mann and his alarmist theory.
    12. Coal is another fossil fuel and known reserves are 1.1 trillion tons. Each ton of coal combusted will produce 2.86 tons of CO2 which has the potential to add 611 ppm to atmospheric CO2.
    13. Even with all coal combusted, fossil fuels contribution to atmospheric CO2 from all known reserves would total 785 ppm.
    14. It seems impossible to me that fossil fuels are even capable of the increase in CO2 that would present a danger to Earth even if the most radical views of the alarmists were accepted as possible or even likely.
    15. Water vapour has a far greater concentration in the atmosphere than CO2 and is more thermodynamically active (i.e. a greater so-called “greenhouse effect”).
    16. It is more likely than not that fluctuations in short term climate (often referred to as weather) are related to water vapour changes to the extent they are related to any greenhouse gas effect, leaving aside other natural factors such as solar activity.
    17. My conclusion is that Michael Mann’s vilification of fossil fuels is unwarranted and harmful, in that depriving billions of people of the benefits of fossil fuels has the potential to do more harm than anything that might occur based on rising CO2 concentrations.
    18. The 785 ppm increase in CO2 from combustion of all known fossil fuels would take place over 50 years at the earliest and would have many natural offsets.
    19. The climate alarm does not seem to have a foundation in science to the extent the alarm is grounded on CO2 as a cause.
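    A quick numerical check of the figures above, using the stated reserves and emission factors; the only addition is the molar-mass ratio (air ~29 g/mol vs CO2 ~44 g/mol) needed to convert ppm by mass into ppm by volume:

      # Check of the fossil-fuel CO2 arithmetic, with figures as quoted above
      KG_PER_PPM_MASS = 5.148e12             # 5.148 Gt of atmosphere per ppm (mass)

      oil_kg  = 1.7e12 * 433.0               # barrels * kg CO2 per barrel
      gas_kg  = 7123e12 / 1000.0 * 53.0      # cubic feet -> Mcf, * kg CO2 per Mcf
      coal_kg = 1.1e12 * 1000.0 * 2.86       # tons coal -> kg, * CO2/coal mass ratio

      for name, kg in [("oil", oil_kg), ("gas", gas_kg), ("coal", coal_kg)]:
          ppm_mass = kg / KG_PER_PPM_MASS
          ppm_vol = ppm_mass * 29.0 / 44.0   # CO2 is heavier than average air
          print(name, round(ppm_mass), round(ppm_vol))
      # oil ~143, gas ~73, coal ~611 ppm by mass; note the gas figure comes out
      # well above the 27 ppm in point 7, and ppm by volume is ~2/3 of ppm by mass.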

    Thanks for letting me comment.

    Michael Blair

  30. The Earth cannot tell one W/m^2 of solar forcing from any other, so the idea that the next W/m^2 of solar (or equivalent) forcing can increase the temperature by 0.8 C, increasing surface emissions by 4.4 W/m^2, is obviously wrong, since each of the 240 W/m^2 of actual solar forcing results in only 1.62 W/m^2 of surface emissions.

    An ideal BB would emit 1 W/m^2 per W/m^2 of forcing, independent of its temperature. The Earth’s surface emits 620 mW/m^2 per W/m^2 more than an ideal BB, where these excess surface emissions are replaced by surface emissions absorbed by the atmosphere and returned to the surface at a later time. A testable result is that the 1.62 W/m^2 emitted by the Earth’s surface per W/m^2 of forcing is also independent of its temperature.

    Given that half of what the atmosphere absorbs from the surface must be emitted into space to offset the incident solar energy (another testable result), the absolute upper limit sensitivity as expressed in the power domain would be 2 W/m^2 of incremental surface emissions per W/m^2 of forcing, which is even less than the 2.2 W/m^2 of surface emissions per W/m^2 of forcing corresponding to the IPCC’s lower bound ECS of 0.4C per W/m^2.

    1.62 W/m^2 of surface emissions per W/m^2 of forcing corresponds to about 1.1 C from doubling CO2 and is exactly in the middle of the range most supported by the data and theory. This also corresponds to the sensitivity of a gray body whose equivalent emissivity is 1/1.62 (0.62), whose temperature is that of the surface, and which is given exactly by 1/(4σεT³).
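    The closing figure is easy to check numerically: for a gray body at the surface temperature with an effective emissivity of 1/1.62, dT/dF = 1/(4σεT³) comes out near 0.3 K per W/m², or about 1.1 C for the ~3.7 W/m² usually attributed to doubled CO2. A minimal sketch (the 288 K surface temperature is the usual assumed global mean):

      # Gray-body sensitivity: dT/dF = 1 / (4 * eps * sigma * T**3)
      sigma = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
      eps = 1.0 / 1.62   # effective emissivity ~0.62, as stated above
      T = 288.0          # assumed mean surface temperature, K
      sens = 1.0 / (4.0 * eps * sigma * T ** 3)
      print(round(sens, 2), round(sens * 3.7, 2))  # ~0.3 K per W/m^2, ~1.11 C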

  31. ECS for warming is certainly low (although the exact temperature depends on which version of HITRAN you’re using, with later versions showing lower values), but there are clearly negative feedbacks.

    I will watch with interest over the next few decades as academics are dragged kicking and screaming by the reality of the climate toward that realisation.