Why Dessler et al.’s critique of energy-budget climate sensitivity estimation is mistaken

By Nic Lewis

Plain language summary

  • A new paper by Andrew Dessler et al. claims, based on 100 simulations of the historical period (1850 to date) by the MPI‑ESM1.1 climate model, that estimates of climate sensitivity using the energy-budget method can vary widely due to internal climate system variability.
  • I calculated what effect the uncertainty implied by the internal variability affecting the MPI‑ESM1.1 simulations had on the distribution of the primary climate sensitivity estimate in the recent Lewis & Curry energy-budget paper.
  • The result was a marginal narrowing of the Lewis & Curry sensitivity estimate. This is because the allowance for internal variability by Lewis & Curry is larger than internal variability in MPI‑ESM1.1.
  • Since historical period energy-budget sensitivity estimates are much more imprecise for other reasons, internal variability contributes little to their total uncertainty; it is an unimportant factor.
  • Nothing in the new Dessler et al. paper indicates that the Lewis & Curry energy-budget climate sensitivity estimates are likely to be biased low.

Introduction
Climate scientist Andrew Dessler has two interlinked short papers on climate sensitivity estimation out, one of them an unpublished, non-peer-reviewed preprint. I focus here on the published study (hereafter Dessler18).[1]

The abstract reads:

Our climate is constrained by the balance between solar energy absorbed by the Earth and terrestrial energy radiated to space. This energy balance has been widely used to infer equilibrium climate sensitivity (ECS) from observations of 20th-century warming. Such estimates yield lower values than other methods and these have been influential in pushing down the consensus ECS range in recent assessments. Here we test the method using a 100-member ensemble of the MPI-ESM1.1 climate model simulations of the period 1850-2005 with known forcing. We calculate ECS in each ensemble member using energy balance, yielding values ranging from 2.1 to 3.9 K.[2] The spread in the ensemble is related to the central hypothesis in the energy budget framework: that global average surface temperature anomalies are indicative of anomalies in outgoing energy (either of terrestrial origin or reflected solar energy). We find that assumption is not well supported over the historical temperature record in the model ensemble or more recent satellite observations. We find that framing energy balance in terms of 500-hPa tropical temperature better describes the planet’s energy balance.

Of direct relevance to the new Lewis and Curry paper (hereafter LC18),[3] Dessler18 states:

With respect to precision of the estimates, our analysis shows that λ and ECS estimated from the historical record can vary widely simply due to internal variability.

Andrew Dessler has been using the Dessler18 results to criticise energy budget ECS estimates, such as that in LC18. He tweeted:

New Lewis and Curry paper is out! Unfortunately for them, it’s already shown to be wrong! Our recent paper showed that the methodology produces answers that can deviate significantly from reality.[4]

In reality, the LC18 results are untouched by the Dessler18 findings, as I shall show.

What Dessler18 does
Dessler18 estimates climate feedback strength λ from the global mean changes in surface air temperature TS, effective radiative forcing F and top-of-atmosphere net downwards radiation N between the first and last decades of 100 MPI-ESM1.1 historical simulations, using the standard energy-balance equation: λ = ΔR/ΔTS = (ΔF − ΔN)/ΔTS.[5] Exact simulated ΔN and ΔTS values are output by the model, while ΔF is estimated separately.[6] ECS is estimated as F2×CO2 / λ, where F2×CO2 is taken as 3.9 Wm−2.[7]
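The calculation can be sketched in a few lines of Python (illustrative only, not the authors' code), using the median MPI-ESM1.1 values quoted in this article:

```python
# Energy-budget estimation sketch. Inputs are the median MPI-ESM1.1 values
# quoted in this article (assumed for illustration, not taken from the
# paper's data archive).
delta_R = 1.23    # W/m2: median (ΔF − ΔN) between first and last decades
delta_T = 0.87    # K: median surface air temperature change, same decades
F_2xCO2 = 3.9     # W/m2: forcing for a doubling of CO2 in MPI-ESM1.1

lam = delta_R / delta_T   # climate feedback strength λ, W/m2/K
ecs = F_2xCO2 / lam       # energy-budget ECS estimate, K
```

Because the median of a ratio need not equal the ratio of medians, this gives λ ≈ 1.41 and ECS ≈ 2.76, close to but not exactly the ensemble medians of 1.43 and 2.73 quoted elsewhere in this article.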

The estimated 5-95% uncertainty range for λ derived from the MPI-ESM1.1 historical simulations is 1.17−1.63, with a median of 1.43 and an almost identical mean (units for λ are Wm−2K−1). The distribution has an estimated standard deviation of 0.137 and shows no evidence of non-normality.[8] The fractional standard deviation of the λ estimates is 0.096× the median estimate.[9]

How would LC18’s results be impacted by adopting the internal variability implied by Dessler18?
One can easily work out the effect on the LC18 primary results of adopting the level of internal variability implied by the Dessler18 results. I calculate that when differences in ΔR and ΔTS and in the length of the base period used are allowed for, fractional standard deviations in λ of 0.044 and 0.054, attributable to internal variability in ΔR and ΔTS respectively, are implied.[10] This calculation uses a split, derived from the estimates of the model’s internal variability in ΔR and in ΔTS, of the ΔR and ΔTS contributions to the standard deviation in λ.

I apply the calculated fractional standard deviations in λ of 0.044 and 0.054, attributable to internal variability in ΔR and ΔTS, to respectively the denominator and the numerator of the fraction {(F2×CO2 ΔTS) / (ΔF − ΔN)} that is used to estimate ECS.[11] To avoid double counting, I remove the allowances made in LC18 for internal variability in ΔR and ΔTS.

The result is to change the 5–95% range for estimated ECS from 1.16–2.68 K on the original LC18 basis, to 1.19−2.65 K using internal variability corresponding to the Dessler18 results. The ECS estimate becomes slightly better constrained, not worse constrained.

It is arguable that the almost symmetrical uncertainty range for λ in Dessler18 implies that, because of the anticorrelation between ΔTS and ΔR, almost all the variability in λ can be treated as arising from internal variability in its numerator variable, i.e. in ΔR. In that case, the fractional standard deviation of 0.096 in λ just needs to be scaled to allow for the higher ΔR value and longer base period used in LC18. The resulting fractional standard deviation in λ attributable to internal variability in ΔR is then 0.054,[12] with none attributed to internal variability in ΔTS.

I apply the 0.054 fractional standard deviation to the denominator of the ECS formula, again removing the allowances made in LC18 for internal variability in ΔR and ΔTS. The result is to change the 5–95% range for estimated ECS, from 1.16–2.68 K on the original LC18 basis, to 1.20−2.64 K. Even if the calculated fractional standard deviation of 0.054 were increased by 50%, the 5–95% ECS range would still be narrower than when using the internal variability estimates adopted in LC18.

Do Dessler18’s results have any implications for historical period energy-budget ECS estimates?
The short answer to this is no.

Andrew Dessler initially claimed that the LC18 energy budget methodology caused bias. Although, to be fair, he has since withdrawn this claim, he is still promulgating the idea that energy budget ECS estimates are biased low, writing:[13]

I said several times that the Lewis and Curry method is biased, but I should’ve said it is imprecise. Our ACP paper shows that the method would give us an accurate estimate if we had 100 different realizations of the 20th century. However, The imprecision of the method means that with only one realization (the historical record), it is possible that you could get an answer that is far from true. Several other papers that have come out recently have also suggested that the pattern of warming that we experienced during the late 20th century causes energy balance estimates of ECS to be lower than the climate system’s true value.

One can quantify “far from true” based on the 100 MPI-ESM1.1 historical simulations analysed by Dessler18. Historical period energy budget ECS estimates are not limited to using 10-year averaging periods, which is what Dessler18 focuses on. Use of 20-year periods gives a better idea of how much internal variability influences energy budget ECS estimates. Dessler18 quotes a 5–95% range of 0.48 Wm−2K−1 for λ estimated using 20-year averages. The 95th and 99th percentiles of the 100 λ estimates involved are respectively 0.15 Wm−2K−1 and 0.18 Wm−2K−1 above the median estimate, while the very highest is 0.22 Wm−2K−1 above the median. Even if the LC18 λ median estimates of 2.25–2.3 Wm−2K−1 (using globally-complete temperature data) were biased high by 0.22 Wm−2K−1 due to a one-in-a-hundred realisation of internal variability, the low bias in the resulting ECS estimate would not exceed 10% – a minor amount compared to other uncertainties.
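The arithmetic behind the “would not exceed 10%” claim can be checked directly, as a sketch under the stated worst-case assumption that the whole 0.22 Wm−2K−1 deviation biases the LC18 λ estimates high:

```python
# Worst-case bias check: if the LC18 λ median estimates included a high
# bias of 0.22 W/m2/K from internal variability, how low-biased is ECS?
ecs_low_biases = []
for lam_est in (2.25, 2.30):        # LC18 λ median estimates, W/m2/K
    lam_true = lam_est - 0.22       # remove the assumed high bias in λ
    # ECS ∝ 1/λ, so the fractional low bias in ECS is 1 − λ_true/λ_est
    ecs_low_biases.append(1 - lam_true / lam_est)
```

Both values come out just under 10% (≈9.8% and ≈9.6%).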

Andrew Dessler also refers to other papers suggesting that the pattern of warming that we experienced during the late 20th century causes energy balance estimates of ECS to be lower than the climate system’s true value. Even if that pattern could cause energy budget ECS estimates based just on data covering the late 20th century to be biased low, it would not follow that energy budget ECS estimates based on changes over the full historical period are also biased low. The warming pattern effect is addressed at considerable length in LC18, and evidence presented that it did not bias energy budget ECS estimates based on the full historical period.

For completeness, I will also clarify another potential bias that is often mentioned. Historical period energy-budget ECS estimates reflect climate feedbacks operating over the historical period: they estimate ‘ECShist’. In most global climate models (GCMs), estimates of their true ECS exceed ECShist estimates.[14] For the extremely similar MPI-ESM1.2 model, its true ECS appears to be between 2.9 K and 3 K. That is about 8–10% higher than the median ECShist estimate of 2.73 K from the 100 MPI-ESM1.1 historical period runs. That excess is in line with the median excess of 9.5% for current generation GCMs derived in LC18 and slightly lower than one might expect given the time-profile of MPI-ESM1.2’s response to an abrupt doubling of CO2 concentration.[15] Whether ECS exceeds ECShist in the real climate system is unknown. LC18 provides an alternative ECS estimate that allows for it doing so, based on the ECS – ECShist relationships in current GCMs.

Conclusions
I have shown that the Dessler18 claim that the central hypothesis in the energy budget framework – that global average surface temperature anomalies are indicative of anomalies in outgoing energy – “is not well supported over the historical temperature record in the model ensemble” is incorrect.

Importantly, the uncertainty that Dessler18 focuses on barely affects the total uncertainty of primarily observationally-based[16] energy-budget ECS estimates derived from estimates of ΔTS, ΔN and ΔF over the period since 1850, which is dominated by forcing uncertainty. Using the same data as for the LC18 primary estimate but with the influence of internal variability removed, the fractional standard deviation in 1/ECS is 0.22.[17] Adding uncertainty attributable to internal variability, in line with that in MPI-ESM1.1, would only increase the total fractional standard deviation to 0.23.[18] So, eliminating the uncertainty that Dessler18 makes a big issue about would hardly reduce the total uncertainty in current observationally-based energy balance climate sensitivity estimates.
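Since independent uncertainties combine in quadrature, the small effect is easy to verify; a sketch using the 0.22 fractional standard deviation and the 0.054 and 0.044 internal-variability contributions derived earlier in this article:

```python
import math

# Quadrature combination of independent fractional standard deviations.
other = 0.22                          # fractional sd in 1/ECS, internal variability removed
internal = math.hypot(0.054, 0.044)   # combined internal-variability contribution, ~0.070
total = math.hypot(other, internal)   # total fractional sd, ~0.23
```

The internal-variability contribution of about 0.07 raises the total from 0.22 only to about 0.23, as stated.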

Other issues raised in Dessler18
Dessler18 also asserts that “If TS is a good proxy for the response RF, we would expect to also see a correlation in measurements dominated by interannual variations.  Observational data allow us to test this hypothesis. … These [CERES EBAF] data show that ΔR is poorly correlated with ΔTs in response to interannual variability (Fig. 3a), as has been noted many times in the literature.”

Dessler18 Fig. 3a actually plots monthly mean data and hence includes intermonthly variability. Nevertheless, it does show a correlation, of the expected sign. One would not expect a high correlation, because a fair amount of the shorter term variability in R [N] is caused by random fluctuations in clouds. Such fluctuations in R cause TS to move in the opposite direction to that which it would take if the change in R had been driven by a fluctuation in TS. It is well known that this problem confounds the estimation from shorter term data of the energy-balance based relationship of R and TS.[19] The problem is of limited relevance to energy balance ECS estimation, which uses multidecadal or centennial changes.

I shall make only a brief comment here on the Dessler18 proposal to replace global surface temperature with tropical mid-tropospheric (500 hPa pressure) temperature, as a better determinant of changes in outgoing energy. It appears to eliminate a minor source of uncertainty at the expense of introducing worse problems. Indeed, the authors admit, in the second paper,[20] that a key ratio they use to convert 500-hPa tropical temperature interannual feedback strength into long term forced-response feedback strength “comes from climate model simulations; we have no way to observationally validate it, nor any theory to guide us”.

Nic Lewis


[1]   Dessler, A. E., T. Mauritsen, B. Stevens, 2018. The influence of internal variability on Earth’s energy balance framework and implications for estimating climate sensitivity. Atmos. Chem. Phys. https://doi.org/10.5194/acp-2017-1236-RC1, 2018. I thank the authors for having publicly archived the related data and code.

[2]   1 K = 1 °C

[3]   Lewis, N., and J. Curry, 2018: The impact of recent forcing and ocean heat uptake data on estimates of climate sensitivity. J. Clim. JCLI-D-17-0667. A copy of the final submitted manuscript, reformatted for easier reading, is available at my personal webpages, here. The Supporting Information is available here.

[4]   https://twitter.com/AndrewDessler/status/988898450107457536

[5]   In Dessler18 R is used to denote the top-of-atmosphere radiative imbalance (downwards net radiation, both longwave and shortwave) rather than the usual symbol N, used here. R normally denotes the change in TOA radiative imbalance resulting from a change in surface temperature, so that (when measured downwards) ΔR = (ΔF − ΔN). I use that notation here, but with R measured upwards. Note that as uncertainty in model-simulation ΔF is ignored, the standard deviation in estimated ΔR equals that in ΔN.

[6]  Using an ensemble of three runs with time-varying forcing but fixed preindustrial sea-surface temperatures (SST).

[7]  Derived from fixed SST simulations in which CO2 increases at 1% p.a.

[8]   The distribution appears to be close to normal. Its mean is almost identical to the median of 1.43, and the 5–95% range of 0.46 is 3.4× its estimated standard deviation of 0.137, close to the 3.29× for a normal distribution.
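This normality check is a one-liner (the 3.29 factor is the width of the central 90% of a standard normal distribution, 2 × 1.645):

```python
# Ratio of the 5-95% range of the λ estimates to their standard deviation;
# for an exactly normal distribution this ratio would be 3.29.
ratio = (1.63 - 1.17) / 0.137   # ~3.36, close to the normal value
```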

[9]   By comparison, the standard deviation of decadal mean TS in the MPI-ESM1.1 piControl run is 0.061 K, which is 7% of the 1850-1860 to 1996-2005 median historical simulation ΔTS of 0.87 K. Adding the base and final period uncertainty in quadrature implies a fractional standard deviation for ΔTS of 0.10. The corresponding absolute standard deviation for decadal mean N is 0.076 Wm−2. Since uncertainty in ΔF derived from fixed SST simulations is very small (and was ignored), the standard deviation of decadal mean ΔR = (ΔF − ΔN) is the same. The fractional standard deviation of ΔR is accordingly 6.2%, relative to the 1850-1860 to 1996-2005 mean historical (ΔF − ΔN) of 1.23 Wm−2. Adding the base and final period uncertainty in quadrature implies a fractional standard deviation for ΔR of 0.087. Anti-correlation of −0.47 between decadal mean ΔN and ΔTS, which is expected from the response of R to ΔTS, explains why the 0.096 fractional standard deviation in estimates of λ from the MPI-ESM1.1 historical simulations is much less than the sum in quadrature of the estimated fractional standard deviations of ΔTS and ΔR based on piControl run variability, which is 0.133.
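The effect of the anticorrelation can be reproduced with first-order error propagation for a ratio: for λ = ΔR/ΔTS the fractional variance is approximately f_R² + f_T² − 2ρ·f_R·f_T, where ρ is the correlation between ΔR and ΔTS fluctuations (+0.47 here, since corr(ΔN, ΔTS) = −0.47 and ΔR = ΔF − ΔN). A sketch using the rounded values above:

```python
import math

f_R, f_T = 0.087, 0.10   # fractional sds of ΔR and ΔTS from piControl variability
rho = 0.47               # correlation between ΔR and ΔTS fluctuations

uncorrelated = math.hypot(f_R, f_T)                        # quadrature sum, ~0.133
correlated = math.sqrt(f_R**2 + f_T**2 - 2*rho*f_R*f_T)    # with correlation, ~0.097
```

This recovers both the 0.133 quadrature sum and, to within rounding, the 0.096 fractional standard deviation actually seen in the λ estimates.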

[10]          The LC18 base period is longer than in Dessler18: 14 rather than 10 years, so the inter-period variability should be approximately 7% lower, for both ΔR and ΔTS, reducing the fractional standard deviation in λ estimates to 0.089. The contribution from internal variability in ΔTS appears to be slightly lower than that from ΔN; attributing 57% and 43% of the scaled 0.089 fractional standard deviation in λ, or 0.051 and 0.038, to ΔR and ΔTS respectively appears reasonable. I then multiply these by 1.4 to allow for their effects adding in quadrature, increasing them to 0.071 and 0.054 respectively. The median ΔR for the MPI-ESM1.1 historical simulations is 1.23 Wm−2, 0.61× the 2.02 Wm−2 for the main LC18 estimate. The fractional standard deviation in λ estimates of 0.071 attributable to internal variability in ΔR therefore scales down to 0.044. Median warming between the first and last decades of the MPI-ESM1.1 historical period runs is 0.87 K, almost identical to the globally-complete observational data warming in the primary LC18 case (0.88 K), so no adjustment is made to the 0.054 fractional standard deviation attributable to internal variability in ΔTS.
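The chain of adjustments in this footnote can be followed step by step (illustrative arithmetic only; the unrounded ΔR figure comes out at 0.043, i.e. 0.044 to within rounding of the intermediate values):

```python
# Step-by-step reproduction of the scaling described in this footnote.
f_lambda = 0.096 * (1 - 0.07)     # 14- vs 10-year base period: ~0.089
f_R = 0.57 * f_lambda * 1.4       # ΔR share, x1.4 for quadrature: ~0.071
f_T = 0.43 * f_lambda * 1.4       # ΔTS share: ~0.054
f_R_lc18 = f_R * (1.23 / 2.02)    # rescale to LC18's larger ΔR: ~0.043
```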

[11]          I do so by drawing one million samples from each of two normal distributions, each having a mean of one, with standard deviations of 0.054 and 0.044 respectively, and multiplying the one million sampled LC18 ECS estimates by the ratios of the first set of samples to the second set.
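A minimal stand-alone sketch of that sampling (assumed setup: only the adjustment factors are generated here, and a smaller sample size is used; the one million LC18 ECS samples themselves are not reproduced):

```python
import random
import statistics

random.seed(0)
n = 200_000  # fewer than a million, but enough for stable summary statistics
# Ratio of numerator (ΔTS) to denominator (ΔR) internal-variability factors;
# each sampled LC18 ECS estimate would be multiplied by one of these ratios.
factors = [random.gauss(1.0, 0.054) / random.gauss(1.0, 0.044)
           for _ in range(n)]
mean = statistics.fmean(factors)   # ~1.00 (slightly above: E[1/x] > 1/E[x])
sd = statistics.pstdev(factors)    # ~0.070, the combined fractional sd
```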

[12] The calculation is 0.096 * (1 − 0.07) * (1.23/2.02) = 0.054. See endnote 10.

[13] https://andthentheresphysics.wordpress.com/2018/04/27/lewis-and-curry-again/#comment-117464

[14]          The estimates of ‘true’ ECS in GCMs are typically derived from the x-intercept when regressing ΔR against ΔTS over years 21–150 following a simulated abrupt quadrupling of the preindustrial CO2 concentration. Since historical period forcing is not known in most GCMs, the estimates of ECShist are usually based on data providing similar information to that available from the historical period.
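This x-intercept (‘Gregory’) method can be illustrated with synthetic data; all numbers below are assumed for illustration (feedback 1.3 Wm−2K−1, 4×CO2 forcing 7.8 Wm−2, so true ECS is 3 K), not taken from any particular model:

```python
import math
import random

random.seed(1)
lam_true, F_4x = 1.3, 7.8    # assumed feedback (W/m2/K) and 4xCO2 forcing (W/m2)
years = range(21, 151)       # years 21-150 after the abrupt quadrupling
# Synthetic warming path approaching equilibrium (7.8/1.3 = 6 K), plus a
# noisy TOA imbalance N = F − λT.
T = [6.0 * (1 - math.exp(-y / 30)) for y in years]
N = [F_4x - lam_true * t + random.gauss(0, 0.1) for t in T]

# Ordinary least squares of N on T: slope ~ −λ, x-intercept ~ 4xCO2 warming.
mT, mN = sum(T) / len(T), sum(N) / len(N)
slope = (sum((t - mT) * (n - mN) for t, n in zip(T, N))
         / sum((t - mT) ** 2 for t in T))
intercept = mN - slope * mT
ecs = (-intercept / slope) / 2   # halve the 4xCO2 equilibrium warming: ~3 K
```

With the noise level assumed here, the regression recovers the 3 K value closely; in real model output the scatter, and hence the uncertainty in the x-intercept, is larger.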

[15] Interestingly, the 2.73 K historical period ECS estimate appears to be biased low, as a result of diagnosed forcing in 1850 and 1851 being anomalously low. Using 20-year rather than 10-year averaging periods at the start and end of the historical period produces a median ECS estimate of 3.0 K, although that is probably biased high by the response to the 1991 Mount Pinatubo volcanic eruption. When periods that avoid both 1850–51 and volcanic years are used, the median ECS estimate is slightly over 2.9 K, surprisingly close to MPI-ESM1.2’s estimated true ECS. Possibly MPI-ESM1.1 has a slightly higher ECS than does MPI-ESM1.2.

[16] GCMs are used in deriving components of ΔF and the (small) initial value of N, but in the main with a strong input from observations.

[17] Uncertainty in 1/ECS rather than in ECS is given since 1/ECS is proportional to λ (save for uncertainty in F2×CO2), and estimates of 1/ECS have a close to normal uncertainty distribution, unlike for ECS.

[18] Since the uncertainty attributable to internal variability is independent of other uncertainties, their standard deviations add in quadrature. Adding the allowance for internal variability in ΔR and ΔTS included in the main LC18 ECS estimates would have a larger, but still small, effect.

[19] Spencer, R. W., W. D. Braswell, 2010. On the diagnosis of radiative feedback in the presence of unknown radiative forcing. J. Geophysical Research: Atmospheres, 115(D16).
Choi, Yong-Sang, et al., 2014: Influence of non-feedback variations of radiation on the determination of climate feedback. Theoretical and applied climatology 115.1-2 (2014): 355-364.

[20] Dessler, A.E., P.M. Forster, 2018. An estimate of equilibrium climate sensitivity from interannual variability. Preprint of submitted manuscript.

UPDATE 1 May 2018

Andrew Dessler has responded on Twitter to this article, writing:

Let’s match the periods chosen in our model ensemble analysis as closely as possible to L&C. We can calculate ECS using these base periods: 1869-1882, 1995-2005.

The resulting distribution has median = 3.01 K, 5-95% confidence interval: 2.59-3.56 K. Looks like an important uncertainty to me.

The fact that Lewis finds that internal variability makes no difference tells me that, somewhere buried in his inscrutable analysis, is a mistake.

Andrew Dessler also complains about his difficulty in following LC18. It certainly seems from these comments that he does not understand either that paper or this article. I will try to explain in simple terms why what he tweets is wrong.

  1. In the Dessler18 simulations, internal variability in ΔR appears to dominate, so one should analyse its effects on climate feedback λ = ΔR/ΔTS = F2×CO2 /ECS, not on ECS, when applying it to estimates from a different study.
  2. The 2.59-3.56 K ECS range equates to a λ variance of 0.016 (units for λ variance are W2m−4 K−2). Since internal variability is independent of other (mainly forcing) uncertainties, λ variances (unlike standard deviations or ranges) add.
  3. The relevant LC18 ECS estimate without the LC18 allowance for internal variability has a λ variance of 0.26.
  4. So the total LC18 λ variance after adding Andrew Dessler’s internal variability estimate is 0.28. In standard deviation terms, that is an increase of 3% over the no-internal-variability level.
  5. Internal variability that only causes a 3% increase in estimation standard deviation does not look like an important uncertainty to me.
  6. The actual allowance for internal variability in LC18 was larger than that per the MPI-ESM1.1 simulations used in Dessler18, but its effect was still minor compared with other uncertainties.
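Points 2–4 above can be verified numerically (a sketch; the 0.26 LC18 λ variance is taken from point 3 as given):

```python
import math

F_2xCO2 = 3.9                                       # W/m2, as for MPI-ESM1.1
lam_lo, lam_hi = F_2xCO2 / 3.56, F_2xCO2 / 2.59     # λ from the quoted ECS range
sd_internal = (lam_hi - lam_lo) / 3.29              # normal approximation to 5-95%
var_internal = sd_internal ** 2                     # ~0.016 (point 2)
var_lc18 = 0.26                                     # λ variance excluding internal var.
# Independent variances add; compare resulting standard deviations (point 4).
sd_increase = math.sqrt((var_lc18 + var_internal) / var_lc18) - 1   # ~3%
```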


  3. A wonderful rebuttal.

    Thank you for your work.

  4. Nic Lewis, thank you for the essay.

  5. Nic,
    I still think you’re missing the point of Dessler et al. The key point (as I understand it) is that we only have a single realisation of reality from which we can estimate climate sensitivity using an energy balance approach. Depending on how internal variability has impacted surface warming (and also the relationship between surface warming and the planetary energy imbalance), the result you get could be far from the “true” climate sensitivity of the system. Hence, one should be cautious of presenting the best estimate that one gets from an energy balance approach as something likely to be close to the “true” climate sensitivity.

    • ATTP:

      I think it is you who are perhaps missing something.

      Reality collapses all the possibilities of the various model runs into one actual answer. Like opening Schrödinger’s box. The true climate sensitivity is what observations will show it to be – not what models predict it to be.

    • “we only have a single realisation of reality”

      Yes, unfortunately in the prediction business, that’s all you get.

    • ATTP, Did you read the post? Nic addressed exactly this point and showed that the influence of internal variability was fully accounted for in LC18 and that Dessler’s GCM experiment fell well within their uncertainty due to internal variability. The more important question is how would you or Dessler do this computation differently? You offer no suggestions for improvement. You can do better than that, I would hope. Dessler’s methods are all dependent on GCMs being right in the long term. That’s an assumption that is quite poor given the poor skill of GCMs at regional climate, i.e., the “pattern of warming.” It’s a more parsimonious assumption that GCMs get the pattern of warming wrong because they are missing some key feedbacks.

      • This was written: “showed that the influence of internal variability was fully accounted”

        They all say from time to time when looking at climate changes of the past, “it is not known why some change happened”.

        This sounds like they have very little knowledge of internal variability and could not possibly fully account for any of it.

      • They would. Their position is that what the real world has done and is doing has no bearing on what it will do in the future.

      • They wouldn’t.

    • Kenny Rice is trying to tell us that robot sex is more reliable than real sex.

    • ATTP argues [at his site]
      “In a sense, Lewis & Curry are taking one realization of reality and assuming that it is an exact representation of the typical response of the system. It probably isn’t. This doesn’t mean that climate sensitivity can’t be low (even mainstream estimates do not rule this out). It simply means that we should be cautious of assuming that it is low based on an estimate that can’t fully account for how internal variability may have influenced the path that we’ve actually followed.”

      In response I said
      “We do however have only one real set of data observations to go off. The chance that the baseline chosen was affected by natural variability is real; the chance that the comparison periods were affected is real. The trend over the baseline compared to that of the other periods could be worked out and, if different, noted. If not, it suggests that natural variability was unlikely to have affected the result. Why would it happen the same way over different time intervals and at different times?
      Hence the L and C result has a chance of being right, though, as you and AD suggest, a possibility of unusual internal variability exists.”

      • Why would it happen the same way over different time intervals and at different times?

        Because that is the way it has always been. Time intervals and the times it happened do change.

    • There is only one realization of reality possible. The system is deterministic; that we cannot accurately predict future states due to imprecise knowledge of system interactions and initial conditions does not change the fact that the reality realized is, and was, the only reality possible.

    • and Then There’s Physics: Depending on how internal variability has impacted surface warming (and also the relationship between surface warming and the planetary energy imbalance) the result you get could be far from the “true” climate sensitivity of the system.

      So there is a population of possible realizations, defined more or less; and we are living in one realization of the possibilities, what you call “the system”. What do you want, the sensitivity of this realization, or the sensitivity of the (loosely defined) system? We want the sensitivity of this realization. Thought of as a population, “the system” does not have a single sensitivity, but it has a population of sensitivities, one for each possible realization, with a distribution. We do not want the mean, variance and quantiles of the sensitivities of the system, we want the conditional mean, conditional variance, and conditional quantiles of the realization, given the measurements on the realization to date.

  6. > Andrew Dessler initially claimed that the [L18] energy budget methodology caused bias.

    A proper quote with citation might have been nice, since there’s not one single instance of “bias” in D18.

    All this talk of “bias” looks imprecise to me, and indicates a bias (which would be easy to substantiate – look at the first comments and pingbacks) toward pushing the lower limits of justified disingenuousness.

    • If you had read the whole article, you would have seen a proper quote with citation in which Andrew Dessler wrote:

      I said several times that the Lewis and Curry method is biased

      • > you would have seen a proper quote with citation

        The quote of Andrew recalling what he said isn’t exactly a quote of what he initially said, Nic. So it’s not that clear that when you said “initially claimed” you were referring to that quote.

        Now that you clarified what you meant, it’s clear that you misidentify Andrew’s initial claim. Here it is:

        First, note that “the Lewis and Curry method” is less specific than the “L18 model.” Second, note that there’s no mention of “bias” either, only “wrong” and “produces answers that can deviate significantly from reality.” There’s no clear-cut way to translate that into bias, which relates to a true value. A lukewarm spinning of “observational,” “best estimate,” “variability” or “bias” doesn’t bring your estimate any nearer to a true value, let alone reality.

        Moreover, if you read that quote, you’d notice that he’s correcting the word “bias.” Hence why he says in the sentence you fail to recall: but I should’ve said it is imprecise. When speaking informally, there’s no harm in conflating precision and accuracy. Andrew did not make that mistake in D18, hence why the paper conclusion starts with this paragraph:

        We have estimated ECS in each of a 100-member climate model ensemble using the same energy balance constraint used by many investigators to estimate ECS from 20th-century historical observations. We find that the method is imprecise – the estimates of ECS range from 2.1 to 3.9 K (Fig. 2), with some ensemble members far from the model’s true value of 2.9 K. Given that we only have a single ensemble of reality, one should recognize that estimates of ECS derived from the historical record may not be a good estimate of our climate system’s true value.

        https://www.atmos-chem-phys.net/18/5147/2018/acp-18-5147-2018.pdf

        I do not think “I said several times that the Lewis and Curry method is biased, but I should’ve said it is imprecise” contains any concession over which you could make a touchdown dance.

        By the way, your article provides the URL to the interactive comment, not the paper itself.

        Next time, keep the chin up.

      • Nic, Willard is the world champion of obfuscation and obscurities as his follow up comment so amply demonstrates. Like the eye of Sauron, Willard’s very keen on finding and punishing any virtue shown by anyone who disagrees with him.

        The most parsimonious explanation is simply that Dessler has trouble sticking to a consistent story.

      • Willard, what has that got to do with the ‘bias’ bit you brought up? Did he say: “I said several times that the Lewis and Curry method is biased” or did he not? Is this your straw man or ‘look over there’ argument?

      • Thank you for your rhetorical question.

        The first relevant question in AT’s thread has been written on April 27, 2018 at 3:29 pm:

        [FrankB]: I agree that my ACP paper only identifies the bias in the method, but does not tell you that the L&C estimates are actually biased. But other work (Marvel et al., 2018, Zhou et al., Andrews and Webb) show that the actual pattern that we have experienced is biasing the ECS estimates low.

        https://andthentheresphysics.wordpress.com/2018/04/27/lewis-and-curry-again/#comment-117299

        The “bias” in first sentence is what he’s clarifying. Why do I know this? For three reasons.

        First, because Nic took issue with it at 4:54.

        Now, do you know what was Andrew’s response? Not the one Nic quoted, but this one, at 6:18 pm:

        Nic Lewis: Clearly, my comments above were not written as clearly as they could have been. For everyone else’s info, let me be clear: my paper showed that the energy balance method L&C used is IMPRECISE and one could infer values of ECS that are far from the actual value. Thus, one should not assume that values in your paper, obtained from one ensemble of reality, are accurate estimates of the system’s true values.

        When you combine my paper with others that you’re familiar with (Zhou et al., Andrews and Webb, Marvel et al., etc.), it seems very very likely that the surface pattern variability is making your estimate too low. Your argument in Sect. 7 of your paper that the surface pattern is forced are not well constructed or convincing (but I’m guessing you already know that).

        The second reason I know why Andrew is correcting his wording is because of the part that Nic omitted from Andrew’s comment:

        I agree with several previous commenters that the terms people use are causing some confusion. We wrote our ACP paper carefully to be as clear as possible, but I have not been as clear in my tweets and in blog comments (because to be honest I don’t spend a lot of time thinking about them before I post).

        [Insert Nic’s chopped quote]

        The third reason is because that comment was in response to an exchange I had with Dikran, not in response to Nic.

        I have three reasons to believe that Nic is pulling the Denizens’ legs here.

        What have you got, Ordvic?

      • Willard, I’m so far from understanding you I’m afraid I’m beyond help. And I’m being serious. All I want to know is: did Dessler say “I said several times that the Lewis and Curry method is biased”? I don’t care who is right or who is wrong, and I probably never will. Did he say it or didn’t he?

      • Well I guess I finally read your comments (both) several times and referred back to the post. Now I think I understand what your complaint is.

        However, consider this: if Dessler really meant to say ‘imprecise’, that would mean Lewis and Curry didn’t quite get all their decimals right, indicating perhaps shoddy work rather than bias. It also says at the beginning that the paper is wrong, indicating the paper is in error and not factually true.

        At the end of the paragraph containing the alleged sentence with the ‘bias’ and the ‘imprecise’, it says several other papers (which he doesn’t cite) show the estimates are lower than the true value because of late 20th-century warming. To me that means they erred on the side of low. After all, that is what is at contention here, isn’t it? Dessler wants the low bias, or lack of precision, to be wrong. He wants it to show that it’s really warmer than the value indicates.

        IMO it comes down to political posturing more than anything. Lewis and Curry are lukewarmists and apparently (I don’t know for sure) Dessler is a warmist. Well, at least there is a back-and-forth discussion (apparently). Sorry I didn’t get what you were saying, but I still don’t get the point. Whether they’re biased or imprecise or wrong, he has a higher value than they do. A rose by any other name smells just as sweet.

      • Sorry Dessler

      • Thanks, Ordvic. I agree with your overall diagnostic. I even agree with SteveF’s estimation of the importance of these debates.

        I think it’s simpler to read what Andrew says the following way:

        If you throw a die many times you may see it is biased and use that to predict the future.

        If you throw it once and get a three that is not particularly informative about the future.

        Same system, different uncertainties. As long as you are within the confidence/uncertainty range all is good. Only if you get a statistically significant result you may want to update your predictions about the future (preferably after understanding the reason).

        The paradoxical worshippers of the Uncertainty Monster who accuse scientists of ignoring uncertainties should themselves stop ignoring uncertainties.

        https://andthentheresphysics.wordpress.com/2018/04/27/lewis-and-curry-again/#comment-117584

        These are VeeV’s words, not mine, so you can safely understand them by a first reading. If what I’ve been saying in this subthread is obscure to Denizens, I can’t fathom how they can Grok Nic’s prose. To each one’s own, I guess.

        It is now high time we return our focus to DavidY’s very informative technical comments.
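VeeV’s die analogy quoted above is easy to check numerically. A minimal sketch (the die’s face probabilities are invented for illustration):

```python
import random
import collections

random.seed(0)

# A die biased toward six; the face probabilities are illustrative only.
faces = [1, 2, 3, 4, 5, 6]
weights = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]

def throw(n):
    """Throw the biased die n times; return the observed frequency of each face."""
    counts = collections.Counter(random.choices(faces, weights, k=n))
    return {f: counts[f] / n for f in faces}

# Many throws reveal the bias: the frequency of a six approaches 0.5.
print(throw(100_000))

# A single throw tells you almost nothing about the die.
print(random.choices(faces, weights, k=1))
```

Many throws pin the frequency of a six near its true value; a single throw carries almost no information about the die, which is the distinction between characterizing a system and observing one realization of it.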

      • Lewis and Curry are lukewarmists and apparently (I don’t know for sure) Dessler is a warmist.

        Lewis and Curry are lukewarmists and Dessler is a warmist.

        All who produce a sensitivity of climate temperature to man made CO2 are lukewarmists or extreme warmists.

        Mother Nature is a pure naturalist. She created this warm period the same way she created past warm periods. She decreased ice extent by reducing snowfall in the little ice age, allowing ice volume to deplete which allowed ice extent to decrease. Now, she is cranking up the snowfall to rebuild the ice volume. After a sufficiently long warm period, the increased ice volume will increase ice extent and cause another little ice age.

        This is what ice core data indicates. This modern period is no different so far. It is only different in the future, and only in the output of flawed climate models.

      • Willard, thanks for the reply and link. I have to finish trying to understand Lewis here in this post before I venture to Dressler errrr Andrew Dessler, but I will check your ATTP link later today.

      • stevefitzpatrick

        Willard,
        When you wrote: “I even agree with SteveF’s estimation of the importance of these debates”, were you referring to agreement with my comment on Nic’s April 24 post:

        “That said, all technical evaluation of sensitivity seems to me rather moot… nothing is going to bring about significant reductions in global CO2 emissions over the next few decades. By 2050, the reality of ~500 PPM CO2 will have settled most of the technical questions. But that won’t make the policy disagreements disappear, nor should they: the fundamental disagreements are in values, priorities, and perceived costs and benefits. It is naive to imagine those disagreements are going to disappear if TCR turns out to be 1.2C or 2.1C. We might all hope technology provides additional lower cost policy options by mid-century.”

        Or some other comment I made?

      • SteveF,

        Let’s say that our agreement ends at:

        That said, all technical evaluation of sensitivity seems to me rather moot…

        Everything else is a bit too speculative to my taste.

      • Willard, your statement about the dice is beside the point. There is uncertainty due to the short time period of the observations. LC18 accounts for that uncertainty. Dessler, I think, actually admits on Twitter that he doesn’t fully understand LC18. I would advise against taking his word for anything given his falsehood about LC18 having “already been proven wrong.”

        The other telling thing is that Dessler appears to have left the field while admitting at your blog that “we don’t have any good methods to disentangle internal variability from forced response.” Paraphrasing: most of that settled science we’ve been pushing is very uncertain.

      • > Your statement about the dice is beside the point.

        Thank you for this other technical comment.

        A few notes. It was not a “statement,” but an analogy. It was not “my” analogy, but Victor Venema’s. My own analogy was this one:

        > Is this not a game with THREE dice.

        Backgammon? Kinda. There are two pairs of dice to move the stones, and there is a doubling cube. This one is used for betting on the number of points (1, 2, 4, 8, 16, 32) the winner of the game will score. (A Backgammon match is a series of games.) Before it has been offered for the first time, either player can offer it to the opponent in exchange for doubling the odds. In return, the opponent gets the cube. Later in the game, he alone can offer back the cube. Et cetera.

        https://andthentheresphysics.wordpress.com/2018/04/27/lewis-and-curry-again/#comment-117573

        This analogy echoes Pehr’s last comment. It also extends SteveF’s estimation, to which your technical comment is unresponsive.

        Your technical comment also sidesteps the main point made by Andrew, which relates to precision rather than accuracy. Except, that is, for KenF’s unsupported assertion that one realization of a climate system can be the truth. Considering your insistence, in other contexts, on the difficulty of computing precise approximations of the Navier-Stokes equations, I do hope you know why.

        A simpler explanation of Andrew’s communication closure is that audits never end. Those who aren’t in the auditing business should let things go.

        Being thankful for contrarian concerns ought to be enough.

    • Willard strikes (out) again

      • Nic Lewis appears to be arguing primarily on the basis that all work on climate sensitivity is wrong, except his own, and one other team who gets similar results.

        CO2 is about 400 parts per million. Man Made CO2 is about 100 parts per million. One molecule of CO2 per ten thousand molecules does not have a measurable sensitivity.

        Chicken Little is laughing her head off at all the people spending their time working on something that can never be measured and verified.

    • willito sheepishly replies:”Ouch!”

      • You should have waited a bit before cheerleading, Don Don.

        How’s teh Donald doing these days?

      • Nobel is in the mail.

      • I’ll just keep assuming that an educated, reasonable fellow like Nic is right and you are wrong, as usual. My spreadsheet indicates that you have been wrong on 98.4% of the occasions when you gratuitously nitpicked a learned person of Nic’s caliber. That’s right up there with the 98.7% of your comments that are over-the-top loquacious BS. Pompous: 99.8%. Self-righteous: 96.7%. Do you want the rest?

        The Donald is on a roll. At least, seven more years.

      • You might need to release that spreadsheet, Don Don.

        Unless you’re not a fan of Open Science?

      • Unfortunately for mankind, it is necessarily ongoing research. If you stop the foolishness, I will wrap it up and submit for publication.

      • Fair enough, Don Don.

        I hope your compendium contains the episode where Nic practiced double accounting regarding the very few teams he endorsed:

        > That makes three other teams getting similar results […]

        Are you referring to Otto & al as an “other team,” Nic?

        Sounds like double accounting to me.

        http://bishophill.squarespace.com/blog/2015/10/5/puffed-rice.html?currentPage=3#comments

      • Don Monfort

        You coulda been a contender, willito. But you are just a nitpicking tomato can.

      • Thank you for the kind words, Don Don.

        You should have checked the link I provided before choosing your nitpick theme. For here’s to what I was responding:

        Speaking of attention to detail, here’s James:

        Nic Lewis appears to be arguing primarily on the basis that all work on climate sensitivity is wrong, except his own, and one other team who gets similar results. In reality, all research has limitations, uncertainties and assumptions built in. I certainly agree that estimates based primarily on energy balance considerations (as his are) are important and it’s a useful approach to take, but these estimates are not as unimpeachable or model-free as he claims. Rather, they are based on a highly simplified model that imperfectly represents the climate system.

        http://www.climatedialogue.org/climate-sensitivity-and-transient-climate-response/#comment-901

        Discuss.

        http://bishophill.squarespace.com/blog/2015/10/5/puffed-rice.html?currentPage=3#comments

        Nic’s response? You guessed it:

        That makes three other teams getting similar results (joined by others post AR5), not one.

        How Nic evaded JamesA’s point is a thing of beauty.

        Pray tell me more about nitpicking, Don Don.

      • Don Monfort

        Nobody checks your links, willito. And we don’t read anything in italics. Elaborate nitpicking is no better than plain old nitpicking. We don’t have time for it.

    • Willard,
      Your arguments are complete nonsense.

      What I wrote was “Andrew Dessler initially claimed that the LC18 energy budget methodology caused bias.”

      I didn’t say that this was claimed in the Dessler18 paper.

      An example of the original claims by Andrew Dessler “that the LC18 energy budget methodology caused bias” is this one, at https://andthentheresphysics.wordpress.com/2018/04/27/lewis-and-curry-again/#comment-117299

      “I agree that my ACP paper only identifies the bias in the method, but does not tell you that the L&C estimates are actually biased. But other work (Marvel et al., 2018, Zhou et al., Andrews and Webb) show that the actual pattern that we have experienced is biasing the ECS estimates low”.

      • > I didn’t say that this was claimed in the Dessler18 paper,

        I didn’t say you said it either, dear Nic. What I was suggesting was that the “initial claim” in your labyrinthine piece was imprecise. There’s no way to know, from the way you wrote it, what you’re referring to, even for someone who read that thread at AT’s. I’m quite sure the quote you provided was opportunistic. It doesn’t even support whatever point you wished to make, but that’s between you and your peanut gallery.

        Next time you want to say that my argument is nonsense, try to read it.

      • Don Monfort

        nit, meet picker

      • willard’s comments are always complete nonsense, Mr. Lewis. He’s playing Climateball. For the rest of us, the lesson from the movie War Games is the key: The only winning move is not to play.

      • W couldn’t understand even the most basic concepts of income inequality. That this is over his head doesn’t surprise anyone.

      • A link might have been nice, Kid.

        Perhaps you have this in mind:

        The very concept of broad income shows that we’re far from the “major flaws” promised, Kid. The authors readily acknowledge that most studies have found a long-term trend of rising inequality. Also, when the main author, a Manhattan Institute talking head, tells you that inequality slows if and only if employers provide health care to their employees, you might wish to give it some thought. We’re in Trumpcare week, after all.

        Also note that the “Piketty book” ain’t Piketty & Saez 2003, and the graph that started you ain’t from that paper either. Search for the first author, Paulina Tcherneva. Her point was about the share of income growth received by the top 10 percent and bottom 90 percent of earners during expansions. But your squirrel was fun.

        https://judithcurry.com/2017/05/02/nyes-quadrant/#comment-848636

        Again, thanks for playing.

      • No. It was the inability to distinguish market from aggregate income. Such a basic concept, and, just like this thread, you whiffed.

      • > It was the inability to distinguish market from aggregate income.

        Right after I’ve presented you with a quote that may indicate otherwise, Kid. That’s just great.

        No wonder you can’t talk to your children about these subjects anymore.

      • Don Monfort

        Why bring his kids into this, willito? You are sinking.

      • You really should check before commenting, Don Don:

        Who is not having fun. Well, it doesn’t match a 20 foot, $2 birdie putt, but my Liberal daughters don’t want to hear my stuff so this is a more enjoyable outlet.

        https://judithcurry.com/2017/05/02/nyes-quadrant/#comment-848562

      • Don Monfort

        Oh, something he said intended to be humorous sometime in the past. But we can read and he didn’t say he can’t talk to his children about these subjects anymore. You made that up. And that he happened to mention his daughters at another time in a different context is a damn poor excuse for what you have done. Shame on you. You used to have more sense.

      • What’s laughable and what is over your head is that you tried to attribute certain findings to the Manhattan Institute while it was Obama’s Office of Tax Analysis and the Congressional Budget Office findings you were criticizing. You let political ideology get hung up in your skirt by blaming a think tank for what Obama’s own people and the CBO actually found. But, carry on, and continue to thrill us with your acumen about climate. I, for one, always learn a lot from you.

      • I love it when you talk like that, Kid.

        You may have missed the caveat of the paper you still can’t bother to cite:

        Views and opinions expressed are those of the authors and do not necessarily represent official Treasury positions or policy.

        Second, they’re not my criticisms, but the ones made by Saez & Piketty. If you followed the debate, you would have seen that both teams are reconciling their results as much as one can expect under the circumstances:

        Summing these various effects closes most of the gap between our pre-tax income series and their preferred pre-tax series. AS are planning to further revise their series. We hope they will consider adding fully pre-tax and pre-transfers series that match national income as closely as possible so that we can do a more systematic side by side comparison. We will accordingly update this appendix when their series are finalized.

        Try to argue that Emmanuel Saez doesn’t grok inequality as well as you do. I dare you.

        As for the Manhattan Institute remark, I’ll let you find the reference yourself. You’re so big and strong regarding questions of inequality that you should be able to find it in no time.

      • Poor wee willie is an unreconstructed socialist who believes in central economic planning by an AI overlord. This seems an idea popular with screw loose progressives.

        Piketty summarizes his theory of inequality with the formula “r > g”, where “r” is the rate of return on capital and “g” is the rate of growth in the economy. If economies grow at a rate of 1 to 1.5 percent per year while the average return on capital is 4 to 5 percent per year, inequality is bound to rise. It is a sort of energy balance of economies. Wealth itself is the solution to this mooted problem, and to much else, as investment increases growth and competes for returns. On top of this are progressive taxes, on incomes at least, and redistribution to various social ends.

        Reducing global poverty would seem to be the better focus –

        in well managed economies.

        https://watertechbyrie.com/2016/03/11/all-bubbles-burst-laws-of-economics-for-the-new-millennium/

        https://watertechbyrie.com/2015/06/08/the-new-un-sustainable-development-goals-are-they-suffering-from-relevance-deprivation-2/

        I await with bated breath poor wee willie’s next sophistry.
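The r > g arithmetic quoted above can be made concrete with a toy compounding sketch (the rates are the illustrative ones from the comment, not estimates of anything):

```python
# Toy illustration of "r > g": if capital compounds faster than the economy,
# the capital-to-income ratio grows without bound. The rates are the
# illustrative ones quoted in the comment, not empirical estimates.
r = 0.045   # return on capital, mid-range of the quoted 4-5%/yr
g = 0.0125  # economic growth, mid-range of the quoted 1-1.5%/yr

capital, income = 1.0, 1.0
for year in range(50):
    capital *= 1 + r
    income *= 1 + g

print(f"capital/income ratio after 50 years: {capital / income:.2f}")
```

Starting from equal values, the ratio roughly quintuples over 50 years at these rates, which is the mechanism Piketty’s formula summarizes.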

      • W

        Thinking back through all of your comments, including those here, an emergent property becomes obvious: an incoherence in verbiage and thought process. Only now do I understand. It’s that English is not your first language.

        I have my standards for compassion to my fellow man. I refuse to take advantage of the disadvantaged.

        Consequently……finis.

    • I’m doin’ my Happy Dance, Willard. Been such fun! You should join in!

  7. Andrew Dessler has been using the Dessler18 results to criticise energy budget ECS estimates, such as that in LC18. He tweeted:

    I see Americans are taking everything to Twitter, from government communications to scientific debates. Very inadequate in my opinion.

  8. Very nice work. Unfortunately, as shown by how Susan Crockford has been treated recently, the point is not to actually prove the work wrong from a scientific standpoint. The point is to create a nifty tweet that can be made into a nifty headline, so that any annoying data that contradicts the alarmism meme can be immediately ignored.

  9. Nic, it looks like Dessler’s claim is just wrong on its face, but it seems to me there is another criticism of their work, and it’s similar to your critique of their 500 hPa paper. There is little evidence that GCMs’ “internal variability” is skillful or even meaningful. There is a long history of climate scientists saying that it’s not meaningful, leading to the spaghetti plots in all the IPCC reports.

    Similarly for the pattern of warming argument. If there is a large discrepancy between GCMs output and the historical pattern of warming, that would seem to indicate lack of GCM skill. If this discrepancy is in the tropics, then it would point in the same direction as other negative results you point out such as Zhao on cloud microphysics models and ECS.

    Further, Dessler admitted yesterday that there are no good tools for disentangling forced response from internal variability. And that itself casts doubt on D18.

  10. I’ll take Susan Crockford and Judith Curry over Dessler any time, anywhere, any day!!!

  11. The same Dessler who concluded cloud feedback was positive when his plotted data showed a classic shotgun pattern and his own calculated r^2 was 0.02!

    • stevefitzpatrick

      In fairness to Dessler, he did say that it might take a very long time (a huge amount of data!) to accurately assess cloud feedback. I think he was mostly trying to show that Roy Spencer’s claims about cloud feedback were not adequately supported. That said, Dessler pretty much *always* claims ECS estimates below the GCMs are flat wrong. I think those claims are rubbish: the empirical estimates are far more credible than GCMs with lots of “parameterized values”. Dessler’s green sensibilities may be ‘clouding’ his judgement.

      • SF, perhaps, but that was not how Steve McIntyre and I interpreted Dessler 2010b at the time. I wrote up the example in some detail in the long concluding climate chapter of my 2012 ebook The Arts of Truth, including the NASA website page quoting him on his result. The entire book was actually about the ‘Arts of Untruth’—even the book title was an illustration, as the preface explained. The climate chapter contained many illustrations of each of the broad categories and subtypes explored using other topic areas like healthcare, education, energy, evolution (the eye, gestation), and public safety.

      • When he commented at SOD on cloud feedbacks he seemed supremely confident in his conclusions that they were positive. After all they came from GCMs and agreed with “simple theory.” It sounds to me as if Dessler is not very accomplished at constructing a consistent story. That’s a common pitfall of “communications strategies” that don’t focus on the truth.

      • verytallguy

        dpy,

        After all they came from GCMs

        Well, it’s a long thread, but his very first post there cites two papers which explicitly do *not* rely on GCMs.

        https://rd.springer.com/article/10.1007/s10712-017-9433-3

        http://www-k12.atmos.washington.edu/~dennis/Hartmann_Larson_2002GRL.pdf

        Which I guess scuppers your “communication strategy” that Dessler is all about GCMs.

        https://scienceofdoom.com/2017/12/24/clouds-and-water-vapor-part-eleven-ceppi-et-al-zelinka-et-al/#comment-123459

        Additionally, we can note that your campaign to keep climate blogs focused on the science rather than on personal invective includes, there, your immediate insult that Prof Dessler had chosen to “drop by for another drive-by comment”, and, again here, your insinuation of his lack of “focus on the truth”.

        Now, how about this: You drop the constant insults and focus on the science, and I’ll read your technical insights carefully and learn from your erudition.

        What do you say? Deal or no deal?


      • I don’t want to continue to rehash that thread. Your first reference is indeed a “simple theory” method, while the latter relies on detailed eddy-resolving local modeling. Thus, what I said about GCMs that agree with simple theory is an accurate characterization.

        The issue here is that Andrew Dessler made a number of derogatory but wholly wrong (and easily ascertained to be wrong) comments about LC18. Nic corrects them here. Dessler retreats to Twitter and makes some even more foolish comments indicating he doesn’t actually understand what LC18 did. That’s rather embarrassing, I would think. How would you explain this total inability to engage the science, combined with a seemingly strong need to discredit LC18?

      • verytallguy

        dpy,

        In the spirit of not rehashing, I’ll agree to disagree with your characterisation of Prof Dessler.

        In all seriousness, you may have something to add to the technical debate, but your tendency to, let’s say, over personalise makes those discussions nigh impossible.

        So I’d merely observe that if you stick to your own dictum of discussing science, you’ll make more of an impact.

        And that impact would be orders of magnitude higher if you followed Nic Lewis’ example of conducting it in the literature rather than the blogosphere.

      • VeryTall, I really don’t have a goal in mind in the climate debate, so being “effective” is not important to me in the way it would be to an activist. I am learning important science that strongly overlaps my scientific endeavors, which are for me vastly more important because they have a high impact in the real world and I’m already quite effective there and work with some of the best and smartest people around.

        One area of overlap between climate science and CFD is the whole issue of bias and the replication crisis. It’s vastly worse than people (particularly outsiders like yourself) realize. I have a lot more experience spotting bias than you do, and I believe Dessler displays the classic symptoms, with this episode being particularly telling. Nic appears not to exhibit those symptoms. He shows remarkable discipline in sticking to the technical issues. On issues where I have expert knowledge, what he says seems eminently correct. That gives him extra credibility to me.

      • verytallguy

        dpy,

        Your self-assessment of your skill at detecting bias shows admirable confidence.

        However, it remains possible that Nic is every bit as biased as the rest of us mere mortals.

        Some, even those with a reputation of challenging the status quo in climate science, seem to have that view:

        Nic Lewis appears to be arguing primarily on the basis that all work on climate sensitivity is wrong, except his own, and one other team who gets similar results. In reality, all research has limitations, uncertainties and assumptions built in. I certainly agree that estimates based primarily on energy balance considerations (as his are) are important and it’s a useful approach to take, but these estimates are not as unimpeachable or model-free as he claims. Rather, they are based on a highly simplified model that imperfectly represents the climate system.

        https://www.mwenb.nl/climate-sensitivity-and-transient-climate-response/#comment-921

        Have a read of the whole thread as part of your continuing quest for scientific knowledge and bias detecting skills.

      • VTG, what you are doing here is beneath you, and it’s beneath Annan too. Without assessing any of the detailed evidence, you assert that Nic is biased too (that we are all equally biased and irrational being a modern dogma of post-modern thinking). But hey, the consensus that science is riddled with false results and bias is not worth responding to.

        So far as I can see, LC18 has generated no valid criticisms. There has been a lot of vacuous mumbling at ATTP about loose and largely unquantifiable (thus unscientific) formulations concerning “patterns”, otherwise known as “colorful fluid dynamics.” There is a lot of discussion of internal variability and how models seem to be totally incapable of showing the same variability as the real world but might agree with it in the far long term for some unknown reason. This is just WUWT stuff.

        We have seen another energy balance analysis from Cawley and fellow travelers that contained a silly sign error that magically biased it to the high side. I’ve not seen a correction though. But that’s old work by now and uses outdated forcing estimates.

        Dessler has a twitter rant that is a giant muddle which Nic destroyed easily.

        Why doesn’t Dessler engage in an actual discussion with Nic?

        This discussion is reaching WUWT levels of misdirection and, from you, totally unscientific and nonspecific speculation.

        In the long run, however, Nic’s criticisms of specific results of climate science seem to be careful in their detail and he is certainly right in concluding that GCM’s do not offer scientifically valid evidence as to ECS, which I know from 40 years of experience to be right and as recent negative results have shown.

        When people don’t address issues that are critical to the debate you know they are biased or perhaps just confused or not fully honest.

      • As for Annan on your thread, VTG, this I think summarizes his position:

        “I do think Nic Lewis over-states the case for the so-called “observational estimates” in a number of ways. Clearly, even these estimates rely on models of the climate system, which are so simple and linear (and thus certainly imperfect) that they may not be recognised as such.

        Further issues arise with his methods, though in my opinion these are mostly issues of semantics and interpretation that do not substantially affect the numerical results. (For those who are interested in the details, his use of automatic approach based on Jeffreys prior has substantial problems at least in principle, though any reasonable subjective approach will generate similar answers in this case.) The claim that “observations alone” can ever be used to generate a useful probabilistic estimate is obviously seductive, but sadly incorrect. Thus, his results are not the peerless answer that he claims.

        Nevertheless, they are a useful indication of the value of the equilibrium sensitivity, and I would agree that these approaches tend to be the most reliable in that the underlying assumptions (and input data) are generally quite good.”

        Bottom line, Annan broadly agrees with Nic’s approach and says it tends to be the most reliable method for estimating sensitivity.

        However, in fairness to Nic, I didn’t find anything specific in Annan’s blog or comments that casts doubt on Nic’s work, but perhaps I missed it.

      • verytallguy

        Dpy,

        Quite a rant there, and I thought we were getting along so well!

        But given all the insinuations in your latest, I just leave you to your own advice to stick to the science.

        Good luck with that. I think you’re going to need it.

      • My condolences VTG on being unable to produce a single scientific point in a very long discussion thread. Next time, just save everyone the annoyance.

      • verytallguy

        More great technical content there dpy, pure gold.

        Remarkable stuff.

    • stevefitzpatrick

      Dessler: “The imprecision of the method means that with only one realization (the historical record), it is possible that you could get an answer that is far from true.”

      In other words, NO empirical estimate, no matter how carefully done, can ever refute the high sensitivity values prescribed by GCMs… historical data just doesn’t matter. The variability in multiple runs of a single model tells us about variability in the MODEL, not about variability in reality. This belief that GCMs are superior to measurements is lunacy, not science.

      • kenfritsch

        Dessler: “The imprecision of the method means that with only one realization (the historical record), it is possible that you could get an answer that is far from true.”

Simulations of many (say 100) runs of a climate model can produce a distribution of results for a given variable, such that any one run (a realization, like the earth’s) occurs with the probability indicated by the distribution. That single realization is the truth when it is the one that occurs in reality and is observed, regardless of its probability of occurring.
The critical assumption here is that the model produces a reasonably true distribution of the possible realizations of the climate. That is a large assumption for both the median/mean and the variance of the distribution. If the earth’s single-realization climate variable value lies in a low-probability range of that distribution, that would be good reason to doubt the validity of the climate model.
Other questions would arise if climate model results with many simulations were compared among several individual climate models to determine whether a statistically significant difference existed between models – and if so, how we would determine which model was closest to the truth.
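Ken’s point can be sketched in a few lines of Python. The ensemble below is synthetic stand-in data (an illustrative N(3.0, 0.4) distribution), not actual MPI-ESM1.1 output:

```python
from statistics import NormalDist

# Synthetic stand-in for a 100-member model ensemble of some climate metric
# (e.g. an energy-budget sensitivity estimate per run); illustrative values only.
nd = NormalDist(mu=3.0, sigma=0.4)
ensemble = [nd.inv_cdf((i + 0.5) / 100) for i in range(100)]

def percentile_of(value, sample):
    """Fraction of ensemble members at or below `value`."""
    return sum(1 for x in sample if x <= value) / len(sample)

# If the single observed realization falls far in the tails of the model's
# distribution, that is a reason to doubt the model, not the observation.
observed = 2.0
p = percentile_of(observed, ensemble)
suspect_model = p < 0.05 or p > 0.95
```

With these illustrative numbers the observed value sits in the bottom 1% of the ensemble, so the model, not the observation, would be suspect.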

Kenfritsch, what a wonderful comment. I totally agree, even though I’ve not formulated it as well as this.

      • stevefitzpatrick

        Ken,
        I agree completely. The issues are: 1) does the variability revealed by multiple runs of single models credibly include measured reality and 2) do the variabilities revealed by multiple runs of individual models credibly overlap with each other? If not, then they are statistically “different”, and so can’t all be right. Any single model with a “distribution” of trends that doesn’t credibly include measured reality is likely wrong.

The fly in this ointment is the inevitable tuning of models to match historical data. The only clean test is with projections of future warming. But the climate-concerned are not willing to wait a few decades for that kind of test. They may have little choice.

    • dpy,

      After all they came from GCMs

      Well, it’s a long thread, but his very first post there cites two papers which explicitly do *not* rely on GCMs.

      https://rd.springer.com/article/10.1007/s10712-017-9433-3

      http://www-k12.atmos.washington.edu/~dennis/Hartmann_Larson_2002GRL.pdf

      Which I guess scuppers your “communication strategy” that Dessler is all about GCMs.

      https://scienceofdoom.com/2017/12/24/clouds-and-water-vapor-part-eleven-ceppi-et-al-zelinka-et-al/#comment-123459

Additionally, we can note that your campaign to keep climate blogs on the science and free of personal invective includes, there, your immediate insult that Prof Dessler has chosen to “drop by for another drive-by comment”, and again here your insinuation of his lack of “focus on the truth”.

      Now, how about this: You drop the constant insults and focus on the science, and I’ll read your technical insights carefully and learn from your erudition.

      What do you say? Deal or no deal?

      • Previous attempt is stuck in moderation.

        VeryTall, I really don’t have a goal in mind in the climate debate, so being “effective” is not important to me in the way it would be to an activist. I am learning important science that strongly overlaps my scientific endeavors, which are for me vastly more important because they have a high impact in the real world and I’m already quite effective there and work with some of the best and smartest people around.

One area of overlap between climate science and CFD is the whole issue of bias and the replication crisis. It’s vastly worse than people (particularly outsiders like yourself) realize. Nic’s work is an obvious proof of this. I have a lot more experience spotting bias than you do and I believe Dessler displays the classic symptoms, with this episode being particularly telling. Nic appears to not exhibit those symptoms. He shows remarkable discipline in sticking to the technical issues. On issues where I have expert knowledge, what he says seems eminently correct. That gives him extra credibility to me.

      • verytallguy

        “Previous attempt is stuck in moderation.”

        Turns out not I guess, as I’ve replied to it. Anyway, it was so good I enjoyed reading it twice ;-)

  12. Thanks for the information.

    All this leads to?

My uninformed conclusion is that Climate Science is in its infancy: it talks a lot but is not quite ready to walk.

The way to test Dessler’s idea is by varying endpoints, because they introduce uncertainties related to the assumption of equal natural variability. For example, consider this:
    http://woodfortrees.org/plot/best/mean:12/plot/esrl-co2/scale:0.01/offset:-3.25/plot/hadcrut4gl/mean:12/offset:0.05/plot/gistemp/mean:12
LC15 and LC18 take a 0.5-degree rise since c.1940, when you could arguably equally take 0.9 degrees since c.1950. This is a massive difference arising just from endpoints that cannot be distinguished other than through natural variations, though 1940 does look rather more anomalous (possibly a solar spike) in the long-term record than 1950. There were no volcanoes until the early 1960s, so from that perspective these periods are equal. The sun was stronger then than now, which if anything means the warming may be underestimated.

    • stevefitzpatrick

      Super JimD… yet another in your endless stream of simplistic Wood-for-Trees conclusions. You should try to publish these analyses, if only for the humor value.

I used more data to show the context of the endpoints in the temperature record than LC15 or LC18 did. Their choice of around 1940 looks suspect as being representative of anything.

      • I say row Jimmy row, gonna get there, I don’t know.
        Seems a common way to go, get out and row, row, row, row, row.

        ~ garcia, hunter

    • Jim D, 0.9C since 1950?? You cannot get there even if you stop at the peak of ENSO 1998, which would be a gross cherry pick ignoring the subsequent now 20 years.

      • The green curve traces it out for you. It does not rely on the ENSO peaks of 1998 or 2016.

Using the methods of Foster and Rahmstorf 2011, I get 0.76 to 0.88 since 1940, depending on which global metric you use; some uncertainty of course, but no way is it 0.5.

        More than a third of the warming has been since 1998 using those methods.
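For readers unfamiliar with the Foster & Rahmstorf approach: it regresses exogenous influences (ENSO, volcanic aerosols, solar) out of the temperature record before estimating the trend. A toy illustration with synthetic data and a single made-up ENSO-like index (not F&R’s actual implementation, which uses MEI, volcanic AOD and TSI with lags):

```python
import math

def simple_ols(x, y):
    """Slope and intercept of y on x by ordinary least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return b, my - b * mx

years = list(range(60))
enso = [math.sin(0.9 * t) for t in years]                   # stand-in ENSO index
temp = [0.015 * t + 0.1 * e for t, e in zip(years, enso)]   # trend + ENSO signal

# Frisch-Waugh: residualize time against the ENSO index, then the slope of
# temperature on that residual is the trend controlling for ENSO.
b, a = simple_ols(enso, years)
years_res = [t - (b * e + a) for t, e in zip(years, enso)]
trend = simple_ols(years_res, temp)[0]   # recovers the built-in 0.015 K/yr
```

Because the synthetic series is an exact linear combination of trend and index, the adjusted trend comes back exactly; with real data the adjustment only reduces, not removes, the noise.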

    • Nic Lewis addressed your concern in the previous blog. Basically your calculation is unsound. Taking a least squares fit or the difference between two endpoints will not work as the volcanic activity needs to be accounted for. He even went so far as to give you the real answer of ECS = 1.3 for the period of your choice. You are simply repeating stuff that has been addressed.

      • It wasn’t a least squares fit. It was just taking 0.9 C as the temperature rise from the 1950’s, and you can see that it fits the whole period from the graph, including the acceleration in trend that is ongoing at the end. LC15’s TCR has only half this gradient and would not be a good explanation of the warming since 1950, so I am not sure how they intend that should be used. They didn’t say.

      • No he specifically addressed your point. Whether you take a least squares fit or the endpoints of your choosing you still have to account for volcanic activity over the period. When you take other considerations into account the ECS calculated will be 1.3 according to Nic Lewis.

With a baseline in the 1950s there was no major volcanic activity in or just prior to that period. If there was volcanic activity after that, it would have reduced the warming rate but not affected the starting temperature. The later baselines Lewis chose are sufficiently after that, especially Pinatubo, that he hopes this did not happen; otherwise he would have an underestimate of sensitivity. What matters is that both endpoint periods for ΔT are unaffected by volcanic activity. They don’t get ΔT by regression, and it would be wrong to with intermediate volcanoes.

      • JimD:
        “What matters is that both endpoint periods for dT are unaffected by volcanic activity.”

        Nope.

        An example experiment:
        A pot of water is placed outside, in the sun, and covered with glass.
        The temperature is carefully recorded for an hour. The temperature is then raised by placing a burner under the pot for a couple minutes. The burner is turned off, and the temperature is carefully recorded again. The forcings for the endpoints could be chosen to be the same, but the intermediate forcings are likely the ones that matter, and the ones that caused most of the temperature change. Yes, the sun can warm the water due to the greenhouse effect of light traveling through the glass, however a burner is generally much more effective. You can’t ignore what happened in-between or legitimately say that what happened in-between was caused by the sun.

        As far as I can tell, what Nic is doing is to take the intermediate forcings into account. Again, as far as I can tell, you are not, but I could be mistaken, please do show me how you are taking the intermediate forcings into account so we can all see how that is done.
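For readers following along, the energy-budget calculation the thread keeps arguing about has this general form (a sketch of the standard formula; the numbers are round illustrative values, not those published in LC18):

```python
F_2XCO2 = 3.7   # forcing from doubled CO2, W/m^2

def energy_budget_ecs(dT, dF, dN):
    """Effective ECS from changes between two endpoint periods.
    dT : surface temperature change between base and final periods (K)
    dF : effective radiative forcing change (W/m^2) -- this is where the
         intermediate and endpoint forcings, including volcanic, enter
    dN : change in top-of-atmosphere radiative imbalance (W/m^2)
    """
    return F_2XCO2 * dT / (dF - dN)

def energy_budget_tcr(dT, dF):
    """Transient response: same form, but without the heat-uptake term."""
    return F_2XCO2 * dT / dF

ecs = energy_budget_ecs(dT=0.8, dF=2.5, dN=0.6)   # ~1.56 K with these inputs
```

The point JimD is being pressed on is visible in the signature: dF is the change in total forcing over the whole period, so intermediate forcings are accounted for even though ΔT is taken between endpoints.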

    • The largest thirty year trend is 1.8C per century:

      TCR for the satellite era is about the same value:

      There is much model speculation, but no observational basis for anything greater.

      1.8C is the AR4 low end of 2100 projections, so observed warming is at the low end.

      • Turns out the 30-year land trend is 3 C per century, and the ocean has some catching up to do. The Arctic is even faster. The global trend may accelerate or not depending on if the forcing rate continues to accelerate which depends on policies going forwards.

      • “forcing rate continues to accelerate”

Though this chart needs an update (which I will do when 2017’s RF numbers from NOAA are out late this spring), the forcing rate peaked around 1989 and has decelerated since then:

      • There are scenarios where the forcing does not accelerate, but these would be regarded as long-term mitigation or stabilization scenarios reaching a total forcing of less than 4.5 W/m2 by 2100. They require a significantly reduced growth rate of per capita emissions globally (see RCP4.5). It would be a sustained effort not to accelerate.

      • “There are scenarios where the forcing does not accelerate”
        Including, most importantly, observed reality.

      • TE
        Thanks for continuing to try and push the conversation back to observations and reality.
        Scott

For a proper investigation of the TCR over the span 1951–2015 with a slightly different method, see https://judithcurry.com/2016/10/26/taminos-adjusted-temperature-records-and-the-tcr/ . It gives almost the same result for TCR when looking at the C/W, HadCRUT4 and Berkeley records.

      • Compared to your reference there, the forcing you used seems to have excluded the negative aerosol effect almost entirely.

No, I used the forcing data from AR5 (I linked the data source), and you can see that the aerosol effect is (of course) included. You should read more carefully!

You had a link to Gregory and Forster, whose forcing would give you an entirely different result unless you ignored their aerosols, which your numbers appeared to do. What I call effective TCR uses just the CO2 part of the forcing, which is more certain than the net, and that is where I get 2.4 C per doubling. Then you multiply by the fraction of the net that is CO2, which is less certain but normally about 80%, and you get ~2 C per doubling. I distinguish TCR from effective TCR, but the latter is valuable because LC15 say that the net is highly correlated with just the CO2 part. So for prediction purposes you only need the predicted CO2 growth, as other GHGs and aerosols follow.

I had a link (follow it!) to AR5 for the forcings. And the rest (your “effective TCR”) is complete nonsense. I give up, you can’t be helped. Write an article with sources and links; I’m curious!

    • http://woodfortrees.org/plot/wti/from:1978/to:2018/trend/plot/hadcrut4gl/from:1978/to:2018/trend/plot/uah6/from:1978/to:2018/trend/plot/rss/from:1978/to:2018/trend/plot/best/from:1978/to:2018/trend

Jim D, I don’t know if this is properly linked, but the WFT average trend from 1978 to 2018 (the warmest period) is 0.7, i.e. 1.75 per century. If Lewis’s value is 1.3 per hundred years, it seems to me that a difference of just 0.3 in the hottest period would not be unreasonable, and therefore his evaluation of Dessler18 must be fairly accurate?

The estimated 5-95% uncertainty range for λ derived from the MPI-ESM1.1 historical simulations is 1.17−1.63, with a median of 1.43 and an almost identical mean (units for λ are Wm−2K−1). The distribution has an estimated standard deviation of 0.137 and shows no evidence of non-normality.[8] The fractional standard deviation of the λ estimates is 0.096× the median estimate.[9]

      Correct me if I’m wrong
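The statistics quoted from the head post can be roughly cross-checked in a couple of lines (note the 1.17-1.63 range in the post is the empirical one, so a normal approximation only approximately reproduces it):

```python
# Quoted feedback-parameter statistics from the head post:
# median 1.43, standard deviation 0.137 (units W m^-2 K^-1).
median_lambda = 1.43
sd_lambda = 0.137

# Fractional standard deviation, as quoted (~0.096 of the median)
frac_sd = sd_lambda / median_lambda

# Under a normal approximation the 5-95% range is median +/- 1.645*sd,
# giving roughly 1.20-1.66 vs the empirical 1.17-1.63.
lo = median_lambda - 1.645 * sd_lambda
hi = median_lambda + 1.645 * sd_lambda
```

So the quoted fractional standard deviation and range are internally consistent with one another.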

I guess I was trying to show (or was headed in that direction) that Dessler used models to do his hundred runs. With all the different temperature charts, a REAL hundred-run computation could be done using HadCRUT, GISTEMP, satellites, etc. Lewis seems to be pointing out that Dessler’s margin (+/-) is too small to make a difference. I guess this would prove or disprove it and be reality based.

  14. Great job Nic. Total evisceration in a most gentlemanly manner.

    It’s pretty clear why the “settled science minions” refuse to debate or have a Red vs Blue Team assessment. Their lunch would be eaten with great gusto.

    • Unfortunately, the Trump Administration put a stop to the red/blue assessment. Why I don’t know.

      • The military people there know that global warming is a problem, and they held sway in that decision. It may have been General Kelly that nixed it.

      • The military people, being good soldiers, buy the climate change view promoted by their government. What do you expect them to do?

        But why would they kill a red/blue team exercise? Are they worried that they’ll lose the argument? People who are confident in their beliefs are (normally) not afraid to debate them.

The military have to make climate change contingency plans, e.g. for the opening up of the Arctic, or for naval bases. Without those plans they know they would be at a strategic disadvantage to those countries that account for it. They have already figured out the odds and inaction isn’t an option.

“They have already figured out the odds…” Sure they have, using climate models as a guide. And they’ve made contingency plans for bases. That’s certainly justified, and if they’re smart they’ll wait a while to see how the facts evolve.

      • They use their own eyes too, to see the melting Arctic and rising seas. It’s fairly clear what is happening, and why these really are things you have to plan for.

      • Don Monfort

        Don’t be silly, yimmy. The military did what Obama told them to do. And if they didn’t like it, Obama got rid of them.

      • jimd

        ‘They use their own eyes too, to see the melting Arctic and rising seas. It’s fairly clear what is happening, and why these really are things you have to plan for.’

Well, seas have been rising since around 1750, when the glaciers were observed to have resumed melting after the LIA hiatus.

As for the melting Arctic ice, whalers from my home town observed this in the early 1800s and alerted the Royal Society, who sent an expedition to observe the melting Arctic ice at close quarters.

They didn’t have the ships to do so earlier, as the Royal Navy had been involved in a little local military difficulty when the melting ice was first observed in the 1780s.

There is a memorial to this first scientific expedition in Upton Church in my home town. The Hudson Bay Co. had also observed the melting Arctic ice in the 1700s.

        Your grandparents would have seen Pathe News reel film of the melting arctic ice in the 1918-1940 period when they went to the cinema or read their papers.

        We won’t even go back to the Vikings.

        tonyb

      • Don Monfort

        Oh, yimmy.

        “was reportedly halted by Gen. John Kelly, the White House chief of staff.”

        We can rely on that story. I will help you, yimmy. If Kelly was the one who halted it, so what? We don’t need a red team-blue team boondoggle. It wouldn’t change anybody’s mind. Elections have consequences. Trump is running the show now. Paris is dead. EPA won’t be playing the sue and settle game. Green subsidies, pfffft. The military won’t be spending money on green fuels and green bullets. There’s a new Sheriff in town. Yimmy is sad.

      • You’ve touched on one of the bigger disillusionments. How are we to dominate with battery powered fighters and A-10s? In certain cases, our military is told to kill. It should then kill 5 times over without worrying about butterflies. When it is time to kill, the butterflies may or may not survive.

      • tonyb, see how far you get by telling the Navy to ignore Arctic changes and sea-level rise. They have serious strategic planning to do and stories of whalers won’t get much attention relative to current images and measurements of changing rates.

      • On the contrary, Don, I find the White House comedy/reality show very entertaining. Bringing Giuliani in adds another character to the ongoing soap opera. Should be fun to watch.

      • Jim D:

        We should get working on three new capable ice breakers next week. So we can do that or put solar panels on ½ our bases. The ice breakers can even carry climate scientists to Antarctica. And help protect Canada’s and our own energy operations in the Arctic from pirates and hooligans.

        Sea level rise is a fact. The military can either try to save everyone’s sea bases or ours. Here’s the question, Why were you trying to save the world when your sea base was threatened by sea level rise?

      • Ragnaar, if your urgency on icebreakers is in response to the Russian expansion of its Arctic capabilities as the ice melts, this is the point I am making about strategy. The military do notice these trends.

      • Don Monfort

        So yimmy, the entertainment value of the WH outweighs the death of the Paris BS non-binding agreement and all the stoopid climate and energy policies of your hero Obama. Interesting. You should truly be happy. It will be going on for Seven More Years!

      • Are you not entertained by it all, Don? Sit back and enjoy it while you can. Don’t mind all the indictments around him. He’s still muddling through for now. Do you agree with Giuliani that Jared is disposable? Or do you agree with Trump that Giuliani just hasn’t got the cover storyline straight yet?

      • Glacial melt seems to have accelerated slightly over recent decades. But if you look at the past 200 years, melt is relatively slow. Extent retreat has slowed considerably. There was a lot more glacial mass, and it melted much faster before GHGs became a big factor.

I shall make only a brief comment here on the Dessler18 proposal of replacing global surface temperature with tropical mid-tropospheric (500 hPa pressure) temperature, as being a better determinant of changes in outgoing energy. It appears to eliminate a minor source of uncertainty at the expense of introducing worse problems. Indeed, the authors admit, in the second paper,[20] that a key ratio they use to convert 500-hPa tropical temperature interannual feedback strength into long term forced-response feedback strength “comes from climate model simulations; we have no way to observationally validate it, nor any theory to guide us”.

    Replace observations with computer simulations?
    Is there any place for observational data and the real world, in Desslerworld?
    How many other “observational data” have simulations hidden in them?

“Given that we only have a single ensemble of reality, one should recognize that estimates of ECS derived from the historical record may not be a good estimate of our climate system’s true value.”

It’s hard for me to be against reality. I have one reality, as we all do. But there are about a hundred model-derived ‘kind of’ realities. And the average, or something near it, of the hundred kind-of realities is a better representation of something. Because what was is not as good as a hundred what-could-bes, if we first spent a whole bunch of money studying the could-bes.

    As I may have mentioned before, grounding something in what we the non-scientists understand is often a good marketing technique. The temperature record has been recorded, and they won. But that doesn’t mean you get to use it, because there’s something better than that. Scientific possibilities.

Scientific possibilities may be better. If they are, you have to convince people of that fact. You have to get them to use them. A smartphone product makes no money if people don’t use it. Scientific possibilities have sold, but their success has been limited: Al Gore’s presentations, numerous videos of what sea level rise looks like in the future.

    Some people do use these scientific possibilities but it may look like this. We must have more solar, so this city will buy more solar. It’s used to promote renewables that don’t look to be very effective.

    Is LC18 any more effective? If Lewis or Curry get a call from Fox News, then I’d say yes. Not that it’s Fox News, but that that would have reach and impact in D.C. To contrast it, envision Dessler on NPR. Is there even a demand for that? What would be the story?

    • Roger Knights

Dessler has been interviewed on NPR at least once, according to Wikipedia. I googled for “Dessler on NPR” and got several links that might indicate more appearances. One of them indicated he’d been interviewed on PBS.
      https://www.google.com/search?client=safari&rls=en&q=Dessler+on+NPR&ie=UTF-8&oe=UTF-8

      • I’ll write the lead in or whatever it’s called.

        New paper comes in on the low range of climate sensitivity.

        My point was supposed to be, what’s new? Reporting on the continuation of the status quo is boring. Pulling the distribution to the low end is a story. It’s easy to find a hundred people to speak of global warming and why we need action like more renewables. LC18 is a story because it challenges and I am still going with, has value for the conservatives. And its second level value is its moderate position (an ability to move conservatives a bit left).

  17. Should we really care so much, those of us who live in a climate system that is continually in the midst of an unending process of carrying out a world-wide temperature balancing act, constrained by the amount of solar energy in, versus energy radiated out into the vast emptiness of space, when the equilibrium temperature realized at any place, at any point in time in the process, has more to do with the Urban Heat Island effect on our relatively small part of the world than by any imagined effect caused by increases of parts per million in atmospheric CO2?

  18. Hi Nic,

    Out of curiosity, how do you think your Energy Balance approach compares to the multicointegration approach of this paper?

    http://wwwuser.gwdg.de/~cege/Diskussionspapiere/DP336.pdf

I find such cointegration studies rather opaque. Their results look wrong to me, but without examining the study in detail and running its code I can’t tell why.

      • niclewis: I find such cointegration studies rather opaque.

        Yeh. It’s what you might call an “advanced topic” in nonstationary nonlinear multivariate time series. It’s really hard to tell, in my opinion, whether their result is reliable, given a rather large number of assumptions entailed that are hard to evaluate. It ought to be more applicable than the more “ordinary” linear stationary vector autoregressive time series models. Some references for those interested: “The Cointegrated VAR Model: Methodology and Applications” by Katarina Juselius, Oxford U Press, 2006; and “Inference in Cointegrated VAR Models: Bootstrap Methods and Applications”, by Alessandra Canepa, Lambert Academic Publishing, 2009.

      • Hi Nic,

        I suggest you read the paper. It is very interesting.

        The multicointegration approach has 2 advantages over the energy balance approach.

1. It uses all of the available data, whereas your energy balance approach throws out all the data between the initial period and the end period. All that useful data can help better constrain ECS.

2. It is essentially a 2-box model, whereas your energy balance approach is a 1-box model. A 2-box model is a better approximation to reality than a 1-box model, and using it means that you might be able to avoid obtaining the ECSeff/ECS ratio from climate models.
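For concreteness, a minimal sketch of the kind of 2-box (two-layer) energy balance model being contrasted here with the 1-box approach. All parameter values are illustrative assumptions, not fitted values:

```python
F_2XCO2 = 3.7  # forcing from doubled CO2, W/m^2

def two_box_step(T_s, T_d, F, lam=1.3, gamma=0.7, C_s=8.0, C_d=100.0, dt=1.0):
    """Advance surface (T_s) and deep-ocean (T_d) temperatures one year.
    lam   : climate feedback parameter, W m^-2 K^-1 (illustrative)
    gamma : surface/deep-ocean heat exchange coefficient, W m^-2 K^-1
    C_s, C_d : layer heat capacities, W yr m^-2 K^-1
    """
    dT_s = (F - lam * T_s - gamma * (T_s - T_d)) / C_s * dt
    dT_d = gamma * (T_s - T_d) / C_d * dt
    return T_s + dT_s, T_d + dT_d

# Under constant doubled-CO2 forcing, T_s relaxes toward F/lam (the ECS of
# this toy model), with the deep ocean delaying the approach.
T_s, T_d = 0.0, 0.0
for _ in range(3000):
    T_s, T_d = two_box_step(T_s, T_d, F=F_2XCO2)
```

The delayed deep-ocean box is what makes the effective sensitivity inferred over a short window differ from the equilibrium value, which is the ECSeff/ECS distinction under discussion.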

-1=e^iπ, I’d be interested in your thinking re your last comment. It seems to me ECS is a metric from GCMs, or from models sufficiently complex to do the experiment. If so, conversion of estimates from simpler models will inevitably have to go through a GCM or the like.

      • -1=e^ipi
“A 2-box model is a better approximation to reality than a 1-box model, and using it means that you might be able to avoid obtaining the ECSeff/ECS ratio from climate models.”
There simply isn’t enough information in the historical record to do so, sadly, given the noise level.
I’m not sure that including all years helps that much, as volcanic forcing efficacy varies somewhat depending on what latitude the eruption is at. And having to estimate several parameters rather than one is a major disadvantage and source of error IMO.

    • Kyle Armour has a recent paper on inconstant feedbacks (abstract available). His point is that the global system cannot be represented by a monolithic surface responding to the forcing with a single feedback parameter over all times. The earth is more complex than that and some aspects of the feedback are delayed.
      https://www.nature.com/articles/nclimate3278

      • We show in LC18 that Kyle Armour’s estimate of the excess in CMIP5 models of ECS over ECS_eff is greatly (26% vs 9.5%) biased high by various bad choices. Read LC18 section 7f.

      • I don’t know about CMIP5, but in reality the oceans are currently warming at only 80% of the global average. I think this lag is larger than in the CMIP5 models (also possibly responsible for the weaker hotspot), and has therefore more consequence for the nonlinearity of the response. The simple monolithic model assumes that the water vapor response is as fast as the mean global temperature response when it is actually slower at first.

    • -1=e^iπ

      2. It is essentially a 2 box model,

      How do you get that? What do you mean by “essentially”?

With luck, here is a quote (re-set here, as the symbols garbled on pasting):

Hence the I(2) model is characterized by

r : β′x_t − δβ₂′Δx_t ~ I(0)   (6)
s₁ : β₁′x_t ~ I(1)   (7)
s₂ : β₂′x_t ~ I(2)   (8)

where the r polynomially cointegrating relations usually need Δx_t to cointegrate to I(0), with multicointegration parameter δ defined in terms of ᾱ and β̄₂.

      Well, that was equations 6 – 8 on p 13. It is exactly a vector autoregressive model with a polynomial relationship between the observed variables at time t and some of the observed variables at earlier times (instead of the restrictive linear relationship in ordinary vector autoregressive models).

      I do have to say thank you for the link to the freely downloadable paper.

      • reading more carefully now, here is what I find: Therefore, a multicointegration model can incorporate an ocean that heats or cools when the atmosphere is out of long-run equilibrium, and in which the ocean also reciprocally governs atmospheric temperature. This mechanism is identical to that in simple two-layer physical EBMs.

        That’s close enough to “essentially a two box model”, though there is more to a model than the list of mechanisms.

        The paper is good, imo, and repays careful reading.

Dessler has a twitter response and, as with most things on twitter, it’s nothing but broad cavils without any specifics. Basically: Nic’s method is complex and I (Dessler) don’t really understand it, but he must have buried something in his adjustments because I know ECS can’t be that low.

    • The net forcing change in LC18 is 50-60% larger than for CO2 itself rather than the more normally estimated 10-20%. This has the effect of depressing the CO2 sensitivity for a given temperature change. His endpoints are a bit suspect too on the side of reducing the temperature change. Basically Lewis is saying in numbers rather than words that you don’t need much CO2 sensitivity to get all this warming because of all the added methane and other GHGs contributing with less mitigating effect of aerosols than others would use.

      • I presume that these are added in ‘quadrature’.

There are of course limits of observational precision in some estimates. Aerosol forcing may not be additive but may depend on the mixing ratio.

“Black carbon is generated by fossil-fuel combustion and biomass burning. Black-carbon aerosols absorb solar radiation, and are probably a major source of global warming1,2. However, the extent of black-carbon-induced warming is dependent on the concentration of sulphate and organic aerosols—which reflect solar radiation and cool the surface—and the origin of the black carbon3,4. Here we examined the impact of black-carbon-to-sulphate ratios on net warming in China, using surface and aircraft measurements of aerosol plumes from Beijing, Shanghai and the Yellow Sea. The Beijing plumes had the highest ratio of black carbon to sulphate, and exerted a strong positive influence on the net warming. Compiling all the data, we show that solar-absorption efficiency was positively correlated with the ratio of black carbon to sulphate. Furthermore, we show that fossil-fuel-dominated black-carbon plumes were approximately 100% more efficient warming agents than biomass-burning-dominated plumes. We suggest that climate-change-mitigation policies should aim at reducing fossil-fuel black-carbon emissions, together with the atmospheric ratio of black carbon to sulphate.” https://pdfs.semanticscholar.org/7988/80fa0bc2c0435355c018d6fb50f369a6f21f.pdf

        But you have to give it to Jimmy. He invents a narrative and obstinately repeats it endlessly.

      • Lovejoy also gets an effective TCR around 2.3 C per doubling using a longer record.

Clearly CO2 forcing is the dominant driver and remains highly correlated with temperature over the last 200 years.

It clearly posits that all 20th-century warming was anthropogenic. That may not be the case.

        Or perhaps over a shorter period?

I haven’t read AR5. I stopped at AR4, as they were still wondering aloud whether satellite-measured intrinsic variability was ‘real’. Do they not mention intrinsic variability at all? Or are they still wondering? Science has of course moved on – but not Jimmy.

      • No, you can see that there are deviations as large as 0.1 C around that line on Lovejoy’s graph. That’s the natural variability in full effect. Not the leading term, but it is there.

      • Deviations are much greater than that – and where they sum to is anyone’s guess. It is not white noise – it is Hurst effects. Grow up Jimmy.

      • stevefitzpatrick

        Less mitigating effects than JimD would use is more to the point. You compare CO2 to temperature, ignoring all the other estimates of identified forcings, then draw erroneous conclusions about the response to CO2. A semi-smart 10 year old could see the problems with your “analyses”. But even though many people have pointed out your errors, you continue to post the same rubbish.

      • If you apply Lewis’s TCR to the time since 1950, you get half the actual warming that occurred. When using his number you also need to account for his rather high beginning endpoint and that CO2 is only about 60% of his assumed anthropogenic warming factor. Taking both these into account you can recover the actual 0.9 C of warming since 1950. So he is not wrong, but his TCR is only part of his anthropogenic effect, and the other part can’t be neglected when using it for projections. Using his CO2 number alone is going to severely underestimate the warming in emission scenarios.

      • stevefitzpatrick

        No JimD, when you look at actual historical numbers and AR5 best forcing estimates for 1950 and 2011 (http://woodfortrees.org/plot/hadcrut4gl/from:1948/to:2013/plot/gistemp/from:1948/to:2013), you see about a 0.6 C increase in temperature for an increase in forcing of ~1.72 watts per m^2. That would suggest a TCR of ~0.6 * (5.35 ln(2))/1.72 = 1.29 C, very much in line with Nic & Judith’s estimate. But this is too short a period to make the most reliable estimate, which is why most empirical estimates use much longer periods.
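        The arithmetic in that estimate can be checked directly; a minimal sketch, using only the figures quoted in the comment above (0.6 C warming, ~1.72 W/m^2 forcing change, and the standard 5.35 ln(2) ≈ 3.71 W/m^2 forcing per CO2 doubling):

```python
import math

# Simple energy-budget TCR estimate: TCR = dT * F_2x / dF
dT = 0.6                   # temperature increase 1950-2011, K (commenter's figure)
dF = 1.72                  # AR5 total forcing increase over the same period, W/m^2
F_2x = 5.35 * math.log(2)  # canonical forcing from a doubling of CO2, ~3.71 W/m^2

tcr = dT * F_2x / dF
print(round(tcr, 2))  # -> 1.29
```

        This reproduces the ~1.29 C figure; note that the result scales linearly with the assumed forcing change, which is why forcing uncertainty dominates such estimates.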

      • Arguably the best fit temperature rise since 1950 is 0.9 C which increases the TCR by 50% and is more in line with the GCMs.
        http://woodfortrees.org/plot/gistemp/from:1950/mean:12/plot/esrl-co2/scale:0.01/offset:-3.2

      • stevefitzpatrick

        JimD,
        It is like you didn’t read what I wrote at all. Read harder. The IPCC AR5 forcing increase was for 1950 to 2011, not 2018. You have to look at both temperature increase and increase in TOTAL forcing (not increase in CO2), and over the SAME period. The shorter the period, the more the influence of short term variability, and the greater the uncertainty in the estimate. Your apparent monomania with CO2 is becoming weird.

      • I showed you how since 1950 it is 0.9 C of warming in both BEST and GISTEMP. Given that and the CO2-only forcing change, the effective TCR as I have defined it is in fact 2.4 C per doubling. Lovejoy gets a similar number (2.33) starting from 1750 and ending in 2012.
        http://www.physics.mcgill.ca/~gang/Pause.htm

      • stevefitzpatrick

        Nobody cares how JimD defines TCR. It has a clear definition already: the warming in response to a linear increase in forcing of 3.7 watts per m^2 over 70 years. You can only calculate a very high TCR by adopting a bizarre non-standard definition of “TCR”, or by ignoring best estimates of historical forcing. Neither approach is convincing.

      • As I also explained, Lewis has CO2 at only 60-70% of his net anthropogenic effect and that is the only reason his value seems low. For policy you need the net, like my number gives you directly.

      • Don Monfort

        Our yimmy is the King of TCR. The crown weighs heavily on his little head.

      • Don Monfort

        Can someone help me find a photo of a little guy with a big crown slipping down over his eyes?

      • stevefitzpatrick

        JimD,
        You are making zero sense, and I am beginning to think you simply don’t understand the basics of an energy balance. Nic and Judith are accounting for everything which can reasonably be accounted for in their energy balance estimates. Even dubious critiques like Dessler offers (“We are just lucky the Earth hasn’t responded like it ‘normally’ would if we had 100 identical Earths to examine.”) are at least tenuously connected to rational thought. What you are doing (completely ignoring all forcing except CO2), is just weird and technically wrong.

      • You are just not understanding the implications where Lewis says only 60-70% of the anthropogenic effect is CO2. Perhaps you (a) understand, or (b) agree, with that and perhaps not. I can’t tell from what you are saying. If only 60-70% of the warming so far has been from the CO2 part, there is a proportionate (highly correlated according to Lewis) addition due to the other GHGs minus very little subtraction due to aerosols, and the ECS due to CO2 should likewise account for this for the net response. Not sure how many other ways I have to say this before you see the meaning. It’s about Net response to Net forcing.

      • stevefitzpatrick

        JimD,
        You are making zero sense. Look at the graphic Robert I. Ellison (May 1, 2018 at 3:13 am) posted. It is from the IPCC AR5. Is there some part of the lower panel (forcing in 1950, 1980, and 2011) which is unclear to you? I showed you the resulting TCR estimate using those numbers for those dates. Carrying on about how LC18 is overestimating forcing is quite irrelevant; you don’t have to believe LC18 (though they support their estimates with many published papers), you just have to believe the IPCC best estimates of forcing. I will spend no more of my time or yours on this silly discussion, because either you are not capable of understanding a simple heat balance or not willing to accept reality. Either way, it is a waste of time.

      • LC18 are not in agreement with AR5 net forcing, so you have to decide which one you want to believe. I go with 2.4 C effective TCR per doubling from observations that span 1950 to current (or 1750-2012 if you use Lovejoy’s same computation). This gives you a number that bypasses uncertainties in net forcing by not even using that in favor of the clearly dominant CO2 component that is more certain. I go with CO2 forcing versus warming because that is the bottom line given the proportionalities in effect.

      • Steve, you are right. There is no use trying to educate Jim D.

      • Don Monfort

        He has only been doing this foolishness here for about a decade. Taking him seriously has always been a waste of time.

      • stevefitzpatrick

        Don Monfort,
        You are right: JimD is either very dumb or does not engage in good faith. I will never waste time on his rubbish again.

      • I will point out again that the big difference between Lewis and AR5 is that AR5 has CO2 at over 80% of the net effect while Lewis has it at less than 70%. This is what leads Lewis to his rather small ECS for the CO2 part alone because they both have to account for the same warming with anthropogenic effects alone. This means that to get the Net effect from Lewis’s number you have to multiply his CO2 effect by 1.5. This is not hard to understand.

      • Jim D: As I also explained, Lewis has CO2 at only 60-70% of his net anthropogenic effect and that is the only reason his value seems low. For policy you need the net, like my number gives you directly.

        for CO2 related policy and projections of the effects of reducing CO2, you need the CO2 effect.

      • In addition you need its fraction of the total effect, especially as the total is highly proportional to CO2’s effect. If you want to limit the CO2 effect to 2 C, you end up with 3 C unless you account for the rest in your policy.

      • Jim D: In addition you need its fraction of the total effect, especially as the total is highly proportional to CO2’s effect. If you want to limit the CO2 effect to 2 C, you end up with 3 C unless you account for the rest in your policy.

        I don’t think that makes sense. If you reduce CO2 concentration by reducing fossil fuel use, you do not affect the things unrelated to CO2 concentration (though you may affect soot and sulfuric aerosols), so for policy/planning you need the effect of CO2 without most of its covariates. CO2 increase is correlated with urbanization, but if you reduce CO2 emissions you don’t necessarily reduce urbanization; whatever the urban heat island effect is, it will remain.

      • Policies have to include methane, especially in the light of how much it contributes, which according to Lewis is a lot. There is no free pass for that. Using Lewis’s numbers regulating fossil fuels, while attacking the biggest problem at its source, is far from enough. If you’re not going to regulate methane growth, you need to do even more on fossil fuels to compensate for that lack of action.

  20. The net forcing change in LC18 comes from the best available data for the forcing and is broadly demonstrated in the paper, which passed a peer review process. How can you make those unsound claims?

  21. 3 sites at once.
    Wonderful.
    HAS made some comments at ATTP’s thread
    HAS says: May 1, 2018 at 1:02 am
    “Also the impact of the particular choice of start and end periods is briefly addressed, and is the source of some of the differences.”

    Andrew Dessler put up a copy of some tweets including this.
    “Let’s match the periods chosen in our model ensemble analysis as closely as possible to L&C. We can calculate ECS using these base periods: 1869-1882, 1995-2005.
    The resulting distribution has median = 3.01 K, 5-95% confidence interval: 2.59-3.56 K. Looks like an important uncertainty to me.”

    I noted that this is the highest possible ECS that can be calculated from any 10-year periods of observation from 1869 to 2017.
    The highest. No hint of ECS being any greater. An upper limit using observations only.
    Achieved by taking the base period and comparing it to a period whose midpoint is the biggest El Niño in that time.

  22. I’ll try and explain the point that I think Andrew Dessler is making again. Something I left out of my earlier comment is that there are indications that the “true” climate sensitivity appears to be a good predictor of long-term warming. The problem that is being highlighted is that if you consider a large ensemble of climate model runs and try to estimate climate sensitivity using the L&C energy-balance approach, you get a range of results, despite the model being used having a single “true” sensitivity. Hence, the energy balance approach does not necessarily return a result that is close to the “true” value. If the “true” value is a good indicator of long-term warming, then the energy balance approach could return a result that turns out to be a poor predictor of long-term warming (when compared to the “true” value). Again, this doesn’t mean that the energy-balance approach is definitely going to be wrong. It simply means that one should be cautious of assuming that it is indeed going to be close to the “true” sensitivity.

    • nobodysknowledge

      Mauritsen et al: ” it is well appreciated that climate models poorly represent clouds and convective processes”
      Just one of many shortcomings of models. Model systematic biases are presented in a lot of studies.
      What about propagation of error in sensitivity calculation?

    • As near as I can make out, your explanation of Dessler’s explanation says there is too much natural variability to use LC18. An opposing view would be that there’s too much natural variability to use a large ensemble of climate model runs.

      One could argue, there is natural variability. Now plan for sea level rise. My preferred approach is to repeat the past going forward. Recall the target is policymakers. I think we can see who has the easier sell.

      • there is too much natural variability to use LC18.

        No, I’m not really saying this. I’m suggesting that internal variability that cannot be fully accounted for in the LC18 approach means that one should probably be cautious of assuming that the LC18 result is a good representation of the actual sensitivity.

    • Steven Mosher

      ATTP.

      it took me a while to get Andrew's argument until he made clear that he wasn't arguing that Lewis was biased low.

      1. Use a GCM where the sensitivity (response to doubling CO2) at equilibrium is KNOWN.
      2. Run that 100 times.
      3. Use the L&C method to estimate sensitivity from the “observations” produced by the model.
      4. You get values that diverge from the known sensitivity of the model.

      What this suggests is that the L&C approach lacks precision.
      In other words there is structural uncertainty that hasn't been accounted for.

    • ATTP, Your comment (in so far as it’s possible to understand such a long-winded and obscure formulation) seems to be saying that internal variability could make the EBM result inaccurate. That’s an obvious and true statement. Nic responded to this by saying that this uncertainty is reflected in the PDF in their paper. The PDF has nonzero probabilities for ECSs above 2.

  23. I have added an update to this article (below the endnotes) responding to Andrew Dessler’s tweets. Conclusion: the uncertainty caused by the internal variability he cites would increase total estimation uncertainty in LC18 by 3%. A trivial increase, and less than is allowed for in LC18.

    • You seem to be assuming that unaccounted for internal variability cannot impact the mean (or best estimate) that you would get from an energy balance approach. My understanding of what Andrew Dessler is suggesting is that this is not necessarily true.

      • stevefitzpatrick

        Andrew Dessler seems to be assuming the internal variability in the model he uses is a meaningful representation of Earth’s variability…. without a shred of evidence to support that assumption. The only reality that matters is the one we can measure, not a synthetic ‘reality’ in a dubious model simulation. Many studies have shown that measured reality is most consistent with sensitivity near the low end of the IPCC plausible range. That is unchanged by frantic arm waves about modeled internal variability.

      • No, I’m showing that the impact of internal variability on total uncertainty in energy budget sensitivity estimates is very minor.
        On its own, the internal variability that Andrew Dessler cites would lead to a 5-95% uncertainty range in the central (median) climate feedback estimate using the LC18 data of about +/- 9%, so impacts beyond that level are very unlikely, although not impossible.
        However, because uncertainties add in quadrature, the total estimation uncertainty, which is dominated by forcing uncertainty, is increased very little by that arising from internal variability.
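        Nic's quadrature point is easy to illustrate with a toy calculation (the ±40% figure for the forcing-dominated remainder is purely illustrative, not LC18's actual number; only the ±9% internal-variability figure comes from the comment above):

```python
import math

internal = 0.09  # +/-9% fractional uncertainty from internal variability (cited above)
other = 0.40     # forcing-dominated remainder (illustrative value only)

# Independent uncertainties combine in quadrature
total = math.sqrt(internal**2 + other**2)
increase_pct = 100 * (total - other) / other

print(round(total, 2), round(increase_pct, 1))  # -> 0.41 2.5
```

        With these illustrative numbers the total uncertainty widens from 40% to 41%, an increase of about 2.5% – the same order as the ~3% figure quoted above. That is why a contribution that looks non-trivial on its own adds so little to a total dominated by forcing uncertainty.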

      • Nic,
        climate feedback estimate using the LC18 data of about +/- 9%, so impacts beyond that level are very unlikely, although not impossible.

        The problem is that you’ve just suggested that it is unlikely that the ECS would be above 2K, and yet many other estimates suggest that it is likely to be above 2K. Maybe it’s more likely than you think?

      • The running thirty year trends since around 1990 are bound by the range of 1.5 to 1.8 C per century, so the variability hasn’t been that large.

  24. nobodysknowledge

    “Instead, the radiation balance is controlled primarily by tuning cloud‐related parameters at most climate modeling centers [e.g., Watanabe et al., 2010; Donner et al., 2011; Gent et al., 2011; HadGEM2 Development Team, 2011; Hazeleger et al., 2012], while others adjust the ocean surface albedo [Hourdin et al., 2012] or scale the natural aerosol climatology to achieve radiation balance [Voldoire et al., 2012]. Tuning cloud parameters partly masks the deficiencies in the simulated climate, as there is considerable uncertainty in the representation of cloud processes. But just like adding flux‐corrections, adjusting cloud parameters involves a process of error compensation, as it is well appreciated that climate models poorly represent clouds and convective processes. Tuning aims at balancing the Earth’s energy budget by adjusting a deficient representation of clouds, without necessarily aiming at improving the latter.”
    From: “Tuning the climate of a global model”, Thorsten Mauritsen, Bjorn Stevens, Erich Roeckner, Traute Crueger, Monika Esch, Marco Giorgetta, Helmuth Haak, Johann Jungclaus, Daniel Klocke, Daniela Matei, Uwe Mikolajewicz et al.
    First published: 7 August 2012
    So in contrast to what some scientist-activists want us to believe, the energy balance is taken seriously when the models are tuned.

  25. “A new paper by Andrew Dessler et al. claims, based on 100 simulations of the historical period (1850 to date) by the MPI‑ESM1.1 climate model, that estimates of climate sensitivity using the energy-budget method can vary widely due to internal climate system variability.”

    What does 100 simulations mean?
    Same data in the one model will give the same result.
    Different data is meaningless.
    Same data with 100 changes in the model assumptions [one per run] will give 100 different variations. There is no internal variability in a computer program unless it is added in perforce by the programmers. Unlike nature which has unknowns.
    What a surprise that changing a variable can give a different response.

  26. “with only one realization (the historical record), it is possible that you could get an answer that is far from true.”

    Unbelievable. Despite overwhelming evidence that the earth is round, it is only one of the possible realizations for the formation of the solar system. With 100 runs of our model, we show that the range of variability includes a flat earth; which we know to be true. Reality is an aberration.

  27. I think that the problem discussed here is of a rather fundamental character. The so-called observational determinations of ECS are based on energy balance models implying that the radiative imbalance of the planet can be described as a linear function of the global mean temperature. However, as discussed in the paper by Dessler, Mauritsen and Stevens, such an assumption may introduce great errors because the imbalance is also a function of the global temperature pattern.

    Thus, if the global temperature pattern changes from the initial state of the planet when the radiative forcing began to change, the ECS value calculated from the simple energy balance models will be in error. It seems to me that the only way to calculate ECS values considering the changing temperature patterns is by using advanced climate models (GCMs) because such models also describe the changed temperature pattern. One may even question if a single ECS value is the best measure of the sensitivity of the planet to radiative forcing.

    See also these comments in ACP by me and the authors:
    https://www.atmos-chem-phys-discuss.net/acp-2017-1236/acp-2017-1236-SC3.pdf
    https://www.atmos-chem-phys-discuss.net/acp-2017-1236/acp-2017-1236-AC3.pdf

    • No, you haven’t understood properly.

      • Actually, I think he has broadly understood the argument being made. Maybe what you mean is “no, I disagree”?

      • Nic Lewis, it is interesting that you claim that I have not understood the problem properly, while Dessler et al. do agree entirely with my comment in ACP.
        https://www.atmos-chem-phys-discuss.net/acp-2017-1236/acp-2017-1236-AC3.pdf

        We agree entirely with this comment. Our revised energy balance framework (Eq. 4 in our paper) is a “proof of concept” that demonstrates that it is possible to do a better job describing Earth’s energy balance than the conventional approach does. However, we don’t expect it to be the final answer and agree with the commenter that a version using several regional temperatures may be superior.

        In my comment in ACP, I have supported my arguments with equations that suggest that the radiative imbalance is a function of the global temperature pattern.
        https://www.atmos-chem-phys-discuss.net/acp-2017-1236/acp-2017-1236-SC3.pdf

        Thus, assuming that the radiative imbalance is only a function of the global mean temperature must result in a systematic error unless the global temperature pattern does not change in response to the radiative forcing. However, it seems unlikely to me that there would be no change in the temperature pattern, because it is very reasonable that warming of the planet should result in probably complex changes to the many various climate zones.

    • “One may even question if a single ECS value is the best measure of the sensitivity of the planet to radiative forcing.”

      “…medium confidence that the ECS is likely between 1.5°C and 4.5°C…”

      The target is policymakers. Or their staff which probably know about as much as I do about climate change. So, flying into the nether regions of science is losing sight of the policymakers. Here’s the problem with policymakers. They only understand things like a single value or a well constrained value such as from 1X to 2X. But as I have said, the problem is not with them, but with us. Given the cards dealt, winners provide value.

    • Pehr, We already know that GCM’s are not skillful at predicting regional climate and thus its not surprising they might be unskillful at predicting the “pattern” of warming, whatever that might really mean.

      A scientifically more justified conclusion is that the GCM’s are wrong because of well known weaknesses in their convection and cloud models (among other problems).

      These issues are well known to people like Dessler, but he continues to rely on these models seemingly assuming they must be right in their long term behavior. That’s just ignoring recent negative results that show that the long term behavior can be engineered over a broad range with credible values of cloud and convection models.

      Have you read the Zhao et al paper on cloud models?

      • dpy6629,

        I think that the simple energy balance models (or energy budget models) violate the fundamental concept of a mathematical function, and we have no means to understand what the results from them mean in reality. Thus the GCMs, although they have all those parametrization problems, seem to me to be the only tools for describing the effect of increasing radiative forcing.

      • Not sure what “concept of a mathematical function” is. If you mean the problem is ill-posed, there is some truth in that. The ECS from an energy balance method is pretty sensitive to forcings and the temperature record length. However, I think broadly speaking the method will be accurate if all the forcings, ocean heat uptake, etc. are accurate.

        I think of it as analogous to using a total force balance to compute an airplane’s drag from known thrust and lift values. It’s sensitive to the thrust you use, but it’s more accurate than first-principles CFD calculations because of turbulence and nonlinearity. GCMs are much more complex and unreliable than engineering CFD codes.

      • dpy6629,

        A mathematical function y=f(x) has according to the definition one unique value of the dependent variable y for every value of the independent variable x. In energy balance models it is assumed that the response of TOA net flux (R-F) can be expressed as a linear function of the global average surface temperature (T_S). But it is observed that for each value of T_S there can be multiple values of R-F. Dessler et al discuss this in the conclusion section of their paper:
        https://www.atmos-chem-phys.net/18/5147/2018/acp-18-5147-2018.pdf

        The source of the imprecision relates to the construction of the traditional energy balance equation (Eq. 1). In it, the response of TOA net flux (R – F) is parameterized in terms of global average surface temperature (T_S). Recent research has suggested that the response is not just determined by the magnitude of T_S but also includes other factors, such as the pattern of T_S (e.g., Armour et al., 2013; Andrews et al., 2015; Gregory and Andrews, 2016; Zhou et al., 2017) or the lapse rate (e.g., Zhou et al., 2017; Ceppi and Gregory, 2017; Andrews and Webb, 2018). As a result, two ensemble members with the same ΔT_S can have different climate responses, Δ(R – F), leading to spread in the inferred λ.

        Figure 3 in Dessler et al illustrates the many different values the climate response ΔR can get for every single value of the global average temperature change ΔT_S. One may say that there is a very weak correlation between ΔR and ΔT_S, but ΔR or Δ(R – F) cannot be considered as mathematical functions of ΔT_S because of this lack of uniqueness in the values of the dependent variable.

      • Pehr, Figure 3 in the Dessler paper deals with monthly averages. Nic pointed out above that

        Dessler18 Fig. 3a actually plots monthly mean data and hence includes intermonthly variability. Nevertheless, it does show a correlation, of the expected sign. One would not expect a high correlation, because a fair amount of the shorter term variability in R [N] is caused by random fluctuations in clouds. Such fluctuations in R cause TS to move in the opposite direction to that if the change in R had been driven by a fluctuation in TS. It is well known that this problem confounds the estimation from shorter term data of the energy-balance based relationship of R and TS.[19] The problem is of limited relevance to energy balance ECS estimation, which uses multidecadal or centennial changes.

        In other words, this non-uniqueness is due to short term random fluctuations that will average out to a unique value in the long term.
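        That averaging-out argument can be sketched with a toy simulation. All magnitudes below are invented for illustration (a “true” feedback of -1.3 W/m^2/K, a forced warming trend, monthly temperature noise, and large random cloud-driven flux noise); only the qualitative behaviour matters:

```python
import numpy as np

rng = np.random.default_rng(0)
lam_true = -1.3          # assumed "true" feedback, W/m^2/K (illustrative)
n = 150 * 12             # 150 years of monthly data
t = np.arange(n) / 12.0  # time in years

ts = 0.008 * t + 0.15 * rng.standard_normal(n)  # forced trend + monthly noise, K
cloud = rng.standard_normal(n)                  # random cloud-driven flux, W/m^2
r = lam_true * ts + cloud                       # net TOA response + cloud noise

# Detrended monthly deviations: the correlation is weak, as in Dessler18 Fig. 3a
resid_ts = ts - np.polyval(np.polyfit(t, ts, 1), t)
resid_r = r - np.polyval(np.polyfit(t, r, 1), t)
monthly_corr = np.corrcoef(resid_ts, resid_r)[0, 1]

# Multidecadal differencing (first vs last 30-year means): cloud noise averages out
dts = ts[-360:].mean() - ts[:360].mean()
dr = r[-360:].mean() - r[:360].mean()
lam_est = dr / dts
```

        In this sketch the monthly correlation is weak even though the feedback relation holds exactly, while the 30-year-mean differencing recovers a value close to the assumed -1.3 W/m^2/K, consistent with Nic's point that the problem matters little for multidecadal energy-budget estimation.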

      • Whatever your criticisms of GCMs are, the assumption of a time-constant λ in these ECS calculations is even more simplifying of a complex system and therefore likely even more wrong. It has to be realized that this is an assumption of the energy balance method, especially when projecting it out to an ECS, and there isn’t a situation where you would trust this assumption more than a GCM.

      • dpy6629,

        Spencer and Braswell (2010) (cited by Dessler et al.) have investigated plots such as figure 3 in Dessler et al. for different averaging times, namely 1, 3 and 7 months (see figure 1 in Spencer and Braswell). Increasing the averaging time doesn’t reduce the non-uniqueness seen in those plots.
        https://www.drroyspencer.com/wp-content/uploads/Spencer-Braswell-JGR-2010.pdf

        It is not self-evident that this non-uniqueness is due to short-term cloud variations, as claimed. The explanation that the non-uniqueness is caused by changing global temperature patterns, based both on simple algebraic calculations and on analysis of GCM results, seems more likely to me.

      • Some of them like to say, “all models are wrong, but some are useful”

        I like to say, “all models are wrong, they are much worse than useless”, they lead people to waste time and money trying to fix something that is not broke. Imagine how many people make a really good living, being paid to support the alarmism, using model output that does not agree with real data. They used to say the climate would show the warming in 3 years, then 5 years and then 11 years and now, they say it will show the warming, someday. Finally, we elected someone who is trying to stop this madness.

      • PopesClimateTheory: I like to say, “all models are wrong, they are much worse than useless”,

        A continuation of Cox’s quote, that you started with, is: [A practical problem is to determine when a model is so inaccurate as to be useless.] Quoted in “Confidence, Likelihood, Probability: Statistical Inference with Confidence Distributions” by Tore Schweder and Nils Lid Hjort. The models for predicting eclipses and reappearances of Halley’s Comet are pretty accurate.

    • Outgoing longwave is the least variable of the energy balance components and responds very directly to surface temperature change both in the seasonal and interannual variations.

    • Pehr Bjornborn,
      I think you go wrong in your ACP comment in stating the equation:
      λ = a_1 λ_1 + a_2 λ_2 + a_3 λ_3.

      Surely the correct equation is:
      λ = (a_1 λ_1 Ts_1 + a_2 λ_2 Ts_2 + a_3 λ_3 Ts_3)/Ts.

      When you use my corrected equation, your equations (2) and (3) are both satisfied, if I’ve read your comment correctly. So there is no incompatibility involved in the issue that you raise.

      • Nic Lewis,

        Thank you for your comment.

        In my comment in ACP, the treatise is based on the assumption that equation (3) is valid for all three regions, that is for every region the climate response can be considered a linear function of the average temperature of that region and that the three regional climate feedback parameters λ_r have constant values. As a consequence, the global climate feedback parameter λ, defined as follows

        λ = a_1 λ_1 + a_2 λ_2 + a_3 λ_3

        will also be constant. Because all parameters in equations (5)-(8) are constants obviously those equations show that R and R-F are functions of three temperatures, not only the global mean temperature.

        Your definition of the global climate feedback parameter

        λ = (a_1 λ_1 Ts_1 + a_2 λ_2 Ts_2 + a_3 λ_3 Ts_3)/Ts

        makes λ a function of three temperatures. When you plug this definition into equation (2) in my comment

        R=F+λ Ts

        you get

        R=F+a_1 λ_1 Ts_1 + a_2 λ_2 Ts_2 + a_3 λ_3 Ts_3,

        that is R is still a function of three temperatures, not only of the global mean surface temperature Ts.

        Thus, it doesn’t matter whether one uses your definition or mine of the global climate feedback parameter, because the end result of the derivation – that R is a function of three temperatures – is the same. However, your definition has the inappropriate property of making λ dependent on the temperature pattern, while with my definition λ is a constant.
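        Pehr's three-region point can be demonstrated numerically. In this sketch the area fractions and regional feedback values are invented purely for illustration; it shows two temperature patterns with identical global mean warming producing different radiative responses:

```python
# Three-region response: R - F = sum_r a_r * lam_r * Ts_r (the form discussed above)
a = [0.5, 0.3, 0.2]         # regional area fractions (illustrative)
lam_r = [-0.5, -1.5, -2.5]  # regional feedback parameters, W/m^2/K (illustrative)

def global_mean(ts):
    return sum(ai * ti for ai, ti in zip(a, ts))

def response(ts):
    return sum(ai * li * ti for ai, li, ti in zip(a, lam_r, ts))

ts_uniform = [1.0, 1.0, 1.0]   # uniform 1 K warming
ts_pattern = [1.6, 0.5, 0.25]  # different pattern, same 1 K global mean

# Same global mean Ts, but the responses differ (~-1.2 vs ~-0.75 W/m^2),
# so R - F is not a function of global mean Ts alone
print(global_mean(ts_uniform), global_mean(ts_pattern))
print(response(ts_uniform), response(ts_pattern))
```

        An effective global λ inferred from either pattern would differ, which is the pattern-effect ambiguity both sides of this exchange are discussing.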

    • stevefitzpatrick

      “It seems to me that the only way to calculate ECS values considering the changing temperature patterns is by using advanced climate models (GCMs) because such models also describe the changed temperature pattern.”
      .
      How do you propose to verify the performance of “advanced climate models” if not against reality? AFAIK, GCMs miss both regional warming patterns and rainfall patterns rather badly, and by more than they miss average warming. Should we not then just logically discount these models? That is what I do.

      • stevefitzpatrick

        Pehr Björnbom,
        “It seems to me that the only way to calculate ECS values considering the changing temperature patterns is by using advanced climate models (GCMs) because such models also describe the changed temperature pattern.”
        .
        I actually had to wait a few moments to stop laughing before replying to such nonsense. Nearly all non-linear processes are very close to linear with small perturbations. ~0.4% increase in average forcing is a small perturbation.

  29. From what I can see, internal variability of heat uptake rate and surface temperature are treated as independent variables in L&C2018. What happens when they are treated as correlating, as found by Brown et al. 2014?

    • What does one imagine the return rate of such heat uptake to be?
      If it is one thousand years, isn’t ECS an imaginary number?

      The return rate of OHC is necessarily slow and over a long time because of the very mixing which leads to OHC increase.

    • paulskio
      Yes, they are treated as independent variables in LC18, as in LC15 and Otto et al 2013. It wouldn’t make much difference if they were positively correlated.
      Brown et al (2014) don’t give correlations. But in the MPI-ESM1.1 2000 year control run, decadal variations in those variables have negative correlation, a bit stronger than -0.5.

  30. Pehr: “A mathematical function y=f(x) has according to the definition one unique value of the dependent variable y for every value of the independent variable x. In energy balance models it is assumed that the response of TOA net flux (R-F) can be expressed as a linear function of the global average surface temperature (T_S). But it is observed that for each value of T_S there can be multiple values of R-F.”

    That’s a red herring. Just treat R-F as a random variable rather than a deterministic function of T_S. As an RV, R-F can still be linear in T_S. The source of randomness that is being integrated over is the distribution of surface temperature. So all Nic has to do is figure out the variance of R-F conditioned on T_S and the consequent impact on the variance of his sensitivity estimates, which he has done.

  31. The ERA-int 500mb 30 S – 30 N mean temperature change 1979 -2017 is less than 0.1 degree Celsius per decade.

  32. Lewis and Curry, again
    As far as I can see they have used the physics and the data in a thoroughly scientific way and come up with similar results over several different ranges.
    Any attack on their findings should start with methodology and technique, and no one seems to be quibbling with the physics they have used or the data sets they have used. Well, not in any significant way.
    The problem is that this does not match the results of the more traditional method of estimating ECS from GCMs. The difference here is that the models and observations start from the same time, and the models incorporate the physics as we know it to make assumptions about where the warming should go.
    Same physics.
    So why the divergence?
    Obviously the models will diverge, because they are using physics to predict future behaviour based on averages which the past has shown to be expected, and on laws of physics which imply a positive feedback to CO2 forcing, at an ECS of around 3.0 C.
    The rise in CO2 seems to be right, so why does the ECS not match the observations?
    The answer could be that the positive feedback assumptions are not right; that is, incorporating an ECS expectancy of 3.0 will always give an ECS outcome of roughly 3.0 C.
    The models can only produce ECS around this range.
    All of them, hence the divergence.
    Hence the dilemma.
    If one says the observations must be wrong, introduce the concept of natural variability as the cause of a bias in the results.
    But if L&C are right, and there is no bias, then consideration has to be given to the view that the positive feedback may be out of whack.

  33. If observations give an ECS figure this is an important outcome.
    Firstly it is in the ballpark range.
    If we only have one set of observations and one rule for working on them then it is the correct result. It is the only one we have.
    If we have different ranges of observations we can repeat at different intervals and establish a range.
    When we have enough length of time [AD] we can eventually say this is the ECS for this sort of system.
    Maps are not the territory. When we do models we build a much-simplified model of the territory.
    When we do multiple runs we are no longer determining properties of the territory; we are determining the properties of the model.
    When we achieve a result, ECS, we are merely obtaining the ECS that the model is programmed to have. Unlike a die with a choice of sides, the poor model is condemned to throw the one result for infinity. ECS of 3.0 in, ECS of 3.0 out.

    • “If observations give an ECS figure”

      The biggest problem with ECS is not the calculation of imbalance, but the presumption of future equilibrium.

      Pielke Sr. pointed this out.

      The oceans are presumed to be taking up heat.
      The oceans are also presumed at some point to be releasing this heat.
      But the rate will likely be much slower than uptake and with a great delay (centuries or even millennia), by which time radiative imbalance may be zero or even negative.

      The problem with ECS is not the lack of evaluation but the imaginary conditions which it represents.

      • dougbadgero

        Yup

      • Yes.

        The oceans covering more than half the surface are the wheelhouse of the machine. They absorb so much shortwave. Capture, store and release. Since 1950 the SSTs have lagged the GMST. And their reserves have lagged even more and we have yet to materially impact the 1000 meter level. This has been over the last 60 years.

        How to make a problem seem worse. Turn it over, don’t store. It’s like a stimulus package. People will spend the money and the economy will be fast. Until you have to pay the interest on the debt. Which gives my general opinion to such approaches. They’re a scam.

        Call the oceans fast. Fast and storing 92% of the excess joules. We’ve been putting money in the bank for 60 years. 92% of it is still there, and we mostly spent the rest of it.

        The final point is more CO2 warms the oceans more. It makes it harder for the joules to get out. More humidity does that too.

      • The analogy I would use is that emissions are like charging your credit card. Surface warming is the only way to pay it back, so the imbalance represents the debt. The only way to remove that debt is by surface warming. The positive feedback is the interest because just doing enough surface warming to remove the imbalance introduces some more debt to pay off. Our current status is that we’re still running up the credit card charges with emissions and we have a debt and interest to pay off even though we have paid some off already with a degree of warming.

      • But the credit card has a 0% interest rate and no payments for perhaps a thousand years, so it makes sense to use it now.

      • Earth is not a greenhouse, greenhouse rules don’t apply. On earth, water evaporates, cooling the surface and warming the atmosphere. The warm wet air travels up past a fictional greenhouse barrier and turns to water and ice, up above the fictional greenhouse roof, releasing energy that is transferred out while the water and ice is sent back down to do more atmosphere and surface cooling and sequestering of ice for more cooling later by more reflecting and thawing.

        Try to understand the simple basic principles using knowledge of how things really work. I repeat: Earth is not a greenhouse, greenhouse rules don’t apply.

      • Pope,

        “Earth is not a greenhouse, greenhouse rules don’t apply. On earth, water evaporates, cooling the surface and warming the atmosphere. “

        Greenhouse is an unfortunate descriptor, and certainly sensible and latent heat transfer within the atmosphere are important. However, sensible and latent heat transfer tend toward zero the higher in the atmosphere one considers. Radiative transfer is the only transfer at the hypothetical “top” of the atmosphere, and, all else constant, increased CO2 does cause radiative imbalance. This is not the important question; rather: are there significant feedbacks? What extent of radiative imbalance? What resultant warming? What net of benefits and detriments might we expect?

      • TE, the debt is being paid off as fast as the earth can do it, because there is warming, but there is remaining debt and interest has accrued on top of the initial debt plus ongoing deficits from continued emissions.

      • remaining debt and interest has accrued on top of the initial debt plus ongoing deficits from continued emissions.

        No interest ( the oceans get only the heat reflecting the imbalance ).
        Payments that don’t start for perhaps a thousand years.
        And very low payments – heat will emerge, necessarily because of the large thermal mass of the oceans, very slowly.

      • There is interest because we are already past the no feedback response to the forcing we have had, and there is still an imbalance.

      • Turbulent Eddie,
        Water evaporates on the surface, land and ocean; that warms the atmosphere and cools the surface. The warm humid air is carried to the tops of clouds by convection. Water vapor becomes water and ice, warming the upper air, above greenhouse layers. That energy that was picked up from the surface is radiated out above the greenhouse layers that were bypassed. The IR out is radiated mostly from the abundant water and water vapor and not mostly from CO2.

  34. Here is a quote from dikranmarsupial at aTTP’s:

    “Note that if you had multiple parallel Earths with the same forcings, but different initial conditions, then the estimate of TCR would be numerically different for each one, but the “true” TCR is the same for all, as the physics of the climate system is the same for them all. We can find the “true” TCR by averaging the results from the parallel Earths. Unfortunately we can’t do that, as we only have access to our reality, but the thought experiment explains why treating the estimate of TCR from the observations as the “true” TCR is essentially assuming this variance is zero, which is physically implausible.”

    The true TCR is not likely to be observed. What is observed is a sample of one from a distribution. True exists along with random. True meeting random results in the sample distribution. We experience one sample which results from both true and random.

    True + random = Observed

    Assumed + random = What?

    My immediate above may not be correct. Assumptions are part of the GCMs. Even the randoms are not the same in the two aboves. The random in the observed actually occurred. The random in the GCMs may have occurred by being limited to 6 decimal places which doesn’t happen in nature very often.

    Does true exist? Our want for it would not influence this if it did not exist. True is what ties all the random together and expresses it as a single number. True is when every last cubic inch of the climate system is known. When this is not known, true still exists. Measurements of gravity’s effects exist. Is there a true value of acceleration caused by gravity? While we cannot measure it to 50 decimal places, it exists.

    What happens when we look at many random iterations as Dessler has done? Call this brute forcing the problem. The assumed now has a distribution. A distribution of assumptions. Inextricably tied to assumptions.

    Upgrade assumed to know. We then have a distribution of what we know. This might be awkward but is more palatable when we remember we have random mixed in.

    Further, some know that random isn’t very significant. Some know the Charney sensitivity. And we can also have distributions of these things.
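
dikranmarsupial's parallel-Earths thought experiment, and the "True + random = Observed" framing above, amount to a simple sampling statement that can be sketched numerically (the TCR value and scatter below are hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
tcr_true = 1.8    # hypothetical "true" TCR (K), for illustration only
sigma_iv = 0.25   # assumed internal-variability scatter of one estimate (K)

# one Earth: a single draw from the distribution (True + random = Observed)
one_earth = tcr_true + sigma_iv * rng.standard_normal()

# many parallel Earths: the ensemble mean converges on the true value
parallel = tcr_true + sigma_iv * rng.standard_normal(10_000)

print(f"single estimate : {one_earth:.2f} K")
print(f"ensemble mean   : {parallel.mean():.3f} K (true = {tcr_true} K)")
```

We only ever get the single draw; the ensemble mean is available to modellers running 100 simulations, but not to observers of one Earth.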

    • There is no one ‘“true” TCR for earth.
      Look at the past fifty million years, external forcing cycled in repeating cycles but internal response evolved over time as land moved and ocean currents and levels changed. The balance between ice on land and water in the oceans evolved. The true climate of earth involves the external forcings and the internal response. Internal response has not been properly considered in theory or models.
      When oceans rise and warm there is more ocean exposed to atmosphere, more evaporation and more rain and snow and sequestering of ice. More ice advances, reflects and thaws and provides more cooling. This has not been taken into proper consideration.

      • In all of this, CO2 plays an important role in how things grow, but earth is not a greenhouse. Water evaporates, cooling the surface, warming the atmosphere, and the water-vapor-filled atmosphere is carried up past greenhouse gases; water vapor turns to water and ice up high and the energy radiates out while the rain and snow fall to do more cooling down below.

      • If earth was really a greenhouse, gliders could never gain altitude.

  35. Andrew E Dessler says: April 29, 2018 at 11:28 pm at ATTP
    “Here are a few answers to questions:
    1. Should they have replicated the selection of end points used by L&C rather than just the first and last decades in their available model runs?
    That’s not really possible, since our ensemble members end in 2005. We have calculated ECS using 1869-1882 as the start period and the results don’t change much. In particular, there is still a very large spread in inferred ECS among the ensemble members.”

    While I sort of get this, I do not understand using an end decade centered on the largest temperature-producing El Niño of modern times.
    The ECS inferred from this would have to be the highest possible, and it only just scrapes in at 3 C.

  36. For JCH “UAH Global Temperature Update for April, 2018: +0.21 deg. C.
    down a little from the March value of +0.24 deg. C:”
    Yes!

      • afonzarelli

        Is hadcrut4 junk, too, southern man?

      • I think GISS might have gone up, so he was expecting UAH as well, only to be disappointed again.
        The lag is in; there should be further drops.
        Cold this year in Victoria, though we were told it was the warmest April for Australia, like it was really warm in America as well.

      • I had/have no expectations, up, down, or flat, with regard to the UAH series. My term of endearment for HadCrut3 was HadCrappy3, and HadCrut4 is an admission I was right.

        RSS April is down. Birds are chilly.

        If you think this means the GISS anomaly will be lower for April than it was for March, well, time will tell, but I sincerely doubt that will be the case. It’s going to be fun watching the LC18 crushing observations pour in.

      • JCH: The title of this blog is “Climate etc” and not “Weather etc.” Your comments about the April 2018 GMST lower the bar too much IMO.

  37. nobodysknowledge

    Quiz
    We have now learned that global warming doesn’t show up in historical energy budget data, according to consensus makers. So where does the heating hide?

  38. In the exchange, I had with Nic Lewis on Climate Etc. following my comment here
    https://judithcurry.com/2018/04/30/why-dessler-et-al-s-critique-of-energy-budget-climate-sensitivity-estimation-is-mistaken/#comment-871495

    I demonstrated for Nic Lewis by simple algebraic calculations how the simple energy budget model breaks down when the temperature pattern of a planet is changing under warming by an external radiative forcing.

    It also seems to me that Nic Lewis has overlooked that ECS according to the simple energy budget model theory is defined on the basis of the traditional linearised energy balance equation (1) given in Dessler et al.:
    https://www.atmos-chem-phys.net/18/5147/2018/acp-18-5147-2018.pdf

    R = F + λ Ts.

    This equation describes how the TOA energy imbalance develops when the planet is going from one equilibrium state, where the anomalies R = Ts = 0, to a new equilibrium state where, under the influence of the forcing F, the temperature anomaly Ts has changed in such a way that the planet has reached the new equilibrium state with R = 0.

    Because this is a linearised equation, λ is a constant and the equation 0 = F + λ Ts can be solved for the new equilibrium value Ts = −F/λ. ECS for this simple energy budget model is defined as the temperature anomaly calculated in that way for the forcing caused by a doubling of the carbon dioxide mixing ratio, for example from 400 ppmv to 800 ppmv.

    However, Nic Lewis claimed in a reply to me that λ could be temperature dependent. This is not compatible with the definition of ECS just mentioned, as we can easily see by considering the following simple example:

    R = F + λ(Ts) Ts.

    Now the solution of 0 = F + λ(Ts) Ts is different; it is no longer Ts = −F/λ, but we have to consider the dependence of λ on Ts in calculating the solution. Thus the definition of ECS according to the simple global energy budget model is no longer valid if λ is temperature dependent.

    If we have a planet with a changing temperature pattern the calculation of ECS must be done in another way, with a modified energy budget model, and I have also discussed this in my comment to Dessler et al. (2018) in ACP

    https://www.atmos-chem-phys-discuss.net/acp-2017-1236/acp-2017-1236-SC3.pdf

    I analysed an example of a planet with three regions, each of them satisfying the regional versions of the linearised flux equation according to equation (1) in Dessler et al. (2018):

    R_1 = F_1 + λ_1 Ts_1
    R_2 = F_2 + λ_2 Ts_2
    R_3 = F_3 + λ_3 Ts_3

    I have discussed how ECS can be calculated for this example as an average of the ECS-values that can be calculated for each of the regions. Read more in my ACP comment.
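
The central point of the three-region example can be checked numerically: with regionally different feedbacks, two warming patterns with the same global-mean temperature give different global R − F, so R − F is not a function of the global mean alone. The area fractions and λ values below are invented for illustration, not taken from the ACP comment:

```python
import numpy as np

# illustrative area fractions and regional feedbacks (W/m2/K)
a = np.array([0.5, 0.3, 0.2])
lam = np.array([-0.8, -1.5, -2.2])

def global_R_minus_F(Ts_regional):
    # area-weighted sum of regional R_i - F_i = lam_i * Ts_i
    return np.sum(a * lam * np.asarray(Ts_regional))

# two warming patterns with the same global-mean temperature anomaly
p1 = [1.0, 1.0, 1.0]     # uniform warming, global mean = 1.0 K
p2 = [1.6, 0.5, 0.25]    # same mean: 0.5*1.6 + 0.3*0.5 + 0.2*0.25 = 1.0 K
mean1, mean2 = np.sum(a * p1), np.sum(a * p2)

print(mean1, mean2)                                  # equal global means
print(global_R_minus_F(p1), global_R_minus_F(p2))    # different global R - F
```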

    • “Nic Lewis claimed in a reply to me that λ could be temperature dependent.”

      I made no such claim. I implied that, in the case concerned, λ could be dependent on the warming pattern.

      And I showed that you had the wrong equation for λ.

      • Nic Lewis, you wrote in your previous reply:

        Pehr Bjornbom,
        I think you go wrong in your ACP comment in stating the equation:
        λ = a_1 λ_1 + a_2 λ_2 + a_3 λ_3.

        Surely the correct equation is:
        λ = (a_1 λ_1 Ts_1 + a_2 λ_2 Ts_2 + a_3 λ_3 Ts_3)/Ts.

        First, I think I have shown that I didn’t go wrong with my equation. It was a useful equation for the purpose of my derivations in the ACP comment, to show that R and R−F in this example are functions of three temperatures, for example Ts, Ts_1 and Ts_2, and not of the global mean temperature alone. I also showed in my reply to you that the same result is obtained by using your equation.

        I think you go wrong here:

        “Nic Lewis claimed in a reply to me that λ could be temperature dependent.”

        I made no such claim. I implied that, in the case concerned, λ could be dependent on the warming pattern.

        You go wrong because the discussion is about the fundamental problems of the simple energy budget model that you are using in your research work when the warming pattern is changing due to the warming (as shown by analyses of the results of GCMs). In such a case the λ-value will change with the changing warming pattern. Because the warming pattern is changing together with the global mean temperature Ts, the λ-value will also be dependent on Ts.

        Thus, your claim implied that λ could be temperature dependent.
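
The disagreement over the two λ formulas can be settled numerically for the three-region example: the simple area average of the λ_i reproduces the global R − F only under uniform warming, while the temperature-weighted version reproduces it for any pattern, at the cost of making the effective λ depend on the pattern. Numbers below are invented for illustration:

```python
import numpy as np

a   = np.array([0.5, 0.3, 0.2])     # illustrative area fractions
lam = np.array([-0.8, -1.5, -2.2])  # illustrative regional feedbacks (W/m2/K)
Ts  = np.array([1.6, 0.5, 0.25])    # a non-uniform warming pattern (K)

Ts_g = np.sum(a * Ts)               # global-mean temperature anomaly
R_true = np.sum(a * lam * Ts)       # exact area-weighted global R - F

lam_avg      = np.sum(a * lam)                 # simple area-weighted average
lam_weighted = np.sum(a * lam * Ts) / Ts_g     # temperature-weighted version

# only the temperature-weighted lambda reproduces the exact global R - F
print(lam_avg * Ts_g, lam_weighted * Ts_g, R_true)
```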

    • However, λ could be time-dependent. Imagine if the ocean initially doesn’t warm as fast as the global average, as is observed. Then the water vapor feedback is reduced and that would affect the initial gradient which is λ. The extreme case is if the ocean doesn’t warm at all, and the value of λ reflects one with no water vapor feedback. The value of λ would decrease over time reflecting an increasing ECS.
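
A minimal sketch of that time dependence, with an invented relaxation of λ as the warming pattern evolves (the numbers are purely illustrative, not CMIP5 diagnostics):

```python
import numpy as np

F2x = 3.8                        # 2xCO2 forcing (W/m2), illustrative
years = np.arange(0, 150)
# assumed: feedback magnitude relaxes from 1.6 to ~1.2 W/m2/K over time
lam_t = 1.2 + 0.4 * np.exp(-years / 50.0)

ecs_inferred = F2x / lam_t       # ECS implied by the feedback at each time
print(f"early-period ECS ≈ {ecs_inferred[0]:.2f} K")
print(f"late-period  ECS ≈ {ecs_inferred[-1]:.2f} K")
```

If λ falls over time, an energy-budget estimate over the historical period recovers the early, larger λ and hence a lower ECS than the eventual equilibrium value.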

      • …and you would not be able to trust its value until such time as the ocean starts to warm at the same rate as the global average, which is certainly not the case now.
        http://woodfortrees.org/plot/hadcrut4gl/mean:120/mean:240/plot/hadsst3gl/mean:120/mean:240/plot/hadcrut4gl/from:1987/trend/plot/hadsst3gl/from:1987/trend

      • The ocean is not expected ever to warm at the same rate as the global average. Go and study CMIP5 model forced simulations.

        “However, λ could be time-dependent.” Yes, possibly. Read section 7f of LC18. But even if CMIP5 models are right about such time-dependency, the effect on warming this century will be trivial (Mauritsen & Pincus 2017).

      • The time dependence is a result of the ocean not initially warming at the same rate as the global average, and the monolithic surface method completely ignores to its detriment this observed aspect of historical data. GCMs can account for it to some extent, and have higher sensitivities.

      • Jim D begins to realise that a noteworthy time dependence of lambda is possible as imagination but not with physics ( only land warms; ocean warms faster than GMST…) . A step foreward.

      • Indeed the reduction of lambda (increase in implied ECS) over time is an ignored aspect of Lewis’s method. He needs to improve this and I think he knows it because there are some mumblings in this direction towards the end of his paper.

      • Well, the lack of a hot spot of warming does indicate not much feedback. That’s good, right? Warming is not as bad as expected?

      • Tropical oceans not yet responding as fast as they need to for equilibrium. It’s a delay in the system. Don’t let the delay fool you. Continents are warming at nearly twice the global average rate.

      • Nic,

        But even if CMIP5 models are right about such time-dependency, the effect on warming this century will be trivial (Mauritsen & Pincus 2017).

        Mauritsen & Pincus (2017) say

        we assume that the centennial response to present-day forcing will be consistent with the response to historical forcing

        This is an assumption that they use to estimate the committed warming. They – as far as I have seen – have not actually demonstrated that a possible time dependency will have a trivial effect this century.

      • Jim D: “Indeed the reduction of lambda (increase in implied ECS) over time is an ignored aspect of Lewis’s method.”
        He can’t consider it in this paper. From the introduction:
        “Using a standard global energy budget approach, this paper seeks to clarify the implications for climate sensitivity (both ECS and TCR) of incorporating the most up-to-date surface temperature, forcing and ocean heat content data.”
        The paper deals with DATA as they are observed, and as you can see in table 3 the lambda seems to be relatively constant in all we could observe during the last 160 years. Therefore Nic can only work with a nearly constant lambda. In some imaginations this is not the case, but that lies outside assured knowledge, if you accept that assured knowledge is derived from what we know from observations.
        “Continents are warming at nearly twice the global average rate.” This is what we expect from every forced warming. We have known this since 1991; see Manabe et al.
        Something new from your side?

      • Yes, from the physics perspective alone, it is a bad assumption that it is constant, and observations would support that the historical value is not representative because the ocean surface is lagging the global mean. These simple models are far from perfect and have some expected biases built into their assumptions.

      • Jim D: “Yes, from the physics perspective alone, it is a bad assumption that it is constant.” So you have to clarify why we see a (nearly) constant lambda in what we observe. It’s a loser’s game: explaining the world without realising the world. Many solutions are thinkable, but nothing that matters!

      • The ocean warming rate is lagging by a constant factor around 20-25% leading to a lower effective sensitivity because of a consequently muted water vapor feedback. Until the ocean surface starts to catch up with the global mean warming, it is going to be a misleading lambda. That is not happening any time soon.

      • ATTP,

        Mauritsen & Pincus 2017 say:

        Finally, feedbacks in comprehensive models often vary with timescale or climate state, especially the pattern of surface warming20. In such models, the actual ECS is uniformly similar to or higher than that inferred from estimates of forcing, warming and imbalance.
        This last issue does not seem particularly relevant: we explored the impact of time-dependent feedback (Methods and Supplementary Fig. 1) and find that, even if we choose parameters corresponding to the strongest effect found among Coupled Model Intercomparison Project Phase 5 (CMIP5) models, the increase in commitment in case d is modest (around 0.12 K).
        Using the model ensemble mean time dependence, the impact is only 0.04K by the year 2100.

      • Nic,
        Yes, if we consider a case where we fix the forcing at today’s value (but with aerosols and short-lived GHGs removed) then the impact of potential feedback time dependencies would probably be modest. However, even Mauritsen & Pincus’s Supplementary Information indicates that it could increase their ECS estimate from ~1.8K to ~2.4K.

      • “The ocean is not expected ever to warm at the same rate as the global average.”

        Sometimes I think people too easily forget the hydrosphere is 1) 300x more massive than the atmosphere and 2) has barely changed temperature enough since 1950 to measure, to the extent we can accurately estimate the temperature at all. The great unremarked flaw in the “hidden heat” theories was always that the haystack of the oceans could keep hiding atmospheric warming needles well past time frames of policy relevance.

      • “The ocean is not expected ever to warm at the same rate as the global average.” OR COOL OFF AT THE SAME RATE!!!!!

        RIGHT! Three fourths of the earth is covered by ocean. Land temperatures go up and down much faster and are harder to average. The ocean temperatures are the best measure of earth temperature and the best measure of the global average.

      • The land and Arctic are warming twice as fast as the ocean. No cause for comfort there because we live and grow food on the land, and Greenland melts with some fairly clear consequences globally.

  39. I’m no physicist, but isn’t there something missing in the middle of this paragraph? Or has someone already raised this?
    “…The estimated 5-95% uncertainty range for λ derived from the MPI-ESM1.1 historical simulations is 1.17−1.63, with a median of 1.43 and an almost identical mean (units for λ are ). The distribution has an estimated standard deviation of 0.137 and shows no evidence of non-normality.[8] The fractional standard deviation of the λ estimates is 0.096× the median estimate.[9]…”

  40. Units for λ are…?

  41. nobodysknowledge

    Repeating my quiz:
    So where does the heating hide?
    And we got a winner. Jim D: it is hiding in the future.
    Air temperature is running faster than sea surface temperature. But just wait. Sea surface will speed up. It is simple physics (as Dessler used to call it when he had to explain why cloud feedbacks are so positive). And most of us cannot even understand it.
    And in the meantime land temperatures will perhaps run faster, so the lag will keep increasing. And Antarctica will not even manage to run at all, falling far behind the Arctic. But in the future it will come. Every year it will save its energy, so just wait. Just simple physics.

  42. Nic Lewis, based on our discussion beginning here
    https://judithcurry.com/2018/04/30/why-dessler-et-al-s-critique-of-energy-budget-climate-sensitivity-estimation-is-mistaken/#comment-871495

    I think that the simple energy budget model you are using in your research cannot overrule the results of GCMs when it comes to the effects of a changing warming pattern due to the warming caused by radiative forcing. However, there is obviously potential for developing improved energy budget models by using regional TOA energy balance equations. Perhaps the sensitivity of the planet to radiative forcing is better characterised by using several regional ECS values instead of only one global ECS value, as I have proposed in my ACP comment, a possibility that Dessler, Mauritsen and Stevens agreed to in their reply to me.

    Anyhow, at the end of the day the scientific results from energy budget models, GCMs and several other lines of evidence will be the basis for the kind of risk analysis that is the goal of the work of the UN’s climate panel, the IPCC. I especially like a discussion addressing this issue between Kerry Emanuel and John Christy available on Youtube:
    “A Conversation on Climate Change”

    My own view on those risks is close to Kerry Emanuel’s. The risk of ignoring the possibility of really bad consequences of the increasing greenhouse gases is not acceptable.

    • Pehr, in common day practice people don’t act according to the precautionary principle. E.g. people who are living behind dikes and on the slopes of volcanoes are not considering each and every time the worst case scenario, they act according to the most likely scenario.

      • In common day practice the insurance industry does seem to act with precaution. At least when it’s on its best behavior. It’s hard to say if anyone abides by the PP, because the PP is ill defined.

        Embracing risks does not imply letting go of all precaution:

        [C]ompeting on the basis of risk can be beneficial when it provides useful incentives for risk management and loss prevention. But, just as embracing risk is counterproductive when it becomes an excuse for leaving large losses on individuals, so, too, is competing on the basis of risks over which individuals have no control or moral responsibility. (Cf. Hellman 1997) Competing on that basis has the inevitable consequence of forcing some individuals to “embrace” – surely not an apt word in this context – large losses over which they have no control.

        Along with embracing risk, we need to re-invigorate a forgotten 19th-century insurance idea – that of destructive competition. When insurance companies compete by “cream-skimming” the good risks and shutting out the bad risks, they destroy the safety net that insurance is supposed to provide.

        There is too much risk at stake to reduce the PP to a strawman used in Freedom Fighters’ battle cries.

      • Insurance companies want to make money, so they exaggerate the risks to potential customers, I frequently get salesmen of security equipment on my doorstep claiming how dangerous my neighbourhood is. It is a novelty that policy should be based on science fiction.

      • Insurance companies that exaggerate the risks to potential customers get eaten by those that do not, Hans, but your anecdote is duly noted.

      • I doubt it, since insurance for climate risk has a very long insurance period. Who is this alleged competitor of Munich Re, Willard?

      • Are you denying that markets exist, Hans?

        Munich Re is a reinsurer. It’s not alone in the reinsuring business. And to return to your “common day practice,” many insurance policies cover property damage. Good luck with finding me one that charges substantially more than the others for the same product and the same market.

        If you can find one that charges substantially less, please contact me.

      • I’m not buying climate insurance Willard. (Pun intended)

      • Hans, people always do everything they can to mitigate against risks. When driving into a fog bank you slow down. When the risks are from high CO2 levels, you slow down emissions.

      • Hans Erren, as I said my view on this issue is close to Kerry Emanuel’s. As a clarifying example, let me cite one of his views from his conversation with John Christy on the topic of what the uncertain projected ECS interval 1.5-4.5 K means for future generations (the temperature figures mentioned are in Fahrenheit):

        This is something that any serious climate scientist–and I think the two of us on this stage represent this, in this respect–would immediately tell you, is that we are very uncertain about these projections. Nobody pretends to be certain about it. But there just isn’t much certainty in that. To say it’s between 2.5 and 9 degrees for a doubling or more of CO_2, Fahrenheit, it’s to confess that we don’t know. All right? And so what we are dealing with is a problem of the[?] t-curve. Okay? The near end, it doesn’t morph, it’s 2.5 degrees–we don’t have to worry very much, I would argue. And I don’t think many of my colleagues would suggest we do. If it’s in the middle range, there will be problems. Probably we’ll adapt to them. If it’s up at the higher end, that could be catastrophic. And the question for me is: Do we do nothing to avoid, even a small risk of catastrophe for our grandchildren? To me, that’s the moral question.

        (Transcript from EconTalk with Russ Roberts
        http://www.econtalk.org/archives/2014/03/john_christy_an.html)

        I agree with Kerry Emanuel that the answer to the moral question is that we must do something. That is why I support the Paris agreement.

      • “Are you denying that markets exist…”

        When we can have villains, yes. Profit seeking corporations that are in on the fix.

        They also use the price jack and price gouge which are poorly defined, but that’s all part of the fix. If they could be defined, we could do something about them but then the sound bites of those acting in our best interests referencing them would decline.

        Mutual Skeptical Insurance of Omaha is offering climate insurance. Quite reasonably priced. Next thing you know, we might be told we have a choice as to whether we buy insurance and, if we do, what kind of insurance we wish to buy.

        Minnesota some time ago told us we had to buy auto insurance. We were being saved from something. I think it was plaintiffs attorneys. We still have auto accidents. If we didn’t have to buy auto insurance, the plaintiffs attorney would have got us by now.

      • afonzarelli

        But, jimmie, slowing down emissions doesn’t impact high CO2 levels. The atmospheric rate of increase is tied to the SSTs of the southern ocean not to human emissions. (but, you knew that already, didn’t you?)

      • Pehr: This is the daunting goal the Paris agreement has set itself

        Yes, the non-Annex I countries will be the cause of the catastrophe, but only if you believe:
        a) that RCP 8.5 is a realistic scenario (which it isn’t)
        b) that CO2 sinks are saturating (which they are not)
        c) that climate sensitivity is high (which is increasingly unlikely)

      • Pehr, there are better ways to deal with the uncertainty than to act regardless, particularly when the uncertainty evolves over time and acting to avoid it carries a high cost. The issue shouldn’t be about an imperative to do everything possible now (and ignore the consequences), which is where the PP takes you if you aren’t careful. Rather, the issue is what we should be doing, and when.

        This leads to investing today into the things that will buy you options in the future, should the evidence start to roll in that things are going awry. Significant investment in R&D would be a classic example. Relatively low cost, you’ll wish you’d done it if things go pear shaped, and through potential spill-overs you probably won’t regret it too much if you got it wrong.

        Just on the insurance issue: most retail insurance for things like SLR is sold on an annual basis, so apart from specialist underwriting I don’t think the industry carries that risk. The property owner does.

      • > I’m not buying climate insurance

        You’ve moved from common day practices to something that doesn’t look like a common practice, Hans.

        Tell me that you do not have home or car insurance.

      • fonzie, interesting theory you have there. Which skeptic thinktank of geniuses came up with that one?

      • True, but you could go further and argue that most people ignore the worst case scenario, and in fact rarely even think about climate change.

        In Atlantic Beach, North Carolina, if you build a house on the beach it must be about 12 feet above mean sea level (this number varies by locality). People build multi-million dollar houses on the beach without blinking an eye. Anyone who can afford this can also afford nosebleed levels of wind/hail insurance. Available flood insurance is woefully inadequate for a beachfront home at this level, but people will self-insure the difference and will self-insure wind/hail when they drop the insurance when the premiums get really silly.

        If a beachfront homeowner suffers a storm loss he/she will rebuild when the rules allow. The point is that people flock to the seacoasts and will continue to do so. Only in places like Miami is flooding a real issue. For the average Joe, climate change is a yawner, and no amount of angst mongering will change this. Those who wring their hands over the population’s move to the coasts need to expend their energies on solving the problems created by this phenomenon.

        If the alarmist scenario happens to play out – that will get people’s attention.
        This possibility is many decades distant in the worst case. By then we’ll know the truth about this issue.

      • afonzarelli

        Jim, it ain’t a theory. It’s an observational fact. It’s one of the more bizarre aspects of agw to think that we can control climate with emissions reductions when we can’t even control the atmospheric growth rate. That’s why i think it’s incumbent on folks to look at all aspects of the issue regardless of whatever view the mainstream establishment holds. You’ve done this yourself wrt warming prior to 1950, bucking the establishment view that it was natural. (and kudos to you for that) There are plenty of false paradigms out there whether they be in climate science or other disciplines. Science is like a creaky old dinosaur that’s slow to change and not very fleet on it’s feet. (and we don’t have to wait for science to catch up before proclaiming the obvious)…

      • afonzarelli

        it’s should read its

      • afonzarelli

        (its feet)…

      • afonzarelli: The atmospheric rate of increase is tied to the SSTs of the southern ocean, not to human emissions.

        Where did you get that? It sounds like a “Salbyism”.

      • fonzie, the way science develops and gets accepted is that it explains things. First you get a rough draft like Tyndall or Arrhenius over a century ago. Then later with more measurements and various ways of testing it, the explanation becomes more refined over time, and it explains more things, like paleoclimate or other planetary atmospheres as they become better known, and then we got computers and GCMs, and you don’t get a decent climate without representing the physics of greenhouse gases. It is now refined to the level of molecular IR properties, and the largest uncertainty about the future is what we will do to the atmosphere, not the physics itself. There are “skeptics” who want to throw all this out and start again from a couple of centuries ago, but I don’t think they accept much science anyway, so they’re special.

      • fonzie, my reply was moderated. You may have to wait because I am not going to figure out what the offending word was. It should appear above.

      • Don Monfort

        Just another Sky Dragon. Seems like more of them landing here, lately. Gives yimmy a chance to fight somebody his own weight.

      • Willard. I.am.not.buying.a.climate.insurance.period.
        Show me proven observed statistics that will convince me. Not some Ehrlich-type science fiction scenario.

      • Hans. You. Moved. The. Goalpost.

        You were talking about “common day practice.” Either you deny that you have car insurance and wear your seatbelt, or you fold. Even a hardcore contrarian like you is more precautionary than you’re openly willing to admit.

        It’s quite simple, really.

      • Not really, Willard; you remind me of a market salesman who first misleads his public into thinking they have a problem, and then sells them a remedy that doesn’t work, even if there were a problem.

      • DO AS I SAY NOT AS I DO:

        https://www.washingtontimes.com/news/2018/may/7/climate-skeptics-more-eco-friendly-global-warming-/

        Climate skeptics more eco-friendly than global-warming alarmists: study

        Skeptics more likely to recycle, ride bus than those ‘highly concerned’ about climate

        Al Gore has been accused of hypocrisy for talking the talk on climate change despite burning through fossil fuels at a rapid clip, but it turns out he’s not alone.

        A study by Cornell and the University of Michigan researchers found that those “highly concerned” about climate change were less likely to engage in recycling and other eco-friendly behaviors than global-warming skeptics.

        Published in the April edition of the Journal of Environmental Psychology, the one-year study broke 600 participants into three groups based on their level of concern about climate change: “highly concerned,” “cautiously worried,” and “skeptical.”

        The “highly concerned” cluster was “most supportive of government climate policies, but least likely to report individual-level actions, whereas the ‘Skeptical’ opposed policy solutions but were most likely to report engaging in individual-level pro-environmental behaviors,” the researchers concluded.

        Conducting the study, entitled “Believing in climate change but not behaving sustainably,” were Cornell assistant professor Neil A. Lewis Jr. and University of Michigan researchers Michael P. Hall and Phoebe C. Ellsworth.

        The skeptics were more likely than the “highly concerned” to recycle, use public transportation and reusable shopping bags, and buy eco-friendly products.

        “Belief in climate change predicted support for government policies to combat climate change, but did not generally translate to individual-level, self-reported pro-environmental behavior,” said the paper.

        Why? Even the researchers were stumped, although it’s possible that skeptics may place more emphasis on personal responsibility than government action.

        “These results suggest that different groups may prefer different strategies for addressing climate change,” said the paper. “Thus, belief in climate change does not appear to be a necessary or sufficient condition for pro-environmental behavior, indicating that changing skeptical Americans’ minds need not be a top priority for climate policymakers.”

        As Pacific Standard’s Tom Jacobs put it, “remember that conservatism prizes individual action over collective efforts.”

        “So while they may assert disbelief in order to stave off coercive (in their view) actions by the government, many could take pride in doing what they can do on a personal basis,” he said in a Friday post.

        Mr. Gore, a leading climate-change activist, has long come under fire for his carbon-emitting ways, such as burning 21 times more kilowatt hours annually at his Nashville mansion than the average U.S. household, according to a 2017 study by the National Center for Public Policy Research.

        His swimming pool alone uses enough electricity to power six average homes for a year, the study said.

        Mr. Gore told CNN last year that he leads a “carbon-free lifestyle to the maximum extent possible,” pointing out that he doesn’t own a private jet and that he buys carbon offsets to balance his home and flights on Southwest Airlines.

        Copyright © 2018 The Washington Times, LLC.

      • > you remind me of

        That reminds me of someone who has no argument, Hans.

        From your own corner of the world:

        Banks, insurers, and other financial institutions must do more to take into account the risks posed by climate change to their business, a Dutch Central Bank study said.

        As global warming increases the risk of extreme weather events, regulators are giving more attention to its economic and market implications, with estimates showing that a single high-impact storm could cause damages of as much as 60 billion euros ($71 billion), according to the report published on Thursday. The Netherlands, which is largely below sea level, runs an inordinate risk of being affected by such events.

        https://www.insurancejournal.com/news/international/2017/10/05/466551.htm

        In the end, Nature bats last.

      • afonzarelli

        Matthew, always nice to see you… Note the graph in my comment time stamped @5:10 pm. With the exception of the Pinatubo volcano years (higher photosynthesis due to light scattering), there is a high fidelity between temperature and the atmospheric carbon dioxide growth rate. If you take the hadsst3sh temperature series all the way back to 1850, there is also a good match with ice cores: about .2 ppm/year in the late nineteenth century. Ferdinand actually did a graph converting the growth rate to the cumulative ppm (back to the turn of the century) in which he got a difference of as much as 20 ppm in the early twentieth century and an exact match at the turn of the century with ice cores. However, he used a slightly erroneous scale borrowed from Bartemis. The proper scaling would yield a difference of just 10 ppm max in the early twentieth century, at a time when one might expect greater smoothing in cores due to an abrupt change in the calculated growth rate. While the temperature relationship is most infamously associated with Salby, he seems to me to be a latecomer. I first learned of it via Dr Spencer about ten years ago. Allan McCrae also lays claim to it about that time. It had also been floating around prior to that, with the likes of Ohio State prof Robert Wagner referencing it, as well as Jaworowski as far back as the nineties. (so this is hardly new) The problem with the skeptics’ take on this relationship is that they inevitably come to the conclusion that this means the rise in CO2 is largely natural. (it’s one thing to say that the temperature/growth-rate relationship exists, quite another to discern what exactly the relationship means)…

      • afonzarelli: there is a high fidelity between temperature and the atmospheric carbon dioxide growth rate.

        but you added “not [tied] to human emissions”, which is an inferential step too far. A reasonable hypothesis is that the entire increase since 280 ppm or so is due to human emissions, but temperature controls the rate of growth in the atmosphere by controlling the rate of solution into terrestrial water.
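Matthew’s distinction (a rise that is wholly anthropogenic even though temperature modulates the year-to-year growth rate) can be sketched with a toy box model. All parameters below are illustrative assumptions, not a calibrated carbon-cycle model:

```python
# Toy illustration: the long-term CO2 rise is driven entirely by a constant
# human source, while the year-to-year GROWTH RATE still wiggles with
# temperature. A growth-rate/temperature correlation therefore does not
# show that the rise itself is natural. (All numbers are illustrative.)
import math

YEARS = 60
SINK_RATE = 0.02       # fraction of excess CO2 removed per year (assumed)
TEMP_COUPLING = 0.5    # extra growth, ppm/yr per K of temperature anomaly (assumed)

def run(emissions_ppm_per_yr):
    """Integrate the one-box model and return the total rise above 280 ppm."""
    conc = 280.0       # pre-industrial baseline, ppm
    for t in range(YEARS):
        temp_anom = 0.3 * math.sin(2 * math.pi * t / 4.0)  # ENSO-like wiggle, K
        sink = SINK_RATE * (conc - 280.0)                  # concentration-dependent sink
        conc += emissions_ppm_per_yr - sink + TEMP_COUPLING * temp_anom
    return conc - 280.0

rise_with_emissions = run(2.0)  # constant human source: a large sustained rise
rise_without = run(0.0)         # no human source: only small wiggles, no trend
```

With the source switched off, the temperature wiggles alone produce no sustained rise in this model, even though they dominate the interannual variability of the growth rate.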

      • afonzarelli

        Don da Mon, my take on “sky dragon slayers” is that they have some sort of hang-up with the second law of thermodynamics. (kind of like they won’t use a thermos because it can’t possibly keep their coffee warm) i’m not one of those, but i think that there are some aspects of climate science where group think has taken over. Skeptics are generally for the RCP scenarios, consistent with mainstream climate science, but i think the data disagrees. No harm in that. i don’t have a take on whether or not the very same data means that the rise in CO2 is natural. Maybe it does, maybe it don’t. (although i lean towards the rise being anthropogenic)…

      • afonzarelli

        Well, Matthew, I think that i articulated it correctly. Regardless of the rate of emissions, the atmospheric growth rate sticks with temperature. Therefore, the growth rate is tied to temperature and not emissions.
        Now, i’m not even actually sure that we need emissions at all to maintain the high growth rate. Without going into it too deeply here, i think we could be looking at a positive feedback loop with largely natural CO2. (anthropogenic causes may have jump started the loop with anomalous warming back in the nineteenth century only to have natural CO2 take over as time progressed) i’m oversimplifying here, but that’s the gist of how i sees it. i think that Jim’s observations of anthro warming going all the way back to the mid 1800s are pertinent here. With all that anomalous warming, we should expect to see anomalously high carbon dioxide levels regardless of emissions. (again, without cramming everything into one comment, i have an awful lot of explaining to do… ☺)

      • afonzarelli

        Jim, i’m a little bit pooped here (time for beddy bye). So i’ll just leave you with a parting thought. Were it not for the irregularities in the orbit of mercury would we have a general theory of relativity today?

      • fonzie, you will notice that Einstein just refined Newton’s theory, and did not overturn it. That is the way science progresses. Currently the main forcing terms are known, and their effects are being refined. There is no new force out there waiting to be discovered.

      • Willard, you are the snakeoil salesman here. Trying to sell a product that doesn’t work for a problem that doesn’t exist.

        No I’m not buying it.

      • afonzarelli: (anthropogenic causes may have jump started the loop with anomalous warming back in the nineteenth century only to have natural CO2 take over as time progressed

        Does that imply that Ocean pH has risen over the past century or so, and will continue to rise if temperature rises? (ie, the opposite of “acidification”) If natural sinks are in fact the sources of most atmospheric CO2 over the last century or so, has all human CO2 been simply added to the atmosphere? If so, how much is that?

      • Don Monfort

        fonzie, wonzie

        If you half-filled a 5-gallon bucket with water and placed it in your back yard, then added a cup of water to it every day, would you say that you had no effect on the amount of water in the bucket after a week?

      • afonzarelli

        fonzie, wonzie

        Oh No! Branded for life! (i am a goner… ☺)

        Don, not an apt analogy. Imagine a wash tub with a strong flow of water and a sizable drain. The net gain in the tub is one cup of water per minute. Now add your one cup each minute to the much larger flow from the spigot. There will be a very tiny additional net gain to the tub and an appreciable one cup gain to the amount draining from the tub. (the increase in the water draining from the tub will now be equivalent to the net gain in the tub)

        What you are tapping into here is the pseudo mass balance argument. That argument does not preclude the possibility that human emissions are having little or no impact on the carbon dioxide growth rate. (whether the mass of aco2 is sinking at a rate 50% or near 100%, nature will be a net sink for carbon in either case)…
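The bookkeeping behind the mass-balance argument can be made explicit. The sketch below uses the illustrative ppm/yr figures quoted elsewhere in the thread (Engelbeen’s ~75 ppm natural throughput and ~4.5 ppm human emissions), plus an assumed ~2.3 ppm observed growth rate:

```python
# Mass-balance sketch: whatever the size of the gross natural fluxes,
# conservation of mass fixes the NET natural flux once human emissions and
# the observed atmospheric increase are known. (Figures are rounded
# assumptions, in ppm/yr.)
natural_gross_source = 75.0   # gross natural flux into the atmosphere
human_emissions = 4.5         # human emissions
observed_increase = 2.3       # measured atmospheric growth

# observed_increase = human_emissions + natural_gross_source - natural_gross_sink
natural_gross_sink = human_emissions + natural_gross_source - observed_increase
net_natural = natural_gross_source - natural_gross_sink

# net_natural is negative: nature removes more CO2 than it adds whenever the
# observed increase is smaller than human emissions, regardless of how large
# the gross throughput is.
```

Note that the result depends only on the difference between the observed increase and human emissions; the 75 ppm gross throughput cancels out entirely.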

      • afonzarelli

        Test (wp at it again)…

      • afonzarelli

        human emissions are about 4.5 ppmv CO2 per year — natural throughput is 75 ppmv in per year ~ferdinand engelbeen 4/30/14

        Matthew, even if the rise in co2 were due entirely to natural sources, those sources would be sinking out at a rate close to 100%. ACO2 could also be sinking at a rate close to 100%. (if we observe aco2 sinking at least at roughly 50%, then there is no reason why it couldn’t be sinking at a rate closer to 100%, too, with natural sources making up the difference) If that were the case, i would think that would increase ocean acidification by virtue of the ACO2 sinking into the ocean. If the rise is largely from natural sources (granted, triggered by human activities), then ACO2 would be winding up in the sinks not the atmosphere…

      • Don Monfort

        Very silly, fonzie wonzie:”If the rise is largely from natural sources (granted, triggered by human activities), then ACO2 would be winding up in the sinks not the atmosphere…”

        ACO2 mixes with the gases in the atmosphere, fonzie wonzie. Like the water you add to your bucket mixes with the water that you started with. Natural evaporation and rain changes the level of the water in the bucket. Guess what happens if you stop pouring water into the bucket, fonzie wonzie? Try to think about it like an intelligent human being and not a Sky Dragon. Give it your best shot.

      • > No I’m not buying it.

        I could not care less if you buy it or not, Hans. It’s what your government does that matters most.

        In any case, your claim that “in common day practice people don’t act according to the precautionary principle” is empirically false. Trying to make it about me won’t change that fact.

        Next time, think before you type.

      • Don, you didn’t complete your analogy:

        – How much rain fell over the course of the week?
        – How many hot, sunny days were there?
        – How many birds and animals came by to drink from the bucket?
        – How much water was lost from that tiny hole just above the 2.5 gallon mark?

        I think you can safely say that your cup of water every day has an effect on the water level by the end of the week. I doubt you can quantify it with any precision without making some very careful daily observations. Certainly not by simply measuring the water level.

      • Don Monfort

        Let fonzie wonzie have a try at it, wilbot. We already know you are stubbornly clueless.

      • afonzarelli

        Don, WordPress ate my comment to you and has another comment telling you that wp ate my comment stuck in moderation with notification. (hard to enter the fight when i’m tripping over the ropes trying to get into the ring… ☺)

      • Don Monfort

        Somebody must have turned on the Sky Dragon filter. Good luck. I will help you if your comment doesn’t show up soon.

      • afonzarelli

        Don’t, short and sweet. (hopefully wp will let this go thru) Try your analogy again using a wash basin with spigot and drain on a short time scale. (then add your cup o’ water)…

      • afonzarelli

        Don’t should read Don

        (fourth down, time to punt… ☺)

      • Don Monfort

        OK, you are hopelessly hopeless. Whatever I say you will want to change the story. Watch this: the rain is the spigot, the evaporation is the drain. That is a natural system with ADDED human intervention, you putting water into the system. That’s more water into the system. No way around that, fonzie wonzie. If you want to add animals drinking, martians stealing the water at night, whatever. Have fun with that.

        I will leave you with a final thought that logically should cure your Sky Dragoness, but it won’t:

        It’s the olden olden days, before governments got involved in banking regulation. You were an industrious guy who saved his silver shekels. You deposited over time 100 silver shekels in a bank for safekeeping. OMG! The bank got robbed. You are scared it will get robbed again so you demand your 100 silver shekels. They tell you that they were robbed of 327 silver shekels and 100 of them were yours. Sorry.

      • Don Monfort

        Have our latest infestation of buzzing Sky Dragons exited, flying in random directions? Anyway, this is for the lurking Sky Dragons who don’t get into the discussion, because they know their limitations:

        Put a scale under fonzie’s bucket to record second-by-second changes in the weight of the bucket and water. After a month, examine the chart. The daily additions of fonzie’s little sippy cup of water will show clearly. The little ups and downs due to evaporation will be evident, as will any significant event of rainfall. We had continuous bucket video monitoring, in case a unicorn came along and drank some water. We didn’t see anything unusual. The bucket plus water weighed more than it did in the beginning. The water at the end of the month weighed X. But what does that tell us?

        So we went back and edited the tape. We added the neighbor’s unicorn playing a trick on fonzie wonzie. Whenever fw added a cup of water, the naughty unicorn jumped the fence and drank a cup of the water from the bucket. Some of the water the unicorn drank would have been water added by fw. The water molecules were well mixed and fungible (look that up fonz). The unicorn had no preference for fonzie’s water over the rest of it. We then edited the graph of the weight record to reflect the unicorn’s drinking a cup of water for every cup that fw had added. Now the weight of the water is significantly reduced. I will leave how much less to your imagination.

        If you don’t believe me, you are a hopeless Sky Dragon committed to willful ignorance. Unless you have some convincing evidence that adding a cup of water per day, CAUSED a cup of water per day (on average) to be removed from the bucket.

      • “If you don’t believe me, you are a hopeless Sky Dragon committed to willful ignorance.”
        And if you believe Don, you are hopelessly gaslighted. Don’s bucket is leaky and he knows it’s leaky. He’s never actually measured the leaks, but he’s convinced that daily rainfall exactly compensates for them and maintains the water level in the bucket at exactly the 2.5 gallon level. He’s convinced there are no leaks above the 2.5 gallon water level, but he has never actually confirmed this. He just assumes all the leaks are below this level.

      • Don Monfort

        That’s just a load of ignorant crap, wilbot. Define the leaks and explain by what mechanism they selectively take the well mixed ACO2 molecules out of the atmosphere. Explain how you add gazillions of tons of ACO2 to the atmosphere, but have no significant effect on the natural CO2 budget. If it all goes into natural sinks then it is taking up space that could be taken up by natural CO2. Explain how ACO2 causes itself to be totally removed. Oh, it’s leaking. Please continue to make a fool of yourself. Yimmy loves it.

      • afonzarelli: Matthew, even if the rise in co2 were due entirely to natural sources, those sources would be sinking out at a rate close to 100%.

        What natural source other than terrestrial water produces CO2 at a rate proportional to global mean temperature?

      • “Explain how …”
        No need to, Don, you’ve already explained how it works. We are living in a golden age when CO2 sinks are perfectly and permanently balanced with all natural CO2 sources. No sinks exist that are dependent on atmospheric CO2 concentration, so that even small amounts of CO2 expelled into the atmosphere by humans will inexorably increase that concentration.

        Did I get that right?

      • Don Monfort

        Of course, you didn’t get it right. That’s a bunch of wing waving BS. You are a willfully ignorant Sky Dragon. End of story.

      • That should read that there was a balance in the surface carbon until we ejected a whole new lot of fossil carbon from deep down into the air, and the surface system is unable to keep up with that increase which is why both the atmosphere and ocean are showing increases. That extra carbon in the surface air/ocean/land system came from somewhere else and you probably know where. There is a carbon budget that only balances when you consider the fossil fuel source.
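Jim D’s budget point can be checked with round numbers. The figures below are common approximations treated here as assumptions, not measurements from this thread:

```python
# Back-of-envelope budget check: converting fossil emissions to their
# ppm-equivalent shows the observed atmospheric rise is only about half of
# what humans emit, so the budget closes only if the fossil source is
# counted and nature is a net sink. (Rounded figures, assumed for
# illustration.)
GTC_PER_PPM = 2.13            # ~2.13 GtC of carbon per 1 ppm of atmospheric CO2
fossil_emissions_gtc = 10.0   # approx. annual fossil-fuel + cement emissions, GtC/yr
observed_rise_ppm = 2.4       # approx. recent annual CO2 growth, ppm/yr

emissions_ppm = fossil_emissions_gtc / GTC_PER_PPM   # ppm-equivalent of emissions
airborne_fraction = observed_rise_ppm / emissions_ppm
net_sink_ppm = emissions_ppm - observed_rise_ppm     # taken up by ocean and land
```

With these inputs the airborne fraction comes out near one half, matching the usual observation that roughly half of emitted CO2 stays in the air.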

      • Don Monfort

        It could be worse, yimmy. At least he admits to the existence of ACO2. I put him on the canvas for you, yimmy. He’s battered and exhausted. Don’t let him get up.

      • Carbon increases both in the air and in the ocean. Where did it come from? They just seem to be clueless when the amount even matches emissions. They painted themselves into a corner when it comes to budgets.

    • ???

      If the risks were really significant, insurance premiums would be much higher. But they’re not. That’s because, unlike wild eyed rabble rousers, insurers are bounded by the reality of observations.

  43. Pehr: “The risk of ignoring the possibility of really bad consequences of the increasing greenhouse gases is not acceptable.” I don’t understand this sentence in the context of this more technical blog post.
    Re “patterns”: What pattern exactly do you mean? When one compares the spatial temperature trends for land and ocean observations vs. the CMIP5 mean on longer timescales (roughly the same time spans as in L/C18), as I did with a little help from the KNMI climate explorer

    (observed SST vs. tos)

    (observed land vs. tas land)
    one gets the impression that the spatial correlation is nowhere above 0.6; for the oceans this correlation is in many regions well below 0.5.
    Do you mean that the spatial pattern of the CMIP5 mean is more meaningful for the estimation of the sensitivity than the observed one?

    • frankclimate, as usual in climate blog discussions the context is what the presented results of climate science mean in the form of risks for future problems and catastrophes.

      Do you mean that the spatial pattern of the CMIP5 mean is more meaningful for the estimation of the sensitivity than the observed?

      Yes, because it is the final spatial pattern that counts for calculating the ECS. ECS refers to the temperature change after the planet has reached a new state of equilibrium and the spatial pattern in that equilibrium state must be known for calculating ECS. Thus, this final spatial pattern is not available through observations but only through climate models.

      • Pehr, I think that ECS is only knowable through climate models, and in particular those that can emulate the experiments that define it. It is a useful metric for comparing those types of models, but it has diminishing returns the further one gets away from that. To do any comparison with models that can’t replicate it, one has to include a step where the more complex model converts internally to the simpler model’s metric. Not very satisfactory.

        If trying to evaluate model performance there are much better and more direct experiments to use (eg performance outside sample).

      • Pehr: “Thus, this final spatial pattern is not available through observations but only through climate models.”
        Just for clarification: as shown in my comment, these patterns diverge considerably from the observed pattern up to now. How can you know that this “final pattern” is available through climate models?
        Or, the other way around: would you also maintain in, let’s say, 2050, that the observations are somehow meaningless if the climate models (the mean of them) still calculate another warming pattern?

  44. Pehr: “Thus, this final spatial pattern is not available through observations but only through climate models.” I don’t think so. The deviations in the “patterns” over the observed time span point more to model deficiencies than to the wisdom of the GCMs used vs. the real climate system. If the models used only the known physics this would not be the case. Instead they use many “tuned” parameters, including the “engineering” of the sensitivity from cloud microphysics. The shown figures should make you sceptical too, shouldn’t they? Models should make a most reliable image of the real world. If they do not (I showed it with the spatial pattern) you can’t trust them more than the observations.

    • frankclimate, in a very simple model like the simple energy budget model used by Nic Lewis there are only a few parameters, such as the global climate feedback parameter, but this comes at a cost, namely that the model gives a brutally simplified representation of the real physics. Thus, Nic Lewis’ model has no means of adapting to the changing warming pattern, resulting in more or less erroneous ECS values. A comparison of such ECS values with GCM results, obtained with models that do adapt to a changing warming pattern, is therefore of less interest.

      That is also the reason why I proposed that an improved, more advanced, energy budget model should be developed based on regional TOA energy balances. Such a model can adapt to a changing warming pattern and should, therefore, be a better model for producing ECS results in order to compare with GCM results.

      • Pehr,
        I often hear scientists defending AOGCM results over inferences drawn from EBMs on the grounds that you cannot expect a “brutally simplified representation of the real physics” to yield meaningful results. For some reason, this is often followed by word-salad references to “complex-coupled-non-linear-chaotic-system behaviour”, although I am not accusing you of this. Outside discussions of regional behaviour or spatial field characteristics the argument is almost always complete baloney. The climate models are deterministic models. Chaos arises from the mathematical instability of the dynamic equations describing energy flux and fluid transport in space and time. This manifests itself as a sensitivity of model forecasts to changes in initial conditions. Chaos rules k.o. At the aggregate level, however, any species which is implicitly conserved in those equations should be conserved in the aggregate. What is implicitly conserved is defined by the epistemic structure underpinning the model build.
        Consider an analogue problem: the miscible displacement of a fluid in heterogeneous porous media. The distribution of static petrophysical properties of the medium is unknown and unknowable, but the possible populations are constrained/conditioned by some sample measurements as well as softer information concerning the structural form of the sedimentary system. The dynamics of this problem are described by a set of deterministic coupled non-linear equations governing the interchange of chemical components between phases and the velocities of the phases at each point in space. These equations are unstable, and the system described here is correctly characterised as a complex, coupled, non-linear chaotic system. Multiple realisations of the heterogeneous rock properties within known or assumed constraints yield multiple outcomes in terms of the distribution of components in space at any given time. Uncertainties in fluid properties as pressure and temperature change add uncertainty to the results for any single static realisation, magnified by the instability of the dynamic processes. Hence, the best one can hope for at any point in space is a probability distribution of the mass of any of the chemical components of the system. At the aggregate level, however, the system is EXACTLY specified by expressions of mass conservation: mass in less mass out equals mass accumulation, with complete validity across the entire range of predictive outcomes. These balances are (always) used as one of the necessary tests to verify that the governing equations are being solved correctly, and must form a quintessential part of any upscaling solution.

        I would hope that if you were working on a CCS project where you were trying to solve the miscible injection problem I describe above, and I were to point out that there must be an error somewhere in your calculation process because at the aggregate level your continuity equations were not satisfying mass balance, then you would not be stupid enough to reject the critique on the grounds that the zero-dimension equation is a “brutally simplified representation of the real physics”!

        The AOGCMs strive to satisfy energy conservation (but do not manage to do so implicitly in the equation structures which have been made publicly available). Ironically, it was a perceived need to improve the match to the energy budget which led to the introduction of a major fudge in ocean-atmosphere coupling – so-called flux adjustments – still present in most models prior to the TAR and still retained in some, I believe. Given the assumptive basis, the equation:

        Energy In – Energy Out = Energy Accumulation

        is an EXACT zero-dimension expression of conservation. It is a target for AOGCMs even if they fail to quite meet it. Differentiating the energy balance yields the STILL EXACT expression of instantaneous net flux balance.
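        That differentiated balance is exactly what energy-budget estimation exploits. As a minimal sketch with purely hypothetical numbers (not LC18's published inputs):

```python
# Minimal sketch of the energy-budget estimate that follows from the
# differentiated balance: ECS = F_2x * dT / (dF - dN).
# All numbers below are hypothetical, chosen only to show the arithmetic.
F_2x = 3.7  # forcing from doubled CO2, W/m2 (a commonly used value)
dT = 0.8    # change in global mean surface temperature, K
dF = 2.5    # change in total forcing, W/m2
dN = 0.6    # change in top-of-atmosphere imbalance (heat uptake), W/m2

ecs = F_2x * dT / (dF - dN)
print(f"energy-budget ECS estimate: {ecs:.2f} K")
```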

        We can argue about the specific assumptions that go into the estimation of the aggregate terms in this latter balance, and I will do so in a separate comment if I can find the time, but my main point is that I think it is foolish indeed to argue that there is some sort of hierarchy of validity which places the AOGCM results somehow ABOVE the energy budget. This is exactly ass-backwards.

      • Kribaez, Excellent comment. Your point about simple global conservation is applicable to most engineering calculations. I made the point above that momentum flux balance can be used to calculate the drag of an airplane if you know the thrust and lift. That’s a lot more accurate than doing a first principles CFD calculation. “More physics must give a better answer” is a common mantra in the naive modeling world that is not really supported by theory or experience.

      • kribaez, but I always thought quantum mechanics showed it was unwise to use Newtonian mechanics at the human scale because it was a “brutally simplified representation of the real physics”.

        As you say, theories are judged on their utility for the task in hand.

  45. Attempting to summarize what I have taken from this exchange (of course I could be wrong on this):

    1. Dessler’s tweet that he had already shown LC18 to be wrong is fully rebutted here; to me it looks clearly wrong, and to have been thrown under the bus by Dessler himself in further discussions.
    2. Dessler et al. showed that in ONE GCM there was a lot of climate noise (I guess he would say it’s internal variability) aside from the long-term forced response. Of course there is no scientific reason whatsoever to credit this single model with being able to show realistic internal variability and indeed realistic long-term forced response. There have been a lot of negative results lately whose implication is clear.
    3. In fact, LC18 tries to take account of the “internal variability” and it does show up in the final PDF. Whether their model is accurate is also of course open to question.
    4. Dessler stated at ATTP to Mosher that we have no good methods to separate internal variability from forced response. If that’s true, a lot of climate science looks a lot less certain, including perhaps LC18.
    5. The pattern-of-warming argument is interesting, as it seems to me to be strongly biased. If in fact the pattern of warming in the real world differs a lot from that in GCMs, the most obvious explanation is that the models are missing something in the tropics. Candidates pointed to by Zhao et al., for example, are tropical convection and cloud microphysics models. It is straining a lot to believe that we are in some “improbable” portion of the attractor and that in the long term GCMs must be right. That is a faith-based position. This is also the substance of Venema’s new doctrine of the single dice throw.
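    On point 3, the arithmetic of why internal variability contributes little is worth sketching (illustrative numbers only, not LC18's actual figures): a small variance added in quadrature to a dominant one barely widens the total.

```python
import math

# Illustrative arithmetic only (not LC18's actual numbers): when a small
# internal-variability term is added in quadrature to a dominant
# forcing/heat-uptake uncertainty, the combined uncertainty barely widens.
sigma_dominant = 0.30   # dominant uncertainty (arbitrary units)
sigma_internal = 0.08   # internal-variability contribution

sigma_total = math.sqrt(sigma_dominant**2 + sigma_internal**2)
growth = sigma_total / sigma_dominant - 1.0
print(f"combined sigma = {sigma_total:.3f} ({growth:.1%} wider)")
```

    With these numbers the total grows by only a few percent, which is why internal variability shows up in the final PDF but does not dominate it.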

    I found, thanks to VTG, an exchange between Annan and Lewis at the defunct Climate Dialogue site. It’s interesting in that Annan, while expressing a lot of vague reservations about Nic’s choices, says essentially that the EBM methods are perhaps the best methods we have for estimating ECS. It’s excerpted above.

    The question that arises is: if you don’t like the recent EBM results, what is the alternative? It seems the answer is GCMs, and this answer is I think quite unsupported by CFD experience, sub-grid modeling science, or observations.

  46. Using a unit that communicates precise climate energy would be advantageous for recording current temperature and weather changes. Over time, we may see a clear difference in the data. Sharing this information via a cloud provider can allow us to see energy change differences between different places. I am glad you were able to elaborate on and counter Dessler’s work. Thank you for being so informative.

  47. Pingback: Weekly Climate and Energy News Roundup #314 | Watts Up With That?


  49. > It’s interesting in that Annan while expressing a lot of vague reservations about Nic’s choices says essentially that the EBM methods are perhaps the best methods we have for estimating ECS. It’s excerpted above.

    The periphrase “a lot of vague reservations” indicates we are witnessing a technical comment.

    To cut to the chase, in his Climate Dialogue main contribution, James Annan offers an overview of the debate, and concludes:

    The recent transient warming (combined with ocean heat uptake and our knowledge of climate forcings) points towards a “moderate” value for the equilibrium sensitivity, and this is consistent with what we know from other analyses. Overall, I would find it hard to put a best estimate outside the range of 2-3°C.

    He also mentions the possibility to use the “full CMIP3/CMIP5 ensemble of climate models from around the world.”

  50. It’s worth quoting in full Annan’s statements on GCM’s.

    “GCM ensemble-based constraints
    Some (including me) have tried to generate constraints based on creating an ensemble of GCM simulations in which parameters of the GCM are varied, and then the models are generally evaluated against observations in some way to see which seem more likely. Unfortunately, the results of these experiments seem to be highly dependent on the underlying GCM, as was first shown by Yokohata et al 2010 and has also been confirmed by others (Klocke et al 2011). Therefore, I no longer consider such methods to be of much use. The underlying problem here appears to be that changing parameters within a given GCM structure does not adequately represent our uncertainty regarding the climate system. An alternative which might have the potential to overcome this problem is to use the full CMIP3/CMIP5 ensemble of climate models from around the world. These models generate a much richer range of behaviour, though debate still rages as to whether this range is really adequate or not (and for what purposes).
    Some recent papers which explore the CMIP ensembles have presented arguments that the climate models with the higher sensitivities tend to be more realistic when we examine them in various ways (e.g. Fasullo and Trenberth 2012, Shindell 2014). If these results are correct, then the current moderate warming rate is a bit of an aberration, and so a substantial acceleration in the warming rate can be expected to occur in the near future, sufficient not only to match the modelled warming rate, but even to catch up the recent lost ground. It must be noted that these analyses are primarily qualitative in nature, in that they do not provide quantitative probabilistic estimates of the sensitivity (instead merely arguing that higher values are preferred). Thus it is difficult to judge whether they really do contradict analyses based on the recent warming.”

    Sounds like there is not much there regarding ECS. Given recent results (more recent than Annan’s) showing that ECS in a GCM can be engineered over a broad range by plausible parameter choices in convection and cloud models, the picture has gotten even more pessimistic than Annan knew about at the time.

    I think the second paragraph pertains to the “pattern of warming argument” and appears to be rather skeptical.

    In any case Annan is just 1 scientist. As I said, recent negative results and the fundamental science of CFD, sub grid models, and nonlinear model behavior point rather strongly to GCM’s not being good for much except perhaps “understanding” the climate system. Quantitatively, they seem to exhibit little skill.

    • When one considers that the aerosol impact (direct, and indirect via ACI) is one of the key issues for the estimation of the sensitivity of the climate system vs. CO2, one should include the opinion of a specialist in this field, Bjorn Stevens. He states:
      “We are averse to the idea that climate models, which have gross and well-documented deficiencies in their representation of aerosol–cloud interactions (cf. Boucher et al. 2013), provide a meaningful quantification of forcing uncertainty. Surely after decades of satellite measurements, countless field experiments, and numerous finescale modeling studies that have repeatedly highlighted basic deficiencies in the ability of comprehensive climate models to represent processes contributing to atmospheric aerosol forcing, it is time to give up on the fantasy that somehow their output can be accepted at face value.”
      Source: http://pubman.mpdl.mpg.de/pubman/item/escidoc:2382803:9/component/escidoc:2464328/jcli-d-17-0034.1.pdf
      In the end I don’t think that the (present) model-derived sensitivity estimates will have a good standing in the science community.

  51. Pingback: It’s mostly about risk | …and Then There's Physics

  52. > In any case Annan is just 1 scientist.

    This technical comment may be imprecise, since we should also count Jules.

    Nic also is only one researcher. Applying the pigeonhole principle (a 1-1 correspondence) between scientists in favor of the mainstream view and contrarians may exhaust the Contrarian Matrix real quick. It might be best to build a deeper bench before making any bandwagon appeal.

    It might be worth quoting what James said about seeking the lowest bounds of luckwarm disingenuousness:

    One rather fundamental point needs to be clearly understood at the outset of the discussion: there is no “correct” pdf for the equilibrium sensitivity. Such a pdf is not a property of the climate system at all. Rather, the climate sensitivity is a value (ignoring quibbles over the details and precision of the definition) and a pdf is merely a device for summarising our uncertainty over this value. An important consequence of this is that there is no contradiction or tension if different lines of evidence result in different pdfs, as long as their high probability ranges overlap substantially. All that this would mean is that the true value probably lies in the intersection of the various high-probability ranges. Thus the question of weighting different methods higher or lower should not really apply, so long as the methods are valid and correctly applied. If one result generates a range of 1-3.5°C and another study gives 2-6°C then there is no conflict between them.

    Adjusting for a bit of over-optimism in each study (i.e. underestimation of their relevant uncertainties) we might conclude in this case that an overall estimate of 1.5-4°C is probably sound – the result in this hypothetical case having been formed by taking the intersection of the two ranges, and extending it by half a degree at each end. Additionally, if one result argues for 2-4°C and another 1-10°C, then the latter does not in any way undermine the former, and in particular it does not represent any evidence that the former approach is overconfident or has underestimated its uncertainties. It may just be that the former method used observations that were informative regarding the sensitivity, and that the latter did not.

    A formally superior approach to calculating the overlap of ranges would be to combine all the evidence using Bayes’ Theorem (e.g. Annan and Hargreaves 2006). In this paradigm, “down weighting” one line of evidence would really amount to flattening the likelihood, that is, acknowledging that the evidence does not distinguish so strongly between different sensitivities. In principle it is not correct to systematically down weight particular methods or approaches, so long as their uncertainties have been realistically represented. It is more a case of examining each result on its merits. Just as some papers have underestimated their uncertainties, other papers have surely overestimated theirs.

    https://www.mwenb.nl/wp-content/uploads/2014/05/James-Annan-guest-blog-def.pdf

    If one argues that one researcher’s opinion matters less than a bunch of them, then one should also argue that we’re better off with a bunch of methods too.
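    Annan's point about combining evidence via Bayes' Theorem can be sketched numerically. The Gaussian likelihoods below are hypothetical, loosely mimicking his 1-3.5°C and 2-6°C example:

```python
import numpy as np

# Toy illustration of combining two lines of evidence on ECS via Bayes'
# theorem: independent likelihoods multiply. Both Gaussians are
# hypothetical, loosely mimicking the "1-3.5 vs 2-6" example quoted above.
ecs = np.linspace(0.5, 8.0, 1000)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

like_a = gaussian(ecs, 2.25, 0.6)  # evidence A: roughly the 1-3.5 K range
like_b = gaussian(ecs, 4.0, 1.0)   # evidence B: roughly the 2-6 K range

posterior = like_a * like_b        # flat prior assumed for simplicity
posterior /= posterior.sum()       # normalise on the discrete grid

mode = ecs[np.argmax(posterior)]
print(f"combined best estimate ≈ {mode:.2f} K")  # falls in the overlap
```

    The combined estimate lands in the intersection of the two high-probability ranges, as the quoted passage argues it should.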

    • “Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.” http://www.pnas.org/content/104/21/8709


      “Generic behaviors for chaotic dynamical systems with dependent variables ξ(t) and η(t). (Left) Sensitive dependence. Small changes in initial or boundary conditions imply limited predictability with (Lyapunov) exponential growth in phase differences. (Right) Structural instability. Small changes in model formulation alter the long-time probability distribution function (PDF) (i.e., the attractor).”

      The CMIP5 ensemble is not a pdf – but a collection of equally possible – or impossible – nonunique trajectories that owe a great deal to nonlinear evolution of solutions. The question to be asked – before considering the result – is how this could possibly be a valid methodology? Perhaps if one is a pompous but technically ill informed urban doofus hipster – one might consider the mainstream depth of the bench on this one.

    • Your response is somewhat muddled, aside from the quotation from Annan.

      1. You don’t know what Jules thinks about this. Her studies with Annan suggest ECS in the low 2’s range.
      2. There are multiple lines of evidence. Paleoclimate should be weighted lower because of large uncertainties.
      3. GCM’s are simply not valid scientific evidence on ECS. Zhao et al (2016 I think) is a very strong negative result. You can engineer ECS in a GCM by changing parameters for cloud models and convection models. There are no observational constraints to narrow the ranges for these parameters. Read the paper.

      Nic is just one scientist, but he has changed climate science by for example showing the errors inherent in uniform priors. He has held his own here and sent Dessler away with his tail between his legs. I find it interesting that discussions with Nic usually end up showing his broad knowledge and expertise.

      • it is interesting that their result might suggest lower trends than have been observed.

        Surface temperature change is not a direct result of increasing greenhouse gases in the atmosphere

        there isn’t a simple relationship between changes in system heat uptake rate (planetary energy imbalance) and changes in surface temperature.

        the forced response pattern effect, which shouldn’t be confused (e.g. see angech’s confusion) with the quite independent issue of temporary variance in effective sensitivity caused by unforced natural variability.

        That might actually have some relevance for L&C2018. Their aerosol forcing update derives from Myhre et al. 2017, which reports on model simulations based on emissions estimates since 1990 and suggests a positive forcing change of about 0.1W/m2 between 1990 and 2015. The model which produces comfortably the largest positive change is SPRINTARS, which is basically MIROC, and about half the increase in that model involves a step change in cloud albedo effect between 2000 and 2005 which doesn’t make any physical sense.


      • verytallguy

        dpy,

        respectfully, your comments are not based on the science.

        GCM’s are simply not valid scientific evidence on ECS.

        Is not supported by Zhao. Who rather state:

        Given current uncertainties in representing convective precipitation microphysics and the current inability to find a clear observational constraint that favors one version of the authors’ model over the others, the implications of this ability to engineer climate sensitivity need to be considered when estimating the uncertainty in climate projections.

        If you wish to be taken seriously, you need to stop imposing your own unjustified and frankly unscientific extrapolations on published papers. This is not the first time you’ve been caught doing this; you have a track record of citing papers that do not on investigation support your assertions.

        Indeed, Zhao suggests that deliberately engineering high and low sensitivity models and testing them against historical variability would be a useful tool for exploring future projections.

        http://pcmdi.github.io/CFMIP/Data/media/2015/Zhao_CFMIP_2015.pdf

        Perhaps you can also explain in what sense

        sent Dessler away with his tail between his legs

        is part of the scientific debate you purport to wish to focus on?

      • TallGuy, That you are not a scientist is showing. The quotes you reproduced support my conclusion. Strong evidence of ECS cannot come from a model whose ECS is in reality a broad range dependent on parameter choices for which there are no good observational constraints. Then there are other issues such as lack of skill at reproducing real-world cloud fraction as a function of altitude, lack of skill at regional climate, failure to reproduce the pattern of warming, etc. The list is quite long.

      • Given the current level of uncertainty in representing convective precipitation microphysics, this study suggests that one can engineer climate sensitivity in a GCM by the approach used for parameterizing convective precipitation. The differences between the present-day mean climate simulations in the three models described here are modest, as shown in Fig. 2, with L performing a bit less well than H and M, but we suspect that by optimizing other areas of the model we could reduce some of these differences. So far, we have not found a clear constraint that we feel would make one model choice more plausible than another. Therefore, holistic measures of the overall quality of the mean climate simulations do not appear to provide adequate guidance for choosing between these models. To accelerate progress for understanding and constraining cloud feedback and climate sensitivity, comprehensive approaches are necessary. They would include 1) a thorough evaluation of GCM-simulated cloud variability at all temporal and spatial scales in addition to the mean climatology, 2) development of hypotheses (or story lines) around key questions and/or processes for models to confront (e.g., Bony et al. 2015), 3) process-oriented investigations using high-resolution cloud-resolving and large-eddy simulations (e.g., Bretherton et al. 2013, 2014), and 4) development and use of a global cloud-resolving model or superparameterized GCM for cloud feedback.

        In addition, there is value in intentionally engineering climate sensitivities in specific ways in traditional GCMs so as to provide material for further research into possible observational constraints (e.g., Mauritsen and Stevens 2015). One important way to manipulate climate sensitivity, through the convective detrainment efficiency, seems, as described here, to be particularly powerful in this regard. Models with explicitly engineered climate sensitivity should also be valuable in studying other constraints on sensitivity from studies of paleoclimates, volcanic responses, and simulations of the warming over the past century. In the latter case, engineering models with different strengths of indirect aerosol effects (Golaz et al. 2013) will also be needed to search for the most powerful constraints provided by the historical record. – Ming Zhao

      • verytallguy

        dpy, the quotes directly contradict your very strong conclusion “GCM’s are simply not valid scientific evidence on ECS”. I note you haven’t actually defended that.

        That you make such strong statements about your convictions without being able to cite supporting science which makes similarly strong conclusions shows that your behaviour is not that of a scientist.

        That you routinely follow this up with insults, innuendo and insinuations as you do yet again in this post about Dessler further shows that you do not behave in a scientific or collegiate manner.

        That you deride others’ contributions as not focussed on the science given your own behaviour is rather amusing.

      • Tall, There is a scientific way to validate a model when lives are at stake. You constrain the subgrid model parameters with as much data as possible. Then you use it within the range of conditions spanned by the data.

        Here there is no data to constrain the cloud and convection parameters. Thus it is impossible to scientifically validate the model.
        So ECS is constrained poorly by data. Virtually all experts and indeed regulators would say that means the model is not scientifically valid evidence for ECS.

      • verytallguy

        Virtually all experts and indeed regulators would say that means the model is not scientifically valid evidence for ECS.

        Palpable nonsense. See AR5 for the views of experts.

        A citation for your “experts”? Or is this further assertion without evidence from you, as we have become accustomed?

      • Well, this is one of the problems with climate science. Virtually none of them are experts on validation of CFD. As I said, given the central place GCMs occupy in climate science, there is, so far as I can tell, no real detailed defense of them. When challenged, Dessler and Mauritsen said in comments on their paper that they were as frustrated with the failings of GCMs as I was. I took this to mean they accept Zhao et al.’s results and the other negative results. BTW, AR5 preceded most of these papers. Held, in one of his final posts, refers to the weaknesses and to how it is possible to incorrectly extrapolate skill at predicting Rossby waves to climate projections.

        Rather than just quoting (out of context) or referring vaguely to “experts” can you actually say something with technical content? This pattern with you is really getting into WUWT territory.

      • verytallguy

        referring vaguely to “experts”

        ’tis you that brought that into the conversation, not I:

        Virtually all experts and indeed regulators would say that means the model is not scientifically valid evidence for ECS.

        Citation needed.

        WUWT territory, indeed.

      • Still nothing scientifically interesting. You, VTG, are about 0 out of 1000 for science, and 1000 out of 1000 for typical activist attempts to cast doubt on people without actually being able to show any evidence the person is wrong. Do you really expect anyone to take you seriously?

      • verytallguy

        Seriously dpy?

        You’ve contradicted yourself more times than you’ve posted. Which is a *lot*!

        Be serious.

      • “In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles. The generation of such (perturbed physics) model ensembles will require the dedication of greatly increased computer resources and the application of new methods of model diagnosis. Addressing adequately the statistical nature of climate is computationally intensive, but such statistical information is essential.” https://www.ipcc.ch/ipccreports/tar/wg1/505.htm

        It has been known for a while now. Pick a solution from 1000’s. Mine is the thick black one.


        https://www.nature.com/articles/ngeo1430
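        The TAR passage's call for ensembles can be illustrated with a toy chaotic system, the Lorenz-63 equations (a standard stand-in, obviously not a GCM): members that differ by tiny initial perturbations diverge into distinct trajectories, so only the ensemble statistics are meaningful.

```python
import numpy as np

# Toy ensemble experiment with the Lorenz-63 system: 100 members that
# differ only by ~1e-6 perturbations in the initial x decorrelate
# completely, so no single trajectory ("the thick black one") is special.
def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of Lorenz-63 for an (n, 3) array of states."""
    x, y, z = s[:, 0], s[:, 1], s[:, 2]
    deriv = np.stack([sigma * (y - x), x * (rho - z) - y, x * y - beta * z], axis=1)
    return s + dt * deriv

rng = np.random.default_rng(0)
n_members = 100
states = np.tile([1.0, 1.0, 1.0], (n_members, 1))
states[:, 0] += 1e-6 * rng.standard_normal(n_members)  # tiny perturbations

for _ in range(3000):  # integrate to t = 30
    states = lorenz_step(states)

print(f"ensemble spread in x: {states[:, 0].std():.2f}")
```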

  53. Cumulative monthly TOA radiant imbalance vs. ocean heat.

  54. > [Nic] has changed climate science by for example showing the errors inherent in uniform priors.

    This technical comment may lack historical accuracy. Vintage 2009:

    As I hinted at in a previous post, I’ve some news regarding the uniform prior stuff. I briefly mentioned a manuscript some time ago, which at that time had only just been submitted to Climat[e] Change (prompted in part by Myles Allen’s snarky comments, I must remember to thank him if we ever meet). Well, eventually the reviews arrived, which were basically favourable, and the paper was accepted after a few minor revisions. The final version is here, and I’ve pasted the abstract at the bottom of this post.

    The content is essentially the same as the various rejected manuscripts we’ve tried to publish (eg here and here): that is, a uniform prior for climate sensitivity certainly does not represent “ignorance” and moreover is a useless convention that has no place in research that aims to be policy-relevant. With a more sensible prior (even a rather pessimistic one) there seems to be no plausible way of creating the high tails that have been a feature of most published estimates. I’m sure you can join the dots to the recent IPCC report, and the research it leant on so heavily on this topic, yourself.

    http://julesandjames.blogspot.com/2009/09/uniform-prior-dead-at-last.html

    Note the title. The “we” in that quote is Jules & James, and this wasn’t their first try. There’s one that goes back to March 2006.

    Sometimes what appears muddled comes from the recipient’s lack of appropriate priors.

    ***

    > Read the paper.

    The one you think of but can’t even cite properly?

    It’s not impossible to “engineer” an EBM by tweaking aerosol data, BTW:

    Both the forcing and heat uptake rate are higher, significantly so*. [* Though the heat uptake rate is lower than indicated by all other sources I’ve seen for the 2007-2016 period. I think the issue there is that their method is simple subtraction involving just the two end point years, and 2016 happens to be a bit of a low outlier for OHC. The latest CERES-EBAF data, calibrated to Argo OHC, says it’s 0.8W/m2, compared to 0.65W/m2 in L&C2018.] In the case of forcing there is a big upward revision of methane forcing, and a fairly substantial upward revision of Ozone forcing. Both of those seem reasonable AFAICS, but then there is also a big revision of aerosol forcing, which is very much not reasonable, and clearly wrong. Despite insisting repeatedly that they have retained the AR5 best estimate of -0.9W/m2 for 2011, the best estimate figure they actually use for 2011 is -0.777. Work that one out.

    https://andthentheresphysics.wordpress.com/2018/04/27/lewis-and-curry-again/#comment-117338

  55. Life would be so much simpler and more certain if we could directly measure the energy budget, wouldn’t it (rather than being reduced to the vagaries and biases of models and estimates)?
    Would it not make sense to divert resources from elsewhere (modelling?) into the empirical side?

  56. ATTP wrote:

    Yes, if we consider a case where we fix the forcing at today’s value (but with aerosols and short-lived GHGs removed) then the impact of potential feedback time dependencies would probably be modest. However, even Mauritsen & Pincus’s Supplementary Information indicates that it could increase their ECS estimate from ~1.8K to ~2.4K.

    I welcome the chance to clear up this misunderstanding. The impact to the end of the century (which is what Mauritsen & Pincus’s relevant estimate concerns) of increases in forcing from now on will be almost entirely determined by the TCR level, with some contribution from the excess of effective climate sensitivity over TCR. Any excess of ECS over effective climate sensitivity due to possible time-dependent feedbacks would be totally negligible, given the sub-century timescale involved.

    The CMIP5 model with the most extreme time-dependence of effective climate sensitivity may suggest a substantial increase in Mauritsen & Pincus’s ECS estimate. However, if time dependent behaviour of all CMIP5 models is used to estimate the increase in energy-budget ECS estimates, which appears more appropriate for a probabilistic estimate, it works out at under 10%. LC18 estimates an increase of ~6.5%.

    • Nic,
      What misunderstanding? I’m well aware of your arguments with respect to the relevance of the TCR. All I was pointing out was that the small change seen in Mauritsen & Pincus was mostly because they were fixing the forcing at today’s values (with aerosols & SLGHGs removed) and that the time dependency could potentially lead to a reasonable increase in the ECS. FWIW, I hope you’re right about the TCR. I’m far less confident than you are that you will be.

  57. This is the summary of my view of the outcome of this discussion.

    A possible explanation of the ECS-values calculated by Lewis and Curry (LC18) being lower than those calculated by other methods may be the shortcomings of the climate model, that is, the simple energy budget model, that they were using.

    The critical shortcoming of the climate model used by LC18 is that it cannot account for the changing warming pattern when the planet is warming due to increasing carbon dioxide (and other GHGs) in the atmosphere.

    Dessler, Mauritsen and Stevens in ACP (D18) report evidence of the changing warming pattern resulting in the shortcomings of the kind of simple energy budget model used by LC18. Several other scientific studies since 2013, cited by D18, analysing results from GCMs, provide further such evidence.

    The inconsistency of the simple energy budget model in case of a changing warming pattern can also be demonstrated by simple algebraic calculations, as done by me in the short comment SC3 to D18 in ACP.

    Perhaps it cannot be ruled out that the ECS-values calculated by LC18 are lower than can be fully explained by the error caused by the changing warming pattern. However, because the inconsistency of the simple energy budget model is of a fundamental nature, one should be cautious in drawing such conclusions.

    • The warming is not an “error”. It’s reality. If GCMs disagree enough, they must be in error.

    • LC18 have also shifted the center of their aerosol contribution closer to zero which allows all the warming so far to be explained by a lower sensitivity than is the case in AR5. That is, they say you don’t even need much sensitivity to get 100% anthropogenic. It’s a paradigm shift from what skeptics have tried in the past. We’ll see if it catches on among them.

      • In a nutshell, they have gone from there being too little anthropogenic forcing to account for all the warming, to the other extreme that there is so much anthropogenic forcing that the sensitivity must be low to only have this much warming. In this way they have spanned reality in a single bound. How about something in the middle, which is where the consensus is at.

      • Jim D:

        You’re steps ahead of me here.
        I want to claim we caused all the warming and I am Gavin Schmidt (sarcasm). It was about 110% of it.

        When I claim this, sensitivity is low. 0.7 C in 63 years.

        But I want to claim sensitivity is high; as Kyle Swanson says, so is natural variability, since sensitivity applies to all things, including natural variability.

        So we have to take one with the other assuming narrow ranges for both things. But we can point to someplace else, warming in the pipeline to reconcile.

        As Trenberth said, It’s a travesty that we can’t have high sensitivity to GHGs and a low sensitivity to natural variability.

        When Atomsk’s said the two things did reconcile because they overlapped, I said the error bars were too wide to give it much meaning. What you said pointed out to me an accounting tie. Such as everything ties out to the dollar. Every dollar is accounted for. One has proved the statements. The tie here is you can’t have both high sensitivity and low natural variability. You can have wide error bars and make up anything you want, like that guy on the commercial that is unsure what a lug wrench is. Maybe.

      • The choice is between high forcing with low sensitivity or low forcing with high sensitivity. In both cases the warming is the same, which is the observed amount, and has the same ratio of warming to CO2 forcing, because both those numbers are fairly fixed by observations. This is why I prefer to use that ratio, currently 1 C per 100 ppm (2.4 C per doubling) as an effective transient rate. There are some ratios that are more certain, and those are the more useful ones to plan by.
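
        The “1 C per 100 ppm (2.4 C per doubling)” conversion holds if warming is assumed to scale with the logarithm of CO2 concentration; the ppm range below is an assumption on my part, chosen only to illustrate the arithmetic, not a figure from the comment.

```python
import math

# Illustrative only: converting "1 C per 100 ppm" into a per-doubling rate,
# assuming warming scales with the logarithm of CO2 concentration.
c_start, c_end = 310.0, 410.0  # assumed ppm range over which the 1 C occurred
warming = 1.0                  # C, the observed ratio quoted in the comment

per_log = warming / math.log(c_end / c_start)  # warming per e-fold of CO2
per_doubling = per_log * math.log(2.0)         # ~2.5 C per doubling
```

        The result is sensitive to the assumed baseline concentration, since 100 ppm is a larger fractional (hence logarithmic) increase at low concentrations than at high ones.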

      • Jim D:

        “…they have gone from there being too little anthropogenic forcing to account for all the warming, to the other extreme that there is so much anthropogenic forcing that the sensitivity must be low to only have this much warming.”

        1) Sensitivity is low. There must be natural variability.
        2) Natural & internal variability zeroed out (see the above diagram).
        3) Given 2) sensitivity is this.

        Change equals Sensitivity.

        AR5 figure 10.5 is not made up.
        When taken as true, Change equals Sensitivity.
        This is consistent with arguing natural variability evens out. That it’s minor, that it is not driving. When the point is conceded to you:

        Change equals Sensitivity.

        Beginning cash plus income or loss equals ending cash. It’s an accounting tie. Both things are tied together. When we look at beginning and ending cash, whatever story we tell has to tie out. When we tell the story of AR5 figure 10.5, everything falls into place. All the math in the world cannot change this.

      • Lewis chose his endpoints to cancel out internal and natural variability, so you can take up with him whether it is near enough zero over given periods. Are you trying to argue that he is wrong on that? It is an area where he agrees with AR5, but it looks like you don’t. If there are strong forcings he missed, you need to suggest one to support your argument.

      • Jim D:

        I am not arguing that Lewis is wrong. I am trying to circle around something that may not be there. I think your first comment about what they did is valuable to me. To be honest I don’t know a thing about the details of LC18.

        Say the issue is portrayed as If/Then as you said it. If you remove internal and natural variability, Then sensitivity equals this. Sensitivity becomes manageable.

        The following is way out there:
        I am aware of the 50-50 argument post by Curry, and Schmidt’s reply to it. I’ve argued the attribution and sensitivity banner statements need to reconcile to each other. LC18 is a baseball curve for a strike reply back to Schmidt. It agrees with both above banner statements above. It seems pretty strong to me. It is an argument that says, Assume you’re right. This is what we get.

        And I think it illustrates the other side overplaying its hand.

      • The 50-50 argument is the opposite of the LC18 argument for the reasons I mentioned, and the 50-50 argument has further problems because the imbalance is positive, meaning that the forcing change is still larger than all the warming so far can offset. So we are left with LC18 and AR5. These don’t differ in the warming, or in the GHG part of the forcing, which accounts for all the warming plus more in the pipeline. They differ in aerosols, which affect the net forcing that LC18 have as much higher. LC18 have CO2 at less than 70% of the net forcing, while AR5 has it at 80+% due to larger offsetting by aerosols. This is one difference that applies to TCR. LC18 make further simplifications in getting ECS that also err towards underestimation. They only consider CO2’s part of the warming in the sensitivity, while there is 50% more in their net forcing that you can’t ignore in projections. Bottom line: be careful in using the LC18 sensitivity on its own, because that is only responsible for part of their net anthropogenic warming.

      • Jim D:

        Still on figure 10.5 of AR5 above. ANT equals GHG minus OA. I measured the differences with a ruler. ANT is us as far as I know. And that’s the question.

        Looking at figure 10.5, if you halve OA to -0.1, GHG drops from about 0.9 to 0.8. Dropping GHG 0.1 that is in the portrayed 0.9 range seems minor. The OA range is a little less at 0.7.

        You lost me where you said LC18 only looks at CO2. A now-you-see-me-now-you-don’t approach would have been pointed out vigorously. However, you know more about it than I do.

        “LC18 make further simplifications…”

        Schmidt uses figure 10.5 to support his first plot of his attribution reply. And the audience for his reply should be considered. Their knowledge level is higher than a congressional staffer.

      • If you halve OA, this affects ANT and raises it to 0.8, for example. It doesn’t affect GHG. GHG is the sum of CO2, methane and other GHGs. CO2 may be about 70% of that total. Raising ANT forcing to something like 50% more than that due to CO2 alone decreases the effective sensitivity compared to if it were less than 25% more than CO2 alone. In AR4 the net was about the same as for CO2 alone because other GHGs canceled OA. Now AR5 has them larger, and LC18 has them much larger. The difference in sensitivity comes from the difference in net forcing (ANT), which is largely from aerosol central estimates.
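
        The aerosol point can be made concrete with the standard energy-budget formulas, TCR = F2x·ΔT/ΔF and effective ECS = F2x·ΔT/(ΔF − ΔN). All numbers below are illustrative placeholders, not the actual LC18 or AR5 values: a less negative aerosol central estimate raises the net forcing ΔF and so lowers the inferred sensitivity for the same observed warming.

```python
# A minimal sketch of the energy-budget inference being argued about here.
# All numbers are illustrative placeholders, not actual LC18 or AR5 values.
F2X = 3.7            # W/m2 per CO2 doubling (commonly used value)
dT = 0.9             # C, warming between base and final periods (illustrative)
dN = 0.6             # W/m2, change in planetary heat uptake (illustrative)
other_forcing = 3.8  # W/m2, non-aerosol anthropogenic forcing (illustrative)

def energy_budget(aerosol_forcing):
    dF = other_forcing + aerosol_forcing  # net anthropogenic forcing change
    tcr = F2X * dT / dF
    ecs = F2X * dT / (dF - dN)            # effective ECS
    return tcr, ecs

# A strongly negative aerosol estimate versus a weaker (less negative) one
tcr_strong, ecs_strong = energy_budget(-0.9)
tcr_weak, ecs_weak = energy_budget(-0.5)
```

        With the same ΔT and ΔN, the weaker aerosol offset gives a larger ΔF and hence lower TCR and ECS, which is the trade-off described above.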

    • Pehr Björnbom (@pehr_bjornbom)

      This is the summary of my view of the outcome of this discussion.

      Your “summary” is not a summary of the discussion. It is a summary of your own, faulty, views.
      D18 provide no evidence of warming patterns changing when the planet is warming due to increasing GHGs in the atmosphere.
      While AOGCMs do mostly suggest such changes occur, the effect on their ECS values is generally modest. LC18 provides an ECS estimate that allows for this effect as reflected in the behaviour of current AOGCMs. Its median value is 6% above that of the corresponding simple energy budget model ECS estimate.

      • Nic Lewis,

        Your “summary” is not a summary of the discussion. It is a summary of your own, faulty, views.

        Fair enough. I can imagine that your climate scientist opponents could think the same about your own “Plain language summary” at the beginning of your blog post here.

        D18 provide no evidence of warming patterns changing when the planet is warming due to increasing GHGs in the atmosphere.

        Your statement isn’t correct. D18 provide evidence of an evolving pattern of surface temperature caused by internal variability. They also cite evidence of pattern changes from the literature that are “caused by differing regional heat capacities that cause some regions to warm more slowly than others during forced warming”. During warming caused by increasing GHGs, pattern changes will be caused both by internal variability and differing regional heat capacities.

        Thus, a climate model that cannot account for such pattern changes, like the energy budget model used in LC18, in principle cannot be used for calculating correct climate sensitivity values. As D18 emphasise in the conclusions section, recent research has suggested that this inconsistency is related to the TOA net flux response being “not just determined by the magnitude of Ts but also includes other factors, such as the pattern of Ts (e.g., Armour et al., 2013; Andrews et al., 2015; Gregory and Andrews, 2016; Zhou et al., 2017) or the lapse rate (e.g., Zhou et al., 2017; Ceppi and Gregory, 2017; Andrews and Webb, 2018)”.

        A simple algebraic analysis can also easily demonstrate how the changing pattern of Ts makes the energy budget model with only one global climate feedback parameter inconsistent, as I did in the short comment SC3 to D18. Perhaps an energy budget model with several regional climate feedback parameters could resolve this problem.
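
        The single-feedback-parameter point can be sketched in a two-region toy model (the regions, area weights and feedback values below are hypothetical, not taken from SC3): with regionally differing feedbacks, the effective global feedback parameter, and hence the effective sensitivity, depends on where the warming occurs, not just on its global mean.

```python
# Toy two-region illustration: a single global feedback parameter is not
# well defined when regional feedbacks differ. Values are hypothetical.
lam_tropics, lam_highlat = 2.0, 0.8    # W/m2 per C, assumed regional feedbacks
area_tropics, area_highlat = 0.5, 0.5  # equal area weights for simplicity

def effective_lambda(dT_tropics, dT_highlat):
    # Global-mean TOA flux response divided by global-mean warming
    dT_global = area_tropics * dT_tropics + area_highlat * dT_highlat
    flux = (area_tropics * lam_tropics * dT_tropics
            + area_highlat * lam_highlat * dT_highlat)
    return flux / dT_global

# Same 1.0 C global-mean warming, two different spatial patterns
lam_tropic_heavy = effective_lambda(1.4, 0.6)  # tropics warm faster
lam_polar_heavy = effective_lambda(0.6, 1.4)   # high latitudes warm faster
```

        Since effective sensitivity scales as F2x divided by the effective feedback, the tropics-heavy pattern here implies a lower inferred sensitivity than the polar-heavy one, even though the global-mean warming is identical.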

    • Pehr, you state “The critical shortcoming of the climate model used by LC18 is that it cannot account for the changing warming pattern when the planet is warming …” and earlier you made this comment to Nic.” I think that the simple energy budget model you are using in your research cannot overrule the results of GCMs…”
      A recent paper ( https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1002/2017GL076649 ) deals exactly with this problem. The authors investigate the spatial patterns of warming calculated from models and the observed pattern. They find: “For 39% of the Earth’s surface, the GCMs and the historical data are statistically in disagreement. The GCM regional projections are thus of limited utility since they poorly reproduced past regional variability and their projections are practically rescaled maps of their historical RTCS. It is more reliable to extrapolate the RTCS of the data, especially for short-term projections where the transient response is more important”
      Later on ( in the conclusions) they suggest: “Given these facts, it is questionable how accurate the mean projection of the MME will be given that its spatial pattern of warming evolves little over the 21st century and remains very close to its RTCS over the historical period, which significantly differs over most of the globe from the actual RTCS of historical observations.
      Therefore, the resulting MME projection will suffer from the same faults as the historical runs; that is, it will be a projection of the faulty GCM climate in historical runs. GCMs are important research tools, but their regional projections are not yet reliable enough to be taken at face value.”

      These results make it very doubtful to follow your approach of overruling the observed data with present GCMs; the cited paper explicitly endorses the other way around when it comes to “warming patterns”, IMHO.

  58. > The warming is not an “error”. It’s reality. If GCMs disagree enough they must be in error.

    There is a way to validate the scientific content of this technical comment:

    I agree with James’ observation that the transient pattern of warming is likely to be a little different from the equilibrium result, which may result in the ECS estimates from instrumental period warming studies involving only global or hemispherically-resolving models (which usually represent effective climate sensitivity) differing a little from equilibrium climate sensitivity.

    https://www.mwenb.nl/climate-sensitivity-and-transient-climate-response/#comment-921

    Vintage 2014. This acknowledgement has yet to be made in this thread.

    By the “scientific validation” logic offered by our in-house mechanical engineer, we may need to infer that the EBMs must be in error.

    Unless “a little” is not “enough,” of course.

    • Poor wee willie trolls the climate blogosphere like some forlorn wraith at a banquet. None of it is science of course – just somebody’s opinion on the internet. Science to poor wee willie is a closed book – and the quotes from the blogosphere dredged up are accordingly just words. Embellished as they are with imposed emphasis and snide aspersions. The latter is of course his entire purpose. A fruitless purpose but there it is.

      “Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic. The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.” http://rsta.royalsocietypublishing.org/content/369/1956/4751

      CMIP5 model runs are of course hopelessly inadequate – although flogging a dead horse is a popular hobby. An EBM at the very least captures a component of measured planetary energy content. Albeit in surface temperature a very minor one – some 2-3% of the total.

      There is a more direct connection between ocean heat and measured energy imbalance – incoming solar less outgoing reflected SW and emitted IR. There are large energy changes related in part to eastern Pacific sea surface temperature and coupled cloud processes. But note the relentless rise in cumulative energy imbalance.

      Climate regimes do of course shift with changing rates of warming and cooling – at all scales from micro-eddies to decades to millennia. The fractal property of chaos. For which we have not the physics, math or even much understanding. Sometimes not knowing is fruitful – the problem is usually what we know that is false. Aye poor wee willie?

    • In fairness to Nic, this was before several of the negative-results papers came out on GCMs.

    • The 2014 comment of mine that Willard cited is completely consistent with what is written in LC18. The word “may” means something is possible but not certain, and does not indicate a high probability of it being so.

      • > The word “may” means something is possible but not certain, and does not indicate a high probability of it being so.

        There are very few things in the empirical sciences that are certain. Very few climate considerations regarding climate sensitivity indicate a high probability. So we might as well declare that almost every luckwarm claim is “completely consistent” with everything else other researchers say, including what we can find in the IPCC’s reports.

        By the same token, aerosol forcings may be negative. But can anyone only say that they *may* be positive without pushing the lowest bounds of disingenuousness? In any event, how to go from first principles to under 2C remains an open problem. There’s not much a formal semantics of “may” could do about that, and so far all we got on the luckwarm table are “if by whiskey” counterfactuals that do not get us beyond the “may”s as defined above.

  59. Right you are, dpy. Here is an excerpt for those who only click on links to huffo and the guardian:

    The report also alludes multiple times to the notion that climate science is on shaky ground.

    “Climate science has significant work to do to make its data and its statistical procedures properly reproducible,” Randall said.

    Randall cited Judith Curry, a world-renowned climatologist, who has warned that the climate science field is heavily affected by groupthink, a collective way of thinking that has been known to stop individuals from questioning widely accepted theories.

    Randall said he believes that climate change data needs to be reproducible because it is “more than usually intrusive into the lives of Americans.”

    To provide the public with accurate statistical information, the report endorses the expansion of the Secret Science Reform Act of 2015 to cut down on irreproducible data used to back public policy.

    When asked what the average person could do in order to make sure that the information that is backing public policy is credible, Randall recommended: “Always ask ‘has this study been reproduced? Did this study have pre-registered research protocols? Does it support an unpopular belief?’ If the answer to any of these is no, suspend judgment. Don’t disbelieve blindly, but don’t believe blindly either.”

    • One can particularly see this political bias in psychology, I think, which is one reason their measured replication rate is so low. Lewandowsky is a prime example. There is a good post at cliscep about a new paper that just came out, far more scientifically sound, showing that Lewandowsky’s link of conspiracy ideation and climate skepticism is wrong.

  60. For those interested in the GCM vs. EBM debate, here’s a really detailed (as usual) response by Nic to the Met Office report that concludes there is no need to revise future projections based on GCMs.

    https://www.nicholaslewis.org/nic-lewis-on-the-uk-met-office-on-the-pause/

    VTG and Willard might find it a good summary of some of the important issues.

    • Thank you for this technical comment.

      The post is prefaced thusly:

      Originally a guest post on September 14, 2013 at Climate Etc.

      and points to this web page:

      https://judithcurry.com/2013/09/14/nic-lewis-on-the-uk-met-office-on-the-pause/

      Lots of “willard (” on that page. In one of them there’s another quote by James:

      I have some doubts about Nic Lewis’ analysis, as I think some of his choices are dubious and will have acted to underestimate the true sensitivity somewhat. For example, his choice of ocean heat uptake is based on taking a short term trend over a period in which the observed warming is markedly lower than the longer-term multidecadal value. I don’t think this is necessarily a deliberate cherry-pick, any more than previous analyses running up to the year 2000 were (the last decade is a natural enough choice to have made) but it does have unfortunate consequences. Irrespective of what one thinks about aerosol forcing, it would be hard to argue that the rate of net forcing increase and/or over-all radiative imbalance has actually dropped markedly in recent years, so any change in net heat uptake can only be reasonably attributed to a bit of natural variability or observational uncertainty. Lewis has also adjusted the aerosol forcing according to his opinion of which values are preferred – coincidentally, he comes down on the side of an answer that gives a lower sensitivity.

      http://julesandjames.blogspot.com/2013/02/a-sensitive-matter.html

      That post is worth revisiting, e.g.:

      According to [the Company that Shan’t be Named], the 787 design used 800,000h of supercomputer modelling and 15,000h of wind tunnel tests.

      […]

      I think that makes [dpy] 2% correct :-)

      • “Finally, Lorenz’s theory of the atmosphere (and ocean) as a chaotic system raises fundamental, but unanswered questions about how much the uncertainties in climate-change projections can be reduced. In 1969, Lorenz [30] wrote: ‘Perhaps we can visualize the day when all of the relevant physical principles will be perfectly known. It may then still not be possible to express these principles as mathematical equations which can be solved by digital computers. We may believe, for example, that the motion of the unsaturated portion of the atmosphere is governed by the Navier–Stokes equations, but to use these equations properly we should have to describe each turbulent eddy—a task far beyond the capacity of the largest computer. We must therefore express the pertinent statistical properties of turbulent eddies as functions of the larger-scale motions. We do not yet know how to do this, nor have we proven that the desired functions exist’. Thirty years later, this problem remains unsolved, and may possibly be unsolvable.” http://rsta.royalsocietypublishing.org/content/369/1956/4751

        Nor are CFD problems solved in that way and for the same reasons.

        https://en.wikipedia.org/wiki/Computational_fluid_dynamics

        The physics are not perfectly known – nor – 30 years later – has math come to the rescue. One can only conclude that a trained philosophical sophist like poor wee willie should not try their hand at even simple percentages.

      • Don Monfort

        Thank you for the shameless self-promotion, willito. It’s a good feeling for my eyes to glaze over, when I see your dead giveaway links-italics-bold BS. Very restful. Like taking a nap.

      • Well, you cite out of context from Annan’s comment. What I said on that comment thread about CFD is quite true. Design can legally be done with simulations, even though CFD is always tempered with five other data sources, including test data. Structural analysis is a well-posed problem and simulation is more reliable there. Regulators however demand test data (taking years of testing to gather) for certification, and they are right to do so.

        Recent negative results have only strengthened the case for what I was saying about turbulent CFD having high uncertainty. GCMs are worse, actually. What I didn’t fully appreciate at the time is how flawed the literature is in these fields. The literature paints a vastly over-optimistic picture, with negative results systematically suppressed.

      • verytallguy

        negative results systematically suppressed.

        Gosh. How scary.

        Who is directing this system of suppression, pray tell dpy?

        (But be sure to be scientific and technical in your response, natch)

      • We’ve been over this before VTG so I’m surprised you keep trying to distract with out of context minor word games.

        https://www.nature.com/news/registered-clinical-trials-make-positive-findings-vanish-1.18181

        “There is a strong positive bias in the literature…” is what one scientist said.

        Reading comprehension, I fear, must be added to your list of challenges. “Suppressed” just means that people choose not to publish their negative results, use p-hacking to find positive results, etc. This is often motivated by a cultural prejudice that “if I run the model right I will get the right answer”, or by the internalized need to sell the work and keep the funding stream going. Sometimes senior people can suppress negative results about their work, but it’s not particularly common.

        I think it’s telling that most scientists, when presented with the evidence of pervasive bias, simply don’t respond. A few engage in outright denial, particularly in climate science, but avoidance of unpleasant evidence is a more common reaction.

      • verytallguy

        “Suppressed” just means that people choose…

        Not according to the Oxford dictionary, or any usage I’m aware of. They say:

        1. Forcibly put an end to

        I gently suggest you may need a little more precision and perhaps a little less bombast to ensure your scientific and technical posts aren’t misinterpreted.

        There are some excellent undergraduate guides to help you, for instance:

        Use objective rather than subjective language.

        https://www2.le.ac.uk/offices/ld/resources/writing/writing-resources/science

      • Very Tall,

        The guide you propose omits the most relevant advice: one does not move the goalposts in the middle of one’s technical comment.

        The reason why James mentioned supercomputer simulations was to counter the claim that it was virtually impossible to simulate wind tunnels. It has nothing to do with any conspiracy. It has everything to do with the “but observations” meme that we’re currently being served.

        Please don’t let your leg be pulled like that.

      • I explained this previously. For certifying a new product, flight tests for at least a year are required. That’s because CFD is well known to be inadequate, especially for separated flows, and the public doesn’t like unsafe products.

        There is no meme here except to those incapable of evaluating actual content or doing research beyond pulling quotes out of context.

        Another consensus-enforcement tactic is to be as uncharitable as possible, interpreting every word in the most extreme way. VTG, it’s time to end the charade and try to see the whole picture of what is being said. Thomas Hobbes had a perfect line on what you are doing: casting atoms of scripture as dust before men’s eyes makes everything more obscure than it is, an ordinary artifice of those who seek not truth but their own advantage.

      • verytallguy

        Honestly dpy, I’m bored of this now. Stay sciencey wontcha.

      • > There is no meme here except to those incapable of evaluating actual content or doing research beyond pulling quotes out of context.

        James was responding to this technical comment:

        Bull!! I have first hand experience and know.

        This was in response to Eli’s

        [Our Technical Commentator] is simply wrong on wind tunnels. The large subsonic ones have pretty much vanished and been replaced by modeling. Last time Eli looked some of the hypersonic ones were on shaky legs. Simply too expensive

        This was in response to a very important technical comment:

        Just a slight quibble. It’s attached flow vs separated flow. Transition to turbulence is so nonlinear it’s virtually impossible to predict. If models are too dissipative, then they will probably miss these things too.

        This was after Carrick’s reminder, after a digression on cochleas:

        I think Eli’s point is that you don’t just have observations in the absence of a credible physical model, and it’s one I agree with.

        This support, along with James came to our Technical Commentator’s surprise, for here’s what started all this:

        Eli, the distinction between “physics” and observation is so false. Physics can only be seen if it is accurately simulated, and it’s not. We can’t even predict a turbulent boundary layer in a pressure gradient, much less cumulus convection. Observation is the only real anchor where your models are really rather hopeless. That’s how subgrid models are “tuned” after all. First-principles subgrid models simply don’t exist, and so your only other alternative is expert judgment, which is a euphemism for prejudice.

        Our Technical Commentator is a one-trick pony.

      • “The practice of low-speed experimental aerodynamics has continued to evolve and it is a cornerstone in the development of a wide range of vehicles and other devices that must perform their functions in the face of forces imposed by strong flows of air or water. It was believed that in the 1970s and in the early 1980s the use of wind tunnels, especially in the subsonic regime, would rapidly disappear as computational fluid dynamics (CFD) would become a more attractive option to obtain data for many engineering applications, since it would be better in cost-effectiveness (Barlow et al. 1999). Nevertheless, computational simulations improved since then but they have not come close to reaching a level sufficient to replace the need for experimental data in development projects.

        In fact, the investigative methods leading to quantitative predictions have been a combination of experiment and theory, with computational methods becoming a new powerful tool in this field. However, experimental explorations remain the mainstay for obtaining data to designers to achieve detailed results and final decisions across a broad range of industrial and academic applications. A primary tool for experimental aerodynamics is the wind tunnel. A well-designed wind tunnel could supply technical information for a large number of engineering applications such as external aerodynamics (flow over terrestrial and aerial vehicles), civil engineering (flow over bridges, buildings, cables, etc.), sport activities (flow over cyclists, design of volley and basket balls, wind sails etc.), fundamental fluid dynamics (laminar and turbulent flow over simple and complex geometries) and an extensive frame of other options in industry and research centers at universities around the world.” http://www.scielo.br/scielo.php?script=sci_arttext&pid=S2175-91462018000100200

        It is not clear which blog Eli consulted in his research on wind tunnels – but there may be a fundamental disconnect with design reality. Poor wee willie of course gleans his reality entirely from climate blogs. 90% of climate science is of appalling quality – including the piece on which this post is based – and 99.9% of climate blogs are entirely worthless. It seems therefore a pointless exercise to copy and paste blog opinions exclusively – but that’s poor wee willie for you. No scientific discrimination at all. As for VTG – does he ever say anything that isn’t just snide calumny? Yes, we are bored, VTG.

      • Well, once again, what I said is true. Subsonic wind tunnels have not disappeared. There are several in Europe that are heavily used.

        In any case, a chemist really is not knowledgeable about CFD modeling and there was no real substance to Eli’s comments.

        I think if you look at the literature, you will see the problem here. Look at the high lift workshops for example. The spread of the CFD results is not good enough to replace testing as everyone in the field acknowledges.

      • Robert Ellison, Thanks for that excerpt. I hadn’t seen that one.

      • What this dispute about wind tunnels shows is how silly rabbits can spread complete mythology on climate blogs and have it repeated by equally ignorant Climateballers. It is a shame that these Climateballers don’t even bother to correct their pseudo-scientific nonsense. It was in fact trivial for Robert to verify the facts. It shows a shameful disregard for truth and a lack of integrity.

      • > a chemist really is not knowledgeable

        Another very technical comment.

        Alas, picking on Eli won’t counter James’ point:

        According to [the Company that Shan’t be Named], the 787 design used 800,000h of supercomputer modelling and 15,000h of wind tunnel tests.

        […]

        I think that makes [dpy] 2% correct :-)

        http://julesandjames.blogspot.com/2013/02/a-sensitive-matter.html?showComment=1360200117777#c9201553147671422433

        My turn to make two technical comments.

        First, our Technical Commentator constantly appeals to his authority without showing any substantial evidence. That can only be infelicitous.

        Second, observations and theories go hand in hand. Idolizing data is futile.

      • Don Monfort

        Willito: How much of that computer time was for designing the air frame and all that goes into it vs. some sort of simulation of the aircraft in flight? Why did they need the 15,000h of wind tunnel testing, willito? Don’t they trust the computer models?

      • Climateball again! I explained this before and you apparently can’t read either or perhaps you simply prefer games to truth. The quote is about simulation time. CFD was only a small part. CFD is used for design (testing is heavily used too). However the vast majority of certification requires flight testing for tens of thousands of hours.

        Testing and simulation complement each other. However, CFD is much less accurate than naive laymen or obtuse climate ballers think. Climate models are much worse because of convection, clouds, etc.

      • Why would a technical commentator omit (for more than a decade now) that wind test tunnels are experiments that are unlike anything in climate and that the data measured out of these toy models that require some data reduction or post-test processing, Don Don? Why could Dan Hughes provide more evidence in one comment hereunder than all the contributions so far over the years from our technical commentator? When our technical commentator will be able to construct an Earth model the same way he can build aircraft components, he might have a shot at a case.

        Meanwhile, if you could tell him to stop conflating observations with test data, that’d be great.

      • A bizarrely nontechnical commentator echoing blog comment as a basis for tribal bloviation. Poor wee willie.

        CFD and climate models share a core mathematical framework. The essential principle involves conservation of properties across mesh boundaries. Neither CFD nor climate models solve the equations of the fundamental physics of fluid transport as such – as the quote I provided above suggests, these may be unsolvable. Neither can realistically be described as experiments. More tellingly – both climate and climate models are coupled, nonlinear, chaotic systems. Models can evolve along many trajectories from very small initial differences. Climate evolves in surprising ways because understanding of the physics of the system is far from complete.

      • Don Monfort

        Tryin to figure out why this keeps going into moderation:

        You are XXXXXXXXX again and didn’t answer the questions, willito. We are not surprised. A ClimateXXXler with no handles, no bounce and no shot.

      • > CFD and climate models share a core mathematical framework.

        Tell that to our technical commentator who’s harping about wind tunnels, Chief. You might like:

        To many observers the rivalry between wind tunnels and Computational Fluid Dynamics (CFD) is a zero-sum game – as CFD matures it simply replaces wind tunnels. However, this is far from the truth.

        https://www.symscape.com/blog/wind-tunnels-and-cfd

      • It seems clear that this is what has been said by everyone – apart from perhaps Eli. Poor wee willie.

      • The obfuscations from Willard are piled quite high by now.

        that wind test tunnels are experiments that are unlike anything in climate and that the data measured out of these toy models that require some data reduction or post-test processing

        This is really muddled and unenlightening. All “test” data requires adjustments, including climate data such as thermometer readings. Flight test, which is really the mainstay of certification, is also subject to lots of noisy data because the atmosphere is very turbulent. All this shows is that even our best data is flawed. Climate data is worse because the instruments are worse.

        Meanwhile, if you could tell him to stop conflating observations with test data, that’d be great.

        Hard to tell if this is an even muddled version of the first one. See above response.

        First, our Technical Commentator constantly appeals to his authority without showing any substantial evidence.

        What a sad documentation of your poor memory Willard. I’ve repeatedly referenced the literature in the past. There was one in The Aeronautical Journal on turbulence modeling from 2001 that is quite good. You should exhibit a little industriousness and google high lift prediction workshop (mentioned above). But that would be if your goal were to fact check rather than to obfuscate.

        in his response to my comment on their recent paper, even Dessler said that they were as frustrated as I was by the weaknesses of climate models. I took that to mean he did not dispute any of the recent negative results.

        Most of what I have said here is really pretty uncontroversial among practitioners. Sophists like yourself, well that’s another matter.

      • > Most of what I have said here is really pretty uncontroversial among practitioners.

        Since it’s easy to substantiate the idea that wind tunnel experiments and CFDs go hand in hand because they help orient one another, that might not be a precise claim.

        Where’s your Earth tunnel?

        Produce one or shut up.

        ***

        > I’ve repeatedly referenced the literature in the past.

        Do it now.

        I dare you.

      • It’s only getting worse Willard. I suggest a time out for you in the penalty box. Laziness on your part is not an excuse for, well, laziness. Or maybe it’s an inability to understand any references you might find.

        Flight test is perfectly analogous to weather stations or weather balloons. Both instrument a complex system and the data is noisy.

        The problem here is in “predicting what will happen in a new situation.” That’s where CFD and much more so climate models lack quantitative skill.

      • > Flight test is perfectly analogous to weather stations or weather balloons. Both instrument a complex system and the data is noisy.

        Sure, David. The radius of the Earth is more than six thousand kilometers while the biggest wind tunnel is less than two thousand feet long:

        The largest wind tunnel in the world is at NASA’s Ames Research Center. This subsonic tunnel, which can test planes with wing spans of up to 100 feet, is over 1,400 feet long and 180 feet high. It has two test sections: one 80 feet high and 120 feet wide, the other 40 feet high and 80 feet wide. Air is driven through these test sections by six 15-bladed fans. Each fan has a diameter equal to the height of a four-story building. The fans are powered by six 22,500-horsepower motors.

        https://www.nasa.gov/centers/langley/news/factsheets/WindTunnel.html

        You still haven’t cited your stuff. Shall I do it for you?

      • The Navier-Stokes equation of fluid motion is wonderfully elegant math – its realization in CFD and climate somewhat less so. The difference being that CFD can be validated – climate models as long term projections not at all. Poor wee willie clings to the delusion that the inability to put Earth in a wind tunnel is a plus for climate models.

        As a hydrodynamic modeler with decades of experience – there are theoretically justifiable uses for climate models. Climate sensitivity using opportunistic ensembles is not one of these.

      • As usual, Chief gets everything backward.

        I’m not the one who clings to the idea that it’s possible to put Earth in a wind tunnel. I don’t even think our observations are remotely like wind tunnel data. That’s our Technical Commentator’s idée fixe.

        Someone who can accept a vulgar energy-balance model of the Earth’s atmosphere surely can accept CFD’s approximations of the Navier-Stokes equations, for which we have evidence there are no unique weak solutions anyway.

      • Don Monfort

        Little willito’s argument has shifted from there ain’t any more wind tunnels to, well, they ain’t very long compared to the thickness of our planet. He’s in Bozo territory, now. Please Stop The Clowning, willito.

      • Willard has had this explained several times already. But then there is reading comprehension.

        Energy balance is a fundamental law of physics. It is valid for all turbulent flows. If you know the thrust and lift, you can compute the drag of an airplane to a pretty accurate number. This is a momentum balance method and ignores all those small eddies that we can’t compute anyway. Detailed Navier-Stokes simulations are much further off. Chief is right in this regard and Nic is right that EBM’s can be more accurate than GCM’s if the inputs are correct. Of course the forcings are inaccurate, but GCM’s have much larger errors.

        The scale of the flow has nothing to do with it, Willard. The Navier-Stokes equations are the same for turbulent flows on a 300 foot airframe as on the surface of the earth. Turbulence is basically the same and chaos is the result.
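        The momentum-balance point can be sketched in a few lines. In steady, level, unaccelerated flight thrust balances drag and lift balances weight, so a measured thrust yields a drag coefficient without resolving any turbulent eddies. All numbers below are illustrative, not data for any real aircraft:

```python
# Steady level flight force balance: T = D and L = W. Knowing the
# thrust therefore gives the total drag directly, and dividing by the
# dynamic pressure times the wing area gives a drag coefficient --
# no small-eddy simulation required.
# All numbers here are illustrative, not data for any real aircraft.

def dynamic_pressure(rho, speed):
    """q = 1/2 * rho * V^2, in Pa."""
    return 0.5 * rho * speed ** 2

def drag_coefficient(thrust, rho, speed, wing_area):
    """C_D inferred from thrust in steady level flight (T = D)."""
    return thrust / (dynamic_pressure(rho, speed) * wing_area)

def lift_coefficient(weight, rho, speed, wing_area):
    """C_L inferred from weight in steady level flight (L = W)."""
    return weight / (dynamic_pressure(rho, speed) * wing_area)

if __name__ == "__main__":
    rho = 0.38       # kg/m^3, air density near cruise altitude (illustrative)
    speed = 230.0    # m/s true airspeed
    area = 360.0     # m^2 wing reference area
    cd = drag_coefficient(thrust=136e3, rho=rho, speed=speed, wing_area=area)
    cl = lift_coefficient(weight=2.45e6, rho=rho, speed=speed, wing_area=area)
    print(f"C_D ~ {cd:.4f}, C_L ~ {cl:.3f}, L/D ~ {cl / cd:.1f}")
```

        This is the sense in which a momentum balance sidesteps turbulence: the eddies determine the drag, but you never have to compute them to measure it.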

      • Energy balance calculations are indeed extremely useful. They’re very common in physics (and astrophysics). They clearly give a very good idea of what sort of answer one would expect. You’d be extremely surprised if the actual answer were wildly different from what you would get from some kind of energy balance calculation. The problem here, though, is that we’re not really discussing wildly different results; what we’re considering is whether or not the energy balance best estimate is likely to be close to the “real” value. It almost certainly is; I would be very surprised if the “real” value were much more than twice what the energy balance approach suggests. I wouldn’t, however, be surprised if it were different by 50%. In fact, it seems quite likely that the “real” value will lie within the range presented by this energy balance approach. So, some people seem to think that we should simply work with the best estimate from the energy balance approach. Others (myself included) think we should also consider that the “real” value may actually end up being somewhat different to what this best estimate suggests (especially given that there are other lines of evidence that suggest this). No one credible, however, is suggesting that it is going to be wildly different.
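        The energy-budget arithmetic being argued over here fits in a few lines. A minimal Python sketch, using illustrative round numbers rather than the actual published Lewis & Curry inputs:

```python
# Energy-budget estimate of climate sensitivity:
#   ECS = F_2xCO2 * dT / (dF - dN)
#   TCR = F_2xCO2 * dT / dF
# dT: change in global mean surface temperature between two periods
# dF: change in effective radiative forcing
# dN: change in top-of-atmosphere imbalance (mostly ocean heat uptake)
# The inputs below are illustrative round numbers, not the actual
# Lewis & Curry values.

F_2XCO2 = 3.7  # W/m^2, canonical forcing for a doubling of CO2

def ecs_energy_budget(d_temp, d_forcing, d_imbalance, f2x=F_2XCO2):
    """Point estimate of equilibrium climate sensitivity, in K."""
    return f2x * d_temp / (d_forcing - d_imbalance)

def tcr_energy_budget(d_temp, d_forcing, f2x=F_2XCO2):
    """Transient climate response: same formula with heat uptake ignored."""
    return f2x * d_temp / d_forcing

if __name__ == "__main__":
    ecs = ecs_energy_budget(d_temp=0.85, d_forcing=2.3, d_imbalance=0.6)
    tcr = tcr_energy_budget(d_temp=0.85, d_forcing=2.3)
    print(f"ECS ~ {ecs:.2f} K, TCR ~ {tcr:.2f} K")
```

        With these round inputs the ECS point estimate lands near 1.85 K; the whole dispute in this thread is about how wide the uncertainty on that number really is.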

      • Regarding energy balance:

        1.) there’s uncertainty, but the current understanding of satellite-based albedo estimates indicates that GCMs are wrong because they use an albedo some 2% too high.

        2.) according to the same estimates, a large portion of the warming over the last eighteen years is due to the large decrease in shortwave albedo.
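        The size of the claimed albedo error matters, and so does how “2%” is read. A minimal arithmetic sketch (the albedo value and both readings of “2%” are assumptions for illustration):

```python
# Translate an albedo error into a top-of-atmosphere shortwave flux
# error: dF = (S0 / 4) * d_albedo, with S0 the total solar irradiance
# and the factor 4 averaging over the sphere. Whether "2% too high"
# means a 2% relative error or 2 percentage points matters a lot, so
# both readings are computed. Illustrative arithmetic only.

S0 = 1361.0                  # W/m^2, total solar irradiance
MEAN_INSOLATION = S0 / 4.0   # ~340 W/m^2 global average

def flux_error_from_albedo_error(d_albedo):
    """TOA shortwave flux error (W/m^2) for a given albedo error."""
    return MEAN_INSOLATION * d_albedo

if __name__ == "__main__":
    alpha = 0.30                                           # rough planetary albedo
    relative = flux_error_from_albedo_error(0.02 * alpha)  # "2%" read as relative
    absolute = flux_error_from_albedo_error(0.02)          # "2%" read as 2 points
    print(f"2% relative: ~{relative:.1f} W/m^2; 2 points: ~{absolute:.1f} W/m^2")
```

        The two readings differ by more than a factor of three, which is why the ambiguity is worth flagging before drawing conclusions about GCMs.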

      • > The scale of the flow has nothing to do with it

        Car manufacturers will be pleased to know that they can make test drives using toy cars. By the same token, why not create a smaller model of the Earth and go from there? Should be easier to measure. Scott Pruitt could then cut even more funding for the meager observations we have.

        If wind tunnels could stand alone, no aerospace company would ever invest in CFDs. But they don’t – simulations are crucial to apply post-test corrections to inaccurate data. Heck, they even help correct scale problems that our technical commentator insists do not exist:

        This thesis deals with the problems of scaling aerodynamic data from wind tunnel conditions to free flight. The main challenges when this scaling should be performed is how the model support wall interference and the potentially lower Reynolds number in the wind tunnel should be corrected.

        Yes, Virginia – data is not always as pristine as our engineers try to portray it. In the real world, models and observations (both from experiments and in situ) go hand in hand. Using a simplistic model like an EBM (which provides the loosest constraint physically possible) does not make the model disappear or the data sovereign. The short of it is that this whole line of luckwarm marketing rests on a fairly basic epistemological blunder.

        Still no sign of our technical commentator’s citations.

      • Don Monfort

        According to willito, before CFDs human flight in randomly thrown together heavier than air contraptions was not possible. Crash and burn was the norm. Willito is berry smart.

      • Well Willard, you finally said something that’s partially correct. This is the Reynolds number issue. Reynolds number scales with the size of the object, and as it rises the boundary layer becomes relatively thinner. However, from the perspective of simulation, higher Reynolds numbers are harder because the viscous term is so small compared to the inertial terms.
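        To make the scaling concrete, a minimal sketch (standard sea-level air properties; the dimensions are illustrative, not any particular tunnel or aircraft):

```python
# Reynolds number Re = rho * V * L / mu. With the same fluid and the
# same speed, Re scales linearly with the characteristic length, which
# is why a sub-scale wind tunnel model sits at a lower Re than the
# full-size aircraft. Air properties are standard sea-level values;
# the dimensions are illustrative.

RHO_AIR = 1.225   # kg/m^3, sea-level air density
MU_AIR = 1.81e-5  # Pa*s, dynamic viscosity of air at ~15 C

def reynolds(speed, length, rho=RHO_AIR, mu=MU_AIR):
    """Reynolds number for a flow of given speed and length scale."""
    return rho * speed * length / mu

if __name__ == "__main__":
    full = reynolds(speed=70.0, length=5.0)    # ~5 m wing chord, full scale
    model = reynolds(speed=70.0, length=0.25)  # 1/20-scale tunnel model
    print(f"full scale Re ~ {full:.2e}, model Re ~ {model:.2e}")
```

        At 1/20 scale the model Reynolds number is twenty times lower at the same speed – exactly the mismatch that tunnel-to-flight corrections have to address.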

        But in terms of the difficulties of doing accurate simulations, the atmosphere and oceans are vastly more difficult than engineering simulations. You know the issues: Convection is an ill-posed problem, clouds are very difficult, large scale vortical flows (Rossby waves) are notoriously difficult to simulate. Then there are energy flows and precipitation. All these are ignored in engineering simulations.

        Be very careful here. Even a passing familiarity with CFD shows that climate models are solving a vastly harder problem than engineering simulations which are highly idealized. Another reason regulators require actual testing for all parts of the operating range.

        So if engineering CFD simulations have quite broad uncertainty bands and the literature systematically understates them, climate models have huge uncertainty ranges. This is really not controversial at the technical level. The only reason it’s an issue is that climate scientists are often quite good at passing by in silence the broad uncertainty bands in order to communicate the “message”. They are expert practitioners of the dark arts of colorful fluid dynamics or, more cynically, continuous fraud and deceit. It’s a systematic problem in science generally. Everyone is rewarded by making their simulation tools look really good.

      • > So if engineering CFD simulations have quite broad uncertainty bands […], climate models have huge uncertainty ranges.

        Uncertainty is related to the claim one is willing to make. There’s no need for an engineer-level formal derivation to foretell that summer will follow spring. This changes when we want to make more precise claims.

        Furthermore, there’s a whole world of difference between trying to predict how a plane will behave in all kinds of wind conditions and trying to project how the climate of a whole planet will evolve. With less granularity power comes less predictive responsibility. The point remains that we just don’t have an Earth wind tunnel to test our stupid modulz, whether they are EBMs, GCMs or everything in between.

        In any event, uncertainty is no luckwarm friend.

        Still no citations from our technical commentator.

      • Let’s cut through the fog of falsehoods and obscurantism that follows Willard like the dirt cloud surrounding Pig Pen of Peanuts fame and summarize:

        1. Engineering CFD simulations are highly idealized but still have rather high levels of uncertainty.
        2. The literature gives a vastly overly optimistic impression about this; uncertainty levels are almost never assessed at all, and if they are, they are understated, often dramatically so. This is related to the flawed culture of modern science. Selection bias and positive-results bias infect virtually all the literature.
        3. This is why a very costly and long campaign of flight testing is required by regulators to ensure public safety. Neither wind tunnel testing nor CFD is good enough for certification even though both are used in designing products which is a largely unregulated activity. This is where the science of model validation comes in.
        4. The whole wind tunnel thing is an example of how easy it is for the practitioners of the dark arts of colorful fluid dynamics to hoodwink pesky rabbits and philosophers and indeed those providing soft money to these very practitioners.
        5. The testing time vs. CFD computer time canard is meaningless for drawing any conclusions.
        6. The atmosphere and the oceans are vastly more difficult to accurately simulate than engineering situations. Recent negative results reveal that the uncertainty is quite high. Such things as cloud and convection models are a major source of uncertainty that affects ECS.
        7. Climate scientists often give an optimistic aura to the accuracy of climate models, usually just passing over in silence the high uncertainty. When challenged, they actually will sometimes admit the obvious, now that a few negative results have come from climate scientists. What took so long?
        8. Pay no heed to what has been said here by Willard and Very Tall as most of it is either irrelevant pettifogging or patently false.

      • > The literature gives a vastly overly optimistic impression

        Speaking of obscurantism, another technical comment void of citations.

        Here’s a short list of the most important misdeeds by our Technical Commenter, in this subthread alone:

        – refusing to acknowledge that test data and simulation are intertwined;
        – conflating test data with in situ measurements;
        – rejecting the importance of scaling in modelling;
        – denying that he’s seldom if ever citing anything traceable;
        – minimizing that EBMs are simplistic physical models;
        – downplaying the fact that uncertainty is not his friend;
        – and confusing precision with accuracy.

        This is only a short list, and only for the technical comments.

        And that’s notwithstanding all the abuses and other kind of misdemeanours that would kick him out of any team project, even if he was twice as good as he suggests he is.

        This act has run for more than a decade, now. You can’t make this up.

        Go team!

      • Geez Willard, are you being serious? You are easily fooled and accusing me of things you can’t even define or are ignorant of. Just to rebut one of your points.

        conflating test data with in situ measurements

        In situ measurements – say, of atmospheric pressure at a weather station – and flight testing with pressure taps are not very different. Both instrument a flow field and record the measurements. There is lots of noise however due to turbulence. Wind tunnel testing does pretty much the same thing.

        I did not say much about EBM’s except that they are accurate if the inputs are accurate. You just failed to read very carefully.

        Reading comprehension for Willard is a challenge, as many of the other points were addressed above, such as the Reynolds number scaling issue. Perhaps Willard was thrown off by the use of a technical term.

      • Wind tunnels versus whatever. A design is stable in a wind tunnel. Or it is hopelessly not. If we could put the Earth into a wind tunnel, which of these would it be?

        LC18 showed that it has resisted overheating. But that is luck, the luck of the luckwarmers. A roll of the die. Its design could be unstable – that is, it could heat up a lot – but the roll of the die doesn’t support that.

        Which reminds me of the most important though not new thing Curry said last night on Tucker Carlson. We’ve done the experiment. LC18 looks at the result of our experiment and tries to describe it and add error bars.

      • ATTP, You and I don’t disagree much on the EBM models then. I think however that recent evidence shows that GCM’s are not really valid evidence for ECS because ECS is highly dependent on subgrid models that cannot be constrained with data. One should really quote a wide range of ECS for a given model, and I know that range for turbulence models, for example, is large. You can engineer the force over quite a range. We have a better library of data however for simple clear air flows.

        Paleoclimate, well we have enough trouble getting temperature and forcings right in the modern era.

      • Don Monfort

        Are we still on wind tunnels? Like some other characters here who suffer from a woeful lack of punching power, willito employs the weakling’s version of the rope-a-dope. Take a long drawn out thrashing hoping that the opponent will collapse from exhaustion, or just get bored from the tedious claptrap and go away.

      • > 3. This is why a very costly and long campaign of flight testing is required by regulators to ensure public safety. Neither wind tunnel testing nor CFD is good enough for certification even though both are used in designing products which is a largely unregulated activity. This is where the science of model validation comes in.

        I wonder what the mortality rate per flight hour of onboard model validation units has been over time, dp?

        I bet it’s the very small very fast airframes that have bought more farms per flight hour than your average twin-decker super-jumbo.

        Other than the odd volcanic eruption or even odder impact event, I doubt supersonic flows would be of much concern for climate muddlers except for the ones our aerospace industry is making.

        I can hear it now; global warming caused by sonic booms.

      • > In situ measurements say of atmospheric pressure in a weather station and flight testing with pressure taps are not very different. In situ measurements say of atmospheric pressure in a weather station and flight testing with pressure taps are not very different.

        Repeating the same sentence twice doesn’t an argument make, David.

        There is a difference between a wind tunnel and real flight. Even if you measure the same parameters, you don’t measure the same thing. To paraphrase Norbert Wiener, the best model of a flying cat is a flying cat, preferably the same flying cat. Size does indeed matter in modelling nonlinear behavior. If you dispute that, you might need to read the thesis of a guy who spent a few years trying to reconcile the two.

        My point here is not to suggest that we can abstract away observations, but that when tackling complex problems, we need all the tools we have. Discussing wind tunnels is not that important for climate science, since we have yet to build such kind of model for the Earth. If we could have one, I’d be the first to cheer.

        I know that you bet on observations, as you said at James’ and repeat over and over again. The issue I have with this stance is simple – our climate data are very clunky compared to when we engineer stuff that needs to work because lives are on the line. This clunkiness should compel us to be very circumspect in following it blindly.

        Think of it this way. If an EBM gave you a sensitivity near zero, that would tell you either that physics doesn’t work as we think, that the data suck, a little of both, or else. Same if that EBM told you that sensitivity should be 9C. More iterations of measuring and modelling is required to move forward, using all the approaches and the frameworks we can. Cheering on the sidelines for one result and one result only won’t change that fact.

        It is a shame you don’t use your professional experience to make people learn about your turf. Making it your priority, instead of gainsaying for the sake of it, would improve the contrarian brand. The world needs better contrarians, David. ClimateBall is obviously not for you.

      • Brandon, I can’t tell exactly what you are trying to say here, but the issue here is not about supersonic flow. It’s about simulations of chaotic flows and all turbulent flows are chaotic. It’s more about large scale vortical flows, convection, and clouds. These are all well known weak points of CFD.

    • I can barely suppress giggling over your foolishness, verytrollguy.

    • From 2008:

      Mark R. Melanson, An Assessment of the Increase in Wind Tunnel Testing Requirements for Air Vehicle Development over the last Fifty Years,
      46th AIAA Aerospace Sciences Meeting and Exhibit AIAA 2008-830, 7-10 January 2008, Reno, Nevada.

      Abstract
      Future wind tunnel test requirements are very difficult to forecast due to the uncertainties of anticipating the future direction of national needs, budgetary pressures, military requirements, and evolving technology. Wind tunnel test history from past programs is often used as an indication of potential future wind tunnel test needs. Past history, in the form of wind tunnel test hours (most commonly referred to as “user occupancy hours”) for each program, has been the primary metric used to portray test histories. This metric, while interesting, fails to take into account that a test hour in the early days of wind tunnel testing produced far less “data” than a contemporary test hour today. A more meaningful representation of historical wind tunnel test requirements would be the total amount of “data” obtained for each program. However, historical records are somewhat haphazard; and these measures are generally not available for many programs. Using available data rate trends from a variety of sources (in the form of runs or polars per test hour), more realistic “data” requirements history has been generated, showing a dramatic increase over the last fifty years in the data required for air vehicle development.

      These data were collected over a decade ago. If electronics has advanced in wind tunnel testing as in the general case, “user occupancy hours” is very likely even less meaningful now than then. The evolutionary path of commercial aircraft, in contrast to a revolutionary path, also probably reduces the number of test hours required. Recent advances seem to me to have been in structural materials.

      If you’re into Validation, you’re into wind tunnel, and flight, testing. Otherwise, probably not so much. Not to mention that wind tunnels are used also for Validation of CFD for a wide range of applications not related to flight.

      Pat Roache has addressed this issue, but I can’t find that material now. I vaguely recall he addressed the fact that someone had said that CFD would completely annihilate the need for wind tunnels. He pretty much annihilated that thought. The rise of Verification and Validation

      If you don’t do Verification and Validation, you’re pretty much playing with toy models, not working with tools.

      wind tunnel usage

      • oops.

        The rise of Verification and Validation, especially Validation, as crucial aspects of scientific and engineering modeling and software and computing demands data.

      • double oops. The Abstract ended at ” . . . air vehicle development.”
        All the other words are mine.

      • Thanks Dan. There is another issue here: the sophistical comparison of “testing hours” vs. computer hours for CFD. Testing of any kind is very efficient in that perhaps hundreds of different configurations/conditions can be done in a single hour. By contrast, depending on how you measure a “computer hour”, even a simple CFD simulation can take several hundred computer core hours. This comparison of “hours” is just meaningless. Perhaps the real comparison is throughput/cost/accuracy. It’s controversial and sound bites like the one Annan quotes are useless.

  61. At least with regard to the HADCM3 model, this is very telling. It indicates something structurally wrong with the model.

    The TCR of the Met Office’s previous generation climate model, HadCM3, is at the 95% upper bound for TCR of 2.0°C given in Otto et al. A perturbed physics ensemble study based on adjusting the parameters in HadCM3, now set out in two major published papers (Sexton et al 2012 and Harris et al 2013), represents the techniques and climate model on which the official UK Climate Projections (Murphy et al. 2009) produced by the Met Office were based. HadCM3’s high TCR might not matter if, as the Report claims, “uncertainty in the response of the climate system to CO2 forcing is comprehensively sampled” in the study. However, it is not. Despite thorough attempts to make it do so by varying its parameters, HadCM3 is unable to sample the region where, according to several recent observational studies, key characteristics of the real climate system are most probably located. It is therefore unsurprising that incorporating observational data barely alters the Harris et al 2013 prior central estimate for TCR.

  62. “This result is another link in a growing chain of evidence that internal climate variability played a leading order role in the trajectory of 20th century global mean surface temperature…

    Second, theoretical arguments suggest that a more variable climate is a more sensitive climate to imposed forcings.” http://www.pnas.org/content/106/38/16120

    Internal variability is the result of changes in ocean and atmospheric circulation, arising prominently from enhanced meridional patterns emerging from both poles and modulating both the THC in the Atlantic and upwelling on the eastern Pacific margin.

    More Lorenzian trigger than climate forcing.

    • Second, theoretical arguments suggest that a more variable climate is a more sensitive climate to imposed forcing.
      http://www.pnas.org/content/106/38/16120

      I have seen that comment in passing several times, but have never been able to track it to some theory or observation in support of it. In this case the reference is to a Von Storch and Zwiers publication titled: Statistical Analysis in Climate Research which is a compendium on statistical methods for application to climate science. Does anyone here have supporting theory or evidence for this statement?

      • Forced change may initiate abrupt climate change – intrinsic change emerges from chaos in Earth’s flow field.

        https://watertechbyrie.com/2014/06/23/the-unstable-math-of-michael-ghils-climate-sensitivity/

      • Ken,
        A more sensitive climate is one where the temperature change required to produce an increase in outgoing radiation sufficient to counteract an imposed (+ve) forcing is larger. Since a considerable element of climate variability arises from quasi-random cloud changes that produce forcing changes, one would expect a more sensitive climate to imposed forcing to exhibit greater temperature variability.
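This expectation can be illustrated with a toy zero-dimensional energy-balance model driven by quasi-random forcing noise. This is only a sketch: the feedback parameters, heat capacity and noise level below are illustrative values chosen for the example, not drawn from any model or observation.

```python
import numpy as np

def simulate_ebm(lam, n_years=20000, C=8.0, sigma_F=0.5, seed=0):
    """Toy zero-dimensional EBM: C dT/dt = F_noise - lam*T.
    lam     : feedback parameter (W/m2/K); smaller => more sensitive climate
    C       : effective heat capacity (W yr m-2 K-1), illustrative value
    sigma_F : std dev of annual quasi-random (e.g. cloud) forcing, W/m2
    """
    rng = np.random.default_rng(seed)
    T = np.zeros(n_years)
    F = rng.normal(0.0, sigma_F, n_years)
    for t in range(1, n_years):
        T[t] = T[t-1] + (F[t] - lam * T[t-1]) / C   # annual Euler step
    return T

# A more sensitive climate (lam = 0.8) vs a less sensitive one (lam = 2.0),
# driven by identical forcing noise (same seed)
sd_sensitive = simulate_ebm(lam=0.8).std()
sd_insensitive = simulate_ebm(lam=2.0).std()
print(f"sd(T): lam=0.8 -> {sd_sensitive:.3f} K, lam=2.0 -> {sd_insensitive:.3f} K")
```

The stationary temperature variance of such a model scales inversely with the feedback parameter, so the more sensitive climate (smaller λ) exhibits the greater temperature variability, consistent with the comment above.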

      • Changes in global cloud cover are dominated by changes in Pacific cloud over the eastern upwelling regions. Clement et al (2009), Observational and Model Evidence for Positive Low-Level Cloud Feedback – regressed cloud amounts against sea surface temperature.

        It is caused in part by Rayleigh–Bénard convection in a fluid – the atmosphere – heated from below. Closed cloud cells tend to form over cool, upwelling zones increasing global albedo – open cloud cells form over warmer surfaces – decreasing planetary albedo (Koren et al 2017).

        The figure below shows global “wind and gyre circulation changes hypothesized to be associated with multidecadal (a) warm and (b) cool phases of the North and South Hemispheres. White arrows indicate regions of enhanced wind and black arrows indicate areas of enhanced gyre circulation. The blue patches indicate the sinking waters in the North Atlantic. The zonal warm phase occurred from the 1910s to 1940s and 1970s to 1990s and is characteristic of strong westerly winds in the northern and southern hemisphere. North Pacific and North Atlantic subarctic gyre circulations enhance with sinking waters associated with the northern North Atlantic winter. In the Atlantic subtropical gyre circulations also enhance. Some surface waters travel from the Indian Ocean to the south Atlantic and join the Gulf Stream in the North Atlantic. The meridional cool phase occurring from the 1940s to 1970s and 1990s to present consists of equatorward winds over the continents and poleward winds over the subarctic and sub-antarctic oceans, resulting as Rossby wave formations. Intensified circulation in subtropical gyre systems enhances upwelling and productivity in the California and Peru systems. Strengthened easterly trade winds increase equatorial current circulation in the Pacific. The background global chlorophyll is from Yoder et al.”

        These changes are associated with changes in surface pressure at the poles, which have been linked to solar UV/ozone chemistry in the upper atmosphere. It is a small change that triggers major shifts in the system on multi-decadal to millennial scales, with cloud feedbacks.

      • Kyle Swanson argues it’s true here:

        http://www.realclimate.org/index.php/archives/2009/07/warminginterrupted-much-ado-about-natural-variability/

        Climate doesn’t know it’s CO2. It just knows it’s warmer. We can measure CO2, but it’s hard to measure climate variability.

      • kenfritsch

        Nic, I surmise that you are saying here that cloud cover causes changes in radiation reaching and being absorbed by the earth, and that forcing is well understood in its relationship to the magnitude of climate sensitivity. That part I understand. That climate variability – and here I assume we are talking about multidecadal and recurring variations – is largely a function of cloud cover is not something of which I have a good understanding, primarily because I was not aware of a theory relating cause and effect for those cycles. I have been able to empirically find those cycles with Empirical Mode Decomposition but could not find a theory to support their existence.
        On the other hand, I may not be understanding the implications of your statement here.

      • I should add here that if for some reason the low frequency natural temperature variation were taken as part of the temperature attributed to the other forcings used to calculate TCR or ECS, then high variability in natural forcing, as manifested in the low frequency natural temperature variation, would bias the climate sensitivity estimate to the high side.

  63. Nic and Judith,
    Congratulations on a worthwhile and timely paper.

    Nic,
    Your response to Dessler is I think fully sound. However, I would like to return for a moment to your discussion of GA16 where you concluded:-
    “Our finding that the relationship between pentadal ΔR and ΔT in HadGEM2-A during its amipPiForcing experiment is stable, apart from two excursions, (Figures 5a and S3) strongly points to the observed SST pattern evolution being largely forced and to much lower λ values in years 1−20 of HadGEM2-ES’s abrupt4xCO2 experiment reflecting unrealistic simulated SST pattern evolution.”
    I don’t disagree that HadGEM2-ES displays unrealistic simulated SST pattern evolution – since there is independent evidence from direct regional comparisons of SST. However, I do not think that this is the correct explanation for the large difference in (apparent feedback) gradient between the amipPIcontrol runs and the first-20-year gradient from the 4xCO2 Gregory plot, a difference which is confirmed by your detailed analysis. At best, your explanation is incomplete unless the alternative explanation is excluded, and as such your conclusion here may be scientifically unsafe – even if HadGEM2-ES is indeed unrealistic.

    I think the true explanation is much simpler in a way. It is quite simply that the 1-20 years gradient is a quite appalling estimate to use for this particular purpose.

    HadGEM2-ES is one of the many GCMs which conform to an LTI (linear time-invariant) system, i.e. net flux and temperature response can both be predicted reasonably accurately using only an LTI assumption, with no requirement for constant feedback assumptions. (Aside for Pehr Bjornbom: Note that this means that an EBM can readily reflect characteristic SST pattern development in GCMs if the GCM displays consistent curvature on a Gregory plot.)

    The convolution equation for temperature prediction requires a boundary condition of temperature (change) equals zero at the onset of any new applied forcing. The convolution equation for net flux prediction, on the other hand, requires that the change in net flux is equal to the change in forcing at the onset of any new applied forcing. What this means from a simple maths perspective is that temperature prediction from net flux forcing is not particularly sensitive to approximations made in the early years – specifically the use of ERF plus constant feedback assumption yields reasonable answers – but that for accurate net flux prediction we need to use a forcing species which reflects actual TOA (or top-of-model) net flux change AND an accurate estimate of dN/dT at and shortly after t=0. The use of an ERF forcing species lops off the net flux response at t=0 and results in a gross underestimate of the true magnitude of dN/dT in the early period response of the GCM at EACH change in forcing. So for accurate prediction of net flux we need forcing values which truly reflect instantaneous TOA change in the GCM – Fi values for preference. The above is all testable, even though it (still) does not appear to be widely known. In summary, ERF and constant feedback are OK for temperature estimation, including TCR and ECS, but NOT good for net flux estimation.
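The two boundary conditions described above can be made concrete with a minimal LTI step-response emulator. The mode amplitudes, timescales and unit feedback below are invented for illustration, not fitted to HadGEM2-ES or any other GCM:

```python
import numpy as np

# Illustrative unit-step response modes (K per W/m2); amplitudes sum to 1,
# so the implied equilibrium feedback is lam = 1 W/m2/K.
amps = np.array([0.2, 0.3, 0.5])
taus = np.array([0.6, 7.0, 300.0])   # years
lam = 1.0

def T_step(t):
    """Temperature response to a unit step forcing: zero at t = 0."""
    return np.sum(amps * (1.0 - np.exp(-np.asarray(t)[:, None] / taus)), axis=1)

def emulate(forcing, years):
    """Convolve forcing increments with the step response (LTI assumption)."""
    dF = np.diff(forcing, prepend=0.0)
    T = np.zeros_like(forcing)
    N = np.zeros_like(forcing)
    for j, f in enumerate(dF):
        lag = years[j:] - years[j]
        g = T_step(lag)
        T[j:] += f * g                  # boundary condition: dT = 0 at onset
        N[j:] += f * (1.0 - lam * g)    # boundary condition: dN = dF at onset
    return T, N

years = np.arange(0.0, 140.0)
forcing = np.full_like(years, 3.7)      # abrupt 3.7 W/m2 step
T, N = emulate(forcing, years)
print(f"t=0: dT = {T[0]:.3f} K, dN = {N[0]:.3f} W/m2 (equals the forcing)")
```

At the onset of the step the emulated temperature change is exactly zero while the emulated net flux change equals the full applied forcing, which is why early-time approximations (ERF vs Fi) matter far more for net flux prediction than for temperature prediction.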

    In order to predict net flux response from just temperature change, as in the amipPIcontrol cases, we have a similar challenge for similar reasons. The instantaneous change in net flux is a point estimate obtained from the instantaneous change in temperature. This is not necessarily a single-valued function because of the changing climatology at the time the temperature increment is applied, but, if we assume for a moment that this effect is small, then in theory we can develop a simple curve of ΔN vs ΔT directly using F-N vs T from a Gregory plot for the same GCM. There are unfortunately two huge problems. The first question is: what constant value of F should be used in developing this curve? It is most certainly NOT the intercept from the 1 to 20 year regression fit, which is probably close to the ERF rather than Fi for the model. Ideally, as I explained above, it should be the Fi value corresponding to the 4xCO2 case for the model. The second question is: how do we define the curve shape to cover a temperature change in the amipPIcontrol runs spanning only a bit over one degree C, when the very first point of information on the Gregory plot – averaged over the first year – covers the same temperature change? We need an estimate of ΔN vs ΔT in the very early period, and so it is supersensitive to the choice of F, if it is derived from a Gregory plot, and to the N vs T curve shape from 0 to 1 degree temperature change. The 1 to 20 year regression has the wrong intercept and is averaged over the wrong temperature interval of interest. It tells us literally nothing about the expected flux change over the interval of interest seen in the amipPIcontrol cases.
    Your own calculation of gradient ca 1.9 W/m2/deg C therefore just suggests in the first instance that the Fi value for HadGEM2-ES is a long way north of an ERF estimate obtained from first-20-year linear regression. I do not believe that you can then conclude per se that HadGEM2-ES has some crazy pattern development, at least not before ruling out this simpler explanation. My own fit to the HadGEM2-ES 4xCO2 experiment from some years ago, which predicted the 1% experiment with high-order accuracy as well as matching the 1290 year forward simulation, used a 3-body model fit (the first 3 eigenvector solutions). This gave a SECANT gradient for fixed forcing over the first degree temperature change of around 1.5 W/m2/deg C and an Fi value of 8.3 W/m2. It would be interesting to test what value of linear regression gradient this would give using the temperature changes from the amipPIcontrol results. I suspect that it would not be far off your own result, but in any event it would be significantly higher than the first-20-year gradient.
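The general point that a years 1–20 Gregory regression intercept (an ERF-like value) falls short of the instantaneous forcing Fi when a fast response mode is present can be sketched numerically. The parameters below loosely echo the quoted Fi ≈ 8.3 W/m2 and 0.6/7.2-year timescales but are otherwise invented, with differing per-mode feedbacks so that the Gregory plot is curved:

```python
import numpy as np

# Illustrative 3-mode response to abrupt 4xCO2 (not the actual fit above).
# Mode i contributes N_i(t) = Fi*a_i*exp(-t/tau_i) to the TOA imbalance and
# T_i(t) = c_i*(1 - exp(-t/tau_i)) to warming, so mode feedbacks differ.
Fi = 8.3                               # W/m2: instantaneous (t=0) forcing
a = np.array([0.35, 0.30, 0.35])       # flux amplitudes, sum to 1 so N(0) = Fi
c = np.array([0.5, 1.2, 2.5])          # K: equilibrium warming per mode
tau = np.array([0.6, 7.2, 300.0])      # years

def N_of_t(t):
    return Fi * np.sum(a * np.exp(-np.asarray(t)[:, None] / tau), axis=1)

def T_of_t(t):
    return np.sum(c * (1.0 - np.exp(-np.asarray(t)[:, None] / tau)), axis=1)

def annual_mean(f, year, n=12):
    t = year + (np.arange(n) + 0.5) / n      # monthly sampling within the year
    return f(t).mean()

yrs = np.arange(20)                          # "years 1-20" as annual means
N_ann = np.array([annual_mean(N_of_t, y) for y in yrs])
T_ann = np.array([annual_mean(T_of_t, y) for y in yrs])

slope, intercept = np.polyfit(T_ann, N_ann, 1)   # Gregory regression
print(f"true Fi = {Fi} W/m2, years 1-20 regression intercept = {intercept:.2f} W/m2")
```

Because the fast mode decays largely within the first annual mean, extrapolating the years 1–20 points back to T = 0 misses the initial flux drop, and the fitted intercept comes out well below the instantaneous forcing Fi.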

    [As an addendum, I note that amipPIcontrol runs have now been requested from all of the modeling labs as part of the PCMDI. This is fine as far as it goes, but it is far more important IMO that the modeling labs be asked to provide Fi data, for both CO2 only and historic period, which would resolve a lot of the present ambiguity in explaining GCM results. The backward lurch into adoption of ERF as a forcing “standard” was an abomination in my view. While I understand the historical drive towards it, it leaves us with a standard which is ambiguously defined, highly dependent on features of the dynamic model, prohibitively expensive to calculate dynamically, and ultimately unavailable as a continuous series. What more could one ask of a “standard”?]

    • Paul,
      Thanks for your very interesting comment. I will need to do some thinking about what you say before responding properly. But one or two initial thoughts.

      First, I don’t regard the fact that the AMIP temperature change is less even than that in year 1 of the abrupt4xCO2 simulation as a problem, since most AOGCMs (and certainly HadGEM2-ES) behave remarkably linearly over the abrupt4xCO2 temperature range.

      Secondly, wouldn’t the point you make about net flux (N) estimation being difficult, if it is a serious problem, lead to inaccurate step-emulation, using abrupt4xCO2 data, of the N response in 1pctCO2 ramp-forcing simulations? In fact, that emulation is pretty accurate throughout the 140-year period involved.

      • “Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.” http://www.pnas.org/content/104/21/8709

        The characterization of models as linear and accurate is an astonishing delusion.

      • Thanks Nic,
        Since writing my comment above, I have revisited the prequel paper to GA16, namely the Andrews et al 2014 paper “The dependence of radiative forcing and feedback on evolving patterns of surface temperature change in climate models”. I have decided that the “simple explanation” that I offered in my previous comment, while still having some validity, cannot on its own explain the behaviour exhibited in the Met Office models, and in particular their HadCM3-A ST4 case. This case suggests an extreme sensitivity to climatology and that N is a broad-spread multivalued function of average T in this model. On reflection, then, I would concede that your explanation of the discrepancy being due to unrealistic pattern development in HadGEM2-ES is looking credible, even though I still have reservations about the characterisation of first-year net flux change in that model.

        “Secondly, wouldn’t the point you make about net flux (N) estimation being difficult, if it is a serious problem, lead to inaccurate step-emulation, using abrupt4xCO2 data, of the N response in 1pctCO2 ramp-forcing simulations? In fact, that emulation is pretty accurate throughout the 140-year period involved.”

        Actually, Nic, the answer is that we should NOT expect to see any change in net flux prediction in the emulation of the 1% cases since any change in forcing or intercept assumption results in self-cancelling scaling relationships, leaving only the difference arising from the incremental forcing at time of application – which is VERY small for the 1% cases. In effect, you will get the same exact match apart from a very small translation of the curve which is significantly less than the uncertainty in baseline translation for the abstraction of net flux values. To give you some indication of the difference, if we both use identical curves from year 1 onwards but you use an intercept of 8 W/m2 (ERF) and I use a t=0 value of 10 W/m2 – a sizeable difference – the upper bound on the translation between the two prediction curves translates into the difference between the Forcing intercepts divided by the duration to reach that forcing level, say 140 years, i.e. in other words a translation of just 2/140 W/m2. So the difference in forcing assumption is invisible for matching the 1% net flux response. (It is not invisible when abstracting properties since you have a forcing vs time curve climbing to 8 W/m2 and mine rises to 10 W/m2 over the same period.)
        We cannot take too much comfort from this however. The choice of forcing becomes important when you wish to use the step forcing data in a convolution problem with variable forcing data series from another source – such as modeling the historic period, or, as I tried to explain previously, when you wish to predict the net flux change from surface temperature data over the early low temperature interval.

      • Paul,

        I didn’t follow your comment:

        This case suggests an extreme sensitivity to climatology and that N is a broad-spread multivalued function of average T in this model.

        Doesn’t it just imply that the N and T response of HadCM3-A is almost entirely determined by the pattern of SST warming? The “climatology” term only seems to come into things as a way of projecting the local SST back to t=0 along the 20-100 year abrupt4xCO2 local regression line.

        You also say that for a 2 W/m2 difference in F_4xCO2, when emulating the ipctCO2 response:

        the upper bound on the translation between the two prediction curves translates into the difference between the Forcing intercepts divided by the duration to reach that forcing level, say 140 years, i.e. in other words a translation of just 2/140 W/m2.

        I agree, you are right on this point.

        You go on to say:

        The choice of forcing becomes important when you wish to use the step forcing data in a convolution problem with variable forcing data series from another source – such as modeling the historic period

        I can’t see why the same argument as applies to emulating the response in the 1pctCO2 case doesn’t also apply here.

      • Nic,
        Let me try to deal with your second point first.

        We agree that the net flux predicted in the 1% p.a. case is not sensitive to differences in forcing assumptions, all else being equal, since it results in a negligible translation of the 1% p.a. curve.

        I wrote: “The choice of forcing becomes important when you wish to use the step forcing data in a convolution problem with variable forcing data series from another source – such as modeling the historic period.”
        You wrote:
        “I can’t see why the same argument as applies to emulating the response in the 1pctCO2 case doesn’t also apply here.”

        The same argument DOES apply in the sense that, if all forcings are consistently scaled, then the OVERALL gradient of the predicted net flux change with time due to a forcing change over a long enough period is negligibly altered by the change in forcing assumption. Net flux and temperature prediction are scale-independent in this sense. [Indeed, this argument can be used to validate the robustness of the assumptions that go into your TCR and ECS estimation methodology. However, to be crystal clear, the validity of your methodology in this was not in question in my mind – only the legitimacy of the inference drawn about the Met Office models.] What the change in forcing assumption/intercept does change, however, are the PEAKS AND TROUGHS of the net flux prediction when forcing is variable (as in e.g. the historic period or the amip standard run period), and in this, the difference is small but not negligible. The largest variability comes from volcanic forcing, but, since an efficacy factor is typically required in any event, the differences arising from forcing assumption can be nullified by a rescaling of the efficacy factor. However, even when volcanic forcings are excluded from the picture, the incremental year-to-year change in input forcing values can be greater than 0.1 W/m2, and changes of over 0.3 W/m2 over 5 year periods are not uncommon (GISS-E2-R Fi forcing data). Our hypothetical difference in forcing intercept assumption of 20% now converts into a translation of up to 0.02 W/m2 on an annual point basis. This may not sound like much, but the total rise in net flux estimated in GCMs from 1850 to 1950 is only about 0.25 W/m2, with perhaps a further 0.35 W/m2 from 1950 to now. These incremental forcing changes are large enough to result in visible differences in the peaks and troughs of net flux series predicted under assumptions of different forcing intercepts – even when the rest of the step function defining net flux evolution with time is identical. I would argue that the use of Fi values reduces the RMS error, but the effect is not dramatic.
        Well, so what? We have agreed that this doesn’t challenge the validity of your temperature and net flux estimation methodology at all. And the use of ERF plus a constant feedback assumption for temperature and net flux prediction is robust because of scale-independence to changes of forcing assumption. However, the same does not apply to the restorative flux; it is NOT scale-independent to changes in forcing (intercept) assumptions. So consider the inverse problem presented by GA16 – which was where I started from. Definitionally, I can plot the restorative flux against temperature in my emulation model of the given GCM by taking at each point over the historic period the forcing that I have applied and subtracting the net flux calculated at the same point in time, F-N vs T. You can do the same. Since our net flux and temperature predictions are very similar, the only difference in this calculation is that I have a forcing intercept which is (by hypothetical assumption) 20% higher than yours in the step-result. When you plot F-N vs T from your emulation result, if you have assumed a constant feedback and a compatible Forcing intercept at T=0, you should see a perfect straight line with gradient equal to your constant feedback. I see (instead) a multivalued function of T which, if force-fitted with a straight line, produces a gradient which is a lot higher than your constant feedback value, since it is a forcing-time-weighted average of secant gradients drawn from my higher forcing series. Now suppose that we are both given just the emulated temperature data from the match over the historic period, and asked to predict the restorative flux with no information on the forcing history. From my F-N vs T plot, I will produce estimates of ΔN for any given ΔT which are always greater than yours, despite the fact that our respective forcing assumptions give “satisfactory” matches to temperature and long-term net flux change. This is the problem presented by the GA16 amipPIcontrol challenge. And because ERF estimates already are known to eliminate some restorative TOA net flux adjustment, I believe that Fi values give a more realistic estimate of the starting point and early gradient of temperature-dependent restorative flux adjustment in the 4xCO2 case. Hence, all else being equal, the 1–20 year gradient from the 4xCO2 case is an appalling estimate to be used as a comparative against a gradient purporting to represent restorative flux change with temperature – unless of course the intercept corresponds to the Fi value for that model.
        So while I stand by the above, the reason I have backed off the libellous suggestion that your conclusion re HadGEM2-ES may be unsafe is that I cannot find any better explanation for the dramatic change in outgoing net flux in HadCM3-A other than an extreme sensitivity to temperature pattern.
        “Doesn’t it just imply that the N and T response of HadCM3-A is almost entirely determined by the pattern of SST warming?” Yes, it does.
        “The “climatology” term only seems to come into things as a way of projecting the local SST back to t=0 along the 20-100 year abrupt4xCO2 local regression line.” Agreed, but if we are still talking about the ST4 case, because they have included an intercept term in that calculation, the first step changes both the pattern and the average temperature. Despite all my waffling above about the restorative flux not being scale-independent, there is no way that such a different treatment of the restorative flux calculation can explain the massive net flux change due solely to pattern change, nor the large spread of restorative net flux for allegedly the same average temperature. It really does look like unrealistic pattern distribution plus a sensitivity in restorative flux which is possibly heavily magnified by questionable CRE change. There may be an alternative explanation but I don’t really have the time to search for it.

    • In the linked article and Supplementary Material the authors show the best-fitting model for GMST versus time for the CMIP5 abrupt 4X CO2 and the 1% CO2 experiments, choosing among 2 and 3 time constant (2TC and 3TC) models and a diffusion model. The 3TC model fits best for the 4X CO2 case, and the diffusion model and 2TC model fit best for the 1% CO2 case. The difference for the 4X CO2 case is in the first 5 to 20 data points, depending on the CMIP5 model data used, where the 3TC model can better fit the initial points. It appears that in the 1% CO2 case – which is closer to reality in forcing rates – the extra time constant is not required, which makes me believe that the abrupt 4X CO2 initial data points are an artifact of the initial very rapid warming.

      http://iopscience.iop.org/article/10.1088/1748-9326/8/3/034039

      I have found that the model fitting using the Bayesian approach and including the combined effects of changes in GMST and net TOA radiation in the linked article here shows the same initial better fit for all 150 year data points for the 3TC model that gives way to a 2 TC model when the first 5 to 20 initial data points are eliminated.

      http://advances.sciencemag.org/content/3/7/e1602821/tab-pdf
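The 2TC-vs-3TC comparison can be sketched on synthetic data. The time constants, amplitudes and noise level below are illustrative, not taken from any CMIP5 model, and `scipy.optimize.curve_fit` stands in for the Bayesian fitting used in the cited papers:

```python
import numpy as np
from scipy.optimize import curve_fit

def resp(t, *params):
    """Sum-of-exponentials step response; params are (amplitude, tau) pairs."""
    p = np.asarray(params).reshape(-1, 2)
    return np.sum(p[:, 0] * (1.0 - np.exp(-np.asarray(t)[:, None] / p[:, 1])),
                  axis=1)

# Synthetic "abrupt 4xCO2" warming: three time constants plus annual noise
rng = np.random.default_rng(1)
t = np.arange(1.0, 151.0)
data = resp(t, 1.0, 0.6, 1.0, 7.0, 1.5, 250.0) + rng.normal(0.0, 0.05, t.size)

def fit_residuals(t, y, p0):
    popt, _ = curve_fit(resp, t, y, p0=p0, bounds=(0.0, np.inf), maxfev=20000)
    return y - resp(t, *popt)

res2 = fit_residuals(t, data, [1.0, 2.0, 1.5, 100.0])              # 2TC model
res3 = fit_residuals(t, data, [1.0, 0.5, 1.0, 10.0, 1.5, 200.0])   # 3TC model
print(f"SSE 2TC = {np.sum(res2**2):.3f}, SSE 3TC = {np.sum(res3**2):.3f}")
# Inspecting res2 shows the 2TC misfit concentrates in the early,
# fast-warming years, consistent with the comment above.
```

Since the 2TC model is nested within the 3TC model, the extra time constant can only improve the fit, and on step-forcing data the improvement comes almost entirely from the initial rapid-warming points.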

      • Ken,
        I think the main reason that a 3 TC model fits the abrupt4xCO2 case best is that the year 1 point is significantly affected by the adjustment of forcing from its instantaneous TOA value towards its ERF value, as the stratosphere, troposphere and land surface (and probably SST patterns) adjust.
        In the 1pctCO2 case, by the time the response is non-trivial the contribution from year-one response increments has been heavily diluted by response increments from later years, so merging the first two TC responses into one with an intermediate timescale (median 3.3 years vs 0.6 and 7.2 years for the Caldeira & Myhrvold study you cite) causes very little inaccuracy.

      • kenfritsch

        Nic, if in the 1% CO2 experiment the forcing is close to the equivalent increase in AGW forcing in the observed world, then no matter what the actual mechanism is that gets one to a 2TC (or diffusion model) fit for that experiment, it must be a more realistic emulation of earthly warming than the 4XCO2 experiment, and would indicate that the observed warming should better fit a 2TC or diffusion model than a 3TC model.

        Although it varies with CMIP5 models, the point at which a 2TC model provides the better fit for either the T vs t or the TOA net vs t is beyond where the first time constant of a 3TC model is varying – which is approximately 3 to 5 years as I recall – and in most cases is well into the years where the second time constant is varying to a large extent.

        If the problem in gaining a better understanding of these mechanisms for both the observed and modeled cases is one of separating the secular trend from the noise – noise that includes multidecadal recurring variations – I would hope more effort in the climate science community would be devoted to analyses such as Empirical Mode Decomposition, where a secular trend due to AGW forcing could be more readily resolved and estimated.

  64. Nic: We appear to have more than a “single representation of reality”. Otto 2013 analyzed five representations of reality (four different decades and one four-decade period). You have analyzed several different 65- and 130-year “representations of reality”. I don’t remember the details of other publications, but other periods were probably used. The fact that all produce ECS around 1.5 to 2.0 K suggests to me that internal variability is not playing a big role in the outcome.

    If one performs a linear fit to the UAH record of “reality”, for example, one obtains a rate of warming and a confidence interval. The width of that confidence interval reflects sampling a reasonable number of examples of “short-term internal variability” such as ENSO, but an inadequate sampling of “long-term internal variability”, such as the AMO. Within a single 39-year “representation of reality”, one finds many shorter periods selected by chaos.
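The point about what such a confidence interval does and does not capture can be sketched on a synthetic series. The trend, noise persistence and noise amplitude below are illustrative values, not the actual UAH data:

```python
import numpy as np
from scipy.stats import linregress

# Synthetic 39-year "satellite era" series: a linear trend plus AR(1)
# noise standing in for ENSO-scale variability (all values illustrative)
rng = np.random.default_rng(42)
years = np.arange(1979, 2018)
true_trend = 0.013                       # K/yr
noise = np.zeros(years.size)
for i in range(1, years.size):
    noise[i] = 0.6 * noise[i-1] + rng.normal(0.0, 0.08)
temps = true_trend * (years - years[0]) + noise

res = linregress(years, temps)
ci = 1.96 * res.stderr                   # naive 95% interval on the trend
print(f"trend = {res.slope:.4f} +/- {ci:.4f} K/yr")
# This interval reflects only the short-term variability actually sampled
# within the window: it is too narrow when residuals are autocorrelated,
# and it says nothing about multidecadal variability (e.g. the AMO)
# operating outside the window.
```

The OLS interval thus quantifies the well-sampled short-term noise but cannot represent low-frequency variability the 39-year window never sees, which is the distinction drawn above.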

  65. franktoo,
    I agree with you, one obtains similar sensitivity estimates using various longish term periods. More evidence that internal variability is not a major issue is given by Figure 5b of LC18, reproduced below.

    • Nic: Continuing the idea that the historical record contains “multiple representations of higher frequency reality”, one can show that the UAH record is inconsistent with the multi-model mean using the standard formula for the significance of a difference between two means (as Steve did at CA). So I looked at the details of the confidence interval of Dessler 2018 and found some surprises. The full range for lambda in the 100 runs was -1.88 to -1.01 W/m2/K (ECS 3.9 to 2.1 K), but the 5%-95% range was only -1.63 to -1.17 W/m2/K with a median of -1.43 W/m2/K (ECS 2.4-3.3; 2.7 K). These values were obtained from the temperature difference between the average of the first and last decades, periods too short to fully sample internal variability on the ENSO time scale. When the first two and last two decades were averaged, the spread was cut by 45% from 0.87 to 0.48 W/m2/K. If we cut the above 5%-95% by 45%, we get lambda = -1.43 (-1.56 to -1.30) W/m2/K or ECS 2.5-3.0 K. The spread doesn’t narrow averaging over more than the last two decades.

      When Dessler et al calculate a misleadingly large spread for ECS of 2.1-3.9 K, a large part of that range is due to: 1) Higher-frequency internal variability that can be eliminated by averaging over two rather than one decade. 2) Reporting a non-standard interval rather than a more traditional confidence interval like 5%-95%: probably near ECS 2.5-3.0. (On average, the full range for 100 normally distributed data points is 5 standard deviations (with a standard deviation of 0.6 standard deviations), equivalent to a 0.7% to 99.3% confidence interval.)
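The parenthetical claim about the full range of 100 normal draws is easy to verify by Monte Carlo:

```python
import numpy as np

# Monte Carlo check: for samples of 100 normal draws, the mean (max - min)
# range is about 5 standard deviations (with a standard deviation of the
# range of about 0.6), while the sample 5%-95% span is only about 3.2 sd.
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=(20000, 100))
ranges = np.ptp(samples, axis=1)                     # max - min per sample
q5, q95 = np.percentile(samples, [5, 95], axis=1)
print(f"full range: mean {ranges.mean():.2f} sd, sd {ranges.std():.2f} sd")
print(f"5-95% span: mean {(q95 - q5).mean():.2f} sd")
```

So quoting the full ensemble range rather than a 5-95% interval inflates the apparent spread by roughly half, which is the point being made about the 2.1-3.9 K figure.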

      Dessler et al are correct when they say that internal variability can perturb estimates of ECS, but averaging over two initial and two final decades provides a pragmatic solution to this problem. Problems can arise from high frequency internal variability such as ENSO, suggesting that the internal consistency in Otto (2013) may have been fortuitous.
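The benefit of lengthening the endpoint averages can also be illustrated with a Monte Carlo sketch of the energy-budget estimator itself. The feedback, forcing ramp, and AR(1) variability parameters below are illustrative, not MPI-ESM1.1's:

```python
import numpy as np

def lambda_spread(avg_years, n_runs=1000, seed=0):
    """5-95% spread of energy-budget feedback estimates, (dF - dN)/dT,
    across an ensemble whose members differ only in AR(1) internal
    variability. All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    n_years = 156                              # 1850-2005
    lam_true, dF_total = 1.4, 2.5              # W/m2/K, W/m2
    F = np.linspace(0.0, dF_total, n_years)    # smooth forcing ramp
    ests = np.empty(n_runs)
    for k in range(n_runs):
        T_int = np.zeros(n_years)              # AR(1) internal variability
        for i in range(1, n_years):
            T_int[i] = 0.7 * T_int[i-1] + rng.normal(0.0, 0.1)
        T = F / lam_true + T_int               # forced response + variability
        N = F - lam_true * T + rng.normal(0.0, 0.3, n_years)
        s, e = slice(0, avg_years), slice(n_years - avg_years, n_years)
        ests[k] = ((F[e].mean() - F[s].mean()) - (N[e].mean() - N[s].mean())) \
                  / (T[e].mean() - T[s].mean())
    lo, hi = np.percentile(ests, [5, 95])
    return hi - lo

spread10, spread20 = lambda_spread(10), lambda_spread(20)
print(f"5-95% spread of lambda: 10-yr means {spread10:.2f}, "
      f"20-yr means {spread20:.2f} W/m2/K")
```

Doubling the endpoint averaging window roughly halves the variance of the endpoint means, so the spread of the feedback estimates narrows accordingly, consistent with the pragmatic solution suggested above.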

      • franktoo,
        Agreed. I made a similar point myself in this article.

        It is worth noting that the submitted manuscript only gave estimates based on 10 year periods, and only gave the wide complete ranges of lambda and ECS values. The 5-95% ranges and alternative, much narrower, ranges using 20-year averaging periods were added as a result of peer reviewer comments.

        Defending the use of only 10 year base and final averaging periods, Andrew Dessler wrote in response to the review point:

        “We calculate ECS using this approach because this is the way most ECS calculations based on the 20th-century observational record are done. Thus, our results can therefore directly provide insight into the impact of variability in the observational estimates of ECS.”

        I can’t actually think of any serious observational estimates of ECS that use 10 year averaging periods to calculate both endpoints for their analysis.

    • Nic: Thanks for the reply. Sorry I haven’t mastered the material in LC18.

      As I mentioned before, I was looking for a way to use the standard formula for the significance of the difference between two means (given their confidence intervals) to assess the significance of the inconsistency between model ECS and “observed” ECS (from EBMs). Your Figure 5b seems to put this within reach. The uncertainty in the slope dR/dT is due to both internal variability and errors in dR and dT. Dessler et al can provide you with 100 MPI-ESM1.1 model runs with dR and dT. If you average over the first two and last two decades, the confidence interval should be fairly narrow. It should be possible to show that the MPI model is inconsistent with multiple representations of reality (different time periods) with some degree of confidence (“likely”, “very likely”, etc.). The same may be true of the multi-model mean. (Of course, I won’t be surprised to find you are way ahead of my comments.)
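For what it is worth, the standard formula alluded to can be sketched as follows, treating each 5-95% CI half-width divided by 1.645 as a standard error (the numbers below are purely illustrative, not taken from either paper):

```python
import math

def z_from_cis(m1, lo1, hi1, m2, lo2, hi2, z_ci=1.645):
    """Significance of the difference between two means given their 5-95% CIs.

    Each CI half-width divided by 1.645 approximates a standard error,
    assuming symmetric, roughly normal uncertainties.
    """
    se1 = (hi1 - lo1) / (2 * z_ci)
    se2 = (hi2 - lo2) / (2 * z_ci)
    z = (m1 - m2) / math.hypot(se1, se2)
    # Two-sided p-value from the normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Purely illustrative numbers:
# model-ensemble ECS 3.0 (2.5-3.5) vs energy-budget ECS 1.7 (1.2-2.2)
z, p = z_from_cis(3.0, 2.5, 3.5, 1.7, 1.2, 2.2)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```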

      In my dreams, the problem of a single vs multiple representations of reality is addressed by sampling as many time periods as possible, just not periods in phase with the AMO. I presume you have seen the triangular trend viewer at Nick Stokes’ blog that lets one see the mean trend (or upper or lower CI) for all possible starting and ending points. It is unfortunately good for cherry-picking. First, the trends with very wide confidence intervals should be blanked out, so users don’t focus on noise. Then the highest 5% and lowest 5% of trends should be blanked out to provide a graphical representation of a 5-95% CI for the variation in trend. It might be possible to have a similar figure for ECS from EBMs, showing the median or upper/lower confidence intervals using any starting and finishing points. One might want to have the number of years to be averaged at the beginning and end as a variable. One might want to be able to optionally eliminate periods likely to have been perturbed by volcanoes (or perhaps dubious periods like WWII SSTs). If such a tool were available, I might first focus on which periods produce the narrowest CIs and then see how well their ECS agrees with the bulk of the other periods that produce usefully narrow CIs. Anyone who bothered experimenting with such a tool might become convinced that internal variability is not the explanation for why EBMs and models disagree.
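A minimal sketch of such an all-windows trend scan, on synthetic data (the series, thresholds and window lengths are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2021)
# Synthetic temperature series: a steady trend plus white noise
temps = 0.015 * (years - years[0]) + rng.normal(0, 0.12, years.size)

def trend_and_ci(x, y, z=1.645):
    """OLS slope and its 5-95% CI half-width."""
    n = y.size
    xm, ym = x.mean(), y.mean()
    sxx = ((x - xm) ** 2).sum()
    slope = ((x - xm) * (y - ym)).sum() / sxx
    resid = y - (ym + slope * (x - xm))
    se = np.sqrt((resid ** 2).sum() / (n - 2) / sxx)
    return slope, z * se

min_len, max_halfwidth = 20, 0.005  # K/yr; arbitrary illustrative threshold
results = {}
for i in range(years.size):
    for j in range(i + min_len, years.size + 1):
        slope, hw = trend_and_ci(years[i:j].astype(float), temps[i:j])
        if hw <= max_halfwidth:  # blank out noisy (wide-CI) windows
            results[(years[i], years[j - 1])] = slope

trends = np.array(list(results.values()))
lo, hi = np.quantile(trends, [0.05, 0.95])  # 5-95% spread across windows
print(f"{len(results)} windows kept; trend spread {lo:.4f} to {hi:.4f} K/yr")
```

With the wide-CI windows masked, the surviving trends cluster around the true value, which is the point of blanking out the noise before looking at the spread.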

      • franktoo,
        I think it is best to estimate (effective) climate sensitivity of AOGCMs using data from their 1pctCO2 and/or abrupt4xCO2 simulations rather than from multiple historical simulations, even where the forcings in historical simulations have been estimated from other simulations. That is because the signal-to-noise ratio is much higher in 1pctCO2 and (particularly) abrupt4xCO2 simulations, and the forcing is known much more accurately (and is not a mixture of forcings). See section S4 of the Supporting Information to LC18.
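For readers unfamiliar with it, the standard Gregory-style regression applied to abrupt4xCO2 output can be sketched on synthetic data as follows (the forcing, feedback and noise values below are assumed for illustration, not taken from any model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic abrupt4xCO2-like response: N = F + lam * T, plus noise.
F_4x = 7.4   # W/m2, assumed 4xCO2 forcing
lam = -1.3   # W/m2/K, assumed feedback parameter
T = 5.7 * (1 - np.exp(-np.arange(150) / 15.0))   # idealised warming (K)
N = F_4x + lam * T + rng.normal(0, 0.4, T.size)  # TOA imbalance (W/m2)

# Gregory regression: fit N against T; intercept estimates the forcing,
# slope estimates the feedback parameter.
slope, intercept = np.polyfit(T, N, 1)
ecs = -intercept / slope / 2.0  # halve to convert 4xCO2 warming to 2xCO2
print(f"lambda = {slope:.2f} W/m2/K, F_4x = {intercept:.2f} W/m2, ECS = {ecs:.2f} K")
```

The high signal-to-noise ratio is visible here: the forced warming spans several kelvin, so the 0.4 W/m2 noise barely perturbs the fitted slope.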

    • I have a hard time accepting that we know net outgoing radiation going back to 1873.

      • People have a hard time imagining that it is known back to 1985.


        AR4 Fig 3-23

        “To the extent that they are real, they may simply reflect natural low-frequency variability of the climate system.” AR4

      • I don’t think you have taken in what the penultimate sentence of the caption to Fig. 5b says.

      • David Wojick

        True enough, since I do not know what that sentence means. The point is that there is no way to know what the net outgoing radiation has been since 1873, so whatever that sentence means does not matter.

      • “Since “panta rhei” was pronounced by Heraclitus, hydrology and the objects it studies, such as rivers and lakes, have offered grounds to observe and understand change and flux. Change occurs on all time scales, from minute to geological, but our limited senses and life span, as well as the short time window of instrumental observations, restrict our perception to the most apparent daily to yearly variations. As a result, our typical modelling practices assume that natural changes are just a short-term “noise” superimposed on the daily and annual cycles in a scene that is static and invariant in the long run. According to this perception, only an exceptional and extraordinary forcing can produce a long-term change.” https://www.tandfonline.com/doi/abs/10.1080/02626667.2013.804626

        This is the essence of the problem you have in assuming the comprehensiveness of broad IPCC forcing estimates. What matters is how much energy accumulates in the system due to modulation of TOA power flux – and why. This can only be understood with ocean heat and satellite radiant flux data. What data there is suggests that your thesis is fundamentally flawed.

  66. “The top-of-atmosphere (TOA) Earth radiation budget (ERB) is determined from the difference between how much energy is absorbed and emitted by the planet. Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.” https://link.springer.com/article/10.1007/s10712-012-9175-1

    There are deterministic chaotic changes – regime change – in ocean and atmospheric circulation over millennia – it is the core nature of a dynamically complex system. Understanding the intrinsic nature of the system is key to playing in the right ballpark. These discussions of climate sensitivity show little hint of understanding that chaos – as in deterministic chaos – is more than a synonym for random.

    Nic’s figure above relies on the assumption that IPCC ‘forcings’ are a complete accounting of change in the system. It is based on an assumption that natural variation is white noise and thus zeroes out, so that no accounting of the large variation in TOA radiant flux from changes in ocean and atmospheric circulation is required. Natural variation is not white noise – it has Hurst effects over millennia.

    Modern satellite records enable analysis of the causes of TOA flux and ocean heat changes. Ocean heat changes rapidly with large annual cycle forcing – swamping small and slow changes in anomalies. Much of the change in anomalies reflects changes in cloud anti-correlated with sea surface temperature.

    It results in changes to the energy budget of the planet that can be seen in a simple running sum of energy in less energy out.
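The running sum described here is just a cumulative sum of net flux converted to joules; a sketch with invented monthly numbers (the imbalance and variability values are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic monthly TOA fluxes (W/m2): energy in minus energy out,
# a small mean imbalance plus large natural variability (illustrative only).
imbalance = 0.7 + rng.normal(0, 1.5, 240)  # 20 years of months

seconds_per_month = 365.25 * 24 * 3600 / 12
earth_area = 5.1e14  # m2

# Running sum of energy in less energy out -> accumulated energy (joules)
accumulated = np.cumsum(imbalance) * seconds_per_month * earth_area
print(f"energy accumulated over 20 years: {accumulated[-1]:.2e} J")
```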

    The parsimonious hypothesis is that changes in the modern satellite record reflect much longer term mechanisms of variability in the system.
    Nic’s characterization of these changes as ‘quasi-random’ presumes, I suppose, that these are seemingly random – but completely deterministic – regime changes. Regime changes on 20 to 30 year scales add to variability over millennia – the essence of Hurst effects.

    The US National Academy of Sciences (NAS) defined abrupt climate change as a new climate paradigm as long ago as 2002. A paradigm in the scientific sense is a theory that explains observations. A new science paradigm is one that better explains data – in this case climate data – than the old theory. The new theory says that climate change occurs as discrete jumps in the system. Climate is more like a kaleidoscope – shake it up and a new pattern emerges – than a control knob with a linear gain. This alternative paradigm of climate science explains rather than ignores intrinsic variability seen in climate series.

    The new paradigm suggests that the system is pushed by greenhouse gas changes and warming – as well as solar intensity and Earth orbital variability – past a threshold at which stage the components start to interact chaotically in multiple and changing negative and positive feedbacks – as tremendous energies cascade through powerful subsystems. Some of these changes have a regularity within broad limits and the planet responds with a broad regularity in changes of ice, cloud, Atlantic thermohaline circulation and ocean and atmospheric circulation.

    • Robert wrote: “Natural variation is not white noise – it has Hurst effects over millennia.”

      Do AOGCMs produce output with appropriate unforced variability? If not, then there is no reason to prefer AOGCMs over EBMs.

      We have roughly 100 centuries of Holocene climate variability to compare with the forced changes of the past century. Those hundred centuries suggest that the most recent century might have been a period of modest unforced cooling or warming on top of larger forced warming from GHGs. If so, our estimates of ECS from EBMs will be flawed. However, AOGCMs, if I understand correctly, suggest that unforced variability is small. See my other comment showing that Dessler et al exaggerated the spread of ECS from the MPI model. 2.5-3.0 K appears to be a more realistic range.

  67. Models do indeed have Hurst effects due to the non-linear math at their core.

    e.g. https://www.nature.com/articles/srep09068
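One simple way to test for Hurst-like persistence is the aggregated-variance method; a sketch with synthetic series (an AR(1) process stands in for genuinely long-memory noise at these block sizes – true long memory would need fractional Gaussian noise):

```python
import numpy as np

rng = np.random.default_rng(4)

def hurst_aggvar(x, block_sizes=(1, 2, 4, 8, 16, 32, 64)):
    """Aggregated-variance Hurst estimate: var of block means ~ m**(2H - 2)."""
    logs_m, logs_v = [], []
    for m in block_sizes:
        nblocks = x.size // m
        means = x[: nblocks * m].reshape(nblocks, m).mean(axis=1)
        logs_m.append(np.log(m))
        logs_v.append(np.log(means.var()))
    slope = np.polyfit(logs_m, logs_v, 1)[0]
    return 1 + slope / 2

white = rng.standard_normal(100_000)

# Persistent series: AR(1) with strong autocorrelation
ar1 = np.empty(100_000)
ar1[0] = 0.0
eps = rng.standard_normal(100_000)
for t in range(1, ar1.size):
    ar1[t] = 0.9 * ar1[t - 1] + eps[t]

print(f"H(white noise) ~ {hurst_aggvar(white):.2f}")  # near 0.5
print(f"H(persistent)  ~ {hurst_aggvar(ar1):.2f}")    # well above 0.5
```

For white noise the variance of block means falls as 1/m (slope -1, H = 0.5); persistence makes the decay shallower, pushing the estimate above 0.5.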

    An unstable solution first happened to me in the 1980s – my thesis supervisor knew what to do. Models can evolve multiple exponentially divergent solutions from very small initial differences. That was precisely the puzzle Edward Lorenz had with his convection model in the 1960s.

    Nor is intrinsic variability modest – as revealed by satellite data. But the problem in a chaotic climate is not one of quantifying climate sensitivity in a smoothly evolving climate but of predicting the onset of abrupt climate shifts and their implications for climate and society. The problem of abrupt climate change on multi-decadal scales is of the most immediate significance.
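Lorenz’s convection model makes the point concretely: two trajectories started an infinitesimal distance apart diverge until their separation is as large as the attractor itself. A minimal sketch:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One RK4 step of the Lorenz-63 convection model."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])  # tiny initial difference

for step in range(3000):  # integrate to t = 30
    a, b = lorenz_step(a), lorenz_step(b)

sep = np.linalg.norm(a - b)
print(f"separation after t=30: {sep:.3f}")  # grown by many orders of magnitude
```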

    • Robert replied: “Models can evolve multiple exponentially divergent solutions from very small initial differences.”

      Absolutely. However, the magnitude of those [chaotic] divergences from the long-term mean is limited by negative feedback. If Dessler et al had merely averaged the first two and last two decades before calculating warming, they would have obtained a 5%-95% ci for ECS of about 2.5-3.0 K/doubling. Intrinsic variability in the MPI model is low once the high frequency component is suppressed by averaging.

      Were major climate shifts during the Holocene (LIA, MWP, RWP, etc.) mostly naturally-forced or mostly unforced? If unforced, 20th-century warming could have been significantly amplified or suppressed by low-frequency unforced variability. Lewis and Curry’s ECS could be too high or too low. However, their ECS is certainly more reliable than the ECS from climate models that don’t produce the large amounts of unforced variability we (both?) fear may exist in our climate system. Model PI control runs show little unforced variability on decadal and longer time scales.

      • franktoo, Be careful here with the very fuzzy distinction between variability and forced response. The issue is the shape of the attractor and how attractive it is. An area of deep ignorance. Dessler admits there are no good methods for disentangling “internal variability” and “forced response.”

        The other issue is that there are a lot of bifurcation points where multiple long-term outcomes are possible.

      • In recent decades net anthropogenic forcing has provided enough energy for half of it to account for the total rise in ocean heat content and the other half to be offset by the warming response. There really isn’t room for anything else in the energy budget.
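The arithmetic behind this kind of energy-budget claim is easy to check with assumed round numbers (none of the figures below are from this thread):

```python
# Back-of-envelope check with assumed round numbers: if half of an
# assumed ~1.4 W/m2 net forcing increase went into the ocean over
# 30 years, how much heat is that?

earth_area = 5.1e14                # m2
seconds = 30 * 365.25 * 24 * 3600  # 30 years
imbalance = 0.7                    # W/m2, half the assumed net forcing

joules = imbalance * earth_area * seconds
print(f"~{joules:.1e} J")  # a few 1e23 J, the same order as estimated OHC rises
```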

      • Yet change it does – large changes in both IR and SW – in response to both the changes in the annual and longer flow of energy through the Earth system and internal responses in ocean and atmospheric circulation.
        https://watertechbyrie.com/2018/05/05/a-maximum-entropy-climate-an-idea-under-construction/

        Without data and its detailed analysis everything descends to tribal narrative – as both Jimmy and poor wee willie show.

      • RIE, there are papers that tell you to expect to see the feedback in the shortwave rather than the longwave (see positive albedo and cloud feedbacks), so you are just looking at the positive feedback there.

      • There are oppositely signed cloud radiative effects (CRE) in IR and SW. With low marine stratocumulus, the net change is SW dominated. CRE changes are large and largely natural. I have of course looked at both sides – and this should not bear repeating. It’s Jimmy’s goldfish bowl again – going around – and forgetting any context there ever was until everything is meaningless, repetitive narrative.

        “Marine stratocumulus cloud decks forming over dark, subtropical oceans are regarded as the reflectors of the atmosphere.1 The decks of low clouds 1000s of km in scale reflect back to space a significant portion of the direct solar radiation and therefore dramatically increase the local albedo of areas otherwise characterized by dark oceans below.2,3 This cloud system has been shown to have two stable states: open and closed cells. Closed cell cloud systems have high cloud fraction and are usually shallower, while open cells have low cloud fraction and form thicker clouds mostly over the convective cell walls and therefore have a smaller domain average albedo.4–6 Closed cells tend to be associated with the eastern part of the subtropical oceans, forming over cold water (upwelling areas) and within a low, stable atmospheric marine boundary layer (MBL), while open cells tend to form over warmer water with a deeper MBL.” https://aip.scitation.org/doi/pdf/10.1063/1.4973593

        And please – SST changes from AGW do not begin to compare to changes in SST seen with intrinsic variability.

      • You seem interested in the positive albedo feedbacks from ice, snow and clouds and should look into those more.
        As for SST, I was talking about OHC. The change in OHC in the last few decades can be fully accounted for with half the net anthropogenic forcing over that same period. What I am saying is that there is no need for another explanation for the OHC increase. It’s like how the atmospheric CO2 increase is only half the emissions: when that happens we are not looking for another CO2 source, only sinks. That’s how to view the OHC increase relative to anthropogenic forcing. More than enough by itself.

    • You seem remarkably cognitively obdurate – you should try to do something about that. I graphed ocean heat – and the main source is CRE, notably in the past few years. The global driver for this coupled ocean/atmosphere mechanism is Rayleigh–Bénard convection – the results of which are shown in a photograph taken over the Pacific that can be seen just above. Did you not understand the bit about bi-stable cloud cells and SST? Or did you simply decide that if you cannot deny it you must divert attention for the sake of the tribal narrative? Of sociological interest if one were bent that way, but hardly conducive to explorations of natural philosophy.

      And I am interested in almost everything but linear climate sensitivity and your endless pedantic repetitions little Jimmy.

      • Oh – and perhaps I should emphasize that it isn’t a feedback. It is a mechanism – Rayleigh–Bénard convection – in a fluid – the atmosphere – heated from below.

  68. Nice work Nic, thanks for sharing.
