Impact of recent forcing and ocean heat uptake data on estimates of climate sensitivity

by Nic Lewis

We have now updated the LC15 paper with a new paper published in the Journal of Climate, “The impact of recent forcing and ocean heat uptake data on estimates of climate sensitivity”. The paper also addresses critiques of LC15.

There has been considerable scientific investigation of the magnitude of the warming of Earth’s climate by changes in atmospheric carbon dioxide (CO2) concentration. Two standard metrics summarize the sensitivity of global surface temperature to an externally imposed radiative forcing. Equilibrium climate sensitivity (ECS) represents the equilibrium change in surface temperature resulting from a doubling of atmospheric CO2 concentration. Transient climate response (TCR), a shorter-term measure, represents the warming at the time CO2 concentration has doubled when it is increased by 1% per year, which takes approximately 70 years.
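The roughly 70-year figure follows directly from compound growth: at 1% per year, concentration doubles after ln 2 / ln 1.01 years. A quick illustrative check, not taken from the paper:

```python
import math

# Years for CO2 concentration to double at 1% compound growth per year
doubling_years = math.log(2) / math.log(1.01)
print(round(doubling_years, 2))  # 69.66
```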

For over thirty years, climate scientists have presented a likely range for ECS that has hardly changed. The ECS range 1.5−4.5 K in 1979 (Charney 1979) is unchanged in the 2013 Fifth Assessment Scientific Report (AR5) from the Intergovernmental Panel on Climate Change (IPCC). AR5 did not provide a best estimate value for ECS, stating (Summary for Policymakers D.2): “No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence”.

At the heart of the difficulty surrounding the values of ECS and TCR is the substantial difference between values derived from climate models versus values derived from changes over the historical instrumental data record using energy budget models. The median ECS given in AR5 for current generation (CMIP5) atmosphere-ocean global climate models (AOGCMs) was 3.2 K, versus 2.0 K for the median values from historical-period energy budget based studies cited by AR5.

Subsequently Lewis and Curry (2015; hereafter LC15) [i] derived, using observationally-based energy budget methodology, a median ECS estimate of 1.6 K from AR5’s global forcing and heat content estimate time series, which made the discrepancy with ECS values derived from AOGCMs even larger. LC15 also derived a median TCR value of 1.3 K, well below the 1.8 K median TCR for CMIP5 models in AR5.

The LC15 analysis used a global energy budget model that relates ECS and TCR to changes (Δ) in global mean surface temperature [T], effective radiative forcing (ERF) [F] and the planetary radiative imbalance [N] (estimated from its counterpart, the rate of climate system heat uptake) [ii] between a base and a final period. The resulting estimates were considerably less dependent on comprehensive global climate models (GCMs) and allowed more thoroughly for forcing uncertainties than many others.[iii]
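In this framework the point estimates take a simple ratio form: ECS = F2×CO2 · ΔT/(ΔF − ΔN) and TCR = F2×CO2 · ΔT/ΔF. A minimal sketch of these estimators, using illustrative round-number inputs rather than the paper’s actual medians:

```python
def energy_budget_ecs(dT, dF, dN, F2xCO2):
    """Equilibrium climate sensitivity: forcing change net of heat uptake."""
    return F2xCO2 * dT / (dF - dN)

def energy_budget_tcr(dT, dF, F2xCO2):
    """Transient climate response: heat uptake term omitted."""
    return F2xCO2 * dT / dF

# Illustrative values (assumed for the sketch, not LC18's inputs):
# dT = warming (K), dF = forcing change (W/m2), dN = heat uptake change (W/m2)
dT, dF, dN, F2x = 0.8, 2.5, 0.5, 3.8
print(round(energy_budget_ecs(dT, dF, dN, F2x), 2))  # 1.52
print(round(energy_budget_tcr(dT, dF, F2x), 2))      # 1.22
```

Note that ECS always exceeds TCR because ongoing ocean heat uptake (ΔN > 0) shrinks the denominator of the ECS estimator.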

Considerable effort has been expended recently in attempts to reconcile observationally-based ECS values with values determined using climate models. Most of these efforts have focused on arguments that the methodologies used in the energy budget model determinations result in downwards-biased ECS and/or TCR estimates (e.g., Marvel et al. 2016; Richardson et al. 2016; Armour 2017).

We have now updated the LC15 paper with a new paper published in the Journal of Climate, “The impact of recent forcing and ocean heat uptake data on estimates of climate sensitivity”.[iv] The paper (hereafter, LC18) addresses a range of concerns that have been raised about climate sensitivity estimates derived using energy balance models. We provide estimates of ECS and TCR based on a globally-complete infilled version of the HadCRUT4 surface temperature dataset as well as estimates based on HadCRUT4 itself.[v] Table 1 gives the ECS and TCR estimates for the four base period – final period combinations used.

Table 1 (based on Table 3 in LC18) Best estimates (medians) and uncertainty ranges for ECS and TCR using the base and final periods indicated. Values in roman type compute the temperature change involved (ΔT) using the HadCRUT4v5 dataset; values in italics compute it using the infilled, globally-complete Had4_krig_v2 (Cowtan & Way) dataset. The preferred estimates are shown in bold. Ranges are stated to the nearest 0.05 K. Also shown are the comparable results (using the HadCRUT4v2 dataset) from LC15 for the first two period combinations given in that paper. The values from the IPCC AR5 are provided for reference.

The new LC18 ECS and TCR estimates are very similar for all the period combinations used. That implies that the ‘hiatus’ – the period of slow warming from the early 2000s until a few years ago – had little effect on estimation. The preferred pairing is of the 1869–1882 and 2007–2016 periods, which provides the largest change in forcing and hence the narrowest uncertainty ranges, notwithstanding that both these periods are the shortest ones used. Using 1869–1882 as the base period avoids both any significant volcanism and the period of particularly sparse temperature data spanning most of the 1860s. Estimates are almost identical when using the longer 1850–1882 base period and excluding years affected by volcanism or with very sparse temperature data.

The new LC18 ECS and TCR HadCRUT4-based best estimates, respectively 1.50°C and 1.20°C, are approximately 10% lower than those in LC15. These reductions stem primarily from a significant upwards revision in estimated methane forcing following more accurate determination of the forcing-concentration relationships for the principal well-mixed greenhouse gases (WMGG)[vi] and revisions to post-1990 AR5 aerosol and ozone forcing estimates that reflect updated emission data,[vii] partially offset by a 2.5% upwards revision in the forcing from a doubling of preindustrial carbon dioxide (CO2) concentration, F2×CO2.[viii]

The 5% uncertainty bound of the AR5 2011 aerosol forcing estimate was changed from −1.9 Wm−2 to −1.7 Wm−2 to reflect substantial recent evidence against aerosol forcing being extremely strong.[ix] Doing so had virtually no effect on the median ECS and TCR estimates, and accounted for only a small fraction of the major reductions in their 83% and 95% upper uncertainty bounds from those in LC15. Most of that reduction is due to the revised forcing estimates and to average greenhouse gas concentrations over 2007–2016 being higher than over 1995–2011.

Figure 1 shows a comparison of the revised, extended forcing estimates with their original AR5 values. The significant increase in ‘Other WMGG’ forcing reflects the revision of the methane forcing component.[x]

There is some recent evidence that AR5 volcanic forcing estimates, which in LC18 are extended to 2016 using the AR5 calculation basis, may be biased low due to omission of volcanic aerosol in the lower stratosphere.[xi] However, once an adjustment is made for the background level of volcanic aerosol there appears to be virtually no effect on the changes in volcanic forcing between the base and final periods used in LC18.[xii]

Figure 1 (based on Figure 2 of LC18) Anthropogenic forcings from 1750 to 2016. In some cases the Original AR5 1750–2011 time-series overlay the Revised 1750–2016 time-series prior to 2012. Unrevised anthropogenic forcing components have been combined into a single ‘Other Anthropogenic’ time-series. Solar and Volcanic forcings are not shown; they have not been revised and their post 2011 changes are very small.

The new best estimates using globally-complete surface temperature data, of 1.66°C for ECS and 1.33°C for TCR, are almost the same as the LC15 ECS and TCR estimates based on non-infilled temperature data. Both the LC15 and LC18 ‘likely’ (66%+ probability) ranges lie very much towards the bottom ends of the corresponding IPCC AR5 ranges.

Figure 2 shows probability density functions for each of the ECS and TCR estimates, with the AR5 ‘likely’ ranges (shaded lime green) for comparison. The PDFs are skewed due principally to the dominant uncertainty in forcing, affecting the denominator of the fractions used to estimate ECS and TCR.
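The skew described above is easy to reproduce: dividing a temperature change by an uncertain forcing change yields a right-skewed ratio even when both inputs are symmetric. A toy Monte Carlo sketch, with uncertainties assumed purely for illustration (not LC18's actual distributions):

```python
import random
import statistics

random.seed(0)
F2x = 3.8  # assumed forcing for doubled CO2, W/m2 (illustrative)
samples = []
for _ in range(100_000):
    dT = random.gauss(0.8, 0.08)          # symmetric uncertainty in warming
    dF_minus_dN = random.gauss(2.0, 0.5)  # dominant, symmetric forcing uncertainty
    if dF_minus_dN > 0.2:                 # discard unphysical near-zero denominators
        samples.append(F2x * dT / dF_minus_dN)

# Uncertainty in the denominator skews the ratio to the right,
# so the mean sits above the median, as in Figure 2's long upper tails.
print(statistics.mean(samples) > statistics.median(samples))  # True
```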

Figure 2 (based on Figure 4 of LC18) Estimated probability density functions for ECS and TCR using each main results period combination. Original GMST refers to use of the HadCRUT4v5 record; Infilled GMST refers to use of the Had4_krig_v2 record. Box plots show probability percentiles, accounting for probability beyond the range plotted: 5–95 (bars at line ends), 17–83 (box-ends) and 50 (bar in box: median). Lime green shading shows the AR5 ‘likely’ (17–83% or better) ranges.

LC18 also derived, on comparable bases, ECS and TCR values for all current generation (CMIP5) GCMs for which the requisite data were available.[xiii] A majority of this ensemble of 31 CMIP5 models had ECS and TCR values that exceeded the 2.7°C and 1.9°C 95% uncertainty bounds that we derived for those parameters using globally-complete surface temperature data.

The foregoing ECS estimates reflect climate feedbacks over the historical period, assumed time-invariant. Two recent studies asserted that ECS estimates for CMIP5 models derived from forcing data comparable to that available for use in historical period (post-1850) observationally-based energy budget studies, using a constant feedbacks assumption, were biased low. They concluded that CMIP5 model ECS estimates were on average some 30% higher when derived from their response to an increase in CO2 concentration in a way that allows, insofar as practicable, for time-varying feedbacks.[xiv] We show that their calculations are biased and that, when calculated appropriately, the difference is under 10%.[xv] Allowing for such possible time-varying climate feedbacks increases the median ECS estimate to 1.76°C (5−95%: 1.2−3.1°C), using globally-complete temperature data. A majority of our ensemble of CMIP5 models have ECS values, estimated in the way designed to allow for time-varying feedbacks, that exceed 3.1°C.

It has been suggested in various studies that effects of non-unit forcing efficacy, temperature estimation issues and variability in sea-surface temperature change patterns likely lead to historical period energy budget estimates being biased low.[xvi] We examined all these issues in LC18 and found that only very minor bias was to be expected when using globally-complete temperature data.[xvii]

Over half of the 31 CMIP5 models have ECS values estimated using a comparable change in forcing to that over the historical period[xviii] of 2.9 K or higher, exceeding by over 7% our 2.7 K observationally-based 95% uncertainty bound using infilled temperature data. Moreover, a majority of these models have a TCR above our corresponding 1.9 K 95% bound.

The implications of our results are that high estimates of ECS and TCR derived from a majority of CMIP5 climate models are inconsistent (at a 95% confidence level) with observed warming during the historical period. Moreover, our median ECS and TCR estimates using infilled temperature data imply multicentennial or multidecadal future warming under increasing forcing of only 55−70% of the mean warming simulated by CMIP5 models.

I hope to discuss in more depth in a subsequent article some of the material in LC18 and its Supporting Information that has been dealt with only very briefly here.

Nic Lewis, April 2018


[i] Lewis, N., and J. A. Curry, 2015: The implications for climate sensitivity of AR5 forcing and heat uptake estimates. Climate Dynamics, 45(3-4), 1009-1023. Note: the paper was initially published online in 2014. An article about the paper and its results was posted here.

[ii] Total heat uptake by the Earth’s climate system, 90%+ in the ocean, necessarily equals the Earth’s top-of-atmosphere radiative imbalance, neglecting the tiny and near-constant geothermal heat flux (which has a negligible effect on ΔN).

[iii] Although none of the forcing estimates used are fully independent of GCMs, they do not appear to be materially affected by the ECS and TCR values of the GCMs involved. The early industrial heat uptake estimates used are GCM-derived and dependent on the GCM’s sensitivity, but they are small and a correction factor is applied to allow for the sensitivity of the GCM being higher than the energy budget derived sensitivity estimate.

[iv] Lewis, N., and J. Curry, 2018: The impact of recent forcing and ocean heat uptake data on estimates of climate sensitivity. J. Clim. JCLI-D-17-0667. A copy of the final submitted manuscript, reformatted for easier reading, is available at my personal webpages, here. The Supporting Information is available here.

[v] Cowtan, K., and R. G. Way, 2014: Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends. Quart. J. Roy. Meteor. Soc., 140(683), 1935-1944 (update at http://www.webcitation.org/6t09bN8vM).

[vi] Etminan, M., G. Myhre, E. J. Highwood, and K. P. Shine, 2016: Radiative forcing of carbon dioxide, methane, and nitrous oxide: A significant revision of the methane radiative forcing. Geophys. Res. Lett. 43(24) doi:10.1002/2016GL071930.

[vii] Myhre, G., and Coauthors, 2017: Multi-model simulations of aerosol and ozone radiative forcing due to anthropogenic emission changes during the period 1990–2015. Atmos. Chemistry and Phys., 17(4), 2709-2720.

[viii] The almost identical proportional reduction in HadCRUT4-based ECS and TCR estimates between LC15 and the new study reflects the fact that heat uptake and forcing changes increased in similar proportions relative to the temperature change.

[ix] See extensive discussion in section 3a of LC18. Note that the (revised) 2011 AR5 aerosol forcing uncertainty range is – as for all the AR5 forcing uncertainty ranges – merely used, after dividing by its median, to estimate fractional uncertainty in the ERF best estimate time series, as revised.

[x] The reason why recent CO2 forcing is almost unchanged despite F2×CO2 being 2.5% higher is that the revised greenhouse gas forcing formulae embody a slightly faster than logarithmic increase in CO2 forcing with concentration.

[xi] Andersson, S. M., et al., 2015: Significant radiative impact of volcanic aerosol in the lowermost stratosphere. Nature communications, 6, 8692.

[xii] LC18 Supporting Information, S1

[xiii] We excluded FGOALS-g2 as its 1pctCO2 simulation results are abnormal and the p2 variants of GISS-E2-H and GISS-E2-R as their model physics is intermediate between the main (p1) and p3 physics versions. That left 31 CMIP5 models. See Table 2 in the Supporting Information for their calculated ECS and TCR values. Note that the reference to ECS calculated on a comparable basis (to our observational energy budget ECS estimates) is to the ECShist values in Table 2.

[xiv] Armour, K. C., 2017: Energy budget constraints on climate sensitivity in light of inconstant climate feedbacks. Nature Climate Change, 7, 331-335.
Proistosescu, C., and P. J. Huybers, 2017: Slow climate mode reconciles historical and model-based estimates of climate sensitivity. Science Advances, 3(7), e1602821.

[xv] Section 7f and Supporting Information S5.

[xvi] Marvel, K., G. A. Schmidt, R. L. Miller and L. S. Nazarenko, 2016: Implications for climate sensitivity from the response to individual forcings. Nature Climate Change, 6(4), 386-389.
Richardson, M., K. Cowtan, E. Hawkins, and M. B. Stolpe, 2016: Reconciled climate response estimates from climate models and the energy budget of Earth. Nature Climate Change, 6(10), 931-935.
Gregory, J. M., and T. Andrews, 2016: Variation in climate sensitivity and feedback parameters during the historical period. Geophys. Res. Lett., 43: 3911–3920.

[xvii] See sections 7a, 7c and 7e of LC18.

[xviii] Where types of ECS estimate are distinguished in LC18, this type is termed ECShist. Since forcing in CMIP5 models’ historical simulations is model-dependent and unknown, their ECShist is estimated (in LC18 and other studies) using data from their simulations driven by known changes in CO2, in such a way as to mimic the ECS estimates that would be derivable from their responses to representative historical forcing.

283 responses to “Impact of recent forcing and ocean heat uptake data on estimates of climate sensitivity”

  1. Pingback: New Data Imply Slower Global Warming | The Global Warming Policy Forum (GWPF)

  2. Thanks for this update, Nic.

    I focus on your best estimate of ECS = ~1.65C. This is pretty consistent with your previous papers and posts. Importantly, it seems, for any given increase in CO2 concentration, warming will be around half that predicted by the GCMs.

    • Peter,
For an elevator conversation, you are better taking the ratio of TCRs as an estimate of error in projections of future warming – at least over the next century or two. Median TCR of the GCMs ca 1.8, best TCR estimate from observations (Lewis and Curry) ca 1.2. All else being equal (which it really isn’t) the GCMs are overprojecting temperature rise per unit forcing on average by a factor of about 1.5, or conversely actual warming should be about 2/3rds of median GCM projections if we ignore epistemic challenges.

      • kribaez,

        Yes, good point and thank you. I agree.

      • stevefitzpatrick

        Paul K,
        Good point. It is prudent to keep in mind that the lower the ECS, the smaller the ratio between TCR and ECS. Eg, Nic and Judith generate a TCR/ECS best estimate of ~1.2/1.66, while GCMs average somewhere near ~1.8/3.2. The unfortunate result is that the measured transient response does not constrain the equilibrium response as well as we might hope, because modelers can cling to the high end of plausible aerosol offsets to justify continued high calculated sensitivities…. in spite of clear contrary empirical evidence. As I think I said to you years ago, I find climate modeling as intellectually corrupt an activity as I have ever encountered in science. It would be funny were it not so costly and damaging to science.

  3. Pingback: New Data Imply Slower Global Warming

  4. Curry and Lewis.
    For aerosol forcing there is an error in your text.
    It isn’t -1.9 W/m2 to -1.7W/m2 but -0.9W/m2 to -0.7W/m2.

    • meteor31
      No, the text is correct. The -1.9 W/m2 to -1.7W/m2 values are the 5% points of the original and revised uncertainty range for 2011 aerosol forcing (relative to 1750). You are thinking of the AR5 median (50% probability point) aerosol forcing value of -0.9 W/m2, which is unaffected by this change.

      • OK Nic.
        But in your fig 1 it seems that the mean aerosol forcing decreases (in absolute value) from -0.9 to something like -0.7 W/m2.
        No?
        Moreover many articles from you and others mention the fact that indirect aerosol forcing should be lower in the real world than in AR5.

      • The median aerosol forcing weakens from -0.9 W/m2 in the early 1990s to -0.8 W/m2 by 2016. But that is due to incorporating the revisions to post-1990 ozone and aerosol forcing changes per Myhre et al 2017, which reflect updated emissions data, not to a change in understanding about what forcing aerosol emissions produce.

  5. Wonder what the Lewis-Curry opinion is of the mood in the literature that there is no ECS because there is no proportionality between log(CO2) and temperature and that therefore we should abandon ECS and move on to TCRE?

    Reto Knutti calls for trashing ECS and moving on to TCRE
    Knutti, R. (2017). Beyond equilibrium climate sensitivity.
    Nature Geoscience , 10.10 (2017): 727

    Here is what the editor of Nature wrote on this topic:
    “To date, efforts to describe and predict the climate response to human CO2 emissions have focused on climate sensitivity: the equilibrium temperature change associated with a doubling of CO2. But recent research has suggested that this ‘Charney’ sensitivity, so named after the meteorologist Jule Charney who first adopted this approach in 1979, may be an incomplete representation of the full Earth system response, as it ignores changes in the carbon cycle, aerosols, land use and land cover. Matthews et al. propose a new measure, the carbon-climate response, or CCR. Using a combination of a simplified climate model, a range of simulations from a recent model intercomparison, and historical constraints, they find that independent of the timing of emissions or the atmospheric concentration of CO2 emitting a trillion tonnes of carbon will cause 1.0 – 2.1 C of global warming, a CCR value that is consistent with model predictions for the twenty-
    first century.”

    Here are the TCRE citations
    Matthews, H. Damon, et al. “The proportionality of global warming to cumulative carbon emissions.” Nature 459.7248 (2009): 829.
    Allen, Myles R., et al. “Warming caused by cumulative carbon emissions towards the trillionth tonne.” Nature 458.7242 (2009): 1163.

    I have also written on this topic. My comments are available at the SSRN site
    https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=2220942

    Sincerely
    Jamal Munshi

    • ” they find that independent of the timing of emissions or the atmospheric concentration of CO2 emitting a trillion tonnes of carbon will cause 1.0 – 2.1 C of global warming, a CCR value that is consistent with model predictions for the twenty-first century.”

      Are they suggesting that CO2 absorption does not follow a log dependency? Not consistent with standard physics.

    • Looking at other measures in addition to ECS and TCS seems fine. This idea of “trashing it” and never discussing it again just when the observational based ECS and TCS are showing that the GCMs are way off base seems a bit desperate. Reminds me a bit of changing” global warming” to “climate change” and jumping from heat buried in the ocean to “ocean acidification”. I guess it is much harder to pin down a moving target though.

    • There is interesting work showing ECS and TCR are constructs with no physical meaning: Beenstock, Michael and Yaniv Reingenertz [2009] Polynomial Cointegration Tests of the Anthropogenic Theory of Global Warming.

    • Jamal Munshi
      Sorry for the delay in responding. I think all of ECS, TCR and TCRE/CCR have their uses. Note that the assumption that the response to cumulative carbon emissions is roughly constant for hundreds of years after the first 50 or so years only holds in ESMs with relatively high sensitivity. In such models, the emergence of unrealised warming once the TCRE has been reached, as ocean heat uptake falls, is matched by the reduction in atmospheric CO2 levels as ocean and land CO2 uptake continues but at decreasing levels. With TCR and ECS values similar to those in Lewis & Curry 2018, the temperature declines after peaking since the unrealised warming (the difference between ECS and TCR) is much smaller, and positive carbon feedbacks are also smaller due to lower warming.

      I think the 1.0-2.1 K/TtC TCRE range you quote is too high. I derive an observationally-constrained range of about 0.7-1.6 K/TtC (median marginally over 1 K/TtC).

    • Please provide specific cite where they made this statement “To date, efforts to describe and predict the climate response to human CO2 emissions have focused on climate sensitivity”. If so, it really emphasizes how narrowly (and incorrectly) they have dealt with climate.

  6. This needs a simple, three paragraph press release that can be understood by climate writers/reporters at major serious publications like the Wall Street Journal and New York Times. Most climate “news” comes from press releases issued by university or institution press offices.

    While this is good work and scientifically correct, publishing here is preaching to the choir.

    If you want wide distribution, include a stock photo of beautiful people skiing or lounging on a beach.

  7. The weakening of the aerosol forcing lower bound seems fairly reasonable. However, something which seems to be sneaked in is a weakening of the central estimate to -0.8W/m2 as of 2011 despite the text suggesting that the AR5 estimate of -0.9W/m2 (which is specifically intended to refer to the situation as of 2011) had been retained. It appears the -0.9W/m2 figure has been shifted to 1995, which doesn’t make any sense with regards the AR5 estimate.

    Also, the substantial positive movement of aerosol forcing since 1995 based on Myhre et al. 2017 appears to be strongly influenced by models with large aerosol indirect effects, which is at odds with the central aerosol ERF estimate and weakening of the lower bound described in the paper. It also appears to be substantially influenced by models not including nitrate aerosols, which continue to cause increasingly negative forcing.

  8. Using the known change in forcing from axial tilt, and the change in temperature for stations in the extratropics by 10° latitude band, gives a far lower CS to changes in insolation.

    https://micro6500blog.wordpress.com/2016/05/18/measuring-surface-climate-sensitivity/

  9. Steven Mosher

    Solid work as always

    Can you report the values if you use

    1859–1882

    As the base period. In your previous work you used this period.

    1. Why the change?
    2. What effect does this change have on your answers.

    My expectation is that the temperature estimates for pre 1900 figures
    are going to improve and tighten as folks recover more data from this period.

    Consequently it is important to explain why you changed base periods
    and what the effect ( structural uncertainty) that has on the final estimate.

    • Same question from you at Climate Audit answered there. Brief answer is better data and very little effect.

      • Steven Mosher

        Problem. You didn’t quantify what you meant by “better data”
        and “very little effect”. Going forward it would be important to know what exact numbers you used to determine “better data” and “little effect”.

        My sense is that the uncertainty from the shorter period ( 1869-1882)
        Is going to be larger than the overall uncertainty from the longer period
        ( 1859-1882)

    • 1. Why the change?
      Using 1869–1882 as the base period avoids both any significant volcanism and the period of particularly sparse temperature data spanning most of the 1860s.
      2. What effect does this change have on your answers?
      1859-1882 1.64°C
      The new LC18 1.50°C
      Both answers from the article above.

    • “My expectation is that the temperature estimates for pre 1900 figures
      are going to improve and tighten as folks recover more data from this period.”
      Hard to imagine how. More of the past information is now being lost every day than is being found. What is found is either discarded if it does not fit in or is being adjusted out of all recognition, then discarded and the adjusted, tightened and improved anomalies only are kept.

      • Just more vacuous nonsense.

      • Just more vacuous nonsense.
        “From February 2016 to February 2018 (the latest month available) global average temperatures dropped 0.56°C. You have to go back to 1982-84 for the next biggest two-year drop, 0.47°C—also during the global warming era. All the data in this essay come from GISTEMP Team, 2018: GISS Surface Temperature Analysis (GISTEMP). NASA Goddard Institute for Space Studies”-
        Another month, another chance for a temperature drop.
        Or not.

  10. Nick, Can you confirm two things for me? It looks as though using the in-filled HadCRUT data did increase the ECS and TCS but that the other changes to methane forcing, etc. had a bigger effect such that ECS/TCS were a bit lower than LC15. Also, is TCS also related to a doubling of CO2? Will 1% increase over 70 years double it? I am just wondering if the 1.2 C for some TCS means that 70 years from now the temperature would be 1.2 C higher, or is it still based on a doubling of CO2 such that in 70 years it will be 1.2 C higher than it was in 1880 or 1920 when CO2 was 280 ppm? If the latter, then this suggests only a 0.4 – 0.6 C increase over the next 70 years? Thanks.

    • billw1984
      I assume that by TCS you mean TCR.
      Using the in-filled HadCRUT data did increase the ECS and TCS but the other changes to methane forcing, etc. had a similar offsetting effect such that ECS/TCR median estimates were almost identical to LC15.

      1% a year compound growth over 70 years (actually 69.66 years) results in a doubling: 1.01^69.66 = 2.000.

    • (and we can assume that by Nick you mean Nic)…

  11. billw1984: what do you mean by “TCS”? And a hint: b4 asking in public it’s also possible to inform yourself, google is your friend :-)

    • Yeah. Had to leave for work. Do you have a job? :) It was a simple question that did not take long to answer. TCS is catching on. Nic used it in his response. :)

  12. Pingback: New data imply lower climate sensitivity, thus slower global warming | Watts Up With That?

  13. “…variability in sea-surface temperature change patterns likely lead to historical period energy budget estimates being biased low… .[we] found that only very minor bias was to be expected when using globally-complete temperature data.”

    Passes the smell test so I guess we must conclude government-funded climate science stinks to high heaven.

  14. Hi Nic. Dessler has a new paper arguing that surface temperature is poorly constrained by TOA radiative imbalance and that one should use temperature at 500 millibars. Just wondering if you had any thoughts on this paper. Because of lack of data prior to the use of weather balloons, you would be using a shorter time period.

    • What they argue is that outgoing energy is better correlated with 500 hPa temperature than surface temperature.

    • dpy6629
      Dessler’s approach reduces one aspect of uncertainty at the expense of introducing another, IMO far worse, one. His estimate of ECS involves multiplication by a key ratio that converts 500-hPa tropical temperature interannual feedback strength into long term forced response feedback strength. As he admits, that ratio “comes from climate model simulations; we have no way to observationally validate it, nor any theory to guide us”. So his ECS estimate is strongly dependent on the feedback behaviour of GCMs.

    • Apparently Dessler is convinced they have already disproven Nic and Judith’s paper.

      • I think what Andrew Dessler’s tweet certainly shows is that he wants to convince other people that his work demonstrates that ours is wrong. It doesn’t. It is just a wild, unjustified, claim.

      • My experience is that Dessler is awfully confident in his own results. I think he is overconfident particularly in GCM results.

      • nobodysknowledge

        In discussions at SoD Dessler has said that he has more confidence in estimates based on models than on observations.

      • nobodysknowledge

        Dessler: “Mr. Lewis suggests that one way around this is if the water vapor + lapse rate feedback are overstated b/c the atmosphere is not warming up as fast as expected (“no hot spot”). The evidence on that is mixed, with some data sets showing expected warming and others not. Obviously, some of these observational data are wrong — and my guess is that the data sets that don’t show a hot spot are wrong.
        The reason I have that view is that the atmosphere and surface are tied together by pretty simple physics (see moist adiabatic lapse rate) and if the atmosphere is not warming as fast as expected, then something really weird is going on. The more parsimonious explanation is that data sets that don’t show warming are wrong.”
        https://scienceofdoom.com/2017/12/24/clouds-and-water-vapor-part-eleven-ceppi-et-al-zelinka-et-al/

      • Dessler is an activist, IMO, much more than a scientist. More about this species in this recent paper: http://journals.sagepub.com/doi/abs/10.1177/1077699018769668?journalCode=jmqc . It’s about the German scene more or less, but IMO one can extrapolate it to the global one. Politicized science is the counterpart of real science.
        Why do we find the critiques on Twitter etc., and not here, where there is a chance to put the doubts to the lead author? My response: because the critique is only propaganda and the opponents are afraid of the response.

      • Lol. Dare to defy the great Lewis. No gatekeeping here.

        Meanwhile, Bjorn Stevens, once one of the “good guys”, is a coauthor.

      • That’s an interesting abstract Frank. It’s a self defeating cycle. To get people to take action, you exaggerate and when you exaggerate people become either fatalistic and think they can’t do anything or they become resistant to your message because of the long track record of exaggerations.

      • Looks like he did say that,

        “The more parsimonious explanation is that data sets that don’t show warming are wrong.”

        https://scienceofdoom.com/2017/12/24/clouds-and-water-vapor-part-eleven-ceppi-et-al-zelinka-et-al/#comment-123475

        See the entirety for more context. Data is assumed to have been used to develop the formulas. Then the formulas are used to evaluate the data. It could be that the formulas only captured what the data does some of the time. They capture modes. Some formulas may almost always work. Stringing together many formulas, or iterating a limited number of formulas many times, might just yield something not useful.

      • Well, the moist adiabatic lapse rate is a simple theory that would need observational support. This is especially true as convection is an ill-posed problem and recent work shows that GCMs’ models of it leave out critical elements such as aggregation. Further, these sub-grid models can be credibly tuned to give a wide range of ECS. It strikes me as an area where more fundamental work is needed, and an example of “simple physics” really being “verbal formulations” that lack quantification.

      • nobodysknowledge: Dessler:
        “The reason I have that view is that the atmosphere and surface are tied together by pretty simple physics (see moist adiabatic lapse rate) and if the atmosphere is not warming as fast as expected, then something really weird is going on. The more parsimonious explanation is that data sets that don’t show warming are wrong.”

        That is a weak basis for a belief. The hydrologic cycle is not an adiabatic process. And it is not simple. To start with, look up CAPE in any reference; e.g. Thermal Physics of the Atmosphere, by Maarten H. P. Ambaum, p 121 et seq.
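
        For readers following the lapse-rate point: the saturated-adiabatic rate that Dessler’s “simple physics” appeal rests on is easy to evaluate numerically. A minimal sketch, using the standard textbook formula with approximate constants and the Tetens saturation-vapour-pressure approximation (all chosen here purely for illustration):

```python
import math

# Approximate textbook constants
G = 9.81        # gravity, m s^-2
CP = 1004.0     # specific heat of dry air, J kg^-1 K^-1
LV = 2.5e6      # latent heat of vaporization, J kg^-1
RD = 287.0      # gas constant for dry air, J kg^-1 K^-1
EPS = 0.622     # molar-mass ratio, water vapor / dry air

def sat_mixing_ratio(T, p):
    """Saturation mixing ratio (kg/kg) via the Tetens approximation."""
    es = 611.0 * math.exp(17.27 * (T - 273.15) / (T - 35.85))  # Pa
    return EPS * es / (p - es)

def moist_lapse_rate(T, p):
    """Saturated (pseudo)adiabatic lapse rate, K per metre."""
    r = sat_mixing_ratio(T, p)
    num = G * (1.0 + LV * r / (RD * T))
    den = CP + (LV ** 2) * r * EPS / (RD * T ** 2)
    return num / den

dry = G / CP                                 # dry adiabatic rate
moist = moist_lapse_rate(288.15, 100000.0)   # warm, moist surface air
print(f"dry: {dry * 1000:.1f} K/km, moist: {moist * 1000:.1f} K/km")
```

        The moist rate (~4–5 K/km for warm surface air, versus ~9.8 K/km dry) is what ties amplified upper-tropospheric warming to surface warming in the “hot spot” argument; whether the real tropical atmosphere follows it is exactly what the data-set dispute above is about.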

      • JCH
        “Meanwhile, Bjorn Stevens, once a one of the “good guys” is a coauthor.”

        Stevens and Mauritsen (another “good guy”) supplied the data that Dessler used. It appears to be common practice in climate science for suppliers of non-published /non-publicly available data that a paper is strongly dependent on to be invited to be, and to become, co-authors. I don’t think you can read much into who the co-authors are in a case like this.

    • JCH; what about arguments?

      • I think accusations of alarmism/activism are nonscientific BS.

      • Where is the stadium wave? When’s it coming? Oh wait, it’s ever present and also right around the corner. Pray.

      • JCH, I don’t see any gatekeeping here on this post. I personally want to try to understand the differences between Nic and Andrew scientifically. Nic has helped me do that. The evidence of Dessler’s activism is very strong including his own descriptions of his beliefs and in his presentations. Pointing this out is not BS but completely true and not irrelevant.

      • It’s completely irrelevant.

        AE Dessler has an extensive publication record, and Bjorn Stevens, a CargoCult Etc. designated good guy, a man of courage, is coauthor on the Dessler paper.

        As for the arguments, I think only time will obliterate the work of Nic Lewis: the advance of observations and the advance of cloud science, of which Dessler and Stevens are leading scientists.

        Since 2011, the annual rate of warming is currently 0.0635 ℃, and I doubt that surge in warming is approaching an end.

      • http://www.woodfortrees.org/graph/plot/hadcrut4gl/from:2000

        Global temps have already returned to the anomaly which defined the pause. (since 2011, the annual rate of warming is currently 0°C)…

      • JCH: Dessler breathes fire and brimstone on Twitter, but he doesn’t justify his claims. Nic has dealt with Dessler’s claims and rejected them, but Dessler goes on. This is not proper scientific behaviour, and the only explanation IMO is the one I gave here. If you think the paper of Nic and Judy is wrong, please argue, or let it be.

      • “afonzarelli | April 26, 2018 at 10:22 pm |
        Global temps have already returned to the anomaly which defined the pause. (since 2011, the annual rate of warming is currently 0°C)…”
        JCH | April 26, 2018 at 6:33 pm |
        “Since 2011, the annual rate of warming is currently .0635 ℃,”
        JCH is right re the trend. Fonz, you have to have equal cooling under the line to that above the line to claim a pause.
        Sorry.
        “From February 2016 to February 2018 (the latest month available) global average temperatures dropped 0.56°C.”
        Should make you feel better.

      • angech, not so… There is a certain anomaly at which the world stopped warming. Once we return to that anomaly, the pause is back. Under your definition of the pause, we could return to the anomaly of the pause indefinitely and the pause would never come back. Trend lines do not define the pause!
        Now, obviously, I was being facetious there with jch and so was not entirely accurate for other reasons. (just trying to highlight his knack for making warming out of no warming at all)…
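
        The 0 °C versus 0.0635 °C/year dispute above comes down to how a least-squares trend line is fitted: a spike late in the period pulls the fitted slope positive even when the series starts and ends at a similar level. Purely to illustrate the arithmetic (the anomaly values below are made up, not HadCRUT4 data):

```python
def ols_slope(x, y):
    """Ordinary least-squares slope of y against x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Hypothetical annual anomalies, 2011-2017: a big spike near the end
# (a la the 2015/16 El Nino) yields a positive fitted trend even
# though the first and last values are nearly equal.
years = [2011, 2012, 2013, 2014, 2015, 2016, 2017]
anoms = [0.40, 0.45, 0.38, 0.42, 0.76, 0.85, 0.45]
print(f"OLS trend: {ols_slope(years, anoms):.4f} deg C / year")
```

        This is why a trend over a chosen window and a “return to the anomaly of the pause” can both be true at once: they are different statistics of the same series.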

  15. “That implies that the ‘hiatus’ – the period of slow warming from the early 2000s until a few years ago – had little effect on estimation.”

    All well and good for eliminating biases, but the hiatus actually gives an estimate of human forcing. The ocean is the basis of any concept of a distinction between equilibrium and transient “sensitivities”. Ocean fluctuations (AMO, PDO, ENSO) go on the natural side of the ledger, and blur our equilibrium/transient distinction.

    Radiation travels at the speed of light, and radiative equilibrium in the atmosphere is achieved in real time. This includes the radiative “forcing” of the human increment of atmospheric CO2. LW radiation from CO2 can only warm the ocean to the extent that it warms the atmosphere, thereby reducing the ocean atmosphere temperature gradient.

    The CERES TOA data, which coincidentally corresponds with the hiatus, does not show any decrease in this instant equilibrium LW radiation to space. It actually shows a small increase.

    What is decreasing in instant equilibrium is SW (reflected) radiation to space.

    CO2 does not significantly absorb SW radiation. The ocean does, and water in the atmosphere does.

    Since LW radiative loss to space has increased during the hiatus, and the transient (instantaneous) human increment of CO2 forcing is included in this increase, and ocean fluctuations reputedly responsible for the hiatus represent a limit on natural “forcing”; the remainder is human forcing.
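
    For context, the energy-budget method at issue in the head post boils down to two ratios of observed changes. A minimal sketch with illustrative round numbers (not the paper’s actual data):

```python
def energy_budget(d_T, d_F, d_N, f_2x=3.7):
    """Energy-budget estimates of climate sensitivity.

    d_T  : change in global surface temperature (K)
    d_F  : change in effective radiative forcing (W m^-2)
    d_N  : change in planetary heat uptake, mostly ocean (W m^-2)
    f_2x : forcing for doubled CO2 (W m^-2); 3.7 is a commonly used value
    """
    ecs = f_2x * d_T / (d_F - d_N)   # equilibrium: heat uptake subtracted
    tcr = f_2x * d_T / d_F           # transient: heat uptake not subtracted
    return ecs, tcr

# Illustrative numbers of roughly the right order of magnitude
ecs, tcr = energy_budget(d_T=0.8, d_F=2.3, d_N=0.5)
print(f"ECS ~ {ecs:.2f} K, TCR ~ {tcr:.2f} K")
```

    Because heat uptake appears only in the ECS denominator, disagreements about the ocean heat and forcing series translate directly into the ECS spread discussed here.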

    • The changes are consistent with changing low level cloud. Reduced SW reflection and increased IR emission. They are large and consistent with Argo data.

      Cooling (since 1998 perhaps) and latter-day warming. Almost exclusively in the tropics. The ocean response is fairly quick. The rate of warming exponentially decreases in the static view as the ocean/atmosphere heat gradient decreases. In the dynamic view there are much larger changes in TOA radiative flux, with large ocean warming and cooling on annual and longer timescales.

      Low cloud is anti-correlated to SST.

      “The modeling approach involves the incorporation of observed patterns of satellite‐derived shortwave cloud radiative effect (SWCRE) into the coupled model framework and is ideally suited for examining the role of local and large‐scale coupled feedbacks and ocean heat transport in Pacific decadal variability. We show that changes in SWCRE forcing in eastern subtropical Pacific alone reproduces much of the observed changes in SST and atmospheric circulation over the past 16 years, including the observed changes in precipitation over much of the Western Hemisphere…

      Clouds play an important role in the modulation of Earth’s climate because they are very effective at reflecting incoming solar radiation and absorbing and emitting Earth’s infrared radiation. Unfortunately, the accurate simulation of clouds and their effects on radiation (particularly low clouds that have a net cooling effect on the planet) remains one of the greatest challenges to climate modelers who want to predict climate variations on decadal to centennial timescales.” https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2016GL071978

      • Maybe it is difficult to assess cloud types at night, but the ISCCP kept records of daytime cloud types from 1983 through 2009.
        The significant decrease in cloud cover over the entire period, interrupted by the eruption of Mt. Pinatubo, is in cumulus clouds. These are low-level clouds, ranging from 0.67 to 2.7 km according to MODTRAN.

        Many other cloud types are flat through about the millennium, and then make a change in trend. Among these are stratocumulus, another low level cloud, and nimbostratus, yet another. Stratocumulus decreases after the millennium at roughly the rate of cumulus, but nimbostratus increases after the millennium.
        As usual, it’s fantastically complicated. Maybe CO2 boundary layer warming inhibits cumulus, but ocean fluctuations control stratocumulus and nimbostratus in opposite ways? The net is a reduction of low level clouds as you suggest.

        There is information in the hiatus. At the very least, the hiatus demonstrates that these ocean fluctuations very nearly equal the human forcing. The ocean fluctuations are currently out of phase. What will natural variability be when they align?

      • There are a number of difficulties in observing cloud – and dozens of adjustments. But there are changes in outgoing radiant flux – similar in ERBE and ISCCP – that are both large and have obvious connections to natural phenomena. Nice graphic, btw.

  16. Well done. The key thing IMO is not the LC18 result, in line with previous findings. It is that the various critiques of LC15 dont stand up to serious scrutiny.

    • Yes, the key section is no. 7 (Discussion). There Nic and Judy argue against the method used by authors after 2014, with its many reservations (in short: that the models are more real than the observations).

  17. This plot:

    Has punch – a good, clear, important point. It says: what crisis? I think it has planning value superior to most information. However, that depends on what the author’s point is.

  18. Nic Lewis, thank you for the essay.

    Congratulations on getting it published.

    • Steven Mosher

      so much for the myth of gatekeeping

        • The congratulations, perhaps, were offered much in the same way one would offer congratulations to a PowerBall winner. Knowing they beat extremely long odds, they deserve a pat on the back.

        I offer congratulations to both authors for having a paper that, as of this minute, has not been challenged by any of the usual suspects, ostensibly because they lack the intellectual heft to do so.

        I’m not as confident in it getting a Most Favored Paper designation in the next IPCC report, however. Some work in the past, like the Houston and Dean paper (2011), has been deep-sixed by the gatekeepers, and has not found its way into the august list of approved studies.

        The fidelity of the IPCC could get a boost if this paper is part of the calculus in the next report.

      • That’s what I’ve been naming it to myself: gatekeeping review.

        Gatekeeping review is not how science was done until recently. This is a change that has occurred within our lifetimes. Calling it peer review seems disingenuous.

        Peer review should mean other scientists publicly commenting on and criticizing research. The more people that comment on a paper, the more that paper has been peer reviewed. There’s also a good deal of difference between the depth of understanding that different people manifest when commenting on a paper. Having that criticism public makes it possible for others to evaluate that.

        The disadvantages and potential for abuse of gatekeeping review should be obvious. Allowing one or a few people to block the publication of a paper. Hidden reasoning that other people cannot evaluate. The practical reality that most papers are being accepted or rejected based on no more than a few hours of thought. And the fact that all too many people, especially non-scientists, falsely believe that being ‘peer-reviewed’ is a significant affirmation of the truth of the research.

      • Their implicit (and mainstream) assumption that 100% attribution can be made with suitable endpoint choices is common to this class of papers. They can sneak this attribution assumption past the skeptics here, apparently unnoticed, along with the implication that it doesn’t even take much CO2 sensitivity to account anthropogenically for all the warming so far.

  19. Lewis and Curry’s new median estimate for ECS* of 1.66°C (5–95% uncertainty range: 1.15–2.7°C) is consistent with Chris Monckton’s calculation of ECS=1.2.

    It would be a very satisfactory – if somewhat anticlimactic – resolution of the bitter and rancorous climate war to simply coalesce at a physically realistic and observation-consistent lower ECS of this order.

    • Where are the deep deniers on this one? I will get them started:

      What’s the point of talking about ECS, when there is no experimental proof that adding CO2 to the atmosphere can cause warming?

  20. The 1.66 is veryyyyyyyy close to my calculation of 1.62 based on the specific heat ratio of the different gases in the atmosphere.

  21. Am I wrong in assuming that the ECS and TCR values do not take into account a lasting [i.e. decadal to inter-decadal] warming/cooling response of the Earth’s atmosphere to the ENSO cycle? I believe that there is considerable evidence to prove that this is indeed the case.

    The (Extended) Multivariate ENSO Index

    The Multivariate ENSO Index is defined at the NOAA website located at:
    http://www.esrl.noaa.gov/psd/enso/mei/

    The Extended Multivariate ENSO Index is defined at the NOAA website located at:
    http://www.esrl.noaa.gov/psd/enso/mei.ext/index.html

    The important point to note is that the Multivariate ENSO Index is the most precise way to follow variations in the ENSO phenomenon:

    The Cumulative Sum of the MEI

    Negative values of the MEI represent the cold ENSO phase, a.k.a. La Niña, while positive MEI values represent the warm ENSO phase (El Niño).

    [N.B. If the cumulative sum of the MEI over a given epoch steadily increases throughout the epoch, then the impact of the El Ninos exceeds the impact of the La Ninas over this epoch.

    If the cumulative sum of the MEI over a given epoch steadily decreases throughout the epoch, then the impact of the La Ninas exceeds the impact of the El Ninos over this epoch.]

    The dotted red line in the above graph shows the cumulative sum of the extended Multivariate ENSO Index (MEI) between the years 1880 and 2000 A.D. The cumulative sum has been taken over each of the four 30 year epochs, starting in the years 1880, 1910, 1940, and 1970.

    The solid blue line in the above graph shows the cumulative sum of the extended Multivariate ENSO Index (MEI) between the years 1886 and 2006 A.D. The cumulative sum has been taken over each of the four 30 year epochs, starting in the years 1886, 1916, 1946, and 1976.

    It is clearly evident from this plot that whenever the cumulative MEI index is systematically decreasing over a 30-year epoch i.e. between 1886 and 1915, and between 1946 and 1975, the world’s mean temperature decreases. It is also evident that whenever the cumulative MEI index is systematically increasing over a 30-year epoch i.e. between 1916 and 1945, and between 1976 and 2005, the world’s mean temperature increases.

    CONCLUSIONS

    1. The ratio of the impact of El Ninos to the impact of La Ninas upon climate can be monitored over multi-decadal time scales using the cumulative MEI.

    2. The cumulative MEI shows that since roughly 1880 there have been four main climate epochs, each 30 years long. There have been two 30 year periods of cooling (i.e. from 1886 to 1915, and from 1946 to 1975) and two 30 year periods of heating (i.e. from 1916 to 1945, and from 1976 to 2005).

    3. Periods of warming occur whenever the impact of the El Ninos exceeds the impact of the La Ninas. Periods of cooling occur whenever the impact of the La Ninas exceeds the impact of the El Ninos.
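
    The cumulative-MEI construction described above is just a running sum restarted at each 30-year epoch boundary. A minimal sketch with synthetic index values (made up for illustration, not real MEI data):

```python
def epoch_cumsum(values, start_year, epoch_starts):
    """Cumulative sum of an annual index, restarted at each epoch boundary."""
    out = []
    total = 0.0
    for i, v in enumerate(values):
        if start_year + i in epoch_starts:
            total = 0.0          # new epoch: restart the running sum
        total += v
        out.append(total)
    return out

# Synthetic MEI-like values: La Nina-dominated (negative) for one
# 30-year epoch, then El Nino-dominated (positive) for the next.
vals = [-0.2] * 30 + [0.3] * 30
cum = epoch_cumsum(vals, 1886, {1886, 1916})
print(cum[29], cum[59])   # running sum at the end of each epoch
```

    A sum that steadily decreases within an epoch marks La Nina dominance (cooling, on the argument above); one that steadily increases marks El Nino dominance (warming).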

    • On a better track here than simply doing something or other with the MEI. ENSO modulates the global energy budget over millennia.

      • Robert, We have done just that in the paper I wrote with Nikolay Sidorenkov:

        https://www.researchgate.net/publication/324102702_A_Luni-Solar_Connection_to_Weather_and_Climate_I_Centennial_Times_Scales

        This is the first paper in a series that explains the lunar mechanism. These papers show that temperature changes in Southern Colorado over thousands of years can be directly linked to lunar tidal changes associated with extreme Perigean New/Full moons that operate over centennial to millennial timescales. In addition, they show that the lunar tidal mechanisms that operate over centennial to millennial time scales are directly associated with the lunar tidal mechanism that produces the warming/cooling of the Earth’s atmosphere on decadal timescales and the initiation of El Nino events.

      • Robert,

        This is just a restatement of the work of the Russian school of thought that prevailing (air-masses and) winds over the Eurasian continent oscillate between being meridional (north-south) and zonal (east-west) every 30 years. I am sure that you are aware of the results that are obtained using the (Vangengeim-Girs) Atmospheric Circulation Index – ACI – and its relationship to world mean temperature.

        Many of the Russian researchers have moved on and they now know that the 60-year pattern in the meridional/zonal wind flows (and its manifestation in the instability of the mid-latitude jet-stream) is simply a reflection of the 59.75-year cycle seen in the extreme seasonal spring-tides produced by Perigean New/Full moons.

        The article you refer to points at the manifestation of the driving mechanism and not the actual driving mechanism itself. There is proof of this assertion in the second paper of our series which should be coming out in the coming months.

      • Here’s what the authors say about the cause of the decadal shift between the warming and cooling phases:

        “Hypotheses to explain the decadal shifts are not mutually exclusive and include: (1) atmospheric-ocean interaction dynamics; (2) the rate of Atlantic meridional overturning circulation; and (3) a statistical combination of climate indices and Arctic sea ice variability. To date an accepted causation is uncertain for any of these proposals.”

        “To date an accepted causation is uncertain for any of these proposals.”

        I’m not sure if you are referring to the causation of these effects directly, I thought that led back to gravitational interactions.
        If however you wonder how these effects affect temperatures, it’s that Tmin follows dew points, which follow the ocean cycles. Changes in co2 have very little impact on Tmin.

      • Changes between meridional and zonal patterns emerge from the polar annular modes. This has been linked to solar UV/ozone chemistry in the upper atmosphere.

        e.g. https://www.nature.com/articles/ncomms8535

        The suggestion is that the 20 to 30 year periodicity is related to the Hale solar cycle.

      • Robert,
        Let me spell it out for you. The article you referred me to above defines the NAO as follows:

        “The NAO is defined by sea level pressure differences between the low pressure cell around Iceland and the high pressure cell around the Azores [1]. A positive NAO index indicates a large pressure difference between Iceland and the Azores, whereas a negative index indicates a small pressure difference [1]. The NAO is essentially a measure of the variability of the zonal flow with [the] strong zonal flow during the positive cycle [Zone wind flow] and meridional Rossby wave blocking north-south patterns during the negative cycle [Meridional Wind].”

        The article then goes on to define the AMO as follows:

        “The AMO may be described as the time-integrated effect of the NAO on ocean currents that results in an out of phase surface cool pool/warm pool in the North Atlantic during positive/negative NAO.”

        Now the NAO is proportional to the Time Rate of Change of the LOD:

        Figure 1: The top graph shows the time rate of change of the Earth’s length of day (LOD) between 1865 and 2005. (Note: The LOD data has been transformed into arbitrary units so that it can be compared to the NAO index). Positive means that LOD (length-of-day) is increasing compared to its standard value of 86400 seconds and that Earth is slowing down. The bottom graph shows the North Atlantic Oscillation Index between 1864 and 2006. The data points that are plotted in both graphs have been obtained by taking a five-year running mean of the raw data.

        Hence:

        AMO is proportional to the time integral of the NAO
        NAO is proportional to the time derivative of the LOD

        This implies that:

        AMO is proportional to the LOD

        This could mean that the intensity of the zonal winds could control the changes in the LOD, which is plausible.

        OR

        the changes in the LOD control the AMO, and hence the changes in the world mean temperature on decadal timescales.

        I have evidence to support the latter explanation in my paper:
        Wilson, I.R.G., 2011, Are Changes in the Earth’s Rotation
        Rate Externally Driven and Do They Affect Climate?
        The General Science Journal, Dec 2011, 3811.

        Thus, it appears that the data supports the latter explanation not the former.
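
        The proportionality chain above (AMO ∝ ∫NAO dt, NAO ∝ dLOD/dt, hence AMO ∝ LOD) is just the statement that integrating a derivative recovers the original series up to a constant. A toy numerical check with a synthetic LOD series (made-up data, purely to illustrate the identity):

```python
import math

# Synthetic "LOD" series: a slow oscillation in arbitrary units
n = 200
lod = [math.sin(2 * math.pi * t / 60.0) for t in range(n)]

# "NAO" proportional to the time derivative of LOD (finite difference)
nao = [lod[i + 1] - lod[i] for i in range(n - 1)]

# "AMO" proportional to the time integral of NAO (running sum)
amo = []
total = 0.0
for v in nao:
    total += v
    amo.append(total)

# Integrating the derivative recovers LOD up to the constant lod[0]
err = max(abs(a - (l - lod[0])) for a, l in zip(amo, lod[1:]))
print(f"max reconstruction error: {err:.2e}")
```

        Note that the identity holds whichever way the causal arrow runs, so it cannot by itself distinguish between the two explanations debated above.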

      • The NAO is the Atlantic expression of the AO – otherwise known as the Northern Annular Mode. The AMO is indeed an expression of the NAO/AO – but not especially relevant. The same meridional patterns influence NH temperature in the zone of influence. But the global temps are more proximately the result of changes in the Pacific state – with the spinning up of winds and ocean gyres in both hemispheres. The change in ENSO state you started with?

        There is no forcing as such but a Lorenzian trigger in a spatio/temporal chaotic system. The most modern hypothesis being solar UV/ozone influencing polar surface pressures through atmospheric pathways defined in the process models at the core of the last study I linked to.

        I have changed my mind – you don’t know a damn thing. You have an idiosyncratic theory with not a shred of physical mechanism.

      • Robert,
        I am in agreement with you because of my paper:

        Wilson, I.R.G., 2011, Are Changes in the Earth’s Rotation
        Rate Externally Driven and Do They Affect Climate?
        The General Science Journal, Dec 2011, 3811.

        which shows that the Sun drives the longer-term changes in Earth’s rotation. These Sun-induced changes in the Earth’s LOD precede the observed changes in the PDO by roughly 8 years. They also drive the observed changes in the AMO, which appear to drive the observed changes in the world’s mean temperature.

        What I am claiming is that the Moon is synchronized with the observed changes in the Sun and so its tides act to reinforce the solar forcing.

      • Robert,
        You say that:
        “I have changed my mind – you don’t know a damn thing. You have an idiosyncratic theory with not a shred of physical a [sic] mechanism.”

        I think that you have made a terrible mistake. You have come to this conclusion without knowing the full extent of my arguments. I am no expert on the AMO, but I do know how to construct a simple logical argument. I will try to give my perspective as an astrophysicist in the hope that you will see the overall argument from a different viewpoint. I hope that you will forgive my general ignorance about the AO/NAO and AMO connections.

        1. There are straight-forward mathematical arguments to show that a zonal pressure difference between two mid-latitudinal regions of the Earth’s atmosphere, when measured at a given fixed altitude in the troposphere (which is effectively the NAO index), is proportional to the time rate of change of the LOD.

        I have presented a graph (above) that conclusively shows that this is indeed the case. Note – the data shows that there is no time-lag between these two parameters.

        2. The conventional explanation for this rests upon the conservation of rotational angular momentum. If the westerly wind speeds in the North Atlantic increase (as is the case when you have a positive NAO), then the underlying rotational angular momentum of the Earth must decrease by an amount that balances the increase in rotational angular momentum of the atmosphere. This argument assumes that there is no external torque acting upon the Earth and its atmosphere.

        3. If you took this at face value, then you would conclude that the Earth’s rotation is just responding to whatever the changes are in the mid-latitude westerly winds [which, over the Atlantic ocean, are a manifestation of the NAO/AO].
        This would be supported by the lack of a time-lag between the NAO index and the time rate of change of the LOD.

        4. However, I believe that I have conclusively shown that deviations in the rotation rate of the Earth from its long-term trend are closely synchronized with the Barycentric motion of the Sun over the last 350 years. Not only that, I claim (using observational evidence) that these deviations in the Earth’s LOD from its long-term trend precede changes in the PDO index by roughly 8 years. This means that the changes in the Earth’s rotation rate act as a causation of the change in the PDO index.

        [I am guessing that this is the point where you think that I am off with the fairies but I think that you have misunderstood what I am trying to say.]

        I am not saying that the Barycentric motion of the Sun is in any way directly related to changes in the Earth’s rotation rate. However, I am saying that there must be a third factor that causes what appear to be two completely different phenomena to vary in synchronization.

        As an astronomer, I can show you that the cycles that are present in the Barycentric motion of the Sun are also present in the complex motion of the Moon about the Earth. This leads me to conclude that the underlying (third) factor that creates an apparent link between the Barycentric motion of the Sun and the Earth’s rotation is the tidal influence of the Moon on the Earth’s oceans and atmosphere – with particular reference to the ENSO cycle and its long-term apparition, the PDO.

        5. Logic tells you that if the changes in the Earth’s rotation precede the changes in the PDO by 8 years, then it is very likely that whatever is causing the changes in the Earth’s rotation rate must also be driving the PDO.

        I have spent the last four or five years searching for observational evidence that links the lunar tides to the ENSO cycle. I believe that I have been able to identify the actual physical mechanism involved. This mechanism will be revealed in the second paper in my current series of papers being written with Nikolay Sidorenkov.

        All I can say is that I have a physical mechanism related to the extreme Perigean Spring tides that initiate the El Nino events through their influence upon the generation of Madden Julian Oscillations.

        I hope that you will reconsider your objections to my work.

      • Robert,

        I have visited your blog site called Terra et Aqua and seen that one of your subtopics is headed “The illusion of climate cycles”. This gives me some idea where you are coming from and why you have made some of your claims. Here is my interpretation of what you are saying:

        In your most recent post on JC’s blog site you said:

        “But the global temps are more proximately the result of changes in the Pacific state – with the spinning up of winds and ocean gyres in both hemispheres. The change in ENSO state you started with.”

        On your own blog site you mention that: “Changes in trajectories of global surface temperature occur at the same times as shifts in Pacific climate state.” and you refer to a paper by (Swanson et al, 2009, Has the climate recently shifted?). You say that the authors of this paper “used network math across a number of climate indices to confirm that synchronous chaos is at the core of the global climate system.” This has led you to conclude that: “Climate is a globally coupled spatio/temporal chaotic system.”

        On your own blog site, you go on to say that this means that:

        “More or less upwelling in the eastern Pacific is linked to changes in wind and gyre circulation – in both hemispheres – driven by changes in surface pressure in the polar annular modes. This, in turn, has been linked to solar UV/ozone chemistry translated through atmospheric pathways to polar surface pressure.”

        In your most recent post on JC’s blog site, you go on to say that:

        “There is no forcing as such [of the climate system] but a Lorenzian trigger in a spatio/temporal chaotic system. The most modern hypothesis being solar UV/ozone influencing polar surface pressures through atmospheric pathways defined in the process models at the core of the last study I linked to.”

        I have two questions/comments/queries about the things you are claiming.

        1. You are claiming that “More or less upwelling in the eastern Pacific is linked to changes in wind and gyre circulation – in both hemispheres – driven by changes in surface pressure in the polar annular modes.”

        The Antarctic polar annular mode is moderated by 9.1-year variations in the mean latitude of the peak in the Southern Summer [DJF] Sub-Tropical High-Pressure Ridge [STHPR].

        Ian R. G. Wilson
        Lunar Tides and the Long-Term Variation of the Peak Latitude Anomaly of the Summer Sub-Tropical High-Pressure Ridge over Eastern Australia, The Open Atmospheric Science Journal, 2012, 6, 49-60

        Ian R. G. Wilson and Nikolay S. Sidorenkov
        Long-Term Lunar Atmospheric Tides in the Southern Hemisphere
        The Open Atmospheric Science Journal, 2013, 7, 51-76

        These variations are driven by 9.1-year and 20.29/10.15-year lunar tidal variations in the Earth’s atmosphere and oceans.

        These lunar-driven systematic decadal variations in the latitude of the STHPR would play a far greater role in setting the surface pressures in the polar regions than the influences of UV/ozone.

        2. You also claim that: “the global temps are more proximately the result of changes in the Pacific state – with the spinning up of winds and ocean gyres in both hemispheres. The change in ENSO state you started with.”

        I have definitive proof that El Nino events are triggered by lunar-generated Kelvin-like waves that travel along the equator in the form of Madden Julian Oscillations. The long-term aggregate of El Nino events plays a seminal role in determining the state of the PDO and hence, indirectly, the world’s mean temperature on decadal timescales.

      • Your problem is that it is the La Nina normal that drives ENSO. You are on the wrong side of the ocean.

    • Does the lack of a response mean no? I would have thought that the basic idea was to include forcings on the atmosphere from all sources, including man-made CO2 and then to see how the temperature of the atmosphere responds to these forcings. This method works provided you include ALL of the forcings.

      My contention is that you have not included all of the forcings. The ENSO cycle clearly acts as a forcing on the atmosphere that produces temperature changes lasting over decadal time-scales.

      I believe that lunar tides that are formed by Extreme Perigean New/Full moons that occur above the Equator and the latitudes of the lunar standstills produce Kelvin waves that travel along the equator in the form of Madden Julian Oscillations (MJOs). Extreme forms of these MJOs, known as Pacific Penetrating MJOs, generate Westerly Wind Bursts (WWBs) in the western equatorial Pacific Ocean which trigger El Nino events. The subsequent ENSO cycles modulate the absorption by the atmosphere of solar energy over a wide range of latitudes, as a result of regional variations in the total amount and type of cloud cover.

      Part of this overall process includes the dissipation of lunar tidal energy in the depth of the large ocean basins which helps drive the upwelling of deep ocean water that is associated with the ENSO process.

      In summary, lunar tides act as a systematic forcing agent upon atmospheric temperature over decadal time scales through their long-term influence upon the ENSO cycle. They do this by triggering El Nino events that initiate the ENSO cycle. These, in turn, modulate the rate of warming and cooling of the Earth’s atmosphere by systematically changing the Earth’s regional cloud cover.

    • astroclimateconnection: I hope that you will reconsider your objections to my work.

      That was a good series of posts. Thank you.

  22. So in a nutshell, we could say that a HadSST3 0.6°C warming of the global sea surface since 1869 could be seen to be mostly due to a sprinkle of anthropogenic GHGs, but only while it is believed that the Sun has nothing to do with major ocean cycles and their effects on clouds and water vapour.
    http://www.woodfortrees.org/graph/hadsst3gl/from:1869

  23. Pingback: Surprise: ‘Climate Change’ Not As Bad As We Thought, Say Scientists » Pirate's Cove


  24. “the transient climate response (TCR) is unlikely to be larger than about 1.8C. This is roughly the median of the TCRs from the CMIP3 model archive, implying that this ensemble of models is, on average, overestimating TCR.” ~Isaac Held (2012)

  25. https://www.express.co.uk/news/uk/950748/climate-change-scientists-impact-not-as-bad-on-planet

    Climate change is ‘not as bad as we thought’ say scientists

    The study questioning the future intensity of climate change was carried out by American climatologist Judith Curry and UK mathematician Nick Lewis.

    The study in the American Meteorological Society’s Journal of Climate predicts temperature rises of 1.66C compared to one IPCC forecast of 3.1C and 1.33C compared to another IPCC study predicting 1.9C.

    The 2015 Paris climate agreement sought to limit climate change to 2C above pre-industrial levels and no more than 1.5C if possible.

    Mr Lewis said: “Our results imply that, for any future emissions scenario, future warming is likely to be substantially lower than the central computer model-simulated level projected by the IPCC, and highly unlikely to exceed that level.”

    This story from the UK Express was picked up by the MSM and subsequently the Drudge Report

  26. As far as I understand, even the IPCC seems to agree that a small increase in CO2 is not capable of causing the assumed temperature rise (sensitivity) (AR5 ch. 8, FAQ 8.1). Their trick is to assume that a minor increase in the radiative forcing caused by increasing CO2 causes an increase in evaporation, and since water vapour is a more effective greenhouse gas than CO2, H2O is the main cause of the warming effect. The funny thing is that this is not considered by the IPCC a forcing but only a feedback.


    • nobodysknowledge

      BW: “H2O is the main cause of the warming effect. The funny thing is that this is not considered by the IPCC a forcing but only a feedback.”
      I think Lewis and Curry are treating H2O as a feedback too. It would be nice to see how water vapor and clouds change in the 2xCO2 scenario, as H2O is central to the climate sensitivity.

  27. H2O is the main cause of the warming effect. The funny thing is that this is not considered by the IPCC a forcing but only a feedback.

    The logic behind this is that feedbacks are a consequence of the effect of forcings. The problem is that reality is far more complex. The CO₂ we produce is a forcing, but the CO₂ released by the oceans as they warm is a feedback.

    In practical terms anything coming from outside the Ocean-Atmosphere coupled system is a forcing (anthropogenic, volcanic, solar, lunar, Milankovitch), and anything within the system is either a feedback if it is a response to a forcing-induced change, or natural variability if it isn’t.

    It is logical then, that H₂O is not considered a forcing under this paradigm.

    The problem is that the paradigm induces one to think that feedbacks have a secondary importance in terms of climate change to forcings. However, in a dynamically complex chaotic system the non-linear behavior of the feedback might be more important in the final result than the change in forcing.

    • The problem is that the paradigm induces one to think that feedbacks have a secondary importance in terms of climate change to forcings. However, in a dynamically complex chaotic system the non-linear behavior of the feedback might be more important in the final result than the change in forcing.

      It is more important than forcing, and I think that’s the reason they remove it: it has effectively canceled the increases in GHG forcing, as the troposphere has to abide by PVT and its impact on water vapor.
      Why does it slow or stop cooling in the middle of a clear calm night, while the radiative sky is 100F colder than the surface, as measured from the surface?
      This is the only question people need to understand.

    • “However, in a dynamically complex chaotic system the non-linear behavior of the feedback might be more important in the final result than the change in forcing.”

      Simple, but brilliant!

  28. Pingback: STUDY: UN climate Models ‘inconsistent’ with real temperature

  29. Pingback: Much Ado about the Unknown and Unknowable | POLITICS & PROSPERITY

  30. It is in all respects impossible math. With models, it is nonlinear math.

    “In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of (perturbed physics) ensembles of model solutions.” IPCC TAR 14.2.2.2

    Within this simplistic – there is no other word for it – framework is far too much that is glossed over.

    “In summary, although there is independent evidence for decadal changes in TOA radiative fluxes over the last two decades, the evidence is equivocal. Changes in the planetary and tropical TOA radiative fluxes are consistent with independent global ocean heat-storage data, and are expected to be dominated by changes in cloud radiative forcing. To the extent that they are real, they may simply reflect natural low-frequency variability of the climate system.” IPCC 4AR 2.4.4.1

    In principle it seems real enough: a 2.1 W/m2 warming in SW and a 0.7 W/m2 cooling in IR. “The top-of-atmosphere (TOA) Earth radiation budget (ERB) is determined from the difference between how much energy is absorbed and emitted by the planet. Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.” Loeb et al 2012

    The physics of large natural variability involves Rayleigh–Bénard convection and nonlinear transitions between closed and open cell cloud. Closed cells with high albedo tend to form over cold, upwelling regions and open – low albedo – over warmer seas (Koren et al 2017). The dynamic involves upwelling air in the center of closed cells and downwelling at the outsides. The transition to open cells involves aerosol loading and droplet formation that rain out the center of closed cells. The tropical Pacific is where SST varies most and over a large part of the global tropics.

    Let us know when you decide this is real.

  31. Dessler’s objections to LC18 sound much like what he did several years ago in response to our papers demonstrating that time-varying internal radiative forcing pretty much prevents feedback (and thus ECS) diagnosis from short-term temperature and radiative flux variations. I’m not sure he actually understands what others have done, now including the LC (and Otto et al.) alternative methodology of diagnosing feedback from long-term changes in Tsfc, ocean heat content, and assumed radiative forcings. None of these methods are great, because of associated assumptions. But he seems too quick to discount ANY study that suggests low ECS. I wonder why? I’m willing to consider high or low, whereas he just published a paper saying there is NO evidence of ECS below 2 deg. C.

    • In an extensive interaction with him, he seemed really rather unable to defend his work technically. My supposition is that, like lots of climate science, his work is fraught with uncertainties. It is certain that many of his papers rely fundamentally on GCM simulations for their conclusions.

      It is true that energy balance methods are simplistic and rely on other methods to determine forcings that may be questionable. But they are surely better than GCM’s whose ECS has recently been shown to be strongly dependent on unconstrained parameter choices in their models of tropical convection and clouds. This really I think is a pretty strong negative result and should give cause to shift resources away from GCM’s to other lines of inquiry.

      • You don’t need GCMs to show higher values. Just take the temperature and CO2 changes since 1950 to compute an effective TCR and you get 2.4 C per doubling. Conservatively, 80% of this is CO2 versus the added effect of aerosols and other GHGs, still giving something near 2 C per doubling. Plus, they are seen to be strongly related when plotted, both accelerating since 1950. LC18 only explains half this warming rate, which raises questions about what exactly it is supposed to represent. The plotted CO2 here corresponds to 2.4 C per doubling if it matches the temperature for the last 60 years, which it does.
        http://woodfortrees.org/plot/best/from:1950/mean:12/plot/esrl-co2/scale:0.01/offset:-3.25

      • Sorry, Jim… 80% is a liberal figure. An upper bound. You assume that none of the warming is from solar, geo thermal, or internal chaotic variability, etc. Never assume because you make an ass out of u and of me. (ass + u + me = assume)…

      • LC18 don’t assume that either. They chose their endpoints to cancel all that out. Yes, 80% is conservative. CO2 accounts for 75% of the GHG changes since 1950, but you also have to subtract aerosols from the forcing change which raises CO2 as a fraction of the net.

      • You don’t need GCMs to show higher values. Just take the temperature and CO2 changes since 1950 to compute an effective TCR and you get 2.4 C per doubling.

        1.76C

      • Not clear why he insists on starting in 1950.

      • In an extensive interaction with him, he seemed really rather unable to defend his work technically

        No dpy, in an extensive interaction with him you were unable to engage in technical discussion but instead resorted to insults.

        I viewed the video you linked and it is very disappointing. The last third of it is just self-serving tripe. It really does help me understand Dessler’s appearance here and his biases. It explains why he has to just ignore real critiques or lines of evidence that he doesn’t like. I must say as well that it calls into question for me his honesty and directness in dealing with science generally

        https://scienceofdoom.com/2017/12/24/clouds-and-water-vapor-part-eleven-ceppi-et-al-zelinka-et-al/#comment-123671

      • Jim D wrote:
        “Just take the temperature and CO2 changes since 1950 to compute an effective TCR and you get 2.4 C per doubling. Conservatively 80% of this is CO2 versus the added effect of aerosols and other GHGs, still giving something near 2 C per doubling”

        Your calculation is unsound. Using the C&W infilled version of HadCRUT4 and all forcings, as per LC18, changes from a 21 year mean centered on 1950 to the 2007-16 mean give a TCR estimate of 1.31 C.

        Regression is not a suitable method for estimating TCR over 1950-2016, because volcanic forcing is weighted towards the start of the period and it has a much lesser effect on surface temperature than other forcings.
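        The energy-budget method Lewis describes amounts to scaling the observed warming by the ratio of doubled-CO2 forcing to the total forcing change. A minimal sketch of that arithmetic, with purely illustrative inputs rather than LC18’s actual values:

```python
def tcr_energy_budget(dT, dF, F2x=3.7):
    """Energy-budget transient climate response: observed warming scaled
    to the forcing from a CO2 doubling.
    dT  -- temperature change (K) between base and final periods
    dF  -- change in total (all-source) forcing (W/m^2) between periods
    F2x -- forcing from doubled CO2 (~3.7 W/m^2, the AR5 value)"""
    return F2x * dT / dF

# Illustrative inputs only -- NOT the actual LC18 numbers:
print(round(tcr_energy_budget(dT=0.80, dF=2.25), 2))  # -> 1.32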

      • Jim D: For the time from 1950 on I did it with the “tamino adjusted” records ( for Enso, solar, volcanos): https://judithcurry.com/2016/10/26/taminos-adjusted-temperature-records-and-the-tcr/ .
        For C/W the result is TCR = 1.35, almost congruent with the result in L/C18.

      • Jim D: Would you please be so kind as to show us (Nic and me) how you could get this result: “Just take the temperature and CO2 changes since 1950 to compute an effective TCR and you get 2.4 C per doubling”? I’m very interested in your “alternative way” and/or “alternative data” to calculate this measure!

      • What happened here from verytallguy is quite instructive on propagandistic techniques. First you quote out of context a single comment in a thread with hundreds of comments. Then you ignore the technical content of the other comments and accuse someone of not engaging technical issues when the hundreds of other comments are technical engagement. It’s very dishonest and shows how harmful non-scientist consensus enforcers can be to technical discussions.

      • dpy,

        That’s really pathetic. I provided a link so your comments can be seen in context.

        Own your behaviour.

      • VTG, Do I really have to offer proof? Above you said:

        No dpy, in an extensive interaction with him you were unable to engage in technical discussion but instead resorted to insults.

        That’s a transparent misrepresentation of a thread full of technical discussion and can only be based on the out of context comment you cited. Propagandistic to the core.

      • I have shown how to get 2.4 C per doubling countless times. Start with this fit of 1 C per 100 ppm which tracks really well.
        http://woodfortrees.org/plot/gistemp/from:1950/mean:12/plot/esrl-co2/scale:0.01/offset:-3.25
        This implies 0.9 C for the 90 ppm change in CO2 from 310 to 400 ppm. This gives an apparent 2.4 C per doubling (at least 80% being CO2 itself). The time prior to 1950 was fairly volcano free and had a positive solar anomaly compared to now, so net solar+volcanic forcing since 1950 is generally regarded as negative, plus much of the aerosol increase occurred from the late 50’s that would also reduce the apparent CO2 effect. I used BEST, but you can also use HADCRUT4 and GISTEMP with the same result.
        http://woodfortrees.org/plot/hadcrut4gl/from:1950/mean:12/plot/esrl-co2/scale:0.01/offset:-3.35/plot/gistemp/from:1950/offset:-0.1/mean:12
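        The arithmetic behind the 2.4 C figure can be reproduced directly: 0.9 K assigned to the 310→400 ppm rise, scaled logarithmically to a doubling. This checks only the calculation under dispute, not the attribution of all that warming to CO2:

```python
import math

def implied_per_doubling(dT, c0, c1):
    """Warming per CO2 doubling implied by assigning the whole observed
    temperature change dT (K) to the CO2 rise c0 -> c1 (ppm), with
    forcing taken as logarithmic in concentration."""
    return dT * math.log(2) / math.log(c1 / c0)

print(round(implied_per_doubling(0.9, 310, 400), 2))  # -> 2.45
```

Jim D’s “2.4 C per doubling” follows from this log scaling; the objections in the thread concern endpoint choice and the forcings omitted from the denominator, not this arithmetic.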

      • (ass + u + me = assume)…

      • dpy,

        The comment was not out of context. I provided a link. I recommend people follow it.

        You spent the whole thread insinuating dishonesty and malpractice. Andrew Dessler was remarkable in his patience with you. You were warned *twice* by SoD moderating.

        Some more snips follow.

        I believe that in climate science there is a group of activist scientists who are reluctant to publish results that are not alarming.

        [ followed by mod]: In this blog – check the Etiquette – we ignore presumed motivations.

        So Andrew, You drop by for another drive by comment. 

        What the Lancet says about science is in my view at least as applicable to climate science where the data are noisy, the models not very skillful, and the activism pretty poorly disguised

        the politically motivated accusations of science denial many climate scientists peddle

        your denial that there is a significant problem in climate science leads me to distrust your professional opinion on other issues

        I think Andrew knows this quite well as its “obvious to the most casual observer.”

        I hope that readers will observe what Andrew just did here. He was given at least 4 lines of evidence that called into question his arguments, some on first principle grounds. He didn’t respond to a single one of them.

        He then employed a rhetorical device to try to discredit the person providing the lines of evidence and fell back on his own authority. He doesn’t know much about me but presumes to read my mind. Classically fallacious

        [further from mod]: This blog definitely doesn’t accept character attacks. It doesn’t even accept discussions of motivation. Some people find it impossible to work within these guidelines – please join other blogs and comment there to save me deleting your comments here.

        Your attempts to deny your behaviour are abject.

      • VeryTall, Readers can observe the technique you use here. Most of what you excerpted is about science and is quite true even though inconvenient for you. As a non-scientist, you have never responded to any of the substance of this thread, which I understand you are trying to obscure. Dessler didn’t respond either. That tells me that you know nothing about it and are merely trying to delegitimize what I said. That’s shameful and really unethical and a propagandistic technique at which you seem quite expert. An anonymous internet activist returns again and again to a nontechnical sophist argumentum ad hominem. Really?

      • dpy,

        The reality, as so richly evidenced by that SoD thread, is that you are unable to engage in discussion without insults.

        Own your behaviour.

      • stevefitzpatrick

        Nic Lewis,
        People have been telling JimD for years that his simplistic Wood-for-Trees methods are unsound. Explanations have no effect on him: he just keeps repeating the same nonsense. Don’t waste your time.

      • VeryTall Guy, You have completely avoided technical substance here and I have never seen you make a comment anywhere with any technical substance. GCM’s and tropical convection are a very interesting technical subject and you are the one who keeps contaminating comment threads with non-technical issues over and over again. Can we please just discuss science?

      • dpy,

        I think your keeping to technical issues would be an excellent idea, yes. I look forward to it.

      • Think about what just happened here. It’s the perfectly executed propagandist attack. An anonymous internet non-scientist with no track record of any real contributions of substance anywhere relentlessly focuses on the largely imaginary sins of the target he is attacking and refuses to say anything that is technical in nature or indeed anything about what the target said that is technical in nature. Pretty shameful.

      • dpy, the science guy,

        Your latest post doesn’t obviously have a focus on the science you say you want to stick to, but maybe I’m missing something here.

        Anyway, looking forward to you staying on the science henceforth.

      • VeryTall, You did miss something both here and at SODs but perhaps that’s a selection bias issue.

        https://judithcurry.com/2018/04/24/impact-of-recent-forcing-and-ocean-heat-uptake-data-on-estimates-of-climate-sensitivity/#comment-871225

        Refreshing your memory: Here’s the pull quote from a Nature article about pre-registration of clinical trials: “Loose scientific methods are leading to a massive false positive bias in the literature,”

        https://www.nature.com/news/registered-clinical-trials-make-positive-findings-vanish-1.18181

        In case you missed it, most of what I said at SOD’s was amplification of these points, with specific references to the literature and the very important Zhao et al paper on convection and cloud parameterizations in GCM’s showing that ECS could be essentially engineered using credible values of the parameters. It basically more or less proves Nic’s point that GCM’s are not satisfactory scientific evidence concerning ECS. And that’s a critical point that has been ignored by Dessler among others.

      • It basically more or less proves Nic’s point that GCM’s are not satisfactory scientific evidence concerning ECS.

        GCMs are one tool in the kit.

        I think it’s best to judiciously use all of the tools.

      • Well, that’s a motherhood statement that means little. Some tools are a lot better than other ones for specific purposes.

        The problem for Dessler is that he is using GCM’s to predict cloud feedbacks in the tropics, when that’s exactly the portion that Zhao dealt with and showed could be tuned, with credible parameters, to change these feedbacks over a broad range. Thus GCM’s are simply not scientifically valid evidence on these issues. There was never a response to this except that we are all frustrated at how poor GCM’s are. Oh, and I forgot: they agree with “simple theory.” That seems to mean unquantified verbal formulations that are kind of akin to theological explanations.

        GCM’s were developed to predict weather, i.e., Rossby waves, and are highly tuned for this purpose. Their other predictions seem to lack skill, such as regional climate, which overlaps with the “warming pattern” that Dessler also touts as proving that energy balance methods underestimate ECS. Once again, GCM’s are not valid scientific evidence for this.

        If you feel like leaving your propagandist shell and reading up on this, I’d be happy to see some response or some further information.

      • The emergent constraints papers, talked about here before, show that the GCMs that do best against observations with tropical clouds in the current climate are also the ones with more positive cloud feedbacks and higher climate sensitivities as a result.

      • Nic did a really good job, JimD, looking at this in a recent post here. I don’t credit these emergent constraint methods either. There is no reason whatsoever to suppose that two emergent properties (ECS is one) of a very complex model are linked in any such way. You simply don’t know how the model was tuned or what the sensitivities to parameters are.

        You need to get enough good data to really come up with a good model of convection/clouds/precipitation (if that’s even possible). That’s the only way to address the current lack of skill. One of my complaints is that no one seems to be working on this hard fundamental problem. Instead people are running the models and drawing questionable conclusions.

      • If you feel like leaving your propagandist shell…

        This whole “let’s just discuss the science” thing, dpy?

        You’re finding it tough, right?

        Hang in there buddy. I’m rooting for you.

      • I think it’s a good thing that the ECS can be engineered in GCMs.

      • Nic Lewis backed up the importance of low clouds in the tropics and more sensitive GCMs getting them right as a robust result from the emergent constraint studies.

      • I think I’m going to end this here with you Very Tall. You don’t contribute anything technical but are an expert at ignoring 99% of what is said. That makes you a propagandist and more importantly a waste of time.

      • You’re really very funny dpy.

        Good luck with the whole staying sciencey thing.

      • Speaking of bad behavior it seems very tall person can’t say anything about science, but is a hypocrite as well who likes to attack people without having anything of substance to say. Is this guy actually a real person or a bot?

        https://judithcurry.com/2018/04/20/week-in-review-science-edition-80/#comment-871191

      • Still not sticking to the science dpy?

        Keep trying, you’ll get there in the end.

      • VeryTallTales, You have yet to make any technical comment here. It’s just another proof that you are not acting in good faith. I’m merely pointing out that you are a hypocrite who hurls non-technical mud at people and then criticizes others for much more justified criticisms.

        There’s plenty of technical discussion here if you want to try to actually make a contribution. Of your 20 or so comments here, none has any technical content. Why is that?

      • dpy,

        another hilarious technical post from you, for sure!

        Here you are, with your history of being moderated because you can’t manage a scientific interaction without reverting to personal insults.

        Yet you demand others stick to discussing science, whilst yourself putting up a long series of accusations, post after post after post.

        Naturally, this shows I’m the hypocrite in the room.

        Anyway, do carry on. A little light relief is always good.

      • Well, there you go again. It’s by now obvious to everyone what you are doing, and it is cheap and adds nothing to the thread.

      • Dpy, you’ve got stamina for this non technical banter, I’ll give you that.

        It’s almost like you think your imperative for technical content only applies to other people.

      • Giving in to an anonymous bully who advances nothing but tripe just encourages them to continue trying to silence people with smears. You’ve done it twice now on different people.

        You are still zero on technical content. Still you feel no shame. Perhaps a socialization issue.

      • verytallguy

        Of course, I’m a “bully”.

        From an exchange which started with how you have a history of resorting to personal insults causing you to be moderated, and continued with you offering insult after insult after insult. Whilst all the time simultaneously and hilariously whining about the lack of technical content.

        This is really simple.

        You like dishing it out. You don’t like taking it, and you *really* can’t cope with your behaviour being called out.

    • Dessler’s objections are reflexive, just as are all such objections from the establishment. Protect the dogma at all cost is their mantra. It’s not about science. That is clear to anyone who has read the activists’ reactions to any work challenging the apocalyptic narrative.

      The same kind of reaction is perfectly predictable for the next paper that doesn’t follow the company line, whether next year or 10 years from now. They are all well scripted, well rehearsed, and tedious.

    • he just published a paper saying there is NO evidence of ECS below 2 deg. C.

      Can you provide a citation please?

      The recent Dessler paper most people have been referring to is

      A. E. Dessler et al.: Internal variability’s impact on climate sensitivity
      Atmos. Chem. Phys., 18, 5147–5155, 2018
      https://www.atmos-chem-phys.net/18/5147/2018/acp-18-5147-2018.pdf

      That concludes

      We find that the method is imprecise – the estimates of ECS range from 2.1 to 3.9 K (Fig. 2), with some ensemble members far from the model’s true value of 2.9 K. Given that we only have a single ensemble of reality, one should recognize that estimates of ECS derived from the historical record may not be a good estimate of our climate system’s true value.

      (my bold)

      Which is not at all the same as your characterisation. But perhaps you are referring to a different paper.

      • raypierre says: October 10, 2017
        ” while we need longer and better satellite data to pin down cloud feedback, the record is already getting good enough to rule out a significantly stabilizing cloud feedback, which is the only thing, as you note, that could drive climate sensitivity below 2C.”

      • Here’s the recent Dessler & Forster paper I referred to. It’s at a preprint service with no peer review; I assume they have submitted it somewhere else and it is in review:
        https://eartharxiv.org/4et67/

      • Thank you, that now makes a lot more sense

        I’m not sure they claim anything more than that their analysis doesn’t support ECS below 2, not that no analyses support that (in fact I think they explicitly state that).

      • I linked to that preprint here several days ago.

  32. Thanks for this analysis, Judith and Nic.

    Your analysis assumes that all of the warming since 1869 was caused by greenhouse gases, that there has been no natural warming, and that the HadCRUT4 temperature dataset is unaffected by the urban heat island effect (UHIE).

    Earth’s climate history shows an obvious millennial temperature cycle, as shown by this graph of extra-tropical Northern Hemisphere (ETNH) temperature proxies.

    The temperature rise from 1869 to 1900 is all a natural recovery from the Little Ice Age (1400 to 1700) as humans could not have had any effect on climate during this period. The temperature rise from 1900 to 1950 is almost all natural, as the CO2 rise was insignificant. This shows that your assumption that the earth was in temperature equilibrium in 1869 – 1882 is false, and that a significant portion of the temperature rise was natural.

    Global temperatures vary by only 80% as much as the ETNH, according to HadCRUT4. The global natural recovery from the Little Ice Age since 1900 is estimated at 0.084 °C/century, based on the millennium cycle from the graph and the global adjustment. This reduces the calculated equilibrium climate sensitivity (ECS) by 0.23 C.

    Numerous papers have shown that the UHIE contaminates the instrumental temperature record. A study by McKitrick and Michaels showed that almost half of the warming over land since 1980 in instrumental data sets is due to the UHIE. A study by de Laat and Maurellis came to identical conclusions. A study by Watts et al presented at the AGU fall meeting 2015 showed that bad siting of temperature stations has resulted in NOAA overestimating US warming trends by 59% since 1979. A study by Dr. Roy Spencer also shows that about half the warming over land is UHIE. The UHIE over land is 0.14 °C/decade, or 0.042 °C/decade on a global basis, since 1979. The UHIE correction over the period 1980 to 2008 is 0.11 °C. Making the conservative assumption that there was no UHIE before 1980, this reduces the ECS by 0.20 C.

    Correcting the Lewis & Curry ECS estimate for the preferred base and final periods, the ECS is reduced by 0.43 C from 1.50 C to 1.07 C.
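    The 0.43 C correction is simple bookkeeping; a sketch taking the reductions claimed above (0.23 C natural recovery, 0.20 C UHIE) at face value – these are this comment’s estimates, not established values:

```python
def corrected_ecs(ecs_baseline, reductions):
    """Subtract claimed non-GHG contributions, already expressed as ECS
    reductions in K, from a baseline energy-budget ECS estimate."""
    return ecs_baseline - sum(reductions)

# 0.23 K claimed for natural recovery, 0.20 K claimed for UHIE:
print(round(corrected_ecs(1.50, [0.23, 0.20]), 2))  # -> 1.07
```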

    Using the most recent version of the FUND integrated assessment model, assuming a 3% discount rate, emissions in 2018 and ECS = 1.50 C, the social cost (benefit) of carbon dioxide is +US$1.36/tonne CO2. However, using the corrected ECS of 1.07, the social cost (benefit) of carbon dioxide is US$-20.06/tonne CO2.

    Using a more realistic discount rate of 5%, the SCC for ECS = 1.50 °C and 1.07 °C is US$-0.28/tonne CO2 and US$-10.61/tonne CO2, respectively. The negative sign means that the benefits of emissions exceed the costs.

    FUND is the world’s most detailed, evidence-based integrated assessment model.

    Rather than imposing carbon taxes, fossil fuel use should be subsidized by US$10 to US$20/tonne CO2.

  33. I’m wondering about the choice of datasets for LC18. Why hasn’t Berkeley land/ocean been considered? It is the most comprehensive of all datasets, using virtually all available met station data.
    The trend of Berkeley l/o is 21% larger than HadCRUT4 and 12% larger than C&W for the full period, 1850 to now.

    Also, I’m not sure that ERA-Interim is the gold standard for validating coverage and blend bias in HadCRUT4. ERA-Interim has the known SST bias in 2001/2002, and also a significant cool bias in Antarctica.
    For instance, the ERA-Interim trend for the South Pole is -0.4 °C/decade, whereas the corresponding trend measured at the Amundsen-Scott base is +0.2 °C/decade.

    All datasets mentioned here rely on the HadSST3 analysis, which might be a limitation. Other SST datasets should be considered to assess a wider range of possibilities. Cowtan had an alternative kriged dataset based on COBE2SST rather than HadSST3, but the URL on his web site is blank right now.

    • “Cowtan had an alternative kriged dataset ”
      was that kriged or rigged? Never mind, sour grapes on my part.
      A data set done by two skeptical science enthusiasts that shows a low ECS when used this way [no pun intended] is still pretty good.
      What are you wanting to do, keep moving around until you find one that agrees with your view of the world?
      Bit like Facebook really: why doesn’t someone come up with an alternative to Facebook, or to Cowtan and Way?

    • Berkeley land/ocean is unsupported by any peer-reviewed paper. Indeed, I can’t find any real documentation for it. And its dataset up to 2016 underwent an unexplained major change last year that substantially increased its trend. The Berkeley Earth Analysis Code webpage says that it was “updated Jan, 2013” and that “A change log summarizing the major changes will be available shortly.” None of this gives me any confidence in their methods or their land/ocean data.

      The ERA-interim data and trends in LC18 have been adjusted by ECMWF for the 2001/2 SST bias issue. The analysis in Simmons et al 2016 (DOI:10.1002/qj.2949) shows quite good capture by ERA-interim of temperature changes at various Antarctic stations, so the difference at the South Pole may well be an isolated one. Antarctica is anyway a small region. ERA-interim is not perfect, but it appears to be the best of the current reanalyses.

      HadSST3, unlike most other SST datasets, makes a decent attempt to correct the problems with ship measurement data in WW2 and the following decades.

      We showed in LC18 that both the GISS and NOAA datasets, which don’t use HadSST3, warmed very similarly to HadCRUT4v5 between the first and last two bidecadal periods in their records, and noticeably less than does the infilled (C&W) version of HadCRUT4. GISS and NOAA datasets are unsuitable for the main analysis as they only have a few years of data before an extended period of heavy volcanism.


      • Well, the methods for the Berkeley land dataset were published in March 2013: http://berkeleyearth.org/papers/
        I’m uncertain about the merged land/ocean dataset.
        The code is publicly available
        I’ve seen other studies that prefer Berkeley l/o because the version 2 of Cowtan and Way isn’t peer-reviewed and published.

        I know that the Copernicus version of ERA-Interim (global SAT) has been corrected, but is it really possible to correct the relation between SST and MAT without rerunning the reanalysis?
        I think we have to wait a year or so for the real gold standard reanalysis, ERA-5. There are 10 years of data available at the KNMI explorer right now. I compared the trends for Antarctica (land 60S-90S), and the trend of ERA-Interim was actually 0.40 °C/decade lower, quite a lot in my view. The other gold standard reanalysis, JRA-55, agrees more with ERA-5 for this short period.

      • Geoff Sherrington

        There was a lot of Climate Audit discussion about the quality of thermometry data from Antarctica about year 2006. I just did a quick revisit and found this summary of GHCN Antarctic data at the time (click on station number).
        http://climate.unur.com/ghcn-v2/700/
        The quality was poor. I am unaware of ways in which it could be/has been improved. Is it wise to base modern comments and calculations on this poor data? Geoff.

    • Steven Mosher

      “I’m wondering about the choice of datasets for LC18. Why hasn’t Berkeley land/ocean been considered?”

      Nic wrote to us on April 7th asking a question about a recent change to our product. My guess is he published without waiting for an answer, but I’m not sure.

      The biggest issue with Nic’s paper is he tends to “cherry pick” data sets and give post hoc explanations of why he selected x or y. To get the full measure of uncertainty this is a backward approach. It’s funny to select reanalysis as the gold standard when it relies on the surface temps to prove its accuracy.
      meh.. not worth a blog war.

      That said, between observational datasets there is not much of a difference, so the correct approach is to show them all. That’s what we try to do when comparing answers. The series that lack global coverage are quite wrong, but not by much; policy shouldn’t turn on which you select.

      • nobodysknowledge

        I tend to agree with you on the last point, SM. All datasets have their weaknesses, so let the reader judge. The Cowtan and Way dataset has been much criticized, but I think their study has some qualities. I have seen that Clive Best comes close to C and W on global temperatures, and he seems to do very serious work.

  34. I should add to my previous note that, as the IPCC points out in AR5 WG1 Ch8 FAQ 8.1, a rise in CO2 does not by itself cause the assumed temperature rise; rather, the CO2 increase induces evaporation, and the main GHG effect is the result of additional water vapour.

    I wonder about the amount of H2O evaporated, termed a feedback, and used in the GCMs. How does the minor CO2 forcing, in the form of IR radiation penetrating just the topmost millimetre of the sea surface, relate to the amount of water being evaporated by direct solar energy that warms oceans and other water basins? My logic says that the effect of CO2 as a GHG is out of its league competing with water vapour.

    It is also a fact that water in its gaseous, liquid and solid forms is, together with our Sun, the main driver of climate. The CO2 from burning of fossil fuels is just a critically important plant food. Finally, I do agree that many of the compounds in the emissions form a definite hazard to life and should be controlled.

    • All bodies of open water are assumed to have constant SW solar input. Then it’s a question of water to atmosphere output. The difference is put into the bank or taken out of the bank. This is another threat to climate science when accountants think they know something.

      If water is a poor atmosphere to water IR absorber, the atmosphere can still slow water joule emission by being warmer. Let’s go down the path that because of science, additional IR just evaporates more water. This additional water vapor is a GHG that hangs around for at least 5 seconds before it flies off to where it’s still a GHG but doesn’t matter. But in that 5 seconds it bounces more IR to the water surface which likewise just evaporates, making more GHG. But we have our water vapor train to nowhere so it doesn’t hardly matter.

      Now here’s the deal with, I am water so atmospheric IR can’t warm me. Water has to sacrifice itself by evaporating to use this shield. Evaporation ain’t free. You have to do something with the energy that tried to warm you. It’s like the tiles on the shuttle. The shuttle said, You can’t burn me up on reentry. But the heat said, I can take some of your tile mass.


  35. astroclimateconnection, I have submitted a paper that does what you suggest for inclusion of ENSO effects (MEI), based upon our previously published work showing that El Nino and La Nina have a radiative forcing component. It’s a simple 2-layer forcing-feedback energy budget model for the global ice-free ocean to 2000 m depth, and uses MEI as proxy forcing, with CERES satellite measurements 2000-2017 quantifying how ENSO affects Earth’s radiative budget in both a forcing and feedback sense. The model uses both RCP forcings as well as MEI from 1880 to present, run in Monte Carlo mode to optimize 7 adjustable model variables, and it produces enhanced warming before the 1940s, slight cooling from the 40s to 70s, strong warming late 70s to late 90s, then weak warming late 90s until the 2015-16 El Nino. The model is pretty straightforward. I submitted a paper to Geophys. Res. Lett. recently and was promptly rejected. Now reworking the paper for another journal. It produces a best estimate of ECS of 1.75 deg. C, not much more than the LC18 estimate without Arctic infilling. That ECS was mostly determined by the recent rate of 0-2000 m warming (1990-2017) and the 1880-2017 SST trend, with only a little influence from inclusion of ENSO.
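
    For readers unfamiliar with this model class, a minimal two-layer (mixed layer plus deep ocean) forcing-feedback energy budget model can be sketched as below. All parameter values are illustrative assumptions, not the submitted paper's fitted values, and the MEI/CERES forcing ingredients are omitted; lam is simply chosen so that ECS = F2x/lam = 1.75 K.

```python
# Minimal two-layer ocean energy budget model (illustrative parameters only):
#   dT1/dt = (F - lam*T1 - gamma*(T1 - T2)) / C1   (mixed layer)
#   dT2/dt = gamma*(T1 - T2) / C2                  (deep ocean)

F2X = 3.8            # W/m^2 forcing for doubled CO2 (assumed)
lam = F2X / 1.75     # net feedback parameter chosen to give ECS = 1.75 K
gamma = 0.7          # ocean heat uptake coefficient, W/m^2/K (assumed)
C1, C2 = 7.3, 106.0  # heat capacities (W yr m^-2 K^-1): ~50 m mixed layer, deep ocean

def run(forcing, years, dt=0.1):
    """Euler-integrate the two-layer model under a constant forcing."""
    T1 = T2 = 0.0
    for _ in range(int(years / dt)):
        dT1 = (forcing - lam * T1 - gamma * (T1 - T2)) / C1
        dT2 = gamma * (T1 - T2) / C2
        T1 += dt * dT1
        T2 += dt * dT2
    return T1, T2

# Under a step 2xCO2 forcing the surface relaxes toward ECS = F2X / lam
T1_eq, T2_eq = run(F2X, years=2000)
print(round(T1_eq, 2))  # -> 1.75
```

    A transient run driven by a time-varying forcing series (RCP plus an MEI proxy, as the comment describes) would replace the constant `forcing` argument with a per-step value.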

    • This seems to show the stronger warming from the late 1990’s?

    • Roy,
      You are way ahead of the curve on this one. Well done! I hope that your work gets past peer review soon. If you correctly account for this effect then I am reasonably confident that you are much closer to the correct value for the ECS than most other authors.

      Spencer et al. (2018) best estimate ECS = 1.75 deg. C
      LC18 (2018) best estimate = 1.66 deg. (1.15 to 2.7 deg. C)
      AR5 likely range = 1.5 to 4.5 deg. C (no best estimate given)

      Clearly, the data support a value for the ECS at the lower end of AR5’s likely range.

  36. > The implications of our results are that high estimates of ECS and TCR derived from a majority of CMIP5 climate models are inconsistent (at a 95% confidence level) with observed warming during the historical period.

    One could also argue that a more important implication of L18 is that their derivation of lowball estimates based on a simplistic energy balance model is incompatible (at a 97% confidence level) with the majority of CMIP5 climate models.

    Conflating one’s lowballing with the observed warming itself may not be that scientific.

    Correcting the typo in that sentence may or may not compromise the “observational” brand that seems lukewarmingly important.

    • One might also argue that while I have a chance of understanding LC18, I have no chance of understanding the CMIP5 climate models’ estimates of ECS and TCR. The more something is grounded in everyday things and the fewer steps it takes, the more worth it has compared to a thing, one of about twenty I think, that sits at some hoity-toity university. The deplorables are incompatible with high confidence levels of things they don’t understand. This is neither right nor wrong; it is. The recognition of what is is the first step in the pursuit of success. The acceptance of what is.

      • Yes, it’s Occam’s razor! All the estimates of higher TCR/ECS are deduced from models and from estimations of higher internal variability which are not justified by ANYTHING we have observed. It’s pure speculation. It’s your choice to believe (yes, believing is one of the most important requirements to follow these projections) in some possibilities of gloom and doom, or to follow the trusted science.

      • > The more something is grounded in everyday things and the less steps it takes

        That may indicate you haven’t really read L18, Ragnaar. In fairness, few contrarians will.

        Don’t be suckered by the 24 occurrences of “best estimate” and the 26 occurrences of “observational,” many of which only serve to hammer in the lukewarm brand. Notice instead the 36 occurrences of “variability,” most of which minimize it.

        That L18 minimizes variability should make Mr. T raise at least one eyebrow.

      • Willard:

        Tax season is over. Calm has been restored until the next cycle approaches its zenith.

        I am hearing variability has been minimized and LC18 should broaden its error bars. Lewis said that with or without the pause…

        It is true that by thinking long term, problems may seem lessened. I thought it was a good move on his part. I cannot say whether the error bars should be widened. I think the deep reserves of the oceans will save us. The SSTs have lagged the GMST as they sit on about 4 kilometers of cooler water.

        I validated this when I didn’t know my water bed heater had to work.

        I think the lower error bars go along with fewer steps and inputs. My view is that it’s like an accounting margin. Grocery stores have low profit margins. Jewelry probably has high profit margins. And calculating the profit margin is pretty easy. As I recall, business school used to teach all kinds of simple financial ratios even to non-accountants. How are you supposed to be in charge if you don’t understand those? So, a business school grad can understand 3/4s of what Lewis wrote above. Input this, get this. They are also aware of all the whiz kid models that fell flat on their face. Derivatives.

      • > They are also aware of all the whiz kid models that fell flat on their face. Derivatives.

        You could not have said it better, Ragnaar. Perhaps unknowingly. Nic used to be an accountant. He may relate to it.

        Now, in your story, are the derivatives freaks those who’d try to downplay risks based on observable success, or those who’d prefer to play it safe?

  37. Andrew Dessler has made a couple of comments on ATTP’s post on this paper. And they are mostly a rehash of the discussion he and I had at SOD’s. It really all gets down to the credibility of simple energy balance methods vs. GCM’s. And this gets to the issue raised at SOD’s about ECS of GCM’s being essentially tunable within a broad range using parameters that are poorly constrained by data. This negative result on GCM’s is never addressed in these discussions. Nic makes this point very eloquently in his writeup on ECS. Another issue has to do with aggregation of convective cells, which is unrepresented in GCM’s.

    Cawley has a quite remarkable comment saying that the complex models are better based on really nothing. This is demonstrably false in aerodynamics for example. If you can measure the thrust of the engines, then you have a pretty good idea of the drag because of momentum balance. Computing drag from first principles using eddy resolving simulations is of course quite impossible. Using turbulence models, the results are still quite variable and not very convincing. No competent fluid dynamicist would advocate CFD as being more accurate than the simple balance model.

    The pattern of warming argument is interesting as it seems to be prima facie evidence that GCM’s are inaccurate and/or do a poor job with short term variability. Yet it is ironically cited as a reason energy balance methods are biased low. That’s a conclusion that can be sustained only given an initial bias that the models must be right in the long term.

    Thus, these discussions always get down to ignoring recent negative results and cultural prejudices about complex models and “modeling all the physics.”

  38. Re
    Andrew Dessler @AndrewDessler 14 Dec 2017
    Terrific session on climate sensitivity today at #AGU17. Lots of discussion about climate sensitivity (ECS) estimates from the 20th century. The problem is that estimates of ECS from the 20th century obs. record are lower (1.5-2°C) than models (3°C)
    This leads to one of the biggest “skeptical” talking points: “Observational estimates of ECS are much lower than models” or “Models are too sensitive to CO2”. In the session today, it’s clear that the scientific community has beaten the ***** out of this problem.

    • “This leads to one of the biggest “skeptical” talking points: “Observational estimates of ECS are much lower than models” or “Models are too sensitive to CO2”. In the session today, it’s clear that the scientific community has beaten the s**t out of this problem.”

      “In a year or so, “Observational estimates of ECS are much lower than models” will be sitting next to “global warming stopped in 1998” or “the surface temperature record can’t be trusted” in the junk yard of discredited skeptical ideas.”

      Please see the context of the above here:
      https://www.theguardian.com/environment/climate-consensus-97-per-cent/2017/dec/18/scientists-have-beaten-down-the-best-climate-denial-argument

      It’s still a problem. Beating on it isn’t going to solve a thing. One should’ve been able to figure that out by now. There were the polar bears and the beating-the-stuff-out, and that didn’t work either.

      Then Dessler makes predictions. I’ll make my own: The global warming has been stuffed into a bottle and held there for a long time. It should have popped out by now. But at long last it is here, or almost here, or here by next year. And it’s like I said. I’ve been saying it for a long time. It’s relentless and fast, never mind all those years nature had it stuffed in a bottle. It is here, or just about, and I have been/will be vindicated. Science will triumph. And all the naysayers will see. And they’ll be even more marginalized than I said they were about 5 years ago, as I’ve been saying all along.

  39. There are a lot of discussions at ATTP’s revolving around ECS. Type in Dessler to search for ones relevant to this discussion.

    Including this hidden gem on the pause
    “JCH says: December 23, 2017 at 1:09 pm
    It just seems basic to me that if the warming hiatus that never happened was caused by a strengthening of ocean heat uptake efficiency during a period of time that coincided with Matt England’s anomalous intensified tradewinds, which both actually happened and then went away, that the observations are a bit F’d Up for primetime. So which longterm variation are they talking about? Because, as far as I can find, Matt England’s anomalous intensified tradewinds are a one-off phenomena.

    The winds came; there was a warming hiatus in improvable datasets;

    the winds subsided; GMST has been shooting through the roof ever since. Anyway, I’m reading all the cloud stuff. Seems to be pointing mostly in the same direction: upward ECS.”

  40. Everett F Sargent says: December 23, 2017 at 2:51 pm
    “Interesting looking paper, thanks. Amazed that they don’t seem to cite any of Nic Lewis’s work.”
    From a Google Scholar citation search of … “An objective Bayesian improved approach for applying optimal fingerprint techniques to estimate climate sensitivity”
    Currently at 69 citations, including …
    JA Curry, N Scafetta, C Loehle, A Parker, R McKitrick, Gervais (new one for me at least), A Ollila, M Connolly, R Connolly, RSJ Tol, J Marohasy, PJ Michaels, PC Knappenberger, WWH Soon, DR Legates, WM Briggs, Monkers, The Auditor
    There are, of course, a bunch of real climate science papers, I just happened to notice a preponderance of ‘so called’ addictive authors.
    Is NL a ‘so called’ gateway drug (a habit-forming author that, while not itself addictive, may lead to the use of other addictive authors)? Maybe a ‘so called’ keystone domino (but this time with real references) would be a better euphemism? 😉”
    Did not realise Nic was addictive until now. I always found his work too full of real scientific stuff.

  41. While static climate sensitivity is a perennial talking point for climate tragics on both sides, it will inevitably miss the next globally coupled shift in Earth’s spatio-temporal chaotic flow field.

    Dynamic climate sensitivity – in the way of true science – suggests at least decadal predictability. The current coolish Pacific Ocean state seems more likely than not to persist for 20 to 30 years from 1998/2002. The flip side is that – beyond the next decade – the evolution of the global mean surface temperature may hold surprises on both the warm and cold ends of the spectrum.

  42. Dessler totally ignores the fact that if there are radiative variations due to, say, chaotic cloud variations, they decorrelate the relationship between radiative flux and temperature. This leads to low regression slopes, underestimates of the net feedback parameter, and thus overestimates of climate sensitivity. This fact, which we have now demonstrated a few times in the literature (as have Lindzen & Choi), has finally led to some new independent research which concludes “The results suggest that regression-based feedback estimates reflect contributions from a combination of stochastic forcings, and should not be interpreted as providing an estimate of the radiative feedback governing the climate response to greenhouse gas forcing.” ( https://eartharxiv.org/5dsbf/ ) Dessler’s invoking of regional averaging effects is a red herring. The “noise” causing the problem in diagnosing feedback from regression is simply ignoring that causation flows in both directions between radiative flux and temperature. This has been clearly demonstrated, if people would bother to understand it: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2009JD013371
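
    The decorrelation argument can be illustrated with a toy stochastic model in the spirit of the papers cited (all numbers here are arbitrary illustrative choices, not the published analyses): temperature is driven partly by non-radiative noise, for which flux-versus-temperature regression recovers the feedback, and partly by internal radiative noise, which biases the regression slope low.

```python
import random

random.seed(1)

lam_true = 3.0    # true net feedback parameter (W/m^2/K), assumed
C = 7.0           # mixed-layer heat capacity (W yr m^-2 K^-1), assumed
dt = 1.0 / 12     # monthly time step (years)

T, f = 0.0, 0.0
Ts, Ns = [], []
for _ in range(12 * 2000):                 # 2000 years, monthly
    f = 0.9 * f + random.gauss(0.0, 0.5)   # red-noise internal radiative forcing (clouds)
    S = random.gauss(0.0, 3.0)             # non-radiative forcing (ocean mixing)
    N = f - lam_true * T                   # measured net TOA flux anomaly
    Ts.append(T)
    Ns.append(N)
    T += dt * (N + S) / C                  # temperature responds to both

# OLS slope of flux on temperature; an unbiased diagnosis would recover -lam_true
mT, mN = sum(Ts) / len(Ts), sum(Ns) / len(Ns)
cov = sum((t - mT) * (n - mN) for t, n in zip(Ts, Ns))
var = sum((t - mT) ** 2 for t in Ts)
lam_est = -cov / var
print(round(lam_est, 2))  # comes out well below lam_true = 3.0
```

    With the radiative noise switched off (`f` held at zero), the same regression recovers `lam_true` almost exactly; it is the internal radiative term that produces the low bias.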

    • it decorrelates the relationship between radiative flux and temperature.

      Unless I’m misunderstanding what you’re saying, isn’t this essentially what Andrew Dessler is highlighting? In reality there isn’t a simple relationship between changes in planetary energy imbalance and changes in surface temperature. Hence, if you apply a simple energy balance approach to try to estimate climate sensitivity, you can recover a result that is far from the true value.

      • stevefitzpatrick

        Ken Rice,
        Short term variability makes short term diagnosis impossible. EB estimates generally cover multi-decade to century+ periods, where short term variability has much less influence. There is no “simple relationship” between short term surface temperature changes and changes in radiation to space of course, but EB methods are pretty much immune to those short term changes. A more legitimate technical objection to EB estimates is the possibility of very long term (multi-century) changes in temperature response to constant forcing (eg. as indicated by Gregory plots of some GCM projections). But from a public policy standpoint, TCR seems to me much more relevant, and the TCR probability distribution from EB calculations provides the most robust basis for public policy discussions.

        That said, all technical evaluation of sensitivity seems to me rather moot… nothing is going to bring about significant reductions in global CO2 emissions over the next few decades. By 2050, the reality of ~500 PPM CO2 will have settled most of the technical questions. But that won’t make the policy disagreements disappear, nor should they: the fundamental disagreements are in values, priorities, and perceived costs and benefits. It is naive to imagine those disagreements are going to disappear if TCR turns out to be 1.2C or 2.1C. We might all hope technology provides additional lower cost policy options by mid-century.
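
        In sketch form, the energy-budget estimators under discussion are one-line formulas; the inputs below are round illustrative values in the neighbourhood of the AR5/LC18 quantities, not any paper's exact medians.

```python
# Global energy-budget estimators (illustrative inputs, not LC18's medians).

def ecs(f2x, dT, dF, dQ):
    """Equilibrium climate sensitivity estimate: F2x * dT / (dF - dQ)."""
    return f2x * dT / (dF - dQ)

def tcr(f2x, dT, dF):
    """Transient climate response approximation: F2x * dT / dF."""
    return f2x * dT / dF

F2X = 3.8  # W/m^2 per CO2 doubling (assumed)
dT = 0.80  # K, warming between base and final periods (assumed)
dF = 2.5   # W/m^2, forcing change between the periods (assumed)
dQ = 0.5   # W/m^2, change in ocean heat uptake rate (assumed)

print(round(ecs(F2X, dT, dF, dQ), 2))  # -> 1.52
print(round(tcr(F2X, dT, dF), 2))      # -> 1.22
```

        The long base and final periods enter only through the averaged dT, dF and dQ, which is why short-term variability largely washes out of these estimates.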

      • Hence, if you apply a simple energy balance approach to try to estimate climate sensitivity, you can recover a result that is far from the true value.

        This goes to how one defines ECS.

        Heat can go into the oceans, which is the current understanding.
        But to increase any future heat content of the atmosphere, it must return.
        Current understanding is that return of any excess heat from the oceans will occur much more slowly than current uptake.
        So, ECS may never actually occur. It may be a unicorn.

        The only other possibility is that warming changes SW albedo or LW emissivity. Should such changes be significant, it is just as physically plausible that such changes would act negatively as positively, possibly lowering the response.

        Here is a comparison of the SW albedo response from an early hypothetical model of Manabe with a recent (though aging) GISS model:

        Much of the early model warming was in reality from SW albedo decrease!
        This response diminished greatly in the more recent model, but even with the GISS model, much of the forcing is SW albedo feedback.
        Since this is dependent on non-linear dynamics, as with the failing “hot spot”, this feedback is highly suspect.

        Ultimately, TCR is reality and models of ECS are unvalidated and unverified speculation.

      • Heat can go into the oceans, which is the current understanding. But to increase any future heat content of the atmosphere, it must return.

        No, it doesn’t have to return. The energy that warms the surface so that we eventually return to equilibrium doesn’t have to be energy that initially went into the oceans. It may well be that various internal cycles do mean that some of the energy that heats the surface does come from the oceans, but it’s not some kind of requirement that this be the case.

      • Yes, as ATTP says, the heat doesn’t have to “return” from the ocean. You have to think of the ocean overturning circulation as a delaying mechanism to the inevitable rise to the ECS. It can delay the rise to the new equilibrium, and does, but it can’t stop it.

      • “Heat can go into the oceans”

        “No, it doesn’t have to return. The energy that warms the surface so that we eventually return to equilibrium doesn’t have to be energy that initially went into the oceans. It may well be that various internal cycles do mean that some of the energy that heats the surface does come from the oceans, but it’s not some kind of requirement that this be the case.”

        Let me be explicit: in order for ECS to be higher than observed TCR, there must either be a positive feedback of SW or LW, or a transfer of oceanic heat to the atmosphere. Feedbacks are plausible but speculative. And any return of currently presumed oceanic heat uptake may well be a millennium away, beyond which time RF will (based on demographics) likely have long since ceased to be forced by humans.

      • TE,

        in order for ECS to be higher than observed TCR, there must either be a positive feedback of SW or LW, or a transfer of oceanic heat to the atmosphere.

        Maybe I’m misunderstanding what you’re saying, but the ECS is almost certainly higher than the TCR. For a given change in external forcing, we will return to equilibrium when the surface temperature has increased so that the flux of energy out of the system matches the flux of energy into the system (on average). On long enough timescales, this shouldn’t depend on where the energy that heats the surface comes from. It should (on average) depend primarily on the various feedback processes. The point here, though, is that internal variability can impact the feedback responses in a way that suggests that if you try to infer the ECS from the historical temperature record you can recover a result that is not close to the true value.

      • Also in reply to TE, if we stopped emitting tomorrow and the forcing then stayed constant, there will still be surface warming until the imbalance is removed. This comes via the deep ocean warming further, not losing energy, and that warming comes from the net radiation until the imbalance goes away. Since about 1980, the ocean has been warming slower than the global average, so it has some catching up to do.

    • We will argue that the largest source of error in feedback diagnosis is the presence of time‐varying radiative forcing generated internal to the climate system, which then contaminates the radiative feedback signal.

      This internal forcing is what I’ve been trying to explain to all of you for a year and a half.

      There’s a self-induced water vapor feedback that is a stabilizing response, and it’s time-varying. And it greatly impacts sensitivity.
      https://micro6500blog.wordpress.com/2016/12/01/observational-evidence-for-a-nonlinear-night-time-cooling-mechanism/

    • Roy, I agree that there are chaotic cloud variations and other effects which introduce noise into the relationship between global radiative flux and the GMST. As an engineer in electrotechnics I would use a sampling time as long as possible to cancel out these random fluctuations and improve the S/N. Insofar, short-term observations (annual or decadal) should be used with a grain of salt. In LC18 they use a much longer time span. Do you think that this sampling time is long enough to determine the uncertainty as was done in LC18?

    • Yes, there is noise and care must be taken in applying simple conservation methods. Let’s use an analogy of an easier to understand problem. Let’s say we want to calculate the drag of an airplane based on the lift and thrust in an actual flight test. Now one can point out that there is a lot of turbulence so all these quantities vary a lot in time. In fact, in stormy conditions, the methods would be hopelessly inaccurate due to this noise. That’s why you choose a clear day with minimal turbulence and you use flight controls to minimize the noise from the remaining turbulence.

      Seems to me the analogy for energy balance methods is the selection of base and final periods. They should be as similar in ocean state (ENSO) and volcanic activity as possible. Can you fit your paper into this framework, Roy, or am I completely off base here?

    • No one expects the surface temperature to correlate with monthly radiative anomalies, otherwise we would be predicting ENSO from the radiative balance. The surface temperature on monthly to annual time scales is highly determined by chaotic internal variability, mostly in the ocean. What you need to do is look at decadal scales of forcing and temperature rates of change. For example the CO2 forcing rate of change has accelerated from 0.1 W/m2 per decade in the 1950’s to 0.3 W/m2 per decade since about the 1980’s. This acceleration has gone with a similar acceleration of the temperature change rate. While 0.1 W/m2 per decade is within other natural forcing changes, 0.3 W/m2 per decade is well above anything else going on, so it is no surprise that the temperature response has become so much clearer in recent decades when it wasn’t earlier.
      http://woodfortrees.org/plot/hadcrut4gl/from:1950/mean:12/plot/esrl-co2/scale:0.01/offset:-3.35/plot/gistemp/from:1950/offset:-0.1/mean:12/plot/best/from:1950/mean:12/offset:-0.1
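
      The stated forcing rates can be roughly reproduced from the CO2 record. A sketch of my own, using approximate concentrations and growth rates and the standard simplified expression F = 5.35 ln(C/C0), so treat it as illustrative only:

```python
# Rough check of the stated forcing rates, using the simplified CO2
# forcing expression F = 5.35 * ln(C/C0), so dF/dt = 5.35 * (dC/dt) / C.
# Concentrations and growth rates are approximate (an assumption here).

co2_cases = {
    "1950s": (313.0, 0.8),  # mean concentration (ppm), growth (ppm/yr)
    "2010s": (400.0, 2.3),
}

rates = {}
for decade, (conc, growth) in co2_cases.items():
    rates[decade] = 5.35 * growth / conc * 10.0  # W/m2 per decade
    print(f"{decade}: ~{rates[decade]:.2f} W/m2 per decade")
```

      This gives roughly 0.14 W/m2 per decade for the 1950s and 0.31 W/m2 per decade recently, consistent with the ~0.1 and ~0.3 figures quoted.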

  43. ….not to say that global average feedback isn’t variable temporally, but there is no observational evidence of that because it would require accurate observational diagnosis of feedback, which is not possible without accurate knowledge of all (anthropogenic and natural) radiative forcings and surface temperature variations, at a minimum.

    • I disagree Roy; if you do a network analysis it’s obvious, and it is contained in one simple question: “why does the cooling rate slow down or stop in the middle of the night, while the surface-to-sky temperature difference hardly changes?”

      • Because a temperature inversion forms a few meters above the surface which puts a stop to convection. This disappears quickly after sunrise.

      • Then it should have disappeared as I walked further up the side of the hill I live on.

      • a wholly different issue from climate sensitivity.

      • Not if Tmin is being regulated by the amount of water vapor there is, and the air temp. That’s what sets how much it cools at night, and controls the energy balance. Not the noncondensing GHG’s. Which is why the CS you get is so high, you’re including the regulation.

  44. Jim D, this is in response to your comment at 7.02pm; that’s now in the middle of the thread so nobody would see if I responded there.

    You’re assuming that looking at CO2 vs temperature gives you TCR because aerosols offset the forcing from non-CO2 GHGs. Look at figure 1 of the article. From 1950 to 2011 aerosol forcing was about 0.5 W/m2 per AR5; it was more like 0.35 W/m2 per LC18. That’s quite a small forcing – not much more than tropospheric ozone, which, eyeballing it, is about 0.25 W/m2.

    (I remember LC14 had txt files with all the data on ocean heat, forcings, etc – will that be available for this paper also?)

    Looking only at CO2, the 1950-2016 forcing is about 1.2 W/m2; looking at all anthropogenic forcings, it’s 2.1 or 2.2 W/m2. So total anthropogenic forcing is about 80% greater than that from CO2 alone. You calculated a TCR of 2.4 based on CO2 alone; if, to account for an 80% greater forcing, you divide that figure by 1.8, you get 1.33, which is more or less the result in LC18.
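
    As a sanity check, here is that arithmetic as a short Python sketch. The forcing values are eyeballed from the article’s Figure 1, so this is purely illustrative, not a calculation from LC18 itself:

```python
# Illustrative back-of-envelope check, using approximate forcing values
# eyeballed from the article's Figure 1 (not from LC18 itself).

tcr_co2_only = 2.4    # K per doubling, computed from CO2 forcing alone
forcing_ratio = 1.8   # total anthropogenic / CO2-only forcing (~2.2 / 1.2)

# Scaling the CO2-only figure by the forcing ratio gives the all-forcings TCR
tcr_all = tcr_co2_only / forcing_ratio
print(f"TCR scaled to total forcing: {tcr_all:.2f} K")  # ~1.33 K
```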

    • Yes, their departure is to say that the total forcing change is almost double that of CO2 alone in periods like that since 1950. They get that by maximizing other GHGs and minimizing aerosols. Most likely CO2 contributes 80% or more since 1950, as in the AR5 estimate plotted there. Taking the total as proportionate to the CO2 (as is seen graphically), they would also get 2.4 C per effective CO2 doubling, because that number comes from the raw data rather than from these assumptions about partitioning the forcing, which rather muddy the numbers. If you want to predict the warming as a function of emissions, you need to use the effective total value and not ignore those other components, which Lewis has to treat as a major contribution to account for all the warming we have had.

    • Alberto, thanks for the clarification. And Jim D: no, they do not maximise/minimise! They use the best available forcing data from the scientific literature. Some scientists devoted much of their working lives to getting these numbers. But (of course) you know better. We should stop at this point.

      • It should be very clear to you that since about 1980 CO2 has been the dominant driver of the forcing change, due to the fact that its forcing rate has tripled since 1950. It accounts for at least 80% of the total rate at this time, and 80% of 2.4 C per effective doubling is near 2 C per doubling, with the rest also being anthropogenic and not a get-out-of-jail-free card. Partitioning into the CO2 part alone without mentioning the anthropogenic total is deceptive. As mentioned by Lewis in LC15, the total is highly correlated with the CO2 part alone, this also being a clue to its dominant percentage of the total.

      • Jim, this is no more true now than the first time you said it…

  45. “Here we show that distinct large‐scale patterns of SST and low‐cloud cover (LCC) emerge naturally from objective analyses of observations and demonstrate their close coupling in a positive local SST‐LCC feedback loop that may be important for both internal variability and climate change. The two patterns that explain the maximum amount of covariance between SST and LCC correspond to the Interdecadal Pacific Oscillation (IPO) and the Atlantic Multidecadal Oscillation (AMO), leading modes of multidecadal internal variability.” https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2018GL077904

    Clearly not a novel discovery for an effect that varies naturally – and chaotically – on decadal to millennial scales, and something that suggests critical deficiencies in observational data. What we have suggests a relatively large contribution to 20th century warming from this ocean/cloud coupling process – centered on the tropical Pacific.


    “Tropical mean (20°S to 20°N) TOA flux anomalies from 1985 to 1999 (W m–2) for LW, SW, and NET radiative fluxes [NET = −(LW + SW)]. Coloured lines are observations from ERBS Edition 3_Rev1 data from Wong et al. (2006) updated from Wielicki et al. (2002a), including spacecraft altitude and SW dome transmission corrections.” https://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch3s3-4-4-1.html

    “Marine stratocumulus cloud decks forming over dark, subtropical oceans are regarded as the reflectors of the atmosphere.1 The decks of low clouds 1000s of km in scale reflect back to space a significant portion of the direct solar radiation and therefore dramatically increase the local albedo of areas otherwise characterized by dark oceans below.2,3 This cloud system has been shown to have two stable states: open and closed cells. Closed cell cloud systems have high cloud fraction and are usually shallower, while open cells have low cloud fraction and form thicker clouds mostly over the convective cell walls and therefore have a smaller domain average albedo.4–6 Closed cells tend to be associated with the eastern part of the subtropical oceans, forming over cold water (upwelling areas) and within a low, stable atmospheric marine boundary layer (MBL), while open cells tend to form over warmer water with a deeper MBL. Nevertheless, both states can coexist for a wide range of environmental conditions.5,7 Aerosols, liquid or solid particles suspended in the atmosphere, serve as Cloud Condensation Nuclei (CCN) and therefore affect the concentration of activated cloud droplets.8 Changes in droplet concentration affect key cloud properties such as the time it takes for the onset of significant collision and coalescence between droplets, a process critical for rain formation. The onset of significant collision-coalescence process can thus be represented by a delay factor.” https://aip.scitation.org/doi/10.1063/1.4973593

    Discussions not rooted in the broader context of large changes in toa radiant flux as a result of changes in ocean and atmospheric circulation are doomed to irrelevance.

    • Robert,
      “Discussions not rooted in the broader context of large changes in toa radiant flux as a result of changes in ocean and atmospheric circulation are doomed to irrelevance.” Agreed. These changes quite fundamentally control net SW via cloud modulation.

      However, you also wrote above:-

      “There is no forcing as such but a Lorenzian trigger in a spatio/temporal chaotic system.” There is strong direct evidence that at least the quasi-60-year cycles which appear in many climate indices are FORCED variations. More specifically, they cannot be explained as a redistribution of heat with no net energy addition to or loss from the climate system.

      The evidence for this is clear in the relative phasing of net flux and temperature peaks and troughs. With a forced response to an oscillatory input forcing, we expect net flux peaks to lead temperature peaks. The theoretical separation between net flux and temperature is exactly pi/2 radians for single-body heating, or at most pi/2 radians for more complex (connected multiple-body) heating models. Conversely, for unforced natural variation, we would expect downward net flux to reach a maximum (minimum) when surface temperature reaches a minimum (maximum). Net downward flux and temperature should be exactly pi radians out of phase under this model of unforced natural variation.
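
      A minimal numerical sketch of this phasing argument (my own illustration, with assumed heat-capacity and feedback values, not taken from any of the papers discussed): for a single heat reservoir, the forced-case net downward flux is the stored heat N = C dT/dt, which leads T by pi/2, while the unforced-case net flux is just the radiative response N = -lambda T, pi out of phase with T.

```python
import numpy as np

# Assumed illustrative parameters (not from any cited paper):
omega = 2 * np.pi / 60.0   # quasi-60-year cycle, rad/yr
C = 13.0                   # mixed-layer heat capacity, W yr m^-2 K^-1
lam = 1.3                  # feedback parameter, W m^-2 K^-1

years = np.linspace(0.0, 60.0, 6000, endpoint=False)  # one full cycle
T = np.sin(omega * years)                             # temperature anomaly, K

N_forced = C * omega * np.cos(omega * years)  # forced case: N = C dT/dt
N_unforced = -lam * T                         # unforced case: N = -lambda T

# Phase separation between the flux peak and the temperature peak (radians)
lag_forced = omega * abs(years[np.argmax(T)] - years[np.argmax(N_forced)])
lag_unforced = omega * abs(years[np.argmax(T)] - years[np.argmax(N_unforced)])

print(f"forced:   flux leads T by {lag_forced:.2f} rad (pi/2 = {np.pi/2:.2f})")
print(f"unforced: separation is {lag_unforced:.2f} rad (pi = {np.pi:.2f})")
```

      The forced case gives a pi/2 lead of flux over temperature and the unforced case a pi separation, which is the diagnostic being applied to the observed peaks and troughs below.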

      The quasi 60 year oscillations in surface temperature show peaks around 1880’s, 1940’s, 2000’s and corresponding troughs around 1910 and in the late 1970’s. With unforced variation, we would then expect to see peaks in net (downward flux) in 1910 and the 1970’s and troughs in the 1880’s, 1940’s and 2000’s.

      The satellite data, if anything, show a peak in net flux occurring in the 2000’s – the exact opposite of what we would expect under an unforced natural variation model. Equally importantly, they suggest that the peak is dominated by SW change, not LW. This also tells us that the SW variation cannot be ascribed to temperature-dependent feedback – since an SW feedback sufficient to overwhelm the Planck response would imply a net positive feedback, which can be unequivocally ruled out on basic physics grounds.

      The best long-term proxy we have for net flux is probably the derivative of MSL from long-term tide-gauge data. (We know that the peaks and troughs in MSL are dictated largely by thermosteric effects rather than mass addition or isostatic variation because of detailed analysis by Neven using modern data. He also found a (to him puzzling) peak in the derivative of MSL which occurs around 2003, coincident with the net flux peak seen in satellite TOA measurements.) From Figure 3 in Jevrejeva 2008 (ftp://soest.hawaii.edu/coastal/Climate Articles/Jevrejeva_2008 Sea level acceleration 200yrs ago.pdf ) we can see that the peaks and troughs in the rate of change of MSL historically are more or less in phase with the peaks and troughs in temperature. They are almost pi radians (30 years or so) out of phase with what we would expect under an unforced natural variation model.

      The phasing however IS reasonably compatible with a model of forced variation from an unknown driver. That driver appears to change atmospheric circulation and ocean currents which then modulates cloud cover which causes SW heating and cooling. My favourite suspect, given its long-term recurrence, is externally forced change of the angular momentum of the “solid” Earth. But whether this is correct or not, I believe that it is incorrect to say “There is no forcing as such…”

        • The proximate cause of upwelling in the eastern Pacific – and thus multidecadal patterns of surface warming and cooling – is the state of the north and south Pacific gyres and winds. These are spun up in negative phases of the polar annular modes, and that results in more upwelling. An internal planetary response with multiple feedbacks in a resonant system.


        https://earthobservatory.nasa.gov/IOTD/view.php?id=8703

        The suggestion is that zonal or meridional patterns of the polar modes are influenced by solar uv/ozone changing surface pressure at the poles via atmospheric pathways. Canonically, small changes that result in large variation in a complex and dynamic system. Thus a Lorenzian trigger that biases the system to more or less upwelling over decades to millennia. The further suggestion is that the quasi-22-year periodicity derives from the Hale solar cycle of magnetic reversals. Cloud albedo changes with sea surface temperature. With marine stratocumulus cloud, SW dominates the energy dynamic.

  46. Robert I Ellison: Discussions not rooted in the broader context of large changes in toa radiant flux as a result of changes in ocean and atmospheric circulation are doomed to irrelevance.

    Thank you for the links in that post.

  47. Geoff Sherrington

    People who use the historic land records of temperature, with a century or more based almost entirely on Tmax and Tmin measured by LIG thermometers in shelters, seem not to appreciate that they are not presented with a temperature that reflects the thermodynamic state of a weather site, but with a special temperature – like the daily maximum – that is set by a combination of competing factors.
    Not all of these factors are climate related. Few of them can ever be reconstructed.
    So it has to be said that the historic Tmax and Tmin, the backbones of land reconstructions, suffer from large and unrecoverable errors that will often make them unfit for purpose when purpose means reconstructing past temperatures for inputs into models of climate.
    Tmax, for example, arises when the temperature adjacent to the thermometer switches from increasing to decreasing. The increasing component involves at least some of these:- incoming insolation as modified by the screen around the thermometer; convection of air outside and inside the screen allowing exposure to hot parcels; such convection as modified from time to time by acts like asphalt paving and grass cutting, changing the effective thermometer height above ground; radiation from the surroundings that penetrates necessary slots in the screen housing; radiation from new buildings if they are built.
    On the other side of the ledger, the Tmax is set when the above factors and probably more are overcome by:- reduced insolation as the sun angle lowers; reduced insolation from clouds; reduction of radiation by shade from vegetation, if present; reduction of convective load by rainfall, if it happens; evaporative cooling of shelter, if it is rained on at critical times.
    It does not seem possible to model the direction and magnitude of this variety of effects, some of which need metadata that were never captured and cannot now be replicated. Some of these effects are biased to one side; others have some possibility of positives cancelling against negatives, but not to any great degree. The factors quoted here are in general not amenable to treatment by the homogenization methods currently popular. Homogenization applies more to other problems, such as rounding errors from F to C, thermometer calibration and reading errors, site shifts with measured overlap effects, deterioration of shelter paintwork, etc.
    The central point is that Tmax is not representative of the site temperature, as would be more the case if a synthetic black body radiator were custom designed to record temperatures at very fast intervals, to integrate heat flow over a day for a daily record with a maximum. Tmax is a special reading with its own information content; and that content can be affected by acts like a flock of birds passing overhead. The Tmax that we have might not even reflect some or all of the UHI effect, because UHI will generally happen at times of day that are not at Tmax time. And, given that the timing of Tmax can be set more by incidental than fundamental mechanisms, like the timing of cloud cover, corrections like TOB for time of observation have no great meaning.
    It seems that it is now traditional science, perceived wisdom, to ignore effects like these and to press on with the excuse that it is imperfect but it is all that we have.
    The more serious point is that Tmax and Tmin are unfit for purpose and should not be used.

  48. The pause fooled a lot of really smart people. Still is. It’s funny.

  49. Nic, this may be a bit simplistic on my part, but I have put the arguments about the differences in ECS and TCR estimates between the CMIP5 climate model data and the observed record into two parts. Minimizing the differences would have to come from two sources, in my view: the first being that the estimates from the observed data are biased low due to uncertainty in the various observed forcing variables, and the second being that model data can be manipulated to show that the radiative feedback parameter for (most) models changes over time such that the model data provide a better match to the observed record over the instrumental period.

    Your current work and analyses appear to be concentrated on updating the observed temperature and forcing data with the latest values and applying them to initial and final time periods that are most free from the effects of natural variations. While your determinations are at the low end of the published results for ECS and TCR estimates from observed data, the published observational results taken together show those estimates to be, on average, lower than those from the CMIP5 models, where the radiative feedback is assumed to be constant. You have put your analyses and critiques of the published papers showing a non-constant feedback on various blogs (without necessarily implying that the feedback parameter is constant).

    I am more familiar with the publications that you have referenced in the thread here about the constancy of the feedback parameter, and thanks to your help I see several weaknesses in the conclusions that the authors have drawn from the data that were used. I am thinking that it might be quite helpful for those of us with an interest in these estimates for you to briefly summarize in one space (a) the critiques of the approaches for estimating ECS and TCR from observed data and how you defend your use of the data and your methods, and (b) your critiques of the papers showing a non-constant feedback parameter.

  50. Ken, I plan to do what you suggest, but it may be difficult to summarize briefly. The arguments and evidence are set out in some detail in section 7 of the paper, and related parts of the SI. Both are available on my (new) website – see note [iv]

  51. Thanks, Nic, for the references.

  52. Kenfritsch | April 29, 2018 at 11:25 am | Reply

    “the estimates from the observed are biased low due to uncertainty in the various observed forcing variables”

    The charitable view, Ken, might be that the observations are correct; uncertainties cut both ways, remember. The only reason we have an issue is that a set of models refuses to follow the observations.

    “model data can be manipulated to show that the radiative feedback parameter for (most) models changes over time such that the model data provides a better match to the observed over the instrumental period.”

    meaning that since the models are so obviously wrong, yet the physics is known, there are some very biased [shall we dare say wrong] assumptions in the models. Mainly related to feedback amplification, I guess.

  53. How Cooling Laptops Led to Constructal Theory

    Does the Earth know how not to die? It does.

  54. I’ve been doing a little reading on Nic’s web site and it is a treasure trove of analysis and data. It also contains careful analysis of recent negative results on GCM’s.

    To me this discussion here, and especially the one at ATTP, seems superficial in that the negative results about other lines of evidence on ECS are ignored. It’s particularly odd that people who should know these results, such as Andrew Dessler, simply don’t respond or mention them.

    It also strikes me that many of the lines of evidence used against energy balance methods, such as the pattern-of-warming argument, are based on GCMs. All of these arguments, it seems to me, are strained interpretations. They all have a much simpler interpretation, namely, that GCMs do a poor job of replicating the historical climate record. This is of course not news, but that mainstream climate scientists don’t consider it is, to me, a sign of bias.

    It is also interesting to me that having insisted for 30 years that internal variability cannot have caused any of the recent warming, many climate scientists are now claiming that internal variability must explain the differences between GCMs and the climate record. I see no reason to credit this strained interpretation. A far simpler explanation supported by simple analysis of CFD and sub grid models is that the GCM’s are simply not credible and this disagreement is simply clear empirical evidence of their lack of credibility.

    • What Lewis is also telling you is that with a suitable choice of endpoints you can cancel all natural variations and the rest is anthropogenic. That is 0.88 C in the case of 1869-1882 and 2007-2016. I thought this would be more controversial among the skeptics here, but it appears no one is arguing. This either looks like some major progress, or he snuck it by you all.

      • Well, Yes that is their assumption. If it’s not all anthropogenic, the ECS will be less than that calculated by their method.

        But it seems to me that Dessler and the team have a lot more explaining to do here concerning internal variability. It seems to be a witch that can be rounded up and burned when cultural prejudices are not borne out. Dessler even said that there are no good methods to disentangle forced response from internal variability. That kind of throws the classical attribution argument under the bus.

      • This whole method falls apart if you can’t either disentangle natural variations or assume that they are small compared to the anthropogenic effect over the chosen periods. This is an unstated assumption of the method Lewis uses, but the skeptics aren’t skeptical. If I was a true skeptic on attribution, I would be angered by such a basic thing being taken as true, but we don’t see that criticism directed at him. I guess he is given a pass.

      • Not sure how skeptical it is – but there clearly seem to be factors beyond this analysis, grounded as it is in IPCC forcing estimates.

        “….not to say that global average feedback isn’t variable temporally, but there is no observational evidence of that because it would require accurate observational diagnosis of feedback, which is not possible without accurate knowledge of all (anthropogenic and natural) radiative forcings and surface temperature variations, at a minimum.” Roy Spencer

        There is a broader if more difficult context.

        https://judithcurry.com/2018/04/24/impact-of-recent-forcing-and-ocean-heat-uptake-data-on-estimates-of-climate-sensitivity/#comment-871310

        And the fact remains that any intrinsic 20th century warming – in a 1000 year peak in EL Niño frequency and intensity – suggests a more muted AGW response. As well as the potential for cooling coming off the peak.

        I say El Niño – but it must be understood in terms of constructal theory. After all – panta rhei – everything flows.

      • Jim D: Not everything you don’t understand (see your “genius” calculations…) must be wrong and fall apart. You can’t have read the issue under discussion here, because the content would have educated you and prevented you from writing this nonsense.

      • Geoff Sherrington

        Jim D,

        This sceptic, me, is being critical, see my posts above questioning the wisdom of using historic temperature data that I claim to be unfit for purpose. This criticism applies to both observational and modelled sensitivity studies. It is hard for me to take my thinking much further than ultra basic, because over the last decade or more I have sought 2 fundamentals, without success. The first is a single, quantitative, reviewed, accepted paper that shows that in the climate system, as opposed to the lab, CO2 has an effect on climate, especially temperature. The second is a single accepted reviewed data set of historic temperatures that answers the various criticisms over the years instead of duck shoving them by appeals to authority, cherry picking, etc.
        There is a third, which I hold back until the two are resolved. It is whether anyone has yet shown that natural climate variability has been successfully separated from anthropogenic influences, if any.
        Why should a scientist get excited about a new field of research (as climate with catastrophic anthropogenic GHG was back then) when two – or three – of its most fundamental postulates have not even passed the first base?
        It’s back to Sunday school, singing “build on the rocks and not upon the sands”. Geoff.

      • Well JimD, it is true that the energy balance method will get better as the periods of time considered get longer. But internal variability vs. forced response is a serious problem for any attempt to estimate ECS. It’s worse for GCMs, where the so-called “internal variability” is confounded by numerical noise.

      • At least for GCMs you know what the external forcing and ocean heat content are, while in observations it is not possible to separate these from internal variations, leaving external forcings like solar and volcanic as guesswork. The GCMs show you the scale of internal variability while controlling for these other variables.

      • Jim D,
        It seems that Lewis and Curry, since their first paper, published as LC15, have set out to reconcile estimates of climate sensitivity from the AOGCMs with empirical estimates, using the same data and the same fundamental assumptions.
        “The key issue faced in the AR5 assessment was interpreting the discrepancy between climate sensitivity estimates based on climate models (higher values) versus recent empirically-derived sensitivity analyses (lower values).”
        It may seem to some that it was a very intelligent thing to retain the same data and assumptions in order to highlight unequivocally the oversensitivity in GCM projections; in fact those same people might actually admire Lewis and Curry for avoiding an epistemic swamp which would have destroyed the impact and importance of the study.
        On the other hand, maybe you are right. They just fooled us all.

      • I would dispute that CO2 is such a low fraction (~60%) of the net anthropogenic effect, but that is the means by which they can get such a large warming with such a low sensitivity to CO2. If CO2 is at least 80% of the net forcing, the sensitivity returns to the kind of values suggested in the mainstream theory and GCMs. That is the crux of the difference. Not so much how much net anthropogenic effect, but how it is distributed among CO2, and other GHGs and aerosols. The net warming is also affected by the choice of endpoints with their method.

      • JimD, What you say is true about the forcings, but they use the latest IPCC estimates. It’s hard to see how they could credibly do any better. They also look at a lot of time periods and all lie pretty close to the linear trend line. If internal variability played a big role, that would not be the case.

        On the end point periods issue: Nic addressed it in the paper. They choose the periods with the largest delta forcing that are similar in volcanic activity and I think AMO index.

      • As I showed, you can take the 1950s and the current decade as endpoints and get 0.9 C warming with 90 ppm CO2, an effective transient rate of 2.4 C per doubling that fits the acceleration in the whole intermediate period well. So when they have only half this warming for CO2 over a similar period, they have clearly done something that would not be a good predictor of the warming we have had without other factors added to the CO2, a point they don’t address. Their choice of endpoints and a questionably low aerosol effect lead to their differences from AR5. If the net forcing since preindustrial is less than 2.5 W/m2, as in the central IPCC estimates, CO2 accounts for at least 80% of it. Lewis is choosing values nearer the upper end of their 5-95% range.

      • Jim D: “As I showed, you can take the 1950’s and current decades as endpoints and get 0.9 C warming with 90 ppm CO2, an effective transient rate of 2.4 C per doubling…”
        You did not show this! Alberto has nailed it here: https://judithcurry.com/2018/04/24/impact-of-recent-forcing-and-ocean-heat-uptake-data-on-estimates-of-climate-sensitivity/#comment-871296

        Your calculation is unsound; I can’t find another word. Repeating nonsense does not make sense of it.

      • Calculate it for yourself using 0.9 C and 400-310 ppm. 2.4 C per doubling. How else can I explain it?
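
        Spelled out as a short Python sketch (assuming the standard simplified CO2 forcing expression F = 5.35 ln(C/C0); that this is the expression behind the figure is my assumption, so treat it as illustrative):

```python
import math

# Illustrative sketch: effective warming per CO2 doubling implied by
# 0.9 C of warming over a 310 -> 400 ppm concentration rise, using the
# simplified forcing expression F = 5.35 * ln(C/C0) (an assumption here).

dT = 0.9                           # observed warming, K
F = 5.35 * math.log(400 / 310)     # CO2 forcing change, W/m2 (~1.36)
F_2x = 5.35 * math.log(2)          # forcing per doubling, W/m2 (~3.71)

tcr_eff = dT * F_2x / F            # scale the warming to a full doubling
print(f"effective rate: {tcr_eff:.1f} K per doubling")  # ~2.4
```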

      • Okay, I’ll try it one last time: the observed warming comes from all the forcings (the total forcing). This includes CO2, other GHGs, aerosols, ozone, black carbon, solar, volcanoes, and so on. The numerical relation is shown in this figure:

        In the cited article I could eliminate the volcanic and solar forcing because it was also eliminated by “tamino” in his adjusted records. Take the published data for the forcings and try it for yourself. You can’t attribute all the observed warming to CO2; you must include all the forcings at work. If you don’t do it correctly, you get too much T-response from too little CO2 forcing. But I think you know this, and you are trying to confuse the issue here.

      • I define an effective TCR using the CO2 part of the forcing which is much more certain, along with the warming itself. This definition is also more useful and direct for converting emissions to warming because it takes into account any proportionalities using the observed record. While the net forcing appears not to be known to within about 0.5 W/m2, largely due to aerosols, the CO2 part is known to within about 0.1 W/m2 at any given time and also therefore gives a more clearly defined effective TCR and prediction.

      • Even with the more recent ENSO spike. Intrinsic warming had a local peak around 1944.

      • Still fitting a line to a curve there.

      • Why don’t you complain to Kevin Cowtan? He might care about your silly little games. It is just a least squares trend over a period with immense intrinsic variability. Beats starting deep in a cool phase and ending with an ENSO-plus spike.

      • Are you blaming your linear fit on Cowtan?

      • It does come from his trend calculator page. Although I wouldn’t generally include the trend from 1998 until the fat lady stops singing. Regimes are a bitch when it comes to simple minded assumptions.

      • He probably has a disclaimer for improper use.

      • It is just a trend from the height of the early century warm regime. Starting from a La Nina, finishing with an El Nino and calling the difference AGW is the height of scientific absurdity Jimmy.

      • Which is why I didn’t do that. I use decadal averages and trends over longer periods to rule out things like that.

      • You start in 1950. Regimes are a bitch.

      • Lewis chose 1940 and you didn’t complain about that. The 1950’s have very little anomaly in Lovejoy’s plot, being close to the long-term rising trend.

      • yes there is absolutely no difference between 1940 and 1950 and there is no mid century cooling – I always have to check my sanity with jimmy around

      • 1950 was after a warm period and ahead of a cool period as you always like to show.

      • This is a smoothed graph from Swanson et al – a 21 year running mean, as it says. We have a residual warming rate of 0.1 K/decade after ‘removing’ decadal variability – with absolutely no reason for this to accelerate from an emissions perspective. Now all there is to deal with is centennial to millennial variability.

        You cannot just grab a graph that I have previously discussed to claim something it was not meant for. The other graphic, showing synchronicity and coupling as well as the surface record, is a better bet for a more honest broker.

      • I think you can debate the merits of Tsonis and Swanson against Swanson ad nauseam, so I put it out there for you to see which one you choose, or whether your head explodes. You’re not particularly interested in Lewis using 1940 as a starting point, however, so there’s that if you want to say something else.

      • You do realize that it is almost the same people – with the addition of George Sugihara on one – just a different topic?

        https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2008GL037022

        http://www.pnas.org/content/106/38/16120

        George Sugihara is pretty impressive in his own right.

        https://www.quantamagazine.org/chaos-theory-in-ecology-predicts-future-populations-20151013

      • Do you prefer this to Lewis and his very simple constant-lambda model? Why?

      • Robert I Ellison

Thank you for the link re: George Sugihara and “empirical dynamic modeling”, a comfortable read from which I learned a lot and which gives me pause about restricting funding for climate science. The limits of Takens’ theorem come from the awareness that nature is chaotic and there may not be stationarity; i.e., things may not stay the same over time, with which I agree, since cell adaptation and expression change almost on a minute-by-minute basis. The other limitation is the need for “big data”, and in climate science there is such a limit. Hence, empirical dynamic modeling is still limited to short-term predictions.

The issue of funding, currently under scrutiny because of the “clique” of modelers who have dominated the climate science scene for three decades or more, deserves attention: funds should be reallocated to generate “big data”, which means more sensors on more vehicles in more places. The maxim: doing the same thing over and over again and expecting a different outcome is…insanity. It is time to put all the modelers into one room with one computer and a limited budget. It is time to propose new hypotheses. It is time to collect lots more data. It is time to get back to science and leave the chaos in Washington.
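For what it’s worth, the delay-embedding step at the core of Sugihara’s empirical dynamic modeling is easy to sketch. The series, embedding dimension, and lag below are purely illustrative, not from any published EDM analysis:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct a dim-dimensional state space from a scalar series x,
    using time-delay coordinates (x[t], x[t+tau], ..., x[t+(dim-1)*tau]),
    as licensed by Takens' theorem for stationary deterministic systems."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Toy series: a noisy oscillation standing in for an observed climate index.
t = np.linspace(0.0, 20.0 * np.pi, 2000)
x = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

states = delay_embed(x, dim=3, tau=25)  # each row is one reconstructed state
```

Nearest-neighbor prediction on the reconstructed states is then what gives EDM its short-term forecasts; the stationarity and data-volume caveats above are exactly the limits on that step.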

    • Here’s an example of what I mean from Dessler at ATTP:

      Note to Mosher: no ensemble study can tell you where the historical climate trajectory would fall within the ensemble. that’s really an ill-posed question. rather, the ensemble tests tell us that the methodology produces imprecise answers.

      other papers (e.g., Marvel et al., Zhou et al.) show using different methods that the existing surface pattern is causing energy balance methods to yield too low of an ECS.

      combine all of these results and you arrive at a reasonably robust conclusion that L&C’s ECS estimate (and others derived the same way) is biased low.

Except that the surface pattern is prima facie evidence that the GCMs used in all these studies lack skill. But I forgot, they agree with “simple theory”, which is unquantifiable. Am I missing something here?

    • the CO2 part is known to within about 0.1 W/m2 at any given time

      The CO2 part is 0.0 plus or minus some unknown, unmeasurable, amount.

  55. Pingback: Weekly Climate and Energy News Roundup #313 | Watts Up With That?

  56. Pingback: Weekly Climate and Energy News Roundup #313 |

  57. I oppose using climate sensitivity for anything. It is too easy to manipulate parameters such that the global temperature can be anything you wish.

I never know what sensitivity means in practical terms. In models it may literally be almost anything at all – with the right, non-unique choice of initial conditions.

      “Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic. The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.” http://rsta.royalsocietypublishing.org/content/369/1956/4751

      Seemingly random and nonlinear transitions can be found in climate series everywhere – from modern instrumental records to Quaternary spanning proxies. They were first identified in a more than millennia long record of Nile River heights in the 1950’s. It was later recognized that these are dynamical regimes.

https://watertechbyrie.files.wordpress.com/2018/05/vriability.jpg

      “Our climate system is not just a static thermodynamic system, it is a fluid dynamical system, and the effects of dynamics (especially on a rotating planet) can often be counterintuitive.” http://www.pnas.org/content/110/14/5281

Regimes sum not to zero but to extreme variability in cloud, ice and dust over millennia. Superimposed on the intrinsic dynamic regime behavior of climate in the 20th century are thermodynamic and perhaps dynamical changes resulting from anthropogenic greenhouse gas changes in the atmosphere. Disentangling that is not a problem that has been solved.
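The Lorenz point quoted above is straightforward to demonstrate numerically. This is a minimal sketch using the standard Lorenz-63 parameters; the perturbation size and integration length are chosen only for illustration:

```python
import numpy as np

def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Classic Lorenz (1963) equations
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, dt=0.002, steps=15000):
    # Fourth-order Runge-Kutta, 30 model time units in total
    for _ in range(steps):
        k1 = lorenz_rhs(state)
        k2 = lorenz_rhs(state + 0.5 * dt * k1)
        k3 = lorenz_rhs(state + 0.5 * dt * k2)
        k4 = lorenz_rhs(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

a = integrate(np.array([1.0, 1.0, 1.0]))
b = integrate(np.array([1.0, 1.0, 1.0 + 1e-9]))  # perturb by one part in a billion
separation = np.linalg.norm(a - b)  # grows to the size of the attractor itself
```

Both runs stay on the bounded attractor, yet the one-part-in-a-billion difference in initial conditions amplifies until the two trajectories are completely decorrelated, which is why forecasts beyond the predictability horizon have to be treated probabilistically.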

    • The Roman and Medieval warm periods did happen without manmade CO2 emissions. In practical terms, CO2 sensitivity means modern scam. The people promoting this alarmist stuff, the alarmists and the lukewarmers, promote warming caused by manmade CO2 and they do not have data that proves they are right. Their analysis would not support the same warming that happened in past cycles. Their analysis would only work with hockey stick temperature history.

  58. I am curious about this statement from the paper: “Throughout 140-year simulations in which CO2 forcing is increased smoothly at 1% per annum (1pctCO2), the responses of almost all CMIP5 AOGCMs can be accurately emulated by convolving the rate of increase in forcing with the step response in their simulations in which CO2 concentration is abruptly quadrupled”

    My linear systems theory is very rusty, but modulo a constant, isn’t this what we would expect from a linear, time-invariant system? If so, that suggests we may be able to dramatically simplify the forcing response of GCMs.

    • Yes. As regards many variables, AOGCMs behave very like linear, time-invariant systems. This fact is used to emulate various aspects of their response to forcing using much simpler models.
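A minimal numerical sketch of that convolution emulation follows. The two-timescale step response below is illustrative only, not fitted to any CMIP5 model:

```python
import numpy as np

# Hypothetical step response to a sustained unit (1 W/m^2) forcing, in K:
# a fast (~4 yr) mixed-layer mode plus a slow (~250 yr) deep-ocean mode.
def step_response(t, a_fast=0.5, tau_fast=4.0, a_slow=0.35, tau_slow=250.0):
    return (a_fast * (1.0 - np.exp(-t / tau_fast))
            + a_slow * (1.0 - np.exp(-t / tau_slow)))

years = np.arange(141)   # 140-year 1pctCO2-style run
dF = 3.7 / 70.0          # W/m^2 per year: forcing rises ~linearly, reaching
                         # the 2xCO2 value (~3.7 W/m^2) at year 70

# Emulated warming: discrete convolution of the (constant) rate of
# forcing increase with the step response.
T = np.array([dF * step_response(t - np.arange(t)).sum() for t in years])

tcr_analogue = T[70]     # warming at the time of CO2 doubling
```

With these illustrative numbers the equilibrium response is (0.5 + 0.35) × 3.7 ≈ 3.1 K per doubling, while the year-70 warming (the TCR analogue) comes out well below that, showing how the slow mode keeps TCR beneath ECS in an LTI emulation.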

      • As regards many variables, AOGCMs behave very like linear, time-invariant systems. This fact is used to emulate various aspects of their response to forcing using much simpler models.

If it’s not matching the actual nonlinear response, it’s wrong. Though that might explain why GCMs continue to come up with far too large a CS value.

  59. Tucker Carlson just threw softballs to Judith tonight on Fox News. Generic stuff, fairly content free, nothing new from her and very short.

  60. As far as cable news goes, Carlson wins his time slot.

61. Pingback: The latest research findings: carbon dioxide’s effect on the climate is greatly exaggerated - Stockholmsinitiativet - Klimatupplysningen

  62. Pingback: Explainer: How scientists estimate ‘climate sensitivity’ | Climate Change

  63. Pingback: Explainer: How scientists estimate ‘climate sensitivity’ – Alternative Energy Omniverse