Updated climate sensitivity estimates

by Nic Lewis

An update to the calculations in Lewis and Curry (2014).

Lewis and Curry: The implications for climate sensitivity of AR5 forcing and heat uptake estimates (2015; online 2014)[1] (LC14) made careful use of estimates of effective radiative forcings and planetary heat uptake from the recently published IPCC 5th assessment Working Group 1 report (AR5) to derive estimates for equilibrium/effective climate sensitivity (ECS) and transient climate response (TCR). A simple but robust global energy budget approach was used, with thorough treatment of uncertainties.

TCR and ECS were estimated from the ratio of the change ΔT in global mean surface temperature (GMST) between a base and a final period, to the corresponding change ΔF in radiative forcing respectively before and after deducting the change ΔQ in planetary heat uptake rate. These ratios were then scaled by the forcing from a doubling of CO2 concentration, F2xCO2, taking the AR5 value of 3.71 Wm−2, to give the TCR and ECS estimates. The resulting best (median) estimates for ECS and TCR were 1.64°C and 1.33°C respectively.
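
In symbols, restating the method just described (with F2xCO2 denoting the forcing from a doubling of CO2):

$$\mathrm{TCR} = F_{2\times\mathrm{CO_2}}\,\frac{\Delta T}{\Delta F},\qquad \mathrm{ECS} = F_{2\times\mathrm{CO_2}}\,\frac{\Delta T}{\Delta F - \Delta Q}$$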

The forcing and heat uptake estimates given in AR5 ended in 2011; the main final period of 1995–2011 was selected as the longest stretch at the end of the record throughout which volcanic forcing was low. Since strong warming has occurred over the four years since then, and ocean heat uptake has increased, I thought it worth updating the LC14 estimates using data ending in 2015, with the main final period becoming 1995–2015. I have also extended the previous 1987–2011 long final period at both ends, to 1980–2015. The base periods used are unchanged.

Updating global temperature and changes ΔT therein

LC14 used HadCRUT4v2. For the updated estimates, the latest version, HadCRUT4v4, is used for calculation of all changes in GMST and of uncertainty therein. Doing so increases the TCR and ECS estimates based on the original 1995-2011 final period by 1%.

Updating forcings and changes ΔF therein

All forcings used in LC14 were sourced from AR5 Table AII.1.2. These estimates are in principle of effective radiative forcing (ERF), but in practice are mainly based on estimates of stratospherically-adjusted radiative forcing (RF). In most cases, AR5 assessed there to be insufficient evidence for ERF differing from RF and took them to be the same. Only for anthropogenic aerosol forcing and the very small contrails/contrail-induced cirrus forcing does AR5 estimate ERF to differ from RF.[2]

Values for each individual forcing given in AR5 Table AII.1.2 were updated from 2011 to each of the years 2012 to 2015 using observational data where available and otherwise whatever method was thought most appropriate. Details are given in Appendix A. The estimated change in total forcing between 2011 and 2015 is 0.28 Wm−2, of which 0.15 Wm−2 is due to increases in well-mixed (long-lived) greenhouse gases (GHG) and 0.09 Wm−2 due to rising solar forcing, both of which are based on observational data. Changes in other forcings accounted for the remaining 0.04 Wm−2 of the increase.

Updating heat content and changes ΔQ in heat uptake

The planetary heat uptake rate, Q, is dominated by changes in ocean heat content (OHC). The Domingues 0-700 m layer OHC dataset[3] used in AR5 ends in 2011. Moreover, it is necessary to switch from the 3 and 5 year averages used in AR5 for the 0-700 m and 700-2000 m ocean layers to annual means in order to be able to update beyond 2013. The NOAA/Levitus OHC dataset,[4] used in AR5 for the 700-2000 m layer, is employed (taking changes in annual rather than pentadal mean global values for the final two years) to provide changes over the full 0-2000 m layer from 2011 to each of the subsequent four years, obviating the need for a separate 0-700 m dataset. The minor deep (>2000 m) ocean, land, ice and atmosphere heat content amounts are updated from 2011 based on their trends over 2000–11, save for the atmosphere. For the atmosphere, updating is based on the change in GMST since 2011 and the regression slope of heat content on GMST over 2000–11. Details of the method used to estimate the planetary heat uptake rate, and the uncertainty in the estimate, are given in Appendix B.
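
As an aside, converting an annual heat content trend into a global heat uptake rate in Wm−2 is a one-line calculation. The sketch below is my own illustration, not code from the paper; the 10 ZJ yr−1 input is a hypothetical round number, chosen only because it corresponds roughly to the uptake rates discussed below.

```python
SECONDS_PER_YEAR = 3.156e7     # ~365.25 days
EARTH_SURFACE_AREA = 5.10e14   # m^2, total surface area of the Earth

def heat_uptake_wm2(ohc_trend_zj_per_year: float) -> float:
    """Convert a heat content trend (ZJ per year) to a flux in W m^-2
    averaged over the Earth's entire surface."""
    joules_per_year = ohc_trend_zj_per_year * 1e21
    return joules_per_year / (SECONDS_PER_YEAR * EARTH_SURFACE_AREA)

print(round(heat_uptake_wm2(10.0), 2))  # ~0.62 W m^-2
```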

The annual mean planetary heat uptake rate over 1995-2015 estimated in this way is 0.63 Wm−2, rather higher than the 0.51 Wm−2 over 1995-2011 used in LC14. If instead the Ishii 0-700 m dataset,[5] which has been updated to 2015, were used over the whole period, with the NOAA/Levitus dataset continuing to be used just for the 700-2000 m layer after 2011, the linear trend planetary heat uptake estimate would be slightly lower, resulting in the main ECS estimate being 0.05°C lower. The heat uptake and ECS estimates would also be marginally lower if the NOAA/Levitus dataset were used for the full 0-2000 m ocean layer over the whole period rather than just post-2011.

Results

Table 1 gives the ECS and TCR estimates for the main base period – final period combination and using the more recent 1930-1950 base period. Both base periods give a good match to the relevant revised final period(s) as regards mean volcanic forcing and the state of the Atlantic Multidecadal Oscillation (AMO). The 1850-1900 base period, which was also used in LC14 to match the longer 1987-2011 final period, does not match a 1987-2015 final period so well as regards these factors; a 1980-2015 final period is used instead, as it matches 1850-1900 better.[6] The original LC14 estimates for the corresponding periods ending in 2011 are shown in the second section of Table 1, for comparison.

The ECS estimates using the 1930-50 base period are particularly uncertain: anthropogenic influence may have resulted in significantly different OHC changes during this period than were incorporated in the model-based estimate that had to be used. I have dropped the long 1971-2011 final period as it was less well matched with its base period, arguably biasing sensitivity estimation down; the new 1980-2015 period is almost as long.

The third section of Table 1 shows what the results would be were aerosol forcing estimates in line with the analysis in Stevens (2015) used in place of those given in AR5.[7]

The final section of Table 1 shows comparative estimates from the widely-cited Otto et al. (2013) paper,[8] and from the 2014 Lewis and Crok report on climate sensitivity.[9]


Table 1: TCR and ECS estimates (medians and 5-95% uncertainty ranges) on various bases. Main results are shown in bold. ΔT, ΔF and ΔQ are changes between base and final period mean values.

The consistency of the TCR estimates across periods is supportive of the accuracy of the energy budget estimation method, although with only four more years' data being added one would not expect a large change in TCR estimates. Whilst four years is far too short a period over which to estimate TCR or (even more so) ECS, it is interesting to note that TCR estimated from ΔT and ΔF from 1859–82 to 2012–15, the added years, is almost the same as for the full 1995–2015 period, at 1.30°C.

Although the AR5 allowance for an upwards trend in >2000 m deep OHC from 1992 on has been retained, a post-AR5 study indicates that, contrary to the studies relied upon in AR5, >2000 m OHC has been reducing rather than increasing since 1992.[10] If the AR5-based allowance for warming in the deep ocean were omitted, the preferred ECS estimate based on 1859-1882 to 1995-2015 changes would reduce from 1.74°C to 1.68°C.

Now that the Argo network has had reasonable coverage down to a depth of ~2000 m for a decade, it is practicable to estimate ECS using Argo-period-only OHC data, although a decade is on the short side for reliable estimation. I have done so using a final period of 2006-2015, substituting the trend in annual mean NOAA/Levitus 0-2000 m OHC data for the mixture of Argo and pre-Argo, pentadal and annual, OHC data that has to be used for the 1995-2015 period. An allowance for OHC increases below 2000 m was again added, giving a planetary heat uptake of 0.78 Wm−2. The resulting TCR estimate is 1.32°C, and the ECS estimate 1.84°C.

However, there are reasons for thinking that the NOAA/Levitus mapping (infilling) method may lead to an overestimation of the increase in OHC as observational coverage increases, which it did over 2006–2015 as the recently-established Argo network became denser. The Lyman and Johnson 0-1800 m OHC Argo dataset,[11] which uses a mapping method that largely avoids such a bias, produces an OHC trend over 2006-2011 that is 0.07 Wm−2 below that per the NOAA/Levitus data, after allowing for heating in the 1800-2000 m layer. Unfortunately, the Lyman and Johnson data ends in 2011, but if the NOAA/Levitus dataset had overestimated the ocean heating rate by 0.07 Wm−2 throughout 2006-2015, the 1.84°C ECS estimate would reduce to 1.76°C. Another estimate of the Argo-only 0-2000 m OHC trend is 0.44 Wm−2 over 2006-2013,[12] which is likewise 0.07 Wm−2 smaller than the NOAA/Levitus estimate for the same period, again suggesting an ECS estimate of 1.76°C.

Appraising claims that energy budget studies underestimate equilibrium climate sensitivity

It has been claimed (apparently based on HadCRUT4v1) that incomplete coverage of high-latitude zones in the HadCRUT4 dataset biases down its estimate of recent rates of increase in GMST.[13] Representation in the Arctic has improved in subsequent versions of HadCRUT4. Even for HadCRUT4v2, used in LC14, the increase in GMST over the period concerned actually exceeds the area-weighted average of increases for ten separate latitude zones, so underweighting of high-latitude zones does not seem to cause a downwards bias. The issue appears to relate more to interpolation over sea ice than to coverage over land and open ocean in high latitudes. The possibility of coverage bias in HadCRUT4 has since been independently examined by ECMWF using their well-regarded ERA interim reanalysis dataset. They found no reduction in that dataset's 1979-2014 trend in 2 m near-surface air temperature when the globally-complete coverage was reduced to match that of HadCRUT4v4.[14] Since the ERA interim reanalysis combines observations from multiple sources and of multiple atmospheric variables, based on a model that is well-proven for weather forecasting, it should in principle provide a more reliable infilling of areas where surface data is very sparse, such as high-latitude zones, than mechanistic methods such as kriging. Moreover, during the early decades of the HadCRUT4 record (which includes the 1859-1882 base period) data was sparse over much of the globe, and infilling may introduce significant errors.

It has also been claimed, based on climate model simulations, that the use of sea surface temperature (SST) as a proxy for near-surface air temperature for ocean areas causes a downward bias.[15] There are some theoretical grounds supporting this argument. However, the two main surface temperature datasets that are based (on a decadal timescale upwards) on air temperature above the ocean as well as land, and which moreover interpolate to obtain complete or near-complete global coverage (NOAAv4.01 and GISTEMP), show an almost identical change in mean GMST to that per HadCRUT4v4 from 1880-1899, the first two decades they cover, to 1995-2015, the final period used for this update of LC14. Whilst some downwards bias in HadCRUT4v4 may nevertheless exist, there are also possible sources of upwards bias, such as urbanisation effects.

It has also been claimed that a downward bias in TCR,[16] and also ECS,[17] estimation from historical data arises from other types of forcing having different effects on GMST than does CO2 (that is, having efficacies differing from one). Spatially inhomogeneous forcing, principally from aerosols, has been raised as a particular concern, with a levelling off of aerosol forcing in recent decades adding to potential underestimation. However, these model-based results are contradicted by other model-based studies that use simulations specifically designed for measuring forcing efficacies. The first and most thorough investigation of forcing efficacies was by Hansen et al.[18] They found almost all types of forcing to have efficacies very close to one provided that they were measured by ERF, with volcanic and ozone forcing having efficacies slightly below one and methane forcing having an efficacy slightly above one. Moreover, they found that the historical mix of forcings over 1880–2000 had an ERF efficacy of almost exactly one. Marvel et al. (2016)[17] also found aerosol ERF to have an efficacy extremely close to one, and although it found ozone RF and ERF to have efficacies well below one, that appears to have been due to an inappropriate climate state being used to measure ozone forcing, resulting in it being substantially overestimated.[19] Other studies also find no evidence for aerosol ERF having a high efficacy.[20] There is, however, evidence that volcanic forcing (which had a zero trend over the 1880-2000 period investigated by Hansen et al.) has a low efficacy, particularly when measured by RF.[21] This finding does not affect LC14 or the update thereof given here, since the base and final periods have been matched as to their levels of volcanic forcing. To summarize, there are no good grounds for believing that the LC14 original or updated results are biased by historical forcings having efficacies different from that of CO2.

Strictly, ECS estimated from transient climate change, as here, represents effective rather than equilibrium climate sensitivity, as the ocean has not reached equilibrium. In many current generation (CMIP5) atmosphere-ocean general circulation models (AOGCMs), equilibrium climate sensitivity exceeds effective sensitivity derived from forcing with a time-profile comparable to that available from the instrumental record. From analysing all CMIP5 models for which I have data, the mean difference in the two ECS measures is somewhat under 10%, using the same standard method of estimating AOGCM equilibrium sensitivity as in IPCC AR5. It is unknown whether or to what extent the two measures differ in the real world. Paleoclimate ECS estimates based on the transition from the last glacial maximum to the Holocene (LGM studies) may cast some light on this, despite considerable uncertainties, although estimates of ECS from more distant paleoclimate periods are difficult to compare directly with ECS in today's climate state.[22] Simple calculations in which GMST at the LGM is divided by the total estimated forcing, both relative to the preindustrial state, have long been used to generate estimates of equilibrium climate sensitivity.[23] Current best estimates of GMST (4.0 ± 0.8°C below preindustrial)[24] and of relevant forcings (6–11 Wm−2 below preindustrial)[25] at the LGM imply an ECS estimate of 1.75°C, essentially identical to the updated instrumental-observation energy budget estimate presented here. That suggests effective and equilibrium climate sensitivities in the real world may be very close to each other, but only about half the average equilibrium sensitivity of CMIP5 models.
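
For concreteness, the simple LGM calculation referred to runs as follows (my arithmetic, taking the midpoint, about 8.5 Wm−2, of the 6–11 Wm−2 forcing range):

$$\mathrm{ECS} \approx F_{2\times\mathrm{CO_2}}\,\frac{\Delta T_{\mathrm{LGM}}}{\Delta F_{\mathrm{LGM}}} = 3.71 \times \frac{4.0}{8.5} \approx 1.75\,^{\circ}\mathrm{C}$$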

Conclusions

Estimation of TCR seems remarkably robust to the analysis period, although the concentration of TCR best estimates in the narrow 1.31–1.36°C band may be fortuitous. However, estimating TCR by regressing GMST on total forcing with years affected by volcanic activity excluded, as in Gregory and Forster (2008),[26] produces reasonably consistent values over different long periods: 1.38°C over the full 1850–2015 HadCRUT4 period; 1.41°C over 1945–2015 (a period with zero trend in volcanic forcing that approximately spans the 65-70 year apparent periodicity of the AMO); and 1.27°C over 1850–1965, a period with a modest trend in volcanic activity (1.32°C when scaling volcanic forcing by an efficacy factor of 0.55, found in LC14 to be appropriate). To the extent that the forcing and GMST estimates used are in error, all these TCR best estimates would of course change. The dominant source of uncertainty is the AR5 aerosol forcing estimate.
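
A minimal sketch of this regression approach (my own illustration of the Gregory and Forster (2008) method as described above, not their code; the series below are placeholders for annual GMST anomalies and total forcing, with volcanic-affected years assumed already excluded):

```python
import numpy as np

F2XCO2 = 3.71  # W m^-2, AR5 forcing from a doubling of CO2

def tcr_from_regression(gmst: np.ndarray, forcing: np.ndarray) -> float:
    """TCR estimated as F_2xCO2 times the slope of GMST regressed on forcing."""
    slope, _intercept = np.polyfit(forcing, gmst, deg=1)
    return F2XCO2 * slope

# Placeholder data: a true slope of 0.37 K per W m^-2 implies TCR ~1.37 C.
rng = np.random.default_rng(0)
forcing = np.linspace(0.0, 2.3, 120)
gmst = 0.37 * forcing + rng.normal(0.0, 0.1, forcing.size)
print(round(tcr_from_regression(gmst, forcing), 2))
```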

ECS estimates are somewhat more sensitive to the period and the particular OHC dataset used, and the lack of OHC measurements prior to the 1950s makes dependence on model-based OHC estimates for the base period difficult to avoid. Based on the latest version of the HadCRUT4 dataset, an early base period and longer than decadal final period, the ECS best estimates fall in the range 1.65–1.75°C depending on the final period and OHC dataset used. For two base period – final period combinations not meeting these criteria, the ECS best estimates are around 1.85°C, but these are likely less reliable. There are some grounds for thinking that the final period OHC increases used may be slightly overestimated, at least in some cases. The uncertainty in ECS estimates remains very large. As for TCR, this is due primarily to uncertainty in the AR5 aerosol forcing estimate.

If the aerosol best estimates and uncertainty range are derived from Stevens (2015) rather than AR5, the best estimates for TCR and ECS reduce by respectively slightly less and slightly more than 10%, but the upper 95% bounds are greatly reduced: from 2.44°C to 1.67°C for TCR and from 4.45°C to 2.38°C for ECS. This highlights the importance of narrowing uncertainty regarding aerosol forcing.

There appears to be little substance in claims that global energy budget studies systematically underestimate TCR and/or ECS to a significant extent, although the possibility of a modest degree of underestimation cannot be ruled out.

Appendix A – derivation of individual forcings for 2012 to 2015

Well-mixed greenhouse gases

Forcing from CO2, CH4 and N2O was calculated for each year from 2011 to 2015 using data for mean atmospheric concentrations[27] and the formulae in AR5 8.SM.3. Forcing from minor GHGs was projected based on the recent growth rates and levels of forcing by CFCs, HCFCs and HFCs shown in AR5 Figure 8.6. I estimate that these imply a growth rate of 0.02 Wm−2 decade−1, in line with the growth rate implied by the difference between the 2005 and 2011 Total halogens forcing values in AR5 Table 8.2. The annual change from 2011 in the total GHG forcing thus calculated was then added to the AR5 GHG forcing estimate of 2.83 Wm−2 (which uses marginally different concentrations of CO2, CH4 and N2O). This results in the 2015 GHG forcing value being 2.98 Wm−2, of which CO2 accounts for 1.94 Wm−2.
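
For the CO2 component, the expression in AR5 8.SM.3 is the familiar logarithmic formula of Myhre et al. (1998). A minimal sketch (my own illustration, not the author's code; the 399.4 ppm figure is an approximate 2015 global mean concentration, used here only for illustration):

```python
import math

def co2_forcing(conc_ppm: float, base_ppm: float = 278.0) -> float:
    """CO2 radiative forcing (W m^-2) relative to the preindustrial baseline,
    per the simplified expression used in AR5 8.SM.3 (Myhre et al. 1998)."""
    return 5.35 * math.log(conc_ppm / base_ppm)

print(round(co2_forcing(2 * 278.0), 2))  # doubling gives F_2xCO2 ~= 3.71 W m^-2
print(round(co2_forcing(399.4), 2))      # ~1.94 W m^-2 for an assumed 2015 level
```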

Aerosols

AR5 estimates that total aerosol forcing declined at a rate of 0.24% per annum over 2002-11. Global aerosol optical depth (AOD) estimates up to 2012 from three satellite-instrumentation-based datasets and a specialised model driven by assimilated meteorological observations are very similar, with three having trends indistinguishable from zero and one a trend of approximately −1% per annum, all with no sign of any change in trend (Ma and Yu 2015, Figure 1a).[28] Consistent with this, the AR5 estimates are extrapolated from −0.90 Wm−2 in 2011 to 2012-15 using the same −0.24% per annum trend as over 2002-11, reaching −0.891 Wm−2 in 2015.
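
The extrapolation is just compound decay at the AR5 trend rate (my arithmetic):

$$-0.90 \times (1 - 0.0024)^{4} \approx -0.891\ \mathrm{Wm^{-2}}$$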

Ozone

AR5 presents evidence for both tropospheric and stratospheric ozone concentrations gradually increasing since the late 20th century, resulting in positive forcing trends. I am not aware of any evidence that these trends have changed materially since 2011. Therefore, I extrapolate the AR5 2011 tropospheric and stratospheric ozone forcing values using their respective trends over the decade to 2011. The change by 2015 is an increase of 0.007 Wm−2. Satellite observations suggest that the post 2011 increase in stratospheric ozone forcing has been rather faster than per the trend over the previous decade,[29] but the difference in forcing is only 0.004 W m−2 by 2015 and the data are noisy so the trend increase has been used.

Other anthropogenic forcings

The AR5 estimates for minor land use change (albedo), stratospheric water vapour, black carbon on snow and contrails forcings have likewise been extrapolated from 2011 to 2015 using their trends over the decade to 2011. The net effect is an increase in forcing of 0.008 Wm−2 by 2015.

Solar

I updated solar forcing using TSI data from SORCE,[30] rebasing it to give the same anomaly, 0.03 Wm−2, in 2011 as per AR5. Solar forcing climbs over 2012-15, reaching 0.119 Wm−2 in 2015.

Volcanic

The post-1850 AR5 volcanic forcing estimates are slightly rounded from −25x stratospheric AOD, based on the Sato dataset.[31] The latest value in that dataset is for 2012, and implies a 2012 forcing of −0.107 Wm−2, very close to the AR5 2006–11 mean and slightly smaller in magnitude than the 2011 value, which was more affected by the Nabro eruption. Sulphur dioxide emissions from explosive volcanic eruptions during 2013 to 2015, and the resulting stratospheric AOD levels, remained at a similarly modest level to 2012 (Carn et al. 2016, Figure 9).[32] I have therefore taken volcanic forcing to be −0.107 Wm−2 throughout 2012-15.
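
The conversion is direct: given the stated scaling, the −0.107 Wm−2 value corresponds to a stratospheric AOD of about 0.0043 (my arithmetic, inverting the relationship):

$$F_{\mathrm{volc}} \approx -25 \times \mathrm{AOD} = -25 \times 0.0043 \approx -0.107\ \mathrm{Wm^{-2}}$$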

Appendix B – estimation of heat uptake rate

LC14, following Otto et al. (2013), used the difference between the OHC estimates for the first and last years when calculating planetary heat uptake for the final period. With the final year OHC no longer representing an average over several years, it is more appropriate to estimate heat uptake using the linear trend over the period rather than the differencing method. The 1995-2015 heat uptake rate is 0.63 Wm−2, whether calculated using the newly adopted linear trend method or the previous differencing method.
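
The two estimators are easy to compare in code. A minimal sketch (my own illustration with placeholder data, not the paper's code):

```python
import numpy as np

def uptake_from_difference(ohc: np.ndarray) -> float:
    """Heat uptake rate as (last - first) / elapsed years (the LC14/Otto method)."""
    return (ohc[-1] - ohc[0]) / (len(ohc) - 1)

def uptake_from_trend(ohc: np.ndarray) -> float:
    """Heat uptake rate as the slope of an OLS linear fit to annual values."""
    years = np.arange(len(ohc))
    slope, _intercept = np.polyfit(years, ohc, deg=1)
    return slope

# Placeholder 21-year series with a true trend of 0.63 units per year:
rng = np.random.default_rng(1)
ohc = 0.63 * np.arange(21) + rng.normal(0.0, 0.5, 21)
print(round(uptake_from_difference(ohc), 2), round(uptake_from_trend(ohc), 2))
```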

Since error standard deviation estimates are available for all years of OHC data, uncertainty in the regression slope is estimated by performing a large ensemble of regressions, each with all individual year heat content values perturbed by random draws from their uncertainty distributions. To allow for non-independence of errors in nearby years, which is to be expected even when 3 or 5 year running means are not used, all estimated heat content uncertainties are multiplied by sqrt(3). This multiplier reflects the dominance of uncertainty in 3-year mean 0–700 m OHC until the mid 2000s. Its use results in a slightly lower standard error estimate for the 1995-2011 heat uptake rate, of 0.081 Wm−2, than the 0.087 Wm−2 when using the difference method. That is consistent with use of regression producing only a modest reduction in uncertainty for estimating heat uptake over fifteen years when the initial and final values used in the difference method are dominated by means over several years.
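
A sketch of the Monte Carlo procedure just described (my own illustration; `ohc` and `sigma` stand for the annual heat content values and their standard error estimates, and the sqrt(3) inflation follows the text):

```python
import numpy as np

def slope_std_error(ohc: np.ndarray, sigma: np.ndarray,
                    n_draws: int = 10000, seed: int = 0) -> float:
    """Standard error of the OHC trend from an ensemble of regressions, each
    with every year's value perturbed by its sqrt(3)-inflated uncertainty."""
    rng = np.random.default_rng(seed)
    years = np.arange(len(ohc))
    inflated = np.sqrt(3.0) * sigma  # allows for non-independence of nearby years
    slopes = np.empty(n_draws)
    for i in range(n_draws):
        perturbed = ohc + rng.normal(0.0, inflated)
        slopes[i] = np.polyfit(years, perturbed, deg=1)[0]
    return float(slopes.std())
```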

The scaled-down model-derived estimates of heat uptake in the base periods, and uncertainty therein, are unchanged from LC14.

[1] Lewis N., Curry J.A., 2014: The implications for climate sensitivity of AR5 forcing and heat uptake estimates. Clim Dyn, 45, 1009-1023. Final accepted version available here

[2] In addition, AR5 estimates black carbon deposited on snow to have 2–4 times the effect on GMST per unit forcing of other types of forcing. This was taken account of in LC14. For volcanic forcing, it is not apparent that AR5 considered the relationship between RF and ERF.

[3] Data available here

[4] Available at http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/basin_avt_data.html

[5] Available at http://www.data.jma.go.jp/gmd/kaiyou/english/ohc/ohc_global_en.html

[6] TCR and ECS estimates are almost identical whether using a 1987-2015 or a 1980-2015 final period, with an 1850-1900 base period.

[7] Stevens, B (2015) Rethinking the lower bound on aerosol radiative forcing. J Clim, 28, 4794–4819. Details of the derivation from that paper of a best estimate time series for aerosol forcing are available here.

[8] Otto A et al (2013). Energy budget constraints on climate response. Nat Geosci 6:415–416

[9] Lewis N and M Crok (2014): A Sensitive Matter. Published by the Global Warming Policy Foundation, London. 65 pp

[10] Liang, X, C Wunsch, P Heimbach and G Forget (2015). Vertical Redistribution of Oceanic Heat Content. J Clim, 28, 3821-3833

[11] Lyman J.M., G.C. Johnson (2014). Estimating Global Ocean Heat Content Changes in the Upper 1800 m since 1950 and the Influence of Climatology Choice. J Clim, 27, 1945-1957. The omission of OHC changes in the 1800-2000 m layer is estimated to account for no more than a heating rate of 0.02 W/m2 over 2006-2011.

[12] Roemmich, D. et al. (2015) Unabated planetary warming and its anatomy since 2006. Nature Clim. Change 5, 240–245. The 0.44 W m−2 value is based on the average of the OI and RSOI Global values in Table 1, with 0.1 added to the OI value to allow for its smaller spatial domain, then divided by 0.83 to allow for the areas not sampled, and converted from 1021 J yr−1 to W m−2.

[13] Cowtan, K. & Way, R. G. (2014) Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends. Q. J. R. Meteorol. Soc. 140, 1935-1944.

[14] See http://www.ecmwf.int/en/about/media-centre/news/2015/ecmwf-releases-global-reanalysis-data-2014-0. The data graphed in the final figure shows the same 1979-2014 trend whether or not coverage is reduced to match HadCRUT4.

[15] Cowtan, K et al. (2015) Robust comparison of climate models with observations using blended land air and ocean sea surface temperatures. Geophys. Res. Lett., 42, 6526–6534.

[16] Shindell, DT (2014) Inhomogeneous forcing and transient climate sensitivity. Nat Clim Chg, 4, 274–277

[17] Marvel, K, et al. (2015) Implications for climate sensitivity from the response to individual forcings. Nat Clim Chg DOI: 10.1038/NCLIMATE2888

[18] Hansen J et al (2005) Efficacy of climate forcings. J Geophys Res, 110: D18104, doi:10.1029/2005JD005776

[19] See http://climateaudit.org/2016/01/08/appraising-marvel-et-al-implications-of-forcing-efficacies-for-climate-sensitivity-estimates/

[20] Ocko IB, V Ramaswamy and Y Ming (2014) Contrasting climate responses to the scattering and absorbing features of anthropogenic aerosol forcings. J. Climate, 27, 5329–5345; Forster, P (2016) Inference of Climate Sensitivity from Analysis of Earth’s Energy Budget. Annu. Rev. Earth Planet. Sci. 2016. 44, doi: 10.1146/annurev-earth-060614-105156

[21] Gregory, J M et al. (2016) Small global-mean cooling due to volcanic radiative forcing. Clim Dyn DOI 10.1007/s00382-016-3055-1. See also the discussion of volcanic forcing in LC14.

[22] Section 10.8.2.4 of AR5

[23] See, e.g., Annan, J and Hargreaves, J., 2015. A perspective on model-data surface temperature comparison at the Last Glacial Maximum. Quaternary Science Reviews, 107, 1-10

[24] Annan, J.D., Hargreaves, J.C., 2013. A new global reconstruction of temperature changes at the Last Glacial maximum. Clim. Past 9 (1), 367-376.

[25] Annan, J.D., Hargreaves, J.C., 2006. Using multiple observationally-based constraints to estimate climate sensitivity. Geophys. Res. Lett. 33, L06704.

[26] Gregory JM, Forster PM (2008) Transient climate response estimated from radiative forcing and observed temperature change. J Geophys Res 113:D23105. Halving their volcanic forcing threshold, to −0.25 Wm−2, has a negligible effect on TCR estimates.

[27] Concentrations up to 2014 from http://ds.data.jma.go.jp/gmd/wdcgg/pub/global/globalmean.html. Increase from 2014 to 2015 was based: for CO2, on the Mauna Loa mean from ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_annmean_mlo.txt; for CH4, on the smoothed trend curve at http://www.esrl.noaa.gov/gmd/ccgg/trends_ch4/; and for N2O on data at ftp://ftp.cmdl.noaa.gov/hats/n2o/combined/HATS_global_N2O.txt

[28] X Ma and F Yu (2015): Seasonal and spatial variations of global aerosol optical depth: multi-year modelling with GEOS-Chem-APM and comparisons with multiple-platform observations. Tellus B 2015, 67, 25115. http://www.tellusb.net/index.php/tellusb/article/view/25115

[29] KNMI MSR ozone data at http://www.temis.nl/protocols/o3field/o3mean_msr.php shows a global increase of 4.3 Dobson units for 2015 over the mean for 2008-11, and regressing AR5 stratospheric O3 forcing on the MSR data yields a slope of 0.0016 Wm−2 Dobson unit−1 (correlation: 0.80), implying 2015 forcing of −0.044 Wm−2 vs −0.048 Wm−2 from continuing the 2001-11 trend.

[30] Downloaded from http://lasp.colorado.edu/lisird/sorce/sorce_tsi/index.html; divided by 4 to give mean TOA downward solar radiation

[31] http://data.giss.nasa.gov/modelforce/strataer/tau.line_2012.12.txt

[32] Carn S.A., L. Clarisse, A.J. Prata (2016) Multi-decadal satellite measurements of global volcanic degassing. Jnl of Volcanology and Geothermal Research 311, 99–134

JC note:  As with all guest posts, keep your comments civil and relevant.


  2. Nic Lewis,

    Very Interesting. Thank you.

    It would be great if someone reading this could run the Nordhaus DICE-2013R Integrated Assessment Model with updated inputs for:
    – climate sensitivity (e.g. use ECS=1.74C or TCR= 1.34C)
    – GHG emissions rate (e.g. use RCP4.5 or RCP6 instead of RCP8.5)
    – damage function (appropriate updated values)
    – participation rate (e.g. use 1/2 the Copenhagen ‘Optimistic’ rate used in DICE-2013R for the ‘Copen’ scenario)

    to calculate the net benefit to 2100. I’d like to see the updated net benefit per 5 years for the Copen and 1/2 Copen scenarios compared with the chart below (which assumes ECS = 3.2C, emissions rate ~RCP8.5, high damage function, and unrealistically optimistic participation rate).
    http://anglejournal.com/site/assets/files/1727/lang_1.png
    Explanation here: http://anglejournal.com/article/2015-11-why-carbon-pricing-will-not-succeed/

    The DICE-2013R model can be downloaded in Excel here: http://www.econ.yale.edu/~nordhaus/homepage/
    Introduction and Users’ Manual is here: http://www.econ.yale.edu/~nordhaus/homepage/documents/DICE_Manual_100413r1.pdf

  3. Nic Lewis:

    You stated:

    Current best estimates of GMST (4.0 ± 0.8°C below preindustrial)[24] and of relevant forcings (6–11 Wm−2 below preindustrial)[25] at the LGM imply an ECS estimate of 1.75°C, essentially identical to the updated instrumental-observation energy budget estimate presented here.

    However, Annan (2015) suggests:

    The magnitudes of the large-scale changes are increasingly well-constrained, with a recent model-data synthesis generating a value of 4 °C, which suggests a moderate equilibrium climate sensitivity of about 2.5 °C.

    Is there a simple explanation for these differing ECS estimates?

    • Yes. The 2.5°C figure comes, I believe, from Hargreaves et al (2012) ‘Can the Last Glacial Maximum constrain climate sensitivity?’. That study derived an ECS estimate of 2.3°C by regressing the ECS of seven CMIP3 generation climate models on the LGM tropical temperature change and using the regression relationship to convert the observational estimate of that temperature change into an observationally-constrained model ECS value. The ECS estimate was then upped to 2.5°C by shifting it towards the average unconstrained model ECS value, using the subjective Bayesian approach that James Annan favours. I see no justification for doing so.
      In principle there is something to be said for this type of “emergent constraint” method of estimation, where two variables are strongly related across a set of climate models and observational evidence as to one of the variables is used to infer the probable value of the other variable, based on their relationship in the set of models. Unfortunately, all too often supposed emergent constraints prove to be ephemeral, and subsequently sink again. That has happened in this case. There is essentially no relationship between ECS and LGM tropical temperature change in the more recent, CMIP5, generation of climate models (Hopcroft and Valdes, 2015. DOI 10.1002/2015GL064903). Accordingly, there is now no reason to regard the Hargreaves et al (2012) ECS estimate as being valid.

      • Thanks!

      • Thanks 2

      • I forgot to mention that it is pointed out in Hargreaves et al (2012) that the PMIP2 experimental model for the LGM that their climate model simulations were based on omits forcing due to dust and vegetation changes. They suggest that the ECS estimates should, effectively, be multiplied by 0.85 to account for this. The corrected regression based ECS estimate would then be ~1.95 C, not far off the 1.75 C I obtain by a direct comparison of the estimated LGM changes in GMST and total forcing (including from dust and vegetation changes).

  4. Very interesting indeed that, also with updated observational data, the values for TCR and ECS seem to be constant, as one would expect. Over at ATTP https://andthentheresphysics.wordpress.com/2016/04/24/the-tcr-to-ecs-ratio/ the moderator asked a question about the TCR/ECS ratio, which should be near 0.8 following the observations. In the discussion the followers there digressed ( of course ;-) ) to the (C)AGW matter. It seems to me that at that blog some scientific discourse is impossible. Nic, what is your opinion about the discrepancy between CMIP5 and observations in the TCR/ECS ratio in question?

    • The ratio of ocean heat uptake to the increase in GMST, both in the mixed layer and in the rest of the ocean, can be estimated reasonably well from observations. The ocean mixed layer reaches equilibrium in a matter of years, so heat uptake by it only slightly depresses TCR. For any given value of the ratio of sub-mixed layer ocean heat uptake to increase in GMST, the higher ECS is, the smaller TCR will be relative to ECS. So it is unsurprising that CMIP5 models, which (taking the average of models used for the RCP8.5 projections) have an estimated ECS of 3.4 C and a TCR of ~1.9 C, have a much lower ratio of TCR to ECS (~0.56) than the ~0.8 ratio for my observationally based TCR and ECS estimates.

      If I substituted an ECS of 3.4 C in the 2-box model that gives a TCR of 1.35 C when ECS is 1.75 C, the resulting TCR would be ~2.15 C. And that isn’t far off the warming that the average CMIP5 model produces in the second half of a 140 year 1% p.a. CO2 ramp, after deducting the warming in the pipeline at year 70 that a 2-box model would project to emerge during years 71-140. That behaviour reflects most CMIP5 models exhibiting, during the initial few decades of response to a forcing, an effective climate sensitivity which is significantly lower than its estimated equilibrium climate sensitivity.

  5. Nic Lewis:

    I wonder why you did not include the positive forcing from the reduction of anthropogenic sulfur dioxide aerosol emissions from the atmosphere due to Clean Air efforts in your analysis, since this forcing is quite large.

    • Burl Henry,
      The positive effect on forcing of a reduction in SO2 emissions due to Clean Air actions in the West is reflected in the AR5 aerosol forcing estimates. It mainly occurred by 2011, but is estimated to have been to a substantial extent countered by increases in SO2 emissions elsewhere in the world, particularly Asia (China, etc.). And SO2 emissions are not the only source of negative aerosol forcing – there are also nitrates, primary organic compounds, etc.

      • Nic Lewis:

        In answer to my previous question re reductions in SO2 emissions, you stated “It mainly occurred by 2011, but is estimated to have been a substantial effect, countered by increases in SO2 emissions elsewhere in the world”

        Actually, between 1975 and 2011 Western reductions in SO2 emissions were countered by Asian emissions only between 2000 and 2005, when global emissions rose by 5.584 Megatonnes (see Table 1 in “The last decade of global anthropogenic sulfur dioxide: 2000-2011 emissions”, by Z. Klimont, et al.). This increase caused a drop in average land-ocean global temperatures of about 0.1 deg. C.

        All other years Western reductions exceeded Asian increases so that solar insolation increased due to the less polluted air.

        The IPCC diagram of radiative forcings fails to include any positive forcing due to the reduction in anthropogenic SO2 aerosol emissions, showing only a small negative forcing for SO2. This is a huge error, and, I believe, is not addressed in your paper.

      • Burl Henry

        This is a huge error, and, I believe, is not addressed in your paper.

        Could you give me some idea of what you mean by “huge error”. How much difference do you think it would make to Nic Lewis’s estimates of TCR and ECS? For example, if the “error” was corrected, would the ECS change from 1.75 to 1.76, or 1.85, or 2.25, or 2.75? I am just looking for some indication of how significant the “huge error” is.

      • Peter Lang:

        The climate sensitivity factor for the removal of SO2 aerosols is approx. 0.02 deg. C of temp. rise for each net Megatonne of reduction in global anthropogenic SO2 emissions.

        Between 1975 and 2011, for example, anthropogenic SO2 emissions were reduced from approx. 131 Megatonnes to 101 Megatonnes due to Clean Air efforts.

      • Burl Henry,

        Thank you for replying.

        However, your answer doesn’t mean anything to me. I was asking you how significant the “error” is in terms of the estimates of ECS. If what you believe is the correction needed was applied, how much would it change the estimates of ECS (1.75C) and TCR (1.35C)?

    • David Springer

      Yeah if we burned coal in the dirty manner that God meant for us to burn it then the negative aerosol forcing would be negating the GHG forcing and we would be talking about wearing particulate filter masks in major cities instead of forcing everyone else to pay more for “clean” energy. I hate cities so I’m voting for the inhabitants to deal with their own filth and not tasking the rest of us with cleaning it up.

    • Burl Henry
      The IPCC AR5 best estimate of direct aerosol forcing (per the second order draft – omitted in the final version) reduced in magnitude by 13.5% between 1990 and 2011, I believe reflecting the decrease in SO2 emissions, partly countered by increases in primary and secondary organic aerosols. But the AR5 best estimate of the total aerosol forcing magnitude rose by 3%. This relationship is not inconsistent with the decline in SO2 emissions.

      The cloud albedo indirect effect is thought to be roughly proportional to the logarithm of cloud condensation nuclei (CCN) concentration. As the increases in SO2 emissions in Asia were into a less polluted atmosphere than that in the West, they would on this basis have had more effect on aerosol forcing (making it stronger) than emission reductions of the same amount in the West would have had in the opposite direction.

  6. … is there some simple conclusion, as in a “bottom line”?

  7. There appears to be little substance in claims that global energy budget studies systematically underestimate TCR and/or ECS to a significant extent, although the possibility of a modest degree of underestimation cannot be ruled out.

    That is, if you ignore the Roman and Medieval Warm periods. They did happen without man-made CO2. What caused them that has ceased? This whole CO2 debate is meaningless without understanding what caused warm periods in the past. It is normal and natural for a warm period to follow every cold period. It is natural cycles and we do not cause them.

    If you don’t know what causes natural cycles, you have no clue if there is any man-made part from CO2.

  8. Ice Extent on Earth has decreased since the Little Ice Age. Glaciers have retreated, Ice Shelves have reduced, Sea Ice is less. That reduces Albedo. That is not included in the debate. We are warmer, like the Roman and Medieval times, because ice depletes in cold times and retreats. Now that we are warmer, snowfall has increased and the ice is rebuilding. This takes a few hundred years and then we will get cold again. This has always happened before. http://popesclimatetheory.com/page54.html

    • Warm times are natural and necessary. Oceans must be more open to produce more ocean effect snowfall to rebuild ice on land. Oceans must be more closed to produce less ocean effect snowfall to allow ice on land to deplete. These conditions alternate to regulate the temperature and sea level cycles. The thermostats are the freezing and thawing of Polar Oceans. The temperature at the thermostats is the same in the North and South Polar Oceans. Temperature is regulated inside the same bounds in both hemispheres.

      • What were sea levels like during the Roman and Medieval warm periods?

      • 2000 years ago, the Emperor of Rome entered the Blue Grotto, a cave on Isle of Capri, in a rowboat, at low tide, because the entrance is covered at high tide. In 2011, my wife and I went in that same cave in a rowboat, at low tide, because the entrance is covered at high tide. Sea level there is about the same as 2000 years ago.

      • Dave

        I believe Popesclimatetheory is correct, in as much as around the time of the birth of Christ the climate was warming and sea levels rising.

        It is thought that at the end of the Roman warm period-around the 5th Century- sea levels had become slightly higher than today.

        There was a drop during the cold period of the Dark Ages and temperatures rose again to two peaks higher than today, around the 12th century and the last part of the 16th century. There had been a limited drop in between, during the renewed glaciation of some of the 13th century.

        The LIA (the coldest period of the Holocene in the last 8000 years) locked up plenty of ice, which started to melt again around 1750, and that continues to this day.

        I wrote about the Roman warming here

        https://judithcurry.com/2011/07/12/historic-variations-in-sea-levels-part-1-from-the-holocene-to-romans/

        There is a longer version linked by the word ‘document’ early on in the piece

        tonyb

      • climatereason, thanks for that link. That was a really good thread. I followed that thread, but it is more meaningful to me now than it was five years ago.

      • tonyb

        “There was a drop during the cold period of the Dark Ages and temperatures rose again to two peaks higher than today, around the 12th century and the last part of the 16th century. There had been a limited drop in between, during the renewed glaciation of some of the 13th century.”

        Could you please link to evidence of (global or English) temps being higher than today “around the 12th century and the last part of the 16th century”.

        I know of none.
        Thanks.

      • Could you please link to evidence of (global or English) temps being higher than today “around the 12th century and the last part of the 16th century”.
        The Vikings settled in Greenland because it was warmer then. They died or moved out because it got cold again.

  9. Nic Lewis — I am a simple layman on climate science and have a question.

    In articles (like at Climate Dialogue: http://www.climatedialogue.org/climate-sensitivity-and-transient-climate-response/) different studies are compared to each other.

    With all the fussin and fightin over what the 97% Consensus even means, it just seems like TCR is a much better line of inquiry.

    Question: Has there been any statistical effort to ask Climate Scientists what TCR analysis they feel is more appropriate?

    • Stephen Segrest: I agree that estimating TCR is probably a more fruitful exercise at this point, certainly for projecting warming over the rest of this century, although from a point of view of gaining a better understanding of how the climate system behaves, ECS is the natural variable to study.

      In answer to your question, the authors of the Otto et al (2013) energy budget analysis included many of the mainstream climate scientists whose work relates to estimating ECS and TCR from observations: 14 lead authors of relevant IPCC AR5 chapters. So I think one could say that this method (the same as in Lewis and Curry 2014 save for the source of the forcing estimates) is held to be appropriate for estimating TCR from observations, by many if not most climate scientists with expertise in the area.

      The statistical methods employed in Otto et al were an objective Bayesian one (for the numerical estimates) and a frequentist profile likelihood method for the graphs. They gave identical results.

      As shown in Table 1, the Otto et al TCR estimates are essentially identical to the Lewis and Curry ones – both original and updated.

    • Mr. Segrest, your question may be answered by the “consensus scientists” and their followers’ rush to redefine the onset of catastrophic climate impacts beginning at 1.5 degrees C, not the haphazardly selected 2 degrees C. Say what you will, the climate profiteers of any stripe are not stupid.

      I was intrigued when COP 20 morphed into a 1.5 degree C movement. Not for a moment did I believe developed and developing nations and big green cared about a small collection of atolls. Little did I suspect that it may have revolved around constantly reducing estimates of TCS.

      Dave Fair

    • David Springer

      I suspect most of the “consensus” doesn’t know what TCR and ECS even stand for and would have to rely on others to give them their opinions.

      The biggest problem with consensus science is it’s a bandwagon effect. Scientists outside their field of expertise trust in the conclusions of self-anointed “experts” in the field. They see the small consensus among scientists with a vested interest in catastrophism, ignore the potential for motivated reasoning, and being liberal leaning by virtue of years in the academy to begin with, find a comfortable spot on the bandwagon.

      It’s not necessarily a conspiracy, although climategate emails revealed at least a small inner core of conspirators, but rather a social phenomenon that’s not really evil but has the same end result as evil intent.

    • Stephen Segrest: Question: Has there been any statistical effort to ask Climate Scientists what TCR analysis they feel is more appropriate?

      I tried to organize an invited speaker session on this topic for the Joint Statistical Meetings a few years ago. I got some speakers to agree, and send abstracts, but I was too late to get the session approved by the conference committees.

      There have been some publications in some statistics journals, such as the Journal of the American Statistical Association.

  10. Given that ECS response time is in the order of centuries, it is of absolutely no relevance to policymakers. Which makes the low TCR fantastic news: even an RCP8.5 emission scenario will keep us below two degrees C of temperature increase this century.
    https://klimaathype.files.wordpress.com/2015/06/sres_1_3_sensitivity.png

    • Hans Erren: Given that ECS response time is in the order of centuries, it is of absolutely no relevance to policymakers.

      More relevant for policy is: How long is required for the surface temperature to increase by 90% (or 80%, or 95%) of its equilibrium value? Equivalently, what would be the half-life of surface cooling should there be a step decrease in forcing?

    • Mr. Erren, that boat has sailed; it is now 1.5 degrees C that the climate profiteers target.

    • Steven Mosher

      “Which makes the low TCR fantastic news: even an RCP8.5 emission scenario will keep us below two degrees C of temperature increase this century.”

      Err not so.

      Given the huge spread in the TCR estimate you have a few more steps to take.

      1. Given an emissions scenario
      2. Given a PDF for TCR
      3. You have a PDF for Delta T
      4. Given a damage PDF as a function of Delta T
      5. You have an expected Damage as a function of the DeltaT PDF and Damage PDF.

      Basically, you have to combine all the uncertainties (uncertainty of damage as a function of delta T, and uncertainty in delta T) to see the expected range of results.

    • Hans: assume TCR is some other predictive factor that will determine the ultimate cost to completion of some earth-science-related (very high uncertainty) project, like the cost to extract and the value of the commodity for geothermal energy from a new type of hot dry rock reservoir in the middle of nowhere in Australia. Do you think it best to invest your entire fortune based on the most highly promising estimate (swag) of the cost to extract and the value of the commodity?

      The cost to extract is like TCR and the value of the commodity is like the RCP ranges. In my experience, the geologists who low-ball are charismatics who are into making money on the front-end, usually from government or naive investors. Remember your Twain on speculating in geology: the definition of a mine is a fool at one end and a liar at the other.

      • The definition of a mine is a fool at one end and a liar at the other. … – funny.

      • The climate issue is a boundary value issue, not an initial value issue. Even the worst case scenario is not “our” problem, it is the “problem” of India and China, in 50 years at the earliest. The worst case damage PDF is 10% of GDP. In other words, that’s just a big recession, not the end of the world as we know it.

        Who has ever summarised the benefits of a three degree temperature rise? Like global warming is only detrimental for cute animals and a boost for creepy animals.

        Conclusion: A hugely out of proportion blown academic issue.

      • Hans Erren,

        You are debating academics, or sciolists, who quite obviously have no real-life experience in mining operations or investing.

        The wildcatter who is willing to roll the dice, risking his “entire fortune” on the success of a single project of “very high uncertainty,” is more a myth than a historical reality. Likewise, industrial capitalists (like those in the mining industry) who don’t produce a product at a competitive price, and know how to manage money, soon find themselves out of business.

        Primary materials production these days is a mine field, just like it’s always been. And it’s the commodity price and interest rate roller coaster, not geology, that presents the greatest risk to those in the business.

        We’re currently in a commodities bear market with the price of most commodities in the toilet. Investors in the production of primary materials, therefore, have bigger fish to fry than worrying about whether it’s going to be 1º hotter or 4º hotter in 2100 than it is now.

        They’re in survival mode. So I think you’re right, most of those vested in primary materials production see AGW as a “hugely out of proportion blown academic issue.”

        And I’d add that this applies not only to investors, but some entire nations as well. They also fall into staples traps. Just look at Mexico and Canada. And they’re not the most acute examples. Think of poor old Brazil, or Nigeria, or Angola, or Venezuela. With the wolves howling at the door, there’s much more attention focused on getting through the next month or the next year than worrying about something that might happen 50 or 100 years from now.

      • Agree that climate change is topical and of interest only to academics with a funding skin in the game.

  11. One thing that bothers me about the concept of ECS is that it presumes a known equilibrium. The OHC increase, particularly at depth, implies a fairly rapid uptake. If this is so (notwithstanding all the uncertainties and deeper water formation at the poles), the release rate would appear to be much slower than the uptake rate. Heat taken up by the deep oceans would be, as Pielke states, lost to the climate system. Making TCR the much more relevant number?

    • Turbulent Eddie: Making TCR the much more relevant number?

      Please note my questions to Hans Erren and Nic Lewis. Equilibrium takes forever, but most of the response (ca 97%) occurs in only 5 half-lives, if the half-life is constant. (Harder to calculate if the half-life changes during the processes). Considering how rapidly the Earth surface warms after sunrise and how rapidly it cools after sunset, the initial half-life can’t be very long.

  12. Nic Lewis, thank you for your essay.

    Here is a follow-on question: given that the ECS=1.74C and TCR= 1.34C, how much time elapses between a temperature increase of say 1.35C and a temperature increase of 1.65C (starting each from the same baseline)? Put differently, after a step increase in forcing, how long before we can say that the equilibrium value is “nearly” reached? Put differently still, a few years after a step change in forcing, can we tell how much warming remains “in the pipeline”?

    I have two more questions, in case you care to take them on.

    1. Do the Earth surface, middle troposphere (say at the cloud condensation level) and upper troposphere (say really close to the tropopause) warm at the same rates or to the same asymptotic limit (e.g. the equilibrium amount 1.74C)?

    2. If the Earth surface temperature increases 1C, globally averaged, how much does the surface cooling rate increase (radiative, evapotranspirative, advective/convective processes combined)?

    I think the answers are important for public policy, but I don’t know them.

    • MM not NL but some partial answers.
      1. Models or observations? CMIP5 has the upper troposphere warming about 2.5x more than the surface in the tropics. Observations don’t show this (the missing tropical troposphere hotspot). Everything will however reach the ECS asymptotically. Important for policy is when. TCR is defined by a doubling in 70 years, temp averaged around the 70 year point (if I recall correctly, from 60 to 80). Using NL values, that is about 3/4 of the way to equilibrium. ECS is in the eye of the beholder. Effective or true equilibrium? Hansen argued for 1000 years. Ocean thermal inertia (thermohaline transport) argues for ~800. In practice, model ECS spools up 4x not 2x CO2 over 200 years then projects out another 200 or so using a changing slope (derivative of the changing temp profile) method. At least that is what the CMIP5 experimental protocol called for.
      2. In global equilibrium, by the same amount at the ERL (effective radiating level) up in the troposphere. But this could vary a lot near the surface, given different land regimes, ocean currents, and latitudes. The models don’t regionally downscale very well, so it is hard to get a handle on regional variations.

      • ristvan: Everything will however reach the ECS asymptotically.

        If we assume that equilibrium provides a really good approximation, that’s true. And I more or less went along with the assumption when asking the question as I did. However, I think the concept of steady-state is more appropriate (not “true”, because all the rates depend nonlinearly on the states); for steady-state, the several compartments do not increase to the same final temperatures.

      • AFAIK there’s no evidence whatsoever that the world’s climate has been in a “steady-state” since at least the beginning of the Pleistocene.

        Even if some mythical “global average temperature” remained within limits varying according to some semi-predictable pattern, regional climates show no evidence (AFAIK) of doing likewise.

    • matthewmarler, I’ll just answer your main question, at least for the present.

      If there is a step forcing from an instantaneous doubling of CO2 concentration, and TCR and ECS values are respectively 1.35 C and 1.75 C, with no time variation of effective sensitivity (a constant climate feedback parameter) and a two-box ocean model (which provides a good match to AOGCM ocean behaviour), then the GMST increase would reach 1.35 C (77% of the equilibrium response) after 12 years. The GMST increase would reach 1.45 C (83%) after 65 years, 1.55 C (89%) after 187 years and 1.65 C (95%) after 395 years.
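
      (For readers who want to experiment: below is a minimal two-mode, two-box-style sketch of a step-forcing response. The time constants and weights are illustrative choices of my own, not the parameters used in the comment above; they reproduce the 12-year figure and come close to, without exactly matching, the later ones.)

```python
import numpy as np

ECS = 1.75  # C, assumed equilibrium response to a step doubling of CO2

def gmst_response(t_years, tau_fast=4.0, tau_slow=250.0, slow_weight=0.2):
    """GMST after a step forcing: a fast (mixed layer) and a slow (deep ocean)
    exponential mode, each relaxing toward the equilibrium value."""
    t = np.asarray(t_years, dtype=float)
    fast = (1.0 - slow_weight) * (1.0 - np.exp(-t / tau_fast))
    slow = slow_weight * (1.0 - np.exp(-t / tau_slow))
    return ECS * (fast + slow)

for yr in (12, 65, 187, 395):
    print(yr, round(float(gmst_response(yr)), 2))  # ~1.35, 1.48, 1.58, 1.68 C
```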

      • David Springer

        TCS is probably larger than ECS. The upper layer of the ocean reacts to increased forcing quickly and warms the atmosphere rapidly. Over a longer time period the frigid abyss absorbs the excess energy from the overheated surface layer.

        One of the pause explanations put on the table was the mix rate of surface and abyss was faster than expected. There’s no reason to presume said mix rate is a constant either. Too little is known about the ocean in particular and the water cycle response in general to increasing non-condensing GHG.

      • Nic Lewis, thank you for your answer. Those are longer time spans than what I was expecting. That’s not a critique, just my surprise reaction.

        Here is a brief introduction to me:
        Public profile: https://www.linkedin.com/in/matthew-marler-9a921b15

      • David Springer: TCS is probably larger than ECS.

        I think you’d have a lot of trouble trying to prove that, or demonstrate something like that with a compartment model.

      • David Springer

        There seems to be trouble proving *anything* with regard to global average temperature. What’s your point?

    • David Springer

      If ocean equilibrium takes 1000 years and anthropogenic CO2 emission can be maintained in excess of natural sink rate for only 200 years or less then before equilibrium temperature is reached the forcing at the ocean surface starts to decline and the originally calculated equilibrium temperature never happens.

      There is so much uncertainty in all this that anything can happen. Meanwhile the benefits of aCO2 emission far outweigh the short term consequences (and likely the long term consequences too). The ONLY good reason to be unhappy with the status quo is a finite supply of economically recoverable fossil fuels. There just isn’t enough of it to cause catastrophic heating. For that reason and that reason alone alternatives should be sought. R&D in alternative energy proceeds faster when industrialized nations have vibrant economies with discretionary income to throw money at energy R&D that has a payoff in a politically distant future. If push comes to shove in shrinking or stagnant economies alternative energy R&D gets cut ahead of more immediate concerns like food, clothes, and shelter.

      Technology is the goose that lays the golden eggs and cheap energy is what she eats. Don’t starve the goose.

      • David Springer: If ocean equilibrium takes 1000 years and anthropogenic CO2 emission can be maintained in excess of natural sink rate for only 200 years or less then before equilibrium temperature is reached the forcing at the ocean surface starts to decline and the originally calculated equilibrium temperature never happens.

        That’s better than the comment I challenged a few minutes ago.

  13. As far as I understand, the solar irradiance at the top of the atmosphere has been measured over the last few decades. How can we accurately know the short-term variations / long-term trends hundreds or even a thousand years ago – before it was actually measured? How large (in W/m2) is the uncertainty in solar irradiance in the 18th and 19th centuries?

    • The best discussion of this that I am aware of is in Ch.8, section 8.4.1 of IPCC AR5 WG1, available here: http://www.climatechange2013.org/images/report/WG1AR5_Chapter08_FINAL.pdf. They seem to think that the change in solar forcing from the Maunder minimum to the present day is in the range ~0.1 to 0.2 W/m2. However, variations in spectral irradiance (especially in UV) may also be important, and these are much less certain. See section 8.4.1.4.3.

      • David Springer

        ” variations in spectral irradiance (especially in UV) may also be important”

        +1

        Maybe you can explain to Mosher the English undergraduate that different EMR wavelengths have different effects on the matter they illuminate. In this case the level in the atmosphere where absorption and re-radiation takes place. UV primarily lights up the ozone layer. If 10% of the total solar power shifts from near infrared to far ultraviolet that would make a large impact.

      • “variations in spectral irradiance (especially in UV) may also be important”

        and may also be unimportant.

        Springer

        “Maybe you can explain to Mosher the English undergraduate that different EMR wavelengths have different effects on the matter they illuminate. In this case the level in the atmosphere where absorption and re-radiation takes place. UV primarily lights up the ozone layer. If 10% of the total solar power shifts from near infrared to far ultraviolet that would make a large impact.”

        Yup David, had to learn that on the job.

        But

        ” If 10% of the total solar power shifts from near infrared to far ultraviolet that would make a large impact.”

        Wow, settled science from Springer.. “large impact” so precise..

      • Weeds and smarmy.

      • Steven Mosher: Wow, settled science from Springer.. “large impact” so precise..

        It was clearly hypothetical, not “settled”. Would a 0.3% change in global mean temperature be “large”? I’d say so, and not out of the question. Worthy of study, even.

      • David Springer

        “Wow, settled science from Springer.. “large impact” so precise..”

        Si fueris Rōmae, Rōmānō vīvitō mōre; si fueris alibī, vīvitō sicut ibi. (“If you are at Rome, live in the Roman way; if you are elsewhere, live as they do there.”) ~St. Ambrose

        Italicized Latin is SO sciency. :-)

      • David Springer

        “Yup David, had to learn that on the job”

        Learn harder.

      • David Springer

        By “job” you mean volunteer work, right?

      • The Maunder minimum ended in 1715. IPCC “seem to think that the change in solar forcing from the Maunder minimum to the present day is in the range ~ 0.1 to 0.2 W/m2.” For the period from 1750 (actually 1745 for solar irradiance) to 2011, IPCC reports a change in solar forcing of 0.05 W/m2, as illustrated in figure SPM.5:

        http://www.climatechange2013.org/images/figures/WGI_AR5_FigSPM-5.jpg

        This value is further explained in: 8.4.1.2 Total Solar Irradiance Variations Since Preindustrial Time:
        “The best estimate from our assessment of the most reliable Total Solar Index reconstruction gives a 7-year running mean RF between the minima of 1745 and 2008 of 0.05 W m–2. Our assessment of the range of Radiative Forcing from TSI changes is 0.0 to 0.10 W m–2 which covers several updated reconstructions using the same 7-year running mean past-to-present minima years (Wang et al., 2005; Steinhilber et al., 2009; Delaygue and Bard, 2011), see Supplementary Material Table 8.SM.4. All reconstructions rely on indirect proxies that inherently do not give consistent results. There are relatively large discrepancies among the models (see Figure 8.11). With these considerations, we adopt this value and range for AR5.”

        However, I notice that in the paper: Reconstruction of solar spectral irradiance since the Maunder minimum (Krivova et al. 2010) it is stated:
        “We now find a value of about 1.25 W/m2 as our best estimate for the 11-yr averaged increase in the TSI between the end of the Maunder minimum and the end of the 20th century, compared to 1.3 W/m2 derived by Balmaceda et al. [2007] and Krivova et al. [2007].”

        I also notice that in the paper: Total solar irradiance during the Holocene (Steinhilber et al., 2009) it is stated:
        “Our estimated difference between the Maunder Minimum and the present is (0.9 ± 0.4) W/m2.”

        I have no idea how IPCC could arrive at the low and very accurate “range of RF from TSI changes is 0.0 to 0.10 W m–2 ” – given the estimates, uncertainties and challenges referred to above.

        An increase in Total Solar Irradiance of approximately 1 W/m2 since the Maunder Minimum (which ended in 1715) would be quite significant, in particular if combined with the cloud feedback effect – which is a feedback on warming, not a direct response to anthropogenic forcing.

        My point is – the uncertainty in solar irradiance seems to be underreported by IPCC. There seems to be much more room for warming by natural causes than stated by IPCC.

      • Now I have an idea – I’m off by at least a factor of 4.
        I guess the Total Solar Irradiance will have to be averaged over the Earth’s surface.
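        For what it’s worth, the standard conversion from a TSI change to a global-mean forcing divides by 4 (sphere versus intercepted disc) and multiplies by one minus the planetary albedo. A quick sketch:

        ```python
        # Converting a change in TSI to a global-mean radiative forcing.
        delta_tsi = 1.0   # W/m2, roughly the reconstructions quoted above
        albedo = 0.3      # Earth's planetary albedo
        delta_f = delta_tsi * (1 - albedo) / 4
        print(f"forcing ~ {delta_f:.3f} W/m2")   # ~0.18 W/m2
        ```

        So a ~1 W/m2 TSI increase corresponds to only ~0.18 W/m2 of forcing, which goes a long way towards reconciling the reconstructions quoted above with the IPCC’s ~0.1–0.2 W/m2 range.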

  14. “Forcing from CO2, CH4 and N2O was calculated for each year from 2011 to 2015 using data for mean atmospheric concentrations[27] and the formulae in AR5 8.SM.3.”

    These formulae exaggerate the forcing because they are based on a misconception that radiation in CO2 bands at the top of the atmosphere as seen from space represents surface LW energy. It does not. Surface LW energy is nearly completely absorbed in the lower troposphere in those bands and is therefore substantially disconnected.

    What radiation the satellites see in CO2 bands is SOLAR energy kinetically lighting up CO2 first from near IR absorption by water beginning in the mid troposphere, and then more strongly from UV absorption by ozone at the tropopause.

  15. Thanks for this important update. IMO two important takeaways.
    1. The warmunist critiques of observational energy budget TCR and ECS (e.g. Marvel) have little validity. TCR ~1.3 and ECS ~1.7–1.8 provide no basis for alarm or for expensive-to-impossible mitigation.
    2. CMIP5 is way off: median TCR 1.8 C (ditto mean), median ECS 3.2 C. The only model ensemble reasonably tracking actual temperatures measured by satellites and sondes (Christy’s now-famous model-busting chart that Gavin Schmidt hates) is the Russian INM-CM4. It has an ECS of 2.0.

    • Steven Mosher

      The GCMs that match temperature best (from a SPATIAL perspective)
      have TCR of 1.3–1.4.

      That said, NO SKEPTIC should ignore the wide uncertainty in Nic’s estimates.

      The uncertainty monster is not your friend

    • I know. I wrote about this with respect to AR4’s very evident mistakes in this regard. Every new study since Annan and Hargreaves’ informed Bayesian priors paper in 2009 has further constrained the high tail, and made the PDF ever less skewed. If the PDF mode is, say, 1.75 as here, then the probability of it being anything above 2.5 is now down in the cumulative 5–10% range. Given the nasty side effects of the prescribed decarbonization medicine, those are odds I’d take.

  16. Steven Mosher

    Referring to a blog post about ecmwf that contains no analysis and that is missing 14 months of data doesn’t strike me as robust. Rather than arguing for hadcrut using poor sources (grey literature), just use multiple series and show the structural uncertainty in your approach. If the answer is the same, that improves your reliability.
    That nitpick aside … nice work.

    • Steven
      I didn’t refer to “a blog post about ecmwf”. I referred to a news release BY ecmwf which, inter alia, detailed results from an analysis they had performed of how GMST estimates from their ERA-interim dataset would have been affected by reducing coverage to match that of HadCRUT4. That is a fairly simple comparison to make and I see no reason to think that the experienced scientists at ecmwf would get it wrong. It is, IMO, too minor a result to be worth their trying to publish in a peer reviewed journal.

      The HadCRUT4 100-member ensemble data gives estimates of measurement and sampling uncertainty and of uncertainty in global and regional averages with incomplete regional coverage, so encompasses structural uncertainty. And I’m not comfortable with datasets that infill to achieve globally complete cover in periods prior to reasonable coverage being available.

      IMO it would be illogical to use multiple datasets for one variable, GMST, but not for forcing and heat uptake. But doing so for all of them would make for a large number of best estimates and uncertainty ranges. I’m not sure that would be an improvement.

      I’m glad you liked the rest of what I did.

      • But doing so for all of them would make for a large number of best estimates and uncertainty ranges. I’m not sure that would be an improvement.

        It would still be interesting to see this. As far as I can see, the TCR estimate could be quite different (at the 10% level, at least) if you used a different surface temperature dataset. I realise that you’ve justified using HadCRUT4, but that doesn’t mean that it is the best representation of how surface temperatures have changed.

      • ATTP, if the model doesn’t agree with your assumption, you suggest to adjust the observation to your liking.

      • Hans,

        if the model doesn’t agree with your assumption, you suggest to adjust the observation to your liking.

        No. Maybe try reading Steven’s early comment again.

      • ATTP

        If hadcrut4 isn’t the best representation, then the British taxpayer has wasted an awful lot of money over the years on top scientists and increasingly powerful computers.

        Which data set might be more to your liking?

        Tonyb

      • Steven Mosher

        “BY ecmwf which, inter alia, detailed results from an analysis they had performed of how GMST estimates from their ERA-interim dataset would have been affected by reducing coverage to match that of HadCRUT4. That is a fairly simple comparison to make and I see no reason to think that the experienced scientists at ecmwf would get it wrong. It is, IMO, too minor a result to be worth their trying to publish in a peer reviewed journal.”

        1. Science by press release is as bad as science by blog.
        2. They provide no numbers and no detailed description of how they did it.
        3. It is worthy of publication. I’ll have to look up a recent comparison of reanalyses focused on the Arctic, but the picture is more uncertain than you suggest.

        As it stands, the treatment of uncertainties you present is incomplete.

        “A simple but robust global energy budget approach was used, with thorough treatment of uncertainties.”

        Treat the uncertainty due to selection of temperature records and then you can claim “thorough”. I don’t expect the answer will change much.
        That is not the point. The point is exploring (as you do elsewhere with ease) the importance or lack of importance of analysts’ choices.
        Analyst choice is a source of uncertainty.

      • Which data set might be more to your liking?

        None, specifically, but – as Steven says – if you’re going to claim that you’ve thoroughly treated uncertainties, but have ignored that there are other possible datasets, then that’s not really a thorough treatment of uncertainties. It can’t be that difficult to include the other datasets in the analysis.

      • Steven Mosher

        Looking at a comparison of 7 reanalysis products over the Arctic:

        http://journals.ametsoc.org/doi/full/10.1175/JCLI-D-13-00014.1

        here is my point in a nutshell.

        1. Picking Hadcrut is a choice. That choice comes with UNCERTAINTY.
        2. Justifying that choice by referencing grey literature is not good.
        3. Justifying that choice by looking at ONE reanalysis product when there are many is also a choice. That choice comes with uncertainty.
        4. The question of who estimates the Arctic best is an open question with uncertainty.
        5. A study of 7 reanalysis products concludes they all underestimate the trends.

        There is one reanalysis tool that is dedicated to the Arctic. That bears looking into.

      • tony

        “If hadcrut4 isn’t the best representation, then the British taxpayer has wasted an awful lot of money over the years on top scientists and increasingly powerful computers.

        Which data set might be more to your liking?

        1. Hadcrut can be processed on my phone’s CPU.
        2. The question IS NOT which dataset is to someone’s liking.
        3. The question is uncertainty.

        Watch what Nic does. Do you see how he picks different periods?
        Why?
        To demonstrate that this choice does not DRIVE the answer.
        Do you see how he looks at different estimates WRT OHC?
        Why?
        To demonstrate that this choice does not drive the answer.

        Now comes the choice of temperature datasets.

        There are two paths.

        1. Choose one and defend it as the best.
        2. Show all of them and show that the choice does not drive the answer.

        Choice one can never be certain. Defending one as the best will always just lead to more debate.
        Choice two just lays out everything we know. One dataset is 0.1 C higher,
        another is 0.07 C lower.

        The point is to put all of the data and all of the choices on the table.

        That is what I was required to do as an analyst. It’s just a job.

      • Haven’t I seen some of these same critics of Nic Lewis’ post basically tell others to “go do your own analysis” when others identify a potentially incomplete area of a consensus study?

      • Steven Mosher | April 25, 2016 at 4:18 pm
        “As it stands the treatment of uncertainties you present is incomplete.
        ‘A simple but robust global energy budget approach was used, with thorough treatment of uncertainties.’ Treat the uncertainty due to selection of temperature records and then you can claim ‘Thorough’.
        Looking at a comparison of 7 reanalysis products to the arctic,
        here is my point in a nutshell.
        1. Picking Hadcrut is a choice. That choice comes with UNCERTAINTY.”
        6 others then, Steven?
        Let’s see:
        1. Picking Sadcrut is a choice. That choice comes with UNCERTAINTY.”
        1. Picking Madcrut is a choice. That choice comes with UNCERTAINTY.”
        1. Picking MERRA is a choice. That choice comes with UNCERTAINTY.”
        1. Picking ECMWF is a choice. That choice comes with UNCERTAINTY.”
        1. Picking JRA-25 is a choice. That choice comes with UNCERTAINTY.”
        1. Picking CFSR is a choice. That choice comes with UNCERTAINTY.”
        What is your argument, Steven?
        “The point is exploring (as you do elsewhere with ease) the importance or lack of importance of analysts’ choices.
        Analyst choice is a source of uncertainty.”
        No, seven different choices of uncertainty is seven sources of uncertainty.
        And if you treat them completely?
        You will certainly be quite uncertain.
        You have put up a ridiculous argument.
        Nic is quite entitled to pick a respected data set and use it as his example.
        It is not “cherry picking” for, as he says, other data sets’ estimates would not be very different.

      • ‘Nic is quite entitled to pick a respected data set and use it as his example.
        It is not “cherry picking” for as he says, other data sets estimates would not be very different.’ These last words of yours prove all the rest of your words wrong. We don’t know whether other data set estimates would be very different. If we did the other estimates, then we would know. Which was Mosher’s point.
        Nic, is this hard to do? Mosher has been asking for it since your first work. How turnkey is it to replace one data set in your code with another? (That’s a serious question, as I don’t know what shape the data sets are in, nor what’s involved in plugging them into your code.)

      • Michael Aarrh | April 27, 2016 at 8:40 am |
        “‘Nic is quite entitled to pick a respected data set and use it as his example. It is not “cherry picking” for as he says, other data sets estimates would not be very different.’ These last words of yours prove all the rest of your words wrong.”
        “FWIW, if the Cowtan and Way infilled version of HadCRUT4v4 had been used for the estimate based on the late, 1930-1950, base period the TCR and ECS estimates would each increase by 3%.”
        Michael Aarrh | April 27, 2016 at 8:43 am |
        “Huh – I didn’t see this answer before.”
        I did, hence “for as he says, other data sets estimates would not be very different.”

    • Steven – why do you claim that the ecmwf comparison exercise “is missing 14 months of data”?

      • Sorry, 13 months missing.
        Dec 2014 was missing and was infilled with Nov14, and all of 2015 was missing. Next, they did not find what you claim they found. Read again what they did.

        Nevertheless, now that you have hung your hat on ECMWF as the justification for selecting a dataset, it should be a simple matter
        to compare ecmwf to all the temperature series and pick the one with the smallest error relative to the ecmwf standard… not sure you wanted to suggest that reanalysis could justify a choice (and with no error to boot).

    • Steven: “Dec 2014 was missing and was infilled with Nov14
      and all of 2015 was missing. Next, they did not find what you claim they found. Read again what they did.”

      I think you have misread the ECMWF article, and what I wrote. They didn’t infill Dec14 with Nov14 data. They used Dec14 temperature data (entirely from the ERA-interim dataset), but with the Nov14 HadCRUT4 coverage data. That is a very minor issue IMO. And 2015 wasn’t missing – their analysis didn’t extend to 2015; since it was published in January 2015, that data cannot be described as missing.

      I regard the choice of surface temperature dataset as a relatively minor issue, and I don’t intend to pursue it further at this point. You are free to use temperature dataset(s) that have been infilled back to 1850 in research you carry out if you think fit; I prefer not to. FWIW, if the Cowtan and Way infilled version of HadCRUT4v4 had been used for the estimate based on the late, 1930-1950, base period, by which time infilling is somewhat less problematic (although I don’t regard the C&W dataset as wholly unbiased), the TCR and ECS estimates would each increase by 3%.

      • Huh – I didn’t see this answer before. So you have redone the analysis for C&W?
        I hear that you regard this as a minor issue, but since it is a point of criticism you might be well served to include these kinds of results in your publication.

  17. Pingback: While my guitar gently weeps | Climate Scepticism

  18. David Springer

    Hans Erren | April 25, 2016 at 2:24 pm |

    ATTP, if the model doesn’t agree with your assumption, you suggest to ~~adjust the observation~~ choose a different dataset more to your liking.

    Fixed that for ya!

  19. Thanks for the post Nic

    • I really do appreciate the post and all the thoughtful and informed discussion on this site. I am disheartened, though, by the amounts of effort and money expended on “certainty” where so much uncertainty is documented. Nobody, sage, seer, or scientist, can tell us what 2050 will bring. Uncertainty is, no matter the side one takes. I’m just not willing to bet money on the outcome. And puhleeese, no precautionary nonsense.

  20. The more direct use of observations such as this gives an effective TCR of over 2.3 C per doubling.
    http://woodfortrees.org/plot/gistemp/from:1950/mean:12/plot/esrl-co2/scale:0.01/offset:-3.25
    The breaking out of CO2’s effect alone, as LC do, underestimates the total effect by ignoring the GHGs that go along with it, which is why there is such a discrepancy between LC and the number you get from this graph. For policy you need the total effect, and CO2 is not the total effect, even if it is the majority; but the temperature is changing in proportion to CO2, as the graph shows, at roughly 1 C per 100 ppm. This also translates to 1 C per 1500-2000 GtCO2 of emissions, which I believe is the most useful way to express sensitivity for policy.
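    For anyone who wants to check that scaling, here is a rough sketch of the regression implied by the linked plot (this is not the LC energy-budget method); the file names and column labels are hypothetical placeholders for downloaded GISTEMP and ESRL CO2 annual series:

    ```python
    import numpy as np
    import pandas as pd

    # Hypothetical local copies of annual GMST and CO2 data.
    gmst = pd.read_csv("gistemp_annual.csv")   # columns: year, anom (deg C)
    co2 = pd.read_csv("co2_annual.csv")        # columns: year, ppm
    df = gmst.merge(co2, on="year").query("year >= 1950")

    # OLS slope in deg C per ppm, as in the linked plot (~0.01 C/ppm).
    slope = np.polyfit(df["ppm"], df["anom"], 1)[0]

    # Convert to deg C per doubling at the mean concentration:
    # dT/d(ln C) = slope * C, and a doubling is ln(2) in ln C.
    eff_tcr = slope * df["ppm"].mean() * np.log(2)
    print(f"{slope:.4f} C/ppm -> effective TCR ~ {eff_tcr:.2f} C per doubling")
    ```

    Note that this attributes all of the observed warming to the CO2 trend alone, which is exactly why it comes out higher than a CO2-only TCR.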

    • My estimates don’t ignore GHGs other than CO2; they take all of them into account.
      I agree that there is merit in use of the transient climate response to carbon emissions (TCRE) as a policy relevant measure, but I don’t think it should be the only measure used.

      • You just add uncertainty by trying to separate out CO2 from other GHGs and aerosols and then adding them back together afterwards. Better to treat the aggregate from the start. The sensitivity provided is only relative to the CO2 part of the warming, but your assumptions add about half again for the other GHGs and subtract a little for aerosols, so the net effect is much larger, and has to be to explain how much warming we already have had.

  21. Thanks for the clear and well explained post.

    What does the (1.01-Inf) 5% to 95% range for 1930-1950 mean?

    What is inf?

    • Does “Inf” in the “ECS Estimate (5% to 95%)” column confidence interval “()” mean an extremely large value?

    • Inf means infinity; under 95% of the samples drawn from the data uncertainty distributions result in the change in forcing minus the change in heat uptake rate, which constitutes the denominator of the ratio used to calculate the ECS estimate, being positive. The remainder are negative, producing physically-unrealistic negative estimates of ECS, which are not counted towards the confidence levels.
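      To illustrate the mechanism with made-up inputs (the means and standard deviations below are purely illustrative, not the LC14 values), a quick Monte Carlo sketch:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      n = 1_000_000
      F2x = 3.71                      # Wm-2, forcing per CO2 doubling
      dT = rng.normal(0.75, 0.15, n)  # change in GMST (C), illustrative
      dF = rng.normal(1.2, 0.4, n)    # change in forcing (Wm-2), illustrative
      dQ = rng.normal(0.7, 0.3, n)    # change in heat uptake (Wm-2), illustrative

      denom = dF - dQ
      valid = denom > 0               # non-positive denominators give unphysical ECS
      ecs = F2x * dT[valid] / denom[valid]

      print(f"fraction of physically meaningful samples: {valid.mean():.2f}")
      print(f"median ECS of those: {np.median(ecs):.2f} C")
      # If fewer than 95% of samples have a positive denominator, the 95th
      # percentile of the ECS estimate is unbounded and is reported as Inf.
      ```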

  22. Some points, just for interest.

    The term “heat uptake” seems to be a peculiarly Warmist one. I am unsure why normal terms describing physical processes are avoided. Is it possible that Warmists do not understand normal physics, or is it that they consider their cargo cult science to be superior to normal science?

    The Warmist hypothesis seems to be that CO2 is somehow responsible for variations in ocean temperatures below the level of solar radiation penetration – maybe 200 m. This seems impossible, in general. In water of the same composition, warmer water is less dense. It floats. Not only that, it loses energy to its cooler environment, and cools. It cannot both descend, and become less dense, at the same time.

    Claiming that wind, atmospheric pressure, or ocean currents somehow form pockets of warm water at great depth is simply nonsense. In any case, such warmer water would cool, being surrounded by cooler water – above, below, and to the sides. The water thus warmed, will, of course, become less dense, and rise. No deep pockets of warm water can exist without a continuous heat source to replace the heat being lost.

    Of course, such a heat source exists at the bottom of every ocean, lake, river, or even glacier. This is the crust, which under the deep oceans is generally less than 10 km away from the molten interior of the Earth.

    There exist an unknown number of thermal vents, crustal cracks exposing molten magma, and various types of vulcanism. The total heat energy warming the ocean from beneath is totally unknown.

    The Warmist understanding of physics in relation to the oceans seems greatly flawed. Oceanologists, or marine scientists, have far more understanding of the oceans than climatologists, one would hope. They admit to a lack of understanding in many areas of heat content, and heat transport.

    No heat hiding in the oceans. No warming of the deep oceans from the top. Any Warmist care to provide a mechanism for warming the deep oceans that depends on the magical properties of CO2, and can be expressed using terms generally used by physicists?

    I’ll bet you can’t.

    Cheers.

    • Curious George

      Mike, https://en.wikipedia.org/wiki/Geothermal_gradient is fairly good. Your observation that climatologists understand everything much better than anybody else – and never admit a lack of understanding – is very accurate.

      • It’s very inaccurate. Oceanographers are often climate scientists. It’s very easy to understand how the oceans warm with each addition to the level of CO2 molecules in the atmosphere. Flynn will establish some impossible personal standard, and then declare reality to be unreal. He’s a worthless gimmick. Judith referred to his nonsense as incorrect and its correction not interesting enough for her to expend the time. Because he cannot be corrected. Nobody can whack all the moles springing out of robot Flynn’s arcade brain.

      • It’s very easy to understand how the oceans warm with each addition to the level of CO2 molecules in the atmosphere.

        It’s also easy to understand that colder than average waters sink readily while warmer than average waters are more buoyant, which is why the ocean at depth is closer to freezing than it is to the 15C or so of global average temperature.

  23. Thank you, Nic.

  24. Steven Mosher — What TCR study/analysis do you think is most appropriate?

    • Segrest,

      Search for the study that produces the highest value of TCR to suit your green ideological beliefs.

  25. O/T but maybe interesting.

    From Nature Physics (last week?) –

    “As the spacecraft flew through Venus’s atmosphere, deceleration by atmospheric drag was sufficient to obtain from accelerometer readings a total of 18 vertical density profiles. We infer an average temperature of T = 114 ± 23 K . . . ”

    Maybe Venus is not quite as hot everywhere as Hansen thinks. Lowest temperature on Earth is a little warmer than this at about 160 K.

    Modelling, anyone, or should we believe calculations based on actual instrument readings, backed up by normal physics, initially?

    Cheers.

    • Just saw a study (can’t locate it at the moment) that says that the atmosphere may have more water (higher than the 0.1% or less assumed); otherwise the atmospheric window would be too open and it would be colder.

      The surface “air” density is 65 kg/m3. The surface atmosphere is about 1/10th the density of liquid ether and functions more like a liquid than a gas. The surface air density on earth is about 1.225 kg/m3.

      http://www.sciencedirect.com/science/article/pii/S0019103515004509
      99% of the time winds are under 1.8 m/s (4 mph).

      Only about 1/3 of surface heat loss on earth is by radiation. There is no evaporation on Venus and with average winds at cm/s rates convection is limited. There aren’t adiabatic updrafts.

      The two atmospheres don’t have a lot in common.

      The planet doesn’t have a magnetic field to speak of. The interaction of the ionosphere and the sun generates the magnetic forces that keep the atmosphere in place; otherwise it would be another Mars or Mercury.

      • PA

        Just saw a study (can’t locate it at the moment) that says that the atmosphere may have more water (higher than the 0.1% or less assumed) ….

        Clearly that study must be wrong. Because, if the atmosphere had more water in it, the Earth would be heavier and that means all the turtles supporting Earth would have been squashed all the way down.

      • The year Bertrand Russell won the Nobel Prize in Literature, 1950, he was giving a public lecture on astronomy. He was describing how the earth orbits around the sun and how the sun, in turn, orbits around the centre of a vast collection of stars called our galaxy. He was also describing how our universe came to life through a cosmic event which we now refer to as ‘The Big Bang’.

        At the end of the lecture, the little old lady got up and said:
        “Mr. Russell, what you have told us is a load of old rubbish; it is nothing like that at all, the world is a flat plate and it is supported on the back of a giant turtle, so there!”

        The scientist gave a superior smile before replying, “I thank you for that, but please, could you tell me, what is holding up the giant turtle?”
        “You think you’re very clever, young man, very clever indeed,” said the old lady, “but now I know you don’t know anything. You see, everyone knows the answer to that: it’s TURTLES ALL THE WAY DOWN…”

        http://www.turtlesallthewaydown.com.au/turtles-story.php

        Therefore, PA, clearly the study you referred to must be wrong

      • Don’t have a pony in this race since I can’t find the study.

        The people doing the Venus probe sensor analysis are pretty sure of themselves, and think it is less than 0.1% water vapor rather than around 1% that the radiation balance study was advocating.

      • “Only about 1/3 of surface heat loss on earth is by radiation”
        True, but 100% of the surface heat loss eventually leaves the earth by radiation. The other causes just take the heat to a higher level to radiate off.

      • angech | April 28, 2016 at 10:00 am |
        “Only about 1/3 of surface heat loss on earth is by radiation”
        “True, but 100% of the surface heat loss eventually leaves the earth by radiation. The other causes just take the heat to a higher level to radiate off.”

        According to the Virginia highway department pavement gets 33°C warmer than vegetation (which is normally at ambient temperature).

        The mode of heat loss makes a huge difference. The pavement has to heat up until its Stefan-Boltzmann + convection (which depends on the temperature differential) + conduction-into-the-ground (thermal mass absorption) loss is twice as high as that of the plants, which lose over half of theirs by evaporation.

        In an ambient of 90F (32.2C) the road reaches 65.2C (149.5F)

        The air is radiating 493 W/m2, the plants are radiating 493 W/m2, the road is radiating 743 W/m2.

        Yeah, the method of heat loss makes a big difference.

        And the daytime surface temperature determines how much heat is stored in the ground (raising nighttime temperatures).

      • PA | April 28, 2016 at 10:46 am |

        “According to the Virginia highway department pavement gets 33°C warmer than vegetation (which is normally at ambient temperature).
        The air is radiating 493 W/m2, the plants are radiating 493 W/m2, the road is radiating 743 W/m2.”
        Whoa
        something is wrong here.
        First up, a blackbody can only heat up to what is coming in and radiate it back.
        It cannot get hotter than what it is absorbing.
        ~340 W/m² of solar radiation is received by the Earth on average per 24 hours. So obviously it can get a lot more in at the peak of the day.
        The road is radiating 743 W/m2 – in winter? In summer? In the middle of the day, or early in the morning, or late in the day?
        Who knows.
        Nonetheless the vegetation is at “ambient temperature”?
        I would say the road is also at the ambient temperature [for the road].
        The vegetation has a much higher albedo, trust me, so the amount of heat absorbed is a lot less than that hitting the road. If you had black plants they would be at the same ambient temperature as the road.
        The air just above the road is radiating at the same temperature as the road, just as that above the plants is at the same as the plants.
        The road and the plants both lose heat in proportion to what they absorb. Yes, a lot is taken away by convection, but it still has to go into space.
        Proof: the next day it all starts over again.
        Sorry for being nitpicky.

      • angech | April 29, 2016 at 4:37 am |

        Whoa
        something is wrong here.
        First up a blackbody can only heat up to what is coming in and radiate it back…
        [followed by a quarter page of additional nonsense]

        Your analysis is as dumb as the renewable energy arguments. For pretty much the same reason.

        http://www.annemergmed.com/article/S0196-0644(95)70005-6/pdf
        Pavement in Arizona is hot enough to cause 2nd degree burns in 35 seconds from 10 AM to 5 PM.

        Black body radiation is defined in terms of energy at a surface, W/m2. A watt is a joule per second, not joules per day. The 342 W/m2 average is quoting joules per day in another form.

        If I boil water on my stove once per day (7000 BTU/hr, or 2050 W) using the burner for 10 minutes, my daily average burner output is 14 watts. 14 watts is just enough to keep an open pot of water at body temperature. Yet I still manage to boil water, and in under 10 minutes. This must surprise you.

        The peak energy received by pavement in Virginia at 1:00 PM in the summer from solar sources is roughly 1000 W/m2. The atmosphere at 90 F (32.2 C/305.36K) contributes 493 W/m2 also for an incoming total of 1493 W/m2. Pavement isn’t a perfect absorber, but it isn’t a perfect radiator either.

        Anyway, it ends up radiating around 743 W/m2, and expends the rest of the energy heating the asphalt (that is how we got to 65.2 C), conducting into the ground underneath, and convecting into the air flowing over it, until conduction/convection/radiation heat losses total 1493 W/m2.

        http://users.wpi.edu/~rajib/Draft-2White-Paper-on-Reduce-Harvest-Heat-from-Pavements-Nov-2008.pdf
        A study of how to extract energy from pavement. Peak temperature in the dirt two inches under the asphalt is highest in the late afternoon and is around 50C.
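        As a cross-check of the blackbody figures being traded in this sub-thread (my own arithmetic, assuming emissivity 1), the Stefan-Boltzmann law E = σT⁴ reproduces both numbers:

        ```python
        SIGMA = 5.67e-8  # W m-2 K-4, Stefan-Boltzmann constant

        def bb_flux(t_celsius):
            """Blackbody emission (W/m2) at the given temperature, emissivity 1."""
            t_k = t_celsius + 273.15
            return SIGMA * t_k ** 4

        print(f"air/plants at 32.2 C: {bb_flux(32.2):.0f} W/m2")  # ~493
        print(f"road at 65.2 C:       {bb_flux(65.2):.0f} W/m2")  # ~743
        ```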

      • PA | April 28, 2016
        “According to the Virginia highway department pavement gets 33°C warmer than vegetation (which is normally at ambient temperature).”
        Too vague and unspecified and factually [re the vegetation] wrong.
        [aside -I had black solar heating on the roof when I had a swimming pool].
        “The peak energy received by pavement in Virginia at 1:00 PM in the summer from solar sources is roughly 1000 W/m2.”
        A little better.
        ” The atmosphere at 90 F (32.2 C/305.36K) contributes 493 W/m2 also for an incoming total of 1493 W/m2.”
        Good, you have explained the 493 W/m2 derivation; hence one assumes “ambient temperature”.
        “The air is radiating 493 W/m2, the plants are radiating 493 W/m2, the road is radiating 743 W/m2.”
        No this is still not right.
        Paraphrasing to show why.
        “The peak energy received by plants [vegetation?] in Virginia at 1:00 PM in the summer from solar sources is roughly 1000 W/m2. The atmosphere at 90 F (32.2 C/305.36K) contributes 493 W/m2 also for an incoming total of 1493 W/m2. Plants are not perfect absorbers, but they are not perfect radiators either.”
        I cannot do the maths* exactly so to paraphrase again
        “Anyway the plants would end up radiating 630 W/m2.”
        The 137 W/m2 difference pavement/plants means the road would be somewhat warmer than the plants which are in turn warmer than the surface atmosphere.

        [Maths*: 250 + 25 = 275; 275 ÷ 2 ≈ 137; 137 + 493 = 630]

      • “Black body radiation is defined in terms of energy at a surface, W/m2. A watt is a joule per second, not joules per day. The 342 W/m2 average is quoting joules per day in another form.”
        No
        Your misunderstanding
        My comment
        “~340 W/m² of solar radiation is received by the Earth on average per 24 hours.”
        is from Wikipedia see below
        “Incoming radiant energy (shortwave)
        The total amount of energy received per second at the top of Earth’s atmosphere (TOA) is measured in watts and is given by the solar constant times the cross-sectional area of the Earth. Because the surface of a sphere is four times the cross-sectional area of a sphere, the average TOA flux is one quarter of the solar constant and so is approximately 340 W/m².
        *** Of the ~340 W/m² of solar radiation received by the Earth,***
        an average of ~77 W/m² is reflected back to space by clouds and the atmosphere and ~23 W/m² is reflected by the surface albedo, leaving ~240 W/m² of solar energy input to the Earth’s energy budget. This gives the earth a mean net albedo of 0.29.
        If you wish to misunderstand, that’s your problem; I never quoted “joules per day”.
        Wiki and I quoted the average joules per second over a day.
        With no atmosphere the roads would boil away at 1:00 pm in summer. Thank god for GHGs and convection “cooling” things down.

  26. Sorry, but I don’t think this study is at all reliable. The value used for the sun’s input ([30]) is TSI, and there is no mention of clouds or albedo. So unless I have missed it, there is nothing representing actual insolation in the calculations. Other commenters (popesclimatetheory and Burl Henry) have mentioned ice cover and pollution as factors that affect insolation but were omitted, and I would like to add clouds. Without knowing how these three factors behaved over the study period, any findings are to my mind completely unreliable.

    • As far as I understand, the highly uncertain cloud feedback is comparable to the current global heat uptake rate.
      (IPCC WGI AR5, Figure 7.10 in Chapter 7 – Clouds and Aerosols, page 588)

      https://www.ipcc.ch/report/graphics/images/Assessment%20Reports/AR5%20-%20WG1/Chapter%2007/Fig7-10.jpg

      Figure 7.10 | Cloud feedback parameters as predicted by GCMs for responses to CO2 increase including rapid adjustments. Total feedback shown at left, with centre light- shaded section showing components attributable to clouds in specific height ranges (see Section 7.2.1.1), and right dark-shaded panel those attributable to specific cloud property changes where available. The net feedback parameters are broken down in their longwave (LW) and shortwave (SW) components. Type attribution reported for CMIP3 does not conform exactly to the definition used in the Cloud Feedback Model Intercomparison Project (CFMIP) but is shown for comparison, with their ‘mixed’ category assigned to middle cloud.

      As can be seen in this figure, the unit for cloud feedback is W/m² per °C.
      I believe the °C in this expression is the increase in surface temperature since preindustrial times. The value is approximately 1 °C.

      More on my view on this here:
      The cloud feedback is comparable to the current global warming

      • Science of Fiction,

        Can you please clarify for me. Are you agreeing or disagreeing with those who say there is a significant error in Nic Lewis’s estimate? If you are suggesting there is a significant error in Nic’s analysis, by how much do you think correcting the error would change Nic’s estimates?

      • @ Peter. I’m sorry, but I’m not ready to answer that question. My concern is that there are relatively large uncertainties in variables which I think are crucial to such estimates.

      • OK, but without some idea of how much you think it might change Nic’s estimates, I have to assume it’s already covered in the analyses or it’s negligible. Therefore, I’ll regard Nic’s results as the best we have available for policy analysis for the period to 2100.

  27. Wow! Looks great, thank you. Only skimmed it so far. Will study it this weekend. Great work. Loved your use of Argo OHC.

  28. Peter Lang:

    You asked “how much would it change the estimates of ECS and TCR”

    These estimates are meaningless, since there is zero climatic effect with respect to the concentration of greenhouse gases in the atmosphere.

    All of the warming that has occurred has been due to the reduction in the amount of anthropogenic SO2 emissions due to international Clean Air efforts.

    Projections of expected anomalous temperature increases, based solely upon the amount of reduction in SO2 emissions and the 0.02 deg C climate sensitivity factor, for the years 1975–present, are accurate to within a tenth of a deg C or less (for any year for which SO2 emissions are known), when transient temperature changes due to El Niños, La Niñas, and volcanic eruptions are accounted for.

    Because of the precise agreement, there can never have been any additional warming due to greenhouse gases. It is all fiction.

  29. I have had some login problem with commenting on this post.

    My point was just that, for Mosh and ATTP, the reason Nic did not look at other surface temperature datasets is probably that they are a small source of uncertainty compared to aerosol forcing estimates – a point Lindzen, for example, has made many times. The whole model validation exercise has the same problem. Maybe more resources should be put into this area.

  30. Nic wrote: “To summarize, there are no good grounds for believing that the LC14 original or updated results are biased by historical forcings having different efficacies to that of CO2.”

    The abstract of the newest paper (Gregory 2016) on this subject concludes:

    “These findings help to relieve the apparent contradiction between the larger values of effective climate sensitivity diagnosed from AOGCMs and the smaller values inferred from historical climate change.”

    http://onlinelibrary.wiley.com/doi/10.1002/2016GL068406/full

    I am confused why you and some modelers disagree about whether current historical data on delta_T, delta_F, and delta_Q are inconsistent with the hindcasts of AOGCMs that exhibit higher climate sensitivity.* This subject would probably require a full post or paper.

    Has anyone ever made a table of delta_T, delta_F, and delta_Q for 1% scenarios or other scenarios so we can actually see how the TCR and ECS (calculated using energy balance methodology) increase with time? This would be a more tangible way (compared with debates about ERF) to present the controversy.

    For example, this old post by Isaac Held suggests that delta_Q for models (0.6 W/m2) is higher than in your table – which would account for much of the difference in ECS.

    http://www.gfdl.noaa.gov/blog/isaac-held/2011/03/11/3-transient-vs-equilibrium-climate-responses/

    If one goes back to fundamentals (conservation of energy), for a given stratospherically-adjusted forcing, one only has two parameters left to “play” with: the climate feedback parameter (which includes feedbacks in the troposphere that are part of ERF) and ocean heat uptake. So I’d like to know more about ocean heat uptake in models, not ERF.

    Sorry I missed this post when it first appeared. I don’t know if you want to respond when few are listening.

    Frank

  31. Frank, thanks for your comment.

    First, on the delta_Q point, Isaac Held’s unrounded figure is 0.56 W/m2, and that is for circa 2010; the average over 1995-2015 would be a bit lower. As stated in Appendix B, the rate I use over that period is 0.63 W/m2, a bit higher. The rate in the table is lower, because it is net of the (model-derived, scaled down) heat uptake rate over the base period.

    There are several reasons for CMIP5 models matching, on average, the rise in GMST over the historical period (to 2005). Higher average aerosol forcing than per AR5 is a major reason; higher average heat uptake is another (I’m dubious about the value Isaac Held quotes: AR5 shows higher heat uptake for the CMIP5 mean than observations). A third reason is that many models exhibit lower effective sensitivity to each forcing increment for the first few decades after it is imposed than subsequently.

    Jonathan Gregory’s new paper is interesting, but it is based only on two UK Met Office models. More importantly, it only helps the case for the models being right that effective climate sensitivity is higher than energy budget studies indicate if the pattern of sea surface warming over the historical period has been distorted by natural internal variability into a pattern that causes effective sensitivity, and hence the rise in GMST, to be substantially lower than it would otherwise have been. Although not impossible, that seems quite unlikely to me. It seems much more likely that the models get the development of SST warming patterns wrong.

    Hope this helps.

  32. Nic: Thanks for the reply. I’m surprised your post didn’t prompt more discussion.

    I agree with the first two reasons you discuss, but question the third: “A third reason is that many models exhibit lower effective sensitivity to each forcing increment for the first few decades after it is imposed than subsequently.”

    Whatever differences there may be between traditional radiative forcing and effective radiative forcing (which varies with time?), there are only two places for the energy to go in the real world: at equilibrium, all additional energy from forcing goes out to space via the climate feedback parameter and a rise in surface temperature. On the way to equilibrium, some is going into the ocean (below the mixed layer, if we assume equilibrium between the surface temperature and the mixed layer). Thus, I want to know what models project for ocean heat uptake and temperature rise, not what they say about the “efficacy” of a forcing. If I remember correctly, efficacy is defined by the temperature rise per W/m2 of forcing, and there is no reason to believe that models get any temperature response to forcing right. If you assume efficacies are correct, then you may as well assume model TCR and ECS are right.

    Writing the above paragraph leaves me more confused than ever, because I remember that Otto (and LC14?) use effective radiative forcing rather than a measure of radiative forcing directly related to the “imposed” radiative imbalance (before the planet adjusts). (If you have written about your decision to use ERF in these papers, please point me in the right direction.) Shindell, Marvel and Gregory are on my growing list of paywalled papers to collect on my next trip to a university library. Hansen (2005) isn’t behind a paywall, so I’ll start by re-reading it.

    • I do recognize that the direct reflection of SWR by aerosols measured in W/m2 doesn’t include the indirect aerosol effect on clouds. And oxidation of methane in the stratosphere deposits a little water vapor there. (However, volcanic aerosols remain in the stratosphere only a few years, so that water vapor may not be significant compared with that transported in and out of the stratosphere naturally.) Back to reading Hansen (2005).

    • Notes from Hansen (2005):

      “Ultrapurists may object to calling Fa a forcing, and object even more to forcings defined below, because they include feedbacks. Fa allows only one climate feedback, the stratospheric thermal response to the forcing agent, to operate before the flux is computed. The rationale for considering additional forcing definitions, which allow more feedbacks to come into play, is the desire to find a forcing definition that provides a better measure of the long-term climate response to the presence of the forcing agent.”

      However, feedbacks are included in the climate feedback parameter; it is the sum of all feedbacks, including the Planck feedback. Almost all of the feedbacks summed to produce the climate feedback parameter are fast feedbacks. If feedbacks influence both delta_F and the climate feedback parameter, aren’t you accounting for the same phenomena twice when you use ERF in energy balance models?

      With all of limitations of the GISS model discussed, is there any reason to believe an efficacy different from unity is reliable?

      The section on oxidation of methane to produce stratospheric water vapor doesn’t make any sense. According to Hansen, methane is a well-mixed GHG with a scale height of 50 km. Oxidation of 1.8 ppm of methane can only produce 3.6 ppm of water vapor and significantly supplement the 3.0-5.4 ppm of water vapor moving upward from the tropopause – but only if a significant fraction is destroyed – which is inconsistent with the assumed scale height. Furthermore, 1% of the atmosphere lies between 10 and 1 mb, where the GISS model believes methane is oxidized. The stratosphere needs to be turned over 7 times a year for that destruction to account for the 7 year half-life of methane in the atmosphere. That turnover rate is incompatible with the persistence of volcanic aerosols in the stratosphere. If most methane oxidation occurs in the troposphere, then methane is not a well-mixed GHG. These inconsistencies make me doubt that the GISS model can reliably account for the additional forcing from oxidation of methane in stratosphere.

    • Frank, just a brief response at present.
      Efficacies are defined as the GMST response per unit forcing relative to that of CO2, so they do not imply anything about TCR or ECS.
      The use of ERF rather than RF is meant to (and appears to) reflect most of the differences in efficacies between forcings. See the section near the start of Chapter 8 of the IPCC AR5 WG1 report – I recommend reading it (downloadable from the IPCC website). LC14 did indeed use ERF estimates, as reported in AR5.
      Hansen 2005 is an excellent paper – much higher quality than Marvel et al. 2015.

      • Frank, you say:

        ‘I agree with the first two reasons you discuss, but question the third: “A third reason is that many models exhibit lower effective sensitivity to each forcing increment for the first few decades after it is imposed than subsequently.”’

        This issue has nothing to do with forcing efficacy or what forcing measure is used. It is simply that in many models the climate feedback parameter – the rise in outgoing net radiation per unit rise in GMST – falls over time. This seems to apply separately to each forcing increment. The reasons seem to be linked to changes in the pattern of SST warming, mainly in the tropical Pacific (linked to a weakening of the Walker atmospheric circulation) and the Southern Ocean. The real world has, so far, not exhibited such SST warming pattern behaviour.

  33. Pingback: Weekly Climate and Energy News Roundup #224 | Watts Up With That?

  34. Pingback: The Transient Climate Response (TCR) revisited from Observations (once more) | Watts Up With That?

  35. As a climate bystander, not that interested in all the detail, two points strike me.
    1. The robustness of the methodology and formula(e) needs more emphasis. This will doubtless not stop the ankle-snappers from directing people to this or that alternative data set, or to some recent uptick in some or other data, but it would underline the pointlessness of those comments. Basic calculus ensures that a (%) perturbation of the inputs means a corresponding (%) perturbation of the output – see the sketch below. Together with the use of extended initial and final ranges, this makes the Lewis calculations very robust. The uncertainty ranges in Lewis and Curry must be very conservative. If the forcings could ever be so far out, you would wonder about the whole subject!
    2. “From analysing all CMIP5 models for which I have data, the mean difference in the two ECS measures is somewhat under 10%, using the same standard method of estimating AOGCM equilibrium sensitivity as in IPCC AR5.” This seems important, if correctly understood. Dynamics sophisticates are likely to treat the simple formulae being used with some doubt (to be polite). If in fact they provide very reasonable approximations to the full model calculations across a range of models, that seems very strong support for the approach, and worth highlighting.
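    The sketch mentioned in point 1: for a ratio estimate of the form TCR = F2xCO2 × ΔT/ΔF, a fractional perturbation of an input produces a comparable (not amplified) fractional perturbation of the output. Illustrative numbers only:

    ```python
    # Fractional sensitivity of a ratio estimator TCR = F2x * dT / dF.
    F2x, dT, dF = 3.71, 0.75, 2.1   # illustrative values, not the LC14 inputs
    tcr = F2x * dT / dF

    dF_pert = dF * 1.05             # perturb the forcing change by +5%
    tcr_pert = F2x * dT / dF_pert
    print(f"TCR: {tcr:.3f} -> {tcr_pert:.3f} C "
          f"({(tcr_pert / tcr - 1) * 100:+.1f}% for a +5% forcing perturbation)")
    ```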

  36. Semi-empirical mann bends over backwards ter retain
    the models-oh-you-cargo-cult-pseudo-scientist-you!

    No matter how bewdiful, doe-eyed or high -ranking the
    model, if it don’t conform ter boring old obs – it’s WRONG!

    wattsupwiththat.com/2016/05/11/study-by-mann-admits-the-pause-in-global-warming-was-not-predictable-but-lets-models-off-the-mann

  37. An addend … o baby look at u now,. )

  38. Pingback: Una nuova stima della sensibilità climatica | Climatemonitor

  39. Pingback: Assessment of Approaches to Updating the Social Cost of Carbon | Climate Etc.