Resplandy et al. Part 3: Findings regarding statistical issues and the authors’ planned correction

By Nic Lewis

Introduction

The Resplandy et al. (2018) ocean heat uptake study (henceforth Resplandy18) is based on measured changes in the O2/N2 ratio of air sampled each year, relative to air originally sampled in the late 1980s and early 1990s and stored in high-pressure tanks, and on measured changes in atmospheric CO2 concentration. These are combined to produce an estimate (ΔAPOObs) of changes since 1991 in atmospheric potential oxygen (ΔAPO). They break this series down into four components, including one attributable to ocean warming (ΔAPOClimate). By estimating the other three, they isolate the implied ΔAPOClimate and use it to estimate the change in ocean heat content. In two recent articles, here and here, I set out why I thought the trend in ΔAPOClimate – from which they derived their ocean heat uptake estimate – was overstated, and its uncertainty greatly understated.

The easiest issue to explain is that they did not fit a simple linear ordinary least squares (OLS) regression through their ΔAPOClimate series; instead they used a weighted least squares (WLS) regression, with weights derived from the 1σ (1 standard deviation) uncertainty estimates accompanying the data series. This was not made clear in the paper. Since they set the uncertainty to zero in 1991 by assumption, this effectively placed unlimited weight on the first observation, forcing the trend line through it and biasing the slope upwards, as shown in Figure 1 below.


Figure 1. Ordinary and weighted least squares regression linear fits to the ΔAPOClimate values.
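The mechanism behind Figure 1 can be reproduced in a few lines of NumPy. This is a minimal sketch on synthetic data (the trend, noise level, first-year offset and uncertainty values are all made up for illustration, not taken from the real ΔAPOClimate series): giving the first observation a near-zero assumed uncertainty hands it an effectively unlimited WLS weight, pinning the fitted line to it.

```python
import numpy as np

# Illustrative sketch (synthetic data, not the actual dAPO_Climate series):
# assigning the first observation (near-)zero uncertainty gives it an
# effectively unlimited WLS weight, forcing the fitted line through it.
rng = np.random.default_rng(0)
t = np.arange(26, dtype=float)            # elapsed years since 1991
y = 0.9 * t + rng.normal(0, 2, 26)        # hypothetical trend plus noise
y[0] = -3.0                               # first value offset below the trend line

sigma = np.full(26, 2.0)
sigma[0] = 1e-6                           # "zero" 1991 uncertainty, as assumed
w = 1.0 / sigma**2                        # WLS weights: inverse variances

X = np.column_stack([np.ones_like(t), t])
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
b_wls = np.linalg.solve(X.T @ np.diag(w) @ X, X.T @ (w * y))  # weighted normal equations

# The WLS intercept is pinned to y[0]; the slope shifts to compensate.
print(b_ols[1], b_wls[1])
```

On the real series the same thing happens: the 1991 value sits below the OLS line, so pinning the fit to it tilts the slope upward.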

In addition, they did not calculate their uncertainties following the methods stated in their paper. Associated with ΔAPOObs and each of the components is a 1σ estimate intended to summarize its uncertainty range. One source of uncertainty in ΔAPOObs is thermal fractionation error (TFr), which arises from temperature variations in the air storage tanks. Others include corrosion of, and leakage from, the tanks. The non-ΔAPOClimate components of ΔAPOObs also have uncertainties, including those relating to biophysical processes and fossil fuel characteristics.

It turns out that TFr is the only significant component of the error in ΔAPOClimate that varies randomly from year to year. Resplandy18 states that this error (1σ) is ± 2 per meg annually after July 1992, and ± 4 per meg before. I therefore treat the TFr error for 1992 by splitting it into two halves, with these TFr errors assumed independent. How TFr errors were treated in Resplandy18 is unclear; it looks as if they were not reduced from ± 4 to ± 2 until the start of 1994.

As I will explain below, the other components of the uncertainty in ΔAPOClimate simply add up to a linear function of time, with a slight contribution proportional to the magnitude of the ΔAPOClimate series itself. Since ΔAPO values represent changes since 1991, by construction the error bounds for these components are zero in 1991 and scale up with time over 1992–2016. I recalculated the uncertainties using the source data described in Resplandy18 and I found them to be much larger than stated in the paper.

The second author on the paper, Ralph Keeling, has put out this statement:

I am working with my co-authors to address two problems that came to our attention since publication. These problems, related to incorrectly treating systematic errors in the O2 measurements and the use of a constant land O2:C exchange ratio of 1.1, do not invalidate the study’s methodology or the new insights into ocean biogeochemistry on which it is based. We expect the combined effect of these two corrections to have a small impact on our calculations of overall heat uptake, but with larger margins of error.  We are redoing the calculations and preparing author corrections for submission to Nature.

He has also made a posting at realclimate.org describing the corrections they are proposing. I will comment on these below.

In this third article I show what I consider to be a correct estimation of the errors in ΔAPOClimate, of the trend in ΔAPOClimate, and of the uncertainty in that trend estimate, and I will incorporate the revised land O2:C exchange ratio that the authors now plan to use.

The nature of errors in ΔAPOClimate

Resplandy18 conducted a sensitivity analysis by generating a million sets of random error series, scaled so that their variances matched the stated uncertainty ranges for measurement and other errors, recomputing the ΔAPOClimate values each time, and then using the standard deviation of the resulting million values as the 1σ uncertainty for ΔAPOClimate.
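This kind of Monte Carlo error propagation can be sketched as follows, assuming for illustration just two error types: a systematic, trend-like error drawn once per realisation and a random annual error. The real calculation perturbs every input series in the APO budget, and the magnitudes here are made up.

```python
import numpy as np

# Minimal sketch of Monte Carlo error propagation: perturb the central
# series many times, then take the per-year standard deviation as 1-sigma.
# Two hypothetical error types only; magnitudes are illustrative.
rng = np.random.default_rng(1)
n_draws, n_years = 100_000, 26

base = 0.9 * np.arange(n_years)                       # stand-in central series
trend_err = rng.normal(0, 0.05, (n_draws, 1)) * np.arange(n_years)  # one draw per realisation, grows with time
annual_err = rng.normal(0, 2.0, (n_draws, n_years))   # fresh random error every year
samples = base + trend_err + annual_err

sigma_by_year = samples.std(axis=0)   # 1-sigma uncertainty for each year's value
```

Note how the two error types combine: in year 0 only the annual error contributes, while in later years the trend-like error's contribution grows linearly.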

I have recalculated uncertainty in ΔAPOClimate using information from the Resplandy18 Methods sections and Extended Data Table 3 and their relevant cited source studies. In Figure 2 I compare my estimates of how the total 1σ error magnitude develops over 1991–2016 (red diamonds) with the Resplandy18 Extended Data Table 4 values (black circles). I have adopted for the present their convention that there is zero TFr error in 1991.


Figure 2. Total 1σ error in ΔAPOClimate, for each year from 1991 to 2016, according to Resplandy18 and as calculated from source data. The magenta line shows how the size of total error develops when the thermal fractionation error element is excluded. The blue line shows that the development of {total error excluding thermal fractionation error} can be closely modelled by a fitted linear function of time since 1991 and magnitude of ΔAPOClimate.

After 1993, my calculated 1σ error magnitude exceeds that per Resplandy18. When the random annual TFr error is removed, my calculated remaining error 1σ magnitude (magenta line in Figure 2) grows almost linearly with elapsed years (year − 1991). It can be modelled as 0.667 × elapsed years, with an R2 of 0.999.

This almost perfect fit arises because the non-TFr error consists primarily of trend uncertainty (components that rise linearly over time) plus a lesser amount of scale uncertainty (components that rise linearly with the ΔAPO values themselves), with the small remainder relating mainly to fossil fuel emissions uncertainty, the annual errors in which are highly autocorrelated.[1] Trend errors in ΔAPOClimate increase linearly with time, emissions uncertainties almost do so, while scale-systematic errors increase linearly with ΔAPOClimate – which to a first approximation itself increases linearly with time. The non-TFr error 1σ can be modelled even more accurately as a linear function of both time and ΔAPOClimate (thin blue line in Figure 2), with an R2 of 0.9996.

It is not only the 1σ magnitude of the non-TFr errors – their standard deviation – but the actual errors in each year that can be accurately modelled as a function of time. When I sample randomly all the non-TFr error contributions to the ΔAPOClimate time series and model those errors as a multiple of elapsed years, the median R2 is 0.991, rising to 0.995 if ΔAPOClimate is also used as an explanatory variable. That is because the non-TFr errors have no randomness for individual years. Each component of them depends on a parameter that is determined once, randomly, with that value applying throughout the 1991–2016 period.
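Fitting such a through-origin model sigma ≈ c × elapsed years, and computing R2, takes only a few lines. The sigma values below are made-up, near-linear stand-ins; the article's fitted coefficient for the real series is about 0.667 per meg per elapsed year.

```python
import numpy as np

# Sketch of fitting a through-origin model sigma ~ c * elapsed_years and
# computing R^2. Toy error magnitudes, not the real series.
rng = np.random.default_rng(7)
elapsed = np.arange(26, dtype=float)                  # years since 1991
sigma = 0.667 * elapsed + rng.normal(0, 0.1, 26)      # near-linear toy data
sigma[0] = 0.0                                        # zero by construction in 1991

c = np.linalg.lstsq(elapsed[:, None], sigma, rcond=None)[0]
fitted = c[0] * elapsed
ss_res = float(((sigma - fitted) ** 2).sum())
ss_tot = float(((sigma - sigma.mean()) ** 2).sum())
r_squared = 1 - ss_res / ss_tot                       # near 1 for near-linear growth
```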

The inapplicability of Resplandy18’s statistical model

A simple linear trend regression is based on the assumption that the dependent variable values yt are affected by independent, random errors εt with zero mean. That is:

yt = a + bt + εt,  with all the εt independent

If the errors εt are independent and all have equal variance, ordinary least squares (OLS) regression is optimal for estimating the trend b. If the variances are not constant (which is called heteroscedasticity) then weighted least squares (WLS), in which the weight given to the squared error for each data point t is inversely proportional to the variance of εt, is optimal.[2] If the function being fitted can accurately represent the expected value of the variable y, both OLS and WLS will produce unbiased estimates of the function parameters – of the intercept and slope, in the case of a linear function – but with WLS providing a more precise (smaller uncertainty) estimate if the εt have unequal variances.
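One way to see the relationship between the two estimators: WLS is just OLS applied to rows rescaled by 1/σt, and with equal σt the rescaling is a common constant, so WLS reduces exactly to OLS (footnote [2]). A sketch with synthetic data:

```python
import numpy as np

# Sketch: WLS as OLS on rescaled rows; equal sigmas reproduce OLS exactly.
rng = np.random.default_rng(2)
t = np.arange(26, dtype=float)
y = 1.0 + 0.9 * t + rng.normal(0, 1, 26)   # synthetic linear series
X = np.column_stack([np.ones_like(t), t])

def wls(X, y, sigma):
    # rescaling each row by 1/sigma_t makes the errors homoscedastic,
    # after which ordinary least squares is optimal
    return np.linalg.lstsq(X / sigma[:, None], y / sigma, rcond=None)[0]

b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
b_wls_equal = wls(X, y, np.full(26, 3.0))   # equal sigmas: same answer as OLS
```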

It appears that Resplandy18 used WLS regression on the ΔAPOClimate time series to estimate its trend. However, there is a problem with doing so here. While the ΔAPOClimate uncertainty estimates vary over time, the diagnosis of heteroscedasticity is correctly based not on those uncertainties but on the regression residuals. Their regression residuals are not heteroscedastic.[3] Neither are the residuals serially-dependent.[4]

Consequently OLS is the appropriate method to use and the trend estimate is 0.88 per meg per year, being the slope of the OLS linear fit in Figure 1. It can be seen that the WLS fit, which has a slope of 1.16 per meg per year, considerably overestimates ΔAPOClimate values in later years.

A quadratic trend?

It can be seen from Figure 1 that a linear fit to ΔAPOClimate is not particularly good. The bias in direct WLS estimation of the trend in ΔAPOClimate arises from the combination of the flawed statistical error model and the inability of a linear function to model ΔAPOClimate accurately. Forcing the WLS linear trend to go through the 1991 value biases the slope upward. However, OLS estimation shows that a squared time trend variable belongs in the model (p = 0.002), and when it is included the fitted quadratic line passes almost exactly through the 1991 value. This neutralizes the WLS bias associated with the assumption of zero uncertainty in 1991. Figure 3 shows that WLS applied to estimate a quadratic trend yields a nearly identical fit to OLS, primarily because the constraint on the first observation now has little influence. The estimated mean trend over 1991–2016 is nearly identical in both cases (OLS +0.880 per meg per year, WLS +0.881 per meg per year).


 Figure 3. Ordinary and weighted least squares regression quadratic fits to the ΔAPOClimate values.
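Testing whether a squared-time term belongs in the model can be sketched with plain OLS and the usual t-statistic. The data below are synthetic, with mild curvature deliberately built in; they are not the real ΔAPOClimate values.

```python
import numpy as np

# Sketch of an OLS significance test for a squared-time regressor.
# Synthetic data with built-in curvature, for illustration only.
rng = np.random.default_rng(3)
t = np.arange(26, dtype=float)
y = 0.5 * t + 0.03 * t**2 + rng.normal(0, 1, 26)

X = np.column_stack([np.ones_like(t), t, t**2])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
s2 = resid @ resid / (len(t) - X.shape[1])      # residual variance estimate
cov = s2 * np.linalg.inv(X.T @ X)               # OLS coefficient covariance
t_stat = beta[2] / np.sqrt(cov[2, 2])           # t-statistic for the t^2 term
```

A |t-statistic| above roughly 2 indicates the quadratic term is significant at about the 5% level; the real-series p = 0.002 quoted above corresponds to a considerably larger statistic.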

Estimating the uncertainty in the ΔAPOClimate trend estimate  

Since most of the measurement uncertainty in ΔAPOClimate is characterized by Resplandy18 as a linear trend, it can be removed (or reduced to a constant term) by taking first differences:

dAPOClimate = ΔAPOClimate[1992:2016] − ΔAPOClimate[1991:2015]

Note that the mean of dAPOClimate (0.93) corresponds to the trend in ΔAPOClimate, and a regression of dAPOClimate on a constant and a trend corresponds to a quadratic trend estimation. Although taking first differences removes the linearly-trending portion of the uncertainties that exists in the ΔAPOClimate time series, the first-difference uncertainties still contain a component that does not vary randomly from year to year. While that component is not fast growing, it dominates uncertainty in trend estimation since, unlike the random annual TFr errors, its effects on the trend do not diminish over time.
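Both properties of first differencing can be checked in a few lines: a systematic error growing linearly in time differences to a constant, and the mean of the differenced series is simply (last − first)/(n − 1), the average trend over the period. The numbers here are synthetic.

```python
import numpy as np

# Sketch of the first-difference step, with made-up numbers.
rng = np.random.default_rng(4)
t = np.arange(26, dtype=float)
series = 0.9 * t + rng.normal(0, 1, 26)      # hypothetical dAPO_Climate-like values

diffs = np.diff(series)
mean_trend = diffs.mean()                              # average trend over the period
same = (series[-1] - series[0]) / (len(series) - 1)    # identical by construction

lin_error = 0.3 * t                          # trend-like systematic error
const_after_diff = np.diff(lin_error)        # becomes a constant 0.3 every year
```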

A satisfactory way to estimate the overall uncertainty in the ΔAPOClimate trend estimates involves first sampling randomly all the error contributions to the ΔAPOClimate time series and using them to produce many random realisations of possible ΔAPOClimate time series. The ΔAPOClimate trend estimate and its 1σ uncertainty can then be calculated as the mean and the standard deviation of trend estimates from using the first differences of each of those randomly realised individual dAPOClimate time series.

I applied this method, adding in the missing 1991 TFr error, taking 100,000 samples and forming their first differences. Using OLS, the linear trend in ΔAPOClimate, derived directly as the mean of the dAPOClimate values, was 0.93, or 0.89 when WLS was used and the weighted mean taken. The corresponding 1σ errors were 0.72 and 0.71 respectively, showing that the two methods are almost equally efficient. The mean trend estimate from WLS linear regression of dAPOClimate, giving a quadratic fit to the ΔAPOClimate trend, was marginally higher at 0.91.
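The procedure can be sketched as follows, with illustrative error magnitudes only: sample many realisations of the series, first-difference each one, and take the mean and standard deviation of the per-sample trend estimates.

```python
import numpy as np

# Sketch of the trend-uncertainty procedure: many realisations, each
# first-differenced, with the per-sample trend taken as the mean difference.
# Error magnitudes are illustrative, not those of the real series.
rng = np.random.default_rng(5)
n_draws, n_years = 100_000, 26
t = np.arange(n_years)

base = 0.9 * t
trend_err = rng.normal(0, 0.05, (n_draws, 1)) * t     # systematic: one draw per realisation
annual_err = rng.normal(0, 2.0, (n_draws, n_years))   # random each year (TFr-like)
samples = base + trend_err + annual_err

per_sample_trend = np.diff(samples, axis=1).mean(axis=1)  # mean of first differences
trend_mean, trend_sd = per_sample_trend.mean(), per_sample_trend.std()
```

The systematic component contributes its full standard deviation to the trend uncertainty however many years are sampled, whereas the random annual errors largely average out; that is the point made in the preceding paragraph.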

Estimating the ocean heat content trend and uncertainty in it

The mean trend estimates from WLS linear regression of dAPOClimate have the lowest uncertainty – reflecting the better fit to ΔAPOClimate values of a quadratic rather than a linear function, combined with the more sophisticated treatment of uncertainty offered by WLS. However, it is usual to use a linear rather than a quadratic fit when estimating the trend in ocean heat content (OHC). That is what Resplandy18 did. On that basis, and retaining their evident use of WLS, the appropriate estimate of the ΔAPOClimate trend to use is 0.89 ± 0.71 per meg per year. The conversion factor from ΔAPOClimate in per meg to ΔOHC in 10²² J given in Resplandy18 is division by 0.87 ± 0.03. Combining the two probabilistic estimates gives an estimate for the trend in OHC of 1.03 ± 0.82 × 10²² J per year.
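The combination of the two probabilistic estimates can be checked by Monte Carlo, assuming independent normal errors in the trend and the conversion factor:

```python
import numpy as np

# Sketch: dividing the per meg trend estimate by the 0.87 +/- 0.03
# conversion factor, propagating both uncertainties by Monte Carlo.
rng = np.random.default_rng(6)
n = 1_000_000
apo_trend = rng.normal(0.89, 0.71, n)    # per meg per year
conv = rng.normal(0.87, 0.03, n)         # per meg per 1e22 J
ohc_trend = apo_trend / conv             # 1e22 J per year

ohc_mean, ohc_sd = ohc_trend.mean(), ohc_trend.std()
```

With these inputs the resulting mean and standard deviation come out close to the 1.03 ± 0.82 figure quoted above; the conversion-factor uncertainty contributes very little compared with the trend uncertainty itself.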

It is unclear from Resplandy18 how they derived the uncertainty in their ΔAPOClimate trend estimate, not helped by the fact that although they state that trend as 1.16 ± 0.15 per meg per year in two places, in a third place they give it as 1.16 ± 0.18 per meg per year. Certainly, they seem to have hugely underestimated the true uncertainty in their ΔAPOClimate trend estimate, and hence in their OHC trend estimate.

November 15 Update

Second author Ralph Keeling has posted an explanatory note at realclimate that outlines their response to my criticisms to date and their submission of a correction to Nature. He concedes that use of OLS is appropriate and would yield a lower trend magnitude, and he also concedes that their trend uncertainties were understated. The OLS/WLS distinction, as I have shown, disappears when a quadratic trend is used and arises primarily because of the inappropriate assumption of a zero error in 1991.

They have simultaneously made another revision to their method by reducing the land exchange or oxidative ratio (OR) coefficient from a fixed 1.10 to 1.05 ± 0.05, which they say causes the ΔAPOClimate annual trend to rise by 0.15. Combined with the switch to OLS they end up with a new trend value of 1.05 ± 0.62 per meg per year.

I have also not been able to replicate exactly the +0.15 change from changing the OR value from 1.10 to 1.05. Calculating the effect of the new OR assumption on ΔAPOClimate requires recalculating all the other ΔAPO time series, which in turn necessitates using their source data for O2 and CO2 concentrations and for fossil fuel emissions.[5] When I did so, the estimated ΔAPOClimate time series changed from that in Resplandy18’s Extended Data Table 4. Their ΔAPOClimate time series increases more slowly as time progresses, whereas my calculated time series increases somewhat more linearly. The change in the ΔAPOClimate time series is almost entirely due to my calculated ΔAPOObs time series differing from that in Resplandy18. It is unclear whether Resplandy18 made a (presumably uncorrected) error in their ΔAPOObs calculations or whether I did something contrary to what was specified in their Methods section.[6] My calculated ΔAPOObs time series is arguably more logical than Resplandy18’s, which more strongly indicates ocean heat uptake slowing (the OHC trend declining) over time, contrary to what in situ temperature measurements suggest.[7] Although the absolute OHC trend estimated by their ΔAPO method is very uncertain, their method should provide much less uncertain estimates of changes in OHC trend.

Although my calculated ΔAPOClimate values were lower in most years than those in Resplandy18, with a 1.10 OR value the mean OLS linear trend estimate was marginally higher than for Resplandy18’s ΔAPOClimate values: 0.89 rather than 0.88 per meg per year (Figure 4). When the OR value was changed to 1.05 ± 0.05, the OLS linear trend became 1.03 ± 0.69 per meg per year – close to the revised trend of 1.05 ± 0.62 per their correction, but still with higher uncertainty. The corresponding estimated trend in OHC is 1.18 ± 0.79 × 10²² J/yr, slightly lower and rather more uncertain than their revised estimate of 1.21 ± 0.72.


Figure 4. Mean estimate ΔAPOClimate time series as originally stated in Resplandy18 and as calculated by me from source data, both based on Resplandy18’s original oxidative ratio (OR) of 1.10, and as calculated by me based on their revised 1.05 OR value. Thin straight lines show the OLS linear fits to the corresponding time series.

My calculated ΔAPOClimate uncertainty estimate is 10% or so higher than their revised estimate. Ralph Keeling refers to their having originally incorrectly treated systematic errors in the O2 measurements, which they have presumably dealt with in their correction. However, I believe that Resplandy18 also seriously underestimated systematic errors in the ΔAPOFF (fossil fuel) time series, and that the correctly calculated uncertainty in its values is approximately double the values they gave.[8] When propagated to ΔAPOClimate time series values, this underestimation of ΔAPOFF systematic uncertainty would account for the 10% difference between their and my estimate of the OHC trend uncertainty range. I believe that they will need to revise their correction accordingly.

I turn now to the revision of the OR value used by Resplandy18. The only recent paper Ralph Keeling cited to justify reducing OR to 1.05 is Clay and Worrall 2015,[9] which actually estimates a value of 1.06 ± 0.06. Using the Clay and Worrall OR estimate, I calculate a ΔAPOClimate trend of 1.00 ± 0.69 per meg per year, and an OHC trend of 1.15 ± 0.80 × 10²² J/yr.

The case for changing the OR parameter may well be valid, although, while the correction provides a convenient opportunity for them to make this change, I doubt that they would have done so otherwise. However, the sensitivity of the ΔAPOClimate trend to the OR value also helps illustrate how very uncertain that trend actually is. They now conclude that:

The revised uncertainties preclude drawing any strong conclusions with respect to climate sensitivity or carbon budgets based on the APO method alone, but they still lend support for the implications of the recent upwards revisions in OHC relative to IPCC AR5 based on hydrographic and Argo measurements.

In fact, with the uncertainty range being so large, the ΔAPO based OHC trend estimates do not usefully discriminate between the AR5 estimates and more recent, generally higher, values. Moreover, even with the original, vastly underestimated, uncertainty range the conclusions that Resplandy18 drew with respect to (the lower bound of) climate sensitivity and carbon budgets were unwarranted, because estimated ocean heat uptake has little or no impact on either of these.

Conclusions

It is clear that the statistical method used in the original article resulted in a substantial overestimation of the trend in ΔAPOClimate, and hence of ocean heat uptake, and a vast underestimation of the uncertainty in those estimates. Using their uncertainty weights to obtain a linear WLS fit to the first differences of the ΔAPOClimate time series, as is appropriate given the nature of the errors in ΔAPOClimate, and correcting the omission of uncertainty in 1991, has major effects. The ΔAPOClimate trend estimate changes from 1.16 ± 0.15 to 0.89 ± 0.71 per meg per year. That is a 23% reduction in the central estimate and a near quintupling of estimated uncertainty. Correspondingly, the ΔOHC trend estimate changes from 1.33 ± 0.20 to 1.03 ± 0.82 × 10²² J/yr.

After their revision of the value used for the land oxidative ratio from 1.10 to 1.05 (± 0.05), the Resplandy18 authors estimate a ΔAPOClimate trend of 1.05 ± 0.62 per meg per year and a corresponding OHC trend of 1.21 ± 0.72 × 10²² J/yr, whereas I calculate estimates of 1.03 ± 0.69 per meg per year and 1.18 ± 0.79 × 10²² J/yr respectively. My central estimate reduces further, to 1.00 ± 0.69 per meg per year, when the oxidative ratio of 1.06 (± 0.06) given in the recent paper that Keeling cited in support of the revised OR value is used.

I believe that the remaining difference in our uncertainty ranges is likely due to the Resplandy18 correction failing to deal with a major underestimation of systematic uncertainty in ΔAPOFF. Moreover, I am unable to replicate the Resplandy18 ΔAPOObs time series, and hence also their ΔAPOClimate time series (although the ΔAPOClimate time series I compute has an almost identical linear trend to theirs); I think it possible that they may have miscalculated ΔAPOObs.

 

Nicholas Lewis                                                                       17 November 2018

 

Acknowledgements

I thank Ross McKitrick for very helpful statistical and other input to this article.


[1]  I have taken the autocorrelation as ρ = 0.95, the value used in the reference that Resplandy18 cites. Resplandy18 used an AR1 process to generate their random noise estimates for fossil fuel emissions; I did the same.

[2]   WLS reduces to OLS if the error variances are equal.

[3]  Application of a Goldfeld-Quandt test shows that the error variances actually shrink over the sample, rather than growing, though not significantly. Application of White’s test (even including the 1σ uncertainties as potential explanatory variables) also fails to detect heteroscedasticity.

[4]  Application of a Breusch-Godfrey test fails to detect AR1 or higher-order autocorrelation.

[5]  The source Resplandy18 quote for fossil fuel emissions (Le Quéré, C. et al. Global carbon budget 2016. Earth Syst. Sci. Data 8, 605–649 (2016)) only gives values up to 2015. I took estimates for 2016 from the following year’s study, Global carbon budget 2017.

[6]  It is not obvious where the difference could lie. The ΔAPOObs time series depends only on (δO2/N2) and CO2 data. Although it is highly sensitive to the source of CO2 concentration data, I took monthly (δO2/N2) and CO2 values measured at La Jolla, Alert and Cape Grim, the three stations specified in the Resplandy18 Methods section, weighting them respectively 0.25, 0.25 and 0.50 as per the reference they cited.

[7]  The claim by scientist Paul Durack in the Washington Post that the Resplandy18 study confirms that the rate of ocean warming has been increasing is misleading, at best.

[8]  This assertion is based on the oxidative ratios for each fossil fuel, and uncertainty ranges given for them and for CO2 emissions, in Resplandy18 Extended Data Table 3, which uncertainties are taken to be systematic rather than randomly varying each year. On that basis, it is simple to work out that the oxidative ratio uncertainties imply much higher uncertainty in ΔAPOFF than that stated in Resplandy18 Extended Data Table 4.

[9]  https://www.sciencedirect.com/science/article/pii/S0341816214003129?via%3Dihub. The estimates they give for the two components of the global oxidative ratio indicate a slightly higher central estimate for it, of 1.067 ignoring rounding errors, so its unrounded value must lie above 1.06.

47 responses to “Resplandy et al. Part 3: Findings regarding statistical issues and the authors’ planned correction

  1. Nic,
    I read an article https://www.foxnews.com/science/error-in-major-climate-study-revealed-warming-not-higher-than-expected quoting you and Dr. Curry as stating
    ====================================================
    “But all involved, including Lewis, agree that manmade greenhouse gas emissions are warming the oceans.

    “People shouldn’t be left with the impression that the errors in this paper put into doubt whether the ocean interior is warming. It clearly is wholly or mainly due to human greenhouse gas emissions,” Lewis said.

    The study co-author who took responsibility for the error also made that point.”

    If that’s the case, I find it difficult to understand how GHG can warm the oceans deep more so than solar SWR which is regulated by increasing and decreasing cloud cover and/or air quality (less pollution from volcanoes etc.). Where is the data that rules out cloud cover and solar activity for the ocean interior warming for the past 30-50 years.
    ==================================================

    It seems making such claims such as “It [ocean interior warming] clearly is wholly mainly due to human greenhouse gas emission.” is quite a leap of faith. Explain mathematically and by what physical law your conclusion. I’ve yet to see the case made other than assumptions in climate models.

    • “People shouldn’t be left with the impression that the errors in this paper put into doubt whether the ocean interior is warming. It clearly is wholly or mainly due to human greenhouse gas emissions,” Lewis said.

      Nic, can you cite a source that supports this claim?

      • I’m not Nic, but look at this paper: https://niclewis.files.wordpress.com/2018/04/lewis_and_curry_jcli-d-17-0667_accepted.pdf
        The authors estimate the ECS ( around 1.7) from the OHC change between 1860…1880 and the presence with the Argo data ( much more precise than the used “proxy” in Resplandy 18 ) and a climate model for the past. This gives an ECS on the lower bound of the IPCC AR5 estimate. And the included premise is: the warming of the oceans comes from GHG forcing. And the result is: no CAGW!

      • Nic has labored many years in the science and dotting every I and crossing every T as the outsider, successfully gaining a seat into the climate community. We have watched him. The Resplandy18 et al reaction and response is proof of his abilities, coupled with carefulness.

        I applaud Keeling for his responses as I applaud Nic for his professional, non-emotional tone, just the facts. This being the case, I don’t know it was necessary, Nic, to give a CO2 condemnation disclaimer. The oceans are warming but they also have warmed and cooled for millennia without substantial variation in CO2.

        https://climateaudit.files.wordpress.com/2015/09/ocean2k_recon.png

      • frankclimate, the data for the earliest base period (1869-1882) in the Lewis and Curry paper seems quite uncertain, not very representative or well chosen.
        The period had the largest el Nino in recorded history (1878-1879) with a large peak in global temperature, if we can trust the few observational data from back then. If this monster el Nino was real, it probably drained the oceans of heat, leading to a negative global energy imbalance over the 1869-1982 period. Hence, the base period is probably too warm, and the energy balance +0.15 W/m2 (wildly guessed from models) much too high, leading to a significant underestimation of climate sensitivity

      • OlofR: Nice try… the results for ECS are not sesitive to the start period, see tab3 of the paper : 1930…1950 gives the same result. The earlier base periode was caculated from a model. If you think your wild guesses about ” leading to a significant underestimation of climate sensitivity” are justified: don’t bother to work it out and show it here. As we could see it presently: science works with self correcting tools. One of it seems to be this blog. So go on, show the gap in LC18 here. Nic will respond very fast, I’m quite sure.

      • Olof R: PS For the “largest ElNino in recorded history in 1878-79” see https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2010JC006695
        It was not the largest, it was 1997. ( para.42 there).

      • Olof R, you claim to deduce the TOA imbalance in the period 1869-1982. I think you meant 1869-1882. Perhaps you can provide a chart of your expert guesses of imbalance for each century period since 0 CE and explain the forcings that led to the MWP and LIA. Climate science would be at your debt.

      • First some corrections, the monster el Nino was in 1877-1878, and Nics earliest base period 1869-1882 (of course).
        The el Nino 1877-1878 had the by far lowest SOI and the highest detrended Nino 3.4 value on record. Barometric pressure is probably the most reliable data from back then. Also, 1877-1888 is the largest single deviation in all “global” temperature datasets going back to 1850.
        1930-1950 is possibly an even worse choice of base period, with its large uncertainties and spurious temperature spikes, especially 1944-1946.
        Its quite remarkable that they haven’t choosen the period 1920-1939 as a base, no world wars, no large volcanic eruptions.
        Or 1946-1962 ( then comes Agung)..

        Ron, if one really wants to get a clue of the energy imbalance in 1869-1882, I suggest running climate models that are forced to follow observed patterns of Pacific SST and barometric pressure. (I guess that very few models by their own would produce the largest el Nino, and the largest temperature deviation on record, near the end of this specific period)

      • So if I want to know the radiative imbalance for any point in history all I need is to plug that date into a climate model. Why didn’t I think of that?

      • I would say that it is pointless to do observationally based estimations of TCR and ECS earlier than 1957 when temperature datasets became “global”, and it became possible to at least crudely estimate OHC.
        Anyway, Richardson et al 2016 showed that CMIP5 models and Hadcrut 4 had similar TCR when compared “apples to apples” over the full record, 1860-now.

  2. As we do not have that good of handle on the key drivers of our planets chaotic climate, how can one possibly state with confidence that man made CO2 is heating up the ocean? Amazing level of arrogance.
    We can say, however, we really do not know, but are attempting to figure it out. That would be absolutely accurate.

  3. “serious (but surely inadvertent) errors in the underlying calculations”

    Yes surely. LOL.

    • Interesting they felt the need to change the value of the OR to 1.05 in order to increase the slope of the delta APO(climate) to over 1.0 again. Instead of just correcting their many errors …..

    • And stop calling me Shirley

  4. Every curve fit that has been displayed in all the posts and comments about this matter fall within the range of the spread of the “data”. I use “data” because the numbers that are the objective have never been measured.

    If this is correct, how can any valid differentiation among the various approaches be obtained?

    Thanks in advance for corrections to any incorrectos.

  5. As I noted at RealClimate, it is a travesty that the authors have not released their code.

  6. I don’t understand why these so called concerned journalists feel they can comment on what they clearly don’t understand. Nic Lewis did. They self evidently missed the mistakes and published exagerrated analysis on an almost 1.0 correlation. When society and the law allows ignorant and sensationalist so called journalists to comment unchecked on serious science in supposedly serious newspapers society has a big problem. So I hope these deceitful rags fail soon. IMainly because the delusional beliefs of their technically ignorant journalists are reported as if real science to support a massively deceitful fraud that damages the lives of all forced to pay for it, and the economies of their countries to no useful effect, on the science facts of the supposed cures, never mind the supposed causes. What do you hear about that?

  7. He has been a busy boy – and I I might be interested if I thought that undue precision in such things was warranted.

    http://www.realclimate.org/images//resplandy_new_fig1.png

    “The top-of-atmosphere (TOA) Earth radiation budget (ERB) is determined from the difference between how much energy is absorbed and emitted by the planet. Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.” https://link.springer.com/article/10.1007/s10712-012-9175-1

    The large natural variation in Earth’s energy budget – over this very short record – has the signature of low level cloud variability – cooling in IR and warming in SW – and the appearance of cloud variability over the upwelling regions of global oceans being a major source. The question then is how much is AGW cloud feedback. Not an easy thing to estimate except as an approximate first order, back of the envelope Fermi problem solution. Not much?

  8. Shirley, you do not understand that the new Ice Age began 18000 years ago. The earth is now losing more heat every 24 hours than it gains. In order to keep the average surface temperature of the earth constant, nature is removing heat from the oceans and depositing ice at the poles.

  9. “The top-of-atmosphere (TOA) Earth radiation budget (ERB) is determined from the difference between how much energy is absorbed and emitted by the planet.”
    This misses the small component of energy produced continually by nuclear processes in the core of the earth, plus the much smaller frictional, gravitationally induced energy in land waves, sea waves, and atmospheric wind–water–land interaction.
    While the second is pretty insignificant, though continual, both add to the energy load, so the TOA definition should read:
    “The top-of-atmosphere (TOA) Earth radiation budget (ERB) is determined from the difference between how much energy from the sun is absorbed, how much energy the earth produces de novo, and the total energy emitted by the planet.”
    “Determining Earth’s energy imbalance (EEI) is daunting. The EEI is a small residual of TOA flux terms on the order of 340 W m−2. EEI ranges between 0.5 and 1 W m−2, roughly 0.15% of the total incoming and outgoing radiation at the TOA. The absolute uncertainty in solar irradiance alone is 0.13 W m−2; constraining EEI to 50% of its mean requires that the observed total outgoing radiation is known to 0.2 W m−2, or 0.06%.”
    Hence
    “Energy from nuclear decay in the Earth’s interior is about 0.04 W/m2, which from memory adds about 0.4 K to ocean temperature.”
    is significant.
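
The proportions quoted in the comment above can be checked in a few lines. All the figures (340 W m−2 of TOA flux, an EEI of 0.5–1 W m−2, and ~0.04 W m−2 of geothermal/radiogenic flux) are taken from the comment itself, not independently sourced here:

```python
# Check the ratios quoted in the comment above (all figures are the
# commenter's, not independently sourced).
TOA_FLUX = 340.0               # W m^-2, characteristic magnitude of TOA flux terms
EEI_LOW, EEI_HIGH = 0.5, 1.0   # W m^-2, quoted EEI range
GEOTHERMAL = 0.04              # W m^-2, quoted nuclear-decay heat flux

print(f"EEI as a share of TOA fluxes: "
      f"{EEI_LOW / TOA_FLUX:.2%} to {EEI_HIGH / TOA_FLUX:.2%}")
print(f"Geothermal flux as a share of the low-end EEI: "
      f"{GEOTHERMAL / EEI_LOW:.0%}")
```

The low end reproduces the quoted "roughly 0.15%", and the geothermal term is indeed a non-trivial share (a few percent) of the EEI itself, which is the commenter's point.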

  10. Nic, thank you in particular for the first very precise and concise summary, which was ever so easy to read.
    This is pretty good as well, but the second necessarily went on a little long.
    The concept, now better explained, of some sealed tanks being used as a baseline only adds to the problem I mentioned before.
    A computer can do a thousand or a billion runs, but if the data used are scarce, as in this case, and subject to so many variables in collecting reliable samples to compare, it becomes akin to measuring gravity waves.
    Did someone have a cigarette three blocks away? Bushfires in Indonesia, volcanoes, and even butterfly wings.
    Like throwing buckets overboard and measuring temps and pH: different, varying results expected every time.
    I like the idea; the practical problem, as with using OHC alone as a measure of warming, is that it is impossible to calculate except within very large, unusable ranges.

  11. I must assume, then, that the ratio of the surface area of the earth covered by water to the surface area covered by land has no effect on the heat retained by the earth or lost by the earth. I believe radiant heat is reflected by water.

  12. “Imagine if the ocean was only 30 feet deep,” said Resplandy, in a news release from the Princeton Environmental Institute that accompanied the study. “Our data show that it would have warmed by 6.5 degrees Celsius (11.7 degrees Fahrenheit) every decade since 1991.”
    At least the SLR would have been small.

  13. Salvatore del Prete

    Green house gases have ZERO !!! to do with ocean warming.

    Not much to say, other than that even having to discuss this is just another waste of time that the AGW theory scam has given us: it takes away resources from other areas of climate science that need to be studied and wastes time on idiotic endeavors such as CO2 versus ocean warming. Next joke.

  14. Salvatore del Prete

    This is the reason for my first post.
    Type of prediction
    Ocean warming
    Model prediction
    Warming caused by direct heating from thermal radiation at 15 microns.
    Actual measurements
    Warming of about 0.06 C over 50 years.
    More here.
    Comment
    The absorption coefficient for liquid water as a function of wavelength is given at http://www.lsbu.ac.uk/water/vibrat.html (see the figure near the end). Thermal infrared in the Earth’s atmosphere is around 10 to 20 microns, where the absorption coefficient (A) is about 1000 cm-1. The transmission in liquid water (T) equals exp(-A*L), where L is the depth of penetration. For the case where 1/e, or about 37%, of the incident photons remain unabsorbed, with A = 1000 cm-1, L = 1/1000 cm = 1/100 mm. About 95% of the incident photons will be absorbed within 3 times this distance. So one can see from the figure that practically no infrared photons penetrate beyond 3/100 mm. When I said all the photons are absorbed in the top millimeter of the water, I was being very generous. A more precise estimate of A is 5000 cm-1 at 15 microns, where carbon dioxide is emitting radiation, so even 0.03 mm is extremely generous. Since liquid water is such an effective absorber, it is a very effective emitter as well. The water will not heat up; it will just redirect the energy back up to the atmosphere, much like a mirror.

    It is worth mentioning that for A = 5000 cm-1 at 15 microns, the implied water emissivity is 0.9998, implying that only 0.02% of the incident radiation will be reflected. The emitted radiation will closely follow a blackbody emission curve, whereas the incident flux from carbon dioxide is confined to a band centered at 15 microns. The implication of this is that much of the radiation emitted will escape directly to space through the IR windows, so it is a negative feedback. The initially absorbed energy cannot be transferred to the ocean depths by conduction (too slow), by convection (too small an absorption layer), or by radiation (too opaque). It must escape by the fastest way possible, meaning upward radiation away from the water. I don’t see why anyone is having problems understanding basic physics.

    The only way to explain the ocean heating at depth is for the solar radiation to change, and decreasing clouds, as measured by ISCCP, indicate that increasing solar radiation is occurring right where the ocean heating is reported to be occurring. The Willis paper does not even mention the ISCCP data, which have a similar geographic distribution to the water warming. Simply put, where clouds decrease in amount, the water warms. It has nothing to do with carbon dioxide. A handy plot of the ISCCP results can be found as Figure 3 at http://www.worldclimatereport.com/index.php/2006/01/11/jumping-to-conclusions-frogs-global-warming-and-nature/ . Clouds have large natural variations, going up and down entirely independently of any greenhouse effect. The climate models do not predict these variations, and apparently Willis and others are unaware of them.

    Score
    1-24-4
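
The Beer–Lambert arithmetic in the comment above is easy to verify. A minimal sketch, using the two absorption coefficients the comment quotes (1000 and 5000 cm−1); note that three e-folding depths leave exp(−3) ≈ 5% of photons unabsorbed, i.e. roughly 95% absorbed:

```python
import math

def remaining_fraction(A_per_cm, depth_cm):
    """Beer-Lambert law: fraction of photons not yet absorbed, T = exp(-A * L)."""
    return math.exp(-A_per_cm * depth_cm)

for A in (1000.0, 5000.0):          # absorption coefficients A, in cm^-1
    L_e = 1.0 / A                   # depth (cm) at which 1/e (~37%) remains
    print(f"A = {A:.0f} cm^-1: 1/e depth = {L_e * 10:.4f} mm; "
          f"unabsorbed at 3x that depth = {remaining_fraction(A, 3 * L_e):.1%}")
```

This confirms the comment's penetration depths (0.01 mm and 0.002 mm respectively), though the "1/e" fraction is ~37% rather than 27%, and three e-foldings absorb ~95% rather than 98%.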

  15. Salvatore del Prete

    Source of the above is from the web-site icecap.com

  16. At the height of the last Ice Age, about 78000 years ago, the ice on upstate New York was 5,000 feet thick and the oceans were 400 feet lower than now. The radiant heat hitting the earth from the sun was about the same as it is at present. The surface of the earth covered by land, and thus by vegetation, was greater than it is at present. At this point the earth stopped taking heat from the ocean because it began losing more heat than it kept.

  17. Nic, another very good post. I have to agree with Dr. Ball over at WUWT on this matter. You have scored bigly on this, and I don’t think Resplandy can recover. Rather like McIntyre instantly decimated the Gergis hockeystick paper a few years ago.

  18. I subject my Facebook friends to climate science stories. When the paper first dropped, pre Nic’s 1st post on it, I said something like, This means the IPCC has been materially and substantially wrong on the issue for the last decade. Yes, I shoot from the hip too often. Then Lewis makes me look good. Thank you. I think it’s great. And from what I’ve seen, you’ve shown class.

  19. Bengt Abelsson

    I once came upon the problem of storing N2 in steel cylinders, ambient temperature, high pressure and long time.
    The purpose itself was classified.
    The pressure loss due to diffusion of N2 molecules through the steel was a significant factor in determining the projected shelf life.
    I cannot see any reason to presume that N2 and O2 molecules would have the same diffusion constants.

  20. Geoff Sherrington

    Nic,
    Your essays on Resplandy et al demonstrate a need for more critical evaluation of the error aspects of papers before they are published. If all climate papers were studied in similar depth prior to publication, my guess is that most would need revision or retraction. However, there is a shortage of qualified reviewers of errors and of educational papers/textbooks/standards to guide authors about error analysis. It is easy to encourage authors to follow guides such as those of the Sèvres-based Bureau of Weights and Measures, but ways are needed to make compliance compulsory. The level of ignorance about error estimation is severe. Science is suffering considerable detriment.
    One can but wish for more education and adoption of proper methods. Your essays here are valuable in highlighting some consequences.
    What follows is a personal view of a broad approach to the problem of proper error analysis.
    For the purposes of writing a scientific paper, the investigation of errors can initially be split into two approaches, the comprehensive and the incremental.
    With the comprehensive approach, all imaginable sources of error are identified and each is assigned a magnitude, for later compilation into a mathematical summary. This is a difficult approach when dealing with transient data, for often the best way to scope an error is by repeated experiments using different conditions. If only one view is available, estimation has to supplant measurement for that particular error. It is the more difficult approach overall. Because of its broad scope compared with the incremental approach, more effort will usually be involved. However, it stands less chance of the paper being faulty because of unintended consequences. Sadly, this comprehensive approach is not often seen in a formal form in climate related papers.
    The incremental approach commences with the usual establishment of broad relationships between selected experimental variables of interest. These relationships are then compared with expectation, for such values as magnitude, sign, trend, correlations, etc. If the expectation does not at first appear, it is assumed that this is because of errors of magnitude or omission, and refinement of error estimates or measurements commences. More and more candidate error sources are studied and fed back until the expectations are met or clarified. Then the process of compilation into a summary error is done. This method is prevalent despite its obvious weakness that sources of errors are missed because of wrong expectations or unintended consequences. It also has a weakness that too great an estimate of a chosen error source can curtail the search for other sources, as the expectation appears to have been satisfied.
    My personal experience of error treatment came from a decade of analytical chemistry; my experience might be atypical of the general case of the importance of proper error analysis. When one owns a laboratory whose income depends on errors measured by the client being smaller than the advertised error, errors dominate the scientific endeavour. This dominance ought to apply in most or all fields of science but it is rare in the hundreds of papers I have so far read from the climate sector. The difference might be due to accountability of the author for the quality of the work.
    A prior awareness of these 2 approaches will hopefully assist authors with their early-stage experimental design.

  21. Pingback: Weekly Climate and Energy News Roundup #336 | Watts Up With That?


  23. Bengt Abelsson

    Well, even practically no diffusion of N2 was not good enough for the purpose I was involved in, many years ago.
    I have no insight into the pressure or vessel design for the reference air samples in this particular work, so I have no opinion on any influence on the N2/O2 ratio.

  24. The situation is completely different with the hockey stick. Mann had very large error bars. Later studies only confirmed Mann and reduced the error bars. Then there are the PAGES complaints. What is this about? PAGES has over 600 proxies globally and shows a mostly cooling trend for thousands of years followed by the blade upturn in the last couple of centuries. The blade of the hockey stick is supported by thermometers. Is there some skeptic meme going around about one tree ring causing the upturn? Have they considered thermometers when they make these complaints?

  25. Nic,
    I loudly applaud your moving to the analysis of first diffs here.

    One outstanding question of interest is whether there is any significance in the apparent reduction of net flux observed in the Resplandy data over this period.

    I was wondering whether you did a count on the number of occurrences of a trend gradient on your first diffs greater than zero when you ran the Monte Carlo for the WLS case? It would be very interesting to see whether, despite the uncertainty in absolute value of (average) net flux, the Resplandy data might still provide meaningful support for net flux reducing over time.
    Paul

    • Paul,
      No, I didn’t do so. But the time-variation of the slope of the Resplandy APO_Climate series is very sensitive to the shape of their APO_Obs data, and at present I calculate a series for that data which has a somewhat different shape from theirs. The exact OR ratio used also has an effect. So it would be better to carry out such an analysis using the final, correct, APO_Climate time series.
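
The count Paul asks about could be sketched as below. The numbers are invented stand-ins for the annual first differences of an APO_Climate-like series and their 1σ uncertainties, purely to illustrate the mechanics; they are NOT the Resplandy data, the draw count is arbitrary, and a plain OLS fit is used rather than the paper's weighted regression:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-ins for annual first differences (per meg / yr) and their
# 1-sigma uncertainties -- NOT the Resplandy data.
diffs = np.array([1.2, 1.1, 1.3, 0.9, 1.0, 0.8, 0.9, 0.7, 0.8, 0.6])
sigma = np.full_like(diffs, 0.4)
years = np.arange(diffs.size)

n_draws = 10_000
slopes = np.empty(n_draws)
for i in range(n_draws):
    perturbed = diffs + rng.normal(0.0, sigma)      # one Monte Carlo realisation
    slopes[i] = np.polyfit(years, perturbed, 1)[0]  # OLS trend of the first diffs

# Fraction of realisations in which the trend of the first differences is
# negative, i.e. in which the implied net flux declines over the period.
frac_declining = np.mean(slopes < 0.0)
print(f"fraction of draws with a declining net flux: {frac_declining:.2f}")
```

With real data one would perturb using the actual uncertainty structure of each year's first difference; the counted fraction then answers whether the apparent reduction in net flux is robust to the series' uncertainty.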

  26. Pingback: Resplandy et al 2018 | climat-evolution

  27. Figure 1 has an AMO signal. The heat uptake rate accelerates 1995-2005 and slows down post 2005. That’s low solar driving a warm AMO via the NAO/AO, which is reducing cloud cover, and allowing OHC to rise. Rising CO2 forcing is working in the opposite direction.

  28. Pingback: Peer Review: Selfie-Sticks & Snobbery | Big Picture News, Informed Analysis

  29. Pingback: Resplandy et al. Part 4: Further developments | Climate Etc.