Resplandy et al. Part 2: Regression in the presence of trend and scale systematic errors

by Nic Lewis

In a recent article I set out why I thought that the trend in ΔAPOClimate was overstated, and its uncertainty greatly understated, in the Resplandy et al. ocean heat uptake study. In this article I expand on the brief explanation of the points made about “trend errors” and “scale systematic errors” given in my original article, as these are key issues involved in estimating the trend in ΔAPOClimate and its uncertainty.

I will illustrate the trend error point using a 26-year time series that is a slightly modified version (‘pseudo-ΔAPOAtmD’) of Resplandy et al.’s ΔAPOAtmD time series and associated uncertainties. The pseudo-ΔAPOAtmD best-estimate time series increases linearly by 0.27 per meg each year, starting at zero in 1991. In each year, the 1-σ uncertainty (error standard deviation) in its value is 50% of its best-estimate value (with σ set to 10−6 in 1991 to avoid divide-by-zero errors). I will assume errors have a Normal distribution, with zero mean. Figure 1 illustrates this situation.

Figure 1. Estimated pseudo-ΔAPOAtmD best values and their uncertainties. The black line goes through all years’ best estimates, marked with small black crosses. The pink area represents the ± 1-σ uncertainty limits in continuous time, and the red bars show that uncertainty for each year’s data point.

The case when errors are uncorrelated

If errors are uncorrelated then the usual conditions for ordinary least squares (OLS) to give valid estimates are satisfied. Obviously, regressing the best fit values will give a perfect fit with, correctly, a trend of 0.27 per meg yr−1, since they increase linearly with time. The trend and the uncertainty in it can be estimated as follows:

  a) for (say) 10,000 sample realizations of the pseudo-ΔAPOAtmD time series, add random, independent errors drawn from each year’s error distribution to the pseudo-ΔAPOAtmD best-estimate time series values;
  b) find the trend for each sample realization using OLS regression;
  c) take the mean of the 10,000 trends as the trend (best-)estimate and their standard deviation as the 1-σ uncertainty of that estimate.
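Steps a) to c) can be sketched in a few lines of Python. This is a minimal illustration of the procedure, not the code actually used in the analysis; the random seed and the use of `numpy.polyfit` for the OLS fit are my choices.

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.arange(26)                       # years since 1991
best = 0.27 * t                         # pseudo-dAPO_AtmD best estimates
sigma = 0.5 * best                      # 1-sigma uncertainty = 50% of best estimate
sigma[0] = 1e-6                         # avoid a zero sd in the baseline year

n = 10_000
trends = np.empty(n)
for i in range(n):
    # a) add random, independent errors drawn from each year's distribution
    sample = best + rng.normal(0.0, sigma)
    # b) OLS trend of this sample realization
    trends[i] = np.polyfit(t, sample, 1)[0]

# c) mean of the trends and their standard deviation
print(trends.mean(), trends.std())      # roughly 0.27 and 0.056
```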

On doing so, I obtained a trend estimate of 0.271 with uncertainty of ± 0.056. (Note: with only 10,000 samples the figure in the third decimal place is unreliable.)

I then repeated the calculation, with the same set of cases (sample realizations), but using weighted least squares (WLS) regression. The weights for each year were set at 1/σ2, where σ is the error standard deviation for that year, as is usual.

The resulting trend estimate was 0.270, with uncertainty of ± 0.033.
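A corresponding WLS sketch follows, again as a minimal illustration rather than the actual code. Note that `numpy.polyfit`’s `w` argument multiplies the residuals, so passing `w = 1/σ` gives the usual 1/σ2 weighting; exact output values depend on implementation details.

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.arange(26)                       # years since 1991
best = 0.27 * t
sigma = 0.5 * best
sigma[0] = 1e-6                         # near-infinite weight in the baseline year

n = 10_000
trends = np.empty(n)
for i in range(n):
    sample = best + rng.normal(0.0, sigma)
    # polyfit's w multiplies the residuals, so w = 1/sigma gives 1/sigma^2 weights
    trends[i] = np.polyfit(t, sample, 1, w=1.0 / sigma)[0]

print(trends.mean(), trends.std())      # mean ~0.27; sd clearly smaller than the OLS case
```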

In both the OLS and WLS cases, the mean of the trend error estimates from each of the 10,000 regressions was close to the standard deviation of the 10,000 trend estimates, as one would expect.

So, if errors are uncorrelated between years but their estimated uncertainty varies between years, both OLS and WLS on average provide unbiased trend estimates, but with WLS the variation in the trend estimate from its true value between possible sample realisations is smaller. WLS in effect uses the available information more efficiently and produces more precise estimates.

The case when errors are perfectly correlated (trend errors)

Now consider the case where the uncertainty arises from the trend in the data being uncertain. For any given realisation of the pseudo-ΔAPOAtmD time series, there is then only a single uncertain error. That error scales up with time, pro rata with the magnitude of the time difference from the baseline year. The error distributions for each year are the same as previously, but they are now perfectly correlated across all years rather than being independent. For any given realisation, all data points will therefore lie on a straight line – a version of the black line rotated about its zero starting point in 1991, and most likely lying within the area shaded pink.

Repeating steps a) to c), but this time with each year’s error perfectly correlated with other years’ errors, I obtained the following results.

With OLS regression, a trend estimate of 0.268 with uncertainty of ± 0.135 per meg yr−1.

With WLS regression, a trend estimate of 0.268 with uncertainty of ± 0.135 per meg yr−1.

In both cases the trend error estimate for each of the 10,000 realisations was zero, as although the fit differed between realisations, it was a perfect fit in every case. And in both cases the trend uncertainty was the same as that in the original data – 50% of 0.27 per meg yr−1.
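The perfectly correlated case differs from the earlier sketch only in that a single error is drawn per realisation and scaled by each year’s σ. A minimal illustration, under the same assumptions as before:

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.arange(26)                      # years since 1991
best = 0.27 * t
sigma = 0.5 * best
sigma[0] = 1e-6

n = 10_000
trends = np.empty(n)
for i in range(n):
    z = rng.standard_normal()          # ONE shared error per realisation
    sample = best + z * sigma          # all years' errors perfectly correlated
    trends[i] = np.polyfit(t, sample, 1)[0]

print(trends.mean(), trends.std())     # ~0.27 and ~0.135: regression cannot shrink a trend error
```

Each realisation lies exactly on a straight line, so the within-fit residuals are essentially zero even though the slope varies from realisation to realisation.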

The key point is that if the data has a trend error, regression cannot reduce it at all. Weighting data values to reflect their absolute precision does not help, but it does no harm either in this case.

Which case applies to the actual ΔAPOAtmD values?

Resplandy et al. state the results of their model simulations of the fertilisation effects of combined N, Fe and P aerosol deposition as: “The overall impact on ΔAPOAtmD is +0.27 per meg yr−1 over 27 years of simulation (1980–2007), which we extrapolate to our 1991–2016 period”, and that “Uncertainties at the 1σ level on ΔAPOAtmD are assumed to be ±50%”. That corresponds exactly to the way my pseudo-ΔAPOAtmD best-estimate time series and uncertainties were calculated. It is quite clear that the ΔAPOAtmD best-estimate time series should represent an exact linear trend of +0.27 per meg yr−1 with its errors being perfectly correlated, non-independent trend errors.

Scale systematic error case

Consider now the case where the best-estimate data, while broadly trending, has not been derived from a linear trend. That applies to the ‘scale systematic’ error component of ΔAPOOBS.

ΔAPOOBS is calculated from measured changes in the atmospheric (δO2/N2) ratio and CO2 concentration (XCO2, in p.p.m.) as:

ΔAPOOBS = (δO2/N2) + 1.1 × (XCO2 − 350) / XO2

where XO2 (= 0.2094) is the reference atmospheric mole fraction needed to convert XCO2 from p.p.m. to per meg units.
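As a sketch, the formula translates directly into code. The function name and the example inputs below are mine, for illustration only:

```python
X_O2 = 0.2094  # reference atmospheric O2 mole fraction

def apo_obs(d_o2n2_per_meg: float, xco2_ppm: float) -> float:
    """ΔAPO_OBS in per meg, from δ(O2/N2) (per meg) and CO2 concentration (p.p.m.)."""
    return d_o2n2_per_meg + 1.1 * (xco2_ppm - 350.0) / X_O2

# illustrative values: δ(O2/N2) = -100 per meg, XCO2 = 370 p.p.m.
print(apo_obs(-100.0, 370.0))  # -100 + 1.1*20/0.2094 ≈ 5.06
```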

Resplandy et al. Extended Data Table 3 states that, in addition to Corrosion, Leakage and Desorption errors of respectively ± 0.3, ± 0.2, and ± 0.1 per meg yr−1, and random errors in each year of ± 2 per meg (± 4 before July 1992), there is a scale systematic error of 2% on (δO2/N2).

I downloaded the (δO2/N2) data for the three monitoring stations used, combined it using the weights given in the paper that Resplandy et al. cited, and deducted the 1991 value so as to match Resplandy et al.’s baseline.

The OLS linear trend of the resulting data was −19.87 per meg yr−1. Using WLS regression, it was −16.05 per meg yr−1.

I then drew samples from a normal distribution with unit mean and a standard deviation of 0.02 and, for each sample, multiplied all years’ (δO2/N2) data values by the sample value.

When using OLS regression, the trend estimate was −19.87 with uncertainty of ± 0.396 per meg yr−1.

When using WLS regression, the trend estimate was −16.05 with uncertainty of ± 0.320 per meg yr−1.

So in both cases sampling gives the same trend estimate as regression on the original data points, but WLS estimates a substantially shallower (less negative) trend than OLS. Note that, as with a trend error, the regression trend estimate uncertainty is the same as for the original data, here 2% (of the trend estimate).
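The scale systematic sampling can be illustrated with a synthetic stand-in series. The real station data is not reproduced here; the quadratic shape and coefficients below are my assumption, chosen only to mimic a downward trend that steepens over time:

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.arange(26)                       # years since 1991
y = -16.0 * t - 0.15 * t**2             # synthetic stand-in for (δO2/N2), per meg

ols_trend = np.polyfit(t, y, 1)[0]      # OLS trend of the original series

n = 10_000
trends = np.empty(n)
for i in range(n):
    s = rng.normal(1.0, 0.02)           # one 2% scale factor per realisation
    trends[i] = np.polyfit(t, s * y, 1)[0]

# the sd of the sampled trends is ~2% of the trend itself, exactly as for the data
print(ols_trend, trends.std() / abs(ols_trend))
```

Because the scale factor multiplies the whole series, the fitted slope of each realisation is just the scale factor times the original slope, so the trend uncertainty is pinned at 2% of the trend estimate.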

Figure 2, which shows the data values (black circles, joined by thin black line), and linear regression fits using OLS (cyan line) and WLS (red line), helps provide insight into the difference between the OLS and WLS regression fits.

Figure 2. Average (δO2/N2) annual mean data from the La Jolla, Alert, and Cape Grim monitoring stations, weighted as to 0.25, 0.25 and 0.5 respectively (black circles, joined by thin black line), the OLS linear fit to them (cyan line), and the WLS linear fit to them (red line).

The reason why the WLS fit has a shallower line is that earlier years are weighted much more highly than the later years when determining the WLS fit, as the scale systematic error is applied to larger (δO2/N2) values in the later years. Indeed, the near-infinite weighting given to the 1991 data point forces the WLS fit through zero in 1991.

It can be seen that there is little evidence of random errors being significant in any year, nor of their being greater in the later years; the black line is fairly smooth throughout the period. But the trend appears to become more negative over time, so a linear fit is strongly affected by the weightings given to different years. Confirming that data errors are trivial, a quadratic fit is extremely close (adjusted R2 0.9998, versus 0.9950 for a linear fit). Moreover, a quadratic fit shows no evidence of fit errors being greater in later years – they are slightly higher in the first half than in the second half of the period.

There is no justification for using WLS in a case like this. Because the errors are perfectly correlated between years, the conditions for WLS to be valid are not satisfied. That the WLS result cannot possibly be valid can be seen as follows. If the method is valid, it should give the same trend whatever year is used as a baseline for the data. So, rebase all the data to a zero baseline in 2016, which implies small errors in later years and large errors in early years. That will result in the WLS weights being high in later years and low in early years. The WLS fit will then pass through the 2016 data point, and its slope will be close to that exhibited by later years’ data and, as a result, much steeper – indeed, steeper than the slope of the OLS fit (which is unchanged by the rebasing).
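The baseline-dependence argument can be checked numerically with the same kind of synthetic stand-in series (my assumed quadratic data, not the real (δO2/N2) values; the tiny σ at the baseline year mimics the near-infinite weight there):

```python
import numpy as np

t = np.arange(26)                      # years since 1991
y = -16.0 * t - 0.15 * t**2            # synthetic stand-in with a steepening trend

def wls_slope(t, y_rebased):
    """WLS slope with 2% scale-error weights relative to the rebased values."""
    sigma = 0.02 * np.abs(y_rebased)
    sigma[sigma == 0] = 1e-6           # near-infinite weight at the baseline year
    return np.polyfit(t, y_rebased, 1, w=1.0 / sigma)[0]

ols = np.polyfit(t, y, 1)[0]           # unchanged by any rebasing
wls_1991 = wls_slope(t, y - y[0])      # baseline 1991: fit forced through early data
wls_2016 = wls_slope(t, y - y[-1])     # baseline 2016: fit forced through late data

print(ols, wls_1991, wls_2016)         # WLS slope depends on the arbitrary baseline
```

With a 1991 baseline the WLS slope is shallower than OLS; rebased to 2016 it becomes steeper than OLS, confirming that the WLS result cannot be valid here.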

A known method for validly regressing when the data errors are perfectly correlated or nearly so is to transform the data by taking first differences, and then regress. The regression intercept term then corresponds to a linear trend in the original data and the slope coefficient to a quadratic trend term. When one does that using OLS, the mean trend over the full period is very close to the trend from OLS linear regression on the original data. And WLS regression on the transformed data gives a mean trend whose magnitude is only 2.5% below that for OLS regression on the transformed data, as compared with 19.2% below the OLS trend when the original data is regressed.
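A sketch of the first-difference approach, again on synthetic quadratic data (my stand-in, not the real series):

```python
import numpy as np

t = np.arange(26)                      # years since 1991
y = -16.0 * t - 0.15 * t**2            # synthetic stand-in

dy = np.diff(y)                        # first differences of the data
tm = t[1:]                             # time index attached to each difference

# regress the differences on time: the intercept corresponds to a linear trend
# in the original data, the slope to a quadratic trend term
b, a = np.polyfit(tm, dy, 1)

mean_trend = a + b * tm.mean()         # mean trend over the full period
ols_trend = np.polyfit(t, y, 1)[0]     # OLS trend on the original data

print(mean_trend, ols_trend)           # very close to each other, as the text notes
```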

What this implies for trend estimation in Resplandy et al.

I have demonstrated two important points. One is that regression cannot reduce trend or scale systematic errors in the data, because they are perfectly correlated across all data points. The second is that WLS regression is liable to produce seriously inaccurate trend estimates where a scale systematic error is involved.

With linear regression models, the estimated trend of the sum of several components equals the sum of the trends of the individual components. Moreover, provided there is no correlation between errors in the different components, the trend estimation error for their sum equals the sum in quadrature of the trend estimation errors for the individual components. That enables one to place a theoretical lower limit on the ΔAPOClimate trend uncertainty, by adding in quadrature identified trend and scale systematic errors.

The largest contribution to error in ΔAPOClimate comes from its ΔAPOOBS component. That in turn contains a scale systematic error of (at ± 1σ) 2% of a trend of circa −19.8 per meg yr−1, or ± 0.396 per meg yr−1, and three trend errors, of ± 0.3, ± 0.2 and ± 0.1 per meg yr−1. Since these four error sources are all independent, they can be added in quadrature. So can the ± 0.135 per meg yr−1 trend error in ΔAPOAtmD. I will leave errors in the remaining components, ΔAPOFF and ΔAPOCant, out of account; although it is highly likely that they also contain trend or scale systematic errors, their magnitude is small and, when added in quadrature, they would not add significantly to the total.

Adding all the quantified error sources in quadrature gives an irreducible minimum ΔAPOClimate trend uncertainty of ± 0.56 per meg yr−1. Even if the much lower, clearly incorrect, WLS estimate of the (δO2/N2) trend of −16.0 were substituted for the OLS-based trend, the irreducible trend uncertainty would still be 0.51 per meg yr−1. Resplandy et al.’s ± 0.15 ΔAPOClimate trend uncertainty estimate is completely infeasible!
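The quadrature arithmetic can be verified directly (the error magnitudes are the ones quoted above):

```python
import math

# identified 1-sigma trend / scale systematic errors, in per meg per year
scale_err_o2n2 = 0.02 * 19.8            # 2% scale systematic on the ~-19.8 (δO2/N2) trend
trend_errs = [0.3, 0.2, 0.1]            # corrosion, leakage and desorption trend errors
atmd_err = 0.135                        # 50% of the 0.27 ΔAPO_AtmD trend

total = math.sqrt(scale_err_o2n2**2 + sum(e**2 for e in trend_errs) + atmd_err**2)
print(round(total, 2))                  # 0.56

# substituting the (incorrect) WLS-based (δO2/N2) trend of -16.0 instead:
wls_total = math.sqrt((0.02 * 16.0)**2 + sum(e**2 for e in trend_errs) + atmd_err**2)
print(round(wls_total, 2))              # 0.51
```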

 

Nicholas Lewis                                                                       7 November 2018



UPDATE                                                                               8 November 2018

As mentioned in a comment by Judith Curry, Laure Resplandy now has a statement on her website http://resplandy.princeton.edu/ under her entry OCEAN WARMING FROM ATMOSPHERE GASES, linking to her new paper, which reads:

We are aware the way we handled the errors underestimated the uncertainties. We are working on an update that addresses this issue. We thank Nicholas Lewis for bringing this to our attention.

 

167 responses to “Resplandy et al. Part 2: Regression in the presence of trend and scale systematic errors”

  1. There is an inconvenient question for Nic and other skeptics – what is the rate of warming? This compares the cumulative TOA power flux imbalance to Argo. Very different variables that must co-vary. Argo heat is rising – after the pre-2007 glitch – and at a rate of about 0.8 W/m2.

    This is the monthly power flux in less power flux out – starting at about the middle of an annual cycle when the energy imbalance is zero. The average power flux imbalance is 0.82 W/m2 – and trending to zero apparently. No mystery – the Earth system tends to maximum entropy.

    The study value is 0.83 W/m2.

    • Is Argo data in blue? Are the Argo data consolidated by reanalysis or by using an interpolated grid? What causes the change in slope around month 70?

      I also wonder: let’s say two data sets are readily available and you want to come up with a clever method to use different data to match the other sets’ trend. Is it possible the new method can evolve to use an incorrect approach to match the “known” data? Is it possible the CERES cumulative power flux biases the individuals creating an Argo gridding or reanalysis so the two match more or less, and then we add a third (flawed) analysis with a different type of data to match the CERES trend? I’m definitely not as high powered as you all, but I have spent time trying to catch scientists and engineers playing games with data and models to get promoted or sell goofy ideas. I would look out for this type of human error.

      Do you know what I think? It’s better to develop a reanalysis using Argo and satellite surface temp, which includes a detailed description of geothermal flux. There is 0.1 W/m2 coming from below in an uneven pattern, and this influences how one extrapolates the reanalysis below Argo reach.

      • There is a legend and Argo is in blue. The data comes from the Scripps Argo Marine Atlas.

        http://www.argo.ucsd.edu/Marine_Atlas.html

        CERES data comes from the CERES data product page.

        https://ceres.larc.nasa.gov/order_data.php

        Conservation of energy says that they must co-vary. The global first order differential energy equation can be written as the change in heat in oceans is approximately equal to energy in less energy out at the top of the atmosphere (TOA).

        Δ(ocean heat) ≈ Ein – Eout

        The CERES record shows more energy entering the system than leaving over the period and the Argo record shows oceans warming. It can be seen in anomalies that are relatively precise – and that do not need anchoring with Argo.

        (a) Shortwave

        (b) infrared

        The cloud signature is an anti-correlated relationship of IR and SW. Less cloud reflects less light but allows more IR to escape. With low marine strato-cumulus cloud, SW dominates. There are land use, water vapor, aerosols, rain… and many other changes. But cloud impedes IR emissions and reflects SW. IR anomalies are a mirror of SW anomalies, showing the role of cloud in 21st century changes in the energy budget.

        The problem of absolute values arises when comparing incoming and outgoing energy – power flux over time – to obtain a numerical imbalance at TOA. Hence the ‘anchoring’ to Argo. The records are independent even if Argo is used to close the absolute global energy budget.

      • “The CERES record shows more energy entering the system than leaving over the period and the Argo record shows oceans warming. It can be seen in anomalies that are relatively precise – and that do not need anchoring with Argo.“
        Be that as it may neither of your graphs of SW and IR leave me with the impression of a TOA imbalance.

      • Yet it is quite obviously there and the problem would appear to be your vision.

      • Robert, you are plotting flux, a power term, against OHC, an energy term. One is the integral of the other and you expect them to co-vary??

        And yes this is totally OT. The thread is about the maths of the paper , not yet another rehash of the whole climate debate.

    • Robert: CERES-EBAF (Energy Balanced and Filled) has been adjusted to agree with 10 years of Argo data (0.71 W/m2) plus some other refinements. (In earlier versions of this product, adjustments to yield 0.85 and 0.58 W/m2 were used.) Which leaves your question about the discrepancy in your graph unanswered. The paper below didn’t include a plot like your top plot. However, I don’t understand the scale on the right-hand axis: “Cumulative W/m2”. A cumulated power flux presumably would represent energy per unit area.

      Loeb (2017): https://journals.ametsoc.org/doi/10.1175/JCLI-D-17-0208.1

      “Despite recent improvements in satellite instrument calibration and the algorithms used to determine CERES TOA radiative fluxes, a sizable imbalance persists in the average global net radiation at the TOA from CERES satellite observations. With no adjustments to CERES SW and LW all-sky TOA fluxes, the net imbalance for July 2005–June 2015 is approximately 4.3 W m−2, much larger than expected. As in previous versions of EBAF, we use the objective constrainment algorithm described in Loeb et al. (2009) to adjust SW and LW TOA fluxes within their ranges of uncertainty to remove the inconsistency between average global net TOA flux and heat storage in the earth–atmosphere system, as determined primarily from ocean heat content anomaly (OHCA) data. In the current version, the global annual mean values are adjusted such that the July 2005–June 2015 mean net TOA flux is 0.71 ± 0.10 W m−2, as provided in Johnson et al. (2016) [uncertainties at the 95% confidence level account for expendable bathythermographs (XBT) correction uncertainties and Argo sampling errors for 0–1800 m]. The uptake of heat by Earth for this period is estimated from the sum of (i) 0.61 ± 0.09 W m−2 from the slope of weighted linear least squares fit to Argo OHCA data to a depth of 1800 m analyzed following Lyman and Johnson (2008), (ii) 0.07 ± 0.04 W m−2 from ocean heat storage at depths below 2000 m using data from 1981–2010 (Purkey and Johnson 2010), and (iii) 0.03 ± 0.01 W m−2 from ice warming and melt and atmospheric and lithospheric warming for 1971–2010 (Rhein et al. 2013).”

      • The average monthly instantaneous power flux can be converted to Joules by multiplying it by the number of seconds in the month. This shows more energy entering the system than leaving over the period.

  2. This comment is off-topic. A scientific study’s results are meant to be based on properly analysing the data that the study itself uses. What Argo shows is irrelevant to whether the Resplandy et al study’s results are correctly calculated.

    • Wow. CERES and Argo are discussed in the paper – quite rightly as these are the primary sources for ocean heat. The question on the surface is very simple. Is Nic claiming on the basis of this that oceans are not warming – this century – at a rate of some 0.8 W/m2 consistent with both ocean and satellite data – and now with this other method?

      • No, of course I’m not claiming that the oceans are not warming.
        BTW, the satellite data baseline radiative imbalance is calibrated to match Argo; it does not provide an independent source.

      • Nic

        Of course you are not claiming the oceans are not warming, but 1991 is the blink of an eye and disregards other warm spells we can trace back millennia.

        Is an increase since 1991 significant, especially bearing in mind the convoluted maths needed to arrive at a figure?

        I do think you need to submit this information to Nature. Has the author responded as yet?

        tonyb

      • The data sources are:

        https://ceres.larc.nasa.gov/order_data.php

        http://www.argo.ucsd.edu/Marine_Atlas.html

        The global first order differential energy equation can be written as the change in heat in oceans is approximately equal to energy in less energy out at the top of the atmosphere (TOA).

        Δ(ocean heat) ≈ Ein – Eout

        Satellites measure change in energy in and energy out well but are not so good at absolute values – the comparison problem – so that energy imbalances at TOA are not immediately obvious. Energy in and out varies all the time. Energy in varies with Earth’s distance from the Sun on an annual basis and with much smaller changes over longer terms due to changes in solar emissions. Outgoing energy varies with cloud, ice, water vapor, dust… – in both shortwave (SW) and infrared (IR) frequencies.

        An increase in energy flux into the system can be seen in CERES anomalies. These are precise and do not require ‘anchoring’ in Argo.

        On the very short CERES record net energy out – the sum of SW and IR power fluxes over time – is the dominant term on the right hand side of the global energy equation. Net TOA power flux is warming up by convention – in what was a warming trend in net CERES anomalies.

        The components are shortwave and infrared.

        There is less reflected light in the early years followed by little change and a recent warming associated with a warm eastern Pacific. The IR data on the other hand shows cooling in the early record, little change in the middle and more cooling at the end. The mirror image of SW and IR energy changes show that cloud was a primary source of energy change in the climate system in the 21st century. The question you should be asking is why.

        The data sets are independent even if closing the absolute global energy budget uses the Loeb et al. 2009 methodology of ‘anchoring’ it to Argo inter alia.

        The problem here remains. Resplandy et al found a warming rate of 0.83 W/m2 – at the higher end of estimates as they say. The higher end to this skeptic seems the more credible and recent end. And I reserve judgement about how the more credible rate was derived here in contrast to the invidious leaps to superficial partisan rhetoric from both sides around this.

      • Change is what is measured by CERES and SORCE, precisely and stably. Closing the absolute global energy budget is via the methods of Loeb et al. 2009.

    • Why can’t I post comments?

  3. Simple question Nic. If I drop the deltas from your first displayed equation, I get the value of the quantity of interest and not its delta. In that case, this issue of baselining does not arise.

    • dpy6629, thanks for raising the baselining point.
      Like you, I had thought about removing the 1991 baselining. The difficulty is that some of these variables are just derived from trend estimates – they have no absolute value. For instance, dAPO_AtmD is estimated as a trend of 0.27 per meg per year with an uncertainty of 50% (± 0.135 per meg per year).
      However, the method used should produce the same result whatever baseline year is used. The fact that it does not do so shows that the method is faulty; the problem arises from the way that trend and scale systematic errors are treated.

  4. Nic, it would be more in the style of the Climate Etc. to post only the first paragraph and to link it the whole article.

  5. To Michael Mann’s innovative programming that created ‘hockey stick’ graphs out of white noise — suitable for publishing by the UN — we now add Resplandy’s ‘trajectory optimization’ mathematical models, designed to maximize estimates of the speed of oceanic warming due to AGW — suitable for framing America and capitalism for killing us all, no matter what we do.

    • Red noise, not white.

      • …and, a lot of hot air…

        “We found that at least 43 authors have direct ties to Dr. Mann by virtue of coauthored papers with him. Our findings from this analysis suggest that authors in the area of this relatively narrow field of paleoclimate studies are closely connected. Dr. Mann has an unusually large reach in terms of influence and in particular Drs. Jones, Bradley, Hughes, Briffa, Rutherford and Osborn.

        “Because of these close connections, independent studies may not be as independent as they might appear on the surface… We note that many of the proxies are shared. Using the same data also suggests a lack of independence.

        “The MBH98/99 work [aka, the ‘hockey stick’] has been sufficiently politicized that this community can hardly reassess their public positions without losing credibility. Overall, our committee believes that the MBH99 assessment that the decade of the 1990s was the likely the hottest decade of the millennium and that 1998 was likely the hottest year of the millennium cannot be supported by their analysis.” ~Dr. Edward J. Wegman (2006)

    • The hockey stick has been confirmed many times by now, including using mathematical techniques other than Mann et al’s.

      • Michael Dennis Jankowski

        Different mathematical techniques…but still including bristlecone pines and/or other proxies that should not be used, period. You have been around enough to be aware of this. Nice lie of omission.

      • “The hockey stick has been confirmed many times by now, including using mathematical techniques other than Mann et al’s.”
        Rubbish.
        Most confirmations, and there are not that many, David, are either Mann with associates or sock puppets of Mann.
        Worse, you know that but still spread that rubbish.
        Mathematical techniques are not confirmations, data subject to techniques could be.

    • All of the studies showing hockey sticks including the PAGES ones have come under withering criticism from McIntyre. I personally agree with Steve that the field is hopelessly addicted to artificial and unscientific selection criteria of proxies.

      • The government has been paying for filing cabinets full of worthless global warming pseudo-science like this for years.

        “My whole involvement has always been driven by concerns about the corruption of science. Like many people I was dragged into this by the Hockey Stick. The Hockey Stick is an extraordinary claim which requires extraordinary evidence, so I started reading round the subject. And it soon became clear that the first extraordinary thing about the evidence for the Hockey Stick was how extraordinarily weak it was, and the second extraordinary thing was how desperate its defenders were to hide this fact. The Hockey Stick is obviously wrong. Climategate 2011 shows that even many of its most outspoken public defenders know it is obviously wrong. And yet it goes on being published and defended year after year. Do I expect you to publicly denounce the Hockey Stick as obvious drivel? Well yes, that’s what you should do. It is the job of scientists of integrity to expose pathological science. It is a litmus test of whether climate scientists are prepared to stand up against the bullying defenders of pathology in their midst.” (Jonathan Jones)

  6. Nic,

    I hope you submit a version of your posts to Nature. Some of this seems sufficiently clear to compel publication.

  7. The entire planet represented by a few data points? Seems a bit much for such a complex system.

    • Not really. If you are defining one part of a complex system like a global temperature, you can use a simple data set or go as complex as you want.
      No problems.
      In practice one can only ever use the data points that one has that are in good working order.
      The system might be complex but the data collectable is restricted and that is all one can ever use.
      This is why, as Nic said, a new approach to provide a check for other methods is always welcome.

  8. David L. Hagen (HagenDL)

    Nic Thanks for your detailed analysis. PS is there minor typo of missing “-” in “Using WLS regression, it was 16.05 per meg yr−1”? vis “-16.05” later.

  9. What does “more right” mean?
    Either Lewis’s criticism is correct or it isn’t.
    If it is, and I haven’t seen any evidence he is wrong, [you certainly haven’t presented any] then there is obviously something amiss with their paper. What correlation a poorly executed, incorrect paper bears to other research, trends, metrics or anything else is largely irrelevant isn’t it?
    If they’re wrong then they should correct the errors and then what relation it has to any other evidence, studies, papers etc can be considered.
    Until then they’re just wrong, not more or less right.

    • Thanks – I was wondering if in rushing out of the house I forgot to post it. I am starting to see where #atomski is coming from.

      But the point is consilience in a complex and uncertain field where results are imprecise and absolute truth is absent. The important question seems to be is the world warming at 0.8W/m2 or not? It seems quite likely based on 3 out of 3 data sources now.

      Most of the commentary on this is tribal posturing from skeptics. The search is always on for a simple reason to dismiss with prejudice some bit of science they disapprove of.

      The method itself is refreshingly new – it opens a new perspective and they reach a very plausible conclusion – they get serious scientific points. It is a fabulous study – even if there is an error – but any error will not be convincingly demonstrated by a knee-jerk bit of blog science. You need at least three – and hopefully a bit of real science. In the interim – chill – your rhetoric is absurd.

      As far as I am concerned If some wretch wants to dis science itself piece by piece on the basis of motivated belief – and I can’t figure out why but both sides do it – then they earn contempt.

      • “But the point is consilience in a complex and uncertain field where results are imprecise and absolute truth is absent.”
        No point in consilience if there is no possible truth, is there?
        “ The important question seems to be is the world warming at 0.8W/m2 or not? It seems quite likely based on 3 out of 3 data sources”
        I thought the question was more what rate it should be warming at.
        I am surprised that there are 3 different and unrelated data sets (is that right? Someone above – Nic? – suggested that at least 2 were actually linked).
        What temp rise over what time frame is that, for those who prefer a simple understandable scale?

      • What rate it should be warming at? This paper claims 0.83 W/m2 +/- 0.11 (from memory) at the higher end of estimates they say – rather than 60% greater. Discount the noisy hyperbole from both sides and you may have a basis for seeking truth – but as Voltaire said beware those who have found it.

        There is far too much scope for uncertainty in this method for great precision yet – but it is still a new and good idea that breaks scientific ground. Kudos. New perspectives are possible that may evolve into new methods.

        You can do the calcs – not all that difficult.

        http://www.argo.ucsd.edu/Marine_Atlas.html

        https://ceres.larc.nasa.gov/order_data.php

      CERES/SORCE measures change with precision and stability – and although the paper as well claims they are not truly independent – I find that too self-serving. The data is space based of course and the method of closing the energy budget (Loeb et al 2009) does not make it an alias for ocean heat.

      • … I find it too self serving… and facile.

  10. Pingback: Uncritical News Media Gave Blanket Coverage To Flawed Climate Paper | Watts Up With That?

  12. The Washington Post, for example, reported: “The higher-than-expected amount of heat in the oceans means more heat is being retained within Earth’s climate system each year, rather than escaping into space. In essence, more heat in the oceans signals that global warming is more advanced than scientists thought.”

    The New York Times at least hedged their reporting, claiming that the estimates, “if proven accurate, could be another indication that the global warming of the past few decades has exceeded conservative estimates and has been more closely in line with scientists’ worst-case scenarios.”

    Such bizarre hyperbole – there is neither more nor less heat in the oceans and atmosphere than last week. Well not much. And the average rate of warming over the past couple of decades they find is about 0.8 W/m2 – consistent with other recent estimates. It seems currently much less than that. Believers have had a lock on bizarre hyperbole – but it does seem that skeptics are making a late run.

  14. This story’s been noted by Reason’s Ronald Bailey. He apparently has no financial investments that depend on thermal sea water gradients:

    http://reason.com/blog/2018/11/08/is-new-study-claiming-the-oceans-are-war

  15. Nic, have you heard back from the authors?

    • Yes, he has: “New study estimate ocean warming using atmospheric O2 and CO2 concentrations. We are aware the way we handled the errors underestimated the uncertainties. We are working on an update that addresses this issue. We thank Nicholas Lewis for bringing this to our attention.” Source: http://resplandy.princeton.edu
      Thanks, Nic! :)

    • Resplandy now has a statement on her website: resplandy.princeton.edu

      “We are aware the way we handled the errors underestimated the uncertainties. We are working on an update that addresses this issue. We thank Nicholas Lewis for bringing this to our attention.”

      • “We show that the ocean gained 1.33 ± 0.20 × 10²² joules of heat per year between 1991 and 2016, equivalent to a planetary energy imbalance of 0.83 ± 0.11 watts per square metre of Earth’s surface.”

        The central estimate is fine – and they have underestimated the confidence limits?

        Estimates of these components have huge uncertainties that are possibly systematic rather than random.

        The cultural problem for science is the invidious leap to superficial partisan rhetoric from both sides.

      • I’m not seeing her comment on the website(?)

      • On the top menu bar, click on “in the news”.

      • That’s good news to know the authors have accepted that Nic Lewis is correct. It raises my confidence that Nic Lewis is careful with his analyses and that non experts in the field can have confidence in what he says. It increases my confidence in his (and Curry’s) ECS and TCR estimates. The fact that Richard Tol checked and agreed with Lewis’s numbers also helps.

      • Dr. Curry and Nic:
        I apologize for hijacking this post but could find no other appropriate
        “reply”s.
        I would like to submit an article for publication on Climate Etc. Can you send me contact info? The topic involves errors in temperature anomalies.
        Thanks.
        4kx3

      • Robert,

        As Nic explains in his first post, the size of the error bars makes this work, though novel, broadly consistent with all other estimates. Not sure why you keep hammering home this point about 0.8 W/m^2 when Nic explicitly says their results are consistent with previous measures. Or, is your argument more about the rate and the slope being much lower than their estimate? But again, the large error bars come into play there too.

  16. Judith: ALMOST in sync! :-)

  17. I guess that the response will clarify which method was used and what the uncertainties actually are, and then we can all resume our normal approaches.
    What does “the way in which they handled the errors meant they underestimated the uncertainties” actually mean? Does it mean they did it Kennedy’s way? Does this mean that it actually does matter what starting point you use?

  18. Nic,
    Thank you for a meticulous analysis of the uncertainty model.
    It appears from Resplandy’s website that she has acknowledged that there is a problem with the error analysis, but has yet to acknowledge the more important problem with the median estimation of trend. Is there another shoe still to drop? Has she now responded to your earlier e-mails?

    • Hi Paul
      Thanks! I think you are right.
      The only substantive response I have had from Laure Resplandy simply said that they were working on an update. I had sent her pdfs of both articles as they were posted; that response was after I sent the pdf of this article.

  19. How many angels can dance on the point of a pin?

  20. This paper is excellent science. There is poor science of course – I would include opportunistic ensembles, TCR and ECS, anything lacking a centennial to millennial perspective – papers that rely on the assumption that any change in the past 70 years is anthropogenic – and, in a complex dynamical system, anything that finds a simple cause and effect. I wouldn’t include the meticulous science of PAGES 2k.

    The rush to rehearsed purple prose here – regardless of the merits of the argument – and we will see – largely from those who don’t know their scientific arse from their elbows is a less than edifying spectacle.

  21. The problem with Resplandy et al. is far deeper than just faulty determination of regression error. It lies in the blind reliance upon linear regression for analyzing empirical time series whose spectral structure is unknown–a “climate science” practice as widespread as it is deplorable.

    There are almost no geophysical time series that satisfy the tacit premise of persistent linear trend plus stationary gaussian white noise. Instead, we are confronted with climate data that are rich in chaotic oscillations of various bandwidths and periods that often exceed the length of available record. What are often presumed to be “trends” turn out to be mere snippets of far-longer-scale variations of unknown origin. Analytic unraveling of the many mysteries of nature cannot begin by relying upon false premises.

    • “Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic. The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.” Julia Slingo and Tim Palmer, 2011

      Significant in these seemingly random changes are the 20- to 30-year Pacific state regimes. This is the core of the Earth system operation. It happens in 20 to 30 year regimes quite evidently for at least 1000 years. Regimes have associated means and variance that then shift, and this sums to perpetual change.


      https://journals.ametsoc.org/doi/10.1175/JCLI-D-12-00003.1

      And these modulate temperatures as signals propagate around the planet like a stadium wave – with, inter alia, anti-phase responses in the Antarctic and Arctic (e.g. https://uncch.pure.elsevier.com/en/publications/polar-synchronization-and-the-synchronized-climatic-history-of-gr)

      The implication for Resplandy is that the rate of warming changed around the turn of the century and will shift again within a decade – if it is not happening now – with changes in TOA radiant energy flux and the rate of ocean warming. How’s that for a prediction? This time we will have CERES and Argo. Most people are still far behind the curve and I guess it will come as a surprise.

      https://watertechbyrie.com/2014/06/23/the-unstable-math-of-michael-ghils-climate-sensitivity/

    • @john321s,
      I agree with most of your commentary. See my long note to Nic Lewis below.

  22. Nic Lewis:
    Great job.

  23. Nic,

    I do wonder why the authors, having produced a great hypothesis, were only prepared to spend a couple of bent pennies on crude inferences from their results. I suspect in part at least that the answer to this question lies in the fact that, using the authors’ own data, it is easily concluded that average net TOA flux is only about half of previous best estimates as used in the reconciliation of CERES data to ARGOS. Somehow, I don’t think that this result would have drawn such big headlines. I will explain in a moment.

    As john321s correctly points out above, there is little justification for expecting a straight line in the pseudo-concentration time series. If the authors’ hypothesis is correct then the per meg vs time plot should be responding to instantaneous heat content or the integral of net flux with time. There is no physical validity to any assumption that net flux is approximately constant with time over this period. We have sufficient reliable data (satellite anomaly variation and ocean temperature measurements) to know that it isn’t, and the authors’ own data show significant gradient variation over the period.

    The only justification (of the authors), I presume, is that fitting a linear trend to the total dataset yields one measure of the “average rate of energy gain” and hence the “average net flux” over the full period. As such, I would question whether this measure is a good choice. It is a very crude measure of average flux, especially where, as in this instance, the integral curve displays non-linear structure. End point analysis would be less ambiguous and would preserve total energy gain over the period. But a more obvious approach would be to analyze the first difference series as a more direct measure of the time-variation in net flux. This would not only allow reconciliation/comparison with time-varying data from alternative sources, but would also IMO allow a statistical model with a perhaps more robust separation of the components of uncertainty.

    None of the above detracts from the validity of your challenges to the calculations on which the authors actually reported, but these thoughts were what led me to speculate on why the authors chose to report on this crude (and seemingly incorrectly calculated) headline item of average energy gain per year over the period.

    In recent years, our best relative ocean heat data comes from ARGOS, and our best satellite data on RELATIVE CHANGE in TOA net flux comes from CERES. CERES has high precision and low accuracy. It yields no useful information on the absolute value of net flux, but tracks the relative change with good precision. Consequently, after conversion of irradiance to flux (including daily drift calculations), constant bias corrections are applied to the SW and LW fluxes to tie the TOA net flux values to ARGOS-adjusted-PLUS estimates of net flux, using data from between 2005.5 and 2015.5. Before application of this correction, CERES EBAF (Ed 4.0) shows a non-credible net flux difference of 4 W/m2 when averaged over the reference period. After correction, it shows a value of 0.71 W/m2 – which by design ties into the estimate obtained from ARGOS-adjusted-PLUS. This ARGOS-adjusted-PLUS value comes from Johnson et al:- https://www.researchgate.net/publication/304400581_Improving_estimates_of_Earth's_energy_imbalance

    If we now consider the gradient of the Resplandy data OVER THE SAME REFERENCE PERIOD, it works out to be around 0.51 per meg per year. This translates into a heat addition of 5.8 ZJ per year or an average net flux value of 0.36 W/m2. This is about half the reference value from Johnson et al. So have we been greatly underestimating or greatly overestimating ocean warming?
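
The arithmetic behind these figures can be sketched as a back-of-envelope check (my own sketch, not kribaez’s calculation; the per-meg-to-joules scaling is inferred from the paper’s headline numbers, 1.16 per meg/yr ↔ 1.33 × 10²² J/yr, and the surface area and year length are standard values):

```python
# Hedged sketch: convert a dAPO_Climate trend (per meg/yr) into an implied
# planetary net flux (W/m2). The J-per-per-meg factor is inferred from the
# paper's central estimates, not taken from its methods section.
EARTH_AREA_M2 = 5.1e14         # Earth's surface area (standard value)
SECONDS_PER_YEAR = 3.156e7

J_PER_PERMEG = 1.33e22 / 1.16  # implied by 1.16 per meg/yr <-> 1.33e22 J/yr

def permeg_trend_to_flux(trend_permeg_per_yr: float) -> float:
    """Implied net flux in W/m2 for a given dAPO_Climate trend."""
    joules_per_year = trend_permeg_per_yr * J_PER_PERMEG
    return joules_per_year / (SECONDS_PER_YEAR * EARTH_AREA_M2)

print(round(permeg_trend_to_flux(1.16), 2))  # full-period trend -> ~0.83
print(round(permeg_trend_to_flux(0.51), 2))  # 2005.5-2015.5 gradient -> ~0.36
```

On these numbers, the ~0.51 per meg/yr gradient over the CERES reference decade does indeed imply roughly half the 0.71 W/m2 reference value from Johnson et al.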

  24. kribaez

    ‘So have we been greatly underestimating or greatly overestimating ocean warming?’

    That is a great question. There has been a lot of talk of supposed sea level rise acceleration, which never quite seems to come about.

    Fasullo et al

    “Is the detection of accelerated sea level rise imminent?”

    Global mean sea level rise estimated from satellite altimetry provides a strong constraint on climate variability and change and is expected to accelerate as the rates of both ocean warming and cryospheric mass loss increase over time. In stark contrast to this expectation however, current altimeter products show the rate of sea level rise to have decreased from the first to second decades of the altimeter era. Here, a combined analysis of altimeter data and specially designed climate model simulations shows the 1991 eruption of Mt Pinatubo to likely have masked the acceleration that would have otherwise occurred. This masking arose largely from a recovery in ocean heat content through the mid to late 1990s subsequent to major heat content reductions in the years following the eruption. A consequence of this finding is that barring another major volcanic eruption, a detectable acceleration is likely to emerge from the noise of internal climate variability in the coming decade.

    http://sealevel.colorado.edu/content/detection-accelerated-sea-level-rise-imminent

    There is another slightly newer paper on the same site by Fasullo et al.

    “Climate-change–driven accelerated sea-level rise detected in the altimeter era”

    http://www.pnas.org/content/115/9/2022

    On reading the second paper, though, the results seem more ambivalent and nuanced than the headline suggests.

    So the level rise may be accelerating, or it may not. A good part of this would be due to the heat content in the ocean and subsequent expansion.

    I don’t think we can therefore be certain which of the scenarios in your question is or isn’t happening.

    Perhaps these figures could do with checking over to see if the uncertainties have been properly dealt with?

    tonyb

    • Tony,
      Forgive me if I don’t pursue the question of MSL rate change here in any detail, other than to say that given the quasi oscillatory behaviour historically, the difficulty of tying TOPEX to JASON and the lack of closure between satellite altimetry and tide-gauge data, I am agnostic on the question of whether it is accelerating or decelerating, but I am 100% confident that it is doing one or the other if you pick the correct timescale!

      My question was perhaps easily misread as an open question (sorry!). It was not intended to be an open question. It was narrowly focused on the fact that under the Resplandy interpretation, the inference is that we have underestimated ocean heat gain substantially (big headlines). Once her calculation is corrected a la Nic Lewis, the median estimate becomes 0.63W/m2 – in line or slightly below most estimates over the period. If, on the other hand you repeat the calculation using the Resplandy data over the key reference period 2005.5 to 2015.5 you find that her method predicts only half the average net flux estimated by ARGO.

      • I think the error bars on this method would be much wider than those of the more direct Argo measurements for the recent period, so there is no reason to prefer this measure to Argo since 2005. Prior to 2005, and especially prior to 2000, the in situ error bars become wider, but I am still not sure that Resplandy’s technique would be any better. It looks very indirect, with a lot of built-in assumptions.

  25. “Importantly, the SW and LW TOA flux adjustment is a one-time adjustment to the entire record. Therefore, the time dependence of EBAF TOA flux is tied to the CERES instrument radiometric stability.”


    https://journals.ametsoc.org/doi/10.1175/JCLI-D-17-0208.1

    “The CERES Energy Balanced and Filled (EBAF) product is produced to address two shortcomings in the standard CERES level-3 data products. First, satellite instruments used to produce CERES TOA ERB data products provide excellent spatial and temporal coverage and therefore are a useful means of tracking variations in ERB over a range of time–space scales. However, the absolute accuracy requirement necessary to quantify Earth’s energy imbalance (EEI) is daunting. The EEI is a small residual of TOA flux terms on the order of 340 W m−2. EEI ranges between 0.5 and 1 W m−2 (von Schuckmann et al. 2016), roughly 0.15% of the total incoming and outgoing radiation at the TOA. Given that the absolute uncertainty in solar irradiance alone is 0.13 W m−2 (Kopp and Lean 2011), constraining EEI to 50% of its mean (~0.25 W m−2) requires that the observed total outgoing radiation is known to be 0.2 W m−2, or 0.06%. The actual uncertainty for CERES resulting from calibration alone is 1% SW and 0.75% LW radiation [one standard deviation (1σ)], which corresponds to 2 W m−2, or 0.6% of the total TOA outgoing radiation. In addition, there are uncertainties resulting from radiance-to-flux conversion and time interpolation. With the most recent CERES edition-4 instrument calibration improvements, the net imbalance from the standard CERES data products is approximately 4.3 W m−2, much larger than the expected EEI. This imbalance is problematic in applications that use ERB data for climate model evaluation, estimations of Earth’s annual global mean energy budget, and studies that infer meridional heat transports. CERES EBAF addresses this issue by applying an objective constrainment algorithm to adjust SW and LW TOA fluxes within their ranges of uncertainty to remove the inconsistency between average global net TOA flux and heat storage in the earth–atmosphere system (Loeb et al. 2009).”

    CERES radiometric data is stable and accurate – I am not clear on what the presumed difference between accurate and precise above is – and it provides information on what is changing in the system. Regardless – the ocean data remains. Warming steadily and strongly in the most recent decade.

    “The top-of-atmosphere (TOA) Earth radiation budget (ERB) is determined from the difference between how much energy is absorbed and emitted by the planet. Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.” https://link.springer.com/article/10.1007/s10712-012-9175-1

    The role for chaos in the Earth system is solely in how atmospheric and ocean dynamics change. Regimes, shifts and variance that gets more extreme the longer the record. The nature of Wally Broecker’s beast. Trends, means and variance are perfectly valid ideas within a regime – but regimes in the globally coupled flow field shift. Importantly it seems with a 20 to 30 year periodicity. The next shift is due within a decade – and we will have CERES and Argo this time.

    The corollary of Wally’s beast is that it is still not a great idea to poke it with a stick.

    • Robert,
      It is not clear to me what point if any you are trying to make here.
      From your reference:-
      “Despite recent improvements in satellite instrument calibration and the algorithms used to determine CERES TOA radiative fluxes, a sizable imbalance persists in the average global net radiation at the TOA from CERES satellite observations. With no adjustments to CERES SW and LW all-sky TOA fluxes, the net imbalance for July 2005–June 2015 is approximately 4.3 W m−2, much larger than expected. As in previous versions of EBAF, we use the objective constrainment algorithm described in Loeb et al. (2009) to adjust SW and LW TOA fluxes within their ranges of uncertainty to remove the inconsistency between average global net TOA flux and heat storage in the earth–atmosphere system, as determined primarily from ocean heat content anomaly (OHCA) data. In the current version, the global annual mean values are adjusted such that the July 2005–June 2015 mean net TOA flux is 0.71 ± 0.10 W m−2, as provided in Johnson et al. (2016)…”

      TOA net flux from CERES EBAF Ed 4.0 is calibrated in a one-off bias correction to 0.71 W/m2. The derivation of this value, which is what I described as “ARGOS-adjusted-PLUS”, is partly explained in your reference and fully explained in the Johnson et al paper which I referenced in my comment above, and which is also referenced multiple times in the Loeb et al paper from which you abstracted your quotes.

      I say again, it is not at all evident to me what point you are seeking to make.

      • You have certainly lost my interest – from the few words I scanned it seems a rehash, a spurious dismissal of the Norman Loeb et al 2017 paper and some whines about missing the point. Loeb et al 2009 is the basis for the closure methodology.

        Warming in net (–IR – SW: up is warming by convention) is obvious – and if you understand that solar variability is inconsequential directly – it’s not the sun, stupid – and there I don’t think I am referring to you – then closing the global energy budget is not required for a Fermi problem solution. The warming rate is approximately given by the net flux.

  26. Pingback: An ominous discovery? Sea Temperature or Climate? | Naval War changes Climate

  27. “Although it has failed to produce its intended impact nevertheless the Kyoto Protocol has performed an important role. That role has been allegorical. Kyoto has permitted different groups to tell different stories about themselves to themselves and to others, often in superficially scientific language. But, as we are increasingly coming to understand, it is often not questions about science that are at stake in these discussions. The culturally potent idiom of the dispassionate scientific narrative is being employed to fight culture wars over competing social and ethical values.49 Nor is that to be seen as a defect. Of course choices between competing values are not made by relying upon scientific knowledge alone. What is wrong is to pretend that they are.” http://www.lse.ac.uk/researchAndExpertise/units/mackinder/pdf/mackinder_Wrong%20Trousers.pdf

    My guess is that the unhealthy and unhelpful triumphalism around this supposed error will grind down to another inconclusive climate war talking point as the opposing arguments are marshaled.

    Chaos in the Earth system sums to a conclusion most can’t get their heads around — you get points for guessing which denier said this. “In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.” It means that most of “the science” — the data interpretation, the methods and the theories are utterly inadequate to the task of explaining climate for us. But both sides of the climate war continue to insist on a certainty that is impossible – and continue a battle in which one side is heavily outgunned. The climate change battalion is all of the global scientific institutions, the liberal press, governments, major scientific journals, etc. Opposed is a ragtag collection of a few marginalized cheer leaders for curmudgeons with crude and eccentric theories they insist is the true science. The curmudgeons are remarkably persistent – and climate shifts may give them a strategic advantage as the planet doesn’t warm entirely as expected – or at all. The battle is absurd and unwinnable – by either side.

    The rest of us are concerned that the real objectives of humanity are not lost sight of. It is simple in principle to take the initiative on the broad front of population, development, energy technology, multiple gases and aerosols across sectors, land use change, conservation and restoration of agricultural lands and ecosystems and building resilient communities. What we really want is much more clarity on effective policy responses – a focus on the real issues of global economic progress and environmental protection. Emissions of greenhouse gases or loss of biodiversity are far from intractable problems — but economic growth is the foundation of any practical measures.

  28. Paul, Thank you for your thoughtful comment.

    I agree that there is no reason to expect a linear increase in deltaAPO_Climate. However, given the poor signal to noise ratio in their deltaAPO_Climate time series, I think the best that can be done is to provide an estimate of the average ocean heat uptake over the period (or, equivalently, an estimate of the change in ocean heat content over 1991-2016). I don’t think that it is unreasonable to do so by fitting a linear regression model, although the way that they did so is inappropriate. Alternatively, simply using the increase in deltaAPO_Climate over the analysis period would, as you say, have the advantage of avoiding difficulties over choice of regression method.

    I quite agree that analysing the first difference time series, as I suggested towards the end of my article, would be preferable, at least if weighted LS regression is used.

    As you say, ARGO provides the best ocean heat data, and CERES provides the best estimates of fluctuations in net flux and hence in ocean heat uptake. As I see it, the Resplandy et al. deltaAPO_Climate based method, while novel, interesting and independent of ARGO data, presently has such large uncertainty that it is almost equally compatible with all in situ measurement based estimates of the change in ocean heat content.

    It is certainly interesting that their deltaAPO_Climate time series increases more slowly over the final decade or so of the 1991-2016 period than prior to then, implying as you say a much lower estimate of ocean heat uptake over 2006-16. However, the uncertainties are so great that I doubt much weight can be put on this result. My current view is that ARGO based ocean heat change estimates are broadly correct over the last decade or so.

  29. It seems that they are starting to recognise the issues in the paper. The latest from one of the paper’s co-authors (https://scripps.ucsd.edu/news/study-ocean-warming-detected-atmospheric-gas-measurements) is as follows:

    Note from co-author Ralph Keeling Nov. 9, 2018: I am working with my co-authors to address two problems that came to our attention since publication. These problems, related to incorrectly treating systematic errors in the O2 measurements and the use of a constant land O2:C exchange ratio of 1.1, do not invalidate the study’s methodology or the new insights into ocean biogeochemistry on which it is based. We expect the combined effect of these two corrections to have a small impact on our calculations of overall heat uptake, but with larger margins of error. We are redoing the calculations and preparing author corrections for submission to Nature.

    Gavin Schmidt has also been tweeting about it – mind you, he refers to it as a ‘minor issue in the Resplandy et al discussion’!

    Interestingly, neither Keeling nor Resplandy make any reference to the OLS regression mean trend mis-calculation that Nic identified.

    • stevefitzpatrick

      “Interestingly, neither Keeling nor Resplandy make any reference to the OLS regression mean trend mis-calculation that Nic identified.”
      .
      Shocking, I know. Admitting the paper’s conclusions are, well, wrong, is harder than accepting that the uncertainty estimate is wrong. Still, the paper’s conclusions are wrong, whether admitted or not. Good enough for government work.
      .
      It is a little sad just the same…. a good idea that was implemented badly. Of course, if the method had just supported earlier estimates, but with much greater uncertainty, then it would not have been in Nature.

      • Steve, this comment is an excellent summary of the Resplandy paper.

        To get published in Nature they needed some extra juice. But this is perfectly justified and a tiny matter when one is saving the world.

      • “… it would not have been in Nature.” Pattern here: it must be worse than we thought to be worthy of blaring headlines. Next, thanks to conscientious people like Nic the science is corrected — yet un-publicized or unpublished.

        Nic, thanks for your time. And Resplandy et al. thank you as well for re-establishing the baseline, reopening the avenue to publish a future article headlined: “it’s worse than we thought [again]”.

      • Ron:

        My preferred solution to “saving the world” is recycling scotch-flavored ice cubes. Beats hell out of recycling bad analyses.

  30. It seems that neither Resplandy nor Keeling is accepting that the trend of 1.16 was an error, as they haven’t mentioned it. Nic explained that one possible source for it was an OLS regression on DAPO_Climate & DAPO_AtmD. I tried that and got a slope of 1.153, which is close to but not exactly 1.16.

    As Nick Stokes and Richard commented here, I ran a WLS regression of DAPO_Climate alone, with the weights as normal at 1/SD^2 (I set the 1991 SD to 1e-02). The slope of this regression is 1.163, with an SE of 5.121e-02. The mean value is the same as in the paper if rounded to 2 decimal places.
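
Since Table 4 itself is not reproduced in this thread, a minimal sketch of this kind of 1/SD² weighted fit can instead be run on the pseudo-ΔAPOAtmD series from the head post (0.27 per meg/yr, 1-σ equal to 50% of the best estimate, near-zero SD in 1991); the data below are synthetic, not the paper’s:

```python
import numpy as np

# Hedged sketch of a 1/SD^2-weighted least-squares slope, on synthetic data
# mimicking the pseudo-dAPO_AtmD series -- NOT the paper's Table 4 values.
rng = np.random.default_rng(0)
years = np.arange(1991, 2017)
best = 0.27 * (years - 1991)                 # best-estimate series
sd = np.where(best == 0, 1e-6, 0.5 * best)   # 50% proportional 1-sigma errors

sample = best + rng.normal(0.0, sd)          # one realization, independent errors
w = 1.0 / sd**2                              # WLS weights

# Solve the weighted normal equations for (intercept, slope).
X = np.column_stack([np.ones_like(years, dtype=float), years - 1991])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ sample)
print(beta[1])  # slope: pinned close to 0.27 by the heavy early-year weights
```

Because the weights scale as 1/t², the early low-uncertainty points dominate the fit – which is exactly the arbitrariness in the WLS weighting that Nic has identified.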

    I assume the figure of +/-0.15 is an SD and not an SE, so that has to be adjusted to an SD (Nic states all errors in the paper are SDs). There are 26 data points in the series; however, there is a lot of auto-correlation in the WLS residuals, so the effective size is less. Foster & Rahmstorf in ‘Global Temperature Evolution 1979–2010’ (ERL) explain in their methods the standard and adjusted ways of dealing with this. For an AR(1) model of the residuals (the standard method normally used), nu=(1+phi1)/(1-phi1)=4.12, which means the effective sample size is 26/4.12≈6.31 and SD=SE*sqrt(6.31)=0.129. From the SE of 5.121e-02 and the paper’s SD of 0.15, the effective size calculates at about 8.6, with a nu of approximately 3. If it was calculated this way then they didn’t use the AR(1) method but some other method. (Note: the residuals show significantly more auto-correlation than AR(1), so F&R recommend something like ARMA(1,1) for such a scenario – but here, oddly, the implied nu value is less than the AR(1) one!)
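
The AR(1) adjustment described here can be sketched numerically; the inputs (nu = 4.12, raw SE = 5.121e-02) are the comment’s own quoted figures, not values recomputed from the underlying data:

```python
import math

# Hedged sketch of the Foster & Rahmstorf-style AR(1) adjustment, using the
# comment's quoted numbers as inputs rather than refitting anything.
n = 26
nu = 4.12                          # (1 + phi1) / (1 - phi1) from the AR(1) fit
phi1 = (nu - 1) / (nu + 1)         # ~0.61, back-solved from nu
se_raw = 5.121e-2                  # raw WLS standard error of the slope

n_eff = n / nu                     # effective sample size, ~6.3
se_adj = se_raw * math.sqrt(nu)    # autocorrelation-inflated SE, ~0.104
sd_est = se_raw * math.sqrt(n_eff) # the SE-to-SD conversion used above, ~0.129

print(round(n_eff, 2), round(se_adj, 3), round(sd_est, 3))
```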

    It looks like the trend in DAPO_Climate of 1.16+/-0.15 was calculated from the series given in Table 4 by WLS using the SD’s given to weight them. Could the SD given for the slope have been calculated from the SE of that estimate adjusting the data size for residual auto-correlation?

    • Thinking about it further, I believe the SD listed in the paper is the SE of the estimate, but inflated for auto-correlation (the conversion back to an SD doesn’t make any sense). In fact, Gavin Schmidt queried in a tweet whether the SE was higher due to the residual auto-correlation. Using a standard AR(1) model would put the SE at +/-0.104. However, the residual auto-correlation is stronger than AR(1), so that isn’t sufficient; also, n is small at 26, and Lee and Lund give a further correction for small n. Using this small-sample correction with an AR(1) model, the SE is +/-0.133. Given the residuals show significantly more auto-correlation than AR(1) (at least to the 7th lag), further inflation of the SE would be in order (such as using ARMA(1,1), as F&R suggest). Overall, the +/-0.15 seems consistent with the SE of the WLS regression estimate adjusted for the residuals’ actual auto-correlation. It is surprising to me that such calculations (or simulations) of SEs don’t have clear explanations published somewhere (in appendices or supplementary papers or wherever). Leaving people to guess how things were derived is a strange way to do things.

      I see that Keeling references another issue someone else has identified to do with the constant land O2:C exchange ratio of 1.1.

      It will be interesting to see how they justify the trend estimate based on WLS given the arbitrariness of it that Nic has identified.

    • Joe H
      “It looks like the trend in DAPO_Climate of 1.16+/-0.15 was calculated from the series given in Table 4 by WLS using the SD’s given to weight them”

      I also think that the 1.16 trend was derived using WLS in the way that you say, although they could have used a more complex approach, using WLS regression on the 10^6 sampled time series and taking the mean trend estimate.

      It is unclear to me how exactly the +/- 0.15 1-sigma standard error was derived, but it is clear to me that the true uncertainty is far greater than +/- 0.15. Moreover, the problem is worse than simple autocorrelation: the statistical model is mis-specified, and due to the dominance of trend and scale-systematic errors in the trend estimation there are hardly any effective degrees of freedom in the errors.

      • Nic,

        Just for completeness’ sake I tried doing a Bayesian WLS regression using Stan. I tried it first as an OLS (single unconstrained SD) and got a slope of 0.88 +/- 0.05 SE. Then, constraining the SDs to the values in Table 4, the slope was 1.16 with an SE of +/-0.1. The intercept and slope priors were N(0,10), regularising and uninformative, and I ran it on 2 chains for 10k iterations. It is interesting to see that the WLS SE agrees with the standard methodology of inflating the calculated SE by sqrt((1+rho1)/(1-rho1)), as I calculated in an earlier post above. I don’t know why this SE is half that of your and Richard’s direct MC simulations, which gave about 0.2 – it must be the effect of the priors, but they are quite uninformative.
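For comparison: with the per-point SDs held fixed (as when constraining them to the Table 4 values), WLS has a closed-form slope and standard error, so the Stan result can be cross-checked without MCMC. A minimal sketch, in which `wls_slope` is a hypothetical helper and the arrays are placeholders rather than the Table 4 data:

```python
import numpy as np

def wls_slope(x, y, sigma):
    """Weighted least squares slope and its analytic standard error,
    with weights fixed at 1/sigma^2 (sigmas treated as exactly known,
    as when constraining them to published per-point uncertainties)."""
    x, y, s = (np.asarray(a, dtype=float) for a in (x, y, sigma))
    w = 1.0 / s ** 2
    xbar = np.sum(w * x) / np.sum(w)
    ybar = np.sum(w * y) / np.sum(w)
    sxx = np.sum(w * (x - xbar) ** 2)
    beta = np.sum(w * (x - xbar) * (y - ybar)) / sxx
    # valid only if the sigmas are taken at face value
    se_beta = np.sqrt(1.0 / sxx)
    return beta, se_beta
```

Note the SE here assumes the stated sigmas are correct and the errors independent; it does not account for residual auto-correlation or systematic errors, which is the crux of the discussion above.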

      • Keeling’s own acknowledgement appears quite gracious, although interestingly still lacks a direct comment on the validity of the central estimate.

        On the other hand, the response of Chris Mooney and Brady Dennis (http://www.365newstoday.com/2018/11/14/scientists-acknowledge-key-errors-in-study-of-how-fast-the-oceans-are-warming/) probably offers a template for how the news might be managed.

        Among other things, it quotes Paul Durack telling us that the study “confirms the long known result that the oceans have been warming over the observed record, and the rate of warming has been increasing.” Does it indeed? Not from where I sit. If anything the Resplandy data show a marked deceleration, but with reanalysis, they could show something different again. (Paul Durack is one of the co-authors of a 2016 paper which sought to explain the overprediction of ocean heat uptake by the CMIP5 models against (even) high-end estimates from observational data. In part he attributed it to “missing twenty-first-century volcanic forcings”.)

        We are also told that “Schmidt and Keeling agreed that other studies also support a higher level of ocean heat content than the Intergovernmental Panel on Climate Change, or IPCC, saw in a landmark 2013 report.” I wonder when and how they reached this agreement. No doubt they could also agree that there are other studies which support a lower level of ocean heat uptake, but I am not sure that either agreement is pertinent to the matter at hand.

        Nic Lewis is characterised as a sort of amateur dabbler who “has argued in past studies and commentaries that climate scientists are predicting too much warming because of their reliance on computer simulations…” Slanderous. I do not believe that Nic has ever been guilty of such a non-sequitur.

        So we can conclude from this that there was a minor problem in a recent paper, uncovered by a lucky amateur sleuth, followed by an exemplary response from the authors, despite the “hostile environment” that poor Gavin seems to work in all the time. But rest assured that we can be confident that the reality is worse than we thought.

      • stevefitzpatrick

        Hi Paul,
        I do not expect the authors, and certainly not media outlets, will ever come out and say that the methods and conclusions in the paper were simply wrong. If Nic publishes a clear refutation (essentially a peer reviewed version of his blog posts), then that might further discredit the paper’s conclusions, at least in a scientific sense. But even after a published refutation, media reports will forever point to the paper’s “it’s worse than we thought” conclusion in whipping up support for drastic public policies. It is a political exercise, not a scientific one.

  31. Instead of rejigging their maths why have they not said what method they actually used, or have I missed that?

  32. Robert.
    OHC covers only part of the water available to be heated up on the planet, albeit a very substantial part of it. One imagines that the unmentioned other third, that on and in land, has an important part to play and a need to be quantified.
    Given this extra wallop of heat over such a long time why is it not detectable by rising sea level and air temperatures?
    Your two early graphs of SW and IR did not add up to a positive balance in the last 2 years, unlike your latest combined graph.

    • SW and IR never can add up to imbalance at TOA. But it is what changes most in the energy budget for many reasons.

      Satellites measure change in energy in and energy out well but are not so good at absolute values – the problem of closing the incoming and outgoing energy budget – so that energy imbalances at TOA are not immediately obvious. Energy in and out varies all the time. Energy in varies with Earth’s distance from the Sun on an annual basis and with much smaller changes over longer terms due to changes in solar emissions. Outgoing energy varies with cloud, ice, water vapor, dust, vegetation, ocean and atmospheric temperature, etc. – in both shortwave (SW) and infrared (IR) frequencies.

      So neglect the closure problem on spurious grounds if you wish – but net outgoing TOA power flux categorically shows warming – against a background of much smaller change in solar intensity.

      And what? Something about the ENSO related increase in reflected SW in the past couple of years?

      • Thanks.
        Energy in, energy out, energy contained and energy created are the four obvious concepts, though the first two, with a powerhouse like the sun, are far greater in immediate effects and consequences.
        SW and IR are also the bandwidths we can measure most easily at the moment, but not all the bandwidths that exist and need to be balanced.
        Mass of the (planet) system is a variable concept, though reasonably constrained for virtually all conditions under discussion.
        The area that might need focusing on most is the retention time of energy for different substances and milieux.
        It is hard to envisage the tiny bit of energy trapped by each individual photosynthesis reaction, but over time that builds up to a significant amount of stored energy in human terms, on a million-year timescale. Still only a weak spark in terms of the daily solar budget.
        Similarly some substances – water, CO2 in the air – help restrain the flow of energy out of a system longer than others.
        It is this difference in re-emission rates that leads to the lag and fluctuation in energy in vs. energy out that is so hard to quantify.

      • I got to energy created and decided that…

      • At TOA – 20 km up, it is assumed, I seem to recall – all energy is electromagnetic. The change in planetary energy stores – overwhelmingly as ocean heat in relevant time frames – results from small differences between unit energy in and unit energy out that accumulate over time. Nothing all that difficult at all – at least conceptually.

        Ultimately the Earth system is not isolated and operates far from thermodynamic equilibrium. The 1st law of thermodynamics says that energy cannot be created or destroyed; the 2nd law says that net energy flows from the sun to the Earth and back to space. The planet tends to energy equilibrium – maximum entropy production – at TOA as a result of the operation of fundamental physical laws.

      • The change in energy contained (with none created) is known as the imbalance and is measurable as the rate of change of ocean heat content, the ocean having more than 90% of the effective heat capacity. The land doesn’t retain so much.

  33. Pingback: Weekly Climate and Energy News Roundup #335 | Watts Up With That?

  34. Pingback: Weekly Climate and Energy News Roundup #335 |

  35. Pingback: Weekly Local weather and Power Information Roundup #335 | Tech News

  36. Hence your concept of OHC as the arbiter of retained heat in the system. It is only a part of the total retained heat: it ignores a third of the water available on the planet to retain heat.
    Hence the figures you use are out by at least a third. Attempts to define the energy budget are therefore off by a third before you start, let alone using them to say this paper is within the bounds of what we know. Rather significant.
    Mind you this argument is specious on my part because, while true, it implies a bigger capacity for heat uptake.
    But this all begs the question of why have we not detected it. Roy Spencer gives his reasons for climate insensitivity, why are you unmoved by his arguments?

  37. Nuclear is not energy created?

    • Energy stored in matter since soon after the big bang.

    • angech, people have estimated total combustion (of which nuclear would be a small part) and it is still a fraction of a percent of solar input when averaged over the earth’s surface. Plus there is no reason to believe it has changed significantly over time in the way other forcings have.

    • Energy from nuclear decay in the Earth’s interior is about 0.04 W/m2 from memory, adding about 0.4 K to ocean temperature. Heat retained in a warming atmosphere has implications for ‘heat in the pipeline’.

      This seems close enough but energy flow is of course dynamic.

      • Why is it quite hot 2 km under the earth but extremely cold 5 km under the sea, where it is less than zero?
        Water 5 km under the earth might be at the temperature of steam, due to heat from pressure and to internal trapped, earth-core-origin, nuclear-produced heat.
        There are multiple water-containing reservoirs – the atmosphere, ice and snow, oceans, lakes and rivers, and all the subterranean water – capable of acting as global energy stores.
        Simplistic 90/4/4% estimates are not necessarily right.
        Arguing for an imbalance from data sets is good until you face the little fact that this study was needed to show that greater OHC existed. The current level is not enough for theory, and certainly not for some satellite temperature observations.

  38. Salvatore del Prete

    OHC is not important when it comes to the climate. It is the overall oceanic sea surface temperatures that matter, and they should be declining in response to very low solar activity. The lower overall sea surface temperatures will translate to lower OHC. It is playing out and will continue, as global warming looks like it peaked around 2 years ago.

  39. Salvatore del Prete

    The climate is now at a crossroads.

    For my money I think it is the geo/solar magnetic field strengths and if they weaken enough and stay weak I think the result will be a major climatic impact to colder conditions.

    Signs I am watching for are:

    OVERALL SEA SURFACE TEMPERATURES

    500 MB ATMOSPHERIC CIRCULATION PATTERNS /HEIGHTS

    OVERALL GEOLOGICAL ACTIVITY

    OVERALL SNOW/CLOUD COVERAGE

    Time will tell, but the potential is higher now than at any other time since the Dalton Solar Minimum ended, which was in 1850.

  40. David L. Hagen (HagenDL)

    Climate skeptic uncovers scientific error, upends major ocean warming study

    …“The findings of the … paper were peer reviewed and published in the world’s premier scientific journal and were given wide coverage in the English-speaking media,” Lewis wrote. “Despite this, a quick review of the first page of the paper was sufficient to raise doubts as to the accuracy of its results.”

    Coauthor Ralph Keeling, climate scientist at the Scripps Institution of Oceanography, took full blame and thanked Lewis for alerting him to the mistake.

    “When we were confronted with his insight it became immediately clear there was an issue there,” he said. “We’re grateful to have it be pointed out quickly so that we could correct it quickly.”

    Keeling said they have since redone the calculations, finding the ocean is still likely warmer than the estimate used by the IPCC. However, that increase in heat has a larger range of probability than initially thought — between 10 percent and 70 percent, as other studies have already found.

    “Our error margins are too big now to really weigh in on the precise amount of warming that’s going on in the ocean,” Keeling said. “We really muffed the error margins.”

    A correction has been submitted to the journal Nature….

  41. Pingback: Resulta que los océanos NO se están calentando más rápido; era un error, y sólo los “negacionistas” lo han visto | PlazaMoyua.com

  42. From the current replies from the authors it appears that they are acknowledging their error in estimating the trend uncertainty, per Nic Lewis’s analysis presented here, but have not to this point responded to the critical analysis that Nic has also detailed here concerning the estimation of the trend itself.

    I do not follow the tweets, but has John Kennedy made further tweets following his initial guess as to how the authors were estimating the trend?

    • Yes. They are not addressing the more important question that goes to the heart of the paper, (and news release headlines). It’s harder to publicly admit that they really muffed both, and the trend is not worse than previous analysis results.

  43. Pingback: Widely Reported Alarming Ocean Warming Study Is Wrong – iftttwall

  44. It gets interesting, without a doubt. Of course the trend in the paper is faulty, indeed. BUT besides this, the conclusions are also wrong if they admit a bigger uncertainty. They wrote in the paper: ”An upward revision of the ocean heat gain by +0.5 × 10^22 J yr−1 (to 1.30 × 10^22 J yr−1 from 0.80 × 10^22 J yr−1) would push up the lower bound of the equilibrium climate sensitivity from 1.5 K back to 2.0 K (stronger warming expected for given emissions), thereby reducing maximum allowable cumulative CO2 emissions by 25% to stay within the 2 °C global warming target”
    IMO this statement (also questioned by other scientists not aware of the methodological mistakes in the paper) should be revised even if only the error margins are fundamentally higher than reported. That would also kill the reason for getting published in “Nature”. :-)

    • As estimated: http://www.realclimate.org/index.php/archives/2018/11/resplandy-et-al-correction-and-response/ . They admit the uncertainty issue, and also the OLS use, BUT they lower the land O2:C ratio to increase the APOClimate trend again. It’s childish IMO. I bet it’s not the last word! :)

      • stevefitzpatrick

        They should clearly revise the claim about increasing the lower bound for empirically estimated equilibrium sensitivity and the 25% reduction in maximum cumulative emissions to stay under 2C. We’ll see what they do.

      • stevefitzpatrick

        Frank,
        It may well be that Keeling thought the lower ratio (1.05 vs 1.1) was better all along, but when the calculated trend from Resplandy came out really high, using that lower ratio would have put the calculated ocean heat so high (almost TWICE other recent estimates!) that it would not have been even plausible.

      • Steve: Yes, IMO they should indeed withdraw the paper to rewrite it. Their clever method of using the proxy of atmospheric changes due to outgassing from the oceans is indeed worth using, with the real uncertainties… The core finding of the paper is genuinely valuable; unfortunately it is infected at the moment by unjustified claims. It would be a pity if the clever method were buried along with the mistakes and the “Nature” affinity.

  45. How will Nature handle the correction? Will they simply publish the authors’ correction or will they make an editorial comment? If the authors acknowledged a method error in estimating the trend, beyond the uncertainty critique, would that put the original paper in withdrawal territory?

    • I’m curious if the Resplandy paper would have been published in Nature in its corrected form (?) Was it the ‘splash’ of the findings or did the novel methods of estimating ocean warming warrant its acceptance in Nature?

  46. Pingback: Widely Reported Alarming Ocean Warming Study Is Wrong – ALibertarian.org

  47. Pingback: Widely Reported Alarming Ocean Warming Study Is Wrong | Libertarian Party of Alabama Unofficial

  48. stevefitzpatrick

    Keeling has a guest post at RealClimate. He agrees both the trend and uncertainty calculations were wrong. Absent another change (a reduction in the “oxidative ratio of land carbon” from the original 1.1 to 1.05), the corrected trend result (0.9) would have matched Nic’s calculated trend almost exactly.

    Keeling is clearly a stand-up guy, and I applaud him for addressing the issue directly.

    Nic Lewis: Maybe you will be invited to peer review more papers in the future. Maybe not.

  49. Well, that was pretty cool. I haven’t been visiting my favorite climatologist site at judithcurry.com as things seem to have cooled and slowed down here since Trump’s election.

    But, I saw an article linked to by Drudge, and it was reviewing Nic Lewis findings! What a treat!

    Thank you Nic, and Congratulations on the world-wide fame you just achieved!!!!!

  50. I agree with SteveF that this episode has ended very well, much better than I expected. Maybe all the noise about the replication crisis is being listened to by some scientists. However, the authors should also retract their press release and at least try to contain the media hype that accompanied it. Media these days almost never retract or correct their mistakes, though.

    In the future, it would be helpful for people to get outside statisticians involved at an early stage in their data processing.

    • stevefitzpatrick

      dpy6629,
      I agree it ended well, but probably because Ralph Keeling is a stand-up guy, no other reason. I very much doubt Princeton is going to retract their press releases. Where I have seen any MSM commentary at all, it is that the thrust of the paper remains “correct” or “important”. In reality, nothing could be further from the truth. The paper was very wrong in a purely technical sense, for clear and (at least to Nic Lewis) obvious reasons. Many will try to salvage the paper and its results, of course, but really, it was an interesting approach done badly. As noted upthread by frankclimate, I suspect the paper would never have been published in Nature had the correct calculations been done from the first, confirming that earlier estimates of ocean warming are probably about right, or at the very least well within the rather broad uncertainty of this evaluation.

      In their revision at Nature, I do hope the authors walk back all the discussion of the need to reduce ‘cumulative emissions’ to maintain warming under 2C. That was arguably political grandstanding before; now it is just factually wrong and should not be in the paper.

  51. Pingback: Bits and Pieces – 20181111, Sunday | thePOOG

  52. I think an important point made by Nic Lewis’ analysis/critique of the paper in question and the responses by the authors of that paper to which Nic was critical was that Nic was able to get that response by posting his analysis via a blog. Authors of that paper have thanked Nic for his efforts and are making corrections to an article that was published by a most prestigious outlet for scientific peer reviewed papers.
    That point might stick in the craw of those whose answer to critiques such as these has been: we will take you seriously if you can publish your criticism – an answer given even at a blog like Real Climate. Nic Lewis can obviously get his work published, but his immediate reaction was to post his criticism on a blog.
    I am most interested in how Nature will handle these corrections.

    • This blog post was picked up by media, which usually entails an independent expert agreeing with it. Others of Nic’s blogposts have not been picked up by media and basically sank into obscurity as a result of not having independent support. Independent support is crucial.

      • I believe that it was the authors’ response to Nic’s critique and not the media’s that was critical to an apparent resolution of the issue here. This instance is not the first time that an author to a paper that Nic has critiqued at these blogs has responded and acknowledged an error.

        I think a further point is that lack of “independent” critical analysis cannot and should not be confused with independent support.

        I also doubt that, from a scientific point of view, the media’s response matters much at all.

      • I don’t think the authors or journal responded to Nic until after it got out in the media. The article was Nov 6th. Many of the early pingbacks are no longer active (fake sites???), but appear to be from Nov 7th, while the first sign of a response from Resplandy was Nov 8th.
        Yes, Nic has received a response from authors before. Patrick Brown had a very detailed one that was also on blogs, but that did not result in any alteration to the initial paper.

      • stevefitzpatrick

        Just as disconnected from reality as always. Nice consistency.

      • I was wondering why the majority of the 61 pingbacks to the original blog post don’t lead anywhere now. I think they did at the time because I tried some. Is this a news amplification mechanism with bots? Let’s see if the pingbacks to the current post also disappear after a few days.

      • stevefitzpatrick

        Ken,
        “I also doubt that, from a scientific point of view, the media’s response matters much at all.”

        As true a statement as you can make.

      • Might be true. Most of these pingbacks, now up to 74 and counting, aren’t real media. There’s another bump today. The time I noticed this before was when John Bates had a post here complaining about Karl’s use of the ERSST5 pausebuster data. That had 157 pingbacks.

  53. Blogging can be a source of information and learning, and needs posters with Nic Lewis’s know-how and technical skills. I suppose I could be convinced, from a recent post, that it might need a scorekeeper also.

  54. Pingback: Scientists: Widely Reported Study On Ocean Warming Is WRONG

  55. Pingback: Scientists: Widely Reported Study On Ocean Warming Is WRONG - Survive!

  56. Pingback: Scientists: Widely Reported Study On Ocean Warming Is WRONG - Planet Free Will

  57. I would submit that, as Nic noted, “The method used by Resplandy et al. was novel, and certainly worthy of publication”: Resplandy could be publication-worthy, not as showing considerably more heat going into the ocean than direct measurements indicate, but rather as an independent and novel method showing agreement with those measurements. I would like to see a side-by-side comparison of the corrected Resplandy OHC change, with uncertainties, against that obtained from direct measurement.

    If the original paper were correctly handled it probably could have been published – but probably with little or no media coverage.

  58. Pingback: Scientists: Widely Reported Study On Ocean Warming Is WRONG – How to survive when SHTF

  59. Pingback: Scientists: Widely Reported Study On Ocean Warming Is WRONG | YoNews #AlternativeNews #news

  60. I see at Real Climate Keeling gives details on the changes they are making. The systematic errors in O2 measurement are being treated now as systematic instead of random as had been done incorrectly in the paper (this agrees with one of Nic’s points). The oxidative ratio of 1.1 in the paper is now treated as 1.05+/-0.05 which increases the DAPOclimate trend by 0.15+/-0.15. From unweighted least squares fits to 10^6 realizations of DAPOclimate using the new errors (systematic and random) they calculate the new DAPOclimate trend as 1.05+/-0.62 per meg/yr and DOHC 1.21+/-0.72 x10^22 J/yr. Finally, he concludes that ‘the revised uncertainties preclude drawing any strong conclusions with respect to climate sensitivity or carbon budgets based on the APO method alone.’
    Clearly, they are not accepting all of Nic’s criticisms and suggestions on how the uncertainties should have been calculated. He gives an update of two graphs, and the uncertainty of DOHC is now much greater than estimates by others, covering their entire range.
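The Monte Carlo recipe described in the comment above can be sketched as follows. This is an illustration of the procedure only: the series and error magnitudes are placeholders, not the revised ΔAPOClimate inputs, and `mc_trend` is a hypothetical helper. A systematic error is drawn once per realization and shared across all years, random errors are drawn independently per year, and an unweighted OLS slope is fitted to each realization.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_trend(years, best, sd_random, sd_systematic, n_draws=100_000):
    """Trend and its uncertainty from unweighted OLS fits to many
    perturbed realizations of a time series. Systematic errors are
    fully correlated across years; random errors are independent."""
    t = np.asarray(years, dtype=float)
    t = t - t.mean()                       # center time so slope = y.t / t.t
    # independent per-year noise, shape (n_draws, n_years)
    eps = rng.normal(0.0, sd_random, size=(n_draws, len(t)))
    # one shared draw per realization, scaled by the systematic SD
    sys = rng.normal(0.0, 1.0, size=(n_draws, 1)) * np.asarray(sd_systematic)
    y = np.asarray(best, dtype=float) + eps + sys
    slopes = y @ t / np.dot(t, t)          # OLS slope of every realization
    return slopes.mean(), slopes.std()
```

With centered time the additive systematic offset drops out of the slope; in the real calculation the scale-systematic and trend errors do feed through, which is why the revised uncertainty is so much larger.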

  61. Separate from the mathematical substance we have been treated to is the physical substance of the data and its treatment. A bit like the early attempts to work out the distance to the sun: a couple of observations and an input algorithm.
    That did not work well then either, though it was in the 62% SD ballpark. [the new DAPOclimate trend of 1.05+/-0.62 per meg/yr]
    This study likewise has only a few dubious estimations regarding APO, and a lot of assumptions. Note the lovely round-figure fudges of the experts in the field: no 1.04s or 1.07s. [The oxidative ratio of 1.1 in the paper is now treated as 1.05+/-0.05, which increases the DAPOclimate trend by 0.15+/-0.15.] When rounding is done to the nearest zero or .05 to determine ratios, it stops being science.
    The problem is we only have a few reliable decade-long records of O2 levels, at only a few places; the changes are minute and widespread, and a very small difference makes a very large difference to the resultant outcome.
    All right though if you run the same data through a million computer variations?
    No.

  62. Could the author or someone who understands the following sentence from the original post clarify what it is referring to? I am confused by the wording. Being able to see the math formula(s) would be helpful.

    “In both the OLS and WLS cases, the mean of the trend error estimates from each of the 10,000 regressions was close to the standard deviation of the 10,000 trend estimate, as one would expect.”

    • OLS: ordinary least squares
      WLS: weighted least squares

      • Thanks; do you know what ‘the mean of the trend error estimates from each of the 10000 regressions’ refers to? How is it different from “the standard deviation of the 10000 trend estimate’?

    • wpnov,
      There are four calculations here, two for each of the OLS and WLS cases.

      Let us consider just the two calculations for the OLS case. The first calculation involves calculating the OLS trend for each of the 10000 data realisations. You can then calculate the standard deviation (sd) of these 10000 values, which yields what NL is calling the “standard deviation of the 10,000 trend estimate”. The second calculation involves estimating the sd of the trend for each realisation (a standard analytic calculation) and then computing the mean of the 10,000 sd estimates.
      One expects that for a large number of realisations, these two estimates should yield similar values. If they did not, then it would suggest that the error model was incoherent at some stage. Thus it provides a coherence check on the validity of the error model which underpins the generation of realisations, and on the estimation of sd for each realisation.

      While the expectation of this coherence is self-evident for the OLS case, it is less evident for the WLS estimation, and hence NL’s statement provides a useful confirmatory step.

      Hope that this helps.
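The two calculations described above can be demonstrated with a small simulation for the OLS case with i.i.d. errors (the trend and sigma values here are arbitrary, and `coherence_check` is a hypothetical helper):

```python
import numpy as np

rng = np.random.default_rng(1)

def coherence_check(n=26, sigma=1.0, n_reps=10_000):
    """Compare (1) the SD of the slope estimates across realizations
    with (2) the mean of the per-realization analytic slope standard
    errors. For a coherent error model the two should be close."""
    x = np.arange(n, dtype=float)
    xc = x - x.mean()
    sxx = np.dot(xc, xc)                   # corrected sum of squares of x
    slopes = np.empty(n_reps)
    ses = np.empty(n_reps)
    for i in range(n_reps):
        y = 0.27 * x + rng.normal(0.0, sigma, n)   # true trend plus noise
        b = np.dot(xc, y) / sxx                    # OLS slope
        resid = y - y.mean() - b * xc              # residuals about the fit
        s2 = np.dot(resid, resid) / (n - 2)        # residual variance estimate
        slopes[i] = b
        ses[i] = np.sqrt(s2 / sxx)                 # analytic slope SE
    return slopes.std(), ses.mean()
```

Running this, the two numbers agree to within a couple of percent, which is the coherence NL reports for both the OLS and WLS cases.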

      • Thank you for your explanations. It’s clearer to me now what the standard deviation of the 10,000 trend estimates refers to. As for the mean of the estimated sd of each trend, does that actually refer to the mean of the error function values, one for each trend? Where the error function is the square-root of the mean of (y – ybar)^2, where ybar = param_0 + x*param_1? I’ve read up on a couple of sources online but am not sure whether the same technique is being used here.

      • wpnov,
        No, it is not what you are calling the “error function”. You are a little confused by some terminology, I note. “ybar” is normally reserved to mean the sample mean of the (given) y data values. The variable which you have defined as “ybar” is more conventionally called “yhat” – the estimated or predicted value of y from the straight line fit. What you have defined (the square root of the sum of the square of the residuals divided by n) is generally called the MLE estimator for the error variance.

        But in any event, this is not the statistic that NL is using in the comparison.

        When you fit a regression line of the form
        Y = alpha + beta*X
        you can calculate the sd or standard error of the estimated beta value using a well established formula – different for OLS and WLS. You can find the formulae for OLS and for WLS fairly readily on the internet. Each dataset generated in the 10,000 realisations will give you a different estimate of the sd of the estimated beta’s. We expect, if things are done coherently, that the average value of these 10,000 estimates of sd (of beta) should be close to the sd calculated from the 10,000 estimates of beta.
        Hope this helps.

      • Corrigendum
        “What you have defined (the square root of the sum of the square of the residuals divided by n) is generally called the MLE estimator for the standard deviation of the residual error.”

        If you hadn’t taken the square root, it would have been the MLE estimator of the error variance. Sorry for any confusion.

      • I actually wasn’t able to find the formula that you mentioned for the sd of the beta value (for each single realisation), aside from the posts at this link . It seems that the ‘estimator for the variance-covariance matrix for the estimator of the regression coefficients’, aka Var(b_hat), multiplied by the variance matrix of the independent variable X, would equal s^2, the ‘estimator for the variance of the noise’. I hope this is on the right track. Again, much thanks for following up with me!

      • wpnov,
        Yes. You are getting there. Do keep going.
        Firstly, you can calculate the sum of the squares of the (y-yhat) residuals and divide by n-2. This is the sample estimate of the residual error variance, and it is conventionally denoted as s^2.
        Secondly, you can calculate the “corrected sum of the squares of the x’s”. This can be expressed in the form of the scalar product of a row and column matrix – what you are calling the ” the variance matrix of the independent variable X” – but it is the same thing as the sum of the squares of the (x-xbar) values.
        If you divide the first by the second, it yields the sample variance of the slope estimate of the regression line.
        Taking the square root yields the “sample standard error of the slope estimator”.
        You will find a very similar formula for WLS except that it includes whatever weights have been applied.

        Paul
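The recipe spelled out above, written as a sketch (`slope_se` is a hypothetical helper; this is the OLS case, with the WLS version differing only by the weights):

```python
import numpy as np

def slope_se(x, y):
    """Standard error of the OLS slope, step by step:
    s^2 = SS_resid / (n - 2), divided by the corrected sum of
    squares of the x's, then square-rooted."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    xc = x - x.mean()
    b = np.dot(xc, y) / np.dot(xc, xc)     # OLS slope estimate
    a = y.mean() - b * x.mean()            # intercept estimate
    resid = y - (a + b * x)
    s2 = np.dot(resid, resid) / (n - 2)    # sample residual variance
    return np.sqrt(s2 / np.dot(xc, xc))    # sample SE of the slope
```

As a sanity check, this should agree with, e.g., scipy.stats.linregress(x, y).stderr.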

      • Hi Paul, I think what you’re saying is that the sample variance of the slope estimator is a ratio between two other variances (or sums of squares) divided further by the degrees of freedom. The link that I mentioned last time did not actually show up in the reply; it is at the following address, groups(dot)google(dot)com/forum/#!topic/microsoft(dot)public(dot)excel(dot)worksheet(dot)functions/NTuoLQcV6Xo. Anyhow, why the two ways of estimating the variance of the slope estimator should actually agree probably requires additional derivations. The only other question I have is about the F-statistic (F = SSreg/SSresid*df) mentioned by ‘Jerry W. Lewis’ in the following related link, groups(dot)google(dot)com/forum/#!topic/microsoft(dot)public(dot)excel(dot)worksheet(dot)functions/vzmcd2WUeO0/UMah53A0oNYJ. Does the multiplication by df reflect the fact that SSreg = CORREL^2 * DEVSQ(y) and SSresid = DEVSQ(y) – SSreg have different degrees of freedom, different variances, or something else? – Jonathan

  63. “Uncertainties at the 1σ level on ΔAPOAtmD are assumed to be ±50%”.

    That tells us all we need to know about the rigour of this paper. A) the uncertainty in their work is huge, and B) they just pulled a figure out of the air because they are incapable of calculating a proper error estimate.

  64. Pingback: Resplandy et al. Part 3: Findings regarding statistical issues and the authors’ planned correction | Climate Etc.

  65. The hyperbole around this is a signal of social pathology. I have noted a 600% increase in warming rates from a pissant AGW blog without even trying. And frankly I am not sure what meaning it has for skeptic curmudgeons.

    There is little doubt that oceans have warmed. The earlier data has uncertainty of up to 50% – the Argo data to 2009 may have a systematic error. The lengths of the records are short – and there is a lack of continuity over a critical period at the turn of the century. Nonetheless – as scientists we make do with what’s available and accept limitations. Otherwise it is not science. I am only a practical, working environmental scientist – but we do it all the time. Quite often there is little to no data and defaults are used – it can be very fuzzy.

    This new, very interesting and potentially very useful methodology always had great uncertainty – the nature of the data. I find Nic’s increasing error bars over time a little odd – but they have gone back and reassessed uncertainty based on the nature of the diverse data sources. It is the only way it can be done in credible physical sciences.

  66. Pingback: Resplandy et al 2018 | climat-evolution

  67. Pingback: Peer Review: Selfie-Sticks & Snobbery | Big Picture News, Informed Analysis

  68. Pingback: Resplandy et al. Part 4: Further developments | Climate Etc.

  69. Pingback: Nic Lewis: More Problems With Resplandy et al. - The Global Warming Policy Forum (GWPF)The Global Warming Policy Forum (GWPF)
