by Nic Lewis

In a recent article I set out why I thought that the trend in ΔAPO_{Climate} was overstated, and its uncertainty greatly understated, in the Resplandy et al. ocean heat uptake study. In this article I expand on the brief explanation of the points made about “trend errors” and “scale systematic errors” given in my original article, as these are key issues involved in estimating the trend in ΔAPO_{Climate} and its uncertainty.

I will illustrate the trend error point using a 26-year time series that is a slightly modified version (‘pseudo-ΔAPO_{AtmD}’) of Resplandy et al.’s ΔAPO_{AtmD} time series and associated uncertainties. The pseudo-ΔAPO_{AtmD} best-estimate time series increases linearly by 0.27 per meg each year, starting at zero in 1991. In each year, the 1-*σ* uncertainty (error standard deviation) in its value is 50% of its best-estimate value (with *σ* set to 10^{-6} in 1991 to avoid divide-by-zero errors). I will assume errors have a Normal distribution with zero mean. Figure 1 illustrates this situation.

**Figure 1**. Estimated pseudo-ΔAPO_{AtmD} best values and their uncertainties. The black line goes through all years’ best estimates, marked with small black crosses. The pink area represents the ± 1-*σ* uncertainty limits in continuous time, and the red bars show that uncertainty for each year’s data point.

*The case when errors are uncorrelated*

If errors are uncorrelated then the usual conditions for ordinary least squares (OLS) to give valid estimates are satisfied. Obviously, regressing the best fit values will give a perfect fit with, correctly, a trend of 0.27 per meg yr^{−1}, since they increase linearly with time. The trend and the uncertainty in it can be estimated as follows:

- a) for (say) 10,000 sample realizations of the pseudo-ΔAPO_{AtmD} time series, add random, independent errors drawn from each year’s error distribution to the pseudo-ΔAPO_{AtmD} best-estimate time series values;
- b) find the trend for each sample realization using OLS regression;
- c) take the mean of the 10,000 trends as the trend (best-)estimate and their standard deviation as the 1-*σ* uncertainty of that estimate.
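For readers who want to reproduce this, steps a) to c) can be sketched in a few lines of Python. This is my own illustrative code, not Resplandy et al.’s; the `wls_slope` helper is a name I have made up, and passing uniform weights reduces it to OLS:

```python
import numpy as np

rng = np.random.default_rng(0)

years = np.arange(26)            # 1991..2016, expressed as years since 1991
best = 0.27 * years              # pseudo-dAPO_AtmD best estimates (per meg)
sigma = 0.5 * best               # 1-sigma uncertainty is 50% of the best estimate
sigma[0] = 1e-6                  # avoid a zero sigma in the 1991 baseline year

def wls_slope(y, w):
    """Weighted least-squares slope of y on years; uniform w gives OLS."""
    tbar = np.sum(w * years) / np.sum(w)
    ybar = np.sum(w * y) / np.sum(w)
    return np.sum(w * (years - tbar) * (y - ybar)) / np.sum(w * (years - tbar) ** 2)

n = 10_000
# step a): add independent errors drawn from each year's error distribution
samples = best + rng.standard_normal((n, 26)) * sigma

# steps b) and c): regress each realization, then take mean and spread
ols = np.array([wls_slope(y, np.ones(26)) for y in samples])
wls = np.array([wls_slope(y, 1 / sigma**2) for y in samples])
# ols.mean() and wls.mean() are both ~0.27; the spread of the WLS
# estimates is markedly smaller than the ~0.056 spread of the OLS ones
```

With independent errors both estimators are unbiased, but the inverse-variance weighting makes the WLS trend estimates cluster more tightly around the true 0.27 per meg yr^{−1}.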

On doing so, I obtained a trend estimate of 0.271 with uncertainty of ± 0.056. (Note: with only 10,000 samples the figure in the third decimal place is unreliable.)

I then repeated the calculation, with the same set of cases (sample realizations) but using weighted least squares (WLS) regression. The weight for each year was set at 1/*σ*^{2}, where *σ* is the error standard deviation for that year, as is usual.

The resulting trend estimate was 0.270, with uncertainty of ± 0.033.

In both the OLS and WLS cases, the mean of the trend *error* estimates from each of the 10,000 regressions was close to the standard deviation of the 10,000 trend estimates, as one would expect.

So, if errors are uncorrelated between years but their estimated uncertainty varies between years, both OLS and WLS on average provide unbiased trend estimates, but with WLS the variation in the trend estimate from its true value between possible sample realisations is smaller. WLS in effect uses the available information more efficiently and produces more precise estimates.

*The case when errors are perfectly correlated (trend errors)*

Now consider the case where the uncertainty arises from the trend in the data being uncertain. For any given realisation of the pseudo-ΔAPO_{AtmD} time series, there is then only a single uncertain error. That error scales up with time, *pro rata* with the magnitude of the time difference from the baseline year. The error distributions for each year are the same as previously, but they are now perfectly correlated across all years rather than being independent. For any given realisation, all data points will therefore lie on a straight line – a version of the black line rotated about its zero starting point in 1991, and most likely lying within the area shaded pink.

Repeating steps a) to c), but this time with each year’s error perfectly correlated with other years’ errors, I obtained the following results.

With OLS regression, a trend estimate of 0.268 with uncertainty of ± 0.135 per meg yr^{−1}.

With WLS regression, a trend estimate of 0.268 with uncertainty of ± 0.135 per meg yr^{−1}.

In both cases the trend error estimate for each of the 10,000 realisations was zero: although the fit differed between realisations, it was a perfect fit in every case. And in both cases the trend uncertainty was the same as that in the original data – 50% of 0.27 per meg yr^{−1}.
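The perfectly correlated case needs only one random number per realisation. A minimal sketch (again my own illustrative code, not the paper’s):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(26)                    # years since 1991
best = 0.27 * years                      # pseudo-dAPO_AtmD best estimates (per meg)

n = 10_000
# one error per realisation: the whole series is multiplied by a single
# N(1, 0.5) factor, so every year's error is 50% of its best estimate
# and the errors are perfectly correlated across years
factors = 1 + 0.5 * rng.standard_normal(n)
samples = np.outer(factors, best)

# every realisation lies exactly on a straight line through zero in 1991,
# so the regression fit is perfect and the slope is just 0.27 * factor;
# WLS would return exactly the same slopes as OLS here
slopes = np.array([np.polyfit(years, s, 1)[0] for s in samples])
# slopes.mean() ~0.27, slopes.std() ~0.135, i.e. 50% of the trend:
# regression has not reduced the trend uncertainty at all
```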

The key point is that **if the data has a trend error, regression cannot reduce it** at all. Weighting data values to reflect their absolute precision does not help, but it does no harm either in this case.

*Which case applies to the actual ΔAPO_{AtmD} values?*

Resplandy et al. state the results of their model simulations of the fertilisation effects of combined N, Fe and P aerosol deposition as: “The overall impact on ΔAPO_{AtmD} is +0.27 per meg yr^{−1} over 27 years of simulation (1980–2007), which we extrapolate to our 1991–2016 period”, and that “Uncertainties at the 1σ level on ΔAPO_{AtmD} are assumed to be ±50%”. That corresponds exactly to the way my pseudo-ΔAPO_{AtmD} best-estimate time series and uncertainties were calculated. It is quite clear that the ΔAPO_{AtmD} best-estimate time series should represent an exact linear trend of +0.27 per meg yr^{−1}, with its errors being perfectly correlated, non-independent trend errors.

*Scale systematic error case*

Consider now the case where the best-estimate data, while broadly trending, has not been derived from a linear trend. That applies to the ‘scale systematic’ error component of ΔAPO_{OBS}.

ΔAPO_{OBS} is calculated from measured changes in the atmospheric (δO_{2}/N_{2}) ratio and CO_{2} concentration (*X*_{CO2}, in p.p.m.) as:

ΔAPO_{OBS} = (δO_{2}/N_{2}) + 1.1 × (*X*_{CO2} − 350) / *X*_{O2}

where *X*_{O2} (= 0.2094) is the reference atmospheric O_{2} mole fraction, used to convert the *X*_{CO2} term from p.p.m. to per meg units.
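As a sketch, the conversion can be written as a one-line function (the name `apo_obs` and its signature are my own, for illustration):

```python
def apo_obs(d_o2n2, x_co2, x_o2=0.2094):
    """ΔAPO_OBS in per meg, from δ(O2/N2) in per meg and X_CO2 in p.p.m.

    Dividing by the reference O2 mole fraction x_o2 converts the CO2
    term from p.p.m. to per meg; 1.1 is the O2:CO2 exchange ratio used
    in the APO definition.
    """
    return d_o2n2 + 1.1 * (x_co2 - 350.0) / x_o2
```

For example, at the 350 p.p.m. CO_{2} reference level the CO_{2} term vanishes and ΔAPO_{OBS} equals the δ(O_{2}/N_{2}) value itself.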

Resplandy et al. Extended Data Table 3 states that, in addition to Corrosion, Leakage and Desorption errors of respectively ± 0.3, ± 0.2, and ± 0.1 per meg yr^{−1}, and random errors in each year of ± 2 per meg (± 4 before July 1992), there is a scale systematic error of 2% on (δO_{2}/N_{2}).

I downloaded the (δO_{2}/N_{2}) data for the three monitoring stations used, combined it using the weights given in the paper that Resplandy et al. cited, and deducted the 1991 value so as to match Resplandy et al.’s baseline.

The OLS linear trend of the resulting data was −19.87 per meg yr^{−1}. Using WLS regression, it was −16.05 per meg yr^{−1}.

I then drew samples from a normal distribution with unit mean and a standard deviation of 0.02 and, for each sample, multiplied all years’ (δO_{2}/N_{2}) data values by the sample value.

When using OLS regression, the trend estimate was −19.87 with uncertainty of ± 0.396 per meg yr^{−1}.

When using WLS regression, the trend estimate was −16.05 with uncertainty of ± 0.320 per meg yr^{−1}.

So in both cases sampling gives the same trend estimate as regression on the original data points, but WLS estimates a substantially shallower (less negative) trend than OLS. Note that, as with a trend error, the regression trend estimate uncertainty is the same as for the original data, here 2% (of the trend estimate).
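The scale systematic sampling is easy to illustrate with a synthetic stand-in for the combined (δO_{2}/N_{2}) series – I use a simple quadratic whose trend and curvature roughly mimic the real data, not the station record itself:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(26)                  # years since 1991
# illustrative stand-in for the combined delta(O2/N2) series, per meg:
# zero in 1991, trending down with slight curvature like the real data
data = -16.0 * years - 0.15 * years**2

n = 10_000
factors = 1 + 0.02 * rng.standard_normal(n)   # 2% scale systematic error
slopes = np.array([np.polyfit(years, f * data, 1)[0] for f in factors])

# multiplying all data values by f multiplies the fitted slope by f, so
# the trend uncertainty is exactly 2% of the trend itself - the sampling
# and regression cannot shrink a scale systematic error
rel_spread = slopes.std() / abs(slopes.mean())   # ~0.02
```

The same proportionality would hold for any regression method that is linear in the data, which is why both the OLS and WLS trend uncertainties above come out at 2% of their respective trend estimates.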

Figure 2, which shows the data values (black circles, joined by thin black line), and linear regression fits using OLS (cyan line) and WLS (red line), helps provide insight into the difference between the OLS and WLS regression fits.

**Figure 2.** Average (δO_{2}/N_{2}) annual mean data from the La Jolla, Alert, and Cape Grim monitoring stations, weighted 0.25, 0.25 and 0.5 respectively (black circles, joined by thin black line), the OLS linear fit to them (cyan line), and the WLS linear fit to them (red line).

The reason why the WLS fit has a shallower line is that earlier years are weighted much more highly than the later years when determining the WLS fit, as the scale systematic error is applied to larger (δO_{2}/N_{2}) values in the later years. Indeed, the near-infinite weighting given to the 1991 data point forces the WLS fit through zero in 1991.

It can be seen that there is little evidence of random errors being significant in any year, nor greater in the later years; the black line is fairly smooth throughout the period. But the trend appears to become more negative over time, so a linear fit is strongly affected by the weightings given in different years. Confirming that data errors are trivial, a quadratic fit is extremely close (adjusted R^{2} 0.9998, versus 0.9950 for a linear fit). Moreover, a quadratic fit shows no evidence of fit errors being greater in later years – they are slightly higher in the first half than in the second half of the period.

There is no justification for using WLS in a case like this. Because the errors are perfectly correlated between years, the conditions for WLS to be valid are not satisfied. That the WLS result cannot possibly be valid can be seen as follows. If the method is valid, it should give the same trend whatever year is used as a baseline for the data. So, rebase all the data to a zero baseline in 2016, which implies small errors in later years and large errors in early years. That will result in the WLS weights being high in later years and low in early years. The WLS fit will then pass through the 2016 data point, and its slope will be close to that exhibited by later years’ data and, as a result, much steeper – indeed, steeper than the slope of the OLS fit (which is unchanged by the rebasing).

A known method for validly regressing when the data errors are perfectly correlated or nearly so is to transform the data by taking first differences, and then regress. The regression intercept term then corresponds to a linear trend in the original data and the slope coefficient to a quadratic trend term. When one does that using OLS, the mean trend over the full period is very close to the trend from OLS linear regression on the original data. And WLS regression on the transformed data gives a mean trend whose magnitude is only 2.5% below that for OLS regression on the transformed data, as compared with 19.2% below the OLS trend when the original data is regressed.
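The mapping from first-difference regression coefficients back to trends can be checked on an illustrative quadratic series (stand-in data, not the station record):

```python
import numpy as np

years = np.arange(26)
# illustrative series with linear coefficient a and quadratic coefficient b
a, b = -16.0, -0.15
data = a * years + b * years**2

d = np.diff(data)               # first differences: d_i = a + b*(2*i - 1)
t = years[1:]
slope, intercept = np.polyfit(t, d, 1)
# the intercept (= a - b) corresponds to the linear trend term in the
# original data, and the slope (= 2*b) to twice the quadratic coefficient

mean_trend = d.mean()           # mean trend over the whole period
# mean_trend equals (data[-1] - data[0]) / 25, which for this series
# coincides with the OLS trend fitted to the original (undifferenced) data
```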

**What this implies for trend estimation in Resplandy et al.**

I have demonstrated two important points. One is that regression cannot reduce trend or scale systematic errors in the data, because they are perfectly correlated across all data points. The second is that WLS regression is liable to produce seriously inaccurate trend estimates where a scale systematic error is involved.

With linear regression models, the estimated trend of the sum of several components equals the sum of the trends of the individual components. Moreover, provided there is no correlation between errors in the different components, the trend estimation error for their sum equals the sum in quadrature of the trend estimation errors for the individual components. That enables one to place a theoretical lower limit on the ΔAPO_{Climate} trend uncertainty, by adding in quadrature identified trend and scale systematic errors.

The largest contribution to error in ΔAPO_{Climate} comes from its ΔAPO_{OBS} component. That in turn contains a scale systematic error of (at ± 1*σ*) 2% of a trend of circa −19.8 per meg yr^{−1}, or ±0.396 per meg yr^{−1}, and three trend errors, of 0.3, 0.2 and 0.1 per meg yr^{−1}. Since these four error sources are all independent, they can be added in quadrature. So can the scale error of 0.135 per meg yr^{−1} in ΔAPO_{AtmD}. I will leave errors in the remaining components, ΔAPO_{FF} and ΔAPO_{Cant}, out of account; although it is highly likely that they also contain trend or scale systematic errors, their magnitude is small and when added in quadrature they would not add significantly to the total.
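The quadrature arithmetic is simple enough to verify directly, using the figures quoted above:

```python
import math

# identified 1-sigma trend / scale systematic errors, per meg per year
errors = {
    "scale systematic: 2% of ~-19.8 dO2/N2 trend": 0.396,
    "corrosion trend error": 0.3,
    "leakage trend error": 0.2,
    "desorption trend error": 0.1,
    "50% scale error on 0.27 dAPO_AtmD trend": 0.135,
}

# independent error sources add in quadrature (root-sum-square)
total = math.sqrt(sum(e**2 for e in errors.values()))
print(round(total, 2))   # prints 0.56 (per meg per year)
```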

Adding all the quantified error sources in quadrature gives an irreducible minimum ΔAPO_{Climate} trend uncertainty of ± 0.56 per meg yr^{−1}. Even if the much lower, clearly incorrect, WLS estimate of the (δO_{2}/N_{2}) trend of −16.0 were substituted for the OLS-based trend, the irreducible trend uncertainty would still be 0.51 per meg yr^{−1}. Resplandy et al.’s ± 0.15 ΔAPO_{Climate} trend uncertainty estimate is completely infeasible!

Nicholas Lewis 7 November 2018

#### UPDATE 8 November 2018

As mentioned in a comment by Judith Curry, Laure Resplandy now has a statement on her website http://resplandy.princeton.edu/ under her entry OCEAN WARMING FROM ATMOSPHERE GASES, linking to her new paper, which reads:

We are aware the way we handled the errors underestimated the uncertainties. We are working on an update that addresses this issue. We thank Nicholas Lewis for bringing this to our attention.

There is an inconvenient question for Nic and other skeptics – what is the rate of warming? This compares the cumulative TOA power flux imbalance to Argo. Very different variables that must co-vary. Argo heat is rising – after the pre-2007 glitch – and at a rate of about 0.8 W/m2.

This is the monthly power flux in less power flux out – starting at about the middle of an annual cycle, when the energy imbalance is zero. The average power flux imbalance is 0.82 W/m2 – and trending to zero, apparently. No mystery – the Earth system tends to maximum entropy.

The study value is 0.83 W/m2.

Is Argo data in blue? Are the Argo data consolidated by reanalysis or by using an interpolated grid? What causes the change in slope around month 70?

I also wonder: let’s say two data sets are readily available and you want to come up with a clever method to use different data to match the other sets’ trend. Is it possible the new method can evolve to use an incorrect approach to match the “known” data? Is it possible the CERES cumulative power flux biases the individuals creating an Argo gridding or reanalysis so the two match more or less, and then we add a third (flawed) analysis with a different type of data to match the CERES trend? I’m definitely not as high powered as you all, but I have spent time trying to catch scientists and engineers playing games with data and models to get promoted or sell goofy ideas. I would look out for this type of human error.

Do you know what I think? It’s better to develop a reanalysis using Argo and satellite surface temperatures, one that includes a detailed description of the geothermal flux. There is 0.1 W/m2 coming from below in an uneven pattern, and this influences how one extrapolates the reanalysis below Argo reach.

There is a legend and Argo is in blue. The data comes from the Scripps Argo Marine Atlas.

http://www.argo.ucsd.edu/Marine_Atlas.html

CERES data comes from the CERES data product page.

https://ceres.larc.nasa.gov/order_data.php

Conservation of energy says that they must co-vary. The global first order differential energy equation can be written as the change in heat in oceans is approximately equal to energy in less energy out at the top of the atmosphere (TOA).

Δ(ocean heat) ≈ Ein – Eout

The CERES record shows more energy entering the system than leaving over the period and the Argo record shows oceans warming. It can be seen in anomalies that are relatively precise – and that do not need anchoring with Argo.

(a) Shortwave

(b) infrared

The cloud signature is an anti-correlated relationship of IR and SW. Less cloud reflects less light but allows more IR to escape. With low marine strato-cumulus cloud, SW dominates. There are land use, water vapor, aerosol, rain… and many other changes. But cloud impedes IR emissions and reflects SW. IR anomalies are a mirror of SW anomalies, showing the role of cloud in 21st century changes in the energy budget.

The problem of absolute values arises when comparing incoming and outgoing energy – power flux over time – to obtain a numerical imbalance at TOA. Hence the ‘anchoring’ to Argo. The records are independent even if Argo is used to close the absolute global energy budget.

Yes Argo is in blue.

“The CERES record shows more energy entering the system than leaving over the period and the Argo record shows oceans warming. It can be seen in anomalies that are relatively precise – and that do not need anchoring with Argo.“

Be that as it may, neither of your graphs of SW and IR leave me with the impression of a TOA imbalance.

Yet it is quite obviously there and the problem would appear to be your vision.

Robert: CERES-EBAF (Energy Balanced and Filled) has been adjusted to agree with 10 years of Argo data (0.71 W/m2), plus some other refinements. (In earlier versions of this product, adjustments to yield 0.85 and 0.58 W/m2 were used.) Which leaves your question about the discrepancy in your graph unanswered. The paper below didn’t include a plot like your top plot. However, I don’t understand the scale on the right-hand axis, “Cumulative W/m2”: a cumulated power flux presumably would represent energy per unit area.

Loeb (2017): https://journals.ametsoc.org/doi/10.1175/JCLI-D-17-0208.1

“Despite recent improvements in satellite instrument calibration and the algorithms used to determine CERES TOA radiative fluxes, a sizable imbalance persists in the average global net radiation at the TOA from CERES satellite observations. With no adjustments to CERES SW and LW all-sky TOA fluxes, the net imbalance for July 2005–June 2015 is approximately 4.3 W m−2, much larger than expected. As in previous versions of EBAF, we use the objective constrainment algorithm described in Loeb et al. (2009) to adjust SW and LW TOA fluxes within their ranges of uncertainty to remove the inconsistency between average global net TOA flux and heat storage in the earth–atmosphere system, as determined primarily from ocean heat content anomaly (OHCA) data. In the current version, the global annual mean values are adjusted such that the July 2005–June 2015 mean net TOA flux is 0.71 ± 0.10 W m−2, as provided in Johnson et al. (2016) [uncertainties at the 95% confidence level account for expendable bathythermographs (XBT) correction uncertainties and Argo sampling errors for 0–1800 m]. The uptake of heat by Earth for this period is estimated from the sum of (i) 0.61 ± 0.09 W m−2 from the slope of weighted linear least squares fit to Argo OHCA data to a depth of 1800 m analyzed following Lyman and Johnson (2008), (ii) 0.07 ± 0.04 W m−2 from ocean heat storage at depths below 2000 m using data from 1981–2010 (Purkey and Johnson 2010), and (iii) 0.03 ± 0.01 W m−2 from ice warming and melt and atmospheric and lithospheric warming for 1971–2010 (Rhein et al. 2013).”

The average monthly instantaneous power flux can be converted to Joules by multiplying it by the number of seconds in the month. This shows more energy entering the system than leaving over the period.

This comment is off-topic. A scientific study’s results are meant to be based on properly analysing the data that the study itself uses. What Argo shows is irrelevant to whether the Resplandy et al study’s results are correctly calculated.

Wow. CERES and Argo are discussed in the paper – quite rightly as these are the primary sources for ocean heat. The question on the surface is very simple. Is Nic claiming on the basis of this that oceans are not warming – this century – at a rate of some 0.8 W/m2 consistent with both ocean and satellite data – and now with this other method?

No, of course I’m not claiming that the oceans are not warming.

BTW, the satellite data baseline radiative imbalance is calibrated to match Argo; it does not provide an independent source.

Nic

Of course you are not claiming the oceans are not warming, but 1991 is the blink of an eye and disregards other warm spells we can trace back millennia.

Is an increase since 1991 significant, especially bearing in mind the convoluted maths needed to arrive at a figure?

I do think you need to submit this information to Nature. Has the author responded as yet?

tonyb

The data sources are:

https://ceres.larc.nasa.gov/order_data.php

http://www.argo.ucsd.edu/Marine_Atlas.html

The global first order differential energy equation can be written as the change in heat in oceans is approximately equal to energy in less energy out at the top of the atmosphere (TOA).

Δ(ocean heat) ≈ Ein – Eout

Satellites measure change in energy in and energy out well but are not so good at absolute values – the comparison problem – so that energy imbalances at TOA are not immediately obvious. Energy in and out varies all the time. Energy in varies with Earth’s distance from the Sun on an annual basis and with much smaller changes over longer terms due to changes in solar emissions. Outgoing energy varies with cloud, ice, water vapor, dust… – in both shortwave (SW) and infrared (IR) frequencies.

An increase in energy flux into the system can be seen in CERES anomalies. These are precise and do not require ‘anchoring’ in Argo.

On the very short CERES record net energy out – the sum of SW and IR power fluxes over time – is the dominant term on the right hand side of the global energy equation. Net TOA power flux is warming up by convention – in what was a warming trend in net CERES anomalies.

The components are shortwave and infrared.

There is less reflected light in the early years followed by little change and a recent warming associated with a warm eastern Pacific. The IR data on the other hand shows cooling in the early record, little change in the middle and more cooling at the end. The mirror image of SW and IR energy changes show that cloud was a primary source of energy change in the climate system in the 21st century. The question you should be asking is why.

The data sets are independent even if closing the absolute global energy budget uses the Loeb et al 2009 methodology of ‘anchoring’ it to Argo, inter alia.

The problem here remains. Resplandy et al found a warming rate of 0.83 W/m2 – at the higher end of estimates as they say. The higher end to this skeptic seems the more credible and recent end. And I reserve judgement about how the more credible rate was derived here in contrast to the invidious leaps to superficial partisan rhetoric from both sides around this.

Change is what is measured by CERES and SORCE, precisely and stably. Closing the absolute global energy budget is via the methods of Loeb et al 2009.

Why can’t I post comments?

no idea, your comments keep getting caught in moderation. Appel is getting caught in spam for no obvious reason. I will try to keep on top of this and release comments

Thank you

Simple question Nic. If I drop the deltas from your first displayed equation, I get the value of the quantity of interest and not its delta. In that case, this issue of baselining does not arise.

dpy6629, thanks for raising the baselining point.

I, like you, had thought about removing the 1991 baselining. The difficulty is that some of these variables are just derived from trend estimates – they have no absolute value. For instance, dAPO_AtmD is estimated as a trend of 0.27 per meg per year with an uncertainty of 50% (135 per meg per year).

However, the method used should produce the same result whatever baseline year is used. The fact that it does not do so shows that the method is faulty; the problem arises from the way that trend and scale systematic errors are treated.

Should that be 0.135 per meg per year not 135

Nic, it would be more in the style of Climate Etc. to post only the first paragraph and to link it to the whole article.

Good point. I was in a hurry when I uploaded the article and I forgot to insert a “read more” tag. Judith has since done so.

To Michael Mann’s innovative programing that created ‘hockey stick’ graphs out of white noise — suitable for publishing by the UN — we now have Resplandy’s ‘trajectory optimization’ mathematical models, designed to maximize estimates of the speed of oceanic temperatures due to AGW — suitable for framing America and capitalism for killing us all, no matter what we do.

Red noise, not white.

…and, a lot of hot air…

“We found that at least 43 authors have direct ties to Dr. Mann by virtue of coauthored papers with him. Our findings from this analysis suggest that authors in the area of this relatively narrow field of paleoclimate studies are closely connected. Dr. Mann has an unusually large reach in terms of influence and in particular Drs. Jones, Bradley, Hughes, Briffa, Rutherford and Osborn.

“Because of these close connections, independent studies may not be as independent as they might appear on the surface… We note that many of the proxies are shared. Using the same data also suggests a lack of independence.

“The MBH98/99 work [aka, the ‘hockey stick’] has been sufficiently politicized that this community can hardly reassess their public positions without losing credibility. Overall, our committee believes that the MBH99 assessment that the decade of the 1990s was the likely the hottest decade of the millennium and that 1998 was likely the hottest year of the millennium cannot be supported by their analysis.” ~Dr. Edward J. Wegman (2006)

The hockey stick has been confirmed many times by now, including using mathematical techniques other than Mann et al’s.

Different mathematical techniques…but still including bristlecone pines and/or other proxies that should not be used, period. You have been around enough to be aware of this. Nice lie of omission.

“The hockey stick has been confirmed many times by now, including using mathematical techniques other than Mann et al’s.”

Rubbish.

Most confirmations, and there are not that many, David, are either Mann with associates or sock puppets of Mann.

Worse, you know that but still spread that rubbish.

Mathematical techniques are not confirmations, data subject to techniques could be.

All of the studies showing hockey sticks including the PAGES ones have come under withering criticism from McIntyre. I personally agree with Steve that the field is hopelessly addicted to artificial and unscientific selection criteria of proxies.

The government has been paying for filing cabinets full of worthless global warming pseudo-science like this for years.

Nic,

I hope you submit a version of your posts to Nature. Some of this seems sufficiently clear to compel publication.

The entire planet represented by a few data points? Seems a bit much for such a complex system.

Not really. If you are defining one part of a complex system like a global temperature, you can use a simple data set or go as complex as you want.

No problems.

In practice one can only ever use the data points that one has that are in good working order.

The system might be complex but the data collectable is restricted and that is all one can ever use.

This is why, as Nic said, a new approach to provide a check for other methods is always welcome.

Nic Thanks for your detailed analysis. PS is there minor typo of missing “-” in “Using WLS regression, it was 16.05 per meg yr−1”? vis “-16.05” later.

David, Thanks for pointing out the typo. Now fixed.

What does “more right” mean?

Either Lewis’s criticism is correct or it isn’t.

If it is, and I haven’t seen any evidence he is wrong, [you certainly haven’t presented any] then there is obviously something amiss with their paper. What correlation a poorly executed, incorrect paper bears to other research, trends, metrics or anything else is largely irrelevant isn’t it?

If they’re wrong then they should correct the errors and then what relation it has to any other evidence, studies, papers etc can be considered.

Until then they’re just wrong, not more or less right.

Thanks – I was wondering if in rushing out of the house I forgot to post it. I am starting to see where #atomski is coming from.

But the point is consilience in a complex and uncertain field where results are imprecise and absolute truth is absent. The important question seems to be is the world warming at 0.8W/m2 or not? It seems quite likely based on 3 out of 3 data sources now.

Most of the commentary on this is tribal posturing from skeptics. The search is always on for a simple reason to dismiss with prejudice some bit of science they disapprove of.

The method itself is refreshingly new – it opens a new perspective, and they reach a very plausible conclusion – so they get serious scientific points. It is a fabulous study – even if there is an error – but any error will not be convincingly demonstrated by a knee-jerk bit of blog science. You need at least three – and hopefully a bit of real science. In the interim – chill – your rhetoric is absurd.

As far as I am concerned If some wretch wants to dis science itself piece by piece on the basis of motivated belief – and I can’t figure out why but both sides do it – then they earn contempt.

“But the point is consilience in a complex and uncertain field where results are imprecise and absolute truth is absent.”

No point in consilience if there is no possible truth, is there?

“ The important question seems to be is the world warming at 0.8W/m2 or not? It seems quite likely based on 3 out of 3 data sources”

I thought the question was more what rate it should be warming at.

I am surprised that there are 3 different and unrelated data sets (is that right? Someone – Nic, above? – suggested that at least 2 were actually linked).

What temp rise at what time frame is that, for those that prefer a simple understandable scale?

What rate it should be warming at? This paper claims 0.83 W/m2 +/- 0.11 (from memory) at the higher end of estimates they say – rather than 60% greater. Discount the noisy hyperbole from both sides and you may have a basis for seeking truth – but as Voltaire said beware those who have found it.

There is far too much scope for uncertainty in this method for great precision yet – but it is still a new and good idea that breaks scientific ground. Kudos. New perspectives are possible that may evolve into new methods.

You can do the calcs – not all that difficult.

http://www.argo.ucsd.edu/Marine_Atlas.html

https://ceres.larc.nasa.gov/order_data.php

CERES/SORCE measures change with precision and stability – and although the paper as well claims they are not truly independent – I find that too self serving. The data is space based of course and the method of closing the energy budget (Loeb et al 2009) does not make it an alias for ocean heat.

… I find it too self serving… and facile.

Pingback: Uncritical News Media Gave Blanket Coverage To Flawed Climate Paper | Watts Up With That?


The Washington Post, for example, reported: “The higher-than-expected amount of heat in the oceans means more heat is being retained within Earth’s climate system each year, rather than escaping into space. In essence, more heat in the oceans signals that global warming is more advanced than scientists thought.”

The New York Times at least hedged their reporting, claiming that the estimates, “if proven accurate, could be another indication that the global warming of the past few decades has exceeded conservative estimates and has been more closely in line with scientists’ worst-case scenarios.”

Such bizarre hyperbole – there is neither more nor less heat in the oceans and atmosphere than last week. Well not much. And the average rate of warming over the past couple of decades they find is about 0.8 W/m2 – consistent with other recent estimates. It seems currently much less than that. Believers have had a lock on bizarre hyperbole – but it does seem that skeptics are making a late run.


Reblogged this on Quaerere Propter Vērum.

This story’s been noted by Reason’s Ronald Bailey. He apparently has no financial investments that depend on thermal sea water gradients: http://reason.com/blog/2018/11/08/is-new-study-claiming-the-oceans-are-war

Nic, have you heard back from the authors?

Yes, he has: “New study estimate ocean warming using atmospheric O2 and CO2 concentrations. We are aware the way we handled the errors underestimated the uncertainties. We are working on an update that addresses this issue. We thank Nicholas Lewis for bringing this to our attention.” Source: http://resplandy.princeton.edu

Thanks, Nic! :)

Resplandy now has a statement on her website: resplandy.princeton.edu

“We are aware the way we handled the errors underestimated the uncertainties. We are working on an update that addresses this issue. We thank Nicholas Lewis for bringing this to our attention.”

“We show that the ocean gained 1.33 ± 0.20 × 10^22 joules of heat per year between 1991 and 2016, equivalent to a planetary energy imbalance of 0.83 ± 0.11 watts per square metre of Earth’s surface.”

The central estimate is fine – and they have underestimated the confidence limits?

Estimates of these components have huge uncertainties that are possibly systematic rather than random.

The cultural problem for science is the invidious leap to superficial partisan rhetoric from both sides.

I’m not seeing her comment on the website(?)

On the top menu bar, click on “in the news”.

That’s good news to know the authors have accepted that Nic Lewis is correct. It raises my confidence that Nic Lewis is careful with his analyses and that non experts in the field can have confidence in what he says. It increases my confidence in his (and Curry’s) ECS and TCR estimates. The fact that Richard Tol checked and agreed with Lewis’s numbers also helps.

Dr. Curry and Nic:

I apologize for hijacking this post but could find no other appropriate “Reply”s.

I would like to submit an article for publication on Climate Etc. Can you send me contact info? The topic involves errors in temperature anomalies.

Thanks.

4kx3

Robert,

As Nic explains in his first post, the size of the error bars makes this work, though novel, broadly consistent with all other estimates. Not sure why you keep hammering home this point about 0.8 W/m^2 when Nic explicitly says their results are consistent with previous measures. Or, is your argument more about the rate and the slope being much lower than their estimate? But again, the large error bars come into play there too.

Judith: ALMOST synchronous! :-)

I guess that the response will clarify which method was used and what the uncertainties actually are and then we can all resume our normal approaches.

What does “the way in which they handled the errors meant they underestimated the uncertainties” actually mean? Does it mean they did it Kennedy’s way? Does this mean that it actually does matter what starting point you use?

Nic,

Thank you for a meticulous analysis of the uncertainty model.

It appears from Resplandy’s website that she has acknowledged that there is a problem with the error analysis, but has yet to acknowledge the more important problem with the median estimation of trend. Is there another shoe still to drop? Has she now responded to your earlier e-mails?

Hi Paul

Thanks! I think you are right.

The only substantive response I have had from Laure Resplandy simply said that they were working on an update. I had sent her pdfs of both articles as they were posted; that response was after I sent the pdf of this article.

How many angels can dance on the point of a pin?

This paper is excellent science. There is poor science of course. I would include opportunistic ensembles, TCR and ECS, anything lacking a centennial to millennial perspective – papers that rely on the assumption that any change in the past 70 years is anthropogenic – and in a complex dynamical system anything that finds a simple cause and effect. I wouldn’t include the meticulous science of PAGES 2k.

The rush to rehearsed purple prose here – regardless of the merits of the argument – and we will see – largely from those who don’t know their scientific arse from their elbows is a less than edifying spectacle.

The problem with Resplandy et al. is far deeper than just faulty determination of regression error. It lies in the blind reliance upon linear regression for analyzing empirical time series whose spectral structure is unknown–a “climate science” practice as widespread as it is deplorable.

There are almost no geophysical time series that satisfy the tacit premise of persistent linear trend plus stationary gaussian white noise. Instead, we are confronted with climate data that are rich in chaotic oscillations of various bandwidths and periods that often exceed the length of available record. What are often presumed to be “trends” turn out to be mere snippets of far-longer-scale variations of unknown origin. Analytic unraveling of the many mysteries of nature cannot begin by relying upon false premises.

“Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic. The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.” Julia Slingo and Tim Palmer, 2011

Significant among these seemingly random changes are the 20- to 30-year Pacific state regimes. This is the core of the Earth system’s operation. It has happened in 20- to 30-year regimes quite evidently for at least 1000 years. Regimes have associated means and variances that then shift, and this sums to perpetual change.

https://journals.ametsoc.org/doi/10.1175/JCLI-D-12-00003.1

And these modulate temperatures as signals propagate around the planet like a stadium wave, with inter alia anti-phase responses in the Antarctic and Arctic (e.g. https://uncch.pure.elsevier.com/en/publications/polar-synchronization-and-the-synchronized-climatic-history-of-gr)

The implication for Resplandy is that the rate of warming changed around the turn of the century and will shift again within a decade – if it is not happening now – with changes in TOA radiant energy flux and the rate of ocean warming. How’s that for a prediction. This time we will have CERES and Argo. Most people are still far behind the curve and I guess it will come as a surprise.

https://watertechbyrie.com/2014/06/23/the-unstable-math-of-michael-ghils-climate-sensitivity/

@john321s,

I agree with most of your commentary. See my long note to Nic Lewis below.

Nic Lewis:

Great job.

Nic,

I do wonder why the authors, having produced a great hypothesis, were only prepared to spend a couple of bent pennies on crude inferences from their results. I suspect in part at least that the answer to this question lies in the fact that, using the authors’ own data, it is easily concluded that average net TOA flux is only about half of previous best estimates as used in the reconciliation of CERES data to ARGOS. Somehow, I don’t think that this result would have drawn such big headlines. I will explain in a moment.

As john321s correctly points out above, there is little justification for expecting a straight line in the pseudo-concentration time series. If the authors’ hypothesis is correct then the per meg vs time plot should be responding to instantaneous heat content or the integral of net flux with time. There is no physical validity to any assumption that net flux is approximately constant with time over this period. We have sufficient reliable data (satellite anomaly variation and ocean temperature measurements) to know that it isn’t, and the authors’ own data show significant gradient variation over the period.

The only justification (of the authors), I presume, is that fitting a linear trend to the total dataset yields one measure of the “average rate of energy gain” and hence the “average net flux” over the full period. As such, I would question whether this measure is a good choice. It is a very crude measure of average flux, especially where, as in this instance, the integral curve displays non-linear structure. End point analysis would be less ambiguous and would preserve total energy gain over the period. But a more obvious approach would be to analyze the first difference series as a more direct measure of the time-variation in net flux. This would not only allow reconciliation/comparison with time-varying data from alternative sources, but would also IMO allow a statistical model with a perhaps more robust separation of the components of uncertainty.

None of the above detracts from the validity of your challenges to the calculations on which the authors actually reported, but these thoughts were what led me to speculate on why the authors chose to report on this crude (and seemingly incorrectly calculated) headline item of average energy gain per year over the period.

In recent years, our best relative ocean heat data comes from ARGOS, and our best satellite data on RELATIVE CHANGE in TOA net flux comes from CERES. CERES has high precision and low accuracy. It yields no useful information on the absolute value of net flux, but calculates the relative change with good accuracy. Consequently, after conversion of irradiance to flux (including daily drift calculations), constant bias corrections are applied to the SW and LW fluxes to tie the TOA net flux values to ARGOS-adjusted-PLUS estimates of net flux, using data from between 2005.5 and 2015.5. Before application of this correction, CERES EBAF (Ed 4.0) shows a non-credible net flux difference of 4 W/m2 when averaged over the reference period. After correction, it shows a value of 0.71 W/m2 – which by design ties into the estimate obtained from ARGOS-adjusted-PLUS. This ARGOS-adjusted-PLUS value comes from Johnson et al:- https://www.researchgate.net/publication/304400581_Improving_estimates_of_Earth's_energy_imbalance
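The one-time bias correction described above can be illustrated with a toy sketch in Python. This shows only the basic idea of anchoring a biased record to an independent reference value; it is not the actual Loeb et al. (2009) objective constrainment algorithm, which adjusts SW and LW fluxes separately within their calibration uncertainty ranges. The synthetic monthly series and its noise level are invented for illustration:

```python
import numpy as np

# Toy illustration only: pin the mean of a biased net-flux record over a
# reference period to an independent anchor value via one constant offset.
rng = np.random.default_rng(0)
raw_net = 4.3 + 0.5 * rng.standard_normal(120)  # invented monthly net flux, ~4.3 W/m2 bias
ANCHOR = 0.71                                   # in-situ anchor value (W/m2) quoted above

offset = raw_net.mean() - ANCHOR                # single one-time correction
adjusted = raw_net - offset

print(round(adjusted.mean(), 2))                # 0.71 by construction
```

Because the offset is constant, the anomalies (and hence the record’s time dependence) are untouched; only the absolute level moves.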

If we now consider the gradient of the Resplandy data OVER THE SAME REFERENCE PERIOD, it works out to be around 0.51 per meg per year. This translates into a heat addition of 5.8 ZJ per year or an average net flux value of 0.36 W/m2. This is about half the reference value from Johnson et al. So have we been greatly underestimating or greatly overestimating ocean warming?
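The unit conversion above can be checked with a few lines of Python. The joules-per-per-meg factor is an assumption backed out from the paper’s headline numbers (a trend of 1.16 per meg per year corresponding to 1.33 × 10^22 J per year), not taken from the paper’s methods, and the Earth-area and year-length constants are rounded:

```python
SECONDS_PER_YEAR = 3.156e7
EARTH_AREA_M2 = 5.10e14               # total Earth surface area (m^2), rounded

# Assumed conversion factor, backed out from the paper's headline figures:
# 1.16 per meg/yr <-> 1.33e22 J/yr
J_PER_PERMEG = 1.33e22 / 1.16         # ~1.15e22 J per per meg

trend = 0.51                          # per meg/yr over 2005.5-2015.5, as quoted above
heat_zj_per_yr = trend * J_PER_PERMEG / 1e21
flux_w_m2 = trend * J_PER_PERMEG / (EARTH_AREA_M2 * SECONDS_PER_YEAR)

print(round(heat_zj_per_yr, 1))       # ~5.8 ZJ/yr
print(round(flux_w_m2, 2))            # ~0.36 W/m2
```

Both figures reproduce the values quoted in the comment.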

kribaez

‘So have we been greatly underestimating or greatly overestimating ocean warming?’

That is a great question. There has been a lot of talk of supposed sea level rise acceleration which never quite seems to come about.

Fasullo et al

“Is the detection of accelerated sea level rise imminent?”

Global mean sea level rise estimated from satellite altimetry provides a strong constraint on climate variability and change and is expected to accelerate as the rates of both ocean warming and cryospheric mass loss increase over time. In stark contrast to this expectation however, current altimeter products show the rate of sea level rise to have decreased from the first to second decades of the altimeter era. Here, a combined analysis of altimeter data and specially designed climate model simulations shows the 1991 eruption of Mt Pinatubo to likely have masked the acceleration that would have otherwise occurred. This masking arose largely from a recovery in ocean heat content through the mid to late 1990s subsequent to major heat content reductions in the years following the eruption. A consequence of this finding is that barring another major volcanic eruption, a detectable acceleration is likely to emerge from the noise of internal climate variability in the coming decade.

http://sealevel.colorado.edu/content/detection-accelerated-sea-level-rise-imminent

There is another slightly newer paper on the same site by Fasullo et al:

“Climate-change–driven accelerated sea-level rise detected in the altimeter era”

http://www.pnas.org/content/115/9/2022

Although reading the second paper, the results seem more ambivalent and nuanced than the headline suggests.

So the level rise may be accelerating, or it may not. A good part of this would be due to the heat content in the ocean and subsequent expansion.

I don’t think we can therefore be certain which of the scenarios in your question is or isn’t happening.

Perhaps these figures could do with checking over to see if the uncertainties have been properly dealt with?

tonyb

Tony,

Forgive me if I don’t pursue the question of MSL rate change here in any detail, other than to say that given the quasi oscillatory behaviour historically, the difficulty of tying TOPEX to JASON and the lack of closure between satellite altimetry and tide-gauge data, I am agnostic on the question of whether it is accelerating or decelerating, but I am 100% confident that it is doing one or the other if you pick the correct timescale!

My question was perhaps easily misread as an open question (sorry!). It was not intended to be an open question. It was narrowly focused on the fact that under the Resplandy interpretation, the inference is that we have underestimated ocean heat gain substantially (big headlines). Once her calculation is corrected a la Nic Lewis, the median estimate becomes 0.63W/m2 – in line or slightly below most estimates over the period. If, on the other hand you repeat the calculation using the Resplandy data over the key reference period 2005.5 to 2015.5 you find that her method predicts only half the average net flux estimated by ARGO.

I think the error bars on this method would be much wider than the more direct Argo measurement for the recent period, so there is no reason to prefer this measure to Argo since 2005. Prior to 2005 and 2000 the in situ error bars become wider but I am still not sure that Resplandy’s technique would be any better. It looks very indirect with a lot of built-in assumptions.

“Importantly, the SW and LW TOA flux adjustment is a one-time adjustment to the entire record. Therefore, the time dependence of EBAF TOA flux is tied to the CERES instrument radiometric stability.”

https://journals.ametsoc.org/doi/10.1175/JCLI-D-17-0208.1

“The CERES Energy Balanced and Filled (EBAF) product is produced to address two shortcomings in the standard CERES level-3 data products. First, satellite instruments used to produce CERES TOA ERB data products provide excellent spatial and temporal coverage and therefore are a useful means of tracking variations in ERB over a range of time–space scales. However, the absolute accuracy requirement necessary to quantify Earth’s energy imbalance (EEI) is daunting. The EEI is a small residual of TOA flux terms on the order of 340 W m−2. EEI ranges between 0.5 and 1 W m−2 (von Schuckmann et al. 2016), roughly 0.15% of the total incoming and outgoing radiation at the TOA. Given that the absolute uncertainty in solar irradiance alone is 0.13 W m−2 (Kopp and Lean 2011), constraining EEI to 50% of its mean (~0.25 W m−2) requires that the observed total outgoing radiation is known to be 0.2 W m−2, or 0.06%. The actual uncertainty for CERES resulting from calibration alone is 1% SW and 0.75% LW radiation [one standard deviation (1σ)], which corresponds to 2 W m−2, or 0.6% of the total TOA outgoing radiation. In addition, there are uncertainties resulting from radiance-to-flux conversion and time interpolation. With the most recent CERES edition-4 instrument calibration improvements, the net imbalance from the standard CERES data products is approximately 4.3 W m−2, much larger than the expected EEI. This imbalance is problematic in applications that use ERB data for climate model evaluation, estimations of Earth’s annual global mean energy budget, and studies that infer meridional heat transports. CERES EBAF addresses this issue by applying an objective constrainment algorithm to adjust SW and LW TOA fluxes within their ranges of uncertainty to remove the inconsistency between average global net TOA flux and heat storage in the earth–atmosphere system (Loeb et al. 2009).”

CERES radiometric data is stable and accurate – I am not clear on what the presumed difference between accurate and precise above is – and it provides information on what is changing in the system. Regardless – the ocean data remains. Warming steadily and strongly in the most recent decade.

“The top-of-atmosphere (TOA) Earth radiation budget (ERB) is determined from the difference between how much energy is absorbed and emitted by the planet. Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.” https://link.springer.com/article/10.1007/s10712-012-9175-1

The role for chaos in the Earth system is solely in how atmospheric and ocean dynamics change. Regimes, shifts and variance that gets more extreme the longer the record. The nature of Wally Broecker’s beast. Trends, means and variance are perfectly valid ideas within a regime – but regimes in the globally coupled flow field shift. Importantly it seems with a 20 to 30 year periodicity. The next shift is due within a decade – and we will have CERES and Argo this time.

The corollary of Wally’s beast is that it is still not a great idea to poke it with a stick.

Robert,

It is not clear to me what point if any you are trying to make here.

From your reference:-

“Despite recent improvements in satellite instrument calibration and the algorithms used to determine CERES TOA radiative fluxes, a sizable imbalance persists in the average global net radiation at the TOA from CERES satellite observations. With no adjustments to CERES SW and LW all-sky TOA fluxes, the net imbalance for July 2005–June 2015 is approximately 4.3 W m−2, much larger than expected. As in previous versions of EBAF, we use the objective constrainment algorithm described in Loeb et al. (2009) to adjust SW and LW TOA fluxes within their ranges of uncertainty to remove the inconsistency between average global net TOA flux and heat storage in the earth–atmosphere system, as determined primarily from ocean heat content anomaly (OHCA) data. In the current version, the global annual mean values are adjusted such that the July 2005–June 2015 mean net TOA flux is 0.71 ± 0.10 W m−2, as provided in Johnson et al. (2016)…”

TOA net flux from CERES EBAF Ed 4.0 is calibrated in a one-off bias correction to 0.71 W/m2. The derivation of this value, which is what I described as “ARGOS-adjusted-PLUS”, is partly explained in your reference and fully explained in the Johnson et al paper which I referenced in my comment above, and which is also referenced multiple times in the Loeb et al paper from which you abstracted your quotes.

I say again, it is not at all evident to me what point you are seeking to make.

You have certainly lost my interest – from the few words I scanned it seems a rehash, a spurious dismissal of the Norman Loeb et al 2017 paper and some whines about missing the point. Loeb et al 2009 is the basis for the closure methodology.

Warming in net (−IR − SW: up is warming by convention) is obvious – and if you understand that solar variability is inconsequential directly – it’s not the sun stupid – and there I don’t think I am referring to you – then closing the global energy budget is not required for a Fermi problem solution. The warming rate is approximately given by the net flux.

Pingback: An ominous discovery? Sea Temperature or Climate? | Naval War changes Climate

“Although it has failed to produce its intended impact nevertheless the Kyoto Protocol has performed an important role. That role has been allegorical. Kyoto has permitted different groups to tell different stories about themselves to themselves and to others, often in superficially scientific language. But, as we are increasingly coming to understand, it is often not questions about science that are at stake in these discussions. The culturally potent idiom of the dispassionate scientific narrative is being employed to fight culture wars over competing social and ethical values.49 Nor is that to be seen as a defect. Of course choices between competing values are not made by relying upon scientific knowledge alone. What is wrong is to pretend that they are.” http://www.lse.ac.uk/researchAndExpertise/units/mackinder/pdf/mackinder_Wrong%20Trousers.pdf

My guess is that the unhealthy and unhelpful triumphalism around this supposed error will grind down to another inconclusive climate war talking point as the opposing arguments are marshaled.

Chaos in the Earth system sums to a conclusion most can’t get their heads around — you get points for guessing which denier said this. “In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.” It means that most of “the science” — the data interpretation, the methods and the theories are utterly inadequate to the task of explaining climate for us. But both sides of the climate war continue to insist on a certainty that is impossible – and continue a battle in which one side is heavily outgunned. The climate change battalion is all of the global scientific institutions, the liberal press, governments, major scientific journals, etc. Opposed is a ragtag collection of a few marginalized cheerleaders for curmudgeons with crude and eccentric theories they insist are the true science. The curmudgeons are remarkably persistent – and climate shifts may give them a strategic advantage as the planet doesn’t warm entirely as expected – or at all. The battle is absurd and unwinnable – by either side.

The rest of us are concerned that the real objectives of humanity are not lost sight of. It is simple in principle to take the initiative on the broad front of population, development, energy technology, multiple gases and aerosols across sectors, land use change, conservation and restoration of agricultural lands and ecosystems and building resilient communities. What we really want is much more clarity on effective policy responses – a focus on the real issues of global economic progress and environmental protection. Emissions of greenhouse gases or loss of biodiversity are far from intractable problems — but economic growth is the foundation of any practical measures.

Paul, Thank you for your thoughtful comment.

I agree that there is no reason to expect a linear increase in deltaAPO_Climate. However, given the poor signal to noise ratio in their deltaAPO_Climate time series, I think the best that can be done is to provide an estimate of the average ocean heat uptake over the period (or, equivalently, an estimate of the change in ocean heat content over 1991-2016). I don’t think that it is unreasonable to do so by fitting a linear regression model, although the way that they did so is inappropriate. Alternatively, simply using the increase in deltaAPO_Climate over the analysis period would, as you say, have the advantage of avoiding difficulties over choice of regression method.

I quite agree that analysing the first difference time series, as I suggested towards the end of my article, would be preferable, at least if weighted LS regression is used.

As you say, ARGO provides the best ocean heat data, and CERES provides the best estimates of fluctuations in net flux and hence in ocean heat uptake. As I see it, the Resplandy et al. deltaAPO_Climate based method, while novel, interesting and independent of ARGO data, presently has such large uncertainty that it is almost equally compatible with all in situ measurement based estimates of the change in ocean heat content.

It is certainly interesting that their deltaAPO_Climate time series increases more slowly over the final decade or so of the 1991-2016 period than prior to then, implying as you say a much lower estimate of ocean heat uptake over 2006-16. However, the uncertainties are so great that I doubt much weight can be put on this result. My current view is that ARGO based ocean heat change estimates are broadly correct over the last decade or so.

Seems like they are starting to recognise the issues in the paper. The latest from one of the paper’s co-authors (https://scripps.ucsd.edu/news/study-ocean-warming-detected-atmospheric-gas-measurements) is as follows:

Note from co-author Ralph Keeling Nov. 9, 2018: I am working with my co-authors to address two problems that came to our attention since publication. These problems, related to incorrectly treating systematic errors in the O2 measurements and the use of a constant land O2:C exchange ratio of 1.1, do not invalidate the study’s methodology or the new insights into ocean biogeochemistry on which it is based. We expect the combined effect of these two corrections to have a small impact on our calculations of overall heat uptake, but with larger margins of error. We are redoing the calculations and preparing author corrections for submission to Nature.

Gavin Schmidt has also been tweeting about it – mind you he refers to it as a ‘minor issue in the Resplandy et al discussion’!

Interestingly, neither Keeling nor Resplandy make any reference to the OLS regression mean trend mis-calculation that Nic identified.

“Interestingly, neither Keeling nor Resplandy make any reference to the OLS regression mean trend mis-calculation that Nic identified.”

Shocking, I know. Admitting the paper’s conclusions are, well, wrong, is harder than accepting that the uncertainty estimate is wrong. Still, the paper’s conclusions are wrong, whether admitted or not. Good enough for government work.

It is a little sad just the same… a good idea that was implemented badly. Of course, if the method just supported earlier estimates, but with much greater uncertainty, then it would not have been in Nature.

Steve, this comment is an excellent summary of the Resplandy paper.

To get published in Nature they needed some extra juice. But this is perfectly justified and a tiny matter when one is saving the world.

“… it would not have been in Nature.” Pattern here: it must be worse than we thought to be worthy of blaring headlines. Next, thanks to conscientious people like Nic the science is corrected — yet un-publicized or unpublished.

Nic, thanks for your time. And Resplandy et al. thank you as well for re-establishing the baseline, reopening the avenue to publish a future article headlined: “it’s worse than we thought [again]”.

It seems that neither Resplandy nor Keeling accepts that the trend of 1.16 was an error, as they haven’t mentioned it. Nic explained one possible source for it was an OLS regression on DAPO_Climate & DAPO_AtmD. I tried that and got a slope of 1.153, which is close but not exactly 1.16.

As Nick Stokes and Richard commented here, I ran a WLS regression of DAPO_Climate alone with weights, as normal, of 1/SD^2 (I set the 1991 SD to 1e-02). The slope of this regression is 1.163 with an SE of 5.121e-02. The mean value is the same as in the paper if rounded to 2 decimal places.
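For anyone who wants to reproduce this kind of fit, here is a minimal closed-form WLS sketch in Python. The Table 4 values are not reproduced in this thread, so it uses as stand-in data the noiseless pseudo-ΔAPO_AtmD series from the main article (0.27 per meg/yr, SD equal to 50% of each value, with a tiny 1991 SD); substituting the actual Table 4 series and SDs is left to the reader:

```python
import numpy as np

# Weighted least squares via the normal equations, with weights = 1/SD^2.
# Stand-in data: the noiseless pseudo-series from the main article.
years = np.arange(1991, 2017)
y = 0.27 * (years - 1991)                         # linear, 0.27 per meg/yr
sd = np.where(years == 1991, 1e-2, 0.5 * y)       # 50% of value; tiny SD in 1991

W = np.diag(1.0 / sd**2)                          # weight matrix
X = np.column_stack([np.ones_like(years), years - 1991])
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # [intercept, slope]

print(round(beta[1], 3))                          # recovers 0.27 exactly (no noise)
```

With noiseless data the weights are immaterial and the fit is exact; with the real, noisy series the 1/SD^2 weighting is what drives the slope towards the 1.16 figure discussed above.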

I assume the figure of +/-0.15 is SD and not SE so that has to be adjusted to SD (Nic states all errors in the paper are SD). There are 26 data points in the series however there is a lot of auto-correlation in the WLS residuals so the effective size is less. Foster & Rahmstorf in ‘Global Temperature Evolution 1979-2010’ (ERL) in their methods explain the standard and adjusted methods for dealing with this. For an AR(1) model of the residuals (which is the standard method normally used) nu=(1+phi1)/(1-phi1)=4.12 which means the effective sample size is 26/4.12=6.32 and the SD=SE*Sqrt(6.32)=0.129. From the SE of 5.121e-02 and the paper’s SD of 0.15 the effective size calculates at about 8.6 with a nu of approximately 3. If it was calculated this way then they didn’t use the AR1 method and used some other method. (Note: the residuals are significantly greater than AR(1) so F&R recommend using something like ARMA(1,1) for such a scenario but here oddly the nu value is less than AR(1)!)
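The effective-sample-size arithmetic above can be spelled out in a few lines; phi1 is backed out from the quoted nu of 4.12, and all figures are the comment’s, not the paper’s:

```python
import math

se_wls = 5.121e-2                 # WLS standard error quoted above
n = 26
nu = 4.12                         # (1 + phi1) / (1 - phi1), as quoted
phi1 = (nu - 1) / (nu + 1)        # back out the lag-1 autocorrelation, ~0.61

n_eff = n / nu                    # effective sample size, ~6.3
sd = se_wls * math.sqrt(n_eff)    # the comment's SE -> "SD" conversion

print(round(phi1, 2))             # 0.61
print(round(n_eff, 2))            # 6.31
print(round(sd, 3))               # 0.129
```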

It looks like the trend in DAPO_Climate of 1.16+/-0.15 was calculated from the series given in Table 4 by WLS using the SD’s given to weight them. Could the SD given for the slope have been calculated from the SE of that estimate adjusting the data size for residual auto-correlation?

Thinking about it further I believe the SD listed in the paper is the SE of the estimate but inflated for auto-correlation (the conversion back to the SD doesn’t make any sense). In fact, Gavin Schmidt queried in a tweet whether the SE was higher due to the residual auto-correlation. Using a standard AR1 model would have the SE as +/-0.104. However, the residuals are higher than AR1 so it isn’t sufficient, and n is also small at 26; Lee and Lund use a further correction for small n. Using this small sample correction and an AR1 model the SE is +/-0.133. Given the residuals are significantly greater than AR1 (at least to the 7th lag), further inflation of the SE would be in order (such as using ARMA(1,1) as F&R suggest). Overall, the +/-0.15 seems consistent with the SE of the WLS regression estimate adjusted for the residuals’ actual auto-correlation. It is surprising to me that such calculations (or simulations) of SE’s don’t have clear explanations published somewhere (in appendices or supplementary papers or wherever). Leaving people to guess how things were derived is a strange way to do things.
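The standard AR(1) adjustment mentioned in this thread amounts to multiplying the WLS standard error by sqrt((1 + phi1)/(1 - phi1)). A quick check with the figures quoted above (phi1 ≈ 0.61 is inferred from the nu of 4.12 earlier in the thread, not stated in the paper):

```python
import math

se_wls = 5.121e-2                                     # WLS standard error from above
phi1 = 0.609                                          # inferred lag-1 autocorrelation
se_ar1 = se_wls * math.sqrt((1 + phi1) / (1 - phi1))  # standard AR(1) inflation

print(round(se_ar1, 3))                               # 0.104, matching the AR1 figure above
```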

I see that Keeling references another issue someone else has identified to do with the constant land O2:C exchange ratio of 1.1.

It will be interesting to see how they justify the trend estimate based on WLS given the arbitrariness of it that Nic has identified.

Joe H

“It looks like the trend in DAPO_Climate of 1.16+/-0.15 was calculated from the series given in Table 4 by WLS using the SD’s given to weight them”

I also think that the 1.16 trend was derived using WLS in the way that you say, although they could have used a more complex approach, using WLS regression on the 10^6 sampled time series and taking the mean trend estimate.

It is unclear to me how exactly the +/- 0.15 1-sigma standard error was derived, but it is clear to me that the true uncertainty is far greater than +/- 0.15. Moreover the problem is worse than simple autocorrelation; the statistical model is mis-specified, and due to the dominance of trend and scale-systematic errors in the trend estimation there are hardly any effective degrees of freedom in the errors.

Nic,

Just for completeness’ sake I tried doing a Bayesian WLS regression using Stan. I tried it first for an OLS (single unconstrained SD) and got a slope of 0.88 +/- 0.05 SE. Then constraining the SD’s to the values in table 4 it was a slope of 1.16 and a SE of +/-0.1. The intercept and slope priors were N(0,10) regularising uninformative and I ran it on 2 chains for 10k iterations. It is interesting to see that the WLS SE agrees with the standard methodology of inflating the calculated SE by sqrt((1+rho1)/(1-rho1)) as I calculated up above in an earlier post. I don’t know why the SE is half that in your and Richard’s direct MC simulations which were about 0.2 – it must be the effect of the priors but they are quite uninformative.

Instead of rejigging their maths why have they not said what method they actually used, or have I missed that?

Robert.

OHC is only part of the water available to be heated up on the planet, albeit a very substantial part of it. One imagines that the unmentioned other third, that on and in land, has an important part to play and needs to be quantified.

Given this extra wallop of heat over such a long time, why is it not detectable in rising sea levels and air temperatures?

Your two early graphs of SW and IR did not add up to positive in the last 2 years, unlike your latest combined graph.

SW and IR can never on their own add up to the imbalance at TOA. But they are what change most in the energy budget, for many reasons.

Satellites measure change in energy in and energy out well but are not so good at absolute values – the problem of closing the incoming and outgoing energy budget – so that energy imbalances at TOA are not immediately obvious. Energy in and out varies all the time. Energy in varies with Earth’s distance from the Sun on an annual basis and with much smaller changes over longer terms due to changes in solar emissions. Outgoing energy varies with cloud, ice, water vapor, dust, vegetation, ocean and atmospheric temperature, etc. – in both shortwave (SW) and infrared (IR) frequencies.

So neglect the closure problem on spurious grounds if you wish – but net outgoing TOA power flux categorically shows warming – against a background of much smaller change in solar intensity.

And what? Something about the ENSO related increase in reflected SW in the past couple of years?

Thanks.

Energy in, energy out, energy contained and energy created are the four obvious concepts, though the first two – with a powerhouse like the sun – are far greater in immediate effects and consequences.

SW and IR are also the bandwidths we can measure most easily at the moment, but not all the bandwidths that exist and need to be balanced.

Mass of the (planet) system is a variable concept, though reasonably constrained for virtually all conditions under discussion.

The area that might need focusing on most is the retention time of energy for different substances and milieux.

It is hard to envisage the tiny bit of energy trapped by each individual photosynthesis reaction, but over time that builds up to a significant amount of stored energy in human terms, though on a million-year time scale. Still only a weak spark in terms of the daily solar budget.

Similarly, some substances – water, CO2 in the air – help restrain the flow of energy out of a system longer than others.

It is this difference in re-emission rates that leads to the lag and fluctuation in energy in versus energy out that is so hard to quantify.

I got to energy created and decided that…

At TOA – 20 km up, it is assumed, I seem to recall – all energy is electromagnetic. The change in planetary energy stores – overwhelmingly as ocean heat in relevant time frames – results from small differences in unit energy in less unit energy out that accumulate over time. Nothing all that difficult at all – at least conceptually.

Ultimately the Earth system is not isolated and operates far from thermodynamic equilibrium. The 1st law of thermodynamics says that energy cannot be created or destroyed – the second law says that net energy flows from the sun to the Earth and back to space. The planet tends to energy equilibrium – maximum entropy production – at TOA as a result of the operation of fundamental physical laws.

Energy contained (with none created) is known as the imbalance and is measurable as the rate of change of ocean heat content, the oceans having more than 90% of the effective heat capacity. The land doesn't retain so much.

Pingback: Weekly Climate and Energy News Roundup #335 | Watts Up With That?

Hence your concept of OHC as the arbiter of retained heat in the system. It is more a part of the total retained heat. It ignores a third of the water available on the planet to retain heat.

Hence the figures you use are out by at least a third. Attempts to define the energy budget are therefore off by a third before you start, let alone using them to say this paper is within the bounds of what we know. Rather significant.

Mind you this argument is specious on my part because, while true, it implies a bigger capacity for heat uptake.

But this all raises the question of why we have not detected it. Roy Spencer gives his reasons for low climate sensitivity – why are you unmoved by his arguments?

90% of global energy stores are in the oceans, 4% on land and 4% as latent heat. And I have seen Roy Spencer discuss ERBS data.

Nuclear is not energy created?

E = MC^2

Energy stored in matter since soon after the big bang.

angech, people have estimated total combustion (of which nuclear would be a small part), and it is still a fraction of a percent of solar input when averaged over the Earth's surface. Plus there is no reason to believe it has changed significantly over time in the way other forcings have.
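A back-of-envelope version of that estimate, using round illustrative numbers (about 18 TW of global primary energy use, 5.1×10^14 m² of surface area, and roughly 240 W/m² of globally averaged absorbed solar flux – assumptions, not measured data from this thread):

```python
# Rough check that human energy release is a tiny fraction of solar input.
# All numbers are round, illustrative values.
human_power_w = 18e12         # ~18 TW global primary energy consumption
earth_area_m2 = 5.1e14        # Earth's surface area
absorbed_solar_wm2 = 240.0    # globally averaged absorbed solar flux

human_wm2 = human_power_w / earth_area_m2
fraction = human_wm2 / absorbed_solar_wm2
print(f"human heat release: {human_wm2:.3f} W/m^2 "
      f"({100 * fraction:.3f}% of absorbed solar)")
# -> about 0.035 W/m^2, i.e. roughly 0.015% of absorbed solar
```

So even generous assumptions leave direct human energy release around four orders of magnitude below the solar input it would have to rival.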

Energy from nuclear decay in the Earth's interior is about 0.04 W/m2 from memory, adding about 0.4 K to ocean temperature. Heat retained in a warming atmosphere has implications for 'heat in the pipeline'.

This seems close enough but energy flow is of course dynamic.

OHC is not important when it comes to the climate. It is the overall oceanic sea surface temperatures that matter, and they should be declining in response to very low solar activity. The lower overall ocean sea surface temperatures will translate to lower OHC. It is playing out and will continue, as global warming looks like it peaked around 2 years ago.

The climate is now at a crossroads.

For my money, I think it is the geo/solar magnetic field strengths, and if they weaken enough and stay weak, I think the result will be a major climatic shift to colder conditions.

Signs I am watching for are:

Overall sea surface temperatures

500 mb atmospheric circulation patterns/heights

Overall geological activity

Overall snow/cloud coverage

Time will tell, but the potential is higher now than at any other time since the Dalton Solar Minimum ended, around 1830.