A major problem with the Resplandy et al. ocean heat uptake paper

by Nic Lewis

Obviously doubtful claims about new research regarding ocean heat content reveal how unquestioning Nature, climate scientists and the MSM are.

On November 1st there was extensive coverage in the mainstream media[i] and online[ii] of a paper just published in the prestigious journal Nature. The article,[iii] by Laure Resplandy of Princeton University, Ralph Keeling of the Scripps Institution of Oceanography and eight other authors, used a novel method to estimate heat uptake by the ocean over the period 1991–2016 and came up with an atypically high value.[iv] The press release[v] accompanying the Resplandy et al. paper was entitled “Earth’s oceans have absorbed 60 percent more heat per year than previously thought”,[vi] and said that this suggested that Earth is more sensitive to fossil-fuel emissions than previously thought.

I was asked for my thoughts on the Resplandy paper as soon as it obtained media coverage. Most commentators appear to have been content to rely on what was said in the press release. However, being a scientist, I thought it appropriate to read the paper itself, and if possible look at its data, before forming a view.

Trend estimates

The method used by Resplandy et al. was novel, and certainly worthy of publication. The authors start with observed changes in ‘atmospheric potential oxygen’ (ΔAPOOBS).[vii] In their model, one component of this change (ΔAPOClimate) is due to warming of the oceans, and they derived an estimate of its value by calculating values for the other components.[viii] A simple conversion factor then allows them to convert the trend in ΔAPOClimate into an estimate of ocean heat uptake (the trend in ocean heat content).

On page 1 they say:

From equation (1), we thereby find that ΔAPOClimate = 23.20 ± 12.20 per meg, corresponding to a least squares linear trend of +1.16 ± 0.15 per meg per year[ix]

A quick bit of mental arithmetic indicated that a change of 23.2 between 1991 and 2016 represented an annual rate of approximately 0.9, well below their 1.16 value. As that seemed surprising, I extracted the annual ΔAPO best-estimate values and uncertainties from the paper’s Extended Data Table 4[x] and computed the 1991–2016 least squares linear fit trend in the ΔAPOClimate values. The trend was 0.88, not 1.16, per meg per year, implying an ocean heat uptake estimate of 10.1 ZJ per year,[xi] well below the estimate in the paper of 13.3 ZJ per year.[xii]
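
The arithmetic check is easily reproduced. The sketch below uses only figures quoted in the text (the total change, the quoted trend, and the paper's conversion factor); the 0.88 per meg per year figure comes from regressing the actual Extended Data Table 4 values, which are not reproduced here.

```python
# Sanity check on the quoted trend: a total change of 23.2 per meg over
# 1991-2016 (25 yearly intervals) implies an average rate well below the
# paper's 1.16 per meg per year.
delta_apo_climate = 23.2      # per meg, total change 1991-2016
n_years = 2016 - 1991         # 25 annual intervals
avg_rate = delta_apo_climate / n_years
print(avg_rate)               # ~0.93 per meg/yr, far from 1.16

# Converting a per meg/yr trend to ocean heat uptake uses the paper's
# conversion factor of 0.087 per meg per ZJ:
conv = 0.087                  # per meg per ZJ
print(0.88 / conv)            # ~10.1 ZJ/yr (OLS trend on the table values)
print(1.16 / conv)            # ~13.3 ZJ/yr (the paper's figure)
```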

Resplandy et al. derive ΔAPOClimate from estimates of ΔAPOOBS and of its other components, ΔAPOFF, ΔAPOCant, and ΔAPOAtmD, using – rearranging their equation (1):

ΔAPOClimate = ΔAPOOBS − ΔAPOFF − ΔAPOCant − ΔAPOAtmD

I derived the same best estimate trend when I allowed for uncertainty in each of the components of ΔAPOOBS, in the way that Resplandy et al.’s Methods description appears to indicate,[xiii] so my simple initial method of trend estimation does not explain the discrepancy.

Figure 1 shows how my 0.88 per meg per year linear fit trend (blue line) and Resplandy et al.’s 1.16 per meg per year trend (red line) compare with the underlying ΔAPOClimate data values.


Figure 1. ΔAPOClimate data values (black), the least squares linear fit (blue line) to them, and the linear trend per Resplandy et al. (red line)

Assuming I am right that Resplandy et al. have miscalculated the trend in ΔAPOClimate, and hence the trend in ocean heat content (OHC), implied by their data, the corrected OHC trend estimate for 1991–2016 (Figure 2: lower horizontal red line) is about average compared with the other estimates they showed, and below the average for 1993–2016.


Figure 2. An adaptation of Figure 1b from Resplandy et al. with the
corrected estimate for the APOClimate derived ΔOHC trend added
(lower horizontal red line; no error bar is shown)

I wanted to make sure that I had not overlooked something in my calculations, so later on November 1st I emailed Laure Resplandy, querying the ΔAPOClimate trend figure in her paper and asking her to look into the difference between our trend estimates as a matter of urgency. I explained that, in view of the media coverage of the paper, I was contemplating web-publishing a comment on it within a matter of days. To date I have had no substantive response from her, despite subsequently sending a further email containing the key analysis sections from a draft of this article.

How might Laure Resplandy [xiv] have miscalculated the ΔAPOClimate trend as 1.16 per meg per year? One possibility is that the computer code for the trend computation somehow only deducted ΔAPOFF and ΔAPOCant from ΔAPOOBS when computing the 1991–2016 trend, thereby in fact obtaining the trend for {ΔAPOClimate + ΔAPOAtmD}, which is 1.16 per meg per year.[xv]
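
The effect of such a slip is easy to illustrate. The series below are placeholders constructed only to match the published trends (0.88 and 0.28 per meg per year); they are not the paper's data. The point is that omitting the ΔAPOAtmD subtraction adds that component's trend to the result.

```python
import numpy as np

# Hypothetical illustration of the suspected coding slip, on placeholder
# series (NOT the paper's data), both from a zero baseline in 1991.
years = np.arange(1991, 2017)
t = years - 1991
rng = np.random.default_rng(0)

apo_climate = 0.88 * t + rng.normal(0, 0.5, t.size)  # noisy ~0.88/yr series
apo_atmd = 0.28 * t                                  # ~0.28/yr series

# Correct computation: trend of Climate alone.
trend_correct = np.polyfit(t, apo_climate, 1)[0]
# Slip: subtracting only FF and Cant from OBS leaves Climate + AtmD.
trend_slip = np.polyfit(t, apo_climate + apo_atmd, 1)[0]
print(trend_correct, trend_slip)  # the slip inflates the trend by 0.28
```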

Uncertainty analysis

I now turn to the uncertainty analysis in the paper.[xvi] Strangely, the Resplandy et al. paper gives two different values for the uncertainty in the results. On page 1 they give the ΔAPOClimate trend (in per meg per year) as 1.16 ± 0.15, but on page 2 they say it is 1.16 ± 0.18. In the Methods section they revert to 1.16 ± 0.15. Probably the ± 0.18 figure is a typographical error.[xvii]

More importantly, it seems to me that uncertainty in the ΔAPOClimate trend, and hence in the ocean heat uptake estimate, is greatly underestimated in Resplandy et al., because of two aspects of the way that they have treated trend and scale uncertainties affecting ΔAPOOBS. First, they appear to have treated corrosion, leakage and desorption errors in ΔAPOOBS as fixed errors that have the same influence each year.[xviii] But these are annual trend errors, so their influence is proportional to the number of years elapsed since the 1991 base year.[xix] Secondly, they appear to have treated both these trend errors and the scale systematic error in ΔAPOOBS as uncorrelated between years.[xx] However, each year’s error from each of these sources is simply a multiple of a single random trend or scale systematic error, and is therefore perfectly correlated with the same type of error in all other years.
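
The difference between the two treatments can be demonstrated with a Monte Carlo sketch. The 1-sigma magnitude below is illustrative only, not a value from the paper; the point is that a trend-type systematic error, perfectly correlated across years, contributes far more uncertainty to a fitted trend than independent annual errors of the same per-year size.

```python
import numpy as np

# Monte Carlo sketch of the two error treatments (illustrative magnitudes).
rng = np.random.default_rng(42)
t = np.arange(26)            # years elapsed since the 1991 base year
n = 100_000                  # Monte Carlo samples
sigma_trend = 0.2            # 1-sigma trend-type error, per year elapsed

# Treatment A (correct): one random trend error per sample, scaled by the
# years elapsed, so each year's error is perfectly correlated with the rest.
err_corr = rng.normal(0, sigma_trend, n)[:, None] * t[None, :]

# Treatment B (as the paper appears to assume): independent draws each year,
# with the same 1-sigma magnitude in each year as treatment A.
err_indep = rng.normal(0, 1, (n, t.size)) * (sigma_trend * t)[None, :]

def trend_sd(err):
    slopes = np.polyfit(t, err.T, 1)[0]  # OLS slope of each sample's errors
    return slopes.std()

print(trend_sd(err_corr), trend_sd(err_indep))
# The correlated treatment recovers ~0.2; assuming independence shrinks it.
```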

On a corrected basis, I calculate the ΔAPOClimate trend uncertainty as ± 0.56 per meg yr−1, more than three times as large as the ± 0.15 or ± 0.18 per meg yr−1 values in the paper.[xxi] This means that, while Resplandy et al.’s novel method of estimating ocean heat uptake is useful in providing an independent check on the reasonableness of estimates derived from in situ temperature measurements, the estimates their method provides are much more uncertain than in situ measurement-based estimates, and are consistent with all of them.

Effect on climate sensitivity and carbon budgets

Resplandy et al. point out that a larger increase in ocean heat content (a higher ocean heat uptake) would affect estimated equilibrium climate sensitivity. That is true where such sensitivity estimates are derived from observationally-based analysis of the Earth’s energy budget. However, after correction, the Resplandy et al. results do not suggest a larger increase in ocean heat content than previously thought.  In fact, using the corrected Resplandy et al. estimate of the change in ocean heat content over the relevant period (2007–2016) in the recent Lewis and Curry (2018)[xxii] energy budget study would slightly lower its main estimate of equilibrium climate sensitivity. Moreover, a larger estimated increase in ocean heat content would principally affect the upper uncertainty bound of the equilibrium climate sensitivity estimate. Contrary to what Resplandy et al. claim, the lower bound would be little affected and would remain well below 1.5°C, [xxiii] providing no support for increasing the lower bound of the IPCC’s range for equilibrium climate sensitivity to 2.0°C.

Resplandy et al. also make the bizarre claim that increasing the lower bound of the IPCC’s equilibrium climate sensitivity range from 1.5°C to 2.0°C would have the effect of “reducing  maximum allowable cumulative CO2 emissions by 25% to stay within the 2°C global warming target”. In fact, that cumulative carbon emissions budget is very largely determined by a combination of carbon-cycle characteristics and the transient climate response.[xxiv] Observational estimates of the transient climate response are unaffected by the level of ocean heat uptake. Therefore, increasing the lower bound of the equilibrium climate sensitivity range would have little or no impact on the cumulative carbon emissions budget to stay within 2°C global warming.

Conclusions

The findings of the Resplandy et al. paper were peer reviewed and published in the world’s premier scientific journal and were given wide coverage in the English-speaking media. Despite this, a quick review of the first page of the paper was sufficient to raise doubts as to the accuracy of its results. Just a few hours of analysis and calculation, based only on published information, were sufficient to uncover apparently serious (but surely inadvertent) errors in the underlying calculations.

Moreover, even if the paper’s results had been correct, they would not have justified its findings regarding an increase to 2.0°C in the lower bound of the equilibrium climate sensitivity range and a 25% reduction in the carbon budget for 2°C global warming.

Because of the wide dissemination of the paper’s results, it is extremely important that these errors are acknowledged by the authors without delay and then corrected.

Of course, it is also very important that the media outlets that unquestioningly trumpeted the paper’s findings now correct the record too.

But perhaps that is too much to hope for.

 

Nicholas Lewis                                                                               6 November 2018

 


[i]  Examples are:
https://www.bbc.co.uk/news/science-environment-46046067
https://www.nytimes.com/2018/10/31/climate/ocean-temperatures-hotter.html
https://www.washingtonpost.com/energy-environment/2018/10/31/startling-new-research-finds-large-buildup-heat-oceans-suggesting-faster-rate-global-warming/
https://www.scientificamerican.com/article/the-oceans-are-heating-up-faster-than-expected/
https://edition.cnn.com/2018/11/01/australia/ocean-warming-report-intl/index.html
http://www.latimes.com/science/sciencenow/la-sci-sn-oceans-study-climate-change-20181031-story.html
https://www.usatoday.com/story/news/nation-now/2018/11/01/oceans-more-heat-study-global-warming-climate-change-nature/1843074002/
https://www.independent.co.uk/environment/climate-change-global-warming-ocean-temperature-heat-fossil-fuels-science-research-a8612796.html

[ii]  Examples are:
http://www.realclimate.org/index.php/archives/2018/11/unforced-variations-nov-2018/
https://wattsupwiththat.com/2018/11/02/friday-funny-at-long-last-kevin-trenberths-missing-heat-may-have-been-found-repeat-may-have-been/
https://bskiesresearch.wordpress.com/2018/11/01/that-new-ocean-heat-content-estimate/
https://andthentheresphysics.wordpress.com/2018/11/03/new-ocean-heat-content-analysis/
https://twitter.com/Knutti_ETH/status/1057960390502608901

[iii]  L. Resplandy, R. F. Keeling, Y. Eddebbar, M. K. Brooks, R. Wang, L. Bopp, M. C. Long, J. P. Dunne, W. Koeve & A. Oschlies, 2018: Quantification of ocean heat uptake from changes in atmospheric O2 and CO2 composition. Nature, 563, 105-108. https://doi.org/10.1038/s41586-018-0651-8 (“Resplandy et al.”)

[iv]  A value of 13.3 zetta Joules (ZJ) per year, or 0.83 Watts per square metre of the Earth’s surface. ZJ is the symbol for zetta Joules; 1 ZJ = 10²¹ J. 1 ZJ per year = 0.0621 Watts per square metre (W/m² or W m⁻²) of the Earth’s surface.
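
The 0.0621 conversion factor follows directly from the length of a year and the Earth's surface area (about 5.101 × 10¹⁴ m², a standard value, not a figure from the paper):

```python
# Checking the stated conversion from ZJ per year to W per square metre.
seconds_per_year = 365.25 * 24 * 3600     # ~3.156e7 s
earth_area_m2 = 5.101e14                  # Earth's total surface area
w_per_m2_per_zj_yr = 1e21 / seconds_per_year / earth_area_m2
print(w_per_m2_per_zj_yr)                 # ~0.0621
print(13.3 * w_per_m2_per_zj_yr)          # ~0.83 W/m2, the headline value
```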

[v]  http://web.archive.org/web/20181103021900/https://www.princeton.edu/news/2018/11/01/earths-oceans-have-absorbed-60-percent-more-heat-year-previously-thought

[vi]  However that is in comparison with an IPCC estimate for 1993–2010; estimates for 1991–2016 are higher.

[vii]  ΔAPO is the change in ‘atmospheric potential oxygen’, the overall level of which has been observationally measured since 1991 (ΔAPOOBS). It is the sum of the atmospheric concentrations of O2 and of CO2, weighted respectively 1× and 1.1×.

[viii]  The authors break the observed change in ΔAPOOBS into four components, ΔAPOFF, ΔAPOCant, ΔAPOAtmD and ΔAPOClimate, deriving the last component (which is related to ocean warming) by deducting estimates of the other three components from ΔAPOOBS. ΔAPOFF is the decrease in APO caused by industrial processes (fossil-fuel burning and cement production). ΔAPOCant accounts for the oceanic uptake of excess anthropogenic atmospheric CO2. ΔAPOAtmD accounts for air–sea exchanges driven by ocean fertilization from anthropogenic aerosol deposition.

[ix]  1 per meg literally means 1 part per million (1 ppm); however, ‘per meg’ and ‘ppm’ are defined differently in relation to atmospheric concentrations and are not identical units.

[x] The same data is available in Excel format from a link on Nature’s website, as “Source Data Fig. 2”.

[xi] Dividing by their conversion factor of 0.087 ± 0.003 per meg per ZJ. ZJ is the symbol for zetta Joules; 1 ZJ = 10²¹ Joules.

[xii] I used ordinary least squares (OLS) regression with an intercept. That is the standard form of least squares regression for estimating a trend. Resplandy et al. show all APO variables as changes from a baseline of zero in 1991, but that is an arbitrary choice and would not justify forcing the regression fit to be zero in 1991 (by not using an intercept term). Doing so would not in any event raise the ΔAPOClimate estimated trend to the level given by Resplandy et al.
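
The distinction between the two regression choices can be sketched on placeholder data (not the paper's values), a noisy series baselined to zero in 1991:

```python
import numpy as np

# OLS with an intercept versus a fit forced through zero at the 1991
# baseline, on placeholder data (NOT the paper's values).
rng = np.random.default_rng(1)
t = np.arange(26)                         # years since 1991
y = 0.88 * t + rng.normal(0, 1.0, t.size)
y = y - y[0]                              # re-baseline: exactly zero in 1991

slope_with_intercept = np.polyfit(t, y, 1)[0]   # standard OLS
slope_through_origin = (t @ y) / (t @ t)        # regression forced through (0, 0)
print(slope_with_intercept, slope_through_origin)  # similar but not identical
```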

[xiii] I took a large number of sets of samples for each of the years 1991 to 2016 from the applicable error distributions of ΔAPOOBS, ΔAPOFF, ΔAPOCant, and ΔAPOAtmD given in Extended Data Table 4, and calculated all the corresponding sample values of ΔAPOClimate using equation (1). I then computed the ordinary least squares linear trend for each set of 1991–2016 sampled values of ΔAPOClimate, and calculated the mean and standard deviation of the trends.
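
A sketch of that sampling procedure follows. The best estimates and 1-sigma values are placeholders chosen so the component trends sum to roughly 0.88 per meg per year; they are not the Extended Data Table 4 numbers.

```python
import numpy as np

# Sketch of the Monte Carlo procedure described above, with placeholder
# best-estimate series and 1-sigma errors (NOT the paper's values).
rng = np.random.default_rng(0)
t = np.arange(26)          # years since 1991
n = 10_000                 # number of sample sets

components = {             # (best-estimate series, 1-sigma error)
    "OBS":  (-1.00 * t, 0.5),
    "FF":   (-2.28 * t, 0.3),
    "Cant": ( 0.12 * t, 0.1),
    "AtmD": ( 0.28 * t, 0.2),
}

def draw(best, sigma):
    return best + rng.normal(0, sigma, (n, t.size))

# Equation (1), rearranged: Climate = OBS - FF - Cant - AtmD
climate = (draw(*components["OBS"]) - draw(*components["FF"])
           - draw(*components["Cant"]) - draw(*components["AtmD"]))

slopes = np.polyfit(t, climate.T, 1)[0]   # OLS trend of each sample set
print(slopes.mean(), slopes.std())        # mean ~0.88 per meg/yr here
```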

[xiv] Laure Resplandy was responsible for directing the analysis of the datasets and models.

[xv] This fact was spotted by Frank Bosse, with whom I discussed the apparent error in the Resplandy et al. ΔAPOClimate trend.

[xvi] All uncertainty values in the paper are ± 1 sigma (1 standard deviation). Errors are presumably assumed to be Normally distributed, as no other distributions are specified.

[xvii] The statement in their Methods that “ΔCant′ cannot be derived from observations and was estimated at 0.05 Pg C yr−1, equivalent to a trend of +0.2 per meg−1, using model simulations” is presumably also a typographical error. The correct value appears to be +0.12 per meg yr−1, as stated elsewhere in Methods and in Extended Data Table 3.

[xviii] On that basis, I can replicate the Extended Data Table 4 ΔAPOOBS uncertainty time series values within ±0.1. Note that all the values in that table, although given to two decimal places, appear to be rounded to one decimal place.

[xix] The overall uncertainties given in Table 3 in Resplandy et al.’s source paper for its errors in ΔAPOOBS support my analysis.

[xx] When using the Resplandy et al. Extended Data Table 4 ΔAPOClimate total uncertainty time series and assuming that each year’s errors are independent, despite the trend and scale systematic errors being their largest component, the estimated ΔAPOClimate uncertainty reduces to between ± 0.20 and ± 0.21 per meg yr−1. That is still slightly higher than the ± 0.15 and ± 0.18 per meg yr−1 values given in the paper. The reason for the small remaining difference is unclear.

[xxi] It seems likely that the same non-independence over time issue largely or wholly applies to errors in ΔAPOCant, ΔAPOAtmD and probably ΔAPOFF. If the errors in ΔAPOCant and ΔAPOAtmD (but not in ΔAPOFF) were also treated as perfectly correlated between years, the ΔAPOClimate trend uncertainty would be ± 0.60 per meg yr−1.

[xxii] Lewis, N., and Curry, J., 2018: The impact of recent forcing and ocean heat uptake data on estimates of climate sensitivity. J. Climate, 31(15), 6051-6071.

[xxiii] Even if the 2007–2016 ocean heat uptake estimate used in Lewis and Curry (2018) were increased by 3 ZJ yr−1 to match Resplandy et al.’s (incorrect) estimate for 1991–2016, the 1.05°C 5% lower bound of its HadCRUT4v5-based estimate of effective/equilibrium climate sensitivity would only increase to 1.15°C. Moreover, Resplandy et al.’s ΔAPOClimate data imply a lower ocean heat uptake estimate for 2007–2016 than for 1991–2016.

[xxiv] See the IPCC’s 2018 Special Report on Global Warming of 1.5°C


  1. Good stuff, Nic.

    • If Lewis’ error analysis is correct, then I congratulate him on catching that error, because I didn’t. He can feel free to submit his post as a comment or a response, so that it can undergo peer review.

      However, I don’t think this reflects badly on Nature at all, or shows some conspiring on the part of media and climate scientists. After all, I remember a previous time a high-tier scientific journal published an incorrect result that was trumpeted by the media (and then years later by much of the Internet). It had something to do with members of UAH under-estimating tropospheric warming due to their poor homogenization:


      [from: “Correcting Temperature Data Sets”]

      “Although concerns have been expressed about the reliability of surface temperature data sets, findings of pronounced surface warming over the past 60 years have been independently reproduced by multiple groups. In contrast, an initial finding that the lower troposphere cooled since 1979 could not be reproduced. Attempts to confirm this apparent cooling trend led to the discovery of errors in the initial analyses of satellite-based tropospheric temperature measurements.”
      http://science.sciencemag.org/content/334/6060/1232

      • stevefitzpatrick

        “However, I don’t think this reflects badly on Nature at all…..”
        Others may disagree with you.
        “… a quick review of the first page of the paper was sufficient to raise doubts as to the accuracy of its results.”

      • Curious George

        Proposing peer review in this situation is a real chutzpah.

      • Re: “Proposing peer review in this situation is a real chutzpah.”

        Not really. Last I checked, you rely on the fruits of peer-reviewed research every day. Unless you’ve never taken medical treatment, eaten processed food, drunk treated water, worn manufactured clothing, etc.

        Is peer review perfect? No. Is peer review at top-tier journals good enough to be relied upon? Yes, as shown by your daily life and the scientific advances peer-reviewed research has brought us for centuries. Complaining that peer review is unacceptable because it sometimes lets mistakes through is like complaining that medicine is unacceptable because doctors were not perfect in the past, or that flying in planes is unacceptable because pilots have crashed planes in the past.

        I also love the selective outrage and special pleading of some faux “skeptics”. They complain that peer review is horrid when it lets through some flawed papers that present evidence in support of the mainstream evidence-based consensus. But those same “skeptics” will trumpet peer review that lets through Lewis and Curry’s papers. Or they trumpet peer review that let through the (later debunked) work of Spencer+Christy at UAH, work that challenged the mainstream evidence-based consensus. Or peer review that let through Lindzen’s paper that contained (in his own words) stupid mistakes:

        “Dr. Lindzen acknowledged that the 2009 paper contained “some stupid mistakes” in his handling of the satellite data. “It was just embarrassing,” he said in an interview.”

        Peer review is not some plot to feed the media “alarmist” (whatever that is) material. Peer review is done better at some journals than at others. It is an imperfect, relatively reliable process that has served us well for centuries, and there’s always room for improvement in it. Respect it.

      • This is why I ask folks to post their analysis code

        Nic Lewis does.

      • stevefitzpatrick

        “Peer review is not some plot to feed the media “alarmist” (whatever that is) material.”
        .
        Absolutely right. Feeding the media with sensational, alarming material is done via blaring press releases at the same time as the paper’s publication.

      • Whataboutism shows absolutely nothing except that Atomsk needs to change the subject.

      • It’s OK to make mistakes and then admit and/or fix them. This is what Christy did. This is what Lindzen did. This is what Michael Mann refused to do with his hockey stick. This is what Sherwood refused to do when he first used a deceptive color graph to “prove” the existence of a tropical hot spot, and when he later wanted us to believe that we should throw out balloon and satellite data and instead depend on his derivation of tropospheric temperatures through wind shear and an assumption of natural variability.

        Hughes blamed the 2016 GBR coral bleaching on global warming; Jim Steele, and later Wolanski in a published paper, showed that the bleaching was due to lowered sea levels from El Nino and natural current mechanics. Did Hughes admit any error?

        Lister recently published a paper purportedly showing that insects in the Luquillo Forest of Puerto Rico died because of global warming– i.e., an (alleged) temperature rise from 26C to 28C!!! Lister ignored the widespread use of insecticides in Puerto Rico as a potential cause. It’s absurd to imagine that insects are dying because of 82F but all we’ll hear about is how that’s the future we’re facing.

        Consensus climate science has all the fingerprints of advocacy science and maybe even a touch of downright dishonesty.

      • Peer review would work much better if Nic Lewis was considered a peer.

      • Michael Dennis Jankowski

        Not accounting properly for orbital decay is “poor homogenization?”

      • Atomsk: The important question is whether the peer-reviewers of this paper would have given similar scrutiny to a paper that concluded the ocean heat content was rising slower than expected rather than faster. As far as I can tell, faster ocean heat uptake will reduce the discrepancy between observational-based estimates of ECS (EBMs) and estimates from climate models. So the natural tendency of all supporters of the consensus constructed around AOGCMs would be to scrutinize a paper which enlarged this discrepancy more carefully than one which shrank it.

        The rivalry between UAH and RSS has ensured that the work of both sides has always been carefully scrutinized – and it has probably resulted in both parties scrutinizing their work more carefully than they might otherwise have. On the other hand, the discrepancy between observational-based and model-based ECS remained unrecognized for more than a decade before Nic Lewis’s contributions. Unfortunately for climate science, there are relatively few skeptics, and even fewer willing to deliberately “audit” the details.

        Then there is the unwillingness (acknowledged by some) to publicly discuss doubts or problems that might be picked up by the conservative press and skeptical blogs.

        IMO, the fact that auditing by Nic Lewis and Steve McIntyre has turned up so many problems (real problems as best this biased individual can tell) suggests that you and the whole climate science community should be deeply concerned about confirmation bias during peer review. However, that is another subject that can’t be publicly discussed without it reaching the conservative press and skeptical blogs. In many areas of research, there is a crisis in confidence in published work. Ioannidis, for one example. Pharmaceutical companies find their laboratories are unable to reproduce about 75% of key studies claiming to have validated particular molecular targets (enzymes, receptors, etc) for new drug discovery.

      • Yes, Franktoo, well said. I would add however that some of this widespread problem in climate science (as in all fields) can simply be chalked up to a dysfunctional culture of peer review. There are huge numbers of papers to be reviewed and top researchers are very busy generating their own papers and results. Virtually all peer reviewers don’t have time to do even a cursory check of the work other than reading it for obvious problems and conflicts with already published papers. If peer reviewers were paid and expected to devote at least a couple of weeks to each review, the quality would be higher. The real problem here is that 90% of what is published is not worth the paper it’s printed on.

        That’s why citizen scientists are exceptionally valuable.

      • Re: “Atomsk: The important question is whether the peer-reviewers of this paper would have given similar scrutiny to a paper that concluded the ocean heat content was rising slower than expected rather than faster.”

        This case does nothing to support the line of reasoning you’re going towards. For example, I know of plenty of research that made it through peer review and which reduced estimates of climate trends. Take the following paper on the topic of altitude-dependent warming, a topic I’m interested in:

        “Artificial amplification of warming trends across the mountains of the western United States”

        And, of course, Curry herself co-authored research that later needed to be corrected:

        “Correction for Liu et al., Impact of declining Arctic sea ice on winter snowfall”
        http://www.pnas.org/content/109/17/6781

        So no, you can drop the insinuations of slanted bias in favor of showing more warming in the context of ocean heat content.

        Re: “On the other hand, the discrepancy between observational-based and model-based ECS remained unrecognized for more than a decade before Nic Lewis’s contributions”

        Oh, come on.

        First, Lewis’ work uses an energy budget model. And you conveniently left out the paleoclimate observations that show a higher climate sensitivity than Lewis’ model-based approach does. So your dichotomy between “observational-based and model-based ECS” here is a false one, if you’re acting as if Lewis doesn’t use a model:

        These studies employ observations but still require an element of modeling to infer ECS.
        […]
        Forster & Gregory (2006, p. 39) overstated the benefits of such an approach by claiming, “Importantly, the [ECS] estimate is completely independent of climate model results.” As Equation 1 derives directly from conservation of energy, the Forster & Gregory (2006) claim would appear valid. But it in fact makes the assumption that the α derived from a particular observational period is the same as the α applicable under long-term climate change. Another way of stating this assumption is saying that the effective climate sensitivity (the apparent ECS diagnosed from a specific α) is the same as the true ECS. Uncertainties around the derivation of ECS from an energy budget approach can be attributed to two causes: the model used to translate α into an ECS estimate and the quality of the observation-based data sets.”

        https://www.annualreviews.org/doi/abs/10.1146/annurev-earth-060614-105156?casa_token=vRrWzUdKwDEAAAAA%3AUMXASkMMfYggx6oKU4oOgDux4qh0qRfqgNYSuaj_sYPDCjHwsFgkCpNVZi6gWQc6sNAXQHSakfnCdME&journalCode=earth

        “Proxies for CO2 and temperature generally imply high climate sensitivities: ≥3 K per CO2 doubling during ice-free times (fast-feedback sensitivity) and ≥6 K during times with land ice (Earth-system sensitivity). Climate models commonly underpredict the magnitude of climate change and have fast-feedback sensitivities close to 3 K.”
        https://www.annualreviews.org/doi/full/10.1146/annurev-earth-100815-024150

        Second, energy-budget-model-based and energy balance approaches were used long before Lewis’ work. And observations were compared to models. You were wrong when you claimed otherwise:

        “To answer that question, starting in the 1960s scientists have used energy balance arguments combined with observed changes in the global energy budget, evaluated comprehensive climate models against observations, and analysed the relationship between external forcing and climate change over different climate states in the past (see Methods for a list of early publications).”
        https://www.nature.com/articles/ngeo3017

        I’ve told you this many times before, frank: please do not make false claims for which you have no cited evidence. It’s annoying.
        Maybe instead of making baseless insinuations about peer review, you should spend instead more time reading the peer-reviewed literature? That would help stop you from making the sort of false claims you made above.

        Re: “In many areas of research, there is a crisis in confidence in published work. Ioannidis, for one example”

        And now you round things out with the usual abuse of Ioannidis’ work. Amazing. This is getting too predictable.

        Ioannidis notes that the evidence (and level of certainty) on anthropogenic climate change is on par with the evidence (and level of certainty) that smoking kills people. He made this comparison because he recognizes that scientific hypotheses become more reliable (and more likely to be true) as more and more research groups test the hypothesis using different lines of evidence, methodologies, etc., and keep finding that the hypothesis passes the tests:

        17:17 to 18:22 of:
        “RS 174 – John Ioannidis on “What happened to Evidence-based medicine?””
        http://rationallyspeakingpodcast.org/show/rs-174-john-ioannidis-on-what-happened-to-evidence-based-med.html

        So no, don’t misuse Ioannidis work to support your spurious insinuations on the reliability of climate science.

      • Re: “It’s OK to make mistakes and then admit and/or fix them. This is what Christy did. This is what Lindzen did. This is what Michael Mann refused to do with his hockey stick. This is what Sherwood refused to do when he first used a deceptive color graph to “prove” the existence of a tropical hot spot, and when he later wanted us to believe that we should throw out balloon and satellite data and instead depend on his derivation of tropospheric temperatures through wind shear and an assumption of natural variability.”

        The so-called “hot spot” has been shown to exist multiple times, both in research Sherwood co-authored and in research from other groups. I’ve read his IUK papers; they were not deceptive. The color-scale was clearly defined in the papers. Feel free to show otherwise. Here’s a sampling (I suspect that figure 6 of the 3rd paper is what you’re complaining about):

        “Atmospheric changes through 2012 as shown by iteratively homogenized radiosonde temperature and wind data (IUKv2)”
        “Warming maximum in the tropical upper troposphere deduced from thermal winds”
        “Robust tropospheric warming revealed by iteratively homogenized radiosonde data”

        Moving on: Mann doesn’t need to admit to all the mistakes you claim he made, especially if he didn’t make them. He also re-did the hockey stick analysis without tree ring data and with multiple different analysis techniques. Other people replicated the hockey stick result as well. Not my fault if you don’t accept it. I’d suggest you read papers such as:

        “Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia”
        “Robustness of the Mann, Bradley, Hughes reconstruction of Northern Hemisphere surface temperatures: Examination of criticisms based on the nature and processing of proxy climate evidence”
        “A global multiproxy database for temperature reconstructions of the Common Era”
        “A Reconstruction of Regional and Global Temperature for the Past 11,300 Years”

        Re: “Consensus climate science has all the fingerprints of advocacy science and maybe even a touch of downright dishonesty.”

        And pigs fly.

        I remember when some conservatives made the same baseless claim on consensus medical science on smoking causing cancer. Or consensus climate / chemical science on CFC-induced ozone depletion. Or consensus biological science on human evolution. Or…

      • Franktoo wrote: “One the other hand, the discrepancy between observational-based and model-based ECS remained unrecognized for more than a decade before Nic Lewis’s contributions”

        Atomsk wrote: “Oh, come on. First, Lewis’ work uses an energy budget model. And you conveniently left out the paleoclimate observations that show a higher climate sensitivity that Lewis’ model-based approach. So your dichotomy between “observational-based and model-based ECS” here is a false one, if you’re acting as if Lewis doesn’t use a model …”

        The history of the failure of the IPCC to recognize the discrepancy between observational-based and model-based ECS is documented in great detail below. There is no reason (except confirmation bias) why recent and current efforts to understand the origins of this discrepancy shouldn’t have started a decade earlier. If the IPCC published reports that met Schneider’s standard for ethical science (the whole truth, with all of the caveats), the discrepancy would have been candidly discussed in AR5.

        http://www.thegwpf.org/content/uploads/2014/02/A-Sensitive-Matter-Foreword-inc.pdf

        An energy balance model simply divides the current radiative forcing into two parts: 1) the current imbalance that is causing warming right now – mostly in the ocean; and 2) the increase in OLR + OSR associated with a warmer planet (OSR being reflected SWR). d(OLR+OSR)/dTs is the reciprocal of ECS expressed in K/(W/m2) rather than K/doubling. As best I can tell, saying that EBMs are merely models is functionally equivalent to saying that applying conservation of energy to our climate system is “merely a model”. There are acknowledged uncertainties in the forcing and warming data, but I see no reason not to trust this “model” as much as I trust the law of conservation of energy.
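The energy-budget arithmetic described above can be sketched in a few lines. All input values below are illustrative assumptions, roughly in the range discussed in the energy-budget literature (loosely Otto et al. 2013), not figures taken from any specific paper:

```python
# Energy-budget climate sensitivity: ECS = F2x * dT / (dF - dQ).
# Every input value here is an illustrative assumption.
F2x = 3.44  # W/m^2, forcing from a doubling of CO2
dT = 0.75   # K, observed surface warming over the analysis period
dF = 1.95   # W/m^2, change in total radiative forcing
dQ = 0.65   # W/m^2, change in planetary heat uptake (mostly ocean)

lam = (dF - dQ) / dT  # feedback parameter d(OLR+OSR)/dTs, W/m^2 per K
ECS = F2x / lam       # sensitivity expressed in K per CO2 doubling
print(round(lam, 2), round(ECS, 2))  # -> 1.73 1.98
```

As the comment notes, dividing by F2x is only needed to express the answer per doubling of CO2; the feedback parameter itself is in K/(W/m2) and applies to any forcing.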

        Estimates of ECS from paleoclimatology similarly depend on conservation of energy. However, we have far more accurate information about the planet from 1970–2010 (the period used by Otto 2013) and the instrumental period used by others (and hindcast by AOGCMs). Where EBMs and paleo disagree, the more reliable answer should be obvious, and the central estimate for EBMs is within the confidence interval for paleo. Citing paleo is misdirection.

        You quoted a 2016 review by Forster. The full quote is:

        “As Equation 1 derives directly from conservation of energy, the Forster & Gregory (2006) claim would appear valid. But it in fact makes the assumption that the α derived from a particular observational period is the same as the α applicable under long-term climate change. Another way of stating this assumption is saying that the effective climate sensitivity (the apparent ECS diagnosed from a specific α) is the same as the true ECS. Uncertainties around the derivation of ECS from an energy budget approach can be attributed to two causes: the model used to translate α into an ECS estimate and the quality of the observation-based data sets.”

        I interpret this to mean that past surface temperature change has been forced by known phenomena that affect the radiative balance across the TOA, and by unforced chaotic fluctuations in ocean currents that control mixing between the surface and the deeper ocean (internal variability not arising from the TOA). EBMs certainly assume that all warming is forced warming. The last sentence (the one you quoted) adds nothing new. The model used to “translate alpha into an ECS estimate” involves only F2x and conservation of energy, and F_2x is only needed if you want to express ECS in terms of CO2 (K/doubling) instead of forcing (K/(W/m2)). The latter works for any forcing and is far more general. The quality of the observation-based data sets includes the confidence intervals around the inputs, which produce the confidence interval around ECS, and the possibility of systematic errors in forcing.

        Atomsk finishes with: “And now you round things out with the usual abuse of Ioannidis’ work. Amazing. This is getting too predictable.”

        I may have been misunderstood, but I didn’t intentionally abuse Ioannidis’ work. The point I was trying to make was that there is somewhat of a crisis of confidence in the reliability and meaning of published work in many areas of science. Standards are tightening and independent replication of important findings is getting more attention. What does it mean when 5 labs independently test a hypothesis, 4 don’t find a statistically significant effect (and don’t publish), while a fifth does find a statistically significant effect and does publish? In contrast, climate science charges forward with no public doubts despite: model parameterization and “ensembles of opportunity”, AR5’s revision of the lower limit for ECS after AR4 lowered it, decreased confidence in the MWP, the effect of GW on hurricanes, Climategate, misuse of extreme weather …

      • Re: “The history of the failure of the IPCC to recognize the discrepancy between observational-based and model-based ECS is documented in great detail below”

        You’re citing a 2014 GWPF document as your source of information, despite GWPF’s long history of publishing misleading reports for ideological reasons. That is why you’re confused on this topic. Please try reading better sources.

        Instead of believing their claims about how the evil IPCC is suppressing science, actually read what the IPCC said. If you had, then you’d know that IPCC AR4 was already discussing observational estimates back in 2007. For example:

        “Estimates of Climate Sensitivity Based on Instrumental Observations”
        https://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch9s9-6-2.html

        And you’re simply repeating the mistakes I already addressed, without addressing the points. Again. To reiterate: you were already shown work comparing models and observational estimates, dating back to before the IPCC even existed. So you were wrong when you claimed this issue was ignored before Nic Lewis.

        Re: “I interpret this to mean that past surface temperature change has been forced by known phenomena that affect the radiative balance across the TOA, and by unforced chaotic fluctuations in ocean currents that control mixing between the surface and the deeper ocean (internal variability not arising from the TOA). EBMs certainly assume that all warming is forced warming. The last sentence (the one you quoted) adds nothing new.”

        First, I didn’t just quote the last sentence. Second, your interpretation is wrong. The quote is actually pointing out the assumption that the α estimated for the recent historical record is the same as the α that will apply under long-term climate change. To quote the relevant portion again:

        “But it in fact makes the assumption that the α derived from a particular observational period is the same as the α applicable under long-term climate change. Another way of stating this assumption is saying that the effective climate sensitivity (the apparent ECS diagnosed from a specific α) is the same as the true ECS.”
        https://www.annualreviews.org/doi/abs/10.1146/annurev-earth-060614-105156?casa_token=vRrWzUdKwDEAAAAA%3AUMXASkMMfYggx6oKU4oOgDux4qh0qRfqgNYSuaj_sYPDCjHwsFgkCpNVZi6gWQc6sNAXQHSakfnCdME&journalCode=earth

        As Forster notes, this assumption is not just conservation of energy; it’s an assumption one can reject without rejecting conservation of energy. Thus you’re wrong when you claim that the EBM-based approach is just using conservation of energy. There are plenty of papers that call the assumption into question by pointing out how the effective climate sensitivity increases with time, such that the recent effective climate sensitivity need not be equivalent to the true ECS. For instance, see the following and the references cited therein:

        “Internal variability and disequilibrium confound estimates of climate sensitivity from observations”

        You’re also assuming that EBM-based approaches necessarily show lower climate sensitivity than models. And that’s not the case either. For example:

        “Reconciled climate response estimates from climate models and the energy budget of Earth”

        Re: “Citing paleo is misdirection.”

        Nope. It’s citing evidence that you’re trying to avoid, by acting as if the EBM approach doesn’t use a model that depends on challengeable assumptions. In fact, the higher climate sensitivity estimates from paleo support rejecting the claim that “the α derived from a particular observational period is the same as the α applicable under long-term climate change”. For example:

        “A better characterization of feedbacks in warm worlds raises climate sensitivity to values more in line with proxies and produces climate simulations that better fit geologic evidence. As CO2 builds in our atmosphere, we should expect both slow (e.g., land ice) and fast (e.g., vegetation, clouds) feedbacks to elevate the long-term temperature response over that predicted from the canonical fast-feedback value of 3 K.”
        https://www.annualreviews.org/doi/full/10.1146/annurev-earth-100815-024150

        Re: “The point I was trying to make was that there is somewhat of a crisis of confidence in the reliability and meaning of published work in many areas of science. […] In contrast, climate science charges forward with no public doubts despite”

        Of course there are public doubts, which are often baseless. If there weren’t such doubts, then websites like this wouldn’t exist. Similarly, there are often baseless public doubts about whether Earth is round, HIV causes AIDS, humans evolved from non-human animals, etc. Public doubt isn’t necessarily rational.

        You also make claims about a “crisis of confidence”. But you ignored why such a “crisis” does not apply in the scientific community to various topics, like humans causing most of the recent global warming, and smoking having caused at least hundreds of thousands of cases of cancer. As your own source Ioannidis notes, scientific hypotheses become more reliable (and more likely to be true) as more and more research groups test the hypothesis using different lines of evidence, methodologies, etc., and keep finding that the hypothesis passes the tests. This has occurred for both the science on smoking killing people and the science on anthropogenic climate change. So your misuse of Ioannidis’ work as applying to climate science is as misguided as someone using his work to cast doubt on the medical science regarding smoking causing cancer.

    • “Peer review is not some plot to feed the media “alarmist” (whatever that is) material. ”

      It can be. It depends on whom the editor chooses for peer review, and the quality and nature of the review those persons produce. In most cases the reviewers are anonymous, but that’s not necessarily a good idea since it keeps the process opaque.

      • It is not a plot; it is a paradigm. Speaking of which here is my latest for CFACT:
        http://www.cfact.org/2018/11/07/climate-alarmism-paradigm-protection-at-nsf/

        The beginning:

        In his landmark book “The Structure of Scientific Revolutions,” Thomas Kuhn discusses at great length how a scientific field can be captured by a system of false ideas, to the point where these beliefs determine what the legitimate scientific questions are. He called such belief systems “paradigms,” an example being the Earth centered model of the solar system.

        Kuhn had no name for the way that a paradigm shields the field from hard questions and contrary evidence, so I call it “paradigm protection.” Paradigm protection is rampant in the field of climate science, where the controlling paradigm is the idea that humans are causing dangerous climate change.

        The US National Science Foundation has just produced a remarkably clear example of alarmist paradigm protection. It is a multi-million dollar research funding Program cleverly titled “Navigating the New Arctic.”

        There is a lot more.

        In this case it is alarmist research empire building.

  2. Wow, just wow!

  3. Well done—yet again. Kudos.

  4. Whoah. Good work Nic.
    We’ve seen estimates of OHC trend get adjusted up and up over the last 8 years. The purpose is clearly to bootstrap and justify higher sensitivity estimates that observed surface T changes won’t support.

  5. Clean, clear, well written, well cited, all homework completed.

    Well done, that man!!

    w.

  6. stevefitzpatrick

    “But perhaps that is too much to hope for.”
    .
    Of course it is. If it bleeds, it leads. “New paper is found in error, and when corrected is consistent with earlier estimates” doesn’t get a bit of press coverage.

  7. “the prestigious journal Nature.”

    “the world’s premier scientific journal”

    I think I see your problem.

  8. stevefitzpatrick

    It is very good you found this apparent error, but it does mean two things:
    1) The Nature reviewers didn’t actually review the paper (except maybe for typos), which is unfortunate. I would suggest confirmation bias, but maybe that is too harsh.
    2) You are not going to be the authors’ favorite person in the world.

    The interesting question is if Nature will ever publish a correction. My bet: not an icecube’s chance in Hell.

  9. stevefitzpatrick

    I didn’t see the paper, only the press releases, but my immediate reaction was: “Well, if that is right, then thermal expansion has to have been the dominant cause for measured sea level rise, and ocean mass increases (melting of land supported ice) estimated from Grace data have to be WAY wrong. That seems unlikely.”

  10. Just a thought, but with higher CO2 levels driving increased photosynthetic activity, and O2 being a direct byproduct of photosynthesis, would one not expect O2 levels to rise in the atmosphere? After all, this was the mechanism that generated oxygen in the first place. This increase in O2 would, on the surface, appear to correlate well with the increased crop production and greening shown by Dr. Spencer and NASA studies.

  11. “Earth’s oceans have absorbed 60 percent more heat per year than previously thought”. Settled science?

    • No such claim was ever made. Straw; huff and puff; goal.

    • The paper is, as I understand it, the first to employ its approach. How could it have anything to do with settled science?

      • stevefitzpatrick

        Clearly it doesn’t have anything to do with settled science. But considering that it looks like it has serious (freshman physics level) mistakes, it might be better considered ‘erroneous science’. I hope all the blaring press releases and frightening MSM headlines get walked back, something like say “Sorry, we were mistaken, it really isn’t worse than we thought”….. but I won’t hold my breath.

      • And I won’t hold my breath waiting for people to take back cheap goals.

      • It wants to add to and strengthen “the many lines of evidence” for “global warming”, which according to the consensus is settled science.

      • stevefitzpatrick

        Cheap goal? I don’t think so.

        The cheap goal was all the incorrect information that has been fed to the public from this paper, and that will never be taken back. If it weren’t for the blaring press releases and frightening MSM headlines that followed, a paper with problems like this could be (and likely would be) handled very differently. Maybe an exchange of email messages, followed by a low-key joint correction published in Nature. That is not possible when a paper makes dramatic claims which are then breathlessly pitched to the MSM about how dire things are, accompanied by comparably breathless quotes from the authors about the ‘seriousness of the new findings’. When a scientific paper instantly becomes a bludgeon used to advance political goals, it had damned well better be bulletproof and perfect. And the truth is, very few are. If all press releases about new papers in climate science were withheld for 4 months, civility in resolving errors would be a lot more likely.

  12. Nice work Nic. This fits with my contention that a lot of climate science deals with noisy data and uses poor methods (and poor quality control) to reach conclusions that are dubious. Mann has a new post at RealClimate that perfectly illustrates these points. The bulk of the post is about how “simple physics” explains worsening weather. “simple physics” is another way of saying vague verbal formulations that lack quantification.

    We are all in your debt for doing the homework so called “peer reviewers” seem disinclined to do.

  13. “Of course, it is also very important that the media outlets that unquestioningly trumpeted the paper’s findings now correct the record too.”

    The odds of winning the Powerball are better than that happening.

    I wish I had a dollar for each paper I read that used a “novel method”.

    Apparently novel is better than proper. But if it gets you what you want, what the hey.

  14. The other day I spilled coffee on my keyboard and my up and down scroll buttons don’t work. There are a couple of comments in the thread that I cannot copy:

  15. Well done. Short, succinct, to the point. Clear and irrefutable.
    In immortal praise even.
    “ Atomsk’s Sanakan (@AtomsksSanakan) | November 6, 2018
    I congratulate him on catching that error”
    On page 1, Atom, worth rereading Nic’s logic.
    I am hopeful you have done a McIntyre on this paper’s conclusion.

  16. Before everyone gets too excited, John Kennedy tweeted something 3 days ago that suggests there might be a legitimate reason for the difference. It has to do with weighting data by uncertainty. Weighting by uncertainty is a legitimate thing to do, but it is something that would need to be called out in the paper, and the difference between weighted and unweighted results should be called out as well.

    Playing with data from the OHC paper. I get their gradient for the regression (1.16) by weighting the data by 1/unc^2. This forces gradient to match the early data and overshoot the later, less certain, data. Other weightings give lower gradients. 1/2
    https://t.co/NNP5kyMOMJ pic.twitter.com/bPsXrl65Eh
    — John Kennedy (@micefearboggis) November 3, 2018


    I’m trying to see if he’ll say more on twitter.
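The weighting effect Kennedy describes can be sketched with synthetic numbers. The series below is made up purely for illustration (the real ΔAPO_Climate values are in the paper’s Extended Data Table 4); it simply rises faster early than late, with uncertainty growing away from the 1991 base year:

```python
import numpy as np

# Synthetic stand-in for the dAPO series: steeper early, flatter late,
# with 1-sigma uncertainty growing away from the 1991 reference year.
# All numbers are made up for illustration only.
t = np.arange(26.0)                                 # years since 1991
y = np.where(t < 10, 1.3 * t, 13 + 0.6 * (t - 10))  # piecewise-linear series
unc = 0.1 + 0.5 * t                                 # uncertainty grows with time

ols = np.polyfit(t, y, 1)[0]               # ordinary (unweighted) slope
wls = np.polyfit(t, y, 1, w=1.0 / unc)[0]  # np.polyfit takes 1/sigma weights

# Inverse-variance weighting pins the fit to the early, steep,
# low-uncertainty years, so the weighted slope exceeds the OLS slope.
print(wls > ols)  # -> True
```

This mirrors Kennedy’s observation that 1/unc^2 weighting “forces the gradient to match the early data and overshoot the later, less certain, data”.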

    • stevefitzpatrick

      Lucia,
      The direct quote from the paper is:
      “From equation (1), we thereby find that ΔAPOClimate = 23.20 ± 12.20 per meg, corresponding to a least squares linear trend of +1.16 ± 0.15 per meg per year”

      I don’t see how a least squares linear trend becomes one based on 1/unc^2, at least not without some explanation.

      • They say more in the methods section. They do a million simulations for the trend, presumably varying the numbers according to the stated uncertainties. For each one, they do an OLS regression, so the terminology is correct. The uncertainty has an effect because the earlier terms are more likely to be close to their more highly trending values in each sample, so the overall effect is uncertainty-weighted.
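A minimal sketch of that ensemble procedure, with assumed details (independent Gaussian perturbations; the paper’s actual error model is more structured than this). Notably, with independent errors the ensemble-mean OLS slope is unbiased, so it is the correlated, trend-like component of the errors that makes the weighting question matter:

```python
import numpy as np

# Ensemble-of-realizations trend estimate: perturb the series by its
# stated uncertainties, OLS-fit each realization, take statistics.
# Independent Gaussian errors are an assumption of this sketch.
rng = np.random.default_rng(42)
t = np.arange(26.0)           # years since 1991
y = 0.9 * t                   # stand-in series with a known 0.9/yr trend
unc = 0.1 + 0.3 * t           # 1-sigma uncertainty grows from the base year

slopes = np.empty(10_000)     # the paper reportedly used a million
for i in range(slopes.size):
    yi = y + rng.normal(0.0, unc)        # one realization of the data
    slopes[i] = np.polyfit(t, yi, 1)[0]  # OLS slope of that realization

# With independent errors the mean slope recovers the true 0.9/yr trend.
print(round(slopes.mean(), 1))  # -> 0.9
```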

    • Lucia, thanks for drawing this to readers’ attention.

      John Kennedy’s weighted regression method (which could indeed be that used by Resplandy) forces the regression fit through zero in 1991. There is no justification for doing so, as I point out in note [xii]. The uncertainties aren’t really zero in 1991; it is simply that the 1991 data value has been deducted from all years’ data. The uncertainty only appears to be low in the early years because 1991 is used as the base year. One could instead have deducted the 2016 data value from all years’ data. That would result in zero uncertainty in 2016 and maximum uncertainty in 1991. Using weighted regression would then force the fit through the 2016 data point and produce a very low trend, as the slow growth in the (then lower uncertainty) later years dominates the fit.

      The underlying issue is that the error in dAPO_Climate values is dominated by trend and scale systematic errors in its components. Those errors give rise to irreducible uncertainty in the trend in dAPO_Climate. It is arbitrary which year is used as a base to measure trend errors from. Whichever year is chosen, the trend error magnitude will be zero in that year and grow in both directions away from it. If the method used results in the estimated dAPO_Climate linear trend depending on the arbitrary choice of base year for measuring the trend error, it must be wrong. Where data error ranges arise due to trend uncertainty that affect all years’ data in proportion to distance from a base year, it is not appropriate to weight the data values inversely with their error variance, as Kennedy’s method does.
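The base-year point above can be illustrated with the same kind of synthetic series (made-up numbers, not the paper’s data): re-referencing the data to 2016 subtracts a constant, which leaves the OLS slope untouched, but it moves the near-zero uncertainty, and hence the huge inverse-variance weight, to the other end of the record, dragging a weighted slope down instead of up:

```python
import numpy as np

# Made-up delta series: steep early, flat late. Re-referencing to 2016
# cannot change an OLS slope, but it moves the zero-uncertainty point
# (and its huge weight) from 1991 to 2016.
t = np.arange(26.0)                                   # years since 1991
y91 = np.where(t < 10, 1.3 * t, 13 + 0.6 * (t - 10))  # referenced to 1991
y16 = y91 - y91[-1]                                   # same data, referenced to 2016
unc91 = 0.1 + 0.5 * t                                 # uncertainty grows away from 1991
unc16 = 0.1 + 0.5 * (25 - t)                          # ...or away from 2016

ols = np.polyfit(t, y91, 1)[0]                   # reference year is irrelevant here
wls91 = np.polyfit(t, y91, 1, w=1.0 / unc91)[0]  # pinned at the steep 1991 end
wls16 = np.polyfit(t, y16, 1, w=1.0 / unc16)[0]  # pinned at the flat 2016 end

print(wls91 > ols > wls16)  # -> True: the weighted trend depends on the base year
```

An estimator whose answer depends on the arbitrary choice of reference year is exactly the failure mode described above.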

      • Nic Lewis,

        Thank you for another excellent post and for the explanation you provide in this comment.

      • Nic,

        The uncertainties aren’t really zero in 1991; it is simply that the 1991 data value has been deducted from all years’ data.

        Fair enough. I haven’t read the papers. So I assumed these were some sort of published measurement uncertainties – similar to what we would do with instrumentation. (Hadcrut has published uncertainties that exist external to any individual’s decision to fit the data.)

        I agree the uncertainty is defined as zero for 1991, and that that’s not legitimate. (If one does want to do that, at least define it as zero at the midpoint of the time range! Even then that’s not quite right.)

    • I reproduced Nic Lewis’ numbers.

      I cannot reproduce John Kennedy’s numbers.

      • I can reproduce John Kennedy’s numbers. The R code is here, with lewis.csv being derived from the linked xlsx file:
        v = read.csv("lewis.csv")
        x = v$year
        y = v[, 2]        # dAPO_Climate values
        wt = v[, 3]       # stated uncertainties
        wts = 1 / wt^2    # inverse-variance weights
        wts[1] = 100      # large finite weight, to deal with the zero uncertainty at 1991
        h = lm(y ~ x, weights = wts)
        print(summary(h))

        That gives the same slope of 1.162, and the std error 0.0514, which JK notes is lower than the paper. The essential thing, as Nic says, is that the regression is constrained to pass through 0 at 1991. I agree with Nic that this may not be the right thing to do, even though it is the defined reference value. If you don’t, but instead assign 1991 an uncertainty the same as 1992’s, the slope is 1.014.

      • Nick:
        Agreed. If we put an arbitrarily large weight (100) on the first observation, the slope is indeed 1.16 (+/-0.05).

        I used Gretl.

      • > wts[1]=100

        Wow! Did they really do that?!

      • “Wow! Did they really do that?!”
        No, they did nothing like that, or like what Nic did either. It’s just my shortcut for a regression with fixed intercept. What they in fact did was create a statistical model for the uncertainties of the data, create a million ensembles (realizations) and derive the statistics of that.

        It takes a lot more analysis than what is given here to work out the effect of what they did. What John Kennedy did was to show that an uncertainty weighting with fixed intercept was adequate to explain the trend. It mimics some of the effect of what they did, but it isn’t the same.

      • stevefitzpatrick

        Nick Stokes,
        “If you don’t, but assign an uncertainty same as 1992, the slope is 1.014.”

        Assign the uncertainty for all the data points to be the same as 1992? If so, how could it not just be the same as an OLS calculation?

      • “Assign the uncertainty for all the data points to be the same as 1992?”
        No, assign the uncertainty of 1991 to be the same as 1992. At present it is artificially zero because it is chosen as the reference value.

      • stevefitzpatrick

        Nick Stokes,
        OK, so assigning the starting point some (small) uncertainty moves the resulting calculated slope about half way to Nic’s result of 0.88. But that doesn’t answer Nic’s question of the validity of having ever growing uncertainty the further away in time from 1991. Do you think this is reasonable, and if so, why?

      • “Do you think this is reasonable, and if so, why?”
        It may well be. It is, as said, ΔAPO, the difference between a given year and 1991. That becomes more uncertain as time proceeds, and the difference becomes larger.

      • Seems to me the way to do this is to not use deltas and estimate the rate of change in the original quantity. All the data points then have roughly the same measurement uncertainties.

      • stevefitzpatrick

        Nick Stokes,
        ” It is, as said, ΔAPO, the difference between a given year and 1991.”

        But could it not just as easily be calculated based on the differences between a central year and both earlier and later years? (the signs for the ΔAPO’s before and after would be different, of course) Seems to me what date you choose as a reference point ought not substantially change the calculated slope.

      • David
        “estimate the rate of change in the original quantity”
        The original quantity is APO. But they are looking at trend in APO_climate. That involves attribution, which you can do for a change. But not necessarily for the original quantity.

        It’s like where you measure temperature of a place since 1991, and during that time the location has shifted up a hill. You can talk about the part of the change that was due to climate, and the part that was due to location change. But it doesn’t make sense to ask what fraction of the temperature in 1991 was due to climate, and what fraction due to location. So there is no absolute definition of T_climate.
        (apologies for misplacing response upthread).

      • “Seems to me what date you choose as a reference point ought not substantially change the calculated slope.”

        “If the method used results in the estimated dAPO_Climate linear trend depending on the arbitrary choice of base year for measuring the trend error, it must be wrong. ”

        That seems correct. I can measure how wrong every opinion is by its distance from my own.

  17. There are 2 further problems to discuss.
    One is the magical pudding result of this paper.
    It is all very well to say we have been underestimating the amount of heat that went into the oceans but where is it?
    (Hansen).
    If it really truly occurred it has to be detectable by Argo, the satellites and (choke) the models.
    But it is not there.
    Are our observations so unbelievably wrong? Rhetorical.
    It is one thing to make a grandiose claim.
    You have to be able to show proof of the result of that claim.

  18. Second is the concept.
    Of using O2.
    The authors start with observed changes in ‘atmospheric potential oxygen’ (ΔAPOOBS)
    And CO2.
    the recent paper that [quantifies] ocean heat uptake from changes in atmospheric O2 and CO2 composition, by Resplandy et al
    Reliably.
    For the world.
    I understand they have figures. Nic has found the figures. But seriously.
    We are talking about these levels at sites all over the world and yet using a presumed aggregate of levels at a few sites possibly not even related to the oceans themselves.
    The potential standard deviations are enormous.
    CO2 level estimation is only done at a couple of places with very poor correlation to that at sea level over the oceans, which is substantially unmeasured.
    O2 measurement is even more fragile and unreliable.
    While the concept is great it is of one of those measures that should help to provide backup and reassurance to our observations, not refute or challenge them.
    A glaring difference like this raises two possibilities. The measurements of CO2 and O2 potential have serious limitations or errors (do more measurements in different places).
    Or worse, confirmation bias of the worst sort.
    The desperate effort to find some way to refute observations which question AGW (climate change) makes people do strange things. Mann, Gergis and the holy grail (hot spot) searchers come to mind.

  19. Just curious: I tried to do a back-of-the-envelope calculation. If I did the math right, this paper says that we’ve increased the average temperature of the *entire ocean* by about 0.065 degrees in the past 25 years.

    First, did I do the math right?

    Second, assuming I did, if the ocean is warming more slowly than the atmosphere, would that mean that over time, the ocean will dampen the effect of CO2 warming?

    And an aside: they said that they found an error of 60% in the estimate of ocean heat uptake. It’s a pretty big error. Is there any reason that the error couldn’t have been “the other way”? As in: could we have found that the oceans absorbed 60% less heat? And, if so, what does that say about our ability to understand the climate at all?
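The back-of-the-envelope check is easy to reproduce. With round numbers assumed here (ocean mass of roughly 1.4e21 kg, seawater specific heat of roughly 3990 J/kg/K, and an uptake of ~13.3 ZJ/yr), the answer lands near the commenter’s 0.065 °C:

```python
# Whole-ocean warming implied by ~25 years of uptake at ~13.3 ZJ/yr.
# The ocean mass and heat capacity are round-number assumptions.
heat = 13.3e21 * 25   # J absorbed over 1991-2016 at 13.3 ZJ/yr
mass = 1.4e21         # kg, approximate total ocean mass
cp = 3990.0           # J/(kg K), approximate specific heat of seawater

dT = heat / (mass * cp)
print(round(dT, 3))   # -> 0.06, a few hundredths of a degree
```

Since the heat is not mixed uniformly, the upper ocean warms by more than this whole-ocean average.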

    • Most of the heat should stay in and warm the upper ocean, so the SST should go up more than your estimate. Secondly, the temperature changes are minute, whereas what we are fed in OHC sounds massive even for 0.01 C of warming. I imagine your calculations could be good.

  20. Nic, Wondering if you had seen this

  21. used a novel method
    Novel as in fiction?

  22. “We show that the ocean gained 1.33 ± 0.20 × 10^22 joules of heat per year between 1991 and 2016, equivalent to a planetary energy imbalance of 0.83 ± 0.11 watts per square metre of Earth’s surface.”

    An energy (sic) imbalance of 0.83 W/m2 is consistent with both Argo heat and CERES raw monthly power flux imbalance. Thus the first hurdle is passed, although I suspect that the result may at least unconsciously be tuned to that.
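The quoted equivalence is easy to verify: spreading 1.33 × 10^22 J per year over Earth’s whole surface gives almost exactly the stated 0.83 W/m^2 (surface area and year length below are the usual round numbers):

```python
# Convert the quoted ocean heat uptake (J/yr) to a planetary-average flux.
heat_per_year = 1.33e22   # J/yr, quoted ocean heat gain
earth_area = 5.1e14       # m^2, Earth's total surface area (round number)
year_seconds = 3.156e7    # s, one year (round number)

flux = heat_per_year / (earth_area * year_seconds)
print(round(flux, 2))     # -> 0.83 W/m^2, matching the quote
```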

    Natural variability continues to be underestimated. There is clearly some not so simple physics – Rayleigh–Bénard convection – in cloud formation over oceans that is related to ocean surface temperature. From observation cloud variability over the eastern and central Pacific is the dominant source of global cloud variability (Clement et al 2009).

    e.g. https://www.nature.com/articles/s41467-018-04478-0

    The cloud radiative effect is the result of a non-linear relationship of rain to the internal kinetic energy of water vapor molecules in the presence of nucleating aerosols. Closed convection cells persist for longer over cool ocean surfaces before raining out from the center. It leads to modulation of TOA power flux by inter-annual to millennial variability in the Pacific state. The rate of ocean heating changed in the late 1990’s as a result of low frequency climate variability – largely to do with shifts in the Pacific state. That will shift again within the decade.

    In a practical sense the estimated components on the RHS of this equation have uncertainties greater than +/- 20%. And I suspect that biological component is radically underestimated through neglect of the terrestrial system.

    e.g. http://environmentportal.in/files/Temperature%20associated%20increases%20in%20the%20global%20soil.pdf

    These estimates are unlikely to be significantly improved leaving CERES and Argo as the most refined energy observing systems. Nonetheless – this study is a refreshingly new and plausible idea in Earth system science – one that opens up a different perspective – and they get serious scientific points.

    The noise around it is simply incredulous hyperbole from the climate tribes as usual – sound and fury signifying nothing.

    • Robert I. Ellison
      If CERES provided robust absolute values for energy going in and out, there would be nothing more to debate – the energy budget would be seen to either track CO2 levels or not.
      But since there is still a debate, I’ll take it CERES isn’t up to that vital measurement ?

      • CERES and SORCE it is. And the mismatch is a little more than 1% – some 4W/m2. But of course the EEI is less than that – somewhere around 0.8 W/m2 in the most recent decade – it is subject to a one off adjustment to close the budget.

        The result is most intriguing.

        Power flux imbalances change from negative to positive on an annual basis. The average is 0.8W/m2 – consistent with the rate of ocean warming. Adjusted to be so. The trend over the period of record is negative as the planet tends to maximum entropy. The large swings in imbalances are due to the current orbital eccentricity. Nor is carbon dioxide the major source of change in precisely and stably measured outgoing energy anomalies. There is no substantive ‘debate’.

      • Robert I. Ellison
        Thanks for your comments.
        So the earth system is supposedly warming at 0.8 w/m^2. But is that energy budget something that is
        – directly measured, in absolute energy units ? or
        – something that is merely modelled from relative radiation changes ?

        But even if we grant that figure, why do you then go on to say it isn’t consistent with the overall CO2 trend ?

        And what exactly do you maintain is there “no substantive debate” about ?

      • > Power flux imbalances change from negative to positive on an annual basis … The large swings in imbalances are due to the current orbital eccentricity. <

        Intriguing indeed. Varying distance from the sun I assume you mean.

        And what is the prognosis for our orbit and its eccentricities ?

  23. Trending at rank exploits (the blackboard) and at ATTP. Guess WUWT next.

  24. Bravo, Nic.

    Sheesh, Trenberth and others will have to go in search of the missing heat again.

    Regards,
    Bob

  25. Nic- Can you replicate reported PMEL, MRI, NCEI and CHEN linear trends in Table 1?

  26. Nic ==> Thank you for the review. The paper and conclusion set off my “multiple errors” alarm, but the analysis was far beyond my abilities.

  28. Besides this, Nic needs to write a letter to Nature if he wants his objection to be considered seriously, where the original authors can reply and the science community can read. Few scientists read blogs.

    • stevefitzpatrick

      Looks like you are not being blocked.
      “Few scientists read blogs.” I wonder if that includes all the scientists who actually host blogs. Is that an ‘appell’ to your personal authority, or do you have some published citations?

      • or do you have some published citations?

        Do you? What % of scientists host blogs?

      • stevefitzpatrick

        I don’t know how many scientists host blogs, but there obviously are some who work in climate science. But I didn’t make the claim that “few scientists read blogs”; Appell did. He ought to be able to substantiate that claim if he thinks it is true.

    • David

      I agree. If there is a mistake it needs to be pointed out to the journal.

      Which raises the interesting question that IF this is a mistake how did it slip through peer review of a publication like Nature??

      What criteria are used to thoroughly examine submitted papers, and do they vary according to the topic, the author or the journal?

      tonyb.

    • David Appell: Besides this, Nic needs to write a letter to Nature

      I agree.

      I forgot to write the recommendation when I posted my “thank you” to Nic Lewis.

    • > Nic needs to write a letter to Nature
      Yes now that blogs have highlighted the issue, Nature will find it more difficult to bury / decline it.

  29. “Earth’s oceans have absorbed 60 percent more heat per year than previously thought”
    Should this read
    “Earth’s oceans have absorbed 60 percent more heat per year than ever detected”?

  30. Figure 1 is amazing. Is it correct that there are only about 25 data points, and that you can see their mistake just by graphing and eyeballing the data? And that it looks like a linear trend doesn’t fit right and the shape is concave, which is even worse for their conclusions? It may be an honest mistake, but it looks inexcusably sloppy, especially when there are 10 authors who presumably have at least read the paper but none of them noticed an obvious major error. Not to mention the referees and editor!

  31. Nic,
    “On a corrected basis, I calculate the ΔAPOClimate trend uncertainty as ± 0.56 per meg yr−1,”
    I think your approach is overly simplistic here, and doesn’t take account of the Monte Carlo aspect of what they have done. As I read it, the steps are
    1. They form, in effect, a statistical model of the data
    2. They input this to a million OLS regressions
    3. The result trend stated is an average of these, and their σ is deduced from the distribution of results. I think that the σ’s in Table 4 col 3 are also from these million realisations of the stat model.

    They are clearly aware of the non-independence of some of the error terms, as mentioned in their methods text. I think the way they have done the Monte Carlo is where there is effective uncertainty weighting. I don’t know if the underlying stat model is a good one, but the key thing will be whether they vary the terms like corrosion independently from year to year.

    As John K said, a simple unc-weighted regression gives a much smaller standard error than theirs, but with similar trend. It is possible that their Monte Carlo gives an error that is larger because of correlation, but is more discriminating than your idea of a single regression. Of course, it may also be wrong.

    • Nick
      I had thought that they did what you say. I consider that to be an appropriate method, provided the statistical model used for the data is realistic. It is what I did, initially using, in effect, their statistical model. (I took 200,000 rather than 1,000,000 samples for faster computation.) See my note [xiii]. This method gives a dAPO_Climate trend estimate of 0.88 per meg per year.

      However, the wording in their Methods section is not very clear, and I now think it likely that they did something different, as follows:

      1. They form, in effect, a statistical model of the data.
      2. They sample from it a million times to obtain a million realisations of the dAPO_Climate time series.
      3. They compute the standard deviation sd[i] of the million realisations for each year i.
      4. They carry out a million weighted least squares (not OLS) regressions, using weights w[i] = 1 / sd[i]^2
      5. The result trend stated is the mean of the regression slopes, and their σ is their standard deviation.
      6. Alternatively they might have carried out a single weighted least squares regression on the best-estimate dAPO_Climate time series (Col. 2 of their Extended Data Table 4) using the above-mentioned weights. That is what John Kennedy did.

      Both variants of this procedure give a slope estimate of 1.16 per meg per year, but the error is much smaller in the second case. Unfortunately neither variant is appropriate, given the nature of the data errors.

      Like you, I think that the σ’s in Table 4 col 3 are also from their million realisations of the stat model.

      Interesting though the question is, how Resplandy et al actually derived their estimated trend and uncertainty is not the key issue. Rather, the key issue is how the trend and uncertainty should have been derived, and what the resulting correctly-derived values are. I consider that the approach I used, as set out in the Uncertainty analysis section of my article, is a reasonable method (albeit one that could be improved slightly), and that the results of Resplandy’s method – whatever its details – cannot possibly be justified.
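
      [Editorial note: to make the difference between the two variants concrete, here is a small Python sketch. It is an editorial illustration using a synthetic linear series and made-up growing uncertainties, not the paper's table values. With independent errors both OLS and weighted least squares recover the same mean slope, but the weighted fit reports a much smaller spread – which is why the choice of variant matters so much for the quoted ± figure.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the dAPO_Climate series: true trend 0.9 per meg/yr
# over years 1..25 (year 0 fixed at zero), with 1-sigma errors growing in time.
years = np.arange(1, 26)
true_vals = 0.9 * years
sigma = 0.5 + 0.3 * years            # illustrative values, not the paper's

X = np.column_stack([np.ones_like(years, dtype=float), years])
sw = 1.0 / sigma                     # sqrt of the weights w = 1/sigma^2

slopes_ols, slopes_wls = [], []
for _ in range(5000):
    y = true_vals + rng.normal(0.0, sigma)             # one realisation
    # ordinary least squares slope
    slopes_ols.append(np.linalg.lstsq(X, y, rcond=None)[0][1])
    # weighted least squares: scale rows by sqrt(w) before an OLS solve
    slopes_wls.append(np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0][1])

print(np.mean(slopes_ols), np.std(slopes_ols))   # ~0.9, larger spread
print(np.mean(slopes_wls), np.std(slopes_wls))   # ~0.9, smaller spread
```

      With a truly linear underlying series both variants are unbiased; the 1.16 vs 0.88 discrepancy discussed above presumably arises when the weights are applied to the actual, non-linear best-estimate series, since down-weighting the later, more uncertain years emphasizes the steeper early part of the record.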

  32. My comments are being blocked here. Judith, really?

  33. As I understand it Nature / Resplandy et al. didn’t publish their computer code. I don’t see that Nic Lewis published his either.
    Did I miss something here?

    Speculating and guessing about errors in other people’s code instead of just inspecting and/or rerunning the code in this day seems strange.

    • Their paper includes the following
      “Code availability. ESM codes are available online for IPSL-CM5A-LR (cmc.ipsl.fr/ipsl-climate-models), GFDL-ESM2M (mdl-mom5.herokuapp.com/web/docs/project/quickstart), UVic (climate.uvic.ca/model) and CESM (www.cesm.ucar.edu/models/).”
      and on data
      “Data availability: Scripps APO data are available at http://scrippso2.ucsd.edu/apo-data. APOClimate data, contributions to APOOBS and ocean heat content time series are available in Extended Data Figs. 1–4 and Extended Data Tables 1–5. Model results are available upon reasonable request to R.W. (IPSL anthropogenic aerosol simulations), L.B. (IPSL-CM5A-LR), M.C.L. (CESM-LE), J.P.D. (GFDL-ESM2M) or W.K. (UVic).”
      Requests for code seem to be a kneejerk response from people who really have no idea what is involved. They didn’t just do a regression; they did a million-member ensemble. They used several GCM’s. So what code do you want?

  34. Of course, it is also very important that the media outlets that unquestioningly trumpeted the paper’s findings now correct the record too.

    Kevin Trenberth is father to the idea that humanity’s heat is there but it’s hiding from our detection in the deepest of the deep, deep ocean. AGW alarmism is a sort of extended Trenberthianism that takes us right up to the edge of the cliff where nothing is left but the simple reality that we can’t see humanity’s heat amidst all that pesky natural variability.

  35. Nic Lewis, thank you for the essay.

  36. Pingback: Alarmist Study On 'Overheated Oceans' Fails Scientific Scrutiny | PSI Intl

  37. A day in moderation and nothing has improved.

    https://judithcurry.com/2018/10/27/week-in-review-science-edition-88/#comment-883097

    When they say it is at the higher end of warming rate estimates – it is the more credible end.

    • CERES and Argo data are incontrovertible – the planet is warming at a rate of about 0.8W/m2. Understanding why is the next level.

      Quibbling about difference in rates that are within confidence limits is an almighty waste of time.

  38. Pingback: Oops. The oceans are warming, fast! Resplandy et al. shows how peer review can fail, even at Nature | Watts Up With That?

  40. This Matlab code is how I read the methods section. It gives a standard error of 0.21, close to what Resplandy found, but a slope of 0.89:

    for i=1:1000000;
    %normrnd returns a random drawing from a normal distribution
    v1 = normrnd(data(2:26,4),data(2:26,5));
    v2 = normrnd(data(2:26,6),data(2:26,7));
    v3 = normrnd(data(2:26,8),data(2:26,9));
    v4 = normrnd(data(2:26,10),data(2:26,11));

    v5 = v1-v2-v3-v4;
    %add the zero back in
    v5 = [0; v5];

    beta = inv(x'*x)*x'*v5; %OLS; x is the [intercept, year] design matrix

    t(i) = beta(2);
    end
    mean(t)
    std(t)

    • This simplest of calculations shows their error that led Nic to dig deeper.

      “ΔAPOClimate = 23.20 ± 12.20 per meg” divided over the 26 years under study.
      23.2/26 = 0.89.
      Significant deviations from that cannot be justified. Minor deviations, yes.

      Any other weighting, or extensive statistical processing and/or selection of a base year, is subjective without sound first-principle reasoning. As done, it is susceptible to cherry-picking. And, importantly, whatever was done is not fully described in the Methods by the authors. There can be no excuse for this without invoking intent.

    • Richard
      Thanks for confirming. That is how I also had read the methods section when I wrote my article. As my Uncertainty analysis section and endnote [xx] indicate, I effectively did almost the same as you, and obtained the same trend and trend uncertainty. (I took their dAPO_Climate values and uncertainty values from columns 2 and 3 of their data table, which had already had the subtraction and adding of errors in quadrature performed).

      However, as I wrote in my Uncertainty analysis section, this procedure is clearly wrong since it treats errors as being uncorrelated between years, whereas in fact the largest components of the error are perfectly correlated across all years – the same error just scales with time elapsed since 1991. When one allows for this, the mean trend estimate doesn’t change, but its uncertainty becomes much larger.

      • I reran the code. Larger slopes are associated with lower standard errors on the slope.

        If I use the weighted mean of the 1 million slope estimates, rather than the raw mean, it indeed goes up, from 0.8900 to 0.8996.

    • and if I assume perfect correlation, the weighted trend goes to 0.96 …
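
      [Editorial note: a sketch of Nic's point about correlated errors, with made-up magnitudes rather than the paper's values. An error that scales with time elapsed – i.e. is perfectly correlated across years – leaves the mean trend alone but inflates the trend uncertainty far beyond what independent year-by-year errors give.]

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(26)                      # year 0 = 1991
X = np.column_stack([np.ones(26), years])
true_trend = 0.89                          # per meg per year (illustrative)

def ols_slope(y):
    # slope coefficient from an ordinary least squares fit with intercept
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

indep, corr = [], []
for _ in range(5000):
    # (a) independent errors each year
    indep.append(ols_slope(true_trend * years + rng.normal(0.0, 0.5, 26)))
    # (b) one shared error that scales with time elapsed since year 0,
    #     i.e. perfectly correlated across all years
    corr.append(ols_slope((true_trend + rng.normal(0.0, 0.25)) * years))

print(np.std(indep))   # small trend uncertainty
print(np.std(corr))    # much larger trend uncertainty, same mean trend
```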

  41. Looks like we’ve got as much El Niño as we’re going to get.

    http://www.bom.gov.au/climate/enso/#tabs=Outlooks

    • “Looks like we’ve got as much El Niño as we’re going to get.”
      Your BoM link starts out
      “All surveyed models predict the tropical Pacific Ocean will continue to warm in the coming months.”

      • “The dependence of the prediction skills of ENSO on its phase is linked to the variation of signal-to-noise ratio (SNR). This variation is found to be mainly due to the changes in the amplitude of the signal (prediction of ensemble mean) during different phases of the ENSO cycle, as the noise (forecast spread among the ensemble members), both in the Niño3.4 region and the whole Pacific, does not depend much on the Niño3.4 amplitude. It is also shown that the spatial pattern of unpredictable noise in the Pacific is similar to the predictable signal. These results imply that skillful prediction of the ENSO cycle, either at the initial time of an event or during the transition phase of the ENSO cycle, when the anomaly signal is weak and the SNR is small, is an inherent challenge.” https://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-18-0285.1

        A sophisticated understanding of the limitations of models – and of the system dynamics – is possible; the alternative is naive waving of them about like a magic talisman.

  42. Pingback: Major Math Error Puts Widely-Cited Global Warming Study On Ice – students loan

  50. Someone first has to ask “How Does CO2 and LWIR between 13 and 18 microns warm the oceans?” That is the first basic question that has to be asked. Everything has to tie back to CO2’s contribution. CO2’s only contribution is through the radiation or thermalization of 13 to 18 micron LWIR. That is the only mechanism defined by which CO2 can affect climate change.

    You can test in a lab if 13 to 18 micron LWIR will warm water. It won’t, in fact, Ice emits shorter wavelength IR than CO2 emits. 0.00 Degree C Ice emits 10.5 micron LWIR.
    http://www.spectralcalc.com/blackbody_calculator/blackbody.php

    The oceans are warmed by shortwave/high energy visible blue light, not LWIR. CO2 is transparent to visible radiation. The warming oceans are the greatest evidence that CO2 is not the cause of global warming. What is warming the oceans is also warming the atmosphere above it.

    To warm the oceans you need either 1) a hotter sun 2) fewer clouds or 3) both. That is in fact exactly what has happened during the recent warming period.
    The Millennial Turning Point – Solar Activity and the Coming Cooling
    https://wattsupwiththat.com/2018/11/02/the-millennial-turning-point-solar-activity-and-the-coming-cooling/

    Real Climate Science is Finally Figuring Things Out; Its the Sun Stupid
    https://co2islife.wordpress.com/2018/11/04/real-climate-science-is-finally-figuring-things-out-its-the-sun-stupid/

  69. What I take from this is that the raw data in Figure 1 supports Argo very well even to the apparent break in the gradient around 2005.

    If you do the conversion this line would agree with the black line in Figure 1, both averaging 10 ZJ/yr for 1991-2016.
    The controversy is about the red line fit, not the raw annual data. I regard Argo as a more direct measure anyway, with perhaps more uncertainty in the early years, so here is an independent dataset that also supports those early years. Useful plot to have in addition to Argo.
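
    [Editorial note: for reference, the per-meg-to-ZJ conversion implied by the numbers in the head post – a back-of-envelope derivation from the quoted figures, not a constant taken from the paper.]

```python
# Implied conversion: the post pairs a 1.16 per meg/yr trend with 13.3 ZJ/yr
# of ocean heat uptake, so one per meg/yr corresponds to roughly 11.5 ZJ/yr.
ZJ_PER_PER_MEG_PER_YR = 13.3 / 1.16

ols_trend = 0.88                          # per meg/yr, OLS fit to the table values
print(round(ols_trend * ZJ_PER_PER_MEG_PER_YR, 1))   # ~10.1 ZJ/yr, as in the post
```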

  75. CERES edition 4.0 data is beautiful. Errors are a fraction of a percent – and the instruments are relentlessly stable.

    https://journals.ametsoc.org/doi/full/10.1175/JCLI-D-17-0208.1

    #jiminy is one who will reject science – bizarrely, without reading it and solely on the basis of what is happening in his head at the time – and then claim that it validates the warmer members of a CMIP opportunistic ensemble.

    “In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of (perturbed physics) ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles. The generation of such model ensembles will require the dedication of greatly increased computer resources and the application of new methods of model diagnosis. Addressing adequately the statistical nature of climate is computationally intensive, but such statistical information is essential.” IPCC as long ago as 2001.

    I am inclined to think that nothing can validate opportunistic ensembles.

    • “A one-time adjustment to shortwave (SW) and longwave (LW) TOA fluxes is made to ensure that global mean net TOA flux for July 2005–June 2015 is consistent with the in situ value of 0.71 W m−2”.
      In situ – that’s Argo. One-time adjustment assumes a linear drift. Maybe it is, maybe it isn’t. Maybe they don’t know so they do what they can. However it tells you this is not an independent measure of the OHC.

    • Ah – bold is mine. And Mark Twain.

  78. Pingback: Quality Control Sorely Needed In Climate Science: Half Of Peer-Reviewed Results Non-Replicable, Flawed.

  79. “The press release [v] accompanying the Resplandy et al. paper was entitled “Earth’s oceans have absorbed 60 percent more heat per year than previously thought”,[vi] and said that this suggested that Earth is more sensitive to fossil-fuel emissions than previously thought.”

    Declining solar wind temperature/pressure since the mid 1990’s, driving a warm AMO phase via the NAO, driving a decline in cloud cover. It’s a negative feedback.

  80. Reblogged this on 4TimesAYear's Blog.

  81. Pingback: At the Fall Snow Café... | Top 10

  82. Yancey David Ward

    I think the interesting question here is what would have happened to the paper if the results, as Lewis describes them, had been described by the authors themselves? I suspect they wouldn’t have tried to publish it, or, if they had, Nature and the referees would have declined it. Does anyone really doubt this?

    • One should focus on the facts. In the main post Nic rolled out a “major problem” in the methods which questions the basic findings of the Resplandy et al. (2018) paper. Even a quick look at Fig. 3 of the paper shows that an increase in APOClimate of more than 1 per meg/year is very unlikely for the time span of 25 years after 1991:

      A more stringent peer review should have stepped in to avoid further confusion.
      On the other hand it was also mentioned by Nic:
      “Moreover, even if the paper’s results had been correct, they would not have justified its findings regarding an increase to 2.0°C in the lower bound of the equilibrium climate sensitivity range and a 25% reduction in the carbon budget for 2°C global warming.”
      The conclusions of the paper are not justified at all by its results. This was shown by other scientists, e.g. Thomas Stocker and James Annan, too.
      Both findings are reason enough that this paper (as we find it at the time in “Nature”) should never have been released. IMO this should also lead to some questions about the editorial processes of the journal.

  83. Was really disappointed in how Curry advertised this blogpost as “Nature strikes out again”. If she really thinks that, PNAS (which is about as high-tier a journal as Nature) also struck out when it published error-containing research Curry co-authored. The double-standard in Curry’s position is amazing.

    As I said before, if Lewis’s analysis is correct, then I congratulate him on catching that error, because I didn’t. He can feel free to submit his post as a comment or a response, so that it can undergo peer review. But this doesn’t reveal some deep unreliability in Nature, unless people also accept that Curry’s past mistakes mean she’s unreliable. And I doubt people would be willing to accept that.

    • Seriously, Atomsk? You’re comparing the actual error in the Nature paper (if Nic is correct, which he appears to be given that the authors agreed with him that the uncertainty was too small) with Dr. Curry’s error, which was:

      “The authors note that the legends for Figs. 1, 2, and 3 appeared incorrectly.”

      Other than the fact that this is a totally trivial error, it’s not even clear who made it, Dr. Curry or the Journal. Nor does it matter. It’s as important as a typo, which is to say, not at all. Unlike the error Nic found, it doesn’t change any of her claims or conclusions in the slightest.

      How petty can you get? Sheesh …

      w.

      • Re: ““The authors note that the legends for Figs. 1, 2, and 3 appeared incorrectly.”
        Other than the fact that this is a totally trivial error, it’s not even clear who made it, Dr. Curry or the Journal. Nor does it matter. It’s as important as a typo, which is to say, not at all. Unlike the error Nic found, it doesn’t change any of her claims or conclusions in the slightest.
        How petty can you get? Sheesh”

        JC SNIP: This is irrelevant. The lead author on this paper was my postdoc. He corrected a minor error in the journal.

      • Re: “Other than the fact that this is a totally trivial error, it’s not even clear who made it, Dr. Curry or the Journal. Nor does it matter. It’s as important as a typo, which is to say, not at all. Unlike the error Nic found, it doesn’t change any of her claims or conclusions in the slightest.
        How petty can you get? Sheesh …”

        JC SNIP: this is a guest post and technical thread. Your specious criticisms about things that I have previously written that are irrelevant to this thread are worse than irrelevant.

      • Atomsk’s Sanakan (@AtomsksSanakan) | November 9, 2018 at 10:59 pm |

        It looks like you messed up, Willis. Tell me what the error was in the legends. Or in other words: tell me how the figure changed between before the correction and after the correction.
        You would need to know this in order to make the claims you did (such as that the error was “totally trivial”, “as important as a typo”, etc.).

        Not true at all. If the error required her to retract even one of her claims, it would have had to be mentioned in the correction.

        It was not. Not one of her ideas, claims, or conclusions was required to be changed, corrected, or retracted based on the error in the legends.

        Ergo, unlike the problem that Nic identified, it was a trivial error, not something that affected her scientific claims.

        But you knew that …

        w.

      • Re: “If the error required her to retract even one of her claims, it would have had to be mentioned in the correction”

        Please read more closely, Willis. I didn’t say anything about retracting her claims. I said it would be difficult to use the uncorrected figures alone to support informed acceptance of the conclusions, because the uncorrected figures are difficult to interpret. That’s why the figure legends needed to be corrected.

        Re: “Ergo, unlike the problem that Nic identified, it was a trivial error, not something that affected her scientific claims. But you knew that …”

        And you failed to meet my challenge, as expected, because you made your claims without knowing what the actual error in the figure legends was. This is why you’ve been warned before to actually read scientific sources before you comment on them, instead of relying on press pieces or your gut feeling:

        https://tamino.wordpress.com/2017/07/22/does-willis-eschenbach-have-any-honor/

        So, once again:
        Tell me what the error was in the legends. Or in other words: tell me how the figure changed between before the correction and after the correction.

        I’ll give you more time to answer, since I need to step away for awhile anyway.

    • stevefitzpatrick

      You are comparing a blueberry to a grapefruit and claiming they are the same because both are roughly spherical. The central claims of the paper appear to be just wrong, and the stated uncertainty clearly so. That is bad for any journal….. even Nature. The question people should be asking is how the review process failed in this case, and how it could be improved. My guess is that had Nic been a reviewer the paper would not have been published in the form it was. Maybe that points toward how reviewers of ‘groundbreaking’ papers should be selected.

      • There should be massive opportunities for future generations for case studies of how well reviewers were prepared academically to fully grasp the methods and approaches that were “novel” or untried. But, if interviewed, I wonder how many would be candid and say “ I had no idea what they were doing” just out of embarrassment of such an admission.

        To be around in 2050 for such a retrospective. And not just this issue but being able to analyze the entire decision making process of the establishment for the preceding 70 years. Oy!!!

      • Re: “You are comparing a blueberry to a grapefruit and claiming they are the same because both are roughly spherical. The central claims of the paper appear to be just wrong, and the stated uncertainty clearly so”

        The central claim of the paper was using a novel, proxy-based method to show that ocean heat content increased. That central claim is true. At best, Lewis showed the OHC increase was over-estimated and the uncertainty under-estimated. And people are now disputing his claim on that, which is why I recommended he submit it for peer review.

        Re: “That is bad for any journal….. even Nature.”

        We’ve been over this:
        https://judithcurry.com/2018/11/06/a-major-problem-with-the-resplandy-et-al-ocean-heat-uptake-paper/#comment-882996

        Once again:
        This isn’t a black eye for Nature, unless you want to claim that the journals that published Spencer and Christy’s debunked UAH work are also garbage. And note that in those cases, unlike this case, the central claim was wrong. Spencer and Christy’s central claim was that satellite-based MSU analysis did not show tropospheric warming. They were wrong.

        Re: “The question people should be asking is how the review process failed in this case, and how it could be improved.”

        People have been asking how peer review can be improved since before you or I were born. This blog post offers nothing novel for that discussion.

  84. Pingback: Oops. The oceans are warming, quick! Resplandy et al. paper reveals how peer evaluation can fail, even at Nature | Tech News

  87. I am not sure that the ‘error’ has been demonstrated even close to conclusively, or that it is not utterly trivial. We shall see. On the other hand, the personal attacks, the politics, the motivated sociology, the self-lauding verbosity and the prominence of trifling debating points from #atomski are most certainly inconsequential.

    • stevefitzpatrick

      You comment much too much and spout a huge amount of nonsense. Do everyone a favor and put a cork in it.

      • I make one or two comments a day – instead of a morning paper with my coffee. And respond to people as they respond. And this sort of nonsense from you is contrary to blog rules – so unless you have something interesting to say… and that would be novel for you.

  88. Pingback: Oceani, ritrovato il calore mancante? : Attività Solare ( Solar Activity )

  89. nobodysknowledge

    I have been wondering about the connection between O2 depletion, N2 and CO2 increase. I found some interesting stuff.
    “While no danger exists that our O2 reserve will be depleted, nevertheless the O2 content of our atmosphere is slowly declining–so slowly that a sufficiently accurate technique to measure this change wasn’t developed until the late 1980s. Ralph Keeling, its developer, showed that between 1989 and 1994 the O2 content of the atmosphere decreased at an average annual rate of 2 parts per million. Considering that the atmosphere contains 210,000 parts per million, one can see why this measurement proved so difficult.

    This drop was not unexpected, for the combustion of fossil fuels destroys O2. For each 100 atoms of fossil-fuel carbon burned, about 140 molecules of O2 are consumed. The surprise came when Keeling’s measurements showed that the rate of decline of O2 was only about two-thirds of that attributable to fossil-fuel combustion during this period. Only one explanation can be given for this observation: Losses of biomass through deforestation must have been outweighed by a fattening of biomass elsewhere, termed global “greening” by geochemists. Although the details as to just how and where remain obscure, the buildup of extra CO2 in our atmosphere and of extra fixed nitrogen in our soils probably allows plants to grow a bit faster than before, leading to a greater storage of carbon in tree wood and soil humus. For each atom of extra carbon stored in this way, roughly one molecule of extra oxygen accumulates in the atmosphere.

    At first glance, this finding appeared to be good news to those worried about the climatic effects of the ongoing buildup of anthropogenic CO2 in the atmosphere, for it suggested that during this five-year period an amount of carbon equal to one-third of that burned for energy production had taken up residence in the biosphere. As another third was taken up by the ocean, this meant that between 1989 and 1994 only one-third of the CO2 we produced by burning fossil fuels accumulated in the atmosphere. However, this enormous biospheric storage is likely an anomaly reflecting an unusual climate, perhaps related to persistent El Niño conditions or emissions by the volcano Pinatubo. A burst of plant growth during this period allowed carbon storage to exceed respiratory losses temporarily, but once climate conditions return to normal the products of this burst will be eaten up, releasing this carbon stored in organic matter back into the atmosphere as CO2 gas. Thus, we can’t use Keeling’s observation as evidence that the biosphere will serve as a major sink for the CO2 we generate. But through Keeling’s O2 measurements we now have a reliable means to monitor the ongoing changes in global biomass. Eventually his record will allow us to diagnose the response of the Earth’s biomass to changing climate and nutrient availability.” by Wallace S.Broecker http://www.columbia.edu/cu/21stC/issue-2.1/broecker.htm#footnotes
    I think this calls into question the methodology of using the CO2 level and the O2/N2 ratio as an exact thermometer.
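Broecker's round numbers in the quote above can be turned into a quick back-of-envelope check. The figures used below (about 1.4 O2 consumed per fossil carbon atom burned, an observed O2 decline at two-thirds of the expected rate, and roughly 1 O2 released per carbon atom stored in biomass) come straight from the quote; this is an illustrative sketch of the budget arithmetic only, not an analysis of the Resplandy et al. data.

```python
# Back-of-envelope O2 budget using the round numbers from Broecker's
# quote above; illustrative arithmetic only, not the paper's data.

O2_PER_C_BURNED = 1.4      # ~140 O2 molecules consumed per 100 fossil C atoms
O2_PER_C_STORED = 1.0      # ~1 O2 molecule released per C atom stored in biomass
OBSERVED_FRACTION = 2 / 3  # observed O2 decline vs. that expected from burning

# Keeling measured a decline of ~2 ppm/yr; combustion alone would predict:
expected_decline_ppm = 2.0 / OBSERVED_FRACTION   # -> 3 ppm/yr

# The "missing" third of the O2 loss, per unit of carbon burned, must have
# been returned by net biospheric growth, implying a land carbon sink of:
land_sink_fraction = (1 - OBSERVED_FRACTION) * O2_PER_C_BURNED / O2_PER_C_STORED

print(f"Expected O2 decline: {expected_decline_ppm:.1f} ppm/yr")
print(f"Implied land sink: {land_sink_fraction:.2f} of fossil emissions")
# Broecker rounds this to "one-third"; the small gap reflects his round inputs.
```

With these inputs the implied biospheric uptake comes out near 0.47 of emissions, which Broecker rounds down to his "one-third" figure.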

  90. The greater issue that the likes of Atomski don’t confront is: how much can the consensus and its peer review be trusted? Have they really ceased practicing the advocacy science so clearly revealed by their whitewashing of Climategate, or have they just got better at it?

    No scientists read blogs, someone above said. Well, blogs are how peer review itself can now be reviewed and kept honest. Without blogs, would a critique like the one above have any chance of seeing the light of day? Would Resplandy et al. have deigned to respond? Or would the gatekeepers of Mannian “redefined” peer review have buried it?

  91. I’ll plow this plowed ground and beat this dead horse yet some more. Maybe somebody will step up and ‘splain scientifically how/why I’ve got it wrong – or not.

    Radiative Green House Effect theory (TFK_bams09):
    1) 288 K – 255 K = 33 C warmer with atmosphere, RGHE’s only reason to even exist – rubbish. (simple observation & Nikolov & Kramm)
    But how, exactly is that supposed to work?

    2) There is a 333 W/m^2 up/down/”back” energy loop consisting of the 0.04% GHG’s that absorbs/”traps”/re-emits per QED simultaneously warming BOTH the atmosphere and the surface. – Good trick, too bad it’s not real, thermodynamic nonsense.
    And where does this magical GHG energy loop first get that energy?

    3) From the 16 C/289 K/396 W/m^2 S-B 1.0 ε ideal theoretical BB radiation upwelling from the surface. – which due to the non-radiative heat transfer participation of the atmospheric molecules is simply not possible.

    No BB upwelling & no GHG energy loop & no 33 C warmer means no RGHE theory & no CO2 warming & no man caused climate change.
    Got science? Bring it!!

    Nick Schroeder, BSME CU ‘78, CO PE 22774

    Experiments in the classical style:
    https://principia-scientific.org/debunking-the-greenhouse-gas-theory-with-a-boiling-water-pot/
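For what it is worth, the two standard numerical inputs in the comment above (the 33 C difference in point 1 and the 396 W/m^2 blackbody flux in point 3) can be checked in a few lines. This sketch verifies only the arithmetic, using the textbook Stefan–Boltzmann constant; it takes no position on the argument built on those numbers.

```python
# Numerical check of the two standard figures quoted above; this
# verifies arithmetic only, not the argument built on it.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4 (CODATA)

# Point 1: the canonical "greenhouse effect" temperature difference
delta_T = 288 - 255
print(delta_T)  # -> 33

# Point 3: ideal blackbody (emissivity 1) emission at 289 K
flux = SIGMA * 289**4
print(round(flux))  # -> 396 (W/m^2, the TFK09 surface upwelling figure)
```

Both numbers check out as stated; the dispute in the comment is over their interpretation, not their values.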

  92. Pingback: Bits and Pieces – 20181111, Sunday | thePOOG

  93. Pingback: Half Of Climate Science Results Non-Replicable, Flawed | PSI Intl
