Resplandy et al. Part 4: Further developments

By Nic Lewis

There have been further interesting developments in this story.

Introduction

The Resplandy et al. (2018) ocean heat uptake study (henceforth Resplandy18) is based on measured changes in the O2/N2 ratio (δO2/N2) and CO2 atmospheric concentration. These are combined to produce an estimate (ΔAPOObs) of changes in atmospheric potential oxygen since 1991, from which they isolate a component (ΔAPOClimate) that can be used to estimate the change in ocean heat content. In three recent articles, here, here and here, I set out why I thought the trend in ΔAPOClimate – and hence their ocean heat uptake estimate – was overstated, and its uncertainty greatly understated, essentially because of errors in their statistical methodology. These criticisms have been largely accepted by the authors of the study, although they have also made a change in an unconnected assumption that has the effect of offsetting much of the reduction in their ocean heat uptake estimate that correcting their statistical errors causes.

In the third article, I wrote that my calculated ΔAPOObs time series differed from that in Resplandy18, and said:

It is unclear whether Resplandy18 made a (presumably uncorrected) error in their ΔAPOObs calculations or whether I did something contrary to what was specified in their Methods section.

As I explained in an endnote, it was not obvious where the difference could lie. The ΔAPOObs time series depends only on δO2/N2 and CO2 data. As it is highly sensitive to the source of CO2 concentration data, I took monthly δO2/N2 and CO2 values measured at Alert, La Jolla and Cape Grim, the three stations specified in the Resplandy18 Methods section, weighting them respectively 0.25, 0.25 and 0.50 as per the reference they cited. These weights give equal weighting to the two hemispheres, Cape Grim being the only one of the three stations in the southern hemisphere.
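To make the construction concrete, here is a minimal sketch of that calculation, using the formula in note [2] below and the station weights indicated in the Methods section. The function and dictionary names are purely illustrative, and the per-station annual-mean series are assumed inputs; this is not the authors' code.

```python
import numpy as np

X_O2 = 0.2094   # atmospheric O2 mole fraction, as in note [2]
OR = 1.1        # land oxidative ratio used in the original Resplandy18 paper

def apo(d_o2n2, x_co2):
    """APO (per meg) from delta(O2/N2) (per meg) and CO2 concentration (ppm)."""
    return np.asarray(d_o2n2) + (np.asarray(x_co2) - 350.0) * OR / X_O2

def delta_apo_obs(annual, years, weights, base_year=1991):
    """Station-weighted APO, expressed as the change since base_year.

    `annual` maps station name -> (annual-mean d(O2/N2), annual-mean CO2) arrays;
    `years` is the corresponding list of calendar years.
    """
    combined = sum(w * apo(*annual[stn]) for stn, w in weights.items())
    return combined - combined[list(years).index(base_year)]

# Weights indicated in the Methods section (from Hamme & Keeling 2008, note [3])
methods_weights = {"Alert": 0.25, "La Jolla": 0.25, "Cape Grim": 0.50}
```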

Investigating the difference in ΔAPOObs values

In order to investigate this issue, I turned to multiple regression. This was the approach that led me to suggest that GISS model simulations might have omitted land use forcing from their measure of total historical forcing, hence biasing the Marvel et al (2015)[1] estimates of its efficacy. While Gavin Schmidt responded in an article by asserting that “Lewis in subsequent comments has claimed without evidence that land use was not properly included in our historical runs”, my regression analysis provided good evidence for what I had suggested – which was subsequently bolstered by further evidence that I found – and proved to be correct. In the Resplandy18 case, the obvious thing to do was to regress the ΔAPOObs time series given in the paper on the three ΔAPOObs time series calculated, on the same basis as in Resplandy18,[2] for each individual station.
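A minimal sketch of such a regression, in Python with statsmodels. The per-station series below are synthetic placeholders standing in for the annual ΔAPOObs values calculated from the Scripps data, and the variable names are illustrative, not taken from the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
years = np.arange(1991, 2017)

# Synthetic placeholder per-station dAPO_Obs series (per meg); in practice these
# would be the annual values computed from the Scripps station data.
apo_alert    = -0.8 * (years - 1991) + rng.normal(0, 0.5, years.size)
apo_lajolla  = -0.9 * (years - 1991) + rng.normal(0, 0.5, years.size)
apo_capegrim = -1.0 * (years - 1991) + rng.normal(0, 0.5, years.size)

# Placeholder for the published series; in practice, Extended Data Table 4 values.
apo_paper = 0.25 * apo_alert + 0.25 * apo_lajolla + 0.50 * apo_capegrim

X = sm.add_constant(np.column_stack([apo_alert, apo_lajolla, apo_capegrim]))
fit = sm.OLS(apo_paper, X).fit()

print(fit.params)          # slope coefficients estimate the station weights
print(fit.bse)             # their standard errors
print(np.sqrt(fit.scale))  # residual standard error (per meg)
```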

The results were surprising. They revealed that the influence, as measured by the relevant multiple regression slope coefficient, of the calculated ΔAPOObs data for the three stations differed greatly from the expected 0.25 for Alert and La Jolla and 0.50 for Cape Grim. The estimated coefficient for Alert was insignificant (not distinguishable from zero)! The estimated coefficients for La Jolla and Cape Grim were respectively 0.65 and 0.37; however, these are only approximate, as the regression fit was imperfect, with a residual standard error of ~0.8 per meg, leading to 1σ (1 standard deviation) uncertainties in these regression coefficients of 10–15% of their estimated values. This implied that the difference between my and Resplandy18’s ΔAPOObs time series couldn’t simply be due to a different set of weights having been used. Their data processing must have differed from mine – their weights need not also have differed from those indicated, although they probably did – otherwise my regression would have obtained an excellent fit, albeit with weights other than {0.25, 0.25, 0.50}.

This seemed very strange, so I checked with Ralph Keeling whether they had used the {0.25, 0.25, 0.50} weights indicated in the paper’s Methods, being those given in the paper that they cited in that connection.[3] He confirmed that was so. He also suggested that I double check the version of the data downloaded from the Scripps Institution website, as at one point it had had a problem.

I re-downloaded the Scripps monthly O2 and CO2 data. The monthly data comes in four flavours. Two columns give the measured values before and after a seasonal adjustment to remove the quasi-regular seasonal cycle. There are a few months with missing data, particularly for La Jolla, where the prevailing wind direction in winter sometimes means there are no suitable sampling conditions during a month. The adjustment involves subtracting from the data a 4-harmonic function fit to the seasonal cycle. Two other columns contain a smoothed version of the data generated from a stiff cubic spline function, either with the 4-harmonic function added (thus containing a seasonal signal) or deseasonalised. The Resplandy18 Methods section states:

Station annual means are based on bimonthly data fit to a four-harmonic seasonal cycle and a stiff long-term trend.

Although I was unsure why bimonthly data was referred to, I took this as meaning that the smoothed version of the monthly data was used; otherwise I would have expected the treatment of months with missing data to be mentioned. However, it was unclear to me whether the version of the smoothed data including or excluding the seasonal cycle had been used. I tried deriving annual mean (δO2/N2) and CO2 values from both versions of the smoothed monthly data – which gave essentially identical results – and various other alternatives, without being able to obtain a close regression fit.
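For concreteness, here is a minimal sketch of a four-harmonic seasonal fit of the general kind described, with a simple linear trend standing in for the stiff spline trend. It illustrates the technique only; it is not the Scripps or Resplandy18 code.

```python
import numpy as np

def four_harmonic_fit(t_years, y, n_harmonics=4):
    """Least-squares fit of a trend plus n annual harmonics to monthly data.

    t_years: decimal years of the monthly values; y: monthly data (NaN = missing).
    Returns the fitted seasonal cycle and trend components evaluated at t_years.
    """
    cols = [np.ones_like(t_years), t_years - t_years.mean()]  # intercept + linear trend
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2 * np.pi * k * t_years))
        cols.append(np.sin(2 * np.pi * k * t_years))
    X = np.column_stack(cols)
    ok = np.isfinite(y)                              # fit only months with data
    beta, *_ = np.linalg.lstsq(X[ok], y[ok], rcond=None)
    seasonal = X[:, 2:] @ beta[2:]                   # harmonic terms only
    trend = X[:, :2] @ beta[:2]                      # intercept + trend terms
    return seasonal, trend

# Deseasonalised data = y - seasonal; the smoothed Scripps series instead uses
# a stiff cubic spline in place of the simple linear trend term used here.
```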

As I couldn’t figure out what might be the cause, I went back to Ralph Keeling saying that although I found a strong dependence of the ΔAPOObs time series given in the paper on Cape Grim and (particularly) La Jolla data, I could not isolate any dependence on data from the Alert station, that there was a substantial amount of unexplained variance, and that it looked as if Alert station data might not have been processed as intended.

A surprising station weighting

Ralph Keeling quickly came back to me to say that he had also just figured out that Laure Resplandy had used different weights than he had thought. Shortly afterwards he confirmed that the actual weights used were {0.0105, 0.5617, 0.4278} for Alert, La Jolla and Cape Grim respectively. These weights not only differ very substantially from those indicated in the Methods section, but are physically inappropriate. Weighting the two hemispheres equally is important, and it makes no sense to put almost all the northern hemisphere weight on La Jolla, which unlike Alert station is close to human habitation and has considerably more months with missing data.

I imagine that Laure Resplandy misunderstood what the relevant statement in the Methods section meant and/or misunderstood the paper that it referenced, but I have no idea how she came up with the weightings that she used. Figures 1 and 2 show the four types of monthly CO2 concentration data at La Jolla and Alert stations respectively. The unsmoothed data shows substantially more fluctuation around the smoothed data at La Jolla than at Alert. At Cape Grim, in the southern hemisphere, there is very little such fluctuation.

Figure 1. Monthly CO2 concentration at La Jolla station. The black line shows raw monthly-mean measurements, the blue line shows them after the fitted seasonal cycle has been removed. The orange and magenta lines show the smoothed version with and without the inclusion of the fitted seasonal cycle.

Figure 2. As Figure 1 but for Alert station.

An unexpected derivation of ΔAPOObs annual means

Ralph Keeling helpfully provided me with the annual mean data that Laure Resplandy had used, explaining that he suspected I was working with data with messed up monthly values, and that it was possible the data posted on the web had not yet been fixed. He said that months with missing data get filled from the fit. This appears to be contrary to what the paper’s Methods section states, which implies that values from the fit (to a four-harmonic seasonal cycle and a stiff long-term trend) are used for all months, not just for months with missing data. Moreover, I’m not sure that mixing smoothed and unsmoothed data in this way is a good idea, particularly as data is preferentially missing in particular months.

I found that I could almost exactly reproduce the annual O2 and CO2 time series Ralph Keeling sent from the monthly data I had downloaded, using the deseasonalised but unsmoothed version with missing months infilled with smoothed data from the version of the fit without the seasonal cycle. The differences (Figure 3) were only a few parts per billion for CO2, although somewhat larger for La Jolla than for the other stations. Examination of the mean values of this data for each month reveals that the seasonal cycle has not been completely removed, particularly for La Jolla.
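A minimal sketch of that annual-mean derivation as I understand Keeling's description; the DataFrame layout and column names are illustrative assumptions, not the layout of the Scripps files.

```python
import pandas as pd

def annual_means(monthly: pd.DataFrame) -> pd.Series:
    """Annual means from deseasonalised monthly data, infilling missing months.

    `monthly` is assumed to have a DatetimeIndex and two columns:
      'deseason'        - measured values with the fitted seasonal cycle removed
      'smooth_deseason' - the stiff-spline fit, without the seasonal cycle
    """
    filled = monthly["deseason"].fillna(monthly["smooth_deseason"])
    return filled.groupby(monthly.index.year).mean()

# Applied separately to each station's d(O2/N2) and CO2 series, the results can
# then be combined into dAPO_Obs using the station weights, as sketched earlier.
```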

Figure 3. Differences between the annual CO2 data provided by Keeling and the values I calculated from downloaded data using the method he described, rather than that indicated in the Methods section.

Strangely, despite now having, and being able to replicate, the annual O2 and CO2 data that Ralph Keeling said had been used, and knowing the actual weights used, I still could not very accurately reproduce the ΔAPOObs time series given in the paper; the RMS error is 0.55 per meg. Figure 4 shows the remaining differences (blue circles). A difference in the baseline 1991 value might account for the offset of nearly −0.5 per meg, and rounding errors for the remaining difference in most years, but there are larger differences in 1998, 2004 and 2011. The differences between the time series in Resplandy18 and that calculated using the station weights indicated in its Methods section are much larger (red circles). However, whichever basis of calculation is used, the 1991–2016 trend of the resulting time series is almost the same, so there is only a minor effect on the resulting ΔAPOClimate trend and the ocean heat uptake estimate.

Figure 4. Differences between the ΔAPOObs annual values stated in Resplandy18 Extended Data Table 4 and those calculated using the CO2 data provided by Ralph Keeling, either giving the measuring stations the weights indicated in the Methods section or the actual weights used.

The development of ΔAPOObs for the three stations, calculated from the downloaded data on the basis actually used in Resplandy18, is shown in Figure 5. Trends calculated using the smoothed data, as stated in the paper’s Methods section, are almost identical. While the ΔAPOObs trend difference between the Alert and La Jolla stations looks small, it equates to a nearly 20% difference in the ΔAPOClimate trend and hence in the ocean heat uptake estimate.[4]

Figure 5. Annual mean ΔAPOObs time series for the three stations calculated from downloaded data using the method Keeling described.

As it happens, the effect of Laure Resplandy giving Alert (which has the weakest downward trend) a far lower weight than indicated in the paper’s Methods section was largely cancelled out by her giving Cape Grim (which has the strongest downward trend) a lower weight than indicated. Accordingly, the overall effect on the weighted-mean ΔAPOObs trend of the station weightings used differing from those indicated in the paper’s Methods section is minor.
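Because an OLS trend is linear in the data, the trend of the weighted-mean series equals the weighted mean of the per-station trends, which is why these two departures can largely offset each other. A small sketch of the comparison, with the per-station annual series (those underlying Figure 5) as assumed inputs:

```python
import numpy as np

def ols_trend(years, y):
    """Ordinary least squares trend (per meg per year), as used for all stated trends."""
    return np.polyfit(years, y, 1)[0]

def weighted_trend(years, station_series, weights):
    """Trend of the station-weighted mean dAPO_Obs series."""
    combined = sum(weights[s] * station_series[s] for s in station_series)
    return ols_trend(years, combined)

# station_series: dict mapping station name -> annual dAPO_Obs array (as in Figure 5)
# indicated = weighted_trend(years, station_series,
#                            {"Alert": 0.25, "La Jolla": 0.25, "Cape Grim": 0.50})
# actual    = weighted_trend(years, station_series,
#                            {"Alert": 0.0105, "La Jolla": 0.5617, "Cape Grim": 0.4278})
# The two trends turn out to be nearly equal.
```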

Conclusions

Neither the station weighting nor the method used to derive annual means makes a material difference to the ΔAPOClimate trend and hence to the ocean heat uptake estimate. They nevertheless illustrate why publication of computer code is so important. In this case, the Methods section was commendably detailed, but in reality the derivation of annual mean O2 and CO2 data and the weighting given to data from the three stations used were different from those indicated in the Methods, radically so as regards the station weighting. It is possible, although hopefully not the case, that there are other differences between the stated methods and those actually used. With Resplandy18, I was lucky in that it was not too difficult to work out that there was a problem and in that Ralph Keeling has behaved very professionally, cooperating with my attempts to resolve the differences between the ΔAPOObs time series I calculated and those given in the paper. But a problem arising from the stated methods not actually being used can often only be revealed if code is made available. I have written to Nature making this point.

While the issues relating to the calculation of ΔAPOObs have now largely been clarified, there remain unresolved issues with the calculation of ΔAPOFF and the uncertainty in it. I shall be pursuing these.

 

Nicholas Lewis                                                                                   23 November 2018


[1] Kate Marvel, Gavin A. Schmidt, Ron L. Miller and Larissa S. Nazarenko: Implications for climate sensitivity from the response to individual forcings. Nature Climate Change DOI: 10.1038/NCLIMATE2888. The simulations involved were CMIP5 Historical simulations by the GISS-E2-R model and associated diagnostics.

[2] As ΔAPOObs = δO2/N2 + (XCO2 − 350) × 1.1 / XO2 per meg, where XCO2 is CO2 concentration in ppm, XO2 = 0.2094, 1.1 is the value taken for the land oxidative ratio (OR) and the result is stated as the change from the 1991 ΔAPOObs value. All ΔAPO values in this article use an OR value of 1.1, as in the original Resplandy18 paper, not the revised 1.05 (± 0.05) OR value that Ralph Keeling has stated is used in their Corrigendum.

[3] Hamme, R. C. & Keeling, R. F. Ocean ventilation as a driver of interannual variability in atmospheric potential oxygen. Tellus B Chem. Phys. Meteorol. 60, 706–717 (2008): Table 1, global station weightings for 3 stations starting in 1991 (rightmost column).

[4]  All stated trends are derived using ordinary least squares regression.

47 responses to “Resplandy et al. Part 4: Further developments”

  1. It’s clear that peer review has been no use at all for this paper and that we’re lucky Nic Lewis took this review upon himself. Also it is very good for science that the authors have been so cooperative. But extrapolating from this, there must have been many similar cases where similar papers have just been published, with serious errors. What to do with peer review? I must admit that during my scientific engineering career I never really had the time to dedicate myself to a paper the way Nic Lewis has done. So perhaps a better way to review papers is to select reviewers and make sure they do their work properly by paying them for it.

    • This is a good idea. Compensation for reviewing would improve quality, perhaps improve it a lot. My experience is the same as terastientstra’s. Reviews tend to be very superficial and there is usually no replication done at all.

    • Maybe at least one peer in the review process should be someone from the opposing camp.

      • Perhaps just someone, anyone, with knowledge of the methods being used – here, the right maths to use.

    • Use a bounty system

      • Interesting idea, Steve, but if the author is liable to pay, that simply pushes them to deny the error all the more, especially if they have the establishment backing. If they are not supported by the establishment, it would chill publishing controversial results that would draw establishment attacks. If the publication is the payer then they would have a pretext to beef up barriers. If a governmental body pays then there is potential for gaming or corruption.

        Publishing and archiving data and code has little downside cost.

      • I suggested the same in an earlier thread. Journals who want to find out the truth would surely be happy to offer such prizes.

        It would be logical for one of these happy billionaires to offer such prizes if the journals won’t. Whoever funded them, these prizes would be a great way for graduate students to earn a few quid while learning.

        This is also a way to weight authors and reviewers in the future.

      • I suppose if the bounty is modest it will be more of a recognition and credit than a monetary award. You changed my mind.

    • Get rid of reviewer anonymity. (Did it apply in this case?)

      Paying reviewers and/or offering a bounty for errors found are fine, on paper, but could easily be frustrated by an establishment hostile to dissent and committed to manufacturing consensus.

  2. Kudos to Ralph Keeling and mega kudos to Nic Lewis. Let’s hope that Nature moves the ball forward by requiring code in the future. It is hard to understand why this is not already the case.

    • Re: “Let’s hope that Nature moves the ball forward by requiring code in the future. It is hard to understand why this is not already the case.”

      Which is why faux “skeptics” have always been up in arms about getting the raw data, homogenization methods, code, etc. for the UAH analysis.

      …Oh wait, they weren’t.

    • Going back at least to the hockey stick, mainstream faux climate “science” has long resisted open data and code.

      But this is OK, implies Atomsk, because evidently some/all/? skeptics didn’t also call for the UAH data and code. However much or little truth there is in that, it’s an absolutely top-class diversion. Apologist’s job done.

  3. Reblogged this on Quaerere Propter Vērum and commented:
    Gavin still doing damage control. Code needs to be revealed.

    • I’m not sure it’s damage control as much as trying to summarize the science without pointing out any potential flaws that need to be audited.

    • I also note neither Real Climate nor James Annan has corrected their blogs related to ECS changes due to the original flawed study. That’s surprising to me, but it’s yet another example of how flawed research can live on forever because people are too lazy to add updates to their blog posts.

  4. “Ralph Keeling has behaved very professionally”

    So there’s one good apple in the barrel. It’s still only fit for cattle.

  5. And as I have said – it is not clear what Nic was doing here.

    https://curryja.files.wordpress.com/2018/11/fig1_lewis-on-resplandy-trend-errors_pseudo-dapo_atmdataerrs.png
    “In each year, the 1-σ uncertainty (error standard deviation) in its value is 50% of its best-estimate value…” It implies that random errors increase with the magnitude of the measured variable. This is not the case.

    The random error is the root-mean-square difference from the prediction of the linear regression. Either that or detrend it before calculating the rms difference.

    But it is said to be systematic errors in the O2/N2 experimental methodology that were the problem. The details of that are not clear to me.

    As for the rest – I will wait for Nic’s published improvement in the methodology.

    • whoops… O2/CO2… of course.

    • The uncertainty in deltaAPO_AtmD does not result from random errors each year. Rather, it is its time-trend that is uncertain – its 1-sigma limits are +0.135 to +0.405 per meg/yr. That is what the pink triangle in my figure shows, with the black line showing the central +0.27 per meg/yr estimate. For details of systematic/trend uncertainties in other APO time series, see Extended Data Table 3 of the paper – freely available at the link given at the start of this article.

  6. “These criticisms have been largely accepted by the authors of the study, although they have also made a change in an unconnected assumption that has the effect of offsetting much of the reduction in their ocean heat uptake estimate that correcting their statistical errors causes.”

    I love it. In the 1960s and 1970s, Management by Objectives (MBO) was all the rage to improve organizational effectiveness. Now we have its counterpart in science, SBO, Science by Objective. If you can’t get the conclusion you like or intended to get, you try again.

    That is why autos have a reverse gear. If you don’t like where you ended up, back up and start over.

  7. Equal weighting the two hemispheres is important, and it makes no sense to put almost all the northern hemisphere weight on La Jolla…

    The manipulation of data from La Jolla is yet another stab in the back of the memory of Roger Revelle, the first being his understudy, Al Gore who accused his mentor Revelle of being senile; and, the second being his own family who undermined Revelle, claiming Fred Singer had manipulated him into making a statement about being skeptical of the AGW hypothesis.

    We now have time to investigate the investigators. In his article about lifetime Leftist politician Al Gore and global warming (‘Let Us Prey,’ Fall 2007 Range Magazine), author Tim Findley talks about UCSD professor Roger Revelle, the first global warming heretic.

    “Before he died in 1991,” reports Findley, “Revelle produced a paper with [former NASA climate scientist Frederick] Singer suggesting that people should not be made to become alarmed over the greenhouse effect and global warming.” Their article (subtitled, “Look before you leap”) said as follows:

    Drastic, precipitous and, especially, unilateral steps to delay the putative greenhouse impacts can cost jobs and prosperity and increase the human costs of global poverty, without being effective.

    Findley says Revelle’s article, “was a Judas kiss to [future ecomessiah] Gore, who was already conducting congressional hearings meant to produce just the sort of alarm his former mentor [Revelle] was saying was unnecessary.”

  8. From the post: “Ralph Keeling quickly came back to me to say that he had also just figured out that Laure Resplandy had used different weights than he had thought. Shortly afterwards he confirmed that the actual weights used were {0.0105, 0.5617, 0.4278} for Alert, La Jolla and Cape Grim respectively. These weights not only differ very substantially from those indicated in the Methods section, but are physically inappropriate.”

    Although Nic is careful to point out the weighting made only a small difference, this error seems harder to understand than the others. Any elementary school student could understand the appropriateness of a 25%, 25%, 50% split – dividing a pie into halves and then halving one of the halves again. Where in the world could one get 1%, 56%, 42%? I think Nic was super nice not to have asked Keeling that question, or how he discovered that cryptic weighting scheme. (Sounds more like a recipe for baking a pie.)

    If it had little significance to the paper’s result, I am then alarmed at the lengths that can be gone to for insignificant exaggerations toward the desired message. If there was a legitimate reason to weight Alert only 1% then it should have just been dropped altogether. If Mann had a legitimate reason to weight Sheep Mountain tree-rings 370 times higher than Mayberry Slough then the paper should just have reported exclusively Graybill’s bristlecone and foxtail pines, and not camouflaged them with 55 other sites that contributed only 7% (all combined) to the shape of the 600-year hockey stick temperature plot.

  9. “We report oxygen measurements as changes in the O2/N2 ratio of air relative to a reference.  We compute

                            δ = ((O2/N2)sample – (O2/N2)reference) / (O2/N2)reference

    where (O2/N2)sample is the O2/N2 mole ratio of an air sample and (O2/N2)reference is the O2/N2 mole ratio of our reference.  Our reference is based on tanks of air pumped in the mid 1980s which we store in our laboratory. 

    Atmospheric Potential Oxygen (APO) data, reported on this site, is computed by combining the O2/N2 and CO2 data according to

    APO = δ(O2/N2) + (1.1/0.2095) × (XCO2 − 350)

    where δ(O2/N2) is the O2/N2 ratio in per meg units and XCO2 is the CO2 mole fraction in ppm units, and 1/0.2095 is a conversion factor from ppm to per meg, 1.1 is an estimate of the average O2:C ratio for land photosynthesis or respiration, and 350 is an arbitrary additive constant.  APO is reported in per meg units.

    APO is a measure of the O2 concentration that an air sample would have if the CO2 concentration were adjusted by a typical land plant to exactly 350 ppm through photosynthesis or respiration.”  http://scrippso2.ucsd.edu/index

    https://watertechbyrie.files.wordpress.com/2018/11/resplandy-ocean-heat.png

    The systematic error comes from the reference O2/N2 flasks.  The Xco2 is the atmospheric concentration.  APO(obs) is calculated from data from the three stations – Cape Grim, Scripps Pier and Alert – with an ‘atmospheric inversion model’ – for which the code is available.   

    http://scrippso2.ucsd.edu/assets/imce/station_map.gif

    There is a land/ocean asymmetry between the hemispheres and thus the 50:25:25 weighting is far too simple.

    Carbon dioxide emissions from fossil fuels and cement production – from 1750 to 2011 – were about 365 billion metric tonnes as carbon (GtC), with another 180 GtC from deforestation and agriculture. Of this 545 GtC, about 240 GtC (44%) had accumulated in the atmosphere, 155 GtC (28%) had been taken up in the oceans with slight consequent acidification, and 150 GtC (28%) had accumulated in terrestrial ecosystems. A critical metric is the losses from soils and forests. Very easy to turn net losses into net gains.

    http://scrippso2.ucsd.edu/assets/imce/o2_graphic_mlo_large.png

    They need to think more about the terrestrial system – you need to get a lot more serious about your random error and oversimplified procedures.

  10. Geoff Sherrington

    Nic,
    Oxygen has long been one of the difficult elements for analytical chemistry, not the least because we are surrounded by high levels of it as a gas, solid and liquid.
    I have started to see if there are relevant potential error sources in this paper but have severe family health problems just now.
    Given the dependence of this paper on accurate (as opposed to precise) oxygen measurement, can another reader here take it on?
    As a mathematician, of course you started elsewhere, and the results of your work have been important. Geoff.

  11. The weightings are the normalized values of [1-sin(latitude)] (in radians) which I think is related to the whole area from the station to the nearest pole.

    • Good spot. That fits the weights exactly (using the 40.7 S latitude for Cape Grim station given in the Hamme et al paper, not the incorrect 40.5 S latitude given in Resplandy18). However, this basis makes little physical sense to me, and the resulting weights are in any case very different from those specified in the referenced Hamme et al paper.
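      For reference, a minimal numerical check of this relationship (my own sketch; the latitudes are approximate values I have assumed – about 82.5° N for Alert, 32.9° N for La Jolla and 40.7° S for Cape Grim – and the area of the spherical cap poleward of a station's latitude circle is proportional to 1 − sin(|latitude|)):

      ```python
      import numpy as np

      # Assumed approximate station latitudes in degrees (Cape Grim per Hamme et al.)
      lat_deg = {"Alert": 82.5, "La Jolla": 32.9, "Cape Grim": -40.7}

      # Spherical-cap area between a station's latitude circle and its nearest pole
      # is proportional to 1 - sin(|latitude|); normalize across the three stations.
      raw = {s: 1.0 - np.sin(np.radians(abs(lat))) for s, lat in lat_deg.items()}
      total = sum(raw.values())
      print({s: round(v / total, 4) for s, v in raw.items()})
      # -> {'Alert': 0.0105, 'La Jolla': 0.5617, 'Cape Grim': 0.4278}, matching the
      #    actual weights used in Resplandy18.
      ```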

    • Excellent discovery, Ruth. Once again it points up the necessity for submitting data processing scripts/spreadsheets as supplemental information for papers such as Resplandy et al. 2018. There is no technical bar to such a practice.

      For most purposes, the language or form wouldn’t matter; even if one does not have access to (say) Matlab, the code can be manually parsed, and discrepancies between the write-up and processing, such as weighting in this case, can be discerned.

      This isn’t a climate science-only issue. Any paper with significant data processing involved in its conclusions should be subject to such a restriction.

  12. Fascinating

  13. Reblogged this on Climate Collections.

  14. There was a companion article to the Hamme et al 2008 paper.

    Rödenbeck, C., Le Quéré, C., Heimann, M. & Keeling, R. F. Interannual variability in oceanic biogeochemical processes inferred by inversion of atmospheric O2/N2 and CO2 data. Tellus B Chem. Phys. Meteorol. 60, 685–705 (2008).

    I believe this is it.

    But with this a later paper may be of more interest.

    https://www.nature.com/articles/s41561-018-0151-3

    There were various estimates made using an ‘ocean/atmosphere inversion model’ – for which, I have seen, the code is available. I am as yet unclear even as to what that means. There is a lot here to work through systematically.

  15. Geoff Sherrington

    Accuracy of oxygen concentration measurements is quite complicated and not easy. For those who have not delved deeply, some aspects are here –
    http://ictinternational.com/casestudies/understanding-oxygen-in-air/
    There are other accuracy considerations related to making the sum of analytes equal 100%, or 100% +/-Z% that are not commonly found in routine analytical chemistry.

    There is no prior assumption in my comments that Resplandy et al have made obvious mistakes in calculating oxygen (and other gas) concentrations. It is simply a matter of formalism that when a paper’s errors are under discussion, prudence suggests the identification of all possible sources of error to place them in relative context. Put in a simple way, Nic’s work has uncovered an error or errors that the authors are correcting, but at this stage it is not known if this is the largest, or only error needing correction.
    Nothing abnormal in this, it is customary scientific interplay to keep striving for the best possible result. Geoff

  16. Ralph Keeling has behaved very professionally…

    …by not releasing the code???

  17. “Ralph Keeling has behaved very professionally”

    But not Resplandy? (It seems she did fob Lewis off, as Appell’s blog suggests.)

  18. Nic,
    I’m starting to feel really sorry for Laure Resplandy. This is not good news for a thirty-something Asst Prof. If I were Laure, I would seriously consider trying to co-opt you for a re-write and re-submission.
    I think that there may however be a further substantial problem in the paper which nobody seems to have touched on as yet. So far, you have considered only the errors associated with the components which are independent of AOGCM modeling. There remains the crucial question of the errors and assumptions associated with the factor to convert ΔAPOClimate to a change in OHC.
    The paper relies on four Earth System (ES) AOGCMs, and considers specifically the period from 1920 to 2100 to define an assumed linear relationship between ocean heat gain and ΔAPOClimate.
    The calculation of aggregate OHC gain in each of these ES models has a fairly simple dependence on (a) the model’s conversion of forcing drivers to net flux change and (b) the magnitude of net temperature-dependent feedbacks to net flux built into the model architecture. Depending on these two contributions, the ES – AOGCM can either over-predict or under-predict the true-but-unknown OHC gain. Neither of these contributions has much dependence on the atmospheric interchange of gases – even CO2 – since the total atmospheric concentration of CO2 is well constrained by measurement and is matched by the model. On the other hand, the atmosphere-ocean oxygen-interchange does have some dependence on the CO2 interchange with the ocean.
    During the historic tuning period, each model needs to match the known measured change in total atmospheric concentration of CO2 which is done by adjusting the uncertain changes in sources and sinks. Once this is done, this will partly constrain the change in total atmospheric oxygen. There are still degrees of freedom left to adjust the total atmospheric concentrations of oxygen in the model – even after stoichiometric constraints imposed by fuel consumption and biological processes are satisfied, and oceanic mass fluxes are set.
    Loosely summarizing, it means there is practically no dependence of OHC gain on biological and thermally driven ocean-atmosphere exchange, and there is a higher-but-not perfect dependence of ocean-atmosphere oxygen exchange on CO2 ocean-atmosphere exchange.
    The point of all this is that it is perfectly possible to match total atmospheric CO2 and oxygen concentrations with a completely incorrect ocean heat gain in an ES model. It is also perfectly possible, using the same completely incorrect ocean heat gain, to produce a self-consistent series of ocean-atmosphere exchanges for CO2 and oxygen.
    To illustrate the effect of all this, let us set up a Gedanken experiment under the following assumptions:-
    1. On average each of the 4 ES models overestimates the OHC gain by 30% relative to the true-but-unknown OHC gain
    2. (But) each of the 4 ES models does a perfect job of matching the total atmospheric concentration series for CO2 and O2
    3. (And) each of the 4 ES models does a perfect job of matching the contribution of ocean oxygen exchange to the total change of atmospheric oxygen.
    4. The Resplandy central estimate of the trend of DelAPOclimate is perfectly accurate relative to the true value.
    With the above assumptions, once the Resplandy estimate is divided by the conversion factor, we find that the OHC gain per year (average net flux) is exactly equal to the average OHC gain per year from the four ES models. This is 30% higher than the true-but-unknown estimate via our Gedanken assumptions.
    What this highlights is that far from this being an independent estimate as claimed, the OHC gain from Resplandy’s approach is a relative value which is entirely dependent on the accuracy of the OHC gain forecast by the ES models. It is a relative measure at best, and the relativity is to an average of models – not to the true-but-unknown value.
    I could write several more pages on the uncertainty associated with this conversion factor – which I believe is incorrectly calculated even under Resplandy’s assumptions – but it feels like putting bullets into a twice-dead horse.
    Paul

  19. Paul, thank you for your detailed comment. I take your point; however, I think it only applies to Historical/RCP simulations or similar.

    The Resplandy18 Methods section doesn’t seem to say which ESM simulations were used to estimate the ΔAPOClimate to ΔOHC relationship. However, I suspect that they used the feedback experiment ‘esmFdbk3’, which they say “includes only warming-driven changes associated with anthropogenic emissions, such as radiation effects”. This experiment is not mentioned in the CMIP5 protocol papers, but I believe it is the same as ‘esmFdbk2’ except that it involves the Historical/RCP8.5, rather than Historical/RCP4.5, CO2 concentration pathway.

    The protocol papers describe ‘esmFdbk2’ as ‘Carbon cycle sees piControl CO2 concentration, but radiation sees [Historical/RCP4.5 concentration]’. This is experiment 5.5 of the CMIP5 protocol. It is designed to show the carbon cycle response to climate change alone. If the ESM misestimates atmospheric and ocean warming, ocean-atmosphere cumulative O2 & CO2 (& N2) fluxes will respond to the incorrect ocean warming and will be too high or too low, but I think with the same proportional error as in ΔOHC. So I don’t see why the ΔAPOClimate to ΔOHC relationship would be misestimated in this case?

    • I have not read the paper being discussed on these threads, but have followed the discussion with some interest. I have been doing an unrelated analysis that requires downloading all the RCP scenario runs from KNMI and looking at the trends (from ceemdan decomposition) for temperature, forcing and net TOA radiation (rsdt-rlut-rsut). I have not summarized that analysis to date, but my eyeballing of some plots, and what I have read about ocean heat content (OHC) and net TOA radiation change, indicates to me that the change during the historical period (1861-2005), on average for the ensemble of CMIP5 model runs, is something on the order of 0.80-0.85 watts/m2 for net TOA radiation.

      My question then becomes: how far does the change in OHC from the paper under discussion here deviate from the mean of the ensemble of CMIP5 models during the historical period? I have the impression that the change in OHC in the paper overestimates the ensemble mean of the CMIP5 model runs. Of course, I suspect, without having made the calculation as yet, that where there are multiple runs from individual models there will be significant differences between a number of individual models.

      • ken
        “change during the historical period (1861-2005) on average for the ensemble of CMIP5 model runs is something on the order of 0.80-0.85 watts/m2 for net TOA radiation change.”
        That sounds about right. CMIP5 Historical runs start from equilibrium, with zero applied forcing change from preindustrial, so 1850 or 1860 TOA radiative imbalance is zero relative to preindustrial. There’s a corrected figure in the AR5 WG1 Erratum that shows the development of TOA imbalance (or possibly of OHU) in all CMIP5 models over the Historical period.
        The corrected uncertainty range for Resplandy18’s OHU estimate is so large that its result will be consistent with all or virtually all CMIP5 models and with observational estimates based on in situ ocean measurements.

  20. Pingback: Weekly Climate and Energy News Roundup #336 |

  21. I can see why M. Mann would rather sue than release his code ;)

  22. Жан Марк Ван Белл

    Thank you, you have a very logical way of thinking. We appreciate this very much. It reads very easily, and I quote the passage on the main error below:
    “Ralph Keeling quickly came back to me to say that he had also just figured out that Laure Resplandy had used different weights than he had thought. Shortly afterwards he confirmed that the actual weights used were {0.0105, 0.5617, 0.4278} for Alert, La Jolla and Cape Grim respectively. These weights not only differ very substantially from those indicated in the Methods section, but are physically inappropriate. Weighting the two hemispheres equally is important, and it makes no sense to put almost all the northern hemisphere weight on La Jolla, which unlike Alert station is close to human habitation and has considerably more months with missing data.”

    Jean Marc VAN BELLE +Je

  23. Pingback: Weekly Climate and Energy News Roundup #337 | Watts Up With That?

  24. VAN BELLE Jean Marc

    The Bulgarian team thanks you very much for your efforts. We will spread the correction to the media we can connect to, but it will
    not be easy. At the time of Chernobyl there was a taboo on ‘negative news for the health of people’, but nowadays there is some taboo on news suggesting ‘there is nothing wrong with the oceans/earth temperature’. Strange evolution, but we can only do our best to stay as objective as possible. Жан марк ван белл +je