*By Nic Lewis*

The recently published open-access paper “How accurately can the climate sensitivity to CO2 be estimated from historical climate change?” by Gregory et al.[i] makes a number of assertions, many uncontentious but others in my view unjustified, misleading or definitely incorrect.

Perhaps most importantly, they say in the Abstract that “The real-world variations mean that historical EffCS [effective climate sensitivity] underestimates CO₂ EffCS by 30% when considering the entire historical period.” But they do not indicate that this finding relates only to effective climate sensitivity in GCMs, and then only when they are driven by one particular observational sea surface temperature dataset.

However, in this article I will focus on one particular statistical issue, where the claim made in the paper can readily be proven wrong without needing to delve into the details of GCM simulations.

Gregory et al. consider a regression of the form *R* = *α* *T*, where *T* is the change in global-mean surface temperature with respect to an unperturbed (i.e. preindustrial) equilibrium, and *R* is the radiative response of the climate system to the change in *T*. *α* is thus the climate feedback parameter, and *F*_{2xCO2}/*α* is the EffCS estimate, *F*_{2xCO2} being the effective radiative forcing for a doubling of preindustrial atmospheric carbon dioxide concentration.

The paper states that “estimates of historical *α* made by OLS [ordinary least squares] regression from real-world *R* and *T* are biased low”. OLS regression estimates *α* as the slope of a straight line fitted between *R* and *T* data points (usually with an intercept term, since the unperturbed equilibrium climate state is not known exactly), by minimising the sum of the squared errors in *R*. Random errors in *R* do not cause a bias in the OLS slope estimate. Thus in the below chart, with *R* plotted on the y-axis and *T* on the x-axis, OLS finds the red line that minimizes the sum of the squares of the lengths of the vertical lines.

However, some of the variability in measured *T* may not produce a proportionate response in *R*. That would occur if, for example, *T* is measured with error, which happens in the real world. It is well known that in such an “error in the explanatory variable” case, the OLS slope estimate is (on average) biased towards zero. This issue has been called “regression dilution”.

Regression dilution is one reason why estimates of climate feedback and climate sensitivity derived from warming over the historical period often instead use the “difference method”.[ii] [iii] [iv] [v] The difference method involves taking the ratio of the differences, Δ*T* and Δ*R*, between *T* and *R* values late and early in the period. In practice Δ*T* and Δ*R* are usually based on differencing averages over at least a decade, so as to reduce noise.
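In code, the difference method is just a ratio of differenced multi-period means. Here is a minimal sketch; the series length, the 10-point averaging window and the true feedback value of 2 W m⁻² K⁻¹ are illustrative assumptions of mine, not values from any of the cited studies:

```python
import numpy as np

def difference_method(T, R, n_avg=10):
    """Estimate alpha = dR/dT by differencing averages over the
    first and last n_avg points of each series."""
    dT = T[-n_avg:].mean() - T[:n_avg].mean()
    dR = R[-n_avg:].mean() - R[:n_avg].mean()
    return dR / dT

# Illustrative noise-free series with a true alpha of 2 W m^-2 K^-1
T = np.linspace(0.0, 1.0, 150)   # warming, K
R = 2.0 * T                      # radiative response, W m^-2
print(difference_method(T, R))   # 2.0 (to rounding)
```

With no noise the estimate recovers the true slope exactly; the simulations later in the post add noise to both series.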

I will note at this point that when a slope parameter is estimated for the relationship between two variables, both of which are affected by random noise, the probability distribution for the estimate will be skewed rather than symmetric. When deriving a best estimate by taking many samples from the error distributions of each variable, or (if feasible) by measuring them each on many differing occasions, the appropriate central measure to use is the sample median not the sample mean. Physicists want measures that are invariant under reparameterization[vi], which is a property of the median of a probability distribution for a parameter but not, when the distribution is skewed, of its mean. Regression dilution affects both the mean and the median estimates of a parameter, although to a somewhat different extent.
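The invariance point is easy to check numerically. In this sketch the lognormal distribution and its parameters are purely illustrative stand-ins for a skewed sample of feedback estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
# Skewed positive sample standing in for feedback (alpha) estimates
alpha = rng.lognormal(mean=0.0, sigma=0.5, size=100_001)
sens = 1.0 / alpha               # reciprocal: stands in for sensitivity

# The median commutes with the reciprocal...
print(np.median(sens), 1.0 / np.median(alpha))   # essentially equal
# ...but the mean does not, for a skewed distribution
print(sens.mean(), 1.0 / alpha.mean())           # unequal
```

This is why the median, not the mean, is the appropriate central estimate when a parameter and its reciprocal (here feedback and sensitivity) are both of interest.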

So far I agree with what is said by Gregory et al. However, the paper goes on to state that “The bias [in *α* estimation] affects the difference method as well as OLS regression (Appendix D.1).” If true, this would imply that observationally-based estimates using the difference method are biased slightly low for climate feedback, and hence biased slightly high for climate sensitivity. However, the assertion is *not* true.

The statistical analyses in Appendix D consider estimation by OLS regression of the slope *m* in the linear relationship *y*(*t*) = *m* *x*(*t*), where *x* and *y* are time series whose available data values are affected by random noise. Appendix D.1 considers using the difference between the last and first single time periods (here, it appears, of a year), not of averages over a decade or more, and it assumes for convenience that both *x* and *y* are recentered to have zero mean, but neither of these affects the point of principle at issue.

Appendix D.1 shows, correctly, that when only the endpoints of the (noisy) *x* and *y* data are used in an OLS regression, the slope estimate for *m* is Δ*y*/Δ*x*, the same as the slope estimate from the difference method. It goes on to claim that taking the slope between the *x* and *y* data endpoints is a special case of OLS regression, and that the fact that an OLS regression slope estimate is biased towards zero when there is uncorrelated noise in the *x* variable implies that the difference method slope estimate is similarly biased.

However, that is incorrect. The median slope estimate is not biased as a result of errors in the *x* variable when the slope is estimated by the difference method, nor when there are only two data points in an OLS regression. And although the mean slope estimate is biased, the bias is high, not low. Rather than going into a detailed theoretical analysis of why that is the case, I will show that it is by numerical simulation. I will also explain in simple terms how regression dilution can be viewed as arising, and why it does not arise when only two data points are used.

The numerical simulations that I carried out are as follows. For simplicity I took the true slope *m* as 1, so that the true relationship is *y* = *x*, and took the true value of each *x* point as the sum of a linearly trending element running from 0 to 100 in steps of 1 and a random element uniformly distributed in the range −30 to +30, which can be interpreted as a simulation of a trending “climate” portion and a non-trending “weather” portion.[vii] I took both *x* and *y* data (measured) values as subject to zero-mean independent normally distributed measurement errors with a standard deviation of 20. I took 10,000 samples of randomly drawn (as to the true values of *x* and the measurement errors in both *x* and *y*) sets of 101 *x* and 101 *y* values.

Both the median and the mean of the resulting 10,000 slope estimates from regressing *y* on *x* using OLS were 0.74 – a 26% downward bias in the slope estimator due to regression dilution.

The median slope estimate based on taking differences between the averages for the first ten and the last ten *x* and *y* data points was 1.00, while the mean slope estimate was 1.01. When the averaging period was increased to 25 data points the median bias remained zero while the already tiny mean bias halved.

When differences between just the first and last measured values of *x* and *y* were taken,[viii] the median slope estimate was again 1.00 but the mean slope estimate was 1.26.

Thus, the slope estimate from using the difference method was median-unbiased, unlike for OLS regression, whether based on averages over points at each end of the series or just the first and last points.
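These simulation results are straightforward to reproduce. The sketch below follows the recipe described above (NumPy, my own variable names and seed; the results match the quoted figures to within sampling noise):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_sims = 101, 10_000
trend = np.arange(n, dtype=float)              # "climate": linear trend 0..100

ols = np.empty(n_sims)
diff10 = np.empty(n_sims)
ends = np.empty(n_sims)
for i in range(n_sims):
    x_true = trend + rng.uniform(-30, 30, n)   # add "weather" noise
    y_true = x_true                            # true slope m = 1
    x = x_true + rng.normal(0, 20, n)          # measurement error in x
    y = y_true + rng.normal(0, 20, n)          # measurement error in y
    xc, yc = x - x.mean(), y - y.mean()
    ols[i] = (xc @ yc) / (xc @ xc)             # OLS slope (with intercept)
    diff10[i] = (y[-10:].mean() - y[:10].mean()) / (x[-10:].mean() - x[:10].mean())
    ends[i] = (y[-1] - y[0]) / (x[-1] - x[0])  # endpoint differences

ends = np.abs(ends)        # crude stand-in for footnote [viii]'s sign correction

print(np.median(ols))      # ≈ 0.74: regression dilution biases OLS low
print(np.median(diff10))   # ≈ 1.00: difference method, 10-point means
print(np.median(ends))     # ≈ 1.00: difference method, endpoints only
```

The 0.74 figure also matches the classical attenuation factor var(x_true)/(var(x_true) + var(error)) = 1150/1550 ≈ 0.74 for these variances.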

The reason for the upwards mean bias when using the difference method can be illustrated simply, if errors in *y* (which on average have no effect on the slope estimate) are ignored. Suppose the true Δ*x* value is 100, so that Δ*y* is 100, and that two *x* samples are subject to errors of respectively +20 and –20. Then the two slope estimates will be 100/120 and 100/80, or 0.833 and 1.25, the mean of which is 1.04, in excess of the true slope of 1.

The picture remains the same even when (fractional) errors in *x* are smaller than those in *y*. On reducing the error standard deviation for *x* to 15 while increasing it to 30 for *y*, the median and mean slope estimates using OLS regression were both 0.84. The median slope estimates using the difference method were again unbiased whether using 1, 10 or 25 data points at the start and end, while the mean biases remained under 0.01 when using 10 or 25 data point averages and reduced to 0.16 when using single data points.

In fact, a moment’s thought shows that the slope estimate from 2-point OLS regression must be unbiased. Since both variables are affected by error, if OLS regression gives rise to a low bias in the slope estimate when *x* is regressed on *y*, it must also give rise to a low bias in the slope estimate when *y* is regressed on *x*. If the slope of the true relationship between *y* and *x* is *m*, that between *x* and *y* is 1/*m*. It follows that if regressing *x* on *y* gives a biased-low slope estimate, taking the reciprocal of that slope estimate will provide an estimate of the slope of the true relationship between *y* and *x* that is biased high. However, when there are 2 data points the OLS slope estimate from regressing *y* on *x* and that from regressing *x* on *y* and taking its reciprocal are identical (since the fitted line will go through the 2 data points in both cases). If both the *y*-against-*x* and *x*-against-*y* OLS regression slope estimates were biased low, that could not be so.
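That symmetry argument can be checked directly. A toy sketch (the specific numbers are arbitrary choices of mine):

```python
import numpy as np

def ols_slope(x, y):
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (xc @ xc)

# Two points: the y-on-x slope and the reciprocal of the x-on-y slope are
# identical, since the fitted line passes exactly through both points
x2 = np.array([10.0, 90.0])
y2 = np.array([5.0, 120.0])
print(ols_slope(x2, y2), 1.0 / ols_slope(y2, x2))   # both 1.4375 (to rounding)

# With many scattered points the two estimates differ markedly
rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 1000)
y = x + rng.normal(0.0, 1.0, 1000)
print(ols_slope(x, y), 1.0 / ols_slope(y, x))       # roughly 1 versus 2
```

In the scattered case the gap between the two estimates is exactly the regression-dilution effect: each direction of regression attenuates its own slope.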

As for how and why errors in the *x* (explanatory) variable cause the slope estimate in OLS regression to be biased towards zero (provided there are more than two data points), but errors in the *y* (dependent) variable do not, the way I look at it is this. For simplicity, I take centered (zero-mean) *x* and *y* values. The OLS slope estimate is then Σ*xy* / Σ*xx*, that is to say the weighted sum of the *y* data values divided by the weighted sum of the *x* data values, the weights being the *x* data values. An error that moves a measured *x* value further from the mean of zero not only reduces the slope *y*/*x* for that data point, but also increases the weight given to that data point when forming the OLS slope estimate. Hence such points are given more influence when determining the slope estimate. On the other hand, an error in *x* that moves the measured value nearer to the zero mean, increasing the *y*/*x* slope for that data point, reduces the weight given to that data point, so that it is less influential in determining the slope estimate. The net result is a bias towards a smaller slope estimate.

However, for a two-point regression this effect does not occur, because whatever the signs of the errors affecting the *x*-values of the two points, both *x*-values will always be equidistant from their mean, and so both data points will have equal influence on the slope estimate whether the errors increase or decrease their *x*-values. As a result, the median slope estimate will be unbiased in this case. Whatever the number of data points, errors in the *y* data values do not affect the weights given to the data points when forming the OLS slope estimate Σ*xy* / Σ*xx*, and will on average cancel out.

So why is the proof in Gregory et al. Appendix D.1, supposedly showing that OLS regression with 2 data points produces a low bias in the slope estimate when there are errors in the explanatory (*x*) data points, invalid? The answer is simple. The Appendix D.1 proof relies on the proof of low bias in the slope estimate in Appendix D.3, which is stated to apply to OLS regression with any number of data points. But if one works through the equations in Appendix D.3, one finds that in the case of only 2 data points no low bias arises – the expected value of the OLS slope estimate equals the true slope.

It is a little depressing that after many years of being criticised for their insufficiently good understanding of statistics and lack of close engagement with the statistical community, the climate science community appears still not to have solved this issue.

Nicholas Lewis, 18 October 2019

[i] Gregory, J.M., Andrews, T., Ceppi, P., Mauritsen, T. and Webb, M.J., 2019. How accurately can the climate sensitivity to CO₂ be estimated from historical climate change? Climate Dynamics.

[ii] Gregory JM, Stouffer RJ, Raper SCB, Stott PA, Rayner NA (2002) An observationally based estimate of the climate sensitivity. J Clim 15:3117–3121.

[iii] Otto A, Otto FEL, Boucher O, Church J, Hegerl G, Forster PM, Gillett NP, Gregory J, Johnson GC, Knutti R, Lewis N, Lohmann U, Marotzke J, Myhre G, Shindell D, Stevens B, Allen MR (2013) Energy budget constraints on climate response. Nature Geosci 6:415–416

[iv] Lewis, N. and Curry, J.A., 2015. The implications for climate sensitivity of AR5 forcing and heat uptake estimates. Climate Dynamics, 45(3-4), pp.1009-1023.

[v] Lewis, N. and Curry, J., 2018. The impact of recent forcing and ocean heat uptake data on estimates of climate sensitivity. Journal of Climate, 31(15), pp.6051-6071.

[vi] So that, for example, the median estimate for the reciprocal of a parameter is the reciprocal of the median estimate for the parameter. This is not generally true for the mean estimate. This issue is particularly relevant here since climate sensitivity is reciprocally related to climate feedback.

[vii] There was an underlying trend in *T* over the historical period, and taking it to be linear means that, in the absence of noise, the linear slope estimated by regression and by the difference method would be identical.

[viii] Correcting the small number of negative slope estimates arising when the *x* difference was negative but the *y* difference was positive to a positive value (see, e.g., Otto et al. 2013). Before that correction the median slope estimate had a 1% low bias. The positive value chosen (here the absolute value of the negative slope estimate involved) has no effect on the median slope estimate provided it exceeds the median value of the remaining slope estimates, but does materially affect the mean slope estimate.


Also see my discussion of regression dilution here on C. Etc. in 2016:

https://judithcurry.com/2016/03/09/on-inappropriate-use-of-least-squares-regression/

The red herrings are schooling. The Gordian knot has been cut and the Holy Grail found. The global warming scare is based upon one false assumption – that linear regression is proof of global warming. In fact what has been happening since Krakatoa (1883) is regression toward the mean. Here’s the evidence. Temperatures were similar to modern ones in 1875, before Krakatoa, but were depressed by a series of volcanic eruptions until the volcanic era ended in the 1930s.

https://www.dropbox.com/s/4473qqg9dw8lugm/Volcanoes%20ENSO%20and%20Carbon%20Dioxide.pdf?dl=0

Francis

Recovery from Krakatoa – that’s a new interesting hypothesis.

Are there data on atmospheric volcanic particulates that would support that?

Or do particulates only mediate the short term effect with the long term effect resulting from ocean cooling during the aerosol-shaded period?

Judith: Thanks for this. It has given me the gist of the problems which appear very disturbing.

I write with expertise on water, having been trained as an officer in the Royal Navy in the days of steam propulsion, so I have a different perspective.

If one steps out of the CO2/GHG/statistical/radiation mindset the question of climate sensitivity might better be resolved by bottom up reference to the fundamental science.

For instance, the fact that water’s evaporative phase change takes place at constant temperature means that the sensitivity coefficient (S) in the Planck equation dF = S*dT has a value of zero. Thus, unless this factor is taken into account in the calculation of the global sensitivity, the result will be too high. Perhaps this could go a long way towards explaining why the models are running hot wrt observations?

It seems to me that water is a bit of a joker in the pack which statistics will fail to identify unless due attention is given to the basic science of its thermodynamics and behaviour.

The buoyancy of water vapor wrt dry air is another aspect which also appears to be ignored; but that is another matter so won’t elucidate here.

My regards. Keep up the good work; we desperately need people like you.

Alasdair

Nic:

Regarding your first point about statistically estimating the feedback parameter by regressing radiative flux (R) variations against temperature (T) variations, what I think you might be missing is that the greater the time-varying radiative forcing that exists in nature (including from volcanoes), the more the covariations between T and R are decorrelated, and the decorrelation is not just “blamed on” the radiative flux. It’s the fact that when R causes T, then R is proportional to dT/dt (not T) with a climate system of finite heat capacity. This is in contrast to feedback (causation in the opposite direction), where R is roughly proportional to T.

This is just the result of the simple time-dependent forcing-feedback model for T departures from equilibrium:

Cp[dT(t)/dt] = RF(t) + NRF(t) – lambda[T(t)],

where the total radiative flux variations R(t) are made up of the sum of radiative forcing RF(t) and radiative feedback, lambda[T(t)]. (An example of NRF is ENSO variations in SST. An example of RF(t) would be a major volcano or non-feedback variations in clouds.)

Importantly, if NRF(t) is zero, and you only have time-varying RF as a forcing of temperature change, the resulting T vs. R variations are almost totally uncorrelated for any realistic specified feedback parameter lambda. Regression gives you an estimated lambda of near-zero, even if you specify (say) lambda=8 W m-2 K-1.

You can demonstrate this by putting the above equation into a program with any realistic heat capacity Cp, using low-pass filtered random noise for RF and NRF and adjusting their relative proportions. Danny Braswell and I published several papers on this years ago, although they were largely ignored by most people except Lindzen and Choi, who also published on the issue.
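For what it is worth, here is a minimal sketch of the kind of experiment described above. All parameter values (heat capacity, the specified lambda of 8 W m⁻² K⁻¹, the forcing autocorrelation time) are illustrative assumptions, with RF as AR(1) low-pass filtered noise and NRF set to zero:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameter choices (not from any paper):
cp = 7.0        # mixed-layer heat capacity, W yr m^-2 K^-1 (~50 m of water)
lam = 8.0       # specified feedback parameter, W m^-2 K^-1
tau_f = 1.0     # autocorrelation time of the forcing noise, yr
dt = 0.05       # time step, yr
n = 500_000

# Low-pass filtered (AR(1)) random radiative forcing RF(t); NRF(t) = 0
phi = np.exp(-dt / tau_f)
eps = rng.normal(0.0, 1.0, n)
rf = np.empty(n)
rf[0] = 0.0
for i in range(1, n):
    rf[i] = phi * rf[i - 1] + eps[i]

# Integrate Cp dT/dt = RF(t) - lambda*T(t)
T = np.empty(n)
T[0] = 0.0
for i in range(1, n):
    T[i] = T[i - 1] + dt * (rf[i - 1] - lam * T[i - 1]) / cp

# "Measured" radiative flux anomaly: feedback response minus forcing
R = lam * T - rf

# Diagnose lambda by OLS regression of R on T
Tc, Rc = T - T.mean(), R - R.mean()
lam_est = (Tc @ Rc) / (Tc @ Tc)
print(lam_est)   # close to zero, despite the specified lambda of 8
```

With pure RF forcing, the forcing and feedback contributions to R decorrelate it from T almost completely, so the regression recovers almost none of the specified feedback.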

I can send you a spreadsheet with such a simple model to experiment with if you want.

If I have misunderstood the “Appendix D” issue, let me know.

-Roy

Hi Roy

Thanks for your comment. I quite agree that causation goes both ways, with changes in R caused by changes in T representing a radiative response that is properly treated as ‘climate feedback’, but also unrelated variability in R, caused by internal variability in clouds, SST patterns, etc., that itself causes changes in T (with a delay as the ocean mixed layer warms up).

However, I don’t think these bear on the main point I am making. My key message in this post is simply that their claim the measurement errors in the explanatory variable, here T, (or internal variability in it that is not reflected in the dependent variable, here R) bias estimates of the strength of the relationship between the two variables when the difference method is used, is incorrect. No such bias results, unlike when OLS regression is used (and there are >2 data points). And so their statistical proof in Appendix D.1 is invalid.

I’m not sure that the two-way causation is relevant to decadal to centennial timescales like Lewis and Curry 2014, which is presumably the target of Gregory’s paper.

It is far more important on annual or subannual timescales used in S&B 2011. I found an exponential lag with a time const of 8 mo matched the Pinatubo data fairly closely using a simple model accounting for the mutual two-way causation.

Roy, I proposed a method to account for the two-way causation problem and decorrelation which you pointed out in Spencer & Braswell 2011, which I referenced here:

https://judithcurry.com/2015/02/06/on-determination-of-tropical-feedbacks/

It produces AOD scaling similar to Lacis et al 1992, when they were still using observations and “basic physics” to estimate the scaling. It has now become an unconstrained fiddle factor and their modelled result is no longer used. Obviously, changing the scaling changes CS for volcanic forcing.

I would be interested to hear your assessment of this method, since in SB11 you said you could not see how this could be tackled. Hopefully this represents a possible solution.

Cp[dT(t)/dt] = RF(t) – lambda[T(t)], leads to an exponential solution which was the basis of my analysis in the linked article.

Convolution of volcanic forcing timeseries with the exponential solution can then be compared to SST. The tropical response to Mt Pinatubo fits very well, suggesting that the model is a reasonable representation of the primary response.
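As a sketch of that convolution approach: the 8-month time constant comes from the comment above, but the feedback value and the shape and size of the Pinatubo-like forcing pulse are hypothetical stand-ins, not fitted values:

```python
import numpy as np

# Illustrative values only
tau = 8.0     # response time constant, months (= Cp / lambda)
lam = 2.0     # feedback parameter, W m^-2 K^-1 (assumed)
dt = 1.0      # time step, months
t = np.arange(0.0, 120.0, dt)

rf = -3.0 * np.exp(-t / 12.0)        # sudden -3 W m^-2 pulse, ~12-month decay

# Impulse response of Cp dT/dt = RF - lambda*T (discretised; Cp = lam*tau)
h = np.exp(-t / tau) * dt / (lam * tau)

T = np.convolve(rf, h)[: len(t)]     # temperature response, K
print(T.min())                       # peak cooling, roughly -0.7 K
```

The impulse response sums to 1/lambda, so a sustained forcing would give the expected equilibrium response F/lambda; a transient pulse gives a smaller, lagged dip.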

The Sun controls the ocean phases, which act as amplified negative feedbacks to changes in indirect solar forcing, and control low cloud cover and lower troposphere water vapour also as negative feedbacks. Post 1995 global warming is all natural negative feedback responses to weaker solar wind states, including the increase in upper ocean heat content. If rising CO2 forcing projects onto natural variability, it would drive a colder AMO.

https://www.linkedin.com/pulse/association-between-sunspot-cycles-amo-ulric-lyons/

Sadly, Ulric, I think there is confirmation bias at work here. There are areas at the beginning and end of your graph which are totally in anti-phase, yet you only seem to notice the bits which are in phase. This is typical of every single claim of correlation I have seen over the years.

Also, suggesting a negative feedback is “amplified” to the point where it overrides the causation it is supposed to be a feedback to makes no sense to me.

A very, very strong negative feedback will remove almost all change in a system; you cannot make it strong enough to have an effect stronger than the deviation it is responding to. Sorry, that does not make sense. It would then oppose itself and self-cancel.

“There are areas at the beginning and end of you graph which are totally in anti-phase , yet you only seem to notice the bits which are in phase.”

Try and understand what I wrote:

North Atlantic sea surface temperature anomalies (Atlantic Multidecadal Oscillation – AMO) are fairly tightly in phase with sunspot cycles during a cold AMO, and generally anti-phase with sunspot cycles during a warm AMO.

“Also suggesting a negative feedback is “amplified” to the point where it over rides the causation it is supposed to be a feedback too makes no sense to me.”

It’s a brilliant arrangement. We cannot overheat when the solar wind is strong as the ocean phases turn cold and increase low cloud cover, like in the 1970’s. And when the solar wind is weaker, the ocean phases are warm and reduce low cloud cover, which also increases the upper ocean heat content.

Thanks for the reply.

What I would suggest you are seeing is that it is not SSN which is the main driver but the 9.1-year lunar-driven cycle. When the AMO is in phase with SSN is when SSN is in phase with the lunar cycle. When the AMO is out of phase, SSN is out of phase with the 9.1y lunar cycle.

See the update added to my article on ACE.

https://judithcurry.com/2016/01/11/ace-in-the-hole/

Update: The following produces a beat pattern of about 59y that would match the periodicity found in the article. This may explain much of the problem with finding a consistent correlation to solar forcing in the absence of recognition of a significant lunar periodicity.

p1=9.1; p2=10.8;

cos(2*pi*x/p1)+cos(2*pi*x/p2)

My guess is that the circa 60y pseudo cycle in AMO is a result of both lunar and solar influence. That is why it drifts in and out of phase with SSN.
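The beat period implied by those two cosines can be computed directly from the standard beat formula; nothing here beyond the two periods quoted above:

```python
# Beat (envelope) period of cos(2*pi*t/p1) + cos(2*pi*t/p2):
# 1/p_beat = 1/p1 - 1/p2  =>  p_beat = p1*p2/(p2 - p1)
p1, p2 = 9.1, 10.8      # years
p_beat = p1 * p2 / (p2 - p1)
print(p_beat)           # ≈ 57.8 years, i.e. the ~59 yr pattern referred to
```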


“How accurately can the climate sensitivity to CO2 be estimated from historical climate change?” by Gregory et al.[i]

Historic climate change nails it right on!

Historic CO2 changes have followed temperature changes, so they cannot be the actual cause! Climate sensitivity to CO2 cannot be proved to be anything different from ZERO!

No one caught that! Historic CO2 changes have followed Temperature Changes. Say it backwards and no one notices!

Climate sensitivity to CO2 cannot be proved to be anything different than ZERO!

I caught it but decided not to bring it up.

They said historical, which requires a history. They are not talking about paleo; neither is Nic.

Perhaps the only thing we can expect, assuming a 30% underestimation of CO2 EffCS, is that future cooling effects on the climate due to reduced solar activity over the next couple of decades (compared to that experienced during the 20th century) may not be as pronounced as was experienced during the Maunder Minimum.

The trend of measured water vapor exceeds by 64% that calculated from feedback from UAH temperatures (27% if from HadCRUT4 temperatures), proving that WV, not CO2, contributes to average global temperature: https://pbs.twimg.com/media/EHLxE8aUcAE2p5H?format=jpg&name=small

http://globalclimatedrivers2.blogspot.com

So, even taking the 30% underestimate claim as given, multiplying the highest of the Curry – Lewis equilibrium number of 1.76 (https://journals.ametsoc.org/doi/10.1175/JCLI-D-17-0667.1) by 1.3 gives me 2.3 degrees equilibrium warming based on the temperature data / warming correlation. By the same token the 95% certainty range would be 1.5 to 4, as I understand with the potential latency factored in.

It seems unquestionable that the IPCC’s former best estimate of 3 degrees cannot be described as accurate, barring some kind of extreme latency that there is no data evidence for. I don’t know the IPCC models very well but I’m pretty sure if they depended on some kind of handwave latency I would have heard about that.

Also, if you see this Nic, something I’m curious about regarding the sensitivity estimate is whether it’s taken for granted that all warming since thermometer temperature records began is greenhouse gas caused or if the correlation between warming levels and greenhouse levels is consistent through the data set so that it would appear that natural variation (up or down) is not a major part of the picture.

Robbie

Estimation of climate feedback and climate sensitivity over the historical period should (and normally does) allow for a small part (usually 5-10%) of the warming not being anthropogenic in origin, but rather a recovery from previous unusually negative natural forcings (heavy volcanism and, to an extent, low solar insolation), and possibly negative multicentennial natural internal variability, which led to surface temperature and the ocean interior being cooler than normal prior to the start of the instrumental temperature record (the Little Ice Age).

The relationship between warming and anthropogenic radiative forcing from greenhouse gases, aerosols, etc. does appear to be fairly consistent over the instrumental record, but with some multidecadal fluctuation linked to a natural cycle in temperature in the North Atlantic, likely due to fluctuations in the ocean overturning circulation.

“The top-of-atmosphere (TOA) Earth radiation budget (ERB) is determined from the difference between how much energy is absorbed and emitted by the planet. Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.” https://link.springer.com/article/10.1007/s10712-012-9175-1

Cloud, ice, dust, vegetation, water vapour – all modulate energy dynamics at toa. Cloud changes in the eastern Pacific dominate variability in ISCCP, ERBE and CERES records.

e.g. https://www.mdpi.com/2225-1154/6/3/62

Much more likely the cause of 50% of warming over the past 40 years than a random – sums to zero – 10%.

https://www.nature.com/articles/s41612-018-0044-6

Cloud can evaporate with warming or ice sheets grow as a feedback to the NH insolation and THC variability. I can’t discount either.

I have used the word chaos a lot – but it is just a word and explains nothing. What it means is that climate appears to have the properties of the broad class of complex and dynamic systems, sensitivity to small changes and nonlinear emergent behaviour among them.

I cannot imagine that such a complex climate can be summarized by one simple three-term equation.

Nabil:

No matter how complex the individual processes in the climate system, imposition of an energy imbalance will cause a net change in global-average temperature, whether large or small. Feedbacks (changes in TOA radiative flux with temperature) determine the size of that temperature change. The simple equation is not meant to capture the details; it is meant to explain the net result in conceptually simple terms.

Given the proper nonlinear negative feedback (chaos) that temperature change might even be negative, that is, cooling. I do not see this possibility being considered. Do these simple equations have a chaotic regime? If not then they may be unrealistic.

Thanks Dr. Spencer. However, there appears to be no practical use for it, except over 20 years of controversies.

David, it is very hard to justify a negative feedback that can actually cause a negative temperature change when the initial change is positive. Lucia has nearly cured me of this thought in the past, the principle being that any feedback to a change should not be bigger than the change itself.

Chaotic change is therefore much more likely to be an unexplained natural variation that has failed to have been detected or suspected.

David Wojick brings up chaos: It may help to distinguish between forced climate change and unforced/internal climate variability. Forced climate change involves a change in the net flux at the TOA and is due to “naturally-forced” (solar change, volcanos, and orbital changes) or anthropogenically-forced (GHGs, aerosols, etc.) variability. Conservation of energy demands warming (somewhere below the TOA) in response to a positive net flux at the TOA.

Internal climate variability is due to a change in the distribution of heat within the climate system below the TOA. Below the mixed layer, our oceans are filled with cold water that has subsided in polar regions topped by a thermocline. Upwelling returns that cold water to the surface. Since both upwelling and subsidence involve fluid flow, that redistribution of heat between the surface and deeper ocean is inherently chaotic. One can grossly over-simplify ENSO (the biggest high frequency example of internal climate variability we know of) as being caused by a reduction or increase in upwelling in the Eastern Equatorial Pacific and downwelling in the Western Pacific – and the re-distribution of that heat around the globe. There are also chaotic surface currents that move heat to regions where it can escape more or less easily to space. And chaotic changes in the transport of water vapor and latent heat in the troposphere that produces clouds and precipitation.

Over the long run, internal chaotic variability should average out, and we call that average “climate”. If I understand correctly, there is no way to determine how long a period is required for any chaotic process to average out and characterize a stable climate with an average net radiative imbalance of zero at the TOA. With its high heat capacity, averaging should be much slower in the ocean than the atmosphere (with clouds that affect albedo). The best we can do is look at fluctuations over the instrumental period and the past 10-20 centuries (or 70 centuries of the Holocene, if we are willing to tolerate some orbital changes). The LIA appears to have been fairly global in extent and at least partially naturally forced by a change in solar output and volcanos. The warm periods observed in Greenland ice cores (MWP, RWP, Minoan Warm Period) aren’t seen in Antarctic ice cores, nor in most ocean sediment cores. So unforced climate variability clearly can be at least 0.3 K (a large El Nino or the warming from 1920 to 1945). I can’t find any convincing examples of global climate change in the Holocene bigger than 1 K (except the transient effects of volcanos). GLOBAL warming over the past half-century has amounted to 0.9 K. I personally find this unprecedented in the (limited) Holocene record, but others strenuously disagree.

Given a likely future additional anthropogenic forcing of 3.6 W/m2 and a climate sensitivity of 1.8 K/doubling (roughly 2 W/m2/K), future forced climate change is likely to be several times larger than PRECEDENTED chaotic internal (unforced) variability. If so, the sensible thing to do (IMO) is to set aside internal climate variability (which could hurt as well as help) and focus on the tractable problem of forced transient and equilibrium warming. Our host (who knows far more than I), however, believes that additional forced global warming could be accompanied by internal climate variability that is UNPRECEDENTED in the last 70 centuries. The abrupt changes of the Younger Dryas period, seen in Greenland ice cores as the last ice age ended, are certainly unprecedented in the Holocene. IMO, one can still logically set aside unprecedented internal variability as a “known unknown” and have a sensible discussion of anthropogenically-forced climate change.
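For what it’s worth, the arithmetic in the comment above can be checked in a few lines. This is a back-of-envelope sketch only: the 3.7 W/m2 value for F_2xCO2 is my assumption (a commonly used figure), the other numbers are the commenter’s.

```python
# Back-of-envelope check of the comment's numbers (not from any paper).
F_future = 3.6   # W/m^2, assumed additional anthropogenic forcing
F_2x = 3.7       # W/m^2 per CO2 doubling (a common assumed value)
EffCS = 1.8      # K per doubling, the sensitivity quoted above

lam = F_2x / EffCS      # implied climate feedback parameter, W/m^2/K
dT = F_future / lam     # implied further forced warming, K
print(f"implied feedback ~{lam:.2f} W/m^2/K, further forced warming ~{dT:.2f} K")
```

On these numbers the further forced warming comes out a bit under 2 K, i.e. roughly double the ~1 K bound on Holocene unforced variability suggested above.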

Frank’s framework for a sensible discussion of anthropogenically-forced climate change is very sensible. Of course, the possibility that proper nonlinear feedback chaos might lead to cooling should be considered and quickly dismissed as being unrealistic, remote, far-fetched and irrelevant. Chaos cuts both ways.

Angech: “…any feedback to a change should not be bigger than the change itself.”

This is only true for single dynamic systems. Looking at the 400 ka ice core temperature plots shows that even very smooth and gradual orbital forcing manifests in difficult-to-correlate temperature responses. The plot is chaotic instead of the expected smooth response between cycles. One example of such multiple dynamics producing a retrograde response would be warming leading to increased precipitation at the poles, increasing global albedo that becomes more persistent and responds positively, rather than negatively, to a cooling feedback. However bizarre that might seem, as I said, the ice core plots appear to me to leave that as an open possibility.

Frank’s very sensible argument forgets to mention that Climate Inc. does not recognize the MWP, RWP or Minoan Warm Period as global events.

One other factor: massive volcanic eruptions with decadal effects occur at random, and asteroids are also whizzing by on a regular basis. The work of Gene Shoemaker and the historic frequency of asteroid strikes is still underappreciated. A paper was published this month showing evidence of a platinum stratum indicating a significant asteroid strike dated to the start of the Younger Dryas. https://qz.com/africa/1723888/scientists-say-a-platinum-meteorite-hit-africa-12800-years-ago/?utm_source=YPL&yptr=yahoo

Two simple statements explain the Ice age.

When the earth loses more heat than it gains daily, it takes that heat from the oceans and deposits the ice at the poles.

When the earth gains more heat than it loses, it deposits that heat in the oceans by melting the ice at the poles.

Weather is Mother Nature moving the water vapor.

I think you are partly right. I will do you one better.

Energy Balance of the Earth Climate System

A standard consensus is that “Solar In” is equal to “IR out” plus “Albedo out”.

There are large amounts of energy that have been left out of consideration.

There are large stores of sequestered ice on land. To produce ice, energy must be removed, and that must result in IR out from the production of the ice. That IR out does not result in immediate cooling of earth; the cooling is stored in the sequestered ice, and the cooling happens later when the ice thaws – for some of the ice, many years later. We have found ice in Antarctica that is 800 thousand years old. Producing that ice resulted in IR out that happened 800 thousand years ago. This was not considered in energy balance equations for the earth climate system. That ice will produce cooling when it thaws, and it will be replaced by newer ice.

A large amount of IR out goes into producing water and ice; some of the water and ice causes immediate cooling of the climate system, while some of the ice is sequestered for cooling later. Sequestered ice is flowing and thawing and cooling the earth and the water on the earth. A large amount of IR out in the warmest times is used in the production of sequestered ice. The sequestered ice flows and spreads and reflects and thaws. The reflecting and thawing cause the most cooling in the coldest times, when the ice extent is greatest. The coldest times are coldest because the ice extent is largest.

Over the last fifty million years, climate temperature dropped from 14 degrees warmer than now to 6 degrees colder than now, then settled into the modern normal of the last ten thousand years – the “Paradise” described by Leighton Steward in his book, “Fire, Ice and Paradise”.

The cause of this evolution was the changing of ocean currents, forcing more and more warm tropical currents into polar regions and increasing the evaporation, snowfall and sequestering of ice on land in cold places. Snow did not form in cold polar regions before warm tropical water flowed there; before that, sea ice prevented evaporation and snowfall.

As more warm water flowed into polar regions, ice ages cycled larger and larger. The largest ice ages sequestered ice on Antarctica that did not thaw and return to the oceans. The last major warm period and ice age finally sequestered enough ice on Antarctica and Greenland and cold places around the world, that the reduced oceans no longer provide enough evaporation and snowfall for another major ice age.

The climate we have had for ten thousand years is the “NEW NORMAL CLIMATE”.

Adjustments to the sequestered ice in each hemisphere are regulated by sea ice thawing when the oceans are warmer than the freeze-thaw point, with increased sequestering of ice until it gets too cold and sea ice forms. In the cold times, with evaporation stopped by sea ice, the ice flows and dumps into the oceans and onto the land until the ice is depleted to the point that the oceans warm again.

Look at ice core data from Greenland and Antarctica: the temperatures are regulated within the same bounds, but the cycles in the NH and SH do not correlate in either their length or their timing. Every other forcing factor correlates with some change over the shorter term, but does not push temperatures out of bounds over the longer term.

A standard consensus is that “Solar In” is equal to “IR out” plus “Albedo out”. As a long-term average that is right, but it does not consider the internal storage of energy in oceans and ice. Significant IR out goes into producing sequestered ice, and significant cooling comes from the thawing of that ice.

The extent of sequestered ice on land and of ice extended into the oceans does regulate the temperature of earth. When earth is colder, it is due to more ice extent, reflecting and thawing, and that cold resulted from IR out in a warmer time. Ice piles higher in warmer times, spreads slowly, and later causes more cooling.

Hi Pope, thought you’d find this interesting.

Bering Strait influenced ice age climate patterns worldwide

https://www.sciencedaily.com/releases/2010/01/100110151325.htm

“In a vivid example of how a small geographic feature can have far-reaching impacts on climate, new research shows that water levels in the Bering Strait helped drive global climate patterns during ice age episodes dating back more than 100,000 years.”

You do not understand how large the amount of heat lost daily is. If I assume the average temperature of the surface of the earth is 63°F, then the radiant heat lost every 24 hours to the black sky (−459°F) corresponds to a 522°F difference. The radiant heat striking the earth from the sun, over an area equal to the area of the great circle, is larger. The area of the surface of the ocean, which reflects radiant heat to the black sky, is the controlling factor.

IT IS AS SIMPLE AS THAT!!!

Nic, your analysis of the bias resulting from OLS versus the unbiased difference method sounds rather straightforward as you explain it here – at least to me. Have you contacted the corresponding author of the paper in question, and if so, what was the reaction to your analysis? Also, what biasing effects would be expected from using TLS or Deming regression?

The difference method, as you have frequently noted, requires selecting starting and ending periods that avoid large volcanic events and asymmetric peaks and valleys in multi-decadal natural phenomena. I have used a currently optimal version of Empirical Mode Decomposition (EMD) called CEEMDAN, or Complete Ensemble EMD with Adaptive Noise, which, in analyses of series constructed from known secular trends, multi-decadal and higher frequency variations, and red and white noise, extracts trends with good precision. It allows more choice in the periods selected for applying the difference method to time series.
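For readers following this exchange, the OLS-versus-difference-method point is easy to reproduce with a toy Monte Carlo (all numbers below are illustrative assumptions of mine, not values from Gregory et al. or the head post): if part of the variability in T draws no proportional response in R, it acts like noise in the regressor and attenuates the OLS slope, while differencing multi-year means largely averages it out.

```python
import numpy as np

# Toy simulation of regression dilution in estimating the feedback parameter.
rng = np.random.default_rng(0)
alpha_true = 1.8   # assumed feedback parameter, W/m^2/K
n, reps = 150, 2000

ols, diff = [], []
for _ in range(reps):
    T_forced = 0.008 * np.arange(n)                  # smooth forced warming, K
    T = T_forced + 0.15 * rng.standard_normal(n)     # add internal variability to T
    R = alpha_true * T_forced + 0.3 * rng.standard_normal(n)  # R tracks the forced part
    ols.append(np.polyfit(T, R, 1)[0])               # OLS slope of R on T
    # Difference method: compare 20-year means at the start and end of the record.
    d_R = R[-20:].mean() - R[:20].mean()
    d_T = T[-20:].mean() - T[:20].mean()
    diff.append(d_R / d_T)

print(f"true {alpha_true}: OLS mean {np.mean(ols):.2f}, "
      f"difference-method mean {np.mean(diff):.2f}")
```

In this particular setup the OLS estimate of alpha comes out roughly 15% low, while the difference method is nearly unbiased – the same direction of effect as discussed in the head post, though the size of the bias here is an artefact of the chosen toy noise levels.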

Ken, Yes, I sent Jonathan Gregory a draft of my article at the beginning of the week, and I sent him the final one when published. To date I have received no reply from him.

I would expect no bias when using Deming regression if the standard deviations of the x and y variables were well estimated and errors in them were uncorrelated. However if no adjustment were made for the low efficacy of volcanic forcing then that would affect the slope estimate.

TLS is IMO only suitable when the x and y variables have similar fractional error standard deviations, otherwise there will be a bias – usually a high bias, if the x variable has a lower fractional error standard deviation than the y variable.
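Nic’s TLS point is easy to illustrate numerically. The sketch below is my own toy construction, not anything from the paper: the x variable is nearly error-free while y is noisy, so TLS (orthogonal regression), which implicitly assumes equal error variances, overestimates the slope.

```python
import numpy as np

# Toy illustration: TLS (orthogonal regression) with very unequal error variances.
rng = np.random.default_rng(1)
slope_true = 1.8
n, reps = 100, 2000

tls = []
for _ in range(reps):
    x_true = np.linspace(0.0, 1.2, n)
    x = x_true + 0.01 * rng.standard_normal(n)              # tiny errors in x
    y = slope_true * x_true + 0.3 * rng.standard_normal(n)  # much larger errors in y
    # TLS slope from the smallest right singular vector of the centred data matrix.
    A = np.column_stack([x - x.mean(), y - y.mean()])
    v = np.linalg.svd(A)[2][-1]     # normal vector of the fitted line
    tls.append(-v[0] / v[1])

print(f"true slope {slope_true}, TLS mean {np.mean(tls):.2f}")
```

With these toy noise levels the TLS estimate lands well above the true slope (around 2.1): a high bias in alpha, which would translate into a low-biased EffCS.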

CEEMDAN may well be a good method, but I like to properly understand a method before using it so I’ll stick to other methods for the present. Perhaps you would like to write a blog post about CEEMDAN, with examples?

Nic, I recently sent you an email with an attachment where I was analyzing the differences in the CMIP5 models between the historical period of 1861-2005 and the future period of 2006-2100 for the relationship of delta T versus F/rho, where delta T is the change in GMST, F is the forcing change, and rho is the climate resistance, equal to lambda, the feedback parameter, plus kappa, the ocean heat uptake efficiency. In that analysis I used the CEEMDAN decomposition to extract the GMST and forcing trends in order to obtain delta Ts and Fs from those trends. In the historical period there was no significant correlation between delta T and F/rho, whereas in the future period there was a very good correlation. The delta T versus F/rho relationship was taken from a paper where Gregory was an author and assumes monotonically increasing forcing for validity.

The implications for me from the analysis were that it presented strong evidence that forcing was being used with at least some of the models in order to better emulate the observed GMST trends in the historical period with little carryover effect into the future period.

I assumed without a reply from you that it was probably not of sufficient interest to you to spend time on it or you were too polite to tell me all that was wrong with it. I was thinking about asking Judith Curry if I might do a summary post of that analysis here at Climate Etc with a posting of the entire analysis made available by me with Dropbox or some other Cloud device. I have no intention of publishing the work in a journal and would only like to get some feedback on it.

I can certainly appreciate your reasons for not using CEEMDAN, particularly when getting papers published. I did use CEEMDAN in attempting to replicate your results in LC (2018), with very good agreement.

I for one would like to see such a post, Ken. Your conclusions confirm what a lot of us have suspected, viz., that there is conscious or unconscious tuning of GCMs to replicate the historical GMST reasonably well.

Ken, Thanks for reminding me about your email. I’m afraid that it slipped my mind – I’ve been rather busy. I’ll take a look at what you sent and get back to you. Feel free to chase me if I haven’t done so by the end of this week.

Nic, I have an updated version of my analysis with better graphics and supplemental information on the precision/accuracy of CEEMDAN extracting a known trend from a noisy time series. I should be able to send that version to you tomorrow.

kenfritsch: “In that analysis I used the CEEMDAN decomposition to extract the GMST and forcing trends in order to obtain delta Ts and Fs from those trends. In the historical period there was no significant correlation between delta T and F/rho whereas in the future period there was a very good correlation.”

That sounds quite interesting. I hope that you post the details.

As I understand it, both the radiative response R and the temperature T are variables that are measured over time; thus this is an example of longitudinal data that is inherently autocorrelated. In this case OLS regression is not appropriate. Instead, other methods such as linear mixed models are suitable. Do you agree?

Frederick, regression is normally performed on annual mean data. That is indeed generally autocorrelated in the case of both R and T, and cross-correlated at non-zero lag (above that arising from the autocorrelation). It is unclear however that this leads to significant bias in OLS slope estimates. No bias arises when OLS is used and errors in the dependent variable are auto-correlated, although the OLS uncertainty estimate will be too small.

This issue is another reason for preferring the difference method. Also, if using regression, working with multiannual (e.g., pentadal) mean data is much safer than using annual data, as the auto-correlations and lagged cross correlations are normally fairly low when doing so.
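The point that autocorrelated errors leave the OLS slope unbiased while making the naive uncertainty estimate too small can also be checked with a quick simulation (toy AR(1) errors and illustrative numbers of my own, not anything from the discussion above):

```python
import numpy as np

# OLS with AR(1) errors: slope unbiased, but the naive standard error is too small.
rng = np.random.default_rng(2)
n, reps, phi, slope_true = 100, 2000, 0.7, 1.8
x = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])   # design matrix with intercept

slopes, naive_se = [], []
for _ in range(reps):
    e = np.zeros(n)                    # AR(1) errors in the dependent variable
    for t in range(1, n):
        e[t] = phi * e[t - 1] + rng.standard_normal()
    y = slope_true * x + 0.1 * e
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)             # naive residual variance
    cov = s2 * np.linalg.inv(X.T @ X)        # naive covariance (assumes iid errors)
    slopes.append(beta[1])
    naive_se.append(np.sqrt(cov[1, 1]))

print(f"mean slope {np.mean(slopes):.3f} (true {slope_true}); "
      f"naive SE {np.mean(naive_se):.3f} vs actual spread {np.std(slopes):.3f}")
```

Here the average slope estimate matches the true value, but the actual spread of the estimates is roughly double the naive OLS standard error. Averaging to pentads before regressing shrinks that gap, since the residual autocorrelation of pentadal means is much weaker.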

Ok thank you very much!

The Teenage Super Sleuths have just completed a video that uses NASA Ground Measurements to prove that CO2 has had no impact on temperatures since 1902. The experiment is very simple. You simply identify stations that existed on or before 1902 and have a BI of 10 or less. That is a control for the urban heat island effect. If you control for the urban heat island effect, you will find that CO2 has had 0.00% impact on temperatures. Michael Mann’s hockey stick shows 1.1 to 1.2 degree increase over that time period. You will find no instrumental data sets that show that kind of warming if you control for the urban heat island effect. Don’t take my word for it, simply replicate the experiment outlined in the video.

Complete Climate Change Science Fair Project

https://youtu.be/ZUVqZKBMF7o

co2: The world warms globally, including the SST over the 70% of the earth that is ocean. What about UHI there? The mentioned satellite observations (UAH and RSS) also show warming on land, which is not influenced by UHI. This makes this argument mostly obsolete IMHO.

Gregory’s gambit, or the “Gregory grab”, can be summed up very simply.

Claiming ALL the global temperature increase in the last two centuries, as entirely a function of climate sensitivity to CO2, is not enough.

They now claim all the warming – and then some.

So out of every 2 degrees of warming, 3 are caused by CO2.

Their claim that natural climate change has ceased, is just wrong.

300 to 400 parts per million is one in ten thousand.

Climate continues to change in natural cycles, not caused by one molecule increase per ten thousand.

Hilarious.

First, at Climate Etc. natural variation is completely limited to cooling.

So, true to form, natural variation caused a 1 degree drop, because it only cools, and CO2 increased it by 3 degrees, resulting in a 2-degree increase.

JCH

Where do you get natural variation claimed only in the cooling direction? Not from most skeptics. The opposite – almost no thread is without the suggestion by a skeptic that warming in the last 2 centuries is recovery from the little ice age, the coldest period in the Holocene. Krakatoa also happened back then plus a few other big volcanoes.

That’s the null hypothesis – remember those? Modern warming (not cooling) is a natural fluctuation. It is the responsibility of the CO2-AGW proponents to say it ain’t so.

Hitchens’ razor is an epistemological razor asserting that the burden of proof regarding the truthfulness of a claim lies with the one who makes the claim, and if this burden is not met, the claim is unfounded, and its opponents need not argue further in order to dismiss it.

The more fundamental question is whether there is a justifiable methodology for reliably calculating climate sensitivity.

This is not a CMIP-type ensemble. It is a single model with solution trajectories diverging exponentially from small initial differences. It is the Lorenz conundrum writ large. The probability of any trajectory being correct is on the order of 0.01%. A CMIP ensemble is a collection of low-probability solutions that owe more to nonlinear math than to forcing or feedback.

https://watertechbyrie.files.wordpress.com/2018/05/rowlands-2012.jpg

https://www.nature.com/articles/ngeo1430

Equally – climate shifted 4 times last century – and the potential is there for more extreme shifts in future. It is not possible with state-of-the-art methods to disentangle internal from forced variability.

Climate sensitivity seems to me to be an exercise in doing the math that can be done rather than the math that should be done to provide reliable answers. Much like the drunk looking for keys under the lamp post.

Robert E.

Chapter 10 of AR5 provides a case study of your observation;

It states “The shortcomings of the present estimates of natural climate variability cannot be readily overcome.” And yet it arrives at an estimate (of almost zero) based on no evidence.

Richard

Simple proof that water vapor drives temperature, not the reverse is at https://pbs.twimg.com/media/EHLxE8aUcAE2p5H?format=jpg&name=small


Here’s the Connollys’ PowerPoint presentation in Tucson, Arizona, USA in July. Well worth an hour of your time.

A lot to take in but this is all about the actual balloon data over a long period of time.

No modelling or theories or guesses, just the results of millions of balloon flights over decades.

There’s a very short Q&A at the end. I hope those interested have the time to look at the video and perhaps have a friend who understands the chemistry + data etc involved?

Way beyond my capabilities.

https://www.youtube.com/watch?v=XfRBr7PEawY


Yes, we have people writing pieces (and books) that claim scientists underplay the “crisis”. For example, this piece in The Guardian. https://amp.theguardian.com/commentisfree/2019/oct/25/the-real-reason-some-scientists-downplay-the-risks-of-climate-change

Nic Lewis, thank you for this presentation.
