by Kenneth Fritsch

Identification of significant differences between the historical and future CMIP5 simulations for intrinsic climate sensitivities.

**Introduction**

A number of climate science articles note that climate modelers can choose among parameter values that all satisfy the sought-after direct effects for a range of inputs, giving the modeler the option of selecting those values that also better reproduce the observed temperature changes over the historical period. These references do not attribute the modeling influences on GMST trends to explicit tuning but rather to selection from a range of parameterized processes that can in turn yield a range of trends. The parameterized processes most amenable to such selection involve cloud and aerosol effects. See, for example, references 1 through 10.

In this post the historical period is taken as 1861-2005 and the future period as 2006-2100. Numbered references in this post are assigned to papers listed in the table linked here.

https://www.dropbox.com/s/zhslpck9lqnbwxu/References_Delta_T_vs_Climate_Resistance.docx?dl=0

My analyses have identified significant differences between the historical and future time periods in the response of global mean surface temperature change to the CMIP5 models' intrinsic climate sensitivities. To determine responses, I used the relationship ΔT=ΔF/ρ, where ρ is the climate resistance as derived in Forster et al. (2013), listed as reference 11. This parameter comes from the relationship ρ=κ+λ, where κ is the ocean heat uptake efficiency and λ is the feedback parameter. The definition of climate resistance assumes the forcing to be continuously increasing, with ΔT the change in GMST. I assumed that the forcing ideally should be applied in equal measure to all individual models; ΔF then becomes a constant and ΔT=k/ρ.
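The linearity of the expected GMST change in 1/ρ can be illustrated with a short sketch; the forcing and ρ values below are hypothetical, chosen only to show the arithmetic, not taken from any model:

```python
# Sketch: with a common forcing change dF applied to every model, the
# expected ("calculated") GMST change is dT = dF / rho, i.e. linear in 1/rho.
# All numbers here are hypothetical.
dF = 4.0  # W/m^2, assumed common forcing change over the period

# hypothetical per-model climate resistances, rho = kappa + lambda (W/m^2/K)
rho = {"model_a": 1.6, "model_b": 2.0, "model_c": 2.5}

calculated_dT = {name: dF / r for name, r in rho.items()}
for name, dT in sorted(calculated_dT.items()):
    print(f"{name}: calculated dT = {dT:.2f} K")
```

A model with a lower climate resistance (higher sensitivity) yields a larger calculated ΔT for the same forcing.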

**Methods and Resources**

To identify the secular trend in the temperature and forcing series, I used the trend extraction method Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN). See references 12 and 13. The original EMD, on which CEEMDAN is based, is described in N. Huang et al. (1998) and works well with data that are nonstationary and nonlinear. It is an empirical approach that is applied to a data set, rather than a theoretical tool that requires assumptions. Using CEEMDAN on a time series with known noise and cycles results in a faithful separation of a secular trend.

To demonstrate consistency with more conventional methods, I also used in some regressions, in conjunction with the CEEMDAN method, an End Point Averaging method much like the approach used in Lewis and Curry (2018), listed as reference 14.
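As a rough illustration of what End Point Averaging recovers from a noisy series, here is a self-contained sketch on a synthetic series with a known trend, a multidecadal cycle, and noise. All numbers are invented for illustration, and the 10-year averaging window is an assumption, not necessarily the window used in the post or in Lewis and Curry:

```python
import numpy as np

# Synthetic illustration (all numbers invented): a known secular trend plus a
# multidecadal cycle and noise, with the change estimated by averaging the
# first and last 10 years (the window choice here is an assumption).
rng = np.random.default_rng(0)
years = np.arange(1861, 2006)                          # 1861-2005, 145 yr
true_trend = 0.005 * (years - years[0])                # known trend, K
cycle = 0.1 * np.sin(2 * np.pi * (years - 1861) / 60)  # 60-yr cycle
noise = rng.normal(0.0, 0.08, years.size)
gmst = true_trend + cycle + noise

def end_point_average(series, window=10):
    """Change estimated as mean of the last `window` points minus the first."""
    return series[-window:].mean() - series[:window].mean()

est = end_point_average(gmst)
true_change = true_trend[-10:].mean() - true_trend[:10].mean()
print(f"estimated change {est:.3f} K vs known trend change {true_change:.3f} K")
```

With an unlucky choice of end windows relative to the cycle phase, the end-point estimate can be biased, which is why the choice of time periods matters for this method.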

The GMST and forcing data are taken from the KNMI Climate Explorer and Meinshausen et al. (2011), references 15 and 16, respectively.

The ρ values are available from the literature for 25 CMIP5 models, e.g. references 17 and 18. These values are determined for the individual models from the CMIP5 4XCO2 and 1% CO2 experiments. For this analysis I determined that I could use the RCP 8.5 GMST and forcing data for the future period to obtain ρ values for 39 CMIP5 models. The RCP 8.5 scenario, with its large increase in GHG forcing, decreasing aerosol and cloud forcing, and large temperature increases in response to the forcing, is promising ground for estimating model ρ values with little influence from natural variations and from potential model-to-model forcing variations left over from the historical period.

The analysis consists primarily of OLS regressions on the changes in GMST (ΔT) versus 1/ρ and on derived values called the calculated ΔT and the difference ΔT. The calculated ΔT is the expected ΔT given the (ideal) forcing and the ρ value of an individual climate model. The difference ΔT is the difference between the calculated ΔT and the derived ΔT, where derived means the CEEMDAN or End Point Averaging trend from the KNMI GMST temperature data. The regression r-squared values are reported with and without outliers. Outliers were determined using Cook’s Distance, with a cutoff of 4 times the mean distance (reference 19). It should be of little surprise that not all of the models fit closely in the regressions, given the large range of responses these models are known to have and the probability that model results are affected to a lesser or greater degree by modelers’ choices of parameters and parameter values. The point of removing outliers was to show the regression fit for almost all of the models, so no effort was made to explain the outliers.
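The outlier screen described above can be sketched as follows. The 39 "models" below are synthetic with one planted outlier, and Cook's Distance is computed by hand with numpy rather than via a statistics package; the 4-times-the-mean cutoff follows the rule stated in the post:

```python
import numpy as np

# Sketch of the outlier screen: OLS of dT on 1/rho with Cook's Distance
# computed by hand (numpy only) and a cutoff of 4x the mean distance.
# The 39 "models" below are synthetic, with one planted outlier.
def cooks_distance(x, y):
    """Cook's Distance for each point of a simple OLS fit y ~ a + b*x."""
    n = x.size
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)   # leverages
    p = X.shape[1]                                   # number of parameters
    s2 = resid @ resid / (n - p)                     # residual variance
    return (resid**2 / (p * s2)) * h / (1 - h) ** 2

rng = np.random.default_rng(1)
inv_rho = rng.uniform(0.4, 0.8, 39)                  # hypothetical 1/rho values
dT = 4.0 * inv_rho + rng.normal(0.0, 0.05, 39)       # dT = dF/rho plus noise
dT[0] += 1.5                                         # plant one clear outlier

d = cooks_distance(inv_rho, dT)
keep = d < 4 * d.mean()                              # cutoff rule from the post
print(f"flagged {np.count_nonzero(~keep)} outlier(s) out of {dT.size}")
```

The regression r-squared values reported in the table would then be computed twice, once on all points and once on the `keep` subset.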

**Results**

The results of this analysis are summarized in the four figures and table below.

Figure 1 shows a regression plot of the individual model ρ values derived from RCP 8.5 ΔT and forcing data versus their published counterparts. The fit is very good. Figures 2 and 3 show the very good regression fits of the RCP 4.5 and RCP 6.0 individual model ΔT values for the 2006-2100 period versus the RCP 8.5-generated ρ values. These regressions have the expected positive slope that ideally should result from the same forcing being applied to all the models, with the change in GMST responding linearly to 1/ρ. In contrast to the fits and slopes for the future period, Figure 4, for RCP 4.5 in the 1861-2005 period, has a negative slope and a poor fit.

The table of regressions lists the results of twenty OLS regressions for ΔT versus 1/ρ, for calculated versus derived ΔT, and for the difference ΔT (calculated – derived) between various scenarios and time periods. Critical to this analysis is the stark contrast between the historical and future period ΔT response to ρ under the assumption of a single forcing value for the individual models. The models perform in the future period as expected, while the same models in the historical period, for the most part, appear to respond to different forcings. Most revealing in this matter are regressions 6, 7 and 8, where ΔT versus 1/ρ is regressed for the historical period with the RCP 4.5 models. Regression 6, which includes all 39 models, shows a negative but not significant slope (p < 0.05). In regression 7, with a selection of the 14 models with the smallest difference ΔT values (calculated – derived), the slope is very significantly positive, putting those models very much in line with the model responses in the future period. Regression 8, of the 25 of 39 models with the largest difference ΔT values, has a significant slope, and it is negative.

It should be noted here that the CEEMDAN and End Point Averaging methods, used concurrently to obtain GMST changes, gave nearly the same regression results. That would not always be the case: tests with simulated series of known composition (white and red noise plus a range of periodically varying components) show CEEMDAN to be superior to other commonly used methods, including End Point Averaging, in extracting a known trend component. Where End Point Averaging is used as judiciously as it has been in the Lewis and Curry papers, with optimum time periods selected, CEEMDAN and End Point Averaging will yield nearly the same results.

**Discussion and Conclusions**

The important result here is that in the historical period the individual-model relationship of ΔT versus 1/ρ tends, for a large majority of models, to have a negative slope. This strongly suggests that those models have had a differential forcing applied and, more importantly, that the applied forcing compensates for a higher climate sensitivity. Further, the results show that the ideal of an externally applied forcing of the same value for all models does not actually hold in the historical period; the forcing varies for most of the models. The results also show that approximately a third of all the models in the historical period follow what would be expected in the ideal case.

A cursory view of these results might lead to the conclusion that a relatively small change in a negative forcing, such as that resulting from the direct and indirect effects of aerosols, could affect the historical period relationships of ΔT to 1/ρ more than in the future period, where the positive forcing increases more over time than in the historical period. Qualitatively that view would be correct, but a quantitative assessment would require analysis beyond what is shown here.

This analysis shows that (a) a portion of the models in the historical period respond nearly ideally to forcing, corresponding to the case for nearly all the models in the future period, and (b) those models in the historical period that are not responding ideally have a general tendency to have forcing applied in such a direction as to counter the model's climate sensitivity, producing smaller changes in GMST.

**Moderation note:** As with all guest posts, please keep your comments civil and relevant.

Has climate academia become projection by throwing darts…?

So let me see if I have this right: if a group of models can be verified by agreeing with the historic past, that’s good, right? But if those same models project different futures, that’s bad and suggests they may not be very useful. Or maybe I just read this too fast…

You got it right – that’s what statistics is all about.

Sorry, I got it wrong


Jim E: models CANNOT predict the future.

Why is this difficult to understand?? Until you specify the future pathways of CO2 concentrations, CH4, N2O and more, and changes in solar irradiance, and major volcanic eruptions, this cannot happen even in principle.

Got that. But these models are using the same or similar pathways and getting different outcomes – especially regional. Again, I understand that models don’t predict, they are more about “what if?” – but the answers don’t agree.

Wrong, as usual, Mr. Appleman:

There have been three different CO2 global average temperature correlations since 1940, using near global UAH satellite data after 1979

( surface temperature “data” are inaccurate, non-global, infilled, adjusted, re-adjusted, re-re-adjusted, homogenized, non-scientific wild guesses … but apparently fine for government work! )

4.5 billion years before 1940 – unknown correlation

1940 to 1975 – Mild Negative CO2-average temperature correlation

1975 to 2003 – Strong positive correlation

2003 to mid-2015 – No correlation

YOU have no idea which relationship, if any, of the three above, will apply to the future.

Not to mention no certainty of TCS, except for a 1970’s-era wild guess with a laughable +/- 50% margin of error!

CO2 emissions are probably the easiest to predict, yet you climate alarmists always want to assume unusually fast CO2 growth.

Global warming is good news, and has been good news since the 1690s — over 300 years ago.

More CO2 in the air is good news for green plants, based on thousands of scientific studies ( Not the one or two fraudulent studies YOU will reference, Mr. Appleman )

The current climate is the best it has been for humans, animals and plants in at least 800 to 1,000 years, since before the Little Ice Age centuries.

Only a fool, or a leftist ( I repeat myself ) would want the past 325+ years of mild, intermittent global warming to end.

In the history of our planet, the climate doesn’t get much better than it is now.

Richard Greene

Bingham Farms, Michigan

… where global warming is loved !

My climate science blog

with over 55,000 page views:

http://www.elOnionBloggle.Blogspot.com

Reblogged this on Climate- Science.press.

The only model that tracks the satellite data is the Russian INM-CM4 and now 5. So why don’t people focus on that model?

Would help if you linked to its results and publications.

Roy Spencer had a better solution some time ago, when he ran many models and noted that all model runs diverged from real data except for the Russian model

Anyhow Roy decided that real data MUST BE WRONG (joke)

If the atmospheric carbon dioxide growth rate is set by temperature, then RCPs are not real…

The integral of temperature equals atmospheric carbon dioxide levels going back to 1958 (when MLO CO2 measurements began):

http://www.woodfortrees.org/graph/plot/esrl-co2/from:1958/mean:12/derivative/integral/plot/hadsst3sh/from:1958/scale:0.253/offset:0.099/integral/plot/esrl-co2/from:1958/mean:12/derivative/trend/plot/hadsst3sh/from:1958/scale:0.253/offset:0.099/trend

The integral of temperature also equals atmospheric carbon dioxide levels going back to 1850 (Law Dome ice core + MLO):

http://www.woodfortrees.org/graph/plot/hadsst3sh/from:1850/scale:0.253/offset:0.099/integral

https://www.sealevel.info/co2.html

Temperature also correlates with the atmospheric carbon dioxide growth rate going back 500 years, at which point the relationship abruptly stops.

Difficult to see much of anything in a study based on representative concentration pathways if RCPs aren’t even real.

https://i0.wp.com/i90.photobucket.com/albums/k247/dhm1353/LawMob1.png

(Moberg temperature reconstruction compared with Law Dome)

afonzarelli wrote:

If the atmospheric carbon dioxide growth rate is set by temperature, then RCPs are not real…

It obviously DOES NOT when we have a spigot putting CO2 into the atmosphere.

Good god I will never understand why people can’t understand this.

David, can you read a graph? (Can you read three graphs?) The data is telling us that the atmospheric carbon dioxide growth rate correlates with temperature for 500 years (!!!)

Good god I will never understand why you can’t understand this.

No, it doesn’t.

If you’re in a bathtub, it can gain water if it rains from above, or lose water if there’s evaporation.

What happens to the bathtub level if you turn on the faucet?

For those of you who might be interested, I linked a table with thirty-nine named CMIP5 models ranked from worst to best in descending order by Difference ΔT, where that value is the difference between the calculated GMST change and the derived GMST change for the 1861-2005 period, as determined by CEEMDAN using KNMI data. The calculated change is the GMST change expected when the model response to forcing is correlated through the reciprocal of the climate resistance, ρ, and the forcing is the same for all models. The 14 models with the green background are responding in the historical 1861-2005 period with a high correlation of GMST change versus 1/ρ, much as the models do in the future 2006-2100 period. The 25 models with the red background are responding in the 1861-2005 period generally as though the forcing is different for these models; the correlation with 1/ρ is weaker than for the well-behaved 14 models and the slope is in the opposite direction.

https://www.dropbox.com/s/d6nipexo0yv6iv9/Difference_GMST_Historical_Period.pdf?dl=0

CO2 depends on the height of the oceans, the volume of green foliage and man’s polluting.

The Ice Age depends on the height of the oceans, the average surface temperature of the sun, and the fact that water reflects radiant heat.

Until you understand the two are separate, you will never understand we are in the first half of the new Ice Age.

And what’s your proof we’re in an Ice Age now?

(This should be good.)

yep, we are in an ice age and ice ages have interglacials and glacials.

The only excuse I have for my ignorance is I am old and senile. ENJOY

At the beginning of an Ice Age:

The surface of the earth covered by land is at its largest.

The green foliage is at its highest covering the earth.

The CO2 level is at its lowest.

The ice to the left of this point on the graph was formed in the first third or earlier of the previous Ice Age. When ice melts in very dry cold air the water flashes (sublimates) into a gas. The solids remain.

This point on the Vostok Ice Core is about 20,000 years ago + or -.

Since this point the new ice has risen about 250 meters. The CO2 level grew rapidly, the water level dropped. Finally, nature was overcome by the green foliage and the CO2 level started down. Then man came along and brought the level to present.

Nature, as this was happening, was building the Ice Shelf. Eventually the water level reached about 120 meters lower than present. The water was rising and the Ice Shelf began breaking off. The water level rose to its present level, where the heat required to melt the ice breaking off is equal to the excess heat reflected to the black sky. We are close to the point, in millennial time, where the ice stops breaking off and the oceans will start back down.

Robert Clark wrote:

At the beginning of an Ice Age:

The surface of the earth covered by land is at its largest.

The green foliage is at its highest covering the earth.

The CO2 level is at its lowest.

Huh? Atmospheric CO2 is lowest during the recent glacial maximums, not during the interglacials.

There is no indication whatsoever that we are near the beginning of an ice age. Past indications don’t apply in an anthropogenic world.

Jeez, what you wrote is bananas.

During past interglacials CO2 was *highest*, certainly higher than during the glacial maximums.


Kenneth, I never heard of climate resistance so do not pretend to understand your post. I note that none of the comments to date address it, perhaps for the same reason.

But I wonder what you think about the CMIP6 models running significantly hotter than CMIP5, in their future projections, apparently due to increased positive cloud feedbacks? Does your work relate to this?

See https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019GL085782?af=R#.XhE6r8kjlR0.twitter

David, the same analysis where I have used the climate resistance, ρ, and the equation ΔT=ΔF/ρ can also be carried out using the equation ΔT=(ΔF-ΔR)/λ, where λ is the feedback parameter with which perhaps more readers here are familiar. I used the climate resistance in order to avoid the noise of ΔR, which reduces the regression fits for the individual models. Recall that ρ=κ+λ, where κ is the ocean heat uptake efficiency.
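The equivalence of the two equations can be checked numerically; the κ, λ and ΔF values below are arbitrary illustrations, not values from any model:

```python
# Numerical check (illustrative kappa and lambda values) that dT = dF/rho and
# dT = (dF - dR)/lambda agree when dR = kappa*dT and rho = kappa + lambda.
kappa, lam = 0.6, 1.1            # W/m^2/K, assumed values
rho = kappa + lam
dF = 4.0                         # W/m^2, assumed forcing change

dT_resistance = dF / rho         # via climate resistance
dR = kappa * dT_resistance       # implied change in ocean heat uptake
dT_feedback = (dF - dR) / lam    # via the feedback parameter

print(f"{dT_resistance:.4f} K vs {dT_feedback:.4f} K")
```

The two routes give the same ΔT exactly; in practice the feedback-parameter route is noisier only because ΔR must be estimated from noisy data.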

With a number of CMIP6 models having higher ECS values than the CMIP5 models I will be interested in determining, when I can get my hands on the climate data, how these models hindcast the historical (observed) period. Someone (it might have been Hausfather) indicated that some of the CMIP6 models with a high ECS also had higher than observed GMST changes in the historical period and some had near the same changes.

Good point about the CMIP6 hot models. Fun fight to follow. How the IPCC handles them in AR6 will be interesting. I say “Greta got them” as their newly found hotness corresponds with the sudden rise of Action Now alarmism. Suspicious!

Some anomalous results in a correlation analysis of CMIP5 against temperature reconstructions.

https://tambonthongchai.com/2018/08/31/cmip5forcings/

Thanks for the link, chaamjamal. I have not read it in detail but on first glance I do not see any disagreements with my analysis.

Yes sir. It was not intended as disagreement.

Probably no chance these claims will be written up and submitted to peer review at a good journal, huh?

David, it would be great if you could make some content related comments. None of yours I found up to now satisfy this criterion.

+1

A previous post here at Climate Etc. linked to a study of CMIP6 models showing their climate sensitivity. I noted then that INMCM5 both replicates the HadCRUT4 past record better than the others and remains one of the lower-sensitivity models. A simplistic scenario is in the graph below:

https://rclutz.files.wordpress.com/2020/01/co2-gmt-1959-to-2019-obs-and-proj.png

Here is a look to century end as a baseline for context. No one knows what cooling and warming periods lie ahead, but one scenario is that the next 80 years could see continued warming at the same rate as the last 60 years. That presumes that forces at play making the weather in the lifetime of many of us seniors will continue in the future. Of course factors beyond our ken may deviate from that baseline and humans will notice and adapt as they have always done. And in the back of our minds is the knowledge that we are 11,500 years into an interglacial period before the cold returns, being the greater threat to both humanity and the biosphere.

Those who believe CO2 causes warming advocate for reducing use of fossil fuels for fear of overheating, apparently discounting the need for energy should winters grow harsher. The graph shows one projection similar to that of temperature, showing the next 80 years accumulating at the same rate as the last 60. A second projection in green takes the somewhat higher rate of the last 10 years and projects it to century end. The latter trend would achieve a doubling of CO2.

What those two scenarios mean depends on how sensitive you think Global Mean Temperature is to changing CO2 concentrations. Climate models attempt to consider all relevant and significant factors and produce future scenarios for GMT. CMIP6 is the current group of models displaying a wide range of warming presumably from rising CO2. The one model closely replicating HadCRUT4 back to 1850 projects 1.8C higher GMT for a doubling of CO2 concentrations. If that held true going from 300 ppm to 600 ppm, the trend would resemble the red dashed line continuing the observed warming from the past 60 years: 0.8C up to now and another 1C the rest of the century. Of course there are other models programmed for warming 2 or 3 times the rate observed.

A more complete discussion is here:

https://rclutz.wordpress.com/2020/01/28/i-want-you-not-to-panic/

Hi Kenneth,

Very interesting. Is there a difference in average calculated ECS for the two groups of models? (‘ideal’ in both historical and future periods versus ‘ideal’ only in the future period).

Steve, I do not have the ECS values at my finger tips but the climate resistance, ρ, that I have used in my analysis is a sensitivity measure as can be seen from the relationship ΔT=ΔF/ρ. It is more in line with TCR than ECS. For the groupings I made, the 25 and 14 model groups had mean 1/ρ values of 0.62 and 0.55, respectively. There is some overlap between groups and if I compare the top 12 from the 25 group and bottom 12 from the 14 group the means are 0.74 and 0.54, respectively. In other words the better behaved models in the group of 14 have lesser sensitivities, as measured by 1/ρ, than the worse behaving models.

I believe that a transient measure like 1/ρ is more appropriate when looking at model behavior in the historical period. I will, however, look up the ECS values for some of these models and attempt to answer your question. Not all the models for which I generated ρ values will have published ECS values.

Thanks Kenneth. A scatter plot of calculated model ECS versus your calculated “difference delta-T” might be informative.

Steve, I found ECS values for 13 models in the group of 25 and 12 in the group of 14 (well behaved). The mean ECS values for group of 25 and of 14 were 3.52 and 2.81, respectively.

There are some low ECS values in the group of 25 and some high ones in the group 14. I think these differences we see between the historical and future period for model behavior are influenced by modeler choices in how they might decide to compensate and how much to compensate for higher climate sensitivities in order to better reproduce observed GMST changes in the historical period. In other words there might well be a human element involved.

Thanks Kenneth.

Ken Fritsch:

It is an empirical approach that is applied to a data set, rather than a theoretical tool that requires assumptions. Using CEEMDAN on a time series with known noise and cycles results in a faithful separation of a secular trend.

“Known noise and cycles” are assumptions about noise and cycles. If they are required for CEEMDAN, then CEEMDAN can not properly be described as not requiring assumptions. Possibly this apparent contradiction has arisen from your trying to be succinct.

Thank you for the essay.

Matthew, by “known” I meant that I used CEEMDAN (and other methods) to extract trends from artificial series where I had put in time varying periodical variations (both period and amplitude), white and red noise and a known trend. The key is extraction of known trend in a series with these potential interfering components.

kenfritsch, thank you. Upon rereading I thought that was what you probably meant.

We do not know how many W/m2 of the longwave radiation emitted by the earth’s surface are absorbed by the atmosphere.

Why all this fuss about CO2. Simple calculations using data from Hitran show that the increase of water vapor has been about 10 times more effective than the increase of CO2 at ground level warming.

Measured water vapor trend has been increasing faster than possible from feedback. https://watervaporandwarming.blogspot.com

Ken: ΔT=ΔF/ρ is only correct when the heat capacity of the compartment where ΔT is being measured is effectively zero. In the real world, the heat capacity of the mixed layer and atmosphere can’t be ignored, but doing so doesn’t always cause a problem. When I plot dT vs dF for the historical period, I get a much better fit when I omit the radiative forcing from volcanic aerosols. The reason is that the heat capacity of the mixed layer and atmosphere is too large for temperature to fully respond to large changes in volcanic forcing within a year. The temperature change in any one year is a function – a fraction – of the imbalance at the TOA during that year, not a function of the change in forcing during that one year. This doesn’t cause a major problem in 1% pa experiments or during most of the historical record, where steady increases in forcing produce parallel changes in the imbalance at the TOA, but it does cause a problem with the large irregular forcing from volcanos and may cause problems when forcing stops rising (RCP 4.5) or falls (RCP 2.6).

Consider a single 1 W/m2 step increase in radiative forcing. Given the heat capacity of a 50 m deep mixed layer and the atmosphere, and ignoring heat transfer into the deeper ocean*, the initial radiative imbalance would warm the planet at an initial rate of 0.2 K/year. The rate of warming would exponentially decrease and approach a steady state warming of ΔT = ΔF/ρ = ΔF/λ, since we are temporarily assuming κ = 0. When a steady state is restored, ΔT would be +1 K or 0.5 K for λ = 1 or 2 W/m2/K. (These feedback parameters are roughly appropriate for ECS of 3.6 or 1.8 K, respectively, and roughly cover the likely behavior of the planet.) So the temperature of the atmosphere and mixed layer can rise to within about 1/6 to 1/3 of the steady state response in the first year, and the other 2/3 to 5/6 of the response would develop over about the next four or nine years. When we analyze data assuming warming in one year is a consequence of forcing change during the same year, we are oversimplifying.

*If heat is being taken up by the deep ocean, the fraction of the forcing change disappearing into the deep ocean can be subtracted from the forcing change and the slope plotting ΔT vs ΔF is ρ = κ + λ and I think the imbalance produced by a single step change in forcing is ΔF*[λ/(κ+λ)].

When one combines a series of annual step-function (or continuous) changes in forcing, one must remember that the imbalance driving warming is a function of the forcing change over the past half-decade to decade, not solely a function of the forcing change over the past year. The forcing change a few years ago contributes exponentially less to the current imbalance than the current forcing change, but it isn’t zero. Large and rapid changes in forcing produced by volcanic eruptions fade before their full effect on steady state temperature can be experienced, which is why some suggest that the effective forcing for volcanos is much less than the measured change in absorbed SWR.
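Frank's step-response numbers can be reproduced with a minimal one-box calculation. Assumed values: a 50 m mixed layer of seawater plus atmosphere, λ = 1 W/m2/K, κ = 0, using the analytic exponential solution rather than a numerical integration:

```python
import math

# One-box sketch of the step-forcing example above. Assumed values: a 50 m
# mixed layer of seawater plus atmosphere, lambda = 1 W/m^2/K (ECS ~ 3.6 K),
# kappa = 0 (no deep-ocean uptake), using the analytic exponential solution.
C = 1025 * 3985 * 50      # J m^-2 K^-1, heat capacity of ~50 m of seawater
year = 3.156e7            # seconds per year
F = 1.0                   # W/m^2, step increase in forcing
lam = 1.0                 # W/m^2/K, feedback parameter

tau = C / lam / year      # e-folding time, years (~6.5 yr)
T_eq = F / lam            # steady-state warming, K
print(f"initial warming rate ~ {F / C * year:.2f} K/yr, tau = {tau:.1f} yr")

# T(t) = T_eq * (1 - exp(-t / tau))
for t in (1, 5, 10, 20):
    T = T_eq * (1 - math.exp(-t / tau))
    print(f"t = {t:2d} yr: T = {T:.2f} K of {T_eq:.1f} K steady state")
```

After one year only a small fraction of the steady-state response has been realized, which is the point about volcanic forcing spikes: the forcing fades before the temperature can fully respond.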

Frank, thanks for the comment. It is relevant to what I presented. I was disappointed that I did not obtain more relevant comments. I know you have pointed to posters here that rather consistently make comments that are not related to the topic presented and I agree with you that it creates a lot of wasted band width.

The constancy of the climate resistance, ρ, in the regression of global ΔT versus forcing (with ρ being the slope) has been tested at lower rates of increasing forcing and reported by Gregory and P. M. Forster (2008), Gregory et al. (2015) and Stouffer and Manabe (1998). Gregory et al. discusses the reasoning for the failure of ΔT=ΔF/ρ when the rate of increase in F is sufficiently low. It involves the change in the ocean heat uptake efficiency, κ, where ρ=κ+α. The evidence in those papers indicates that the slower rates of forcing increases in the historical periods should give reasonably constant ρ values over those periods.

You are correct that where the forcing has leveled off towards the end of the RCP 4.5 in the future period and for earlier in the RCP 2.6 future period the dT/dF slope (from CEEMDAN trends) begins to change/fail as can be seen from a plot of T versus F. There is also a change in the historical period from the very early period to the remainder of the period. Those changes did not affect the difference relationship in the historical and RCP 4.5 periods that I could find that would change the conclusions of analysis.

The forcing due to volcanoes gets distributed into the trend when using CEEMDAN trend extraction.

In my analysis I am comparing (regressing) individual models within a defined scenario and time period where ideally the individual models should be reacting much the same and the differentiating driver should be the sensitivity as represented by ρ. I also carried out a similar analysis using the relationship ΔT=(ΔF-ΔR)/λ and obtained much the same results as with ΔT=ΔF/ρ. The differences were simply due to more noise when using ΔR.

From the analysis recently posted here by frankclimate, I was comparing the CMIP5 model change in R during the historical period with that in the observed record. At one time I thought the noisy observed R data reflected measurement noise and was surprised that models also showed much the same noise. In fact, the significant trend in the CERES observed data over what I believe was a 20-year period was never replicated or even approached in 108 CMIP5 simulations over the same period because of the noisy signal.

“It involves the change in the ocean heat uptake efficiency, κ, where ρ=κ+α.”

For consistency here that should have been ρ=κ+λ where λ (and α) is the feedback parameter.

Ken: Thanks for the reply. As a self-taught amateur, I have been thinking about our climate system in terms of a two-compartment model in which: λ tells us how much additional heat goes to space per degK of surface warming and κ tells us how much additional heat flows into the ocean BELOW the mixed layer in response to warming of the mixed. You may remember Paul_K analyzing Pinatubo using such a model at the Blackboard long ago (“Two Dogs that Did Bark in the Night”). In that model, we can’t ignore the heat capacity of the “surface compartment with thermometers” consisting of the mixed layer + atmosphere. For the first time, it dawns on me that everyone else is probably using a model where κ tells us how much additional heat flows into the ocean and the surface compartment consists only of the atmosphere (which has a negligible heat capacity). Given that the temperature of the mixed layer rises and falls seasonally with the temperature of the atmosphere above, my model may be more realistic. The point of my comment is to suggest that some of the inconsistencies that are found are due to ignoring the heat capacity of the mixed layer of the ocean.

The large negative forcing from volcanic aerosols comes from real measurements. We need to postulate (without a good reason?) that the effective forcing from such aerosols is much less than the observed forcing in order to produce a good fit to ΔT=ΔF/ρ. Why not use a better model instead of relying on such fudge factors? I’m not sure I’m capable of (or persistent enough for) carrying out such an analysis. Getting a value for ocean heat uptake below the mixed layer will be a complication.

Frank, I believe your posts fit well with the discussion in the paper titled: “Transient Climate Response in a Two-Layer Energy-Balance Model. Part I: Analytical Solution and Parameter Calibration Using CMIP5 AOGCM Experiments”

https://journals.ametsoc.org/doi/pdf/10.1175/JCLI-D-12-00195.1

That paper points to T=F/ρ where ρ=λ+κ being applicable to a one-layer model (the well-mixed layer) with the deep ocean having an infinite heat capacity. The following excerpt from the paper sums up the limitation of this model.

“Based on empirical relationships, Gregory and Mitchell (1997) and Raper et al. (2002) propose a formulation for the deep-ocean heat uptake, which is proportional to the surface temperature perturbation, H=κT. However, this formulation is not able to represent the equilibrium temperature response in the case of a step-forcing or a stabilization scenario because the deep-ocean temperature response is by definition neglected.”

I believe that using the T=F/ρ relationship and a one-layer model where the forcing (and temperature) is in a transient phase and is monotonically increasing works as a good approximation because it is outside the limitation boundaries suggested in the paper linked above. Empirically the relationship holds well for almost all the CMIP5 models in the 2006-2100 future period for the RCP 4.5, 6.0 and 8.5 scenarios and for 14 out of 39 models in the 1861-2005 historical period. It is the other 25 models in the historical period to which my analysis points with interest. It is these models that yield a negative regression slope for ΔT versus 1/ρ, indicative of differential forcing applied to reduce the temperature that those models’ ρ values would otherwise have produced.

Frank, there are other papers which you may have already read that have evaluated GMST change models with 1, 2 and 3 time constants along with a diffusion model (in one paper) for the abrupt 4XCO2 and 1% CO2 cases. I have had what I judged as good conversations with some of the authors of these papers and Nic Lewis in helping me understand the models better. I have linked those papers below. I do not necessarily agree with the conclusions of these papers but I do find the contents interesting and valuable.

“Slow climate mode reconciles historical and model-based estimates of climate sensitivity”

Cristian Proistosescu* and Peter J. Huybers

https://advances.sciencemag.org/content/3/7/e1602821

“Projections of the pace of warming following an abrupt increase in atmospheric carbon dioxide concentration” K Caldeira and N P Myhrvold

https://iopscience.iop.org/article/10.1088/1748-9326/8/3/034039/pdf

Ken: Thank you so much for the Geoffroy reference, which I just started looking at and the other two papers. I guess I shouldn’t be surprised that (so far) I see nothing about analyzing the behavior of our planet in terms of a two-layer EBM and obtaining a better value for λ and therefore ECS. However, I’ve barely scratched the surface on what you have given me.

These authors start by analyzing an abrupt 4XCO2 experiment. I’ve always been skeptical about such experiments, because they produce an extremely warm mixed layer floating (+1.5? K in one year, +5? K in less than a decade, IIRC) on top of a deeper ocean that hasn’t had much time for heat to penetrate more deeply. That makes the ocean much more stable to overturning and resistant to vertical heat transport. (When I’ve expressed my concerns about this subject to Nic at this blog, he notes that 1% pa and 4X experiments give similar estimates for climate sensitivities.)

I suspect you already realize that the big difference between analyzing future AOGCM projections in terms of EBMs and analyzing historic AOGCM hindcasts is that the hindcasts contain volcanic eruptions that can’t be properly modeled without taking the heat capacity of the mixed layer into account. There is another difference between a large step-function increase in CO2 and a volcanic eruption: the abrupt increase in CO2 produces an ocean that is more stable to overturning and resistant to vertical heat transport, while cooling of the mixed layer after a volcanic eruption makes the mixed layer cooler, less stable towards overturning and more capable of transporting heat vertically.

Further published evidence that including both convection and radiation in atmospheric thermodynamics yields an insignificant warming from CO2:

https://www.scirp.org/pdf/acs_2020011611163731.pdf

The alarming story of warming from CO2 depends on a radiation-only paradigm of atmospheric thermodynamics, which is not tenable.

Phil Salmon:

https://www.scirp.org/pdf/acs_2020011611163731.pdf

Thank you for the link.


Ken, could you possibly add the input file(s) for the regressions above to the dropbox folder?

I’ll put together a separate dropbox link sometime today with the data used for the posted plots and regressions.

Thank you.

Here is a link to an Excel file showing the data I used for the plots and regressions in my post. I had to rearrange the data to make it more readily understood. I hope that I made no errors in that process.

https://www.dropbox.com/s/sklkkmaju6xf1jo/Data_Climate_Etc_Post.xlsx?dl=0

Hi Ken,

Thank you for a very interesting article.

I would suggest however that you may be drawing the wrong inference from your results. I believe that they are demonstrating that it is inappropriate to apply your chosen model solution form to the historic dataset – because the variability of the associated historic forcing series means that the necessary assumptions which go into your model solution form are not satisfied.

In brief, your analysis is implicitly founded on the governing equation for a two-body model. The domain of validity of this governing equation is then somewhat limited by the unnecessary assumption of an infinite acting deep ocean. The validity is then further limited by your not applying a general solution form, but instead selecting a particular memoryless solution form which quite explicitly requires the validity of an assumption of a linear increase in forcing with time. Deviation from this requirement introduces model error into your results. The solution you are using can be readily shown to be much less valid when applied to the historic period forcings than when it is applied to future forcing projections. A consequence is that, even if there does exist a near-perfect consistency in the application of your chosen governing equation to the historic and the future performances (which there isn’t, because of the dubious assumption of an infinite acting deep ocean), there is still no such consistency to be expected in the application of your chosen solution form to that governing equation; we would for mathematical reasons expect a greater spread of results arising from “invalid assumption error” when your solution is applied to the variable, nonlinear forcing in the historic period – as in, for example, your Figure 4 – and hence we cannot readily conclude anything about consistency in the application of the governing equation itself or of its parameters.

As you noted above in the interchange with Franktoo, the governing equation which provides the foundation for your solution is initially based on a two-body model; it is then further assumed that the deeper ocean body can be treated as an infinite heat sink (of invariant temperature) which supplies a flux into the mixed layer which is directly proportional only to surface temperature change. The resulting governing equation can be written as:-

C dT/dt + kT = F – λT    (Eq 1)

where T is surface temperature change from time t = 0, C is the mixed layer heat capacity expressed in watt-years/deg C, and λ and k are as in your description above.

Putting ρ = k + λ the governing equation becomes:-

CdT/dt = F – ρ*T Eq 1A

For a fixed step forcing F, the solution for surface temp change, T, as a function of time is given by:-

T(t) = F/ρ * {1 – exp(-ρt/C)}    (Eq 2)

We can immediately see from Eq 2 (as you noted above) that a major limitation of this governing equation is that it can, at best, be applied only to transient behavior, since its solution for a fixed step forcing does not asymptote to the theoretical ECS, (F/λ), at infinite time.
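As a quick numerical sanity check on the step-forcing solution (with illustrative parameter values only, not calibrated to any model), a forward-Euler integration of Eq 1A reproduces T(t) = (F/ρ)(1 – exp(-ρt/C)):

```python
import math

# Forward-Euler check of the step-forcing solution of C dT/dt = F - rho*T.
# Parameter values are illustrative only.
C, rho, F = 8.0, 2.0, 3.7        # W yr m^-2 K^-1, W m^-2 K^-1, W m^-2
dt, years = 0.01, 50.0
T = 0.0
for _ in range(int(years / dt)):
    T += dt * (F - rho * T) / C           # explicit Euler step
analytic = (F / rho) * (1.0 - math.exp(-rho * years / C))
print(round(T, 3), round(analytic, 3))    # both close to F/rho = 1.85
```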

Nevertheless, the general solution for any arbitrary forcing series can be obtained from Eq 2 by a standard convolution. This would allow a more legitimate comparison of the behaviour of your historic and future datasets. But you are not doing this. Instead, you are using a solution of Eq 1A that derives from a highly specific assumed boundary condition, namely a forcing series which increases linearly with time. To consider the consequences, let us examine the solution for this boundary condition.

Rearranging Eq 1A and applying an integrating factor, exp(ρt/C), we obtain:-

d/dt (exp(ρt/C) * T) = exp(ρt/C) * F(t)/C Eq 3

Now we impose the boundary condition that the forcing series increases linearly with time:- F(t) = βt

Integrating both sides of Eq 3 from 0 to t, we obtain the analytic solution for the boundary condition of a linear increase in forcing with time:

T(t) = βt/ρ – Cβ/ρ^2 [1 – exp(-ρt/C)] Eq 4

Substituting F(t) = βt, we can rewrite this as :-

T(t) = F(t)/ρ – Cβ/ρ^2 [1 – exp(-ρt/C)] Eq 4A

Inspection of Eq 4A shows a curve which passes through the origin, and which eventually asymptotes to a straight line approximation of the form

T(t) = F(t)/ρ – Cβ/ρ^2 Eq 4B

Eq 4B is then the basis for an expectation that a crossplot of F against T should yield a gradient of 1/ρ.

The important thing to note is that the more that the forcing series deviates from the assumption of a linear increase with time, the greater the introduction of a difficult-to-quantify “model assumption error” in the application of Eq 4B whether it is used to abstract estimates of ρ or to predict temperature change. The future forcing trajectories might conform tolerably well to the assumption of a linear increase in forcing with time. The historic forcing series generally do not. Hence, it is to be expected that your analysis of the future series has a smaller contribution of model assumption error from this source than does your analysis of the historic series. We should therefore expect the latter to display a greater spread of results in temperature prediction and a poorer correlation.
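The ramp solution can be checked the same way as the step case, including the asymptotic offset Cβ/ρ² of Eq 4B; the parameter values below are again purely illustrative:

```python
import math

# Forward-Euler check of the linear-ramp solution of C dT/dt = beta*t - rho*T.
# Parameter values are illustrative only.
C, rho, beta = 8.0, 2.0, 0.04    # beta: ramp rate in W m^-2 yr^-1
dt, years = 0.01, 100.0
T = 0.0
for step in range(int(years / dt)):
    t = step * dt
    T += dt * (beta * t - rho * T) / C
# Eq 4: T = beta*t/rho - (C*beta/rho^2)*(1 - exp(-rho*t/C))
eq4 = beta * years / rho - (C * beta / rho**2) * (1.0 - math.exp(-rho * years / C))
print(round(T, 3), round(eq4, 3))   # asymptotic offset C*beta/rho^2 = 0.08 K
```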

It is possible to abstract parameters from the abrupt 4X increase in CO2 results plus the 1% per year experiments and use them in a two-body model formulation to emulate the history and prediction period of AOGCMs by superposition or convolution (without the confusing assumption of an infinite-acting deep ocean). Such studies reveal no apparent disconnect in parameter estimates between the future and the historic emulations. In particular, no parameter switches are needed between past and future. This strongly suggests that your results are largely controlled by the validity, or lack thereof, of the assumptions necessary to support your choice of model solution. Your results do IMO serve to highlight why this particular model form should generally be avoided.

There was some related discussion of this on Lucia’s (many years ago now!):-

http://rankexploits.com/musings/2012/ocean-heat-uptake-efficiency-chicken-laying-eggs-and-infinite-silliness/

My best wishes, Ken, and I sincerely hope that this criticism does not discourage you in any way from further challenging enquiry.

Paul

The paper linked first in the most recent Week in Review thread: On the Climate Sensitivity and Historical Warming Evolution in Recent Coupled Model Ensembles by Clare Marie Flynn and Thorsten Mauritsen contains comments that are relevant to my post/thread here.

The following comment in the abstract relates well to what I (along with many others) have proposed as a potential cause of the CMIP5 model differences between the historical and future periods. Aerosol forcing is negative, and while relatively smaller than the positive GHG forcing in the historical period, it becomes a convenient tool to compensate for the amount of warming that would otherwise occur given an individual model’s climate sensitivity. Carrying that negative forcing into the future period has much less influence on the relationship between warming and a measure of climate sensitivity (in the example, ΔT versus k/ρ, where k represents the same forcing change for all models) because of 2 factors: (1) the GHG forcing becomes greater in the future period and (2) the negative aerosol forcing declines in the future period in all scenarios, at least for CMIP5.

“Somewhat surprisingly, CMIP6 models exhibit less historical warming than CMIP5 models; the evolution of the warming suggests, however, that several of the models apply too strong aerosol cooling resulting in too weak mid 20th Century warming compared to the instrumental record.”

The following comment in this paper succinctly states the importance that modelers place on getting the model warming close to the observed warming and why it is an important part of validation. As references in my post point out, there are “trade-offs” that can be made to get the historical warming in line with the observed for models with high sensitivities that would otherwise run hot.

“The instrumental record warming is the prima facie test of climate models: if models are not able to reproduce the history of warming then they do not represent a credible hypothesis of how the climate system works. However, the warming in a model is a result of both climate change feedbacks, radiative forcing, deep ocean heat uptake and pattern effects and therefore modellers can trade off these factors to obtain an overall warming in line with observations (Kiehl, 2007). Some modelling centres use this explicitly to tune their models (Hourdin et al., 2017; Mauritsen et al., 2019) whereas others state they do not do this (Schmidt et al., 2017). In either case, as such representing historical warming is a necessary, but insufficient validation of a climate model.”

The paper also discusses the relevance of using TCR as a measure of warming in the historical period and gives a relationship for it which is in line with what I used in my analysis. Note that, since λ is taken as negative, the forcing is given a negative sign to keep TCR positive, and that F2x is F2XCO2.

“A central metric that incorporates several of the factors relevant for historical warming is the transient climate response (TCR). TCR is computed from an idealized simulation with a gradual 1% per year CO2 increase as the warming around the time of doubling. Just as ECS, also TCR has increased in CMIP6 to a mean of 1.98 K (range 1.30-2.91 K) compared to the CMIP5 mean of 1.75 K (0.96-2.58 K), as seen in Fig. 9. One can obtain an approximate estimate of TCR in terms of physical bulk properties of the climate system (Jiménez-de-la Cuesta and Mauritsen, 2019):

TCR ≈ -F2x /(λ-εϒ)

“where the product εϒ is equal to 0.93 Wm-2K-1 with an uncertainty range of 0.54-1.32 Wm-2K-1 in CMIP5 (Geoffroy et al., 2013); ε is the deep ocean heat uptake efficacy representative of forced temporary pattern effects, and ϒ is the deep ocean heat uptake coefficient.”
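As a rough arithmetic check of the quoted relation, plugging in F2x ≈ 3.7 W m-2 and a typical CMIP5-magnitude feedback of λ ≈ -1.1 W m-2 K-1 (both my assumed values, not taken from the paper) together with the quoted εϒ = 0.93 lands near the quoted CMIP5 mean TCR:

```python
# TCR ≈ -F2x / (lambda - eps*gamma), with lambda taken as negative.
# F2x and lambda below are assumed illustrative values; eps*gamma = 0.93
# is the CMIP5 value quoted from Geoffroy et al. (2013).
F2x, lam, eps_gamma = 3.7, -1.1, 0.93
TCR = -F2x / (lam - eps_gamma)
print(round(TCR, 2))   # ~1.8 K, close to the quoted CMIP5 mean of 1.75 K
```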

Ken: If I remember correctly, aerosols have been falling since about 2000. Therefore we are approaching four decades over which there is no overall change in aerosol forcing. It may be possible to tease out the use of excessively negative aerosol forcing by comparing warming over 1960-2000 (where aerosols increased dramatically) with 1980-2020 (where the increase in aerosols may be negligible).

Another way to prevent too much warming during the historic period is to send more heat into the ocean.

Frank, the total aerosol forcing (from columns 41 + 48) in the link below, which prescribes the RCP forcings for CMIP5, shows the negative forcing going more negative at a fairly steady rate from 1950 to 2005, after which it steadily declines in magnitude to 2100. That would give a model (or modeler) plenty of leeway to significantly influence GMST changes in the historical period (1861-2005).

http://www.pik-potsdam.de/~mmalte/rcps/

Heat going into the ocean is a possibility for influencing GMST changes in the historical period. Using the EBM equation as I did in conjunction with ΔT=ΔF/ρ would have, however, shown that avenue of influence. It did not.

Ken wrote: “the total aerosol forcing (from columns 41 + 48) from the link below that prescribes the RCP forcings for CMIP5 shows negative forcing goes more negative at a fairly steady rate from 1950 to 2005 where at 2005 it starts to decrease beyond that year steadily to 2100.”

My comment was based on what I vaguely remembered about the total aerosol values from a spreadsheet in the WG1 AR5 archive. (IIRC, Nic pointed me to this source.) The Excel worksheet with this data was titled “had4_krig_annual_v2_0_0.webarch”. I don’t know if these values disagree with the ones you are looking at or disagree with RCP scenarios.

1940: -0.354
1950: -0.386
1960: -0.546
1970: -0.719
1980: -0.751
1990: -0.873
1995: -0.897
2000: -0.922
2005: -0.911
2010: -0.900
2020: ?

As you can see, by 2020 aerosol forcing (W/m2) might be little changed from 1990. If models use too negative an aerosol forcing, BASED ON AEROSOLS ALONE, they should show too little warming before 1990 and too much warming after 1990. With all of the other complications (volcanic eruptions and, surprisingly, a contribution from the solar cycle), it may be impossible to unambiguously detect this. Detection is your area of expertise.

Ignoring transient volcanoes, total forcing has risen about 0.33 W/m2/decade since 1970, but only about 0.1 W/m2/decade before then. Since 1970, (0.33 W/m2/decade)/(0.19 K/decade) = 1.74 W/m2/K, giving an ECS of about 2 K. By 2020, we will have three decades with little change in aerosol forcing and a lower probability that negative aerosol forcing is responsible for biasing estimates of climate sensitivity. (However, given the lag associated with the heat capacity of the mixed layer, it would be better to use periods of solar cycles instead of decades. A solar cycle of amplitude 0.1 W/m2 has a non-trivial impact on forcing rising at 0.33 W/m2/decade.)
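Frank’s back-of-envelope numbers can be reproduced directly; the value F2x ≈ 3.7 W m-2 used for the ECS step is my assumption, not stated in the comment:

```python
# Forcing trend / warming trend gives an effective climate resistance,
# and an ECS-like value follows if F2x ≈ 3.7 W m^-2 (assumed).
dF_per_decade, dT_per_decade = 0.33, 0.19   # W m^-2/decade, K/decade
rho_eff = dF_per_decade / dT_per_decade
print(round(rho_eff, 2))        # ~1.74 W m^-2 K^-1
print(round(3.7 / rho_eff, 1))  # ~2.1 K, roughly the "about 2 K" quoted
```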

Frank, I should be clearer about the trajectory of the prescribed total negative forcing due to aerosol in the CMIP5 models. It goes steadily more negative from 1950 to 2005 and steadily becomes less negative from 2006 to 2100.

That forcing is, however, only the prescribed forcing from aerosols and not necessarily what the individual models produce. We know from my analysis and others that the overall forcing in the historical period varies considerably across many of the individual CMIP5 models. There are a number of papers referenced in my post that discuss how that difference is most probably aerosol and aerosol-cloud related.

A question in my analysis is how individual models’ seemingly different responses between the historical and future periods in ΔT=ΔF/ρ, or alternatively ΔT=(ΔF-ΔR)/λ, can be explained in terms of negative forcing. In the historical period, if an individual model has a much greater (or lesser) negative aerosol forcing than the prescribed value, the ΔT=ΔF/ρ equation will give a lower correlation than if all the models were responding to the same (prescribed) forcing. It is that correlation difference that differentiates the historical and future periods in my analysis. The assumption that the greater (or lesser) negative forcing gets carried over from the historical period into the future period then requires an explanation of the excellent correlations that are obtained from ΔT=ΔF/ρ and ΔT=(ΔF-ΔR)/λ for the individual models in the future period.

A qualitative explanation of that carry-over effect is that: (1) the prescribed negative aerosol forcing gets less negative in the future period, so whatever negative aerosol forcing is associated with an individual model, be it higher or lower than the prescribed value, would also become less negative and thus have a lesser effect on modifying the responding GMST; and (2) the ratio of the increasing positive forcing from GHGs to the negative aerosol forcing is increasing, so the negative aerosol forcing has a lesser proportional effect on modifying the responding GMST.

I know that qualitatively this explanation works but I have not to date put together a quantitative one.

Judith Curry, I have another post in moderation. It appears that moderation is enforced whenever I list a paper and its authors.

Too many hyperlinks will trigger moderation

curryja | February 19, 2020 at 5:27 pm

“Too many hyperlinks will trigger moderation”

Actually my posts with 3 hyperlinks have made it through without moderation. It would appear that what triggered the recent moderations was not hyperlinks – there were none – but rather spelling out the title of a paper along with the authors. Both times it was the paper and authors in the first link of your Week in Review post.

gremlins

Paul, thanks for the feedback – as eliciting feedback is why I posted my analysis. I have not had time to read your post in detail and I will reply after I do that.

For now I can only remind you that in my analysis there were approximately 1/3 of the models where the relationship of warming to 1/ρ was very much the same in the historical period as that for almost all the models in the future period. I have referenced papers in my post/analysis that used the relationship ΔT versus ΔF/ρ as an approximation; one of them applied it to the historical period, where the correlations were high when the differences in applied forcing for individual models were taken into account. Not included in my post was the analysis I did using the relationship from the energy budget model where,

ΔT=(ΔF-ΔN)/λ, and I obtained results much the same as using ΔT=ΔF/ρ.

I would not at all question that AOGCM climate models would be consistent between the historical and future periods under a 1% CO2 increase or any other forcing where the applied forcing is known and the sensitivities of the individual climate models are taken into account. In fact that is a premise of my analysis in showing that the implied forcing on a percentage basis for the individual models in the historical period varies considerably more than in the future period, or than it would in the ideal case of the same forcing applied to all models.

Papers using the ΔT ≈ ΔF/ρ relationship refer to its validity being based on the forcing increasing monotonically and not necessarily linearly. Empirically the relationship can be tested by a plot of T versus F, which should be close to a straight line. After taking the CEEMDAN trends for T and F, that plot gives near-straight lines for most of the historical period and for the early 2/3 of the RCP 4.5 and the entire RCP 6.0 and 8.5 future periods.

The inconsistencies between the historical and future periods are measured indirectly by the warming-to-forcing relationship of the models within a given period, which is different from a direct comparison. The ideal forcing need only be the same for all models, and the ρ values need only be consistent with regard to sensitivity within the time period and scenario.

The degree of approximation of a reduced layer heat model needs to be considered in light of the time constants for the alternative models. I have data for the CMIP5 models using 1, 2 and 3 time constants and a diffusion model.

Paul, I do not disagree with what you wrote and formulated here or in the post at the Blackboard. I would disagree with you if you are implying that the approximation ΔT≈ΔF/(λ+κ) does not hold within the boundaries of a monotonically increasing forcing and the purposes of my analysis.

I refer you to Figure 9(a) in the linked Forster et al. (2013) paper. The forcing change for the historical period for individual CMIP5 models was determined using the energy budget model ΔT=(ΔF-ΔN)/λ, where ΔT and ΔN were taken from the model simulations and λ was from the 4XCO2 experiment. For the historical period, the value of the forcing change divided by ρ for the individual models was then plotted against the temperature changes for the individual models. The regression yielded an r-squared value of 0.76.

https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1002/jgrd.50174

The most comprehensive paper that I have read concerning heat models is in the link below. I see where such models were discussed (in theory?) in the Blackboard post that you linked above. The paper shows results for models with 1,2,3 time constants and for a diffusion one.

https://iopscience.iop.org/article/10.1088/1748-9326/8/3/034039/pdf

Figure 5 shows a plot of temperature versus time for 5 different constant exponential rates of CO2 increase, ranging from 4% per yr to 0.25% per yr. The plots appear linear for all rates of forcing. I assume a plot of forcing against time would also be linear. These data do not address the linear versus monotonically increasing issue (which I do not consider an issue based on my own analysis) but do show a constant slope (1/ρ value) for a wide range of forcing rate increases.

https://pdfs.semanticscholar.org/aaa7/ff0035b3954a037a19099f3def5d3ce902a1.pdf

Hi Ken,

You wrote: ” I would disagree with you if you are implying that the approximation ΔT≈ΔF/(λ+κ) does not hold within the boundaries of a monotonically increasing forcing and the purposes of my analysis. ”

In a certain sense, this relationship never holds. If we start with a multibody formulation as an emulator of GCM response, an immediate model error is introduced by the approximation that total feedback, λ, is a constant. The majority of models show some curvature on a Gregory plot for abrupt 4X CO2, but let us ignore that for the moment. A much larger model error is introduced as soon as the approximation, deep flux = κT, is introduced to yield the constant-ρ model. To see this immediately, look at the temperature vs time plots in Figure 5 of Ken Caldeira’s paper which you reference. These all show curves, and not straight lines. It takes decades for most of the models to asymptote to a straight line of gradient β/ρ. The curvature evident in the “observed” GCM data is perfectly compatible with prediction from a multi-body emulation model, but not compatible with a ρ = constant assumption.

So we are not talking about whether this approximation is “correct”, but rather about the amount of unquantified error that it introduces for any application.

I am asserting that the amount of unquantified error it carries when applied to the historic period is much larger than when it is applied to a 1% p.a. CO2 growth case or a prediction that is based on a close-to-linear forcing increase over most of the period.

“I refer you to a plot in the linked Forster et al. (2013) paper Figure 9 (a). The forcing change for the historical period for individual CMIP5 models was determined using the energy budget model ΔT=(ΔF-ΔN)/λ where ΔT and ΔN were taken from the model simulations and λ was from the 4XCO2 experiment. For the historical period the value of the forcing change divided by ρ for the individual models was then plotted against the temperature changes for the individual models. The regression yielded an r-squared value of 0.76.”

The Forster (2013) data in Figure 9 show a spread of predicted temperature ranging from about 0.4 to 1.9 deg C. The ΔF/ρ vs T plot shows that this model explains (only) about 76% of the temperature variance. However, ΔF on its own explains 51% of the variance, while ρ on its own makes a negligible explanatory contribution. He should have plotted 1/ρ for more clarity, but it would have shown a similarly small or negligible contribution to explaining the temperature variance. This is in fact presenting similar results to your Figure 4. You will recall that Forster went on to produce a comically bad paper (Marotzke and Forster 2014) wherein he concluded, using the same model assumptions, that climate sensitivity made only a small contribution to temperature prediction!

“Papers using the ΔT ≈ ΔF/ρ relationship refer to its validity being based on the forcing increasing monotonically and not necessarily linearly. Empirically the relationship can be tested by a plot of T versus F where the slope should be close to a straight line.”

Estimates of κ and ρ show dependence on forcing history. This is easily demonstrated by emulating results using a less assumptive multibody, finite ocean emulator (without the assumption of constant ρ) and then testing the resulting abstracted parameter values. Even with an idealised linear increase in forcing, estimates of κ come from force-fitting a straight line to a curve (again see curvature in the 1% p.a. results from Ken Caldeira for empirical confirmation of this); the fitted gradient of a forcing vs temperature plot tends to get larger if the linear forcing increases more rapidly. More importantly, the more that you deviate from the linear increase in forcing, the more model error is introduced. And little comfort can be taken from the fact that a crossplot of forcing vs temperature shows a high Pearson correlation. Any near-monotonic series will show a high Pearson correlation with any other near-monotonic series. This does not mean however that an abstracted gradient has physical validity.
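Paul’s caution about Pearson correlation between monotonic series is easy to demonstrate; the two series below are chosen arbitrarily and have no physical connection to each other:

```python
import numpy as np

# Two unrelated but monotonically increasing series over the same "time".
t = np.linspace(0.0, 1.0, 200)
a = t                    # e.g. a linearly increasing forcing
b = np.exp(3.0 * t)      # an unrelated monotonic series
r = np.corrcoef(a, b)[0, 1]
print(round(r, 2))       # high correlation despite no physical link
```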

Ken,

I should have added a further important point to my last comment to specifically address this point you made:-

“Empirically the relationship can be tested by a plot of T versus F where the slope should be close to a straight line.”

If a crossplot of T vs F is NOT close to a straight line, then I agree that this should unambiguously cause rejection of the model. So it does constitute a necessary condition for model validation. However, it is NOT a sufficient condition. If I consider a single-body model under a linear increase in forcing, then F vs T asymptotes to a straight line. If I consider a two-box model or a three-body model or a diffusive model, I find a similar result, although the time it takes to asymptote to a straight line will vary with the ocean model used. In all of these latter instances, I do not have to make the assumption that deep flux = kT. Hence the existence of something that looks like a straight line on a T vs F plot is not a sufficient condition for model validation.

If you examine Figure 1 in the Lucia article I referenced above, you will see that F vs T asymptotes to a straight line fairly quickly. This is based on a two-box model with a finite ocean. The same model shows the deep flux as a curve when plotted against T, and not a straight line.
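A minimal two-box sketch along the lines Paul describes (finite deep ocean, no deep flux = κT assumption; all parameter values illustrative, not calibrated to any GCM) shows the F-versus-T gradient settling quickly after the fast-mode spin-up and then drifting only slowly as the deep ocean warms:

```python
# Two-box EBM under a linear forcing ramp:
#   C  dT/dt  = F(t) - lam*T - gam*(T - Td)
#   Cd dTd/dt = gam*(T - Td)
# lam and gam are used here as positive magnitudes. Values illustrative.
C, Cd = 8.0, 100.0       # mixed-layer / deep-ocean heat capacities (W yr m^-2 K^-1)
lam, gam = 1.1, 0.7      # feedback and deep-exchange coefficients (W m^-2 K^-1)
beta, dt = 0.04, 0.01    # ramp rate (W m^-2 yr^-1) and time step (yr)
T = Td = 0.0
pts = []
for step in range(int(120 / dt) + 1):
    t = step * dt
    if step % int(20 / dt) == 0 and step > 0:
        pts.append((T, beta * t))          # sample (T, F) every 20 years
    dT = dt * (beta * t - lam * T - gam * (T - Td)) / C
    Td += dt * gam * (T - Td) / Cd
    T += dT
# Successive F-vs-T gradients: near-straight line, with a slow downward
# drift as the deep ocean warms (no constant-rho assumption needed).
grads = [(pts[i + 1][1] - pts[i][1]) / (pts[i + 1][0] - pts[i][0])
         for i in range(len(pts) - 1)]
print([round(g, 2) for g in grads])
```

The slow drift in the gradient is the point: an abstracted 1/ρ from such a plot depends on the forcing history, even though the crossplot looks nearly straight.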

Paul, I am in the process of putting together the results of an analysis using the relationship of ΔT=(ΔF-ΔN)/λ in comparing the historical and future periods GMST changes much the same as I did using ΔT=ΔF/ρ. The results will have the same conclusions.

Telling for me is that 14 of the 39 models I looked at had similarly very high correlations of ΔT versus 1/ρ in the historical and future periods when regressed as a group. The remaining 25, when regressed as a group, had a negative correlation for this relationship in the historical period. The implication for the group of 25 is that the modelers used a higher negative forcing from aerosols to reduce the GMST warming in the historical period. Keep in mind that in the future period the correlation of ΔT versus 1/ρ is very high when the entire group of 39 is regressed. Such an occurrence in the historical period is very unlikely to arise from random error in the model used, or from applying it outside the boundaries where the model is a good approximation.

I have put together the regression results of using the EBM equation ΔT=(ΔF-ΔR)/λ in place of ΔT=ΔF/ρ to show differences between the historical (1861-2005) and future (2006-2100) periods. The differences imply that forcing was used, or at least had the effect, in a majority of models of making the GMST changes in the historical period smaller than they would have been had each model responded with its intrinsic climate sensitivity to the same forcing.

The results, which I have linked below, are very much in line with those obtained using ΔT=ΔF/ρ, and the conclusions drawn from them are also very much the same, in that:

1. The correlation across models between ΔT and (ΔF-ΔR)/λ is quite high and positive in the future period, and low and negative in the historical period.

2. The models can be ranked in the historical period by a *Difference ΔT such that 2/3 of the models have a significant negative slope when the derived ΔT is regressed against (ΔF-ΔR)/λ, while the other 1/3 exhibit behavior more like that of the models in the future period, with a significant positive slope.

3. When the Difference ΔT for the historical period is regressed against the Difference ΔT for the future period for the RCP 4.5 and 6.0 series, the slope is negative, whereas a regression of the Difference ΔT between RCP 4.5 and RCP 6.0 within the future period gives a slope that is positive and very significant.

*Difference ΔT is the difference between a Calculated ΔT, computed as it would be with the same forcing applied to all models, each responding with its intrinsic climate sensitivity, and the ΔT derived from the published GMST data for that model.
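Under the ΔT=k/ρ relationship, the Difference ΔT diagnostic can be sketched as follows. The ρ and published-ΔT values below are hypothetical placeholders, and the common forcing k is fit by least squares through the origin:

```python
import numpy as np

# Sketch of the "Difference ΔT" diagnostic under ΔT = k/ρ.
# The ρ values and published ΔT below are hypothetical placeholders,
# not actual CMIP5 results.
rho = np.array([1.6, 1.9, 1.3, 2.2, 1.1])           # climate resistance, W/m^2/K
dT_published = np.array([0.9, 0.7, 1.1, 0.6, 1.4])  # ΔT from each model's GMST series, K

# With the same forcing applied to every model, ΔT = k/ρ, so fit the
# common constant k by least squares through the origin on x = 1/ρ.
x = 1.0 / rho
k = np.sum(x * dT_published) / np.sum(x * x)
dT_calculated = k / rho

# Difference ΔT: calculated (same-forcing) ΔT minus published ΔT.
difference_dT = dT_calculated - dT_published
print(np.round(difference_dT, 3))
```

A model whose published warming falls short of its intrinsic-sensitivity response shows a positive Difference ΔT; ranking models by this quantity is what separates the two historical-period groups described above.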

The correlations using the EBM equation, while high between ΔT and (ΔF-ΔR)/λ in the future period, are not as high as their counterparts using ΔT versus ΔF/ρ. I attribute this to the addition of the very noisy R series (net TOA R) when using the EBM equation. For example, the average plus/minus 95% confidence intervals are 0.12 for T, 0.20 for F and 0.28 for R. Taken as a ratio against the average trend, R is relatively much noisier than T or F. I have included some typical R series plots in the link below.

https://www.dropbox.com/s/im6200e4tls23wg/Using_EBM.pdf?dl=0
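The relative-noise point can be illustrated by comparing the 95% confidence interval of an OLS trend against the fitted change itself, for a quieter and a noisier synthetic series. The noise levels are assumed for illustration; these are not the actual T, F or R series:

```python
import numpy as np

# Illustrative check of the relative-noise argument: fit an OLS trend to a
# synthetic 95-year series and return the ratio of the 95% CI of the fitted
# change to the change itself. Noise levels are assumed, not actual data.
rng = np.random.default_rng(0)
t = np.arange(95)  # a 95-year "future period"

def rel_ci(trend_per_yr, sigma):
    y = trend_per_yr * t + rng.normal(0, sigma, t.size)
    X = np.vstack([np.ones_like(t), t]).T
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (t.size - 2)                    # residual variance
    se_slope = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))  # std. error of slope
    change = beta[1] * (t[-1] - t[0])                    # fitted total change
    ci = 1.96 * se_slope * (t[-1] - t[0])                # 95% CI on the change
    return ci / abs(change)

ratio_clean = rel_ci(0.02, 0.1)   # T/F-like: modest noise
ratio_noisy = rel_ci(0.02, 0.5)   # R-like: much larger noise
print(round(ratio_clean, 3), round(ratio_noisy, 3))
```

The noisier series has a much wider CI relative to its trend, which is why adding R should be expected to degrade the EBM-equation correlations.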


Global Temperature anomalies

The problem with land- and ocean-based reconstructions of global temperature and global temperature anomaly maps is very basic: why do them at all?

This is a serious question, and it has some complex answers that lead to more complex questions.

The main answer would be to act as an adjunct to, and a back-up checklist for, the satellite systems, which give much greater coverage, specificity and accuracy than a limited, stone-age measuring and recording system can. Satellite systems are the backbone of all weather estimation, reporting and prediction. They are the only reasonably accurate source of a true current global temperature, depending on the criteria you choose. None of this can be done by land- and sea-based systems that record individual sites as events occur, with poor real-time transmission of data from a limited number of sites.

Satellite problems and strengths.

Satellite collection of data does have problems at the local level; a bit like quantum theory, when you drill down to the size of raindrops you lose the ability to measure the actual micro events, such as precipitation and surface temperature. This is more than made up for by the ability to reach inaccessible areas all around the globe, to chart, map and check cloud formation and coverage over land and sea, and to obtain temperatures from all parts of the visible ocean.

Satellites do drift, and adjustments have to be made for the effects on the wavelengths they record. Satellites also need confirmation of temperature readings at specified locations, which in turn helps in adjusting the settings to get the best match. What satellites additionally offer is GPS positioning of all places: heights, depths, elevation and distance from the centre of the earth, all of which affect temperature and air flow. Mentioning air flow raises the fact that the jet streams, polar vortices and other important upper-air flow patterns are assessed, as is the moisture content of the air masses. Similar data can be gathered, to some extent, on ocean currents at a superficial level.

Thermometers and a standard.

To come back to the question of assessing the global temperature, or more correctly a global temperature anomaly: how many properly functioning, well-sited thermometers do you need?

This raises the problem of defining an average global temperature.

The answer, strangely, is just two.

The resultant temperature would be a bit like the El Niño/La Niña temperature patterns. It would run in a trough between summer and winter conditions. It would have times when weather patterns conspired to produce seemingly ridiculous departures from the norm, but over time it would give a reasonable approximation of both the anomaly change and the average temperature at those sites.

Could you improve the overall accuracy by the choice of site position? A site at the equator differs from one at the poles in the amount of seasonal variation possible in the temperature range and in the actual average temperature. Two sites 10 kilometres inland on a low-lying plain, at say 30 metres elevation, on the Tropics of Cancer and Capricorn would obviate the problems of polar and equatorial extremes and also allow for a balance between the more land-filled NH and the water-based SH. Two further pairs would allow back-up and comparison.

Six continents, six countries, state-of-the-art thermometers and restriction zones of a kilometre, and you could have an agreed, reliable, international-standard global temperature for all time. This would still not be as accurate and reliable as what the satellites can do without the blink of an eye, but it would standardize the issues, as it would give a much longer-term reproducible result than satellites, with their limited life spans, can offer.

The global temperature: what it is and why it is important.

Coming back to the global temperature, and the fact that a reasonably accurate version can currently be given only by satellite data, the question is: why is it useful? The answer is that it is needed for all projections of earth weather, climate and climate change.

All GCMs must have an initial global-temperature start-up point. This data point obviously should not change over time. It should be the same for all GCMs and all weather- and climate-predicting models; that is, it should have a chosen start date and level, known by everyone and written down in black and white.

However, different data sets will have different GTs.

GCMs have been around since the 1970s and are said to be quite good in their projections. New ones are being created all the time.

If we assume that Zeke is right, then there should be an original GT and date in the first model that matches or is linked to each subsequent model. If adjustments have been made to this parameter, then the models would no longer be working on the same scenario.

If we have a standard GT, then there must be a past (1850) GT for each model, which would differ between models due to their different algorithms; no problem there.

Except that there can then be no exact agreement on the amount of warming since 1850.

The problem for the models is that they must run on the input GT at the time the model is set up. To achieve this, they have to input the current level from whatever data set they choose to use. Different data sets may have different GTs.

As a new GCM may differ in its assumptions from the data set's assumptions, an anomaly will exist between the two when they do a backwards run and comparison.

Further, the data sets change their past data daily by adjustments [Zeke]. When a comparison is made a year later, the current data set being compared will differ from the one used a year before. The GCM will not adjust the data in the past, so it will give a GT based on the old readings even as it takes in the new ones.

GCMs and data sets have a built-in bias for CO2 increase, also known as the ECS.

Data sets like GHCN register the correct current temperatures and therefore bias the past lower, to show the global warming that was expected to have occurred; there is no ability to move the thermometers upwards. GCMs, on the other hand, suffer a double whammy: they incorporated data sets, at the time of inception, that cool the past, which is then fixed, and they make assumptions about future warming from that date. Consequently they add on warming at a rate predicted by CO2 levels.

Why does CMIP6 run much hotter than CMIP5? Wherever the newer models start from, the GMT was not much higher than in the older models, yet they had a much larger CO2 warming adjustment to work with.

Here is where the problems with gridding, adjustments and sites come into full play.

There is an assumption that all sea-level sites are equivalent, yet because of the earth's shape, grids away from the equator have lower air pressure, due to lower gravity, which affects the temperature but is not taken into account. Grids here may be determined in part by sites 1000 km to the north or south.

Gridding on land may not take elevation fully into account. Two sites on either side of a mountain range have a different temperature from a grid with the mountain between them. Elevation is taken into account in local site shifts, but this is very difficult to do properly when sites are hundreds of miles apart. It is certainly not taken into account by some of the people describing their gridding attempts, which are only temperature infills that ignore elevation, forestation, mountains and deserts.

When the sites used become airports (airports do need accurate temperature readings), we can either have a Global Airport Temperature reading or an inaccurate Global Temperature outcome.

Note: this comment/post was inspired by Nick Stokes's Moyhu blog, where he gives an excellent summary of the main methods of assessing global and local temperatures ("US temperatures 27/2/2020", averaging and graphics methods).
