by Roger Pielke Sr., Phil Klotzbach, John Christy and Dick McNider
An update is presented of the analysis of Klotzbach et al. 2009.
In 2009 we published the paper:
An alternative explanation for differential temperature trends at the surface and in the lower troposphere
Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider,
Abstract. This paper investigates surface and satellite temperature trends over the period from 1979 to 2008. Surface temperature data sets from the National Climatic Data Center and the Hadley Centre show larger trends over the 30-year period than the lower-tropospheric data from the University of Alabama in Huntsville and Remote Sensing Systems data sets. The differences between trends observed in the surface and lower-tropospheric satellite data sets are statistically significant in most comparisons, with much greater differences over land areas than over ocean areas. These findings strongly suggest that there remain important inconsistencies between surface and satellite records.
Published in J. Geophys. Res. [link]
A corrigendum was published shortly thereafter [link]:
Context (written by JC)
The Klotzbach et al. paper is about the expectation that the rate of warming at higher altitudes should be larger than at the surface. When forced by anthropogenic greenhouse gases, climate models (GCMs) on average indicate that the tropospheric trend is amplified by a factor of 1.2 relative to the surface trend. When confined to the tropics, the amplification factor is about 1.4. The cause of this amplification is related to the lapse rate and the lapse-rate feedback. This model-generated tropospheric warming in the tropics is known as the “hot spot” and has been claimed to be a signature of greenhouse warming.
However, this amplification is not corroborated by comparison between surface temperatures and atmospheric temperatures determined from satellites. Klotzbach et al. suggest that part of this difference is caused by a bias in the surface temperatures and that this difference would increase as warming increases.
Since this increased warming in the upper layers is a signature of greenhouse gas forcing in models, and it is not observed, this raises questions about the ability of models to represent the true vertical heat flux processes of the atmosphere, and thus to represent the climate impact of the extra greenhouse gases we are putting into the atmosphere. It is not hard to imagine that as the atmosphere is warmed by whatever means (e.g., extra greenhouse gases), existing processes that naturally expel heat from the Earth (i.e., negative feedbacks) become more vigorously engaged and counteract the direct warming of the forcing. This result is related to the idea of climate sensitivity, i.e., how sensitive the surface temperature is to higher greenhouse forcing, for which several recent publications suggest models, on average, have been overly sensitive.
Update of Klotzbach et al.
Recently, we have been asked to update our analysis to the present. The figures presented below are an update of Figures 1 and 2 in Klotzbach et al. We have also updated Table 1.
Table 1: Update of Table 1 from Klotzbach et al. (2009). Global, land, and ocean per-decade temperature trends and ratios over the periods from 1979 to 2008 and from 1979 to 2014.
Our conclusion is that not much has changed since 2008. The update of Table 1 shows that the temperature datasets have come into slightly better agreement with the UAH satellite product since 2008 but disagree slightly more with the RSS satellite product (as you can see from the changes in the ratios).
Figure 1: Update of Figure 1 from Klotzbach et al. (2009). NCDC minus UAH lower troposphere (blue line) and NCDC minus RSS lower troposphere (green line) annual land temperature difference over the period from 1979 to 2014. The expected anomaly difference given the model amplification factor of 1.2 is also provided. This amplification factor is calculated by multiplying the surface temperature anomaly for a particular year by 1.2 and assuming that that is the value the lower troposphere should be for that year. All differences are normalized so that the difference in 1979 is zero.
Figure 2: Update of Figure 2 from Klotzbach et al. (2009). CRUTEM4 minus UAH lower troposphere (blue line) and CRUTEM4 minus RSS lower troposphere (green line) annual land temperature difference over the period from 1979 to 2014. The expected anomaly difference given the model amplification factor of 1.2 is also provided. This amplification factor is calculated by multiplying the surface temperature anomaly for a particular year by 1.2 and assuming that that is the value the lower troposphere should be for that year. All differences are normalized so that the difference in 1979 is zero.
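For readers who want to replicate the expected-difference curve described in the captions, here is a minimal sketch. The anomaly values are hypothetical placeholders, not the actual NCDC/CRUTEM4 data:

```python
def expected_difference(surface_anoms, amplification=1.2):
    """Surface-minus-expected-lower-troposphere anomaly difference,
    normalized so the first-year (1979) difference is zero."""
    # expected lower-troposphere anomaly = amplification * surface anomaly,
    # so the expected difference is surface - amplification * surface
    diff = [t - amplification * t for t in surface_anoms]
    return [d - diff[0] for d in diff]

# hypothetical annual surface anomalies in deg C, for illustration only
surface = [0.10, 0.15, 0.25, 0.40]
print(expected_difference(surface))
```

Note that with an amplification factor above 1, the expected surface-minus-troposphere difference becomes increasingly negative as the surface warms, which is what the model-expected line in the figures traces.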
Figures 1 and 2, however, show that there is still a significant divergence between the 1.2 amplification factor expected over land and what the satellites are showing. This reinforces our conclusion that the difference between the multi-decadal surface and lower tropospheric temperature trends remains.
In the 2009 paper we postulated that part of the discrepancy might be due to the use of minimum temperatures in computing long-term trends over land. As shown in our 2012 paper by McNider et al. [link], we found
“that part of the observed long-term increase in minimum temperature is reflecting a redistribution of heat by changes in turbulence and not by an accumulation of heat in the boundary layer.”
In the past, maximum temperature trends have shown closer agreement with lower-tropospheric measurements. However, recent land surface data sets using homogenization corrections have reduced the trend differences between Tmin and Tmax. Now Tmax may be rising like Tmin in these analyses. But this leads to an even larger, physically unexplained discrepancy between the lower-tropospheric and corresponding surface trends seen in the analyses.
JC note: As with all guest posts, keep your comments relevant and civil. This is a technical thread, so comments will be moderated more heavily than usual for relevance.
Since this increased warming in the upper layers is a signature of greenhouse gas forcing in models, and it is not observed, this raises questions about the ability of models to represent the true vertical heat flux processes of the atmosphere, and thus to represent the climate impact of the extra greenhouse gases we are putting into the atmosphere.
Judith, do you have a reference for that? It is my understanding that warming for any reason would cause a “tropical hotspot” in climate models. A signature of greenhouse gases is that the temperature in the stratosphere drops, while the tropospheric temperature goes up.
Hi Victor, I agree that this is relevant for any warming; the topic of greatest relevance is in context of GHG discussion.
Here’s a result using ME
The paper is Herbert et al 2013, Vertical Temperature Profiles at Maximum Entropy Production with a Net Exchange Radiative Formulation.
The abstract and key finding:
“Like any fluid heated from below, the atmosphere is subject to vertical instability which triggers convection. Convection occurs on small time and space scales, which makes it a challenging feature to include in climate models. Usually sub-grid parameterizations are required. Here, we develop an alternative view based on a global thermodynamic variational principle. We compute convective flux profiles and temperature profiles at steady-state in an implicit way, by maximizing the associated entropy production rate. Two settings are examined, corresponding respectively to the idealized case of a gray atmosphere, and a realistic case based on a Net Exchange Formulation radiative scheme. In the second case, we are also able to discuss the effect of variations of the atmospheric composition, like a doubling of the carbon dioxide concentration.”
“The response of the surface temperature to the variation of the carbon dioxide concentration — usually called climate sensitivity — ranges from 0.24 K (for the sub-arctic winter profile) to 0.66 K (for the tropical profile), as shown in table 3. To compare these values with the literature, we need to be careful about the feedbacks included in the model we wish to compare to. Indeed, if the overall climate sensitivity is still a subject of debate, this is mainly due to poorly understood feedbacks, like the cloud feedback (Stephens 2005), which are not accounted for in the present study.”
So there you have it: Convection rules in the lower troposphere. Direct warming from CO2 is quite modest, way less than models project.
http://arxiv.org/pdf/1301.1550.pdf
Ron C: http://arxiv.org/pdf/1301.1550.pdf
Thank you for the link. That is another recent paper that substitutes the steady-state approximation for the equilibrium approximation, so in my mind is another “normal science” step forward.
They use the “maximum entropy production” principle in place of the “maximum entropy” principle.
Matthew, I thought the paper important, but don’t grasp all the implications. Can you say a bit more about steady state vs. equilibrium, and about MEP vs. ME?
Ron C: Can you say a bit more about steady state vs. equilibrium, and about MEP vs. ME?
See Chapter 10 in “Thermal Physics of the Atmosphere” by Maarten H. P. Ambaum.
In brief: think of a system as made of a lot of parts, regions, compartments, layers, whatever (even maybe “black boxes”), through which some stuff moves (a chemical compound, electrons, or energy) and an intensity variable that can be measured in each compartment (for a chemical compound that would be concentration, for the climate system that is temperature [or sometimes specific humidity]). At equilibrium the intensity variable has the same value in all compartments, and there is no flow between compartments. In steady state, the stuff flows into and out of the system, and into and out of each compartment, but the inflow and outflow of the whole system are equal, and the inflow and outflow of each compartment are equal; the intensity variable is not the same in all compartments, but within a compartment the intensity variable stays constant. In drug dosing, with a prolonged infusion at a constant rate, you eventually reach a steady state in which blood concentration and the concentration in each part of the body are constant as the drug flows into and out of the body. Nothing in climate is quite steady state, but the flow of energy from Equator to poles, thence to space, with Equatorial and polar temperatures fluctuating within a finite range, is approximately steady state. For a few hours in daytime, the flow of energy from surface to troposphere and upward may be approximately in steady state.
Ambaum conjectured that a steady state is characterized by the maximum possible rate of entropy increase, a conjecture taken as “assumed true” by Lalibertie et al in the paper discussed here a few weeks ago, and by the paper that you cited.
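The constant-infusion example above can be put in code. A minimal sketch (all rates are hypothetical): a single compartment with constant inflow R and first-order outflow k*A settles into a steady state A = R/k, where inflow and outflow balance while the intensity variable A stays constant:

```python
def infuse(R=10.0, k=0.5, dt=0.01, t_end=40.0):
    """Euler integration of dA/dt = R - k*A starting from A(0) = 0.
    R is the constant inflow rate, k the first-order elimination rate."""
    A = 0.0
    steps = int(round(t_end / dt))
    for _ in range(steps):
        A += (R - k * A) * dt  # inflow minus outflow over one time step
    return A

# the analytic steady state is R/k = 20.0; the simulation converges to it
print(infuse())
```

The point of the sketch is the distinction Matthew draws: at steady state nothing is in equilibrium (stuff keeps flowing through), yet every measurable quantity holds constant.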
Thanks for the explanation, Matthew. I am reading and working to get my mind around this. I didn’t find Lalibertie, but did find this by Kleidon:
http://judithcurry.com/2012/01/10/nonequilibrium-thermodynamics-and-maximum-entropy-production-in-the-earth-system/
Elsewhere, I saw this comment:
“In particular, it is not obvious, as of today, whether it is more efficient to approach the problem of constructing a theory of climate dynamics starting from the framework of hamiltonian mechanics and quasi-equilibrium statistical mechanics or taking the point of view of dissipative chaotic dynamical systems, and of non-equilibrium statistical mechanics, and even the authors of this review disagree. The former approach can rely on much more powerful mathematical tools, while the latter is more realistic and epistemologically more correct, because, obviously, the climate is, indeed, a non-equilibrium system.” Lucarini et al 2014
Ron C: http://judithcurry.com/2012/01/10/nonequilibrium-thermodynamics-and-maximum-entropy-production-in-the-earth-system/
Thank you. I missed that one.
I don’t understand how we can have a continuation of warming during the pause without a stratospheric cooling in the past 20 years. I know the propagandists will provide charts showing a “cooling trend” but they will invariably date these trends from prior to the past 20 years. The trend seems flat in the past two decades. So if the stratosphere is not cooling, isn’t it getting its normal allotment of heat from the troposphere?
or, the oceans are slowly giving up their heat…
Do be careful with the stratospheric temperature.
The MSU LS is the LOWER stratosphere ( around 100 millibars ).
But the cooling rate ( negative heating ) imposed by additional CO2 is greatest near the stratopause ( UPPER stratosphere, around 1 millibar ).
See that in this RRTM run:
http://www.arm.gov/science/highlights/images/R00180_1.gif
I haven’t updated these in a while, but the 70, 50, and 30 millibar sonde trends do indicate a greater cooling and a continued cooling ( through 2011, anyway ).
http://1.bp.blogspot.com/-qBLJ6MR2dtk/TdUpDF5VdsI/AAAAAAAAADo/cDQqeUogo_Y/s1600/StratCoolingPlot_2011.png
http://3.bp.blogspot.com/-Mkzz23CYbak/TdUpGpYBe8I/AAAAAAAAADw/7lOzPyDhjdM/s1600/StratCoolingTable_2011.png
Volcanoes and ozone complicate matters, but the cooling trend, especially higher in the stratosphere, is evident.
Lucifer, you can’t imagine how shocked I am to see that your graphs go back further than twenty years, allowing you to create dramatic downward trends.
Shocked! I never would have predicted that that is what you would do, except that I did, if you read my first comment again.
So my question remains, why is there a “pause” in stratospheric cooling if tropospheric warming continues apace. It doesn’t make sense. I wish somebody, just once, would point me to someplace that deals with the last two decades of stratospheric temps.
Since the extra CO2 should continually cool the strat more and more, there has to be a source of heat. Could it be more heat ferried to the strat due to more robust storm cells?
“Since the extra CO2 should continually cool the strat more and more, there has to be a source of heat. Could it be more heat ferried to the strat due to more robust storm cells?”
Good point – cooling can’t continue indefinitely in the stratosphere without increasing the rate of exchange from fluid flow. And the exchange rate of stratosphere and troposphere is not zero.
cooling can’t continue indefinitely in the stratosphere
I have to hand it to you for blindly and deafly sticking to your talking points. It is a wonder to behold.
Of course it is likely that convection is carrying the heat to the stratosphere bypassing the “enhanced greenhouse effect.” This would explain the pause and the lack of stratospheric cooling which your own graphs quite clearly show.
Victor,
There are multiple ‘signatures’ from the models. The hotspot is one. The discrepancy in mid troposphere warming is, I suspect, more due to errors in the model assumptions than anything else (after all, the moist lapse rate is pretty well defined). If you relax the requirement of constant near surface relative humidity with warming ocean surface temperature, then the hot spot discrepancy could well disappear. I think the hot spot discrepancy tells us more about model inaccuracies, not about GHG driven warming.
The moist adiabatic lapse rate includes many approximations; I have advocated using entropy potential temperature instead (Chapter 6 of my text, Thermodynamics of Atmospheres and Oceans)
Dr. Curry. Do you think the entropy potential temperature would correctly model the fact, as evidenced by sat data, that the annual min/max difference decreases with altitude?
I don’t know, but it would make it a more thermodynamically defensible calculation.
A Google search found this paper by Pascal Marquet.
The number of different potential temperatures seems to be large, and several of them are called entropy potential temperatures.
Pekka thanks for the refs. I was referring to HH87 or M93.
Pekka Pirilä A Google search found this paper of Pascal Marquet.
Thank you. Most interesting.
Reference:
“The simulated responses to natural forcing are distinct from those due to the anthropogenic forcings described above.”
IPCC AR4, Chapter 9.2.2.1 Spatial and Temporal Patterns of Response.
“Only 1000 stations have records of 100 years and almost all of them are in heavily populated areas of northeastern US or Western Europe and subject to urban heat island effect.” ~Dr. Tim Ball
Say, we’re demned if we homogenize and demned
if we don’t, dang it.
Maybe AMO related, with more divergence with warmer AMO months. Possibly due to increased continental interior drying:
http://www.woodfortrees.org/plot/crutem4vnh/from:1979/plot/uah-land/normalise/plot/esrl-amo/from:1979/mean:13/normalise
Once again another nail in the coffin, but unfortunately the coffin is a mile long.
I was under the impression that the increase in average land temperatures was due primarily to the increase in Tmin. I could be off the mark though.
Well, for the tropics, where the Hot Spot was theorized, there is roughly twice as much ocean as land, so it would seem likely that the majority of the difference is something other than land observation contamination.
The usual suspects prevail ( but in this case they’re all guilty ).
Convection is a sub grid-scale process.
Probably don’t need to go any further than that to explain the Not Spot.
The key thing here is statistical significance, since only a few ranges are quoted. CIs are conspicuously missing. Using my trend calculator, with the Quenouille adjustment for monthly autocorrelation, I get the following 95% CIs for trends over Jan 1979 to Dec 2014, in °C/century:
The spread of the satellite indices is much wider. So it is well within the range of the CI’s that the land indices exceed the satellite, even by a factor of 1.2.
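For concreteness, here is a sketch of a Quenouille-style adjustment (this is not the actual trend calculator mentioned above, and the synthetic data are illustrative only): the lag-1 autocorrelation r1 of the residuals shrinks the effective sample size to n(1 - r1)/(1 + r1), which widens the confidence interval:

```python
import numpy as np

def trend_with_ci(y):
    """OLS trend and an approximate 95% CI, with the effective sample size
    reduced for lag-1 autocorrelation of the residuals (Quenouille)."""
    n = len(y)
    x = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    n_eff = n * (1 - r1) / (1 + r1)              # effective sample size
    s2 = np.sum(resid**2) / (n_eff - 2)          # residual variance, adjusted dof
    se = np.sqrt(s2 / np.sum((x - x.mean())**2)) # standard error of the slope
    return slope, 1.96 * se

rng = np.random.default_rng(0)
y = 0.05 * np.arange(432) + rng.normal(0, 1.0, 432)  # synthetic monthly series
slope, ci = trend_with_ci(y)
print(slope, ci)
```

For strongly autocorrelated monthly temperature residuals, r1 is well above zero and the CI can widen substantially relative to the naive OLS interval.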
The other thing that bothers me is that the satellite datasets don’t have a “tmin” or “tmax” version to compare.
Can this be resolved by using only certain zonal times of sample?
hmm some reading http://images.remss.com/papers/rsspubs/Mears_JTECH_2009_MSU_AMSU_construction.pdf
RSS:
“A second important correction accounts for drifts in local measurement time, which can alias any diurnal cycle into the long-term time series if it is not corrected. Using 5 yr of hourly output from the NCAR Community Climate Model (CCM3) climate model (Kiehl et al. 1996), we created a diurnal climatology for MSU channels 2–4 and AMSU channels 5, 7, and 9 as a function of earth location, time of day, time of year, and incidence angle using the methods described in Mears et al. (2002). This diurnal climatology was then used to adjust each measurement so that it corresponds to local noon. The adjustments are largest for MSU2 and AMSU5 because of the contribution of surface emission to these channels.”
So the climate models that skeptics think are useless get used to adjust the data for RSS.
and so you have the following irony:
Monckton claims that RSS is most accurate.
Monckton claims GCMs are wrong.
RSS uses GCM output to adjust its data.
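The diurnal-drift correction quoted from Mears et al. above can be sketched in a few lines. This is only an illustration: the diurnal climatology here is a hypothetical zero-mean sinusoid, not the CCM3-derived climatology RSS actually uses:

```python
import math

def diurnal_anomaly(local_hour, amplitude=1.0):
    """Hypothetical diurnal climatology: zero-mean sinusoid peaking at 14:00
    local time (a stand-in for the model-derived climatology)."""
    return amplitude * math.cos(2 * math.pi * (local_hour - 14.0) / 24.0)

def adjust_to_local_noon(t_measured, local_hour):
    """Remove the diurnal anomaly at the drifting measurement hour and
    restore the local-noon anomaly, as in the correction quoted above."""
    return t_measured - diurnal_anomaly(local_hour) + diurnal_anomaly(12.0)

# a measurement whose crossing time has drifted to 18:00 local, mapped to noon
print(adjust_to_local_noon(280.0, 18.0))
```

The design point is that any error in the assumed diurnal cycle (here, a model product) feeds directly into the adjusted long-term series, which is what the comment is needling.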
RE: Steven Mosher | March 4, 2015 at 9:48 pm
“The other thing that bothers me is that the satellite datasets don’t have a “tmin” or “tmax” version to compare.”
What’s the problem Steve? Just make the adjustments as you see fit.
The difference in the time-of-observation adjustment between satellite and ground-based records is that the observation time is recorded exactly in the satellite data, so there is no need to “detect” proximate spatial discrepancies and then cancel the outlier by turning it into a new station.
The larger and most important difference, of course, is that satellites cover the whole globe while thermometers cover a small fraction of the land surface, mostly in the Northern Hemisphere, and precious little else.
Oops, I meant the satellite indices exceed the surface, even by 1.2.
Read the paper, Nick. CIs aren’t missing at all. No need to introduce a different method to calculate them.
All of these trends are statistically significant at the 95% level based on a p-test. Ninety-five percent confidence intervals are also provided taking into account the autocorrelation of the residuals based upon the methodology outlined by Santer et al. [2008].
The conclusion is, the difference between surface temps and satellites is largest over land areas. Possible explanations are offered as well.
We conclude that the fact that trends in thermometer-estimated surface warming over land areas have been larger than trends in the lower troposphere estimated from satellites and radiosondes is most parsimoniously explained by the first possible explanation offered by Santer et al. [2005].
Specifically, the characteristics of the divergence across the data sets are strongly suggestive that it is an artifact resulting from the data quality of the surface, satellite and/or radiosonde observations. These findings indicate that the reconciliation of differences between surface and satellite data sets [Karl et al., 2006] has not yet occurred, and we have offered a suggested reason for the continuing lack of reconciliation.
“Read the paper, Nick. CIs aren’t missing at all.”
I have read the paper. The CI’s that they base significance on make no sense, as I commented below. I have written a post here.
What they call Santer’s method is Quenouille adjustment, which is also what I use. The problem comes when they take CIs for the differences.
I’ve read your post now. Apart from a few knee jerks and your insinuations that they have something to hide, there is not much in it.
It all boils down to your claim that the CIs for trend differences are too narrow.
Why don’t you recalculate the correct CIs for the differences (also add for amplification), so everyone can see how wrong they got it.
Easy fix.
@ MoruH.
“The conclusion is, the difference between surface temps and satellites is largest over land areas. Possible explanations are offered as well.”
ONE possible explanation for the larger discrepancy over land areas is that the land area data is the data set that has been most heavily and frequently ‘corrected’, ‘adjusted’, ‘krigged’, and ‘infilled’ says Mr. Cynic.
Mouruanh,
“Why don’t you recalculate the correct CIs for the differences “
OK, I’ve done that, at the end. All significance disappears except for just one case – NCDC-UAH over land. But even then, we are testing for a 1 in 20 chance, and we’ve tried twelve times – not surprising.
Nick Stokes:
As was pointed out on your blog, when you compute the CI for the difference in trends, you must adjust for the correlation between the two series; deepclimate made the same point there.
He is certainly correct on this point, so it appears you made the error you were criticizing them for.
Carrick,
wouldn’t the trend of differences be essentially a paired-differences test?
It would be great if they came around to explain what they did, in math not words.
Steven Mosher:
Basically, but the issue is that you don’t compute the error as a simple sum in quadrature.
As I pointed out below, what you get when there is correlation is var(A - B) = var(A) + var(B) - 2 cov(A, B), so positive correlation between the two series shrinks the CI of the difference.
I think the bigger issue is the reliability of the trend estimation from satellite TLT. As you know the long-term trends are cobbled together from disparate satellite measurements.
It’s very easy to picture a scenario where there is some tuning of the long-term trend of TLT to surface data. So not only is there an accuracy issue, in my view, but there is a potential for a bias in the error in accuracy (the trend from TLT will end up looking more similar to SAT than they should).
Carrick,
Ideally you would do that. But I see no indication that they have. If they have calculated a correlation (which would need to be pretty high) that would be valuable information in itself. There is just no mention of any of this.
And the formula that you give is oversimplified. If A and B are highly correlated, and also substantially autocorrelated, then there is lagged cross-correlation. You need to deal with an overall correlation matrix.
In an analysis like this, statistical significance is central, because it is part of the deductive chain of the argument. If it hangs on a big reduction for correlation, you need to at least mention that, and ideally say how you calculated it.
Nick, please see below for my comments on the effect of autocorrelation. What you say is actually wrong: the formula is the same, because OLS provides a linear unbiased estimate of the trend, so there is no correction to make in the trend estimate itself.
The purpose of correcting for autocorrelation is to reduce the error in the estimation (the estimate is “less noisy”), but the error in estimation associated with neglecting autocorrelation has a zero mean (there is no bias introduced by ignoring it).
As to the rest, I don’t see much point in debating how they computed the error, until we’ve seen how they’ve done it. But I do think it’s absolutely fair to criticize them for not adequately explaining their error analysis.
Carrick,
On thinking about it, there may be a simpler explanation. They calculated the trends with CIs in table 1, and in table 2 said they were giving results for the differences in trends. And that would require some kind of quadrature, possibly with the cos rule modification that you described. But they may also have directly re-computed the trends of the differences. That would give a direct estimate of CIs. I’ll check that.
But if so, they should have said so.
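A quick check of the distinction being discussed (synthetic data; a sketch, not the paper's actual method): by linearity of least squares, the trend of the difference series equals the difference of the trends, but its scatter, and hence its CI, is far smaller than quadrature would suggest when the two series share noise:

```python
import numpy as np

def trend(y):
    """OLS slope per time step."""
    return np.polyfit(np.arange(len(y)), y, 1)[0]

rng = np.random.default_rng(1)
months = np.arange(432)
shared = rng.normal(0, 0.2, 432)   # "weather" noise common to both series
a = 0.020 * months / 120 + shared + rng.normal(0, 0.05, 432)
b = 0.015 * months / 120 + shared + rng.normal(0, 0.05, 432)

# identical by linearity of least squares; the shared noise cancels in a - b,
# so the difference series is much less noisy than either series alone
print(trend(a - b), trend(a) - trend(b))
```

This is why regressing the difference series directly gives a legitimate CI for the trend difference, while adding the two individual CIs in quadrature overstates it when the series are correlated.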
Nick:
I agree… I’d expect them to have computed the trend of the difference. Of course, it makes things a lot easier if you tell people what you actually did, when it comes time to replicate it.
Carrick,
Yes, they referred in the abstract etc to the significance of the difference in trends. But I think it’s likely that they have given the CI’s for the trend of the differences. As you say, that is a reasonable thing to do, but they should say what they are doing.
Ya Carrick, I know you don’t compute the errors in quadrature. I guess I need convincing their approach makes sense.
Nick, I think John Christy has inadvertently confirmed they are computing the trend from the difference of series.
Steven Mosher—I’m not at all sold on using satellite data to compute long-duration trends. That’s a bigger thing to me than where the formal uncertainty requirements are met.
The hot spot in the models is 1.4 to 5°C warmer than RSS/UAH/RATPAC data (according to the chart color coding).
This would appear to be several to many times greater than the CI, which is only about 0.8°C wide for the satellites (RATPAC would have a smaller CI than the satellites).
Why haven’t the models been corrected?
Oh ye fallibull models.
“This model-generated tropospheric warming in the tropics is known as the “hot spot” and has been claimed to be a signature of greenhouse warming”
I think it’s supposed to be a signature of any warming
stratospheric cooling is a unique AGW fingerprint
Since 110% of the warming is due to GHG that kind of limits the “any” part right?
you are confusing two things. stop it
How is he confusing two things, Mosher? If the Anthro is over 100% of the warming, then it would follow that it’s counteracting the cooling as well as doing all the warming.
Or at least it’s responsible for all the apparent surface warming and more.
Steven, “you are confusing two things. stop it”
Nope, that would be called dry humor.
http://climatewatcher.webs.com/HotSpot.png
That would be confusing things
top right, is that the flag of Norway?
The issue is this.
1. If you believe the models have it right.
2. There should be a hot spot, REGARDLESS of the cause of warming.
3. If the warming is due to GHGs, then you should get stratospheric cooling.
hotspot has nothing to do with the attribution argument
“2. There should be a hot spot, REGARDLESS of the cause of warming.”
So, if there is no hot spot, there is no warming, and the temp data sets are in error, or introduced spurious warming through adjustments?
Steven Mosher, “The issue is this.
1. If you believe the models have it right.
2. There should be a hot spot, REGARDLESS of the cause of warming.
3. If the warming is due to GHGs, then you should get stratospheric cooling.
hotspot has nothing to do with the attribution argument”
1. There is no belief involved, this is a comparison of model “projections” with observation.
2. Correct; however, with the huge range (~50%-110%-170% according to the modelers, i.e. Gavin), there isn’t much room for “natural” variability, is there?
3. Which GHG?
https://lh5.googleusercontent.com/-xV_vk6Y7h1w/VPh0dC36lvI/AAAAAAAAM-I/cz636tYRjDw/w865-h549-no/ratpac%2Ba%2Btropics.png
The inverted stratosphere in blue should have a CO2 “signature” unless of course it is also dependent on “warming” in general. Since zero times 1.1-1.6 is still zero, there just might be a need for actual warming to see the amplification. Is that more consistent with warming due to reduction of aerosols or an increase in CO2? Then there is also that remote possibility that the warming is a result of a longer term recovery.
Hey, he still hasn’t persuaded Muller not to attribute all of the warming since the Little Ice Age to man. Where would we be without us?
===============
I don’t agree with ANY attribution argument I have seen.
Attribulation.
That’s where we are, moshe, and I believe you understand why we’d be in a lot better shape for the long term if most of the warming since the coldest depths of the Holocene has been from natural causes.
====================
RE: Steven Mosher | March 5, 2015 at 1:16 am |
“The issue is this.
1. If you believe the models have it right.”
Good one. We all believe, Steven.
Stratospheric cooling is a signature of GH gases in the stratosphere, not of global warming. There is a difference.
Stratospheric cooling is a signature of decreased thermal conductivity of the atmospheric layers below the stratosphere.
Increased GHG just in stratosphere would lead to increased stratospheric thermal slope, not cooling.
Stratospheric cooling is due to greenhouse gases. The lower pressure there allows those molecules to radiate IR rather than lose energy in collisions with other molecules.
This is nicely stated.
Do you happen to have supporting documentation handy?
Much appreciated.
aaron – You are lucky you didn’t ask Mosher, or you would have been told to google it yourself (do your own work).
Let’s see what Eli says about it:
http://rabett.blogspot.com/2013/04/this-is-where-eli-came-in.html
And from … GASP! … the IPCC …
In the upper stratosphere, increases in all well-mixed gases lead to a cooling as the increased emission becomes greater than the increased absorption
http://www.ipcc.ch/ipccreports/tar/wg1/278.htm
It is a signature of warming of a water surface that can freely supply more water vapor when warmed, i.e. the water vapor feedback. There is no reason to expect a hot spot if you just warmed dry land. I mention this below.
“Tropospheric amplification of warming with altitude is the predicted response to increasing radiative forcing from natural sources
Posted on 24 February 2011 by thingsbreak at Skeptical Science
Victor Venema (@VariabilityBlog) | March 4, 2015 at 7:37 pm |
It is my understanding that warming for any reason would cause a “tropical hotspot” in climate models. ”
Nah, nothing about specifically heating dry land, just the earth which was a water planet last time you looked, Jim D?
The hot spot would be a signature of the water vapor feedback kicking in. The climate’s current transient rate has the land, because of its low thermal inertia, warming twice as fast as the ocean, and the Arctic, because of its albedo feedback, even faster. When dry areas warm first, the hot spot is delayed. In equilibrium, the surface water catches up and the hot spot would be a feature of a new equilibrium state, but all bets are off in a transient state.
Jim D:
Yes, the land has a low thermal inertia, which is why it warms as much as several tens of degrees every day, and cools down about the same amount every night. Of course, drier land warms a bit more during the day, but then it cools a bit more at night. Until the rain comes along, then it cools down a bit again. Or it snows, in which case the land doesn’t warm, but also doesn’t cool as quickly.
And then it gets windy, which both cools and dries the surface, unless the wind is blowing in from the ocean, in which case it both cools and wets the surface.
But these large swings in energy flux are unimportant, aren’t they Jim? What we really should be looking at is the fact that tomorrow the land will be a few millionths of a degree warmer than it is today, isn’t that so Jim?
It is a signature of warming of a water surface that can freely supply more water vapor when warmed, i.e. the water vapor feedback. There is no reason to expect a hot spot if you just warmed dry land. I mention this below.
So, the questions remain – why are the models so wrong about it?
Jim D,
F’d up the blockquotes.
Jim D,
If this were true on a meaningful scale then Tmin trend would be small over land relative to ocean and smaller than Tmax.
Also, the Arctic seems to be warming more because of convective and possibly latent heat transfer changes (what are the trends in precipitated H2O by mass?).
Should have phrased this as a question rather than a declarative:
If this were true on a meaningful scale then Tmin trend would be small over land relative to ocean and smaller than Tmax.
If this were true on a meaningful scale, then wouldn’t Tmin trend be small over land relative to ocean and smaller than Tmax? Think of when Tmin and Tmax happen for ocean, wet, and dry land.
Also, you’re suggesting that dry areas get wetter, i.e. “It’s better than we thought!”
With 90% of the heat going into the oceans AND the oceans hiding the “missing” heat AND ocean temps NOT rising anywhere near enough, the explanation that springs to mind is that the extra heat is causing more evaporation – you know, the extra precipitation that has been blamed on AGW, and the extra intensity of storms and so forth also blamed on AGW. If this extra H2O is NOT present in the atmosphere, as you suggested due to the lack of the hot spot, then this extra storminess and extra precipitation can’t be happening either, right? Which matches nicely with Pielke Jr.’s and others’ analyses of disasters etc.
But that just means we are back at the travesty of not being able to show any extra heat. Unless it’s simply gone back into space where it came from. But that can’t be, because that would mean GHGs are insignificant to global heat balance, and the models are so far beyond wrong, you can’t even see wrong from where they are. With $billions spent on these things, that can’t be allowed to happen, so you better come up with a damn good explanation – perhaps you could choose one of the 50 or so currently posited for the pause? Make sure you pick one that hasn’t already been shown to be inconsistent with the data – if you need to adjust the data to make it “work”, make sure the adjustments show increased warming, or you may find yourself on the wrong team and subsequently defrocked and termed a denier!
Jim D: There is no reason to expect a hot spot if you just warmed dry land.
How much of the tropical land is “dry”?
Matthew Marler, it comes down to whether the land can maintain a high relative humidity at its surface as it warms, especially if the ocean is not warming with it. I don’t think so. The hot spot is controlled by the ocean warming, not the land, because ultimately that is where the moisture comes from, and the tropics are mostly ocean.
And the lower stratosphere (that purple bit atop the hot spot in the model simulations) hasn’t cooled at all since 1994, when the effects of the Pinatubo eruption had faded. All the while, CO2 has continued its unabated rise.
Go figure.
The key statement is ‘has been claimed’, and it was the IPCC who claimed it, so take it up with them! What the lack of a hot spot really shows is a lack of water vapour positive feedback which, if anywhere, should be in the tropical troposphere. Of course there has been no stratospheric cooling for 20 years either, so it’s a double failure.
Now most folk should accept that if all fingerprints for manmade warming are missing then the null hypothesis must be accepted, but mainstream climate scientists just can’t bring themselves to admit that yet. Seemingly it will take cooling to get them to see what is in front of their beaks. Either that, or diverting more funding to natural warming and waiting another 10 years until they eventually – and yet again – ‘discover’ what skeptics had told them years ago.
I think it’s supposed to be a signature of any warming.
Yes – but AGW is a definite subset of ‘any warming’.
So… warming isn’t happening?
Perhaps.
no, what you have are mixed signals. And it’s not even clear that the difference they point out is statistically significant.
Always time for a picture:
http://climatewatcher.webs.com/HotSpot.png
Worry rather about the stratospheric trends over the last 20 years, Steven. There seems to be a problem with your AGW finger print up there.
Look at the trends
oh well
“Third, the ‘Student’s t test’ was mistakenly described as a ‘p-test.’ Fourth, the data set that was utilized in the polar calculations was the CRU TS 3.0 data set (T. D. Mitchell and P. D. Jones, An improved method of constructing a database of monthly climate observations and associated high-resolution grids, International Journal of Climatology, 25, 693–712, doi:10.1002/joc.1181, 2005), not the CRUTEM3v data set as discussed in the paper. The caption of Table 4 should therefore be ‘Linear Trends for Maximum and Minimum Temperature for CSU TS 3.0 for 60S–90N, 0–360, and for 60N–90N Over the Period From 1979 to 2005.’ We regret the referencing oversights. [3]”
CRU TS 3.0, last I looked, was unsuitable for climate studies according to the authors.
For climategate fans.. CRU TS is the product that “harry read me” worked on.
” The expected anomaly difference given the model amplification factor of 1.2 is also provided. This amplification factor is calculated by multiplying the surface temperature anomaly for a particular year by 1.2 and assuming that that is the value the lower troposphere should be for that year. All differences are normalized so that the difference in 1979 is zero”
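The quoted procedure can be sketched in a few lines. This is a minimal illustration with made-up anomaly values, not the paper's actual data:

```python
# Hypothetical annual anomalies (deg C), first entry is 1979.
# These values are invented for illustration only.
surface = [0.00, 0.10, 0.05, 0.20, 0.15]
tlt = [0.00, 0.08, 0.02, 0.18, 0.10]

AMPLIFICATION = 1.2  # model-mean tropospheric amplification factor

# Expected lower-troposphere anomaly if it amplified the surface by 1.2
expected_tlt = [AMPLIFICATION * s for s in surface]

# Difference between expected and observed lower-troposphere anomalies,
# normalized so the first-year (1979) difference is zero
diff = [e - t for e, t in zip(expected_tlt, tlt)]
diff = [d - diff[0] for d in diff]
```

A growing positive `diff` series would indicate the troposphere warming less than the model amplification predicts.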
hmm, we’ve been over this territory before. I think the right number is actually 1.1.. searching for link
I believe that should depend on the average temperature of the level of the troposphere being measured by the satellites. RSS TLT should be around 1.25 over the oceans, and the MT should be higher. Less energy is required per unit anomaly.
really?
let’s see what they said in their corrections:
“[4] Gavin Schmidt, at NASA, has pointed out that our
calculations for a 1.2 amplification factor for both land and
ocean were based on a landmask from CRU that differs
from the one that is currently used at GISS. Utilizing an
appropriate landmask and data provided on his FTP site at
http://www.giss.nasa.gov/staff/gschmidt/supp_data_
Schmidt09.zip, we have redone our calculations and found
amplification factors of 1.1 over land and 1.6 over ocean.”
Steven, right, but RSS is indicating a LT temperature of about 279K in the tropics, right in the middle of the condensation zone.
http://climexp.knmi.nl/data/irss_tlt_0-360E_-25-25N_n.png
In the tropical middle troposphere you would have lower specific heat capacity, so you could get more amplification, about 1.6, so I believe you need to specify the altitude and temperature before assuming an amplification factor.
they dismiss RSS. Your fight is with them and Monckton.
Steven my only fight is with the thermo.
Nick Stokes–
Can you also calculate the error bars around the land and ocean subsets? They will have to be relatively larger since N is smaller. Are any of the land-ocean differences significant?
Judith–
I agree with Nick that uncertainties should have been provided. Can you or one of the co-authors do that?
Lance,
I’m reading results from the trend calculator. It doesn’t have the sat indices subdivided. It does however have:
So the land-ocean difference is significant. Thanks.
The lapse rate feedback, aka hot spot, requires a saturated water surface to be warming and producing the extra latent heat that creates the hot spot as part of the water vapor feedback, Clausius-Clapeyron, etc. This is an OK idea for a water-dominated surface warming uniformly. However, here we seem to be dealing with land data, and I don’t think that there is any expectation that the latent heat flux will increase as you warm the land, so this assumption of seeing a hot spot in proportion to a warming land surface is somewhat flawed. The global warming is land-dominated, so this may be why the hot spot is not as obvious as it would be if the water was warming as fast.
Jim – what exactly is a saturated water surface? What other surfaces does water have?
As opposed to an unsaturated land surface where we would not expect the relative humidity to be maintained.
Jim D: I don’t think that there is any expectation that the latent heat flux will increase as you warm the land,
The Romps et al calculation of increased rate of transfer of CAPE was calculated for the US east of the Rockies. Is that what you call “water-dominated surface warming uniformly”?
and I don’t think that there is any expectation that the latent heat flux will increase as you warm the land, so this assumption of seeing a hot spot in proportion to a warming land surface is somewhat flawed.
Maybe (my favorite word). What we need next are good estimates of the increase (or not) of the rate of advective-convective heating of the troposphere by the land surface, as well as the change in evapotranpirative heating. Much of the land surface (e.g. Amazon and Congo watersheds) might fit your “water-dominated surface” construct.
We are seeing that in practice. The land is warming at a rate of nearly 4 C per doubling. Will it just dry out or will the oceans warm enough to maintain some rainfall?
Jim D: We are seeing that in practice.
Seeing what in practice? Increase in advective/convective and evapotranspirative heating of the troposphere by the warming surface? 2% – 7% increase in rainfall per 1C of surface temperature increase? (range from O’Gorman, previously cited by Pat Cassen and me).
“The global warming is land-dominated”
But mainly Siberian land and mainly at night. ie nothing to be overly concerned about.
True, true: if going from −50°C to −40°C at small spots in the coldest and most inhospitable regions on Earth, such as in the dry air of the Arctic or Siberia, and extrapolating that across tens of thousands of miles can be branded as global warming, then perhaps there has been global warming in the US. Otherwise, adjusting for that kind of pseudoscience, for locating official thermometers at busy airports where the tarmac is continually swept clean of winter snow, and for all of the Urban Heat Island effects that corrupt the data for the Northern US and Europe, there hasn’t been any significant global warming since the 1940s.
I’m not sure that increased water vapor would follow that expected if some portion of the warming was caused by dynamic water vapor movement associated with ocean heat transport changes.
“The differences between trends observed in the surface and lower-tropospheric satellite data sets are statistically significant in most comparisons, with much greater differences over land areas than over ocean areas.”
That claim is made in the original paper, but the significance calculation is clearly faulty. To take just one example of global trend, we have (their Table 1, in C/decade)
I’ve added my calc of their σ, based on 2σ CI’s.
Then in Table 2 they give the differences between amplified (x 1.2) surface and trop trends
This is the basis for their significance claims. But it’s crazy. The σ’s of the differences (which I’ve added), should be got by adding the components in quadrature. It should certainly be larger than either of the components. But theirs are barely more than half that of the satellite index alone. The same applies to the numbers in the corrigendum. Those low σ are what they use to claim significance.
I don’t believe any of their results are significant.
Correction – their table 2 is simple differences. Table 3 has amplification. But the same objection applies. The σ’s of the differences are less than the sat value alone.
Not sure I agree with this. This is the difference of what are presumably highly correlated random variables, so just adding variances would be a very conservative upper bound. A small variance seems quite possible. In the extreme, V[X-X]=0.
“This is the difference of what are presumably highly correlated random variables”
Possibly. But that is what is being tested. You can’t really assume a correlation in advance.
In any case, the paper gives no information to suggest the CI of the difference is based on correlation. It certainly doesn’t give any data on that.
Nick:
Again to correct Nick: the errors are correlated between the two series. So instead of combining in quadrature, you write $\sigma_{1-2}^2 = \sigma_1^2 + \sigma_2^2 - 2\rho_{12}\sigma_1\sigma_2$.
Nick missed this, though he did correctly criticize Gavin and others for combining the errors in quadrature in relation to whether the 2014 global mean temperature series is really the hottest year on record. In that case, they were comparing the same series differenced over periods of up to 17 years, for which the cross-correlation is certainly much smaller than the $\rho_{12}$ between the 1-m and TLT global atmospheric temperature series.
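The correlated-difference formula being discussed can be sketched numerically. The standard-error and correlation values below are illustrative, not from the paper:

```python
import math

def sigma_diff(s1, s2, rho):
    """Standard error of the difference of two trend estimates with
    standard errors s1 and s2 and cross-correlation rho."""
    # max() guards against a tiny negative from floating-point rounding
    return math.sqrt(max(0.0, s1 * s1 + s2 * s2 - 2.0 * rho * s1 * s2))

s1 = s2 = 0.05  # illustrative trend standard errors (C/decade), made up

print(sigma_diff(s1, s2, 0.0))  # independent errors: quadrature sum
print(sigma_diff(s1, s2, 0.9))  # strongly correlated: much smaller
print(sigma_diff(s1, s2, 1.0))  # perfectly correlated, equal sigmas: zero
```

This is why a difference CI smaller than either component's CI is possible when the two series are strongly correlated, which is the point of contention above.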
Well, Carrick, as I mentioned above, if you have both autocorrelation and cross-correlation, then it isn’t as simple as that. You need an overall cross-correlation matrix. It may be that your formula provides an upper bound, but some explanation would be nice.
Technically, autocorrelation doesn’t bias the OLS trend estimate; it just makes the estimator less efficient. This is well known (in the presence of autocorrelation, OLS is a linear unbiased estimator, but no longer the most efficient… no longer the “best”).
So there’s no correction to make in the formula for calculating the trend estimate itself. If you want, autocorrelation affects the uncertainty in your estimate of that trend. There may be a term that comes in there, but I haven’t checked.
Carrick,
It makes a difference to the relevant variance estimate in the CI. That’s where the Quenouille adjustment comes from, as a diagonalised estimate of the inverse of the correlation matrix.
But as I noted above, I think that the CIs they give may be not for the difference in trends, as stated, but for the trend of the differences.
Nick:
Yes, it affects the trend error estimates $\sigma_1$ and $\sigma_2$. If you don’t correct for autocorrelation, in addition to not being efficient, the formal uncertainties in trends from the OLS Hessian (covariance) matrix will be too small.
What I typically do is use other methods besides use the formal errors from the covariance matrix to estimate the uncertainty in trends (I think Monte Carlo based methods are more reliable).
I believe my formula is correct, based on the assumption that you’ve verified that your estimates of $\sigma_1$, $\sigma_2$, and $\rho_{12}$ are computed correctly, something I implicitly assume any researcher will do.
If you can show an improved formula, I’d as always be happy to see it.
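The claim that OLS stays unbiased under autocorrelation while its formal errors become too small can be checked with a quick Monte Carlo sketch, along the lines suggested above. All parameters here (series length, AR(1) coefficient, trial count) are illustrative:

```python
import numpy as np

# Monte Carlo: fit OLS trends to AR(1) noise with zero true trend.
# The slope estimates are unbiased, but the naive white-noise standard
# error understates the true spread of the estimates.
rng = np.random.default_rng(0)
n, phi, trials = 120, 0.6, 2000  # series length, AR(1) coeff., runs
t = np.arange(n, dtype=float)
sxx = ((t - t.mean()) ** 2).sum()

slopes, naive_se = [], []
for _ in range(trials):
    e = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = e[0]
    for i in range(1, n):
        x[i] = phi * x[i - 1] + e[i]  # AR(1) process: no true trend
    b, a = np.polyfit(t, x, 1)
    resid = x - (a + b * t)
    s2 = resid @ resid / (n - 2)
    slopes.append(b)
    naive_se.append(np.sqrt(s2 / sxx))

print("mean slope:", np.mean(slopes))       # ~0: OLS is unbiased
print("true spread:", np.std(slopes))       # actual sampling spread
print("mean naive SE:", np.mean(naive_se))  # too small under AR(1)
```

With phi = 0.6 the true spread comes out roughly twice the naive standard error, which is the kind of gap the Quenouille-type adjustments mentioned above are meant to close.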
An alternative explanation for differential temperature trends at the surface and in the lower troposphere.
Klotzbach et al. suggest that part of this difference is caused by a bias in the surface temperatures.
Bias?
Absolute rubbish!
Surface temperatures are rigorously measured, tested and homogenized to remove any traces of lower temperatures.
So the alternative is that the satellites are deliberately biased to stay flat.
But when forced by anthropogenic greenhouse gases, GCM climate models on average indicate that the trend of the troposphere is amplified by a factor of 1.2 over that of the surface (about 1.4 when confined to the tropics). This leads to an even bigger unexplainable physical discrepancy between the lower-troposphere trends and the corresponding surface trends, as seen in the analyses.
So the satellite trends should be going up even more steeply, but they are flat. Is that a pause or really a fall?
These findings strongly suggest that there remain important inconsistencies between surface and satellite records.
Says it all, who cares if the amplification is 1.1 instead of 1.2. [non seeing one].
I don’t believe any of their results are significant [non hearing one].
Nothing from the oracle yet [non speaking one]
angech2014 | March 5, 2015 at 1:05 am
When forced by anthropogenic greenhouse gases
Did you mean, “when forced by anthropogenic or natural greenhouse gases?”
Land surface temperatures are not reliable because they are measured at locations sensitive to UHI. The profile of tropospheric temperatures reveals a very strong hot spot. This hot spot is apparently absent only if land surface temperatures are taken into account.
http://img708.imageshack.us/img708/6844/s8ht.png
I’ve seen these 20S-20N plots and it’s an interesting point.
There could be a Hot Spot, just that it’s a lot smaller and is more than matched by cooling from 20 to 30 degrees north and south.
But it could also be that by limiting stations to 20 degrees, you limit the number of locations geographically (Caribbean, Central Africa, some Western Pacific islands, and that’s it).
Rather than UHI, we should be looking at other “economic heat islands”.
What we need are kriged, infilled, geographic maps of economic activity over time (population, energy use, transit, water use, and finance would be the first places I’d look).
Don’t forget homogenized.
The reason for the observations is that the presence of GHGs allows radiative energy directly to space from within the atmosphere instead of that energy first having to be returned to the surface as heat in adiabatic descent so that the surface can then radiate it to space.
GHGs therefore enhance cooling to space from within the atmosphere, but rather than that enhanced cooling being limited to the stratosphere, it actually occurs wherever GHGs are present throughout the entire vertical column.
However, there is no net cooling overall because the energy that leaks out to space from the up and down adiabatic cycle causes a reduction in thermal energy returning to the surface in descent which offsets the potential surface warming effect of GHGs.
The net thermal effect of GHGs is therefore zero. Instead they just reallocate the energy flow to space between radiative energy lost to space from the surface and radiative energy lost to space from within the atmosphere.
That description is entirely consistent with observations whereas the AGW radiative theory is not.
Since you can’t tell a trend from a cycle with data short compared to the cycle, it’s more likely a phase difference in a long cycle than differing trends.
Phase differences come up all over. Trends don’t.
One thing that pops into my head is that perhaps there are processes that make the atmospheric sensible heat highly stratified between the very near surface and the rest of the atmosphere, and that in the RotA heat is highly mixed (and latent heat is released there). These likely result in an increased greenhouse effect at the very near surface, particularly certain regions in mid-latitudes over land, but decreased GHE in the mid troposphere and stratosphere (as the GHE is circumvented in the mid-troposphere by latent heat transfer to upper troposphere and increased convective activity, and vertical water vapor profile becomes more stratified, decreasing GHE in the stratosphere).
I think it is likely that changes in transport of water from ocean to land and changes to retention of water at the land surface are primary factors (anthropogenic transfer of water from aquifers to land are also a likely factor). The net of these changes probably result in more latent heat transfer from ocean to upper troposphere and stratosphere and more moisture at the very near surface over land (much due to transport from the ocean and net increases in retention from biological responses and hydrological distribution changes from both biological and geological interaction with changing circulation and water vapor content and land use, e.g. increased precipitation to dry regions with relatively high retention capacity).
This highlights the importance of studying regional changes rather than global metrics.
Woof, good stuff, better on every reading, so far.
============
Also consider the differences in the volume, density, location, and mass of the portion of the atmosphere for the measure being calculated.
Ed.
John Christy has emailed this response re uncertainties:
There are two types of uncertainty. The first is measurement uncertainty, which quantifies the errors from all sources. We calculate these from large comparisons with radiosondes. The number usually comes out to around +/-0.05, but I tend to use +/-0.08 C for global annual anomalies to be safe.

The other is statistical uncertainty, which is merely an expression of the variance of the time series and how that influences the confidence one may have in a linear trend drawn through the data. Statistical uncertainty will be present even if you have perfect data. The 95 percent C.I. for 1979-2014 trends produces a statistical uncertainty of +/-0.05 C/decade. However, with the difference time series, the common variability is removed, and that gives a statistical uncertainty of no more than +/-0.03 C/decade.

The key point here is that the magnitude of the trend of the difference time series between upper air and surface in observations is large and significant relative to the magnitude of the trend of the difference time series between upper air and surface in the models.
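The argument that differencing removes common variability and shrinks the statistical uncertainty can be illustrated with synthetic series. Every number below is invented for the sketch, and `trend_se` is a hypothetical helper, not anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 36                     # annual values, e.g. 1979-2014
t = np.arange(n, dtype=float)
sxx = ((t - t.mean()) ** 2).sum()

# A shared, ENSO-like interannual wiggle plus small independent noise
common = 0.2 * np.sin(2 * np.pi * t / 5.0)
surface = 0.015 * t + common + 0.02 * rng.standard_normal(n)
upper = 0.010 * t + common + 0.02 * rng.standard_normal(n)

def trend_se(y):
    """Naive OLS standard error of the linear trend of y."""
    b, a = np.polyfit(t, y, 1)
    r = y - (a + b * t)
    return float(np.sqrt((r @ r / (n - 2)) / sxx))

print(trend_se(surface))          # dominated by the shared wiggle
print(trend_se(upper))
print(trend_se(surface - upper))  # common variability cancels: much smaller
```

The difference series keeps only the independent noise, so its trend uncertainty is far smaller than either series' own, which is the effect being described.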
weird.
Roger Sr., an author of this paper, took the time to ask Zeke questions about a blog post, and Zeke answered him.
Is he going to be around to answer questions about his paper?
There was some email discussion about uncertainty; they seem to be reading the comments. I will continue to post any relevant emails. I hope that RP Sr. or others will show up to answer questions.
Authors conclusion: “No significant change”
The Stokes Shift: “That’s not significant.”
“The key point here is that the magnitude of the trend of the difference time series between upper air and surface in observations is large and significant relative to the magnitude of the trend of the difference time series between upper air and surface in the models”
One could argue about how large, but I think that is a correct statement. The abstract, though, says “The differences between trends observed in the surface and lower-tropospheric satellite data sets are statistically significant in most comparisons”. That is a different test.
Klotzbach et al. [whatever year] is not the only one discussing the possible discrepancy in lower tropospheric and upper tropospheric warming between reality and GCMs (both CMIP3 and CMIP5). The study below focuses on satellite data only (which can be done from 1981 onwards), so it is not ‘just’ a problem of the surface temperature datasets – an additional indication that the problem may be real rather than just an artifact of the data. The authors are careful in their conclusions about where the problem really lies, though (models, observations, or both).
Po-Chedley and Fu, ERL, 2012:
Abstract
Recent studies have examined tropical upper tropospheric warming by comparing coupled atmosphere–ocean global circulation model (GCM) simulations from Phase 3 of the Coupled Model Intercomparison Project (CMIP3) with satellite and radiosonde observations of warming in the tropical upper troposphere relative to the lower-middle troposphere. These studies showed that models tended to overestimate increases in static stability between the upper and lower-middle troposphere. We revisit this issue using atmospheric GCMs with prescribed historical sea surface temperatures (SSTs) and coupled atmosphere–ocean GCMs that participated in the latest model intercomparison project, CMIP5. It is demonstrated that even with historical SSTs as a boundary condition, most atmospheric models exhibit excessive tropical upper tropospheric warming relative to the lower-middle troposphere as compared with satellite-borne microwave sounding unit measurements. It is also shown that the results from CMIP5 coupled atmosphere–ocean GCMs are similar to findings from CMIP3 coupled GCMs. The apparent model-observational difference for tropical upper tropospheric warming represents an important problem, but it is not clear whether the difference is a result of common biases in GCMs, biases in observational datasets, or both.
Sure, both. But I have 95% confidence in attributing greater than 50% of the difference to common biases in GCMs.
====================================
The fact that the hot spot has not materialized shows that the basic atmospheric processes the AGW models predicted in response to an increase in greenhouse gases are not correct. In addition, humidity levels have been coming down, which was not supposed to happen in response to an increase in greenhouse gases.
I think the urban heat island effect could go a long way in explaining the difference between satellite temperature data and NCDC data. This is supported to a degree by the fact that the temperature trend discrepancy is greater over the land than over the oceans.
Here is the other comment I made on Nick’s blog:
[My] feeling is that the systematic error in the satellite series is much worse than what is needed to explore a relatively small difference in trend such as this.
Heck, UAH and RSS don’t even have the same sign of trend from, say, 1998 to 2014. In fact, UAH is among the highest of the trends over this period, while RSS is the smallest and the only negative trend:
SERIES TREND (°C/decade)
GISTEMP 0.077
UAH 0.071
HadCRUT4 0.059
NCDC 0.042
RSS -0.049
The other point, which I think Nick was also trying to make, is that you need to look at more than one interval. Otherwise you don’t know how robust this result is (which I would guess is “not very”).
Well, with respect to the HotSpot, anyway, the Height-Latitude trends:
http://judithcurry.com/2015/03/04/differential-temperature-trends-at-the-surface-and-in-the-lower-atmosphere/#comment-680625
of MSU and RAOB are consistent, so to throw out the MSU, you have to throw out the RAOB also – it is not likely that whatever errors remain result in a great cooling just in the location where the models predicted the maximal warming.
Lucifer, the point is RSS and UAH are completely inconsistent with each other, at least in the recent decades. This does not bode well for comparison to surface records, when the variations between RSS and UAH are larger than the surface records.
If you were to go back and see how many different satellites they string together and have to correct systematic effects for, you probably shouldn’t be surprised that the long period satellite trends aren’t very reliable.
As to RAOB and MSU, you don’t know how much tuning MSU has done to get a good match (and yes, I think at least the opportunity for tuning is present).
Carrick,
I don’t get where you think RSS and UAH are inconsistent with one another. I know it’s not the best reference but wiki says: “…RSS and UAH TLT are now within 0.003 K/decade of one another.”.
http://www.skepticalscience.com/pics/UAHvsRSSvsGISS.png
ordvic, when you look at consistency between series, the way you do it is not by graphical representations but numerically. And you have to look at the quantity of interest, trend, not the time series.
Why this is a big issue here is because the satellites used are changed over time, and there’s a big systematic error you have to correct for, as the satellite orbit decays.
Anyway I think this is much more informative than a graph full of squiggly lines:
SERIES TREND (°C/decade)
GISTEMP 0.077
UAH 0.071
HadCRUT4 0.059
NCDC 0.042
RSS -0.049
Hopefully you can recognize that if UAH and RSS are more dissimilar than, e.g., UAH and the SAT series, there is not much leverage here to analyze the difference between the satellite TLT and ground-based SAT series.
The issue here is the difference between admitted errors, and true error. If you just use admitted errors, probably the ratio between TLT and SAT is significant.
It seems pretty unlikely to me, when you factor in the error in accuracy of measurement of the long-term trend, that there is much meat in this pot.
http://www.drroyspencer.com/wp-content/uploads/UAH-LT-vs-RSS-LT-1981-2010-base-period.png
Carrick, I sort of get what you’re implying, but the numbers you show there seem to point to similarities rather than not? I hate to ask and don’t mean insult, but could you clarify further for my understanding? Another squiggly:
http://www.drroyspencer.com/wp-content/uploads/LT-UAH-versus-RSS.gif
Again you’re practicing a form of graphical wiggleology. That’s not how objective analysis is done.
Instead, compare the difference in trends. If it is large compared to the difference in trend between satellite and surface temperature, there’s a problem.
ordvic:
The problem with these sorts of figures is your eye naturally correlates the variability (the eye is easily fooled).
What we are trying to look at though is trend, not variability, and the right way to look at that is objectively.
The high frequency portion of the signals are developed using single satellites (typically) so the systematic effects associated with splicing records between different satellites is not present.
So I’d expect the high-frequency portion of the signal to be reliable. If you wanted to look at amplification in the high-frequency portion of the signal, that should be a reliable measure.
What is surprising with respect to trend is how poorly UAH and RSS do compare to each other in the last 15-years (the equipment is better, you’d expect better agreement, not much worse).
Till this is adequately addressed, I think it’s a fool’s game to try to focus on the disparity between the trends in the series, especially if UAH is on the high side of the warming trend (over some periods its trend is larger than any of the SAT trends) whereas RSS is on the low side.
I hate to ask again and I don’t mean insult but could you clarify again for my better understanding? Another squiggly with trends:
http://www.drroyspencer.com/wp-content/uploads/LT-UAH-versus-RSS.gif
Lucifer, the point is RSS and UAH are completely inconsistent with each other, at least in the recent decades.
Well they are using different satellites ( AMSU ).
And the RSS excludes some areas and blends high- and mid-latitude bands.
So perhaps one shouldn’t expect the trends to be identical.
But for the period-of-record trends, UAH, RSS, and RATPAC are consistent in producing a similar pattern of warming.
And that pattern does not include the hot spot.
Sorry for the double post, I didn’t think the first one made it. Your last paragraph answered my question, thanks.
Lucifer:
It’s not the pattern of warming we’re measuring, it’s the trend. I addressed in my comments to ordvic why we might expect the high-frequency portion of the signals to agree well, since it’s when you have splices between satellites that systematic effects on the long-term trend are going to creep in.
As I pointed out to ordvic, “[w]hat is surprising with respect to trend is how poorly UAH and RSS do compare to each other in the last 15-years (the equipment is better, you’d expect better agreement, not much worse).”
Until this issue is straightened out, I just don’t think this is a good way to test for e.g., biases in the long-term surface temperature trend.
Do these systems measure the same space, vertically and geographically?
aaron, there are differences between practically all of the series, surface or satellite. As I’ve noted, the rate of warming varies over the surface. Because of that, differences in coverage will yield different biases in global mean temperature.
In my opinion, this uneven coverage makes comparison of long term trends between series without a bias correction for differences in coverage practically meaningless.
Carrick,
Yet vast areas of surface temperature are not measured in Africa, South America, the Antarctic and the Arctic. They are estimated and compiled into a global temperature estimate quoted to .001 accuracy: 2014 hottest ever by .001. Surely those trends need to be documented with error bars, as the UAH and RSS trends are, for investigation into the rationale for the delta. Systematic error in the satellite series should be compared to the errors in the adjusted and the unadjusted surface series.
Scott
Scott, no doubt there are issues for surface coverage too, and I agree with your recommendations.
But that is a completely different issue than whether you can compare satellite to surface station reconstructions, using the satellite reconstructions as gold standards, when in fact the variation in trend between the satellite reconstructions is actually as large or larger than the variation in trend between satellite and surface station reconstructions.
The definitive comparison between two time-series is provided by cross-spectral analysis, which no “climate scientist” seems capable of doing properly. Linear regression trends of any fixed duration can be readily shown to be a very crude band-pass filter of a time-series–with highly undesirable response characteristics. Since there are multi-decadal, trans-centennial and quasi-millennial oscillatory components in temperature series, reliance upon “trends” over only a few decades and the estimation of their CIs based on “red noise” models is emblematic of the triumph of simplistic supposition over bona fide scientific inquiry. In short, a pretentious academic farce.
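For readers unfamiliar with the cross-spectral comparison the comment advocates, here is a minimal sketch using `scipy.signal.coherence` on synthetic series (the sampling rate, signal frequency, and noise levels are invented; real use would feed two temperature anomaly records):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
n, fs = 1024, 12.0                           # e.g. monthly sampling (12/year)
t = np.arange(n) / fs

shared = np.sin(2 * np.pi * 0.5 * t)         # common 2-year oscillation
x = shared + 0.2 * rng.standard_normal(n)    # "surface" series
y = shared + 0.2 * rng.standard_normal(n)    # "satellite" series

# Welch-averaged magnitude-squared coherence: nperseg=256 gives several
# averaged segments, which the estimator needs to be meaningful.
f, Cxy = coherence(x, y, fs=fs, nperseg=256)

# Coherence is near 1 where the series share variance, near 0 elsewhere.
i = np.argmin(np.abs(f - 0.5))
print(f[i], Cxy[i])
```

Unlike a fixed-duration trend, this shows frequency-by-frequency where two records agree and where they diverge.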
Steven Mosher – We are monitoring the comments. :-) John Christy has sent a reply to the uncertainty questions which I hope you have read by now.
Roger Sr.
weird. you can take the time to ask zeke directly and get a direct answer, but not take the time to answer questions directly, but use an intermediary.
That’s fine if you are willing to take the same treatment next time zeke posts. are you ok with that?
Different people have different preferences for communicating on different topics, I do my best to accommodate/facilitate all worthwhile communications.
thats not the issue. I’m just asking Roger if he will complain when we respond to his questions indirectly through a third party. goose gander.
I don’t have a problem with them responding somewhere else, but they do need to provide links.
Ya Carrick I have no issues with them responding in any way they choose. I think their method has much to recommend it. People can submit questions in the comments and they can respond via whatever means they choose. That seems fair.
yes roger I read his response.
It doesn't answer Nick Stokes' concern.
A quick demonstration of how you calculated the significance would help.
http://moyhu.blogspot.com/2015/03/klotzbach-revisited-its-wrong.html
I think Mosher has a case of Selective Hostility. Let's have Joshua break this situation down for us.
Andrew
it's pretty funny. some folks clamor for public debate. And so we give them what they ask for: come and ask your questions. When the shoe is on the other foot, they don't exactly do unto others. That's fine, just noting it for future reference. some people have the time to ask questions, but have no time to answer them. It's hard to have a good faith dialogue when you have asymmetrical protocols.
The simple truth still works in a pinch.
Fair enough. I’m not familiar with the dialogs, how long did it take before RPs started sniping about Zeke not responding fast enough?
The problem is, Mosher, is that it’s obvious to everyone (but maybe you) that your purpose here seems to be running cover for most everything and everyone that supports AGW, and being hostile to most everything and everyone that is skeptical of it.
Eagerly awaiting your response informing us that’s not the case.
Andrew
Aaron. Not the issue.
The issue was a question.
Would the authors show up in the comments or file answers through Judith?
Either is ok.
If you are happy with either approach. So am I.
Make sense?
There was the hilarious Real Climate thread where Roger Pielke Sr.'s responses were demands that others answer his questions while refusing to answer others' questions.
Who you gonna believe, Pielke Pere or the Real Climate Rare?
================
Steven Mosher – John Christy is not a third party but is one of the co-authors of our paper. We will have a further response to your questions, but please be patient and avoid being hostile until we do. :-)
Roger Sr.
Typical petty BS!
And there was no hostility when Zeke Hausfather had a first(?) post here at Climate Etc. He was doing a pretty good job too, of answering questions, until you turned up to take over. I was reading more and commenting less. If you haven’t already done so, you owe Zeke an apology.
Amen
For moshe, read that as ‘until we are’.
===========================
Mosher avoiding being hostile is like a fish avoiding being wet.
Mushi is very fine fare when delicately prepared; just hacked up can be toxic.
===========
The reality is that our current surface temperature data is in question, which just makes the climatic picture even more uncertain, as if it were not uncertain enough.
Steve Mosher – While we are working on a further response to your questions and criticism, it would be useful for you to read the text below from our 2009 paper [did you actually read the text of our paper?]
“Since 1979, when satellite observations of global atmospheric temperature became available, trends in thermometer estimated surface warming have been larger than trends in the lower troposphere estimated from satellites and radiosondes as discussed in a recent Climate Change Science Program (CCSP) report [Karl et al., 2006]. Santer et al. [2005] presented three possible explanations for this divergence: (1) an artifact resulting from the data quality of the surface, satellite and/or radiosonde observations, (2) a real difference because of natural internal variability and/or external forcings, or (3) a portion of the difference is due to the spatial coverage differences between the satellite and surface temperature data. Santer et al. [2005] focused on the second and third explanations, finding them insufficient to fully explain the divergence. They suggest in conclusion that, among other possible explanations, ‘‘A nonsignificant trend differential would also occur if the surface warming had been overestimated by 0.05C per decade in the IPCC data.’’
While we disagree on Santer’s explanation for the divergence between the trends, he does indicate it is in the data. Our hypothesis is that (as we wrote in the paper)
“…we consider the possible existence of a warm bias in the surface temperature trend analyses.”
The McNider et al. paper, completed after the Klotzbach et al. paper (which I note you have been silent on), provides at least a partial reason for the difference.
Yes I read all the papers. Have all the data. And have even
Used your arguments. Essentially there is a problem.
Diagnosis will be aided by people making their entire data chains and code available. That includes RSS, UAH and everyone else. Just provide the tools for people to do the work themselves, and if they can't figure it out, it's their problem.
So we agree there is a problem. Solving it will go faster if folks open up their data and code.
The UAH software is available. Took a long time and not well publicized, but available.
So UAH now the cleanest, most transparent, method of acquiring temperature data?
===================
Since AFAEK no one has actually run through it, no, it is not the cleanest, and since it is roll-your-own code, probably not very transparent. If you read the description of the process, you will only run shrieking from the room. Bug-free? Well, don't bet your bottom dollar; same for RSS.
The clearest and most transparent are, IEHO, ccc-gisstemp created by Nick Barnes and the Clear Climate Code effort and the BEST effort. Nick Stokes also has some interesting stuff.
AFAEK IEHO
http://cdn.someecards.com/someecards/usercards/1339373114965_3579072.png
Unfortunately, us folks who are picking up the tab for the development costs don’t seem to have much say in whether we feel their value is worth what it’s costing us. That’s what liberal fascism is all about: if it feels good and someone else is paying for it, it’s a great idea.
I think it would be good if all the code and data were in one place for UAH and RSS both. This will allow people to answer their own questions about things like merging records from the different sats, as well as other questions.
ya. I have said it a hundred times. if a guy gives his code, i'll never ask him another question. it's the best explanation of what was done.
been saying that since 2007.
the whole purpose of writing a paper and sharing the code and data is so you DONT have to answer questions. people can just go see what you did
http://www.ncdc.noaa.gov/cdr/operationalcdrs.html for UAH and RSS.
I found one more graph that seems to be in line with this post:
http://www2.sunysuffolk.edu/mandias/global_warming/images/temperature_trends_1979.png
Shouldn’t that go to 2015 instead of 2011 or so?
The slope of the purple line (the central one) looks like it would give ~ 0.15 degrees per decade. Starting two years later would lower that by quite a bit.
Going through 2014 would do the same. The GCMs overestimate the amount of temperature rise. As so many Science, Nature, and GRL papers by the “team” have now shown, they underestimate the effects of the oceans in heat uptake and of ocean cycles.
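The endpoint sensitivity being discussed is easy to demonstrate on synthetic data (the underlying trend and the ENSO-like oscillation are invented values): an OLS trend over a few decades moves by a few hundredths of a degree per decade when the start or end year shifts.

```python
import numpy as np

# Synthetic series: 0.15 C/decade trend plus an ENSO-like ~4.5-year cycle.
years = np.arange(1979, 2015)
series = 0.015 * (years - 1979) + 0.15 * np.sin(2 * np.pi * (years - 1979) / 4.5)

def trend(y0, y1):
    """OLS trend in C/decade over the inclusive window [y0, y1]."""
    m = (years >= y0) & (years <= y1)
    return np.polyfit(years[m], series[m], 1)[0] * 10

# Shifting the window changes the fitted trend even though the underlying
# forced trend is identical in every case.
print(trend(1979, 2011), trend(1981, 2011), trend(1979, 2014))
```

This is the "crude band-pass filter" problem noted earlier in the thread: short-window trends pick up whatever phase of the oscillation the endpoints happen to sample.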
Good post, interesting comments. Thanks for the update on both the findings and the ‘controversy’. I researched this in some depth over the past two years for the ebook, and find almost all the (sometimes to the point of silly) researched ‘explanations’ reflected in the to and fro above.
The CMIP3/5-predicted tropical tropospheric hotspot (really just an amplification of expected surface warming by about 1.4x; Judith, in the post) is observationally 'missing'. It's supposed to be there. It isn't.
That is a BIG problem for warmunists; see the amusing SKS contortions if you think that statement is not true. On to the main excuses:
Excuse #1: Not a GHG signature. Wrong. AR4 WGI figure 9.1.c said it was, since any warming should cause it and one of 'any' is GHG.
Excuse #2: More ocean rather than land, which is undersampled by radiosondes. Wrong. Still arises over land-masked CMIP3 and CMIP5, just not as strongly. And oceans have sufficient radiosonde launches to know whether it's true. What, you think the US Navy operates no weather balloons on its carrier task forces? Or that those carriers do not operate in the tropics?
Excuse #3: Satellite measurements are sufficiently uncertain that the modeled amplification disappears inside the measurement error bars. (See comments above on whether these error bars have even been correctly calculated.) Note an error bar argument almost NEVER arises if something is cutting the warmunist way–most recent example being NASA GISS 2014 warmest year ever! (Uh, with a 68% chance of that statement being wrong, calculated by NASA GISS.) Note that uncertainty was the ultimate warmunist debate refuge on this issue at now defunct ClimateDialog. Defunct since warmunists won’t come out and dialog any more. Note further that if accepted as the explanation, this excuse says the Uncertainty Monster just ate the AR4 and AR5 ‘highly confident’ assertions. Oops. That won’t do…so:
Excuse #4: It was sort of there, but sort of isn't now because of the 'Pause'. Uh oh, #4 undoes the whole CO2/temperature strong linkage. So Mann's newest paper says there isn't a pause (well, in the NH). Pause denial. It would be fun to see Trenberth (pause is real 'cause deep oceans ate the heat), Mann (pause isn't real because we redefined ocean surface oscillations, looked only at NH, and…), and England (pause is real, but due to excess 'surface ocean La Nina', so watch out!) debate to square themselves away. Now you know why ClimateDialog went defunct.
“Oh what a tangled web we weave, when first we practice to deceive.” Marmion, Canto VI, XVII, Sir Walter Scott (1808). The poem is about the Battle of Flodden Field. It seems apt for the Battle of CAGW.
Excuse #1: Not a GHG signature. Wrong. AR4 WGI figure 9.1.c said it was, since any warming should cause it and one of 'any' is GHG.
A signature is unique. Since the hotspot could be caused by any warming, its presence would not be a unique signature. Its apparent absence is the issue.
Given the questionable status of the models
Given the large number of adjustments made to satellite data, Given the sparse network of direct tlt observations.
I would hesitate to draw any firm conclusion. All three are potentially out of whack.
Thought was clear. Never said unique. Since it isn’t. But…You arguing against AR4 WG1 figure 9.1.c? Not that AR4 has a sterling track record for being right. Essays Himalayan Glaciers and No Bodies being notable examples. But still… Sure looks like more uncomfortable contortion.
“Given the large number of adjustments made to satellite data”
Certainly, surface obs, SSTs, MSU, and RAOB all have issues.
But, why would there be MSU error only in the region where the Hot Spot was supposed to be?
Why would the pattern of trends for the quite independent measurements from MSU and RAOB be both incorrect in the same way?
Look one last time at the piccy.
The colors indicate the trends from 1979 through 2014 for GISS Model, RATPAC RAOB, UAH MSU, and RSS MSU:
http://climatewatcher.webs.com/HotSpot.png
The observations actually bear out the model in a number of ways:
1.) stratospheric cooling
2.) Arctic trend maxima with a steepening lapse rate
3.) Rough agreement with Antarctic trend
4.) a ‘slot’ of no change in the Southern ocean around 30 to 60 degrees south.
But no Hot Spot.
And, the models got NH warming right.
“. But…You arguing against AR4”
duh.
Roger,
You mention near the end of the post that:
“In the past, maximum temperature trends have shown closer agreement with lower tropospheric measurements. However, recent land surface data sets using homogenization corrections have reduced the trend differences in Tmin and Tmax. Now Tmax may be rising like Tmin in these analyses. But, this leads to an even bigger unexplainable physical discrepancy between the lower troposphere and corresponding surface trends as seen in the analyses.”
While this is true for the U.S., it is not true globally. During the period of overlap with the satellite record (1979-2014), homogenization actually slightly decreases the global land maximum temperature trend. See the figure below, based on GHCN v3 data:
http://i81.photobucket.com/albums/j237/hausfath/land%20max%20raw%20adj_zpsaspk5vm4.png
While discrepancies between surface and satellite records are certainly of interest, they don’t tell us much about surface temperature homogenization (and, over the U.S. at least, homogenized data seems to be more similar to UAH records than raw data).
http://rankexploits.com/musings/wp-content/uploads/2013/01/uah-lt-versus-ushcn-copy.png
One of the problems with models is the vertical resolution of the atmosphere. That means that if model results are compared, it is important they be on the same step. Of course modellers are well aware of the problem, but they will still try to minimise computing costs by using larger steps than they would like. I'm not suggesting that the authors were unaware of this problem, but readers might not be.
Hi Zeke- Thank you for contributing and presenting the analysis of the slight reduction in the global land maximum temperature trend.
The issue we are raising on minimum temperatures over land is a real effect, but if we are correct, it is confined to just near the surface. This is why we have concluded it contributes to the divergence in trends between the surface and lower troposphere. When the maximum and minimum temperatures are combined to create the mean, the mean trend would be expected to be larger than the lower tropospheric trend.
I assume you agree with us that there is a difference in trends between the surface and the lower troposphere, with the former larger, but let us know if you do not, and the reason.
Roger Sr.
Hi Roger,
I do agree with you that there is a noteworthy difference between surface and tropospheric trends. However, I see enough uncertainty in satellite reconstructions (e.g. the divergence between UAH and RSS over the last 15 years, work like Zou et al. that results in satellite-based records much closer to surface records, and the history of large adjustments to satellite records in general) that I'm reluctant to assume that the satellite record is the correct one. That's not to say that the surface record is necessarily correct; only that the error bars of both are large enough to make it difficult to determine which is correct at this point in time.
I'd guess in one of the two, adjustments go in two directions; in the other, not so much.
==============
Great comment, Zeke.
In my opinion, you hit all of the main points and did so concisely.
For people that are interested, the paper Zeke is talking about (I think) is Cheng-Zhi Zou, Mei Gao, and Mitchell D. Goldberg. “Error structure and atmospheric temperature trends in observations from the microwave sounding unit.” Journal of Climate 22.7 (2009): 1661-1681.
You can download it here.
Steven Mosher – I agree that all of the data should be available. I am unclear how this applies to the UAH analysis. On the homogeization of the surface data, each step in the homogeization for each location should also be openly and clearly presented.
Roger Sr.
data and adjustment code. both RSS and UAH adjust their
“I agree that all of the data should be available. I am unclear how this applies to the UAH analysis. On the homogeization of the surface data, each step in the homogeization for each location should also be openly and clearly presented.”
I think I’ve been pretty consistent in my position since 2007 when I asked Hansen and Jones and later Menne for their code and data.
We also made an attempt to get the code from RSS and UAH.
With GISS, CRU, NCDC, and now BE. we have all the data and code we need.
1. We have the raw data that is input.
2. We have the code that produces the final output.
3. we have station by station data ( before and after ).
These are the tools anyone needs to understand, criticize, reproduce and IMPROVE the end product. The data and the code ARE the science.
With RSS and UAH we don't have
1. The raw data as ingested
2. The actual code used to do the adjustments.
3. Before and after comparisons ( raw versus adjusted )
For example, ask how many people are aware of the fact that RSS uses GCM output to adjust data.
I’m thinking that one reason a scientist might not want to release code is that the condition of the code might tend to make the writer look unprofessional. Scientists are not coders, so the code could look kind of messy and hard to follow.
As a programmer, I've looked back on code I wrote years ago and could have easily asked what idiot wrote it :) Point being, I appreciate not wanting others to see code, at least some of it.
However, we all realize most scientists aren’t professional programmers. I think all the code should be released. We will be understanding.
Oops. Landed in moderation. Fell in the id-jid-iot trap.
“I’m thinking that one reason a scientist might not want to release code is that the condition of the code might tend to make the writer look unprofessional. Scientists are not coders, so the code could look kind of messy and hard to follow.”
1. Yes Jones made that excuse in the climategate mails.
2. Gisstemp was a little funky. we all understand. ClearClimateCode refactored it in Python.
3. Some people will never be satisfied. We provide SVN nightly and still some people want more. you cannot drive your decisions by positions that fools take.
As a programmer, I've looked back on code I wrote years ago and could have easily asked what idiot wrote it :) Point being, I appreciate not wanting others to see code, at least some of it.
1. durr me too. over the years I created a bunch of R packages. it's still painful to go back and look at it.
2. nic lewis has actually found coding mistakes. they get fixed. not a big deal.
However, we all realize most scientists aren’t professional programmers. I think all the code should be released. We will be understanding.
Yup
Believe most of the code is here.
http://www.ncdc.noaa.gov/cdr/operationalcdrs.html
“homogeization” should be homogenization. So much for spell check. :-)
This paper is a little old, but …
…
A new data set of middle- and upper-stratospheric temperatures based on reprocessing of satellite radiances provides a view of stratospheric climate change during the period 1979–2005 that is strikingly different from that provided by earlier data sets. The new data call into question our understanding of observed stratospheric temperature trends and our ability to test simulations of the stratospheric response to emissions of greenhouse gases and ozone-depleting substances. Here we highlight the important issues raised by the new data and suggest how the climate science community can resolve them.
…
http://www.arl.noaa.gov/documents/JournalPDFs/ThompsonEtal.Nature2012.pdf
http://popesclimatetheory.com/page38.html
Way too much effort is put into understanding little changes in the 130 year instrumented data.
You can Extrapolate short term data and follow it out of bounds if you choose.
Understand what caused the changes of the past 500 million years.
Understand what caused the changes of the past 50 million years.
Understand what caused the changes of the past 1 million years.
Understand what caused the changes of the past 20 thousand years.
Understand what caused the changes of the past 10 thousand years.
To understand what is going on now, you can make that easy by understanding what went on before now. We have that data. We can do that. I do that.
Hi Zeke – Thank you for your follow up. You wrote
“I do agree with you that there is a noteworthy difference between surface and tropospheric trends.”
John Christy will address the details of the uncertainty issues you and Steve Mosher have raised.
Until he does, this report, Scientific Comment by Roger Pielke Sr. and Tom Chase with Input from John Christy and Tony Reale, https://pielkeclimatesci.files.wordpress.com/2009/10/r-278b.pdf, provides more insight into the way the MSU data is used. At the start of the report, written in 2009 regarding an exchange of views with Ben Santer et al., we state:
“In order to continue the discussion, I invited John Christy and Tony Reale to respond to the two papers. Their input provides further documentation of the value of using the NCEP Reanalysis for climate trend assessments, and as an independent assessment tool to the University of Alabama at Huntsville (UAH) lower tropospheric MSU trend analyses.”
Among the statements are
“it is important to know that the satellite retrieval coefficients are updated weekly by radiosonde comparisons (Christy et al. 2003). Thus the change in temperature with time is dependent on the radiosondes, not the satellites. As a result, the time series of NCEP and UAH data are essentially independent.”
I also refer you (and Steve Mosher) to the paper
Christy, J.R., B. Herman, R. Pielke, Sr., P. Klotzbach, R.T. McNider, J.J. Hnilo, R.W. Spencer, T. Chase and D. Douglass, 2010: What do observational datasets say about modeled tropospheric temperature trends since 1979? Remote Sensing, 2(9), 2148-2169. https://pielkeclimatesci.files.wordpress.com/2010/09/r-358.pdf
The abstract reads in part
“Updated tropical lower tropospheric temperature datasets covering the period 1979–2009 are presented and assessed for accuracy based upon recent publications and several analyses conducted here. We conclude that the lower tropospheric temperature (TLT) trend over these 31 years is +0.09 ± 0.03 °C decade−1. Given that the surface temperature (Tsfc) trends from three different groups agree extremely closely among themselves (~ +0.12 °C decade−1), this indicates that the ‘scaling ratio’ (SR, or ratio of atmospheric trend to surface trend: TLT/Tsfc) of the observations is ~0.8 ± 0.3. This is significantly different from the average SR calculated from the IPCC AR4 model simulations, which is ~1.4. This result indicates the majority of AR4 simulations tend to portray significantly greater warming in the troposphere relative to the surface than is found in observations.”
Best Regards
Roger Sr.
Differences between UAH and RSS TLT temperature trends:
Several comments on the difference between UAH and RSS have come to light. The difference in the 36-year global TLT trends is small, with UAH about +0.017 C/decade warmer than RSS. The main reason for this is our different approaches to correcting for east-west satellite drift (i.e. diurnal drift): UAH uses an empirical method based on actual satellite readings, while RSS uses a climate model estimate.

In the period from 1979 to about 2000, the main effect of diurnal drift was a spurious cooling of temperatures, so a correction needs to be applied to warm them back up. After 2000, the opposite occurs: the main satellite (NOAA-15) drifts to warmer temperatures, so these must be cooled back down. The UAH corrections are smaller than those of RSS, so relative to UAH, RSS warms up the data in the period 1979-2000 more than UAH does. After 2000, UAH does not apply a diurnal correction because the main satellites we use were non-drifters; however, there is still a slight spurious warming in UAH due to our necessary use of NOAA-15, which we have not cooled back down (v6.0 will have this fixed). Since RSS has a relatively large correction to cool off the post-2000 data, their time series drops quite a bit relative to ours and to radiosondes. This was examined in several places, e.g. Christy et al. 2011, Int. J. Remote Sens. Since UAH has a bit too much warming after 2000 and RSS too much cooling, I've advised folks to simply take the average of our datasets for the best estimate.

I'm the author of the upper air temperature section for the annual BAMS report on climate. I write that the 1979-2014 global TLT trend is +0.13 C/decade +/- 0.02 C/decade, where this error range represents measurement error as it encompasses all of the estimates from radiosondes and satellites and even ERA-I.
There is also statistical uncertainty which I calculate as +/- 0.06 C/decade using a reduction in degrees of freedom due to autocorrelation of the annual anomalies (N = 36, but Neff ~ 32). As an added note, the mean global TLT trend of 102 CMIP-5 RCP4.5 models for 1979-2014 is +0.27 C/decade with a standard deviation of +0.05 C. While this is significant, it becomes highly significant when considering TMT where the average model trend is over three times that of the observations from 1979-2014.
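A sketch of the kind of autocorrelation-adjusted significance calculation Christy describes (this uses the common lag-1 recipe Neff = N(1-r1)/(1+r1), e.g. Santer et al. 2000; his exact procedure may differ):

```python
import numpy as np
from scipy import stats

def trend_with_ci(y, years, alpha=0.05):
    """OLS trend with a confidence half-width widened for lag-1 autocorrelation.

    The effective sample size Neff = N*(1-r1)/(1+r1) replaces N in the
    standard-error formula; a small r1 (~0.06) with N=36 gives Neff ~ 32,
    as quoted in the comment above.
    """
    n = len(y)
    slope, intercept = np.polyfit(years, y, 1)
    resid = y - (slope * years + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]      # lag-1 autocorrelation
    neff = n * (1.0 - r1) / (1.0 + r1)                 # effective sample size
    sxx = np.sum((years - years.mean()) ** 2)
    se = np.sqrt(np.sum(resid ** 2) / (neff - 2)) / np.sqrt(sxx)
    half_width = stats.t.ppf(1.0 - alpha / 2.0, neff - 2) * se
    return slope, half_width
```

Feeding in 36 annual anomalies returns the trend and a half-width analogous to the +/- 0.06 C/decade statistical uncertainty quoted above (expressed per year here; multiply by 10 for per decade).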
it would be great to see the code for these adjustments.
It would also be instructive to see a step-by-step documentation that starts with the raw data and then shows the intermediate steps for each adjustment. As I noted above, I think some people who use RSS would not be too happy to find out that they use GCM output to correct the data.
Trend significance – the ratio is the point.
There was a question raised about significance regarding the trends of the various time series. I do not have the exact data utilized in the report above as UAH is closed again because of snow, ice and cold. I was able to generate one example here – the comparison of upper air (UAHLT) and surface (NCDC) data over land. The annual anomaly values are below.
The trend of the difference of the time series is significantly different from zero. We deal with the residuals so as to eliminate the dependent, common variability. The key point, however, is not that the trend of the difference of these two time series is significantly different from zero; it is that the ratio of their trends, here 0.73 (upper/sfc), is much less than model projections of 1.1 to 1.2. Indeed for oceans or the tropics, the differences in ratios between observed and modeled are also large, especially in the tropics. In observations, the upper air is simply not warming at a rate consistent with model projections given a prescribed surface trend (Christy et al. 2007 JGR, Douglass and Christy 2013 EE). This suggests feedbacks in the troposphere (likely cloud-related) are allowing joules to escape to space rather than accumulating to increase the temperature in accordance with theory.
Annual anomalies of temperature:
70S-70N Land
Year    UAHLT    NCDC SFC    UAH-NCDC
1979 -0.225 -0.444 0.219
1980 -0.120 -0.327 0.207
1981 -0.038 -0.118 0.080
1982 -0.401 -0.515 0.114
1983 -0.112 -0.123 0.011
1984 -0.556 -0.551 -0.005
1985 -0.384 -0.540 0.156
1986 -0.330 -0.331 0.001
1987 -0.001 -0.208 0.207
1988 0.078 -0.061 0.139
1989 -0.182 -0.217 0.035
1990 0.002 -0.017 0.019
1991 0.020 -0.071 0.091
1992 -0.411 -0.346 -0.065
1993 -0.320 -0.280 -0.040
1994 -0.113 -0.149 0.036
1995 0.067 0.148 -0.081
1996 -0.157 -0.253 0.096
1997 -0.063 0.057 -0.120
1998 0.514 0.328 0.186
1999 0.066 0.146 -0.080
2000 -0.041 0.018 -0.059
2001 0.105 0.191 -0.086
2002 0.140 0.315 -0.175
2003 0.164 0.260 -0.096
2004 0.049 0.196 -0.147
2005 0.296 0.421 -0.125
2006 0.272 0.287 -0.015
2007 0.357 0.481 -0.124
2008 0.151 0.239 -0.088
2009 0.274 0.260 0.014
2010 0.550 0.438 0.112
2011 0.186 0.264 -0.078
2012 0.256 0.277 -0.021
2013 0.297 0.382 -0.085
2014 0.229 0.410 -0.181
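As a check, the trends and their ratio can be recomputed from the numbers as posted (plain OLS here; the residual-based significance treatment Christy describes is not reproduced):

```python
import numpy as np

# Annual anomalies transcribed from John Christy's comment above
# (70S-70N land): year, UAH TLT, NCDC surface.
rows = [
    (1979, -0.225, -0.444), (1980, -0.120, -0.327), (1981, -0.038, -0.118),
    (1982, -0.401, -0.515), (1983, -0.112, -0.123), (1984, -0.556, -0.551),
    (1985, -0.384, -0.540), (1986, -0.330, -0.331), (1987, -0.001, -0.208),
    (1988,  0.078, -0.061), (1989, -0.182, -0.217), (1990,  0.002, -0.017),
    (1991,  0.020, -0.071), (1992, -0.411, -0.346), (1993, -0.320, -0.280),
    (1994, -0.113, -0.149), (1995,  0.067,  0.148), (1996, -0.157, -0.253),
    (1997, -0.063,  0.057), (1998,  0.514,  0.328), (1999,  0.066,  0.146),
    (2000, -0.041,  0.018), (2001,  0.105,  0.191), (2002,  0.140,  0.315),
    (2003,  0.164,  0.260), (2004,  0.049,  0.196), (2005,  0.296,  0.421),
    (2006,  0.272,  0.287), (2007,  0.357,  0.481), (2008,  0.151,  0.239),
    (2009,  0.274,  0.260), (2010,  0.550,  0.438), (2011,  0.186,  0.264),
    (2012,  0.256,  0.277), (2013,  0.297,  0.382), (2014,  0.229,  0.410),
]
years, tlt, sfc = (np.array(c) for c in zip(*rows))

tlt_trend = np.polyfit(years, tlt, 1)[0] * 10    # C/decade
sfc_trend = np.polyfit(years, sfc, 1)[0] * 10
diff_trend = np.polyfit(years, tlt - sfc, 1)[0] * 10

print(tlt_trend, sfc_trend, tlt_trend / sfc_trend, diff_trend)
```

The surface trend exceeds the TLT trend, the ratio comes out near the 0.73 quoted above, and the trend of the UAH-NCDC difference is negative.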
John Christy:
Trend ratios by themselves are meaningless, unless you’ve removed virtually all of the natural variability affecting the trend estimate.
Suppose you are fitting for a period where you have CO2 forcing plus some contribution from e.g., AMO. Then you'd get:

T_trop = T_trop(CO2) + T_trop(AMO) and T_sfc = T_sfc(CO2) + T_sfc(AMO),

so the measured ratio is R = [T_trop(CO2) + T_trop(AMO)] / [T_sfc(CO2) + T_sfc(AMO)].

Unless you know the magnitude and sign of T_trop(AMO) and T_sfc(AMO), there is no way to back out the forced ratio T_trop(CO2)/T_sfc(CO2) from the measurement of R. Because T_trop(AMO) and T_sfc(AMO) can end up having opposite sign, or T_sfc(CO2) and T_sfc(AMO) could partially cancel, leaving you with a large ratio.

I'd suggest that this ratio test just as plausibly tells us how important natural variability is and whether it is contributing to the current warming period. Getting a ratio close to the theoretical value would suggest that natural variability is small, for example.

A larger ratio of R than expected from theory would suggest it is. You really have no way to know whether the rate discrepancy you are seeing is due to a breakdown in the theory or just due to the effect of other influences (such as AMO).
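A toy arithmetic version of this confounding argument, with invented numbers: even if the theoretical amplification of 1.4 holds exactly for the forced component, an unknown natural contribution shifts the observed trend ratio.

```python
# All values illustrative, not estimates of the real climate system.
amp = 1.4            # assumed forced tropospheric amplification (theory)
sfc_forced = 0.15    # C/decade, forced surface trend
sfc_nat = 0.05       # natural (e.g. AMO) surface contribution
trop_nat = -0.02     # natural tropospheric contribution, opposite sign

# Observed ratio mixes forced and natural components in both layers.
r_obs = (amp * sfc_forced + trop_nat) / (sfc_forced + sfc_nat)
print(r_obs)  # ~0.95: well below 1.4, yet the forced ratio is still 1.4
```

So a measured ratio far from 1.4 does not by itself distinguish "theory is wrong" from "natural variability is large."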
Despite warts and all in both data sets, I strongly suspect that the patent UAH–NCDC discrepancy is real, and terribly misleading. LT satellite data, after all, is virtually immune to UHI effects, whereas NCDC “global” data is primarily urban. That situation scarcely provides a reasonable basis for testing GHG theory, which itself handles the surface-to-atmosphere heat transfer unrealistically, failing to acknowledge the empirically demonstrated primacy of moist convection. The upshot is that “climate science” knows far, far less than its practitioners imagine.
Differencing temporal and spatial averages of noisy data sets and extracting a trend therefrom is truly a breathtaking leap in the dark.
Eli, you don’t have to difference them to get a scaling factor but when the denominator is zero it isn’t all that useful. Perhaps if RSS, UAH and RATPAC were considered more “scientific” than balloonmometers everyone could just accept that there isn’t a Hot Spot and the stratosphere isn’t cooling very much.
You are assuming linear scaling
A question-at-large re UAH and RSS TLT datasets:
Do these refer to (or are biased towards) “clear-sky” conditions? (KPPCM2009 doesn’t specify)
If so, that data largely refers to *subsiding* air, so uncoupled from the surface. That would make surface and satellite trends incommensurable.
If the GCMs estimate “all-sky” TLT effects, they also should differ.
Klotzbach et al. are right, the Mann apologists are wrong. I don’t like the graphs in Klotzbach et al. because they lack intuitive power. What they say in so many words and weird graphs was clear to me in 2010 [see “What Warming? Satellite view of global temperature change,” figures 15, 24]. I demonstrated that ground-based data sets like HadCRUT3 were exaggerating warming by introducing a phony temperature rise of 0.1 degrees Celsius between 1979 and 1997. I later discovered that HadCRUT, GISS, and NCDC had collaborated by using common computer processing of their temperature data, very likely to bring them into register. The footprints of this processing are on their publicly available temperature curves and consist of sharp upward spikes, all in exactly the same positions. Satellite data (both UAH and RSS) do not show these footprints, nor do they show the temperature rise.

The eighties and nineties include an ENSO wave train consisting of five El Nino peaks with La Nina valleys in between. The global mean temperature in such a case is the midpoint between the tip of an El Nino peak and the bottom of its neighboring La Nina valley. When you mark these points with a dot, as I did in figure 15, they line up as a horizontal straight line, proving that there was no global temperature rise in the eighties and nineties. It constitutes a no-warming zone equivalent to the current pause of 18 years that these two papers are about. The authors are ignorant of the existence of this older hiatus/pause/whatchamacallit because of the fake warming promoted by GISS, NCDC, and HadCRUT that covers it up. I have been calling attention to this for five years now, but none of these “climate” scientists who write about it has grasped what it means. What these two hiatuses have in common is that carbon dioxide kept increasing but was unable to cause the warming that the greenhouse theory demands.
This invalidates the Arrhenius greenhouse theory used by the IPCC. And if you think two hiatuses is hard to take, Ferenc Miskolczi has unearthed a third one in NOAA radiosonde records going back to 1948. This one was even longer and lasted 61 years, during which atmospheric carbon dioxide increased by 21.6 percent.

The fake warming from the eighties and nineties continues into the twenty-first century on top of the pause platform, with the absurd result that in their records the 2010 El Nino stands taller than the 1998 super El Nino does. This is clearly impossible from satellite records, but it was the only way they could crown 2014 as the warmest year ever.

Having three hiatuses instead of one, as these people think, changes our outlook on the nature of global climate change. The current hiatus and the one in the eighties and nineties are almost end to end. They are separated only by the super El Nino of 1998 and the short step warming that followed it. As we know, the IPCC was established in 1988 after Hansen made his pitch to the Senate about global warming. This means that for over two thirds of its existence the IPCC has been in the shadow of this combined “hiatus” of warming, as the true believers like to think of it. The hiatus stage is clearly the normal state of our climate. Any warming attributed to Hansen’s greenhouse effect is clearly misidentified and has a natural cause.
It’s all urban heat island effect. The rising divergence of minimum temperature on land vs. the lower troposphere is typical of UHI. Concrete surfaces radiate heat at night, resulting in higher minimum temperatures.
While UHI effects are, no doubt, an important component of the LT-surface temperature “trend” discrepancy, the fact that thermalization of insolation takes place primarily at the surface should not be forgotten. Thus the magnitude of variability around the long-term steady-state temperature average is largest at the surface, is considerably smaller at the standard height of station thermometers, and is progressively smaller aloft in the troposphere. Such is clearly the case with the diurnal range, which almost invariably DECREASES with altitude. Inasmuch as entropy always acts to equalize temperatures, a similar diminution of range of multi-decadal and longer oscillations cannot be ruled out.
Try this experiment. Put an infrared thermometer 6.6 feet above hot asphalt. See if the thermometer can measure the temperature of the asphalt.
The temperatures in question here are NOT being measured by infra-red thermometers.
Hi All – In summary on this post, it is concluded that the difference between the surface and lower tropospheric temperature trends is real and that the models do not accurately simulate this differential trend. There remain, however, disagreements on the magnitude of the uncertainties in both data sets. It was reported that the UAH code is available so Steve Mosher and others can explore this further, if they wish.
Thank you for the interesting feedback and comments, which have further confirmed the robustness of our paper.
Roger A Pielke Sr.
On a previous thread, I showed how one CMIP5 model produced historical temperature trends closely comparable to HADCRUT4. That same model, INMCM4, was also closest to Berkeley Earth and RSS series.
Curious about what makes this model different from the others, I consulted several comparative surveys of CMIP5 models. There appear to be 3 features of INMCM4 that differentiate it from the others.
1. INMCM4 has the lowest CO2 forcing response, at 4.1 K for 4xCO2. That is 37% lower than the multi-model mean.
2. INMCM4 has by far the highest climate system inertia: deep ocean heat capacity in INMCM4 is 317 W yr m⁻² K⁻¹, 200% of the mean (which excluded INMCM4 because it was such an outlier).
3. INMCM4 exactly matches observed atmospheric H2O content in the lower troposphere (215 hPa) and is biased low above that. Most others are biased high.
So the model that most closely reproduces the temperature history has high inertia from ocean heat capacities, low forcing from CO2 and less water for feedback.
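As a quick consistency check on item 1, the multi-model mean implied by the quoted “37% lower” can be backed out; this is an inference from the numbers above, not a published value:

```python
# If INMCM4's 4xCO2 response of 4.1 K is 37% below the multi-model mean,
# the implied mean response is 4.1 / (1 - 0.37). Inference only.
inmcm4_response = 4.1                         # K, for 4xCO2
implied_mean = inmcm4_response / (1 - 0.37)   # back out the mean
print(round(implied_mean, 1))  # ~6.5
```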
I’m not fond of climate models, but I’m warming up to this one.
(Oh wait, it’s by the Russians! Can you spell “Big Oil?” sarc/off)
I Now May Concede Maturing for models.
====================
Good one, kim.
How about: It’s Not the Most Convenient Model
http://berkeleyearth.org/graphics/model-performance-against-berkeley-earth-data-set#figure38-inmcm4-vs-berkeley-earth
Is there any easy way to see what it does for the next 15 years?
==============
kim, if you’re asking what INMCM4 projects for the near future, I can provide an answer. Remember I said at the beginning that CMIP5 models always project warming; they cannot bring themselves to show cooling. This is especially true of the future, and also true of INMCM4. (Maybe you have to sign a pledge to join the club.)
So INMCM4 shows a rise of 0.16C over this coming decade, 2015-2024, one of the lower estimates (this comes from the Willis dataset).
I was particularly curious about the next 15 years of the INMCM4 vs BEST comparison, which ends in 2000.
However, thanks for your answer, too.
==============
Comparing BEST Land & Ocean to INMCM4 over last 15 years.
Best data is here: http://berkeleyearth.lbl.gov/auto/Global/Land_and_Ocean_summary.txt
From 1998 to 2014: BEST shows warming of 0.10C/decade; INMCM4 shows 0.12C/decade.
From 2005 to 2014: BEST shows a plateau of 0.001C/decade; INMCM4 shows 0.09C/decade.
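For anyone wanting to reproduce such decadal trends from the linked BEST file, here is a minimal ordinary-least-squares sketch; the anomaly values below are made up for illustration, not the actual BEST series:

```python
# Ordinary-least-squares trend in degC/decade from annual anomalies.
# The anomaly series below is fabricated for illustration (a steady
# 0.01 C/yr rise), not the actual BEST data.

def decadal_trend(years, anoms):
    """OLS slope, converted from degC/year to degC/decade."""
    n = len(years)
    my = sum(years) / n
    ma = sum(anoms) / n
    num = sum((y - my) * (a - ma) for y, a in zip(years, anoms))
    den = sum((y - my) ** 2 for y in years)
    return 10.0 * num / den  # per-year slope -> per-decade

years = list(range(1998, 2015))             # 1998..2014 inclusive
anoms = [0.01 * (y - 1998) for y in years]  # 0.10 C/decade by construction
print(round(decadal_trend(years, anoms), 2))  # 0.1
```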
Thanks again.
=====
Dear Roger, do you really believe spaghetti code can be untangled in a day, even were anyone interested in doing so? This, of course, explains much about your faith in your own work.
Judging by the size of that double chin Eli appears to be quite adept in untangling spaghetti provided the destination is Eli’s pie hole.
Fat bunny jokes? Shirley you jest
Fat a$$hole joke.
P.S. Josh Halpern (aka Eli Rabbit) thank you for the response
“Believe most of the code is here.
http://www.ncdc.noaa.gov/cdr/operationalcdrs.html”
Roger Sr.
Now some, not Eli to be sure, might ask why it was Eli the bunny who pointed this out, and not John Christy, who shirley new? YMMV on the answer, dearest Roger.
The warming rates of the lower troposphere and global SSTs are very similar. Higher rates of warming on land are likely due to the drying of continental interiors from the warming of the AMO.
http://www.woodfortrees.org/plot/crutem4vnh/from:1979/trend/plot/uah-land/normalise/trend/plot/hadsst3gl/from:1979/trend
E.g. “Key role of the Atlantic Multidecadal Oscillation in 20th century drought and wet periods over the Great Plains”
http://www.atmos.umd.edu/~nigam/GRL.AMO.Droughts.August.26.2011.pdf
Also with land and global lower troposphere..
http://www.woodfortrees.org/plot/crutem4vnh/from:1979/trend/plot/uah/normalise/trend/plot/uah-land/normalise/trend/plot/hadsst3gl/from:1979/trend
That kind of makes sense since ocean is 70% of the surface. For the surface temp record, most thermometers are on land. The sats get a much better sample, in terms of coverage, of the Earth’s temp.
Sorry I shouldn’t have selected normalise on those:
http://www.woodfortrees.org/plot/crutem4vnh/from:1979/trend/plot/uah/trend/plot/hadsst3gl/from:1979/trend/plot/uah-land/trend
Rates of sea surface warming are better seen in the context of the same AMO mode. If warming rates from the 1940s continued at a similar pace, that would amount to roughly an extra 0.5°C of warming by 2100 AD:
http://www.woodfortrees.org/plot/hadsst3gl/from:1944/plot/hadsst3gl/from:1944/trend/plot/hadsst3gl/from:1979/trend
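A back-of-envelope check of that ~0.5°C figure, assuming the post-1944 HadSST3 trend is roughly 0.06°C/decade (a value read off the linked plot, i.e. an assumption, not a published number):

```python
# Back-of-envelope check of the "~0.5 C extra by 2100" figure.
# Assumption: the 1944-onward HadSST3 trend is roughly 0.06 C/decade
# (read off the linked plot, not a published number).
trend_per_decade = 0.06
decades_to_2100 = (2100 - 2015) / 10.0   # from roughly the present
extra_warming = trend_per_decade * decades_to_2100
print(round(extra_warming, 2))  # ~0.51
```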
Distribution of population and economic activity is fractal-like over land, with high population density and vigorous economic activity covering only a tiny fraction of its area. Population density and economic activity increased at a higher rate over the last few decades where they were already high; the phenomenon is called urbanization.
Meteorological stations are not located randomly relative to this fractal. They are kept close to places with a higher-than-average rate of increase in local population density or economic activity (like airports), just to keep maintenance costs low.
Therefore there is a local effect on them from changes of land use, which can be called the generalized temporal UHI (Urban Heat Island) effect.
The magnitude of this effect is under debate, but from the data provided by Klotzbach et al. a lower bound can be calculated, which is about 0.15°C per doubling of local population density (a proxy for economic activity).
As global population has doubled almost twice during the 20th century, a considerable fraction of the temperature increase observed by land-based stations is due to this local effect and contributes next to nothing to a true global average.
By the way, warming of 0.15°C (up to 0.25°C) per doubling of local population density is a reasonable figure; it is supported by direct observation of UHI at many places.
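The commenter’s arithmetic can be sketched as follows; the 0.15°C-per-doubling figure is the comment’s own lower-bound estimate, not an established value:

```python
import math

# Sketch of the comment's arithmetic: local UHI warming compounded over
# doublings of population density. The 0.15 C/doubling lower bound is the
# commenter's estimate from Klotzbach et al., not an established figure.

def uhi_bias(pop_ratio, per_doubling=0.15):
    """Local warming (degC) after population grows by factor pop_ratio."""
    return per_doubling * math.log2(pop_ratio)

# "Doubled almost twice" over the 20th century ~ a factor of 4:
print(round(uhi_bias(4.0), 2))  # 0.3
```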
The way mainstream climate science is trying (and failing) to find a UHI effect on temperature trends is deeply flawed. The urban/rural classification used for this purpose is static, while we should look for changes in population density and economic activity over time, a task which needs an entirely different input dataset.
There is an explanation for the “Differential temperature trends at the surface and in the lower atmosphere” : UHI effect on stations.
There is an explanation for the missing hot spot : UHI effect on stations.
There is an explanation for the cooling bias of the discontinuities of raw series: UHI effect on stations.
There is an explanation for the divergence of dendros : UHI effect on stations.
There is an explanation for the too weak glaciers melting : UHI effect on stations.
There is an explanation for the lack of snow deficit : UHI effect on stations.
There is an explanation for the divergence of ocean warming: UHI effect on stations.
Only one explanation for all this? Is it really too cheap?
And for the politics there were Urbane Heat Incentives.
===============