by Donald Rapp
Santer et al. (2005) emphasized that “a robust feature” of climate models is that increasing greenhouse gas concentrations will amplify warming in the middle and upper tropical troposphere (compared to the surface). It was then with some consternation that they noted that the data do not support this prediction; indeed, surface warming typically exceeds tropospheric warming.
As Klotzbach et al. (2009) pointed out:
“Santer et al. (2005) presented three possible explanations for this divergence: (1) an artifact resulting from the data quality of the surface, satellite and/or radiosonde observations, (2) a real difference because of natural internal variability and/or external forcings, or (3) a portion of the difference is due to the spatial coverage differences between the satellite and surface temperature data.”
Evidently, the failure of the data to support amplification of warming in the troposphere is a serious problem for the credibility of climate models, and climate modelers would like to shift responsibility onto the data. Santer et al. focused on the second and third explanations, saying it was “more plausible” that “residual errors” occurred in some data sets, and they suggested that the data sets that do show increased temperature in the troposphere are more reliable than those produced by the UAH group (cf. Christy et al., 2007). Klotzbach et al. (2009) presented considerable evidence that surface measurements over land often contain biases and effects due to their local surroundings. Indeed, one of the authors (Pielke Sr.) has written extensively on this subject. The nature of most of these biases is to increase measured surface temperatures. Thus Klotzbach et al. (2009) concluded that a significant factor in the discrepancy between climate models and measured temperature data may be that the measured surface temperatures are too high.
Thorne et al. (2011) provided a detailed history of the evolution of measurements of tropospheric temperatures, whether by radiosonde or from satellite instruments. Early satellite measurements indicated far less warming in the troposphere than was found at the surface by land based thermometers. This caused challenges for climate modelers who predicted that tropospheric temperatures would rise with surface temperatures although the stratosphere would cool. As the years went by, adjustments and corrections of satellite measurement techniques reduced the gap between measured tropospheric and surface temperatures but a significant gap still remains. Radiosonde measurements seem to involve greater uncertainty and variability.
In interpreting the latest results, Thorne et al. (2011) seemed determined to (1) minimize differences between tropospheric and surface temperatures, and (2) emphasize warming in the recent part of the record. They said: “For the surface temperatures it shows (1) very good agreement between the three analyses [NOAA, NASA and HadCRU]; and (2) the trend has remained quite stable over more than a decade”. That the analyses of surface temperatures by NOAA, NASA and HadCRU are in good agreement is no surprise since they use similar databases and similar data processing methods. That the trend (temperature change per decade) was stable proves exactly nothing. There is no a priori reason to believe that the trend should remain constant, and indeed, tropospheric measurements suggest otherwise. In summing up the tropospheric measurements, Thorne et al. (2011) said:
“In summary, the most recent versions of all datasets do not support the conclusion of a significant difference in trend between the surface and troposphere when considering (1) the structural uncertainty (as evidenced by the spread) in the tropospheric trend estimates, (2) the very likely remaining cold bias in the radiosonde trend estimates, and (3) the fact that the tropospheric trend has a small stratospheric cooling component”.
This is a highly debatable conclusion. First, by relying only on the trend, rather than yearly variations, they eliminate a great deal of detail. Second, their Figure 10 indicates that the measured trend at the surface is three times the trend in the troposphere. Third, the trend in the troposphere has varied widely with time, whereas the trend of land measurements has been stable since 1990. Finally, Thorne et al. (2011) seemed unable to recognize the obvious feature shown in Figure 1: tropospheric temperatures made a step-function rise after the great El Niño of 1998 and were fairly constant before and after it. The “trend” that they use is not a steady rise over ten years, as they assume, but actually a one-time rise.
Figure 1. UAH globally averaged satellite-based temperature measurements of the lower atmosphere.
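The two readings of the record just described, a steady linear trend versus a one-time step at the 1998 El Niño, can be compared directly. Here is a minimal sketch in Python using synthetic numbers chosen to mimic a step-like anomaly series (not the actual UAH data):

```python
# Sketch: compare a linear-trend fit with a step-function fit on a
# synthetic monthly anomaly series (illustrative values, not UAH data).
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(1979, 2011, 1 / 12.0)          # 1979-2010, monthly

# Synthetic series: flat at -0.05 C before 1998, flat at +0.20 C after,
# plus weather noise -- the "step" reading of the record.
truth = np.where(months < 1998, -0.05, 0.20)
anom = truth + rng.normal(0.0, 0.1, months.size)

# Model 1: a single straight line over the whole record.
slope, intercept = np.polyfit(months, anom, 1)
resid_linear = anom - (slope * months + intercept)

# Model 2: two flat levels with a break at 1998.
before, after = anom[months < 1998].mean(), anom[months >= 1998].mean()
resid_step = anom - np.where(months < 1998, before, after)

print(f"linear fit RMS residual: {np.sqrt(np.mean(resid_linear**2)):.3f}")
print(f"step fit RMS residual:   {np.sqrt(np.mean(resid_step**2)):.3f}")
```

For data of this shape the step model leaves smaller residuals, even though a linear fit still produces a positive slope; this is why the two interpretations cannot be distinguished by trend alone.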
In their conclusion, Thorne et al. (2011) said:
“Overall, there is now no longer reasonable evidence of a fundamental disagreement between models and observations with regard to the vertical structure of temperature change from the surface through the troposphere. This is mainly due to a much better understanding of the real level of uncertainty in estimates of past changes and expectations from climate models. Ironically, elucidation of the true (large) degree of uncertainty in actual trends from observations and expected trends from models has led to greater confidence that they are not inconsistent”.
This conclusion is debatable.
Christy et al. (2010) updated tropical lower tropospheric temperature datasets covering the period 1979–2009 and assessed them for accuracy. As Christy et al. (2010) pointed out:
“The temperature of the tropical lower troposphere (TLT, 20°S–20°N) figures prominently in discussions of climate variability and change because it (a) represents a major geographic portion of the global atmosphere (about one third) and (b) responds significantly to various forcings. For example, when the ENSO mode is active, TLT displays a highly coupled, though few-month delayed, response, with a general warming of the tropical troposphere experienced during El Niño events. The TLT also responds readily to the impact of solar scattering anomalies when substantial volcanic aerosols shade the Earth following major volcanic eruptions …. In terms of climate change due to increasing greenhouse gases …, climate models project a prominent warming of the TLT which in magnitude is on average twice as large … as changes projected for the surface.”
Christy et al. (2010) further asserted:
“The magnitude of the trend in recent decades of TLT has become controversial because of differing views on … whether the relationship between the observed temperature trend of TLT and the observed temperature trend of the surface (TS) is faithfully reproduced by … climate model simulations. These model simulations indicate that a clear fingerprint of greenhouse gas response in the climate system to date is that the trend of TLT should be [1.4 times] greater than [that of] TS. There have been essentially two groups of publications on this contentious issue, one reporting that trends of TLT in observations and models are statistically not inconsistent with each other and the other reporting that model representations are significantly different than observations, thus pointing to the potential for fundamental problems with models.”
Figure 2 shows the best estimate of the TLT by Christy et al. (2010). A linear fit to the data yields an overall trend of +0.09°C/decade over this 31-year period (red dashed line). On the other hand, as we have discussed previously, one could argue for a step function before and after the great El Niño of 1998, as shown by the blue dotted line. According to this latter interpretation, there has not been a statistically significant increase in TLT over the ten-year period from 2000 to 2010. One could also argue that there was no statistically significant increase over the 21-year period from 1979 to 2000.
Figure 2. Time series of average monthly anomalies of tropical TLT (20°N – 20°S) (Christy et al., 2010).
There are two aspects of this result that are particularly important. One is simply that two long periods without a statistically significant increase in TLT would seem to contradict the view that continuously rising CO2 is continuously driving up TLT. The second aspect deals with the scaling ratio of the trend of TLT to the trend of TS in the tropics. Climate models consistently predict this ratio to be ~1.4; the tropospheric temperature is expected to rise faster than the surface temperature. However, as Christy et al. (2010) pointed out, the observed linear trend for TLT (0.09°C/decade) is only about 80% of the observed linear trend for TS, so the observed scaling ratio is roughly 0.8, not the predicted value of 1.4. These results cast doubt on the validity of climate models, and also suggest that a linear rate of temperature rise does not necessarily result from a linear increase in CO2 concentration.
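The scaling-ratio arithmetic can be made explicit. In the sketch below, the TLT trend is the +0.09°C/decade value quoted above, while the surface (TS) trend is an assumed illustrative value inferred from the stated ~0.8 ratio, not a figure from the paper:

```python
# Sketch of the scaling-ratio arithmetic discussed by Christy et al. (2010).
# tlt_trend is the observed value quoted in the text; ts_trend is an
# illustrative assumption back-computed from the stated ~0.8 ratio.
tlt_trend = 0.09                  # C/decade, observed tropical TLT trend
ts_trend = 0.11                   # C/decade, assumed tropical surface trend
model_ratio = 1.4                 # trend(TLT)/trend(TS) predicted by models

observed_ratio = tlt_trend / ts_trend
print(f"observed scaling ratio: {observed_ratio:.2f}")   # roughly 0.8
print(f"model-predicted ratio:  {model_ratio:.1f}")
```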
Obviously, these results for tropospheric temperature measurements are not supportive of climate models. Ben Santer took on a rebuttal and the result was Santer et al. (2011). It is interesting, perhaps, that Santer included 16 co-authors in addition to himself. Pielke Sr. commented: “This is an unusual number of co-authors for a technical paper, but I assume Ben Santer wants to show a broad agreement with his findings”.
Santer et al. (2011) were concerned with a very basic problem in climatology: how to distinguish between long-term climate change and short-term variable weather in tropospheric temperature (TT) measurements. They treated the problem in terms of signal and noise: the climate trend is the signal, and the variable weather is the noise. However, the climate-weather problem is innately different from a classical signal/noise problem such as a radio signal affected by atmospheric activity. In that case, if the radio signal has a sufficiently narrow frequency band, and the noise has a wider frequency spectrum, the signal-to-noise ratio (S/N) can be improved with a narrow-band receiver tuned to the frequency of the radio signal. The radio signal and the noise are separate and distinct. By contrast, in the climate-weather problem, the instantaneous weather is the noise, and the signal is the long-term trend of the noise. The noise and the signal are coupled in a unique way. Furthermore, there is no evidence that it is even meaningful to talk about a “trend”, since there is no evidence that the variation of TT with time is linear. Remarkably, Santer et al. never referred to Christy et al. (2010) but based their analysis on older papers (e.g. Christy et al., 2007). It should be noted that whereas Christy et al. (2010) used tropical TLT data (20°N-20°S), Santer et al. used global TT data (82.5°N-70°S).
As Santer et al. (2011) showed, one can pick any starting date and any duration length and fit a straight line to that portion of the curve of TT vs. time. They did this for various 10-year and 20-year durations. In each case, depending on the start date, they derived a best straight-line fit to the TT data for that time period. They found (as is obvious from Figure 3) that the range of trends for 10-year periods was greater (-0.05 to +0.44°C/decade) than for 20-year periods (+0.15 to +0.25°C/decade). The trends for various start dates for ten-year trends are shown in Figure 4. Clearly, the trend line was steepest for a start date around 1988 (ending in the giant El Niño year of 1998). Prior to 1988 and after 1998, the trend was minimal.
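The procedure of fitting straight lines to every 10-year and 20-year window can be sketched as follows, using synthetic data (a small trend plus noise) rather than the actual TT record; the point it illustrates is that shorter windows yield a wider spread of trends:

```python
# Sketch: least-squares trends over sliding 10-year and 20-year windows,
# in the spirit of Santer et al. (2011).  Data are synthetic: a 0.2
# C/decade trend plus monthly noise, not the actual TT record.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1979, 2011, 1 / 12.0)           # monthly time axis
tt = 0.02 * (years - years[0]) + rng.normal(0.0, 0.15, years.size)

def window_trends(t, y, window_years):
    """Fitted trend (C/decade) for every possible start date."""
    n = window_years * 12                          # window length in months
    trends = []
    for i in range(len(t) - n + 1):
        slope = np.polyfit(t[i:i + n], y[i:i + n], 1)[0]  # C/year
        trends.append(slope * 10.0)                # convert to C/decade
    return trends

trends10 = window_trends(years, tt, 10)
trends20 = window_trends(years, tt, 20)
print(f"10-yr trend range: {min(trends10):+.2f} to {max(trends10):+.2f} C/decade")
print(f"20-yr trend range: {min(trends20):+.2f} to {max(trends20):+.2f} C/decade")
```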
Figure 3. Globally averaged satellite-based temperature of the lower atmosphere (http://www.drroyspencer.com).
Figure 4. Trend (°C/decade) of TT vs. start year for ten-year durations. (Santer et al., 2011).
Santer et al. described the use of longer durations as “noise reduction”, and I suppose it is, provided that one assumes the overall signal is linear in time. They stated:
“The relatively small values of overlapping 10-year TT trends during the period 1998 to 2010 are partly due to the fact that this period is bracketed (by chance) by a large El Niño (warm) event in 1997/98, and by several smaller La Niña (cool) events at the end of the … record”.
However, as Pielke pointed out, the period after 1998 was 13 years, not 10, and furthermore, as Figure 3 shows, the period after 1998 had roughly equal periods of El Niño and La Niña and was not dominated by La Niñas. What Santer et al. (2011) implied was that an unusual conflux of a large El Niño early on and multiple La Niñas later on caused the trend to be minimal for that particular period as a statistical quirk. However, that is like a baseball pitcher saying that if the opponents hadn’t hit that home run, he would have won the game.
In simplistic terms, the signal-to-noise ratio can be estimated as follows. For either 10-year or 20-year durations, the signal was the mean trend derived by a straight-line fit to the TT data over that duration. The noise was the range of trends for different starting dates. For ten-year durations, the trend was 0.19 ± 0.25°C/decade. For twenty-year durations, the trend was 0.20 ± 0.05°C/decade. The signal in each case is taken as the mean trend. The distribution of trends within these ranges was similar to a normal distribution. Thus we can roughly estimate the noise as ~0.7 times the full width of the range. Hence, the S/N ratio for ten-year durations is S/N ~ 0.19/(0.7 × 0.5) = 0.5, and for twenty-year durations it is S/N ~ 0.2/(0.7 × 0.1) = 2.9. Santer et al. obtained S/N = 1 for ten-year durations and S/N = 2.9 for twenty-year durations.
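This rough estimate can be reproduced in a few lines; the 0.7 factor and the range widths are simply the numbers given above:

```python
# Sketch of the rough S/N arithmetic in the text: signal = mean trend,
# noise ~ 0.7 x the full width of the range of trends across start dates.
def rough_snr(mean_trend, trend_range_width):
    return mean_trend / (0.7 * trend_range_width)

snr10 = rough_snr(0.19, 0.50)   # 10-yr durations: 0.19 +/- 0.25 C/decade
snr20 = rough_snr(0.20, 0.10)   # 20-yr durations: 0.20 +/- 0.05 C/decade
print(f"S/N (10-yr): {snr10:.1f}")   # ~0.5
print(f"S/N (20-yr): {snr20:.1f}")   # ~2.9
```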
If it can be assumed that the signal varies linearly with time, one can then estimate what level of precision for the estimated trend can be obtained for any chosen duration. Santer et al. obviously believe that the signal is linear with time for all time. In my discussion, I have relied entirely on the TT data and I have not included predictions of models. However, the paper by Santer et al. mixes up models with TT data and it is sometimes difficult to separate these. By some logic that escapes me, Santer et al. concluded that
“Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature”.
This conclusion seems to be grossly exaggerated. A better statement might be as follows:
Assuming that the variability of TT is characterized by a long-term upward linear trend caused by human impact on the climate, and that variability about this trend is due to yearly variability of weather, El Niños and La Niñas, and other climatological fluctuations, the recent data suggest that the trend can be estimated for any 17-year period with a S/N ratio of roughly 2.5.
Finally, we get to the nub of the paper by Santer et al. that asserted: “Claims that minimal warming over a single decade undermine findings of a slowly-evolving externally-forced warming signal are simply incorrect”. Here is where Santer et al. attempted to dispel the notion that minimal warming for a period contradicts the belief that underneath it all, the long-term signal continues to rise at a constant rate. Pielke Sr. argued that this was an overstatement and should be replaced by:
“If one accepts this statement by Santer et al. as correct, then what should have been written is that the observed lack of warming over a 10-year time period is still too short to definitively conclude that the models are failing to skillfully predict this aspect of the climate system”.
However, I would go further than Pielke Sr. First, the period of minimal temperature rise was longer than 10 years. Second, there is no cliff at 17 years whereby trends derived from shorter periods are statistically invalid and trends derived from longer periods are valid. According to Santer et al., a trend derived from a 13-year period is associated with S/N ~ 1.5, which, though not ideal, is good enough to cast some doubt on the validity of the models.
Tisdale (2011) presented a great diversity of data on ocean properties, including El Niño indices. His interpretation was that after 1976, sea surface temperatures rose in three steps: (a) 1976 to 1985, (b) 1989 to 1998 (culminating in an upward spike in 1998), and (c) 2002 to 2005 (dates are approximate). All three step increases were associated with El Niño events. The repeated El Niños from 2002 to 2007 sustained consistently high Pacific temperatures over this period, which relaxed downward after the La Niña of 2008.
Hansen et al. (2010) provided the data shown in Figure 5. These data show a strong correlation of all El Niño events in the 20th century with rising temperatures in the following few months, except for cases where a major volcanic eruption caused temperatures to plummet for a year or two. Hansen et al. (2010) evidently believe that the variations of the El Niño index produce small oscillations about a major secular upward trend in temperatures caused by rising CO2, whereas others (as we have pointed out) believe that the El Niños themselves were responsible for global warming.
Figure 5. Global temperature anomaly compared to the Nino 3.4 index showing strong correlation of upticks in temperature with El Niños. Major volcanic eruptions are denoted by arrows. Adapted from Hansen et al. (2010).
In summary, Santer et al. (2011) assumed that the variation of tropospheric temperatures (TT) with time over the past 32 years followed a long-term straight line, presumably due to forcing by greenhouse gases (the signal), with superimposed yearly variations due to El Niños, volcanoes, and chaotic weather changes acting as noise. Within this time period, one can fit a straight line to the TT data for any duration and start date. They showed that the signal-to-noise ratio for such a segment of the timeline increases as the duration increases, and that a segment of at least 17 years duration is needed to obtain a good estimate of the long-term trend. However, it is not clear from the data that a straight line plus noise best represents reality. Another interpretation is that TT was relatively flat prior to the giant El Niño of 1998, jumped up after that El Niño, and then remained relatively flat afterward but at a higher level. The fact that TT was relatively flat for 13 years after 1998 suggests (but does not yet prove) that the model of a long-term straight line plus annual noise may not be valid. A second issue is that the relative values of the trends of TT and surface temperatures appear to contradict the predictions of climate models. As more data are accumulated over the next couple of decades, these issues should become better resolved.
Biosketch. After receiving my B.S. and M.S. degrees in chemical engineering, I received a Ph.D. in chemical physics at Berkeley in January 1960. I worked as a researcher in chemical physics for a number of years, amassing a considerable number of publications. I was a professor of physics at the University of Texas at Dallas from 1969 to 1979. I came to JPL in 1979 to take a position as the Division Chief Technologist (senior technical person) in the Mechanical and Chemical Systems Division (staff of 700, including 100 Ph.D.s). Amongst other things, I was Proposal Manager on the Genesis Discovery Project to collect solar wind and return it to Earth for analysis, which won in a field of about 25 competitors in Discovery 5, being funded at ~$220M. Genesis carried out its mission in space from 2001 to 2004. After that, I acted as Proposal Manager for the Deep Impact Discovery mission proposal to open a hole in a comet and investigate its interior, which won, being funded at about $320M. Deep Impact was a spectacular success in 2005. I was manager of the Mars Exploration Technology Program for a period. In the period 2004–2006, I concentrated on mission design for human missions to Mars and the Moon, leading to a book I published: “Human Missions to Mars”. Starting around 2007, I devoted most of my time to the study of climate change and ice ages, culminating in two books published by Springer/Praxis Publishing.
Moderation note: This is a technical thread and comments will be moderated for relevance.