Busting (or not) the mid-20th century global-warming hiatus

by Bob Tisdale

A closer look at the uncertainties in mid-20th-century ocean surface temperatures.

In previous posts at WattsUpWithThat and at my blog ClimateObservations, I’ve discussed the new NOAA “pause-buster” sea surface temperature dataset (ERSST.v4) a number of times since the publication of Karl et al. (2015)—latest to earliest:

But those posts related primarily to the last few decades.

In this post, we’re going to briefly examine the long-term data and then focus on the post-World War II period—a period when there are major differences between the two sea surface temperature datasets used in the combined land+ocean surface temperature products from GISS and NOAA (both of which use NOAA’s ERSST.v4) and the UKMO (which uses HADSST3).

The differences in the two datasets are related to the Thompson et al. (2008) letter A large discontinuity in the mid-twentieth century in observed global-mean surface temperature. (Paywalled.) The UKMO adjusted their sea surface temperature data to correct the discontinuity, whereas NOAA didn’t. It appears that NOAA is attempting to bust the mid-20th Century pause as well as increase the warming rate from the mid-20th Century to present.

But first, some background information about long-term ocean surface temperature reconstructions.

LONG-TERM OCEAN SURFACE TEMPERATURE DATA

Blog posts and papers about adjustments to long-term ocean surface temperature data typically begin with a comparison of source and end products. I won’t disappoint you. Figure 1 includes the annual global (60S-60N) temperature anomalies for the two source datasets and two end products, all referenced to the base years of 1971-2000. The source datasets are the Marine Air Temperature (day and night) and Sea Surface Temperature from the International Comprehensive Ocean-Atmosphere Data Set (ICOADS). The ICOADS v2.5 data are supported by the Woodruff et al. (2011) paper ICOADS Release 2.5: extensions and enhancements to the surface marine meteorological archive. The end products include the UKMO Night Marine Air Temperature data (HadNMAT2) supported by the Kent et al. (2013) paper Global analysis of night marine air temperature and its uncertainty since 1880: The HadNMAT2 data set, and the NOAA/NCEI Extended Reconstructed Sea Surface Temperature data version 4 (ERSST.v4), also known as NOAA’s “pause-buster” data, which are supported by three papers:

Note: There is a new paper in press about the NOAA ERSST.v4 dataset and I’ve linked it in the closing.

The top graph in Figure 1 compares the four datasets globally (60S-60N), excluding the polar oceans. The middle and bottom graphs include the comparisons for the Northern (0-60N) and Southern (60S-0) Hemispheres, again excluding the polar oceans. The polar oceans are normally excluded in comparisons such as these because the data suppliers account for sea ice differently.

Figure 1

The source of the data in this post is the KNMI Climate Explorer.
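For readers who want to reproduce a comparison like Figure 1 from a Climate Explorer download, the recipe is: subset the gridded field to 60S-60N, subtract the 1971-2000 monthly climatology, average with cos(latitude) area weights, and form annual means. Here is a minimal Python sketch of that recipe; the file name, the variable name, and the ascending-latitude ordering are assumptions, not specifics of any particular download:

```python
# Minimal sketch: annual, area-weighted 60S-60N mean anomalies vs. 1971-2000.
# Assumes a monthly NetCDF field with (time, lat, lon) dimensions and an
# ascending latitude axis; "sst.nc" and "sst" are hypothetical names.
import numpy as np
import xarray as xr

ds = xr.open_dataset("sst.nc")
sst = ds["sst"].sel(lat=slice(-60, 60))          # exclude the polar oceans

base = sst.sel(time=slice("1971-01-01", "2000-12-31"))
clim = base.groupby("time.month").mean("time")   # 1971-2000 monthly climatology
anom = sst.groupby("time.month") - clim          # monthly anomalies

weights = np.cos(np.deg2rad(sst.lat))            # cell area scales with cos(lat)
monthly = anom.weighted(weights).mean(("lat", "lon"))
annual = monthly.groupby("time.year").mean("time")
print(annual.values)
```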

The marine air temperature data come in two flavors. The source data include day and night measurements, while the end product includes mostly nighttime observations. The UKMO deletes most of the daytime marine air temperature observations because of the heating of ship surfaces by sunlight. Let me explain the “mostly” and “most of”.

The spike in the marine air temperature (day and night) data from the early- to mid-1940s stands out in Figure 1. But the spike is more prevalent in the nighttime samples than in the daytime samples. See Figure 2, which is Figure 9 from Kent et al. (2013). So to minimize the spike, the HadNMAT2 dataset uses daytime marine air temperature data from 1942 to February 1946.

Figure 2 (Figure 9 from Kent et al. 2013)

Kent et al. write:

Data from ICOADS Release 2.5 still show additional warmth in their nighttime air temperature anomalies during WW2, from 1942, and the latter part of the adjustment applied by Rayner et al. [2005] is therefore still required and appropriate. Here, we amend it slightly and replace NMAT anomalies between 1942 and February 1946 with DMAT anomalies, adjusted according to the difference between DMAT and NMAT anomalies over the period 1947–1956. Additionally, daytime air temperature anomalies for Deck 195 (U.S. Navy Ships Logs) were anomalously warm compared with data from other Decks and are excluded. An adjustment prior to 1942 appears not to be required due to the addition of many recently digitized measurements for this period. Figure 9 shows time series of monthly unadjusted and adjusted NMAT anomalies for the period 1929–1955, along with the daytime air temperature data used in the adjustment process.

Note: I believe the reference should be Rayner et al. (2003), not Rayner et al. (2005). Rayner et al. (2003) included discussions of bias adjustments for their HadMAT1 night marine air temperature dataset. Rayner et al. (2005) was about the HADSST2 dataset and only mentioned marine air temperature data as they related to the bias adjustments for that sea surface temperature dataset. [End note.]
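As I read the Kent et al. (2013) description, the wartime fix amounts to substituting daytime anomalies, shifted by the mean day-night difference over 1947-1956, for the nighttime anomalies from 1942 through February 1946. A minimal sketch of that logic in Python (my reading of the quoted text, not the UKMO's code; in reality the processing is done per grid cell, and the series names here are hypothetical):

```python
# Sketch of the Kent et al. (2013) WW2 substitution as I read it: replace
# NMAT anomalies (1942 - Feb 1946) with DMAT anomalies shifted by the mean
# DMAT-NMAT difference over 1947-1956. Series names are hypothetical.
import pandas as pd

def adjust_ww2_nmat(nmat: pd.Series, dmat: pd.Series) -> pd.Series:
    """nmat, dmat: monthly anomaly series with a DatetimeIndex."""
    ref = slice("1947-01-01", "1956-12-31")
    offset = (dmat.loc[ref] - nmat.loc[ref]).mean()    # post-war day-night offset
    adjusted = nmat.copy()
    war = slice("1942-01-01", "1946-02-28")
    adjusted.loc[war] = dmat.loc[war] - offset         # substitute shifted DMAT
    return adjusted
```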

Over the term of the dataset, the HadNMAT2 data are also adjusted for other factors, such as observation (ship deck) height and wind speed.

The other major difference that stands out in Figure 1 relates to the source ICOADS sea surface temperature data before the 1940s. The source sea surface temperature data run “cooler” than the others. That divergence is attributed to the transitions from different sampling methods: buckets of different types to ship inlets.

For their ERSST.v4 data, NOAA assumed the HadNMAT2 data are correct and used the HadNMAT2 night marine air temperature data for bias corrections (over the full term of the dataset) of ship-based observations. NOAA’s reference for using night marine air temperature to adjust sea surface temperature data was Smith and Reynolds (2002) Bias Corrections for Historical Sea Surface Temperatures Based on Marine Air Temperatures. Thus the ERSST.v4 data mimic the HadNMAT2 data. See Figure 3.

Figure 3

Note: In more recent decades, NOAA also adjusts for ship-buoy biases, but those adjustments do not take place during the post-World War 2 period, which is the subject of this post, so they will not be discussed here. See the other posts linked in the introduction. [End note.]
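Stripped of all spatial detail, the idea behind the Smith and Reynolds (2002) approach is to treat a heavily smoothed (ship SST minus NMAT) difference as the ship bias and remove it, which is why the corrected SSTs end up tracking the NMAT reference. A deliberately simplified sketch (my illustration only; ERSST.v4 applies this to gridded, low-pass-filtered fields, not to a single pair of series, and the smoothing window here is an arbitrary assumption):

```python
# Simplified sketch of NMAT-referenced ship bias correction: estimate the
# bias as a smoothed (ship SST - NMAT) difference and subtract it. The
# 15-year window is an arbitrary assumption for illustration.
import pandas as pd

def bias_correct_ship_sst(ship_sst: pd.Series, nmat: pd.Series,
                          smooth_years: int = 15) -> pd.Series:
    """Monthly anomaly series with matching DatetimeIndexes."""
    window = smooth_years * 12                    # window length in months
    bias = (ship_sst - nmat).rolling(window, center=True, min_periods=1).mean()
    return ship_sst - bias                        # corrected SST tracks NMAT
```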

THE 1945 DISCONTINUITY

In 2008, Thompson et al. A large discontinuity in the mid-twentieth century in observed global-mean surface temperature (paywalled) brought attention to the sharp drop in sea surface temperatures in 1945, the last year of World War 2. They used a number of metrics, including an ENSO index and stratospheric aerosols, to show that the sharp drop-off in sea surface temperatures in 1945 was not caused by volcanoes or by the transition from an El Niño to a La Niña. See Figure 4, which uses the same source data and end products as Figure 1, but runs from 1940 to 1975.

Figure 4

Thompson et al. write about sea surface temperature (SST) data:

The most notable change in the SST archive following December 1941 occurred in August 1945. Between January 1942 and August 1945, ~80% of the observations are from ships of US origin and ~5% are from ships of UK origin; between late 1945 and 1949 only ~30% of the observations are of US origin and about 50% are of UK origin. The change in country of origin in August 1945 is important for two reasons: first, in August 1945 US ships relied mainly on engine room intake measurements whereas UK ships used primarily uninsulated bucket measurements (ref. 12), and second, engine room intake measurements are generally biased warm relative to uninsulated bucket measurements.

Hence, the sudden drop in SSTs in late 1945 is consistent with the rapid but uncorrected change from engine room intake measurements (US ships) to uninsulated bucket measurements (UK ships) at the end of the Second World War. As the drop derives from the composition of the ICOADS data set, it is present in all records of twentieth-century climate variability that include SST data.

Thompson et al. also note that the 1945 discontinuity does not exist in the land surface air temperature data (I’ve included links to their Reference 5 and their Figures 2 and 3 in the text):

The step in late 1945 does not appear to be related to any known physical phenomenon. No substantial volcanic eruptions were reported at the time, and the nuclear explosions over Hiroshima and Nagasaki are estimated to have had little effect on global-mean temperatures: ~100 Hiroshima-sized explosions are predicted to lead to a global-mean cooling of ~1.25 deg C (ref. 5), thus two such explosions might be expected to lead to a cooling of less than 0.03 deg C. Furthermore, ocean and land areas should both respond to an external forcing, but the step is only apparent in SSTs (Fig. 3). The global-mean land time series does not exhibit warming from the middle of the century until about 1980, but there is no large discrete drop in late 1945 in the unfiltered land series and only an indistinct drop in the residual land series (Fig. 3b). As is the case for the global mean time series in Fig. 2, the drop is apparent in the unfiltered global-mean SST time series but is highlighted after filtering out the effects of internal climate variability.

You may notice in Figure 4 that the discontinuity also appears in the HadNMAT2 night marine air temperature data. The support paper for HadNMAT2, Kent et al. (2013), discussed the World War 2 and post-war periods in their Key Results and Remaining Issues:

It is possible though that the adjustments applied to the data after WW2 are not applicable to the data in the region 15 to 55S, since there is a relative cool bias of about 0.4 C here during the mid-1940s to mid-1950s, as compared to the HadSST3 ensemble median.

The HadNMAT2, unlike MOHMAT4 and HadMAT1 is not dependent on time-varying SST for any adjustment, although at the cost of a shorter data set. The requirement for the Suez adjustment was removed by the exclusion of observations rather than using SST anomalies. WW2 biases in NMAT are adjusted using daytime marine air temperature anomalies, as in previous data sets. The adjustment appears to have slightly better results than that used in MOHMAT4 and is applied over a shorter period. However, comparisons with collocated land anomalies suggest that HadNMAT2 remains too warm during WW2. Further investigation of the daytime marine air temperatures is therefore required. Additionally, our analysis suggests that the data prior to 1886 are also erroneously warm and should not be relied upon.

In other words, the discontinuity in the night marine air temperature data may also result from observation biases, which then get passed on to the NOAA ERSST.v4 data. Regardless, Thompson et al. (2008) found there was no physical basis for the 1945 discontinuity, yet NOAA did not correct for it in their ERSST.v4 “pause(s)-buster” data.

FURTHER INFORMATION ABOUT THE WWII AND POST-WAR PERIODS

In addition to changes in temperature-sampling practices, there are a couple of other things to consider when looking at sea surface and marine air temperature data during the 1940s and 50s.

First, the number of source (ICOADS) sea surface temperature observations drops off drastically in some ocean basins during the early 1940s, while in others the sampling was so poor to begin with that the wartime decreases in ship traffic appear to have had little impact on the number of observations there. See Figure 5, which includes the number of observations per 2-deg latitude by 2-deg longitude grid for the North Atlantic (0-60N, 80W-0), South Atlantic (60S-0, 70W-20E), Indian (60S-30N, 20E-120E), North Pacific (0-60N, 100E-90W) and South Pacific (60S-0, 120E-70W) basins. (The numbers of observations for the ICOADS v2.5 sea surface and marine air temperature data are available from the KNMI Climate Explorer.) The top graph runs from January 1880 to November 2015; in the bottom graph, I’ve shortened the time span to January 1900 through January 1960 so that the world war periods are easier to see.

Figure 5

The same holds true for the ICOADS source marine air temperature (day and night) data. The number of source (ICOADS) marine air temperature observations drops off drastically in some ocean basins during the early 1940s, while in others the sampling was very poor before, during and after World War 2. See Figure 6. Now consider that, for the HadNMAT2 data, the number of observations is roughly half of what’s shown, because they use only nighttime observations (except during World War 2, as discussed above, when they use only daytime observations).

Figure 6
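Counts like those in Figures 5 and 6 are simple to reproduce from individual ICOADS reports: assign each report to its 2-deg latitude by 2-deg longitude cell and count per month. A minimal sketch (the DataFrame and its column names are hypothetical):

```python
# Sketch: observations per 2x2-deg cell per month from individual reports.
# Assumes a DataFrame with hypothetical columns "time", "lat", "lon".
import numpy as np
import pandas as pd

def obs_per_cell(df: pd.DataFrame) -> pd.Series:
    df = df.copy()
    df["lat_bin"] = (np.floor(df["lat"] / 2) * 2).astype(int)   # 2-deg bins
    df["lon_bin"] = (np.floor(df["lon"] / 2) * 2).astype(int)
    df["month"] = df["time"].dt.to_period("M")
    return df.groupby(["month", "lat_bin", "lon_bin"]).size()
```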

Figure 7 includes the number of marine air temperature (MAT) and sea surface temperature (SST) observations for the Northern (0-60N) and Southern (60S-0) Hemispheres per 2×2 deg grid. I’ve provided it to better show the disparity between the hemispheres in the number of observations. It indicates that the numbers of sea surface and marine air temperature observations are similar until the 1990s, when, presumably, the deployment of moored and drifting buoys provided a marked increase in the number of sea surface temperature observations…which then skyrocketed in 2005.

Figure 7

But the number of observations per grid doesn’t tell the entire story. The spatial coverage of the global oceans also declined drastically during World War II. My Figure 8 is Figure 2 from Woodruff et al. (2011), the support paper for the current version of the ICOADS source data. We’re interested in the red curve (Release 2.5), which relates to the right-hand scale. It shows that during the Second World War the 2×2-deg latitude-longitude grids with sea surface temperature observations covered only roughly 20% of the surface of the global oceans.

Figure 8 (Figure 2 from Woodruff et al. 2011)
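That roughly-20% figure is the area-weighted fraction of ocean cells containing at least one observation in a given month. A minimal sketch of the calculation (the count grid and the land/sea mask are hypothetical inputs, e.g. built from the binning sketch above):

```python
# Sketch: area-weighted fraction of ocean cells with at least one report.
# "counts" is an (nlat, nlon) grid of observation counts and "is_ocean" a
# matching boolean land/sea mask; both are hypothetical inputs.
import numpy as np

def coverage_fraction(counts, is_ocean, lat_centers):
    w = np.cos(np.deg2rad(lat_centers))[:, None] * np.ones(counts.shape)
    sampled = (counts > 0) & is_ocean
    return w[sampled].sum() / w[is_ocean].sum()
```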

Something else to consider: the grids with data do not remain constant. They obviously depend on where ships traveled, which changed from month to month. To help you visualize this, the maps in Animation 1 show the grids with data, running from January 1935 to December 1950. I’ve also included notes on some months (and paused the animation) to indicate when World War 2 started and ended and to highlight the month with what appears to be the poorest spatial coverage (December 1941, associated with Pearl Harbor).

Animation 1

TO INFILL OR NOT TO INFILL GRIDS WITH MISSING DATA

As discussed and illustrated above, many of the ocean grids do not contain data. For datasets that did not incorporate satellite-based data starting in the early 1980s, this problem of poor spatial coverage remained until recent decades, when drifting buoys (not ARGO floats) were deployed. That is, coverage improved after the 1950s, but there were still many portions of the Southern Hemisphere (south of 30S) with little source data until the drifters arrived.

There are a number of different statistical methods that suppliers use to fill in the blank grids…if they elect to infill them.

Figure 9

Figure 9 includes maps of ocean surface temperature-related data for the month of December 1941…a worst-case example during World War II, though it could be considered a best-case example before 1900. (See my Figure 8 again.) Now recall that ICOADS showed that grids with source data covered only roughly 20% of the surface area of the global oceans. That means, in these examples, roughly 80% of the sea surface temperature data in the infilled datasets (Cowtan and Way, ERSST.v4 and HadISST) are make-believe…created using statistical methods that provide strikingly different results in the patterns of sea surface temperature anomalies. See Animation 2.

Animation 2 (HadISST vs. ERSST.v4 vs. Cowtan and Way SST)

I’ve also included the Cowtan and Way surface temperature anomalies as a reference for another method of infilling. (Cowtan and Way infill the UKMO HadCRUT4 data, which are made up of HADSST3 and CRUTEM4 data.) Because the KNMI Climate Explorer does not include a mask for land surfaces for the Cowtan and Way surface temperature product, that map originally included land surface temperature anomalies, so I “whited out” the data over land surfaces for Figure 9 and Animation 2. As a result, any carry-over from ocean to land that appears in the HADSST3 map (the basis for the Cowtan & Way data) will be lost. I also did not bother to “white out” the polar oceans in the Cowtan & Way data, where land surface air temperature data are extended out over the oceans. The other maps are for sea surface and marine air temperature products only. (For those interested, an animation that compares maps of the Cowtan and Way data for December 1941 to the source HadCRUT4 data is here.)

The top two maps show the grids with source data for the ICOADS sea surface temperature (Cell a) and marine air temperature (Cell b) datasets. Directly below them are maps of the sea surface temperature anomalies for the not-infilled UKMO HADSST3 product (Cell c) and the infilled NOAA ERSST.v4 product (Cell d). On the left of the third tier are the December 1941 temperature anomalies of the Cowtan and Way product (Cell e), which infills the UKMO HADSST3 data, and to the right (Cell f) are the HadISST data (not the same as HADSST3), which are infilled using yet another method.

The UKMO elects not to fill in grids without source data for its HADSST3 data (used in its HadCRUT4 global surface temperature product) or its HadNMAT2 data (used by NOAA for ship-bias corrections). But referring to the maps in Figure 9, we can see that more of the oceans are covered by the HADSST3 product (Cell c) than by its sea surface temperature source data (Cell a). (The same would hold true for the HadNMAT2 data, but, unfortunately, the map-plotting feature for the HadNMAT2 data was not available at the KNMI Climate Explorer when I prepared this post.) The UKMO accomplishes this limited infilling in a very simple way. The source data are furnished in 2-deg latitude by 2-deg longitude grids, whereas the two not-infilled UKMO products are presented in 5-deg latitude by 5-deg longitude grids; a 2×2-deg grid with data is thus expanded to a 5×5-deg grid. (See the animation here for the comparisons of the coverage of the ICOADS source and UKMO end products for sea surface temperatures.) Also, the UKMO indirectly fills in its products (by hemisphere) in its monthly and annual time series values: it determines the average value of the grids with data in each hemisphere, and that average gets assigned by default to the grids without data.
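That last point is worth making concrete: averaging only the cells that contain data is mathematically equivalent to assigning the mean of the sampled cells to every empty cell. A minimal sketch (my illustration of the arithmetic, not UKMO code):

```python
# Sketch: a weighted mean over only the sampled cells implicitly assigns
# that same mean to every empty (NaN) cell. Illustration only.
import numpy as np

def hemispheric_mean(anom, lat_centers):
    """anom: (nlat, nlon) anomaly grid with np.nan in empty cells."""
    w = np.cos(np.deg2rad(lat_centers))[:, None] * np.ones(anom.shape)
    valid = ~np.isnan(anom)
    return np.nansum(anom * w) / w[valid].sum()
```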

The map of the December 1941 ERSST.v4 sea surface temperature anomalies (Cell d) is to the right on the second tier. NOAA uses a statistical tool called Empirical Orthogonal Teleconnections (EOT) to fill in grids with missing data. (See van den Dool et al. (2002) Empirical Orthogonal Teleconnections.) Basically, NOAA uses the spatial patterns found in satellite-based data (due to their better coverage of the global oceans) to infill missing data. (But NOAA does not include satellite-based data in their ERSST.v4 product.)

Directly below the HADSST3 data (Cell c) is the infilled product from Cowtan and Way (Cell e). See Cowtan and Way (2014) Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends. Cowtan and Way use a statistical method called kriging to fill in the blanks. (See the animation here for a comparison of the HADSST3 and Cowtan and Way maps. Again, there are differences along the coasts due to my whiting-out of land grids on the Cowtan and Way map.)
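For readers unfamiliar with kriging: it predicts each unsampled point as a covariance-weighted combination of the observations, so predictions relax toward the assumed mean far from data (the "conservative" property quoted below). A bare-bones simple-kriging sketch with an assumed exponential covariance (my illustration only; Cowtan and Way's implementation and covariance model differ):

```python
# Bare-bones simple kriging with an assumed exponential covariance.
# Illustration of the mechanics only; not Cowtan and Way's implementation.
import numpy as np

def simple_krige(xy_obs, z_obs, xy_new, length_scale=1500.0, sill=1.0):
    """xy_obs: (n, 2) observation coords (km); z_obs: (n,) anomalies
    (assumed zero-mean); xy_new: (m, 2) prediction coords."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-d / length_scale)
    K = cov(xy_obs, xy_obs) + 1e-6 * np.eye(len(z_obs))  # jitter for stability
    weights = np.linalg.solve(K, cov(xy_obs, xy_new))    # (n, m) kriging weights
    return weights.T @ z_obs          # relaxes toward the mean far from data
```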

The bottom right-hand map includes the sea surface temperature anomalies based on the UKMO’s infilled sea surface temperature dataset called HadISST (Hadley Centre Sea Ice and Sea Surface Temperature data set). HadISST is NOT the same as HadSST3. HadISST is supported by the 2003 Rayner et al. paper Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century. HadISST uses satellite-enhanced sea surface temperature data starting in the early 1980s, and it also uses a statistical tool called Empirical Orthogonal Function (EOF) analysis to infill missing data. Like NOAA and their EOT analysis, the UKMO uses the EOF-found spatial patterns during the satellite era to infill missing data before the satellite era.
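The EOT and EOF reconstructions share the same basic idea: derive spatial patterns from the well-sampled satellite-era fields, then fit the sparse historical observations with a handful of those patterns and read the infilled field off the fit. A bare-bones sketch of that idea (my illustration; neither NOAA's EOT algorithm nor the UKMO's EOF algorithm works exactly this way):

```python
# Bare-bones pattern-based infilling: learn leading spatial patterns from a
# well-sampled period, then least-squares fit them to a sparse field.
# Illustration of the idea only, not NOAA's EOT or the UKMO's EOF method.
import numpy as np

def fit_patterns(full_fields, n_modes=10):
    """full_fields: (n_months, n_cells) well-sampled training fields."""
    _, _, vt = np.linalg.svd(full_fields - full_fields.mean(0),
                             full_matrices=False)
    return vt[:n_modes]                    # (n_modes, n_cells) patterns

def infill(sparse_field, patterns):
    """sparse_field: (n_cells,) with np.nan in unsampled cells."""
    ok = ~np.isnan(sparse_field)
    coeffs, *_ = np.linalg.lstsq(patterns[:, ok].T, sparse_field[ok],
                                 rcond=None)
    return patterns.T @ coeffs             # reconstructed full field
```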

I believe the UKMO includes another step for the HadISST data that is not employed by NOAA, whereby the UKMO reinsert in-situ data when there is insufficient source data for the EOF analysis.

Bottom line: There are few similarities between the sea surface temperature anomaly patterns in the maps of the three infilled sea surface temperature datasets (HadISST, ERSST.v4, and Cowtan and Way) in this worst-case month. In more recent times, before the drifters were deployed heavily in the 2000s, you’d still find major differences in those spatial patterns, primarily where in situ sampling was poor, like south of 30S.

NOTE: Notice the odd-looking “El Niño” in the Cowtan and Way data, Figure 9, Cell e.

Cowtan and Way (2014) discuss the benefits of the Kriging method of infilling:

Kriging offers several benefits. The reconstructed values vary smoothly and match the observed values at the coordinates of the observations. The reconstructed values approach the global mean as the distance from the nearest observation increases, i.e. the method is conservative with respect to poor coverage. Clustered observations are downweighted in accordance with the amount of independent information they contribute to the reconstructed value; thus area weighting is an emergent property of the method, with observations being weighted by density in densely sampled regions and by the region over which the observation is informative in sparse regions.

Cowtan and Way (2014) also discuss the disadvantages of Kriging:

Kriging the gridded data also has some significant disadvantages: information about station position within a cell is lost, cells with a single station receive the same weight as cells with many and (equivalently) no account is taken of the uncertainty in a cell value. The acceptability of these compromises will become apparent in the validation step.

But Cowtan and Way (2014) forgot to discuss a blatantly obvious disadvantage of Kriging: As shown in Figure 9, Cell e, Kriging can also create a spatial pattern that bears no resemblance to known phenomena, like their El Niño that runs diagonally, from the northwest to the southeast across the eastern tropical Pacific.

[End note.]

Kennedy (2014) A review of uncertainty in in situ measurements and data sets of sea-surface temperature (paywalled, but a submitted copy is available here) provides a reasonably easy-to-understand overview of the uncertainties associated with infilled data. See the discussion that begins:

Although some gridded SST data sets contain many grid boxes which are not assigned an SST value because they contain no measurements, other SST data sets – oftentimes referred to as SST analyses – use a variety of techniques to fill the gaps. They use information gleaned from data-rich periods to estimate the parameters of statistical models that are then used to estimate SSTs in the data voids, often by interpolation or pattern fitting. There are many ways to tackle this problem and all are necessarily approximations to the truth. The correctness of the analysis uncertainty estimates derived from these statistical methods are conditional upon the correctness of the methods, inputs and assumptions used to derive them. No method is correct therefore analytic uncertainties based on a particular method will not give a definitive estimate of the true uncertainty.

Please read the remainder of that section of Kennedy (2014). In fact, the entire paper provides an excellent detailed discussion of the uncertainties associated with sea surface temperature data.

That’s enough backstory.

HADSST3 HAVE BEEN ADJUSTED TO ACCOUNT FOR THE 1945 DISCONTINUITY AND SUBSEQUENT BIASES, WHILE NOAA’S ERSST.v4 DATA HAVE NOT

HADSST3 data are used for the ocean portion of the UKMO combined land+ocean surface temperature product called HadCRUT4. HADSST3 data are supported by the papers:

The global and hemispheric time series graphs in Figure 10 compare the HADSST3 and NOAA ERSST.v4 sea surface temperature products from 1880 to 2014. Notice the differences between the NOAA ERSST.v4 and UKMO anomalies from the mid-1940s to the mid-1970s. The ERSST.v4 data run noticeably “cooler” than the HADSST3 data.

Figure 10

I’ve highlighted the post-World War 2 (post-discontinuity) differences in Figure 11, which includes the UKMO HADSST3 and NOAA ERSST.v4 data for the period of 1940 to 1975.

Figure 11

The UKMO have corrected the HADSST3 data to account for the 1945 discontinuity and trailing irregularities. NOAA has not made those corrections to their ERSST.v4 data. NOAA elected to use the HadNMAT2 data for bias corrections of ship-based data during this time…even though, as quoted earlier, Kent et al. (2013) expressed concerns about the HadNMAT2 data during the period, and even though Thompson et al. (2008) concluded there was no physical reason for the discontinuity.

WHY DO YOU SUPPOSE NOAA FAILED TO CORRECT FOR THE 1945 DISCONTINUITY AND SUBSEQUENT BIASES?

Consider the graph of global surface temperature anomalies furnished as part of the press release for Karl et al. (2015). See my Figure 12. (Notice that the anomalies are presented in deg F for U.S. audiences, not deg C as we’ve been using in this post.) The caption reads:

Contrary to much recent discussion, the latest corrected analysis shows that the rate of global warming has continued, and there has been no slowdown.

Figure 12 (from the NOAA press release for Karl et al. 2015)

Note that the trend line starts in 1951. That suggests the recent warming period started in 1951, not the mid-1970s (1975) as has typically been shown. In other words, not only has NOAA endeavored to eliminate the slowdown in global warming in recent years, NOAA also appears to be attempting to eliminate the slowdown (or cooling) of global surface temperatures from the mid-20th Century to the mid-1970s.

Let’s return to the global sea surface temperature anomaly data. The top graph in Figure 13 compares the linear trends of the global (60S-60N) NOAA ERSST.v4 and UKMO HADSST3 data for the period of 1951 (NOAA’s start year for their trend line in Figure 12) to 1975 (1975 is often used as the breakpoint between the mid-20th Century pause in global warming and the latest warming period). Globally, the NOAA ERSST.v4 data show warming from 1951 to 1975, while the UKMO HADSST3 data show cooling.

Figure 13

Because the NOAA ERSST.v4 data run “cooler” than the UKMO HADSST3 data during the 1950s and 60s, the warming rate of the NOAA ERSST.v4 data from 1951 to 2014 is noticeably higher than the UKMO HADSST3, globally and in both hemispheres. See the bottom graph in Figure 13.

Since we’ve been discussing the discontinuity starting in 1945, let’s start the comparisons in that year, again ending in 1975, for the top graph in Figure 14. That also provides us with 31 years for the mid-20th Century slowdown (cooling) period. Once again, globally, the UKMO HADSST3 data show cooling from 1945 to 1975, while the NOAA ERSST.v4 data show warming.

Figure 14

And it should go without saying that the global sea surface temperature warming rate of the NOAA ERSST.v4 data from 1945 to 2014 is noticeably higher than the UKMO HADSST3, because the NOAA ERSST.v4 data run “cooler” than the UKMO HADSST3 data during the late-1940s, 50s and 60s. Refer to the bottom graph in Figure 14.
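For completeness, the trends compared above are ordinary least-squares fits to the annual anomalies, converted to deg C per decade. A minimal sketch (the series names are hypothetical):

```python
# Sketch: least-squares trend of annual anomalies over a chosen period.
# "annual" is a hypothetical pandas Series indexed by integer year.
import numpy as np
import pandas as pd

def trend_per_decade(annual: pd.Series, start: int, end: int) -> float:
    seg = annual.loc[start:end]                             # inclusive slice
    slope = np.polyfit(seg.index.values, seg.values, 1)[0]  # deg C per year
    return slope * 10.0

# e.g. trend_per_decade(ersst_v4_global, 1951, 1975) vs.
#      trend_per_decade(hadsst3_global, 1951, 1975)
```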

CLOSING

It appears that NOAA is attempting to eliminate another pause in the warming of global surfaces, this time the mid-20th century slowdown/cooling period.

It definitely cannot be argued that NOAA improved the way they process sea surface temperature data to get scientifically better results, because:

  1. NOAA’s results oppose the scientific findings of Thompson et al. (2008), and
  2. NOAA overlooked the concerns expressed by the supplier of their reference night marine air temperature data during the post-World War 2 period.

I included the following in the closing to the post Pause Buster SST Data: Has NOAA Adjusted Away a Relationship between NMAT and SST that the Consensus of CMIP5 Climate Models Indicate Should Exist?:

I suspect, when Congressman Lamar Smith’s whistleblowers are concerned about rushing the Karl et al. (2015) study “before appropriate reviews of the underlying science and new methodologies used”, they’re discussing:

  • the uncertainties of the bias adjustments,
  • the uncertainties of the data,
  • the basic methodologies, including how NOAA distributed those adjustments around the oceans, and
  • most importantly, for the “underlying science”, how NOAA appears to have adjusted out a difference between ship temperature measurements and night marine air temperature that should exist according to the consensus of the newer climate models—once again assuming that NOAA’s other grand assumption…“The model SAT is used since the model bias is assumed to be the same during daytime and nighttime”…is correct.

We can now add another bullet point:

  • NOAA failed to make the necessary mid-20th Century adjustments to their ERSST.v4 sea surface temperature datasets—corrections that would have been supported by Thompson et al. (2008). Likely reasons: (1) NOAA did not want to decrease the warming rate starting in 1950 that would have resulted if they had made those corrections and (2) NOAA wanted to show a more continuous warming since 1950, which would not have existed if they had made those corrections.

Once again, maybe, in time, Dr. Sullivan of NOAA will produce the emails requested by Representative Smith so that we can confirm my suspicions and the suspicions of many others.

Last, I suspect some people will argue that there is too much uncertainty during this period for the results shown in this post to have any merit. I will counter with the argument that, as a whole, climate science as practiced by NOAA is not focused on uncertainties in its presentations to the public. Do we see any uncertainties expressed in any way in NOAA’s press release for Karl et al. (2015), Science publishes new NOAA analysis: Data show no recent slowdown in global warming? A new paper by Huang et al. (2015) Further Exploring and Quantifying Uncertainties for Extended Reconstructed Sea Surface Temperature (ERSST) Version 4 (v4) is in press (the preliminary accepted version is paywalled). Will NOAA demonstrate a commitment to honestly communicating with the public by producing an easy-to-understand press release for it…showing the wide ranges of uncertainties in the warming rates?

JC note:  This is an invited guest post.  Please keep your comments relevant and civil.

293 responses to “Busting (or not) the mid-20th century global-warming hiatus”


  2. Before 2 decades are up, Trenberth et al. may wish there really had been some extra heat hiding in dem der oceans…

    • So far Trenberth looks to be predominately correct.

      • Without even imaginary facts supporting the conjecture we’re forced to use our imaginations.

      • Supposing he was correct, then what about the hockey stick? How much heat could have been hiding all those other times when they claim it was not so hot as now?

      • http://www.nature.com/nclimate/journal/v4/n11/full/nclimate2387.html

        This study underestimated deep ocean warming due to too much water from melting ice – 0.5 mm too many in Antarctica. But, it also failed to account for groundwater, which is probably about as big as Antarctica or perhaps bigger.

        Whatever the case, with the current rates of melting and sea level rise there just isn’t any room for deep-ocean warming.

      • Okay, I phrased things very badly. When I said groundwater is ‘about as big as Antarctica’ I really meant ‘it probably contributes about 0.5 mm – which is about the same as the overestimate in Antarctic melting’.

      • According to Zwally, Antarctica is removing water from the oceans, which would reduce sea level rise.

        The last paper I read on water mining showed storage and mining are almost perfectly offsetting, but going forward water mining will make a significant contribution to SLR.

      • My point is, you have two factors that can lead to sea level rise:
        -Thermal expansion. The paper I linked to has this at 0.8mm/year for the 0-2000m ocean.
        -Melting land ice. The authors have this at just over 2mm/year.

        Willis and coauthors add this up and, surprise surprise, the total is marginally higher than actual sea level rise. So the deep ocean must have been cooling i.e. decreasing in height (though the effect is very small – the error bars include zero).

        The only way the deep ocean could have had a significant warming in the last decade would be if:
        -Every recent estimate of sea level rise from groundwater depletion is wrong (they’re all positive, not neutral). Even the IPCC has it at 0.3mm/year iirc, and other estimates are much higher.
        -Less land ice than expected is melting, leaving room for 0.x mm in sea level rise to come from the deep ocean. So, the proposition that Antarctica is contributing to sea level rise runs COUNTER to the deep-ocean-warming hypothesis.

        Bottom line, there’s only 3mm or so to go around… and there’s very little evidence any of this is coming from below 2000 meters.

      • Ok, I misread your Antarctica comment. Yes, Zwally would essentially leave 0.5mm in the sea level budget unaccounted for. That could have come from the deep ocean… or from somewhere else.

      • The last paper I read on water mining showed storage and mining are almost perfectly offsetting, but going forward water mining will make a significant contribution to SLR.

        And run-off changes unknown, presumed to be precisely zero.

        But yes, dam building now appears to be close to zero:
        http://cdn.phys.org/newman/gfx/news/hires/2014/waterstorage.png

      • Terrestrial contribution = +.38mm (Wada 2015)
        Antarctica contribution = -.23 mm (Zwally 2015)

      • David Springer

        If the heat is diffused in the ocean so much that it can’t be found, then it’s not going to affect surface temperature in any significant way because, due to the law of entropy, it can never be undiffused to warm the surface layer. Whether Trenberth is right or not is academic.

      • JCH:
        “Terrestrial contribution = +.38mm (Wada 2015)
        Antarctica contribution = -.23 mm (Zwally 2015)”

        I just love it when climate scientists express global sea level to an accuracy of 0.01 millimetres.

        False Precision Syndrome or what!

        If the heat is diffused in the ocean so much that it can’t be found, then it’s not going to affect surface temperature in any significant way because, due to the law of entropy, it can never be undiffused to warm the surface layer. Whether Trenberth is right or not is academic.

        This is why it was just plain stoopud to accuse Trenberth of ever meaning this. He never said the missing heat would be found in the abyssal ocean, so claiming he was envisioning the heat reforming and blasting to the surface (coming back to haunt) was great fun, but entirely pointless.

      • David Springer

        No, the consensus is that Kevin Trenberth did indeed imply the hidden heat was in the system and would have a significant effect. Even Gavin Schmidt at Real Climate wagged his finger at Kevin over it.

      • No, Gavin said people would have to ask Trenberth what Trenberth meant by his phrasing. So that is what I did. I have the correspondence.

      • It seems indisputable to me.

    • ‘South Pacific’…and other deep-ocean missing warmth.

      ‘How does it get there
      who can tell you why?
      Fools give you reasons,
      wise men never try.’

    • JCH: I suspect that the energy (not heat) is what is hiding in the oceans. Rotational energy in the equatorial bulge. Or back to ice at the poles.

      Fits the facts without complex explanations.


  4. Hi Judith. Thanks for posting this at ClimateEtc.

    One fix please: the paragraph starting with the word “Hence” should be included in the quote.

    Cheers.

  5. …recent warming period started in 1951, not the mid-1970s (1975) as has typically been shown. In other words, not only has NOAA endeavored to eliminate the slowdown in global warming in recent years, NOAA also appears to be attempting to eliminate the slowdown (or cooling) of global surface temperatures from the mid-20th Century to the mid-1970s.

    This is the sort of bias that we will of course endeavor to correct by throwing more money at the problem.

  6. Outstanding work. Please ensure Rep. Smith’s committee receives a copy.



  9. Concerning the closing section uncertainties bullet, it is worth repeating that Karl 2015 used Huang 2015 SST. Huang used the method of Kennedy 2011 to compute a 0.1C adjustment. But neither Karl nor Huang reported the uncertainty in this adjustment. Kennedy did, 0.1C +/- 1.7C. Absolutely unfit for purpose.

    • What’s not to like with what Kennedy did. I’ve seen a lot of that. It saves them the embarrassment of shrugging their shoulders and saying “Who the hell knows.”

      • What Kennedy just did:

        …The reconstruction methods have also been applied to the HadCRUT4 data set, yielding a range of estimates of global near-surface temperature change since the mid 19th century. Results show relatively increased warming in the global average over the 21st century owing to reconstruction of temperatures in high northern latitudes, supporting the findings of Cowtan & Way (2014) and Karl et al. (2015). While there is broad agreement between estimates of global and hemispheric changes throughout much of the 20th and 21st century, agreement is reduced in the 19th and early 20th century. This finding is supported by the climate model trials that highlight uncertainty in reconstructing data sparse regions, most notably in the Southern Hemisphere in the 19th century. These results underline the importance of continued data rescue activities, such as those of the International Surface Temperature Initiative and ACRE.

        The results of this study will form an addition to the HadCRUT4 global near-surface temperature data set.

        Authors
        Colin Morice – Met Office Hadley center for Climate Change
        Nick Rayner – Met Office Hadley Centre
        John Kennedy – Met Office Hadley Centre

    • Absolutely fine for the intended purpose.

      the intended purpose is to inform policy makers of the best estimate science can produce. That estimate will involve assumptions, decisions, and adjustments. It will be uncertain. The uncertainties may even be large,
      huge. But it nevertheless is the best that can be done.

      No skeptic has stepped forward with a better more defensible estimate.
      Consequently policy makers are free to decide based on the best available information.

      That information along with other considerations has convinced them that climate change is an important problem. They didn’t read your e-book.
      They passed a budget. They gave loads of money to renewables,
      again, ignoring your writings, as well as others.. I am not trying to single you out, just laying out the political realities.

      The realities are decision makers get to decide. You can elect them to cut budgets or defund climate science, or whatever you like.
      In the end they decide.

      From a pragmatic standpoint, the data and methods you all criticized… won out. Criticism is necessary but not fit for purpose. You have to take the next step and do your own science.

      • “The intended purpose is to inform policy makers of the best estimate science can produce. That estimate will involve assumptions, decisions, and adjustments. It will be uncertain. The uncertainties may even be large,
        huge. But it nevertheless is the best that can be done.”

        The uncertainties are so large they can’t possibly justify a multi-trillion dollar price tag. I like warmth. It feels good. I like to play golf. I like to swim. I like to play tennis. More importantly, it ensures a longer growing season, which means more food. I like more CO2 in the atmosphere because it’s already led to a greener world.

        In fact the uncertainties are so large it could very well be that doing nothing makes much more sense.

        (aka pokerguy)

      • It seems to me that when NOAA and NASA scientists write papers discussing what they conclude from their new ideas for adjustments to previously archived data, they should show how these adjusted data agree or disagree with their agency’s own satellite data. They should also discuss whether their scientific conclusions are supported by their agency’s own satellite data that don’t have such poor land and ocean coverage as surface measurement databases that allowed the assumptions and human decisions involved with the data adjustments being made in the paper.

        As a former NASA employee in our manned space program, I would also like to see NASA and NOAA conduct the kind of in-depth independent, objective, “non-advocacy” reviews of their new scientific findings and conclusions being used for critical public policy decisions, as we did at critical decision points in manned space programs where scientific truth was paramount in protecting astronaut safety. People like Bob Tisdale would be sought out by the agency to be on a review team of “non-advocacy” reviewers so the reviewers could ask penetrating questions of the agency’s key presenters of their new “discoveries”. That is how a well-run government agency seeking scientific truth protects its image. NOAA’s Ms. Sullivan should know that as her safety as a NASA astronaut depended on this independent review process.

      • What a pompous a_s. What about Spencer and Christy? Unlike you, they have done and continue to do their own science. And guess what – they even publish it.

      • Steven Mosher – What an appalling comment you have just made. You are saying that politicians will do their own thing and therefore they can choose whatever evidence suits them as “best available information” and happily ignore all valid criticism of that evidence and all better evidence. Meanwhile, those who try to point out that the evidence the politicians are using is heavily flawed and that there is much better evidence available are insulted by you and dismissed with the nonsensical instruction “do your own science”. Steven, bad science is bad science, regardless of what anyone else has or has not done, and science relies on bad science being called out. The fact that the politicians ignore the valid criticism and the better evidence does not make the criticism “not fit for purpose”, it makes the politicians not fit for purpose. Your warped logic actually encourages corruption and dictatorship.

      • The notion that best available knowledge is adequate knowledge is too mystical for me. (And too common!). Best available is best available. Adequate is adequate. Scant can be adequate, abundant can be inadequate. Shouldn’t need saying, but these days…

        As to the point that decision makers get to decide and have decided – yes! Steven is right. A clear victory to the climatariat. They won, skeps lost, for sure. South Australia and Tasmania have been greened to bankruptcy, and the threat to reduce the rest of my country to green beggary is still serious.

        Big Green has won so far, against thrift, conservation and commonsense. A crushing win.

        But it’s rather like Dunkirk. You don’t leave it at that, do you?

      • Mosher, your hubris continues to astonish me.

      • Poker guy.
        You don’t get to decide what is justified.
        You get to elect people who ignore you.

      • Harold, your points are well taken. It appears as though Nasa and Noaa have disowned their satellite data.

      • My hubris?
        I think the skeptics are the ones with that issue

      • It’s pretty simple mark.
        They didn’t listen to you. Or me.
        Your hubris is thinking that blog comments matter.

      • It’s possible to ID the people who think blog comments don’t matter. They seldom, if ever, comment on blogs.

      • Steven Mosher said “My hubris?
        I think the skeptics are the ones with that issue”

        Steven – you are the one who thinks they get to decide who wins the thread. I have seen that comment numerous times.

      • Bottom line is it all depends on who we elect next November. At least for the US.

        It’s up to Mosomoso in Oz. The UK seems to be waking up. The EU is a hopeless basket case.

      • mosomoso

        By far the biggest beast in Judith’s stable of climate science ‘monsters’ is the ‘I don’t know the answer’ monster. Unfortunately he appears to wear an invisibility cloak as he has never been officially seen in any of the bastions of the science establishment, but some working there murmur that they can often feel its presence.

        tonyb

      • No skeptic has stepped forward with a better more defensible estimate.

        The ice core data for the past ten thousand years provides a better and more defensible estimate of the future cycles. What has happened will repeat. What has not happened is just flawed model output.

      • Inviting skeps to come forward with their own “science” and their own facile explanations is a good example of consensus thinking. Then you’re not a skep, duh.

        They really think that someone somewhere will be able to reduce the stupendously complex flux which is climate to a kiddie console of buttons and levers. With a special little green coloured switch for “solutions”.

        Been tried, warmies. You just end up with a big bill from Big Green.

        – And Then There’s Cloud.

      • David Springer

        Mosher is living in a dream world. Climate alarmism has not been given more than lip service by decision makers in the US and very little elsewhere. COP21 should be called COPOUT21 as no binding agreement came out of it. Ask Hansen how the war is going. LOL

      • David Springer

        Mosher is a crushed man. Watts exposing the intentional corruption of the US surface temp record was the final straw. The five stages of grief have begun.

        http://img09.deviantart.net/b55e/i/2013/159/1/5/the_five_stages_of_grief_by_filthyphantom-d68b7em.png

      • Actually the policy maker wants the best estimate of the uncertainty (’cause that’s where the risk lies) not just the best estimate of the parameter.

        If you are certain the asteroid will hit there is really no risk, and nothing for the policy makers to worry about :)

      • The climate null hypothesis ( despite the – er(r) – 97%
        conn-sensus attempts by Cook or Trenberth) remains
        that natural variations will continue to occur. The
        burden of proof lies with those who challenge the null.
        To date no verification by observation from those
        challengers – hot, hot models that fail to match
        observations do not suff-ice, homogenizing data
        practices do not measure up, or rather, measure
        down as up, if you get my drift.

      • “Steven – you are the one who thinks they get to decide who wins the thread. I have seen that comment numerous times.”

        Err no. giving your opinion of who wins the thread does not decide who wins the thread. some people say +1, some say +100. That means
        “I like and agree with what you said”. The superlative of that is
        ‘You win the thread” or “You win the internet”

        Nevertheless the point remains. None of us are deciders. If we were,
        the last budget would have never passed. And so despite my luke warmism, and despite your skepticism, actions were taken.

        I will continue to tell you guys that if you want influence you have to join the fight in science. Sitting outside the ring sniping and throwing fruit
        gets you nowhere..

        Look at Rud. Look at all the energy he poured into his books. Imagine if he applied his skills on the actual battlefield of science. Same with other skeptics I read. I see it as a huge waste of talent.

      • Hi david.

        I’m not talking about paris. I’m talking about the budget deal.

        It sure looks like the republicans bought the BS on renewables.
        how many Billion?
        They didnt listen to you. me either.. but there you go.

      • “you are the one who thinks they get to decide who wins the thread.”
        You can’t win the thread if you uncritically echo totally junk figures like the one that set it off. Or if you think you do, it isn’t worth anything.

      • David Springer

        Mosher re; budget deal

        read harder

      • The best available knowledge may not be the right knowledge. Certainty of a solution of a complex puzzle may not use the carefully prepared view you and others have. Pride/fall?

      • David Springer

        P.S. Mosher

        Unrelated to reading the budget deal harder follows.

        Renewables aren’t BS. There is a time and place for them. Texas is leading the nation in successful use of wind power. If prices drop enough on solar PV (panels & grid tie electronics) it too is a dog that will have its day.

        http://www.texastribune.org/2015/03/18/georgetown-goes-all-renewable-energy/

        Georgetown is a city near me. I know it well. It’s notoriously conservative. They did the math and went all renewable.

      • David Springer

        http://www.utilitydive.com/news/congress-strikes-deal-to-extend-wind-solar-tax-credits-and-lift-oil-export/410947/

        Wind, solar, and Big Oil all have something to cheer about in 2016 US budget.

        It’s probably what’s called a revenue-neutral deal between liberals and conservatives in congress. Any tax revenues lost from extending tax incentives to wind and solar power would be negated by increase in tax revenue from lifting the 40-year ban on oil exports.

        You don’t hear environmentalists cheering $35/bbl crude much, eh? US gasoline price at the pump near me is $1.65 gallon. Allowing US producers to export crude keeps their profits up, global supply up, and prices down. Life is good.

        P.S. Drill baby drill.

      • @david springer

        “Georgetown is a city near me. I know it well. It’s notoriously conservative. They did the math and went all renewable.”

        Sorry, but the link you provided did not give proof of the statement in the title… nobody can go 100% fossil-free using only PV and wind, it is physically impossible to do it, since during any year in any place on earth there are many hours when there’s neither wind nor sunshine.
        The math they did is deeply flawed, to say the least.
        Please do not buy into this green propaganda!…

      • David, exports should narrow the spread between WTI and Brent and raise gas prices in the US and lower international slightly.

      • I think a good part of the reason oil and gas prices are so low (aside from the drop in demand) is the end of the Iran embargo. I think this was driven primarily by the need to ease the economic burden and prevent civil unrest in the EU. This is one area where I think Obama probably did what he had to because of international pressure. The need for Democrats to have low prices in an election year probably didn’t hurt.

      • Low oil prices also have the benefit of hurting some unsavory national oil producers who are over leveraged.

        That hurts high marginal cost US producers, but the export reduces that burden some and could lower international prices putting more stress on those state producers. Part of the US problem is monetary easing causing over investment in frac’ing in the US.

      • Mosher Look at Rud. Look at all the energy he poured into his books. Imagine if he applied his skills on the actual battlefield of science.

        Yeah, actual scientists don’t write no stinkin’ books.

      • Mosher continues to butcher logic. He’s made this silly claim before and will likely to continue to do so. I’ve seen no evidence that he can stop.

        Perhaps if he learned a bit about burden of proof and sufficiency of proof, he’d get a better grasp of the logic.

    • No Mosher, some of us have moved beyond that and are way ahead of you by concluding the uncertainties are so huge that we take it all with a grain of salt.

      I’m sure you too will catch on some day.

      • except the uncertainty isn’t 1.7. Oops… I guess you should have been more skeptical

      • I was making a larger point. Look at Alberto’s linked study. The uncertainty was 5 times as large as the base measurement. This is endemic. Among scientists it is appropriate. However, for public consumption what do they hear? When was the last time the Washington Post or NYT included the level of uncertainty or confidence level in the fright pieces. The media don’t waste their time on the details. They want to sell newspapers. Their ideology pushes them to save the world. So the media will focus on the worst case scenario and ignore the scientific findings in the peer reviewed studies. A symbiotic and enabling relationship exists, and both sides love it. Why wouldn’t they when both benefit.

    • Rud,
      “Kennedy did, 0.1C +/- 1.7C.”
      Could you be more specific? I couldn’t find that in either part of Kennedy’s 2011 paper. I did find this:
      “Because of the incomplete metadata, and the difficulty of estimating biases in historical SST records, the uncertainties of the recent adjustments are relatively large, amounting to almost 0.1°C in the late 1940s and in the 2000s”

      • NS, there are 2 Kennedy 2011 SST papers. You probably have looked at the second: Kennedy, Rayner, Smith, Parker, Saunby: …Biases and homogenization, J. Geophys. Res. 116, D14104. That is what Huang cites. But now read the first, ‘part 1, measurement and sampling uncertainties’, available at http://www.metoffice.gov.uk. Note figure 2b and table 1. Bottom line is, no amount of statistical prestidigitation can fix the past SST record for GAST purposes. Heck, I don’t think Kennedy’s estimates are any good either. Self-evident when you read his two papers valiantly trying.
        You cannot make a silk purse out of pig ears in the real world.

      • Rud,
        So they didn’t actually give that number? I don’t see where you could deduce it. Fig 2b just gives the ratio of two uncertainties, each of which is less than 0.1°C. And Table 1 is not an estimate of bias uncertainties at all. It is a measure of uncertainty of individual measurements (not adjustments or averages) and in any case is nowhere near 1.7.

        I think your figure (1.7) has no basis. The paper is here.

      • Nick Stokes:

        I believe the 1.7C comes from doubling Kennedy’s standard deviation of 0.85C as found in Table 5 of Kennedy (2011) part 2.

        Kent

      • “Table 5 of Kennedy (2011) part 2”
        Well, first he said it isn’t in part 2 but part 1. Then he said, I think, no, in the 2013 paper. But OK, say that really is it. It isn’t the uncertainty of any adjustment. It (0.85) is the standard deviation of individual pairs of observations. It isn’t the uncertainty in the quoted mean bias of 0.12°C. Kennedy does give that uncertainty measure right there – it is the standard error of that mean, SE=0.01°C.

        I don’t think that can be what he meant. I hope not.

      • Jim D,
        “Nick, it is.”
        My hopes are dashed. It seems this stuff goes around the echo chamber so much that people lose track of where it came from. The college kid is right, of course. It’s obvious when Table 5 lists the various (12) sub-regions. You’d expect them to vary a bit more than the global SE, and they do, from .05 to 0.21. Nothing like a range of 1.7.

        Ross does make a lot of these bungles. And sceptics don’t seem to check.

      • Yes, you see the skeptical claims go around, but the responses are not remembered.
        Below I had a question for Tisdale that finally came out of moderation. What is the difference between what Tisdale says Karl didn’t account for and the large correction around 1945 that Karl did account for in his Figure 2B?

      • I see that we have been through all this here before. John Kennedy himself refuted it. Rather too gently, I thought. Victor Venema was more to the point:

        A statistician confusing the sample standard deviation and the standard error of the mean, that is quite something. Scary what mitigation scepticism does to a person.

        And yet, even when pointed out, he just didn’t get it at all.

      • Sorry wrong link – the Kennedy refutation is here.

      • Nick Stokes:

        I may have their position bungled, but it seems some folks (such as Prof. McKitrick) believe that the standard error of the mean is insufficient, by itself, to convey the true nature of the uncertainty in such global measurements.

        Concerning Karl, et al., there was disagreement as to whether the standard error of the mean provided sufficient information about the true uncertainty in this particular adjustment, particularly when compared to the large individual-pair deviations. It is my impression that “best practices” would clearly report both the SD and the SE, though many are tempted to trumpet only the “best looking” of the two and leave the uglier one buried in footnotes and previous papers.

        This reflects the larger debate over the precision asserted in many climate change papers. Given the way SE is calculated, when you are measuring anomalies (always a small number and typically a fraction), 20k+ measurements will necessarily produce a small SE (in this case 0.01). Mathematically correct but, to critics, misleadingly precise.

        These arguments may be more important in medical research, where (typically) smaller sample sizes do not produce such impressive standard error calculations.

        I recall Prof. McKitrick arguing to the effect that important information should not be discarded. Consider that Karl et al. applied a 0.12C adjustment (again, Table 5) to all buoys across the globe despite a 2-sigma spread of approximately 1.7C across the test sample. That adjustment is significantly “wrong” in almost every individual case (and we don’t know in which direction). To some observers, reporting the standard error by itself is simply an insufficient approach.

        Kent
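
        For concreteness, a minimal sketch of that arithmetic (a hypothetical simulation using the round figures quoted in this thread: bias ~0.12C, pair SD ~0.85C, ~21,000 ship-buoy pairs; the generated differences are stand-ins, not Kennedy’s data):

```python
# Illustrative SD-vs-SEM arithmetic with simulated ship-minus-buoy pairs.
import numpy as np

rng = np.random.default_rng(0)
n = 21_000
diffs = rng.normal(loc=0.12, scale=0.85, size=n)  # assumed bias and spread

mean = diffs.mean()        # estimate of the bias (~0.12)
sd = diffs.std(ddof=1)     # spread of individual pairs (~0.85)
sem = sd / np.sqrt(n)      # uncertainty of the mean (~0.006)
print(round(float(mean), 3), round(float(sd), 3), round(float(sem), 4))
# The 2-sigma pair spread (~1.7) and the SEM (~0.01) answer different
# questions: how much one pair varies vs. how well the mean is pinned down.
```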

      • Kennedy explains. Nobody listens. Conspiracy rot all the way down. SM says to wait until the Brits weigh in. It appears at AGU15 the Met Office affirms C&W and Karl15, and announces a coming addition to HadCRUT4. Lamar Smith took physics in college.

      • JCH

        I have no idea where that very large uncertainty figure of 1.7C comes from. It doesn’t seem to be in any of John’s work.

        Both you and he know my opinion of the very limited value of the extraordinarily sparse and methodologically fractured global SSTs prior to around 1960, but in the context of this particular uncertainty argument that concern isn’t relevant.

        tonyb

      • While 1.7C seems extreme, so does Kennedy’s SE of 0.01. It seems to assume approx zero correlation over 21K paired observations over 10 years. No spatial or temporal correlation to speak of? Very unlikely.
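
        A small sketch of how that would play out: for samples with AR(1)-style correlation rho, the effective sample size shrinks to roughly n(1-rho)/(1+rho) (the Bayley-Hammersley approximation), inflating the naive SEM. The rho values below are assumptions for illustration, not estimates from the pairs:

```python
# SEM under an assumed AR(1) correlation rho between samples.
import math

def sem(sd, n, rho=0.0):
    n_eff = n * (1 - rho) / (1 + rho)  # effective sample size
    return sd / math.sqrt(n_eff)

sd, n = 0.85, 21_000
for rho in (0.0, 0.3, 0.8):
    print(rho, round(sem(sd, n, rho), 4))
# rho=0 reproduces ~0.006; rho=0.8 roughly triples it, yet even then the
# result is nowhere near the disputed 1.7 figure.
```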

      • climatereason & foias:

        As I pointed out above, the 1.7C is a 2-sigma standard deviation estimate derived from Kennedy’s published 1-sigma of 0.85. The mean of all samples was 0.12 (global). Obviously, there was a relatively large amount of variability around the mean.

        I believe more attention is now being paid to the SE vs SD issue in biomedical research. For example:

        …SD and SEM estimate quite different things. But in many articles, SEM and SD are used interchangeably and authors summarize their data with SEM as it makes data seem less variable and more representative. However, unlike SD which quantifies the variability, SEM quantifies uncertainty in estimate of the mean. As readers are generally interested in knowing the variability within sample and not proximity of mean to the population mean, data should be precisely summarized with SD and not with SEM.

        Taken from:
        “What to use to express the variability of data: Standard deviation or standard error of mean?”

        http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3487226/

        Kent

      • Let me clarify my use of the pull quote above, since it appears to come down against using SE in the Karl paper (and I do not believe that is the case). I was actually trying to show that those who want to see SD variability highlighted in the Karl paper are expressing the common concern that important information should not be left out of a presentation.

      • opluso,
        “However, unlike SD which quantifies the variability, SEM quantifies uncertainty in estimate of the mean.”
        Exactly. And 0.12 is the mean, and 0.01 is the uncertainty of the mean. The mean is the estimate of bias. SD quantifies the variability of samples. In that medical context, that is of interest (patients), but it isn’t here. The samples consist of pairs of measures taken at different but close times and places. You’d like them to be the same, but you don’t have that data. Sometimes the variation in time/space will have the ship warmer, sometimes the buoy. That’s expressed in the SD. They choose pairs limited to up to 50km apart, and in a specific night period, to diminish this variability, and still have about 21000 samples. They average to get rid of that random variability and estimate the residual bias. The entirely artificial variability (SD) imposed by these measuring issues is of no interest on its own. You just want to get rid of it to estimate the bias. That’s why they average 21,000 (SEM), not 1 (SD).

        This is really elementary stuff. As VV says, it’s shocking that Ross can’t get it right.

      • Nick Stokes:

        The entirely artificial variability (SD) imposed by these measuring issues is of no interest on its own.

        Assuming that the variability between measured pairs is “entirely artificial” is probably close to the heart of the dispute. You average the variability because you do not know which number is correct (or how close either is to the “true” temp). To then display only the SE of your averaging calculation would imply, to many, an overly optimistic claim of accuracy in the measurement of actual temperatures/anomalies.

        My home frequently experiences 2C differences from the official monitoring station approximately 10km away. That is not an “artificial” difference and ignoring it rejects valuable information about climate variability and, therefore, local anomalies.

        Kent

        P.S. Happy Festivus to all in the Southern Hemisphere!

        http://hypervocal.com/wp-content/uploads/2012/12/happy-festivus.jpeg

      • “They average to get rid of that random variability and estimate the residual bias.”

        Two critical assumptions, implied in this statement, are that the sampled bias is a random variable and that the sample values Kennedy used to estimate the bias are uncorrelated over time/space. To accept Kennedy’s use of SEM for estimating the uncertainty in the residual bias, it’s necessary to agree to these two assumptions. I don’t, at least not without first testing the data for possible trends.

        If there is an underlying time-dependent trend in the bias, using a single global bias value for calibration purposes will obviously negatively impact any time-based reconstruction of a global temperature series. A time-dependent trend might creep in through gradual changes over the years in instrumentation or ship-type.

        Because not all samples are weighted equally when computing a global temperature, if there is a spatially-dependent trend (or higher-order function) in the sampled bias this would also affect the uncertainty. Prof. McKitrick thought this might be the case when he talked about the impact of shipping lanes. His suggestion was to weight observations according to location and sample density in order to properly estimate a global mean. (Alexander Coulter seemed to agree with him on this point.)

        “This is really elementary stuff. As VV says, it’s shocking that Ross can’t get it right.” Before you make statements like this, maybe you should think a little more.

      • “Two critical assumptions, implied in this statement, are that the sampled bias is a random variable and that the sample values Kennedy used to estimate the bias are uncorrelated over time/space.”

        There is no assumption that the bias being estimated is random; in fact, that makes no sense. You attenuate the random effects by averaging to estimate the bias.

        There is almost always some issue of correlation in samples. Still, SEM is the standard estimator of mean uncertainty. There is nothing about this data set to suggest that it is inappropriate. But none of this in any way justifies use of sample SD (170 times larger) as an alternative. That was a simple elementary bungle. The rest is just BS to cover.

      • “There is no assumption that the bias being estimated is random, in fact, that makes no sense.” It is the sampled bias that is assumed by Kennedy to be a random variable. Not the true bias.

        “There is almost always some issue of correlation in samples.” Coin flips?

        “Still, SEM is the standard estimator of mean uncertainty.” When dealing with statistically independent samples.

        “There is nothing about this data set to suggest that [SEM] is inappropriate.” There are good reasons to believe that space/time trends might exist in the samples. The data should be tested first, before assuming SEM is appropriate.

      • “There are good reasons to believe that space/time trends might exist in the samples.”

        The samples are difference pairs. Out of millions of ship observations and buoy observations, they select pairs that are close in time and space (50km). Of course there are temperature trends in time and space. But there is no reason to expect a trend in the frequency with which the buoy is on the warmer side in individual pairings.

      • “Of course there are temperature trends in time and space.” Unless you test the data, you’re mostly just guessing about the existence of trends. I suspect Kennedy doesn’t believe there are trends, otherwise he wouldn’t have recommended using SEM for bias uncertainty.

        “But there is no reason to expect a trend in the frequency with which the buoy is on the warmer side in individual pairings.” I agree, if you restrict your bias estimate to the region and to the approximate time of each individual pairing. But if you wish to ignore information about possible space/time trends and relegate your analysis to individual samples, then you are stuck with accepting something close to the standard deviation as the measure of bias uncertainty.
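
        A minimal sketch of the trend test being argued for (synthetic stand-in data with an assumed drift, not Kennedy’s pairs): regress the pair differences on observation time and check whether the slope is distinguishable from zero before treating the bias as one global constant.

```python
# Test for a time-dependent drift in simulated ship-minus-buoy differences.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 10, 2_000))                 # year of each pairing
diffs = 0.12 + 0.01 * t + rng.normal(0, 0.85, t.size)  # bias with a slow drift

res = stats.linregress(t, diffs)
print(round(res.slope, 4), round(res.pvalue, 4))
# A clearly non-zero slope would mean a single global bias (and its SEM)
# misstates the correction needed at the ends of the record.
```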

      • Given that the standard error is derived from the standard deviation I keep wondering what the fuss is all about.

        Nevertheless, with the possible exception of this particular sub-debate over Karl (2015), everyone seems to agree that you cannot conduct proper climate science without frequent reference to standard deviations.

        For example, when Kennedy presented his original findings on co-located ship/buoy pairs in Table 5, he showed global differences (average bias = 0.12) along with regional differences (ranging from an average bias of 0.05 to 0.23). But he also presented each respective set’s standard deviation (ranging from 0.43 to 0.94) together with the respective standard error calculations (from 0.01 to 0.1).

        A key precursor to Karl (2015), Huang (2015), relies upon standard deviations for many crucial steps. The paper simply couldn’t have been written without them because they contain such important information.

        For example, the quality control check for sea surface temperature data in Huang (2015) utilizes the standard deviations:

        The SSTs from ships or buoys were accepted (rejected) under a QC criterion that observed SSTs differ from the first-guess SST from ERSST.v3b by less (more) than 4 times standard deviation (STD) of SST.

        However, at one point, Huang presents a standard deviation for ship/buoy differences in a particular period (using 1990-2012 instead of Kennedy’s 1998-2007) but the “STD” turns out to be surprisingly small. I assume this is a typo and Huang was actually referring to the standard error for some reason?

        The mean difference of ship-buoy data between 1990 and 2012 is 0.12C with a STD of 0.04C (all rounded to hundredths in precision).

        Kent
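
        For what it’s worth, the quoted QC rule is simple to state in code; this is a minimal sketch with invented numbers, not Huang’s implementation:

```python
# Accept an observation only if it falls within k standard deviations
# of the first-guess SST (k = 4 in the quoted criterion).
import numpy as np

def qc_accept(obs, first_guess, std, k=4.0):
    """Boolean mask of observations passing the k*STD check."""
    return np.abs(obs - first_guess) < k * std

obs = np.array([18.2, 19.1, 25.0, 18.7])
first_guess = 18.5  # stand-in for the ERSST.v3b first guess in a cell
std = 1.0           # stand-in for the cell's SST standard deviation
print(qc_accept(obs, first_guess, std))  # -> [ True  True False  True]
```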

      • “But he also presented each respective set’s standard deviation (ranging from 0.43 to 0.94) together with the respective standard error calculations (from 0.01 to 0.1).”
        Yes. Apart from the S Ocean, with 55 obs, the SD range is 0.7 to 0.94. It’s basically a property of the temperature variability reflected through the differencing of near pairs. The variation in SEM is due to the large difference in the number of pairs N in each region. To get SE you divide SD by sqrt(N). N varies from about 10000 to 55.

        But that is why the SD is in no way an estimator of the uncertainty of global mean. It’s the uncertainty you’d get if you relied on just one pair to measure the bias. No-one would do that. They went to the trouble of assembling 21000 pairs to get the uncertainty down.

        “Given that the standard error is derived from the standard deviation I keep wondering what the fuss is all about.”
        The fuss is about the difference between 1.7 and 0.01. Near enough for sceptic work, I guess.
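
        (As a quick check of those numbers: SE = SD/sqrt(N), so 0.85/sqrt(21000) ≈ 0.006 for the global set, while 0.94/sqrt(55) ≈ 0.13 for the S Ocean, roughly spanning the 0.01 to 0.1 SE range reported in Table 5.)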

      • Nick, “But that is why the SD is in no way an estimator of the uncertainty of global mean. It’s the uncertainty you’d get if you relied on just one pair to measure the bias.”

        If you had just one point, you wouldn’t have any indication of uncertainty based on that data. With each additional point you begin to get an indication of the uncertainty based on that set of data. Use an infinite number of points and there will be no uncertainty about how uncertain the data set is. SEM only gives you information on the data you are using, not on the object you are measuring; that takes the SD.

        A great example is BEST’s estimation using sea ice as land or as ocean. The difference due to the choice is 0.7 C +/- 0.09 C, which isn’t going to be reduced very much by more measurements; it is a real-world limit. If you were measuring an object you know would never change, like a brick by a manufacturer from the same mold, the SEM would provide more information on that brick, while the SD would provide information on the quality of the measurements. However, if there is a mixture of types of bricks, you cannot tell if the SD is related to the mixture of bricks or to the quality of the measurements; all you know is there is some uncertainty.

      • But that is why the SD is in no way an estimator of the uncertainty of global mean. It’s the uncertainty you’d get if you relied on just one pair to measure the bias. No-one would do that. They went to the trouble of assembling 21000 pairs to get the uncertainty down.

        I agree with captdallas. The SD is the estimator of the uncertainty of the actual measurements. It doesn’t go away just because you thereafter calculate a standard error for the mean of this uncertainty range. It also doesn’t seem to matter what result you get from your SE calculation if you are going to use the mean to make your adjustments regardless of the SE.

        It’s not even clear to me that any of the uncertainties (of the actual measurements) did “go away” in Karl (2015). It’s admittedly difficult to track the multiple adjustments through the earlier papers. But don’t forget that Karl (2015) had to apply a 0.10 significance level, rather than the more stringent 0.05, in order to claim they had “busted the pause”.

        To me this latter point is potential evidence of post hoc p-hacking. Consider that at other points in the paper trail (e.g., Huang 2015) the 0.05 level is (at least sometimes) used. But the bottom line seems to be that Karl (2015) preserved at least some of the measurement uncertainty in the final product.

  10. Nice job Bob. I am sure with enough time Karl et al. will get around to finishing their little project. You know, comparing the absolute values of ERSST.v4 with HadSST3 is pretty comical.

  11. When looking at long-term ocean temperature we need to put in perspective what ‘long term’ actually means.

    Firstly, the first proper oceanographic expedition was considered to be that of the British ship ‘Challenger’, which for several years from 1872 gathered invaluable oceanographic material, including temperatures of the ocean surface and depths.

    http://oceanexplorer.noaa.gov/explorations/03mountains/background/challenger/challenger.html

    The US spacecraft were named after this ship.

    Whilst an epic voyage, it covered a tiny fraction of the earth’s oceans, and subsequent additional data was highly sporadic, ranging from that taken by the Royal Navy to Merchant Navy and fishing vessels. So much of it is not necessarily accurate or detailed enough for the end results to be gold-standard ‘scientific’ data.

    Better, more extensive data, collected in a more methodical fashion, dates from well after World War 2. So our knowledge of what has been happening in the oceans in the more distant past is extremely limited. Here is a chart showing glacier advances and retreats over the last thousand years.

    https://curryja.files.wordpress.com/2015/11/slide71.png

    With high water stands in Roman times, around 1200 AD, and in 1600 AD, the melting and re-freezing of the glaciers is going to have some correlation with both the high water marks and the temperature of the oceans. After suffering severe chilling during the various periods of the LIA, the last of which was around the time of the Challenger voyages, should we really be surprised that the ocean is gradually warming again?

    This is not to take anything away from Bob’s presentation, but our knowledge of the ocean until relatively recently is limited, so any trends can take little note of the more distant past in order to provide historical context.
    Sorry, John K.

    tonyb

    • should we really be surprised that the ocean is gradually warming again?

      Glaciers grow, Glaciers recede, Glaciers grow, Glaciers recede, Glaciers grow

      Glaciers grow when Polar oceans are warm and thawed. It snows more than the sun removes every year.

      Glaciers recede when oceans are cold and frozen. It snows less than the sun removes every year.

      The ice core data does indicate this.

      We are back to a warm period that will grow the Glaciers.
      Recent NASA Publication about Antarctica does confirm this.

      Any studies that show we have warmed ever since the cold Little Ice Age do agree with actual data from the ice cores. Natural climate cycles do work this way, with or without mankind’s fossil fuels.

      • “Glaciers recede when oceans are cold and frozen. It snows less than the sun removes every year…”

        …and yet, Hannibal crossed the Alps during the Roman climatic optimum. If you remember, it was a piece of wood that brought the Bernese geologist Christian Schlüchter into conflict with climate research. Schlüchter learned that Hannibal didn’t cross an icy Alps: his army crossed a forest.

        We have since learned that the glaciers come and go on a much faster earthly timetable than we realized: they were gone both 2,000 and 4,000 years ago, not just 10,000 years ago; and, of course, the reason for their demise at those times obviously had nothing to do with us moderns injecting our CO2 into the atmosphere.

      • Hannibal crossed the Alps during the Roman climatic optimum.

        The snow had started falling, but the glaciers advance after much snow has fallen. The warm period is after ice has depleted and retreated. It does take several hundred years of more snowfall to replenish and more years to advance.

        Actually, the snowfall starts while the glaciers are still retreating. That is the part of the cycle we are in now. Ice Volume is increasing, the ice retreat is slowing and/or paused, the advance will come later.

      • We have had thousand year cold and warm cycles for ten thousand years. This is the new normal.

      • “Glaciers recede when oceans are cold and frozen. It snows less than the sun removes every year…”
        As the glaciers recede, it gets warmer, the warm oceans thaw and increase snowfall, the glaciers are still retreating and it is still warming. This is what we have observed. Ice volume is growing while some glaciers are retreating. These are wonderful times. We are collecting much data while this cycle repeats and confirms past data.

    • Consensus Theory makes earth cold to grow glaciers and makes earth warm to recede glaciers.

      Mother nature grows glaciers in warm times when thawed ocean water is available to provide moisture for snowfall. Mother nature depletes glaciers in cold times when the frozen oceans are not available to provide moisture for snowfall.

      You should all think about this. Earth cools and warms because the glaciers grow and deplete, because the glaciers advance and retreat. The glaciers are not a result of the tilt of the earth or solar cycles or CO2; the glaciers grow and then advance and cool the earth. They grow in warm times and the cooling follows.

    • Ya forget using old whaling logs

      • So throw the data, untouched, into a public-facing database so that others may use it also. Describe your assessment of the quality and reliability of the data as well.

        Don’t predigest it to your own ends and thereby give them no real chance to respond.

        P.S. I do love how an extrapolation/interpolation of the sparse point-sampled data set (thermometer) is somehow a better way to get to the underlying 3D Temperature Field than a more continuous volume sampling of the same Field.

      • i.e. Satellite. (pushed return too soon)

  12. Very nice detailed work Bob. Helps put into context a lot of the data (and some of the dangers of interpolation/extrapolation).

    • The problem isn’t the interpolation or the extrapolation but the malpolation.

      As a side note:
      -polation comes from the Latin interpolare (from polire, to polish).

      So interpolation is polishing between and extrapolation is polishing the outside.

      • Most people miss the most important point. Before and after all the interpolation and extrapolation and manipulation, the results are still well inside the bounds of the Roman and Medieval Warmings.

        Natural Variability rules and most people on the different sides are not studying the natural variability. Those who do, pick a driver and ignore other possibilities. They all get good correlations, sometimes, but not always. The right driver does always correlate, overpowering the others.

    • I think that what they are missing is how to reconcile the differences that will appear when you take a point-sampling instrument, extrapolate/interpolate it to derive the field, and then compare the result to a volume/area-sampling instrument looking at the same field.

      Both will be correct. Both will ‘see’ different things. Both will average out to any long term ‘trends’ in the future.

  13. Remaining puzzled by the analysis of station-based data, I wonder why satellite observations are not used to show the recent temperature evolution, in particular the lack of statistically significant change over roughly the last 18 years:
    http://climate.mr-int.ch/images/graphs/SatelliteUAH.png
    Global monthly observations as published by the University of Alabama in Huntsville, smoothed in various ways

  14. The Thompson quote is interesting:
    “The most notable change in the SST archive following December 1941 occurred in August 1945. Between January 1942 and August 1945, ~80% of the observations are from ships of US origin and ~5% are from ships of UK origin; between late 1945 and 1949 only ~30% of the observations are of US origin and about 50% are of UK origin. The change in country of origin in August 1945 is important for two reasons: first, in August 1945 US ships relied mainly on engine room intake measurements whereas UK ships used primarily uninsulated bucket measurements, and second, engine room intake measurements are generally biased warm relative to uninsulated bucket measurements.

    “Hence, the sudden drop in SSTs in late 1945 is consistent with the rapid but uncorrected change from engine room intake measurements (US ships) to uninsulated bucket measurements (UK ships) at the end of the Second World War. As the drop derives from the composition of the ICOADS data set, it is present in all records of twentieth-century climate variability that include SST data.”

    Re-read that last sentence!

    If the change in measurement method made such a large difference during WWII, would not that same change later in the century also make a difference? Didn’t the later measurements also shift to engine intake measurements? This would mean that all the SST measurements in the late 20th Century should be shifted down. This would mean the warming is not as much as thought, and that perhaps the “pause” is because the measurement methods are no longer changing.

    • That is what Huang purported to do, adjusting engine room intakes to the later float buoys. But he took the earlier engine room readings down, not up. Got the sign wrong, he did. What happens when the uncertainty is 17 times the estimated adjustment? Pointing that out sure upset Mosher upthread. He thinks we lost the war. Heck, this long war has just begun. And the poor warmunists like him are just beginning to realize Ma Nature is not on their side and COP21 is toothless political theater. Hence the growing, ever easier to spot desperation and chicanery BT eloquently exposes here at Judith’s invitation.

      • Rud,
        “What happens when the uncertainty is 17 times the estimated adjustment”
        I wish you would point more specifically to your basis for this claim. I can’t find it.

      • For those who are unaware of John Kennedy’s work on SST uncertainties, here is a copy of his very detailed paper:

        http://www.metoffice.gov.uk/hadobs/hadsst3/Kennedy_2013_submitted.pdf

        John is a fine scientist who is aware of the shortcomings of SSTs. Other than in a few very well travelled sea routes, I am not sure that SSTs back to 1850 can be considered a true scientific set of data from which conclusions can be drawn. Those from 1970 on we can take as being somewhere in the ballpark.

        I note that in his paper John mentions both myself and Greg Goodman, showing that he does take note of blog comments.

        tonyb

      • TY, Tony. I had just pointed NS to this same reference upthread where he first asked. The confusion is that Huang cited Kennedy’s second 2011 paper on the same subject, published in JGR. Read both, and Bob Tisdale’s magnificent critique becomes very simple to understand.
        Buried the uncertainty monster deep, Huang did. But The Force Awakens…

      • …The reconstruction methods have also been applied to the HadCRUT4 data set, yielding a range of estimates of global near-surface temperature change since the mid 19th century. Results show relatively increased warming in the global average over the 21st century owing to reconstruction of temperatures in high northern latitudes, supporting the findings of Cowtan & Way (2014) and Karl et al. (2015). – one of the authors = John Kennedy, Met office.

      • JCH | December 21, 2015 at 6:09 pm |
        Results show relatively increased warming in the global average over the 21st century owing to reconstruction of temperatures in high northern latitudes, supporting the findings of Cowtan & Way (2014) and Karl et al. (2015).

        Gee that’s nice. However, as noted above the LOD isn’t changing much.

        The ocean can’t warm and the polar ice can’t melt without the planet slowing down.

        You can construct algorithms to misadjust the data consistently between data sets. While clever, if the earth’s rotation isn’t slowing they are simply wrong.

        By the one objective standard we have that correlates to planet warming and measures the planet as a whole, the warming is very slow in the 21st century and it certainly is happening much slower than in the 20th century.

      • We demonstrate that the combination of lower estimates of the 20th century GMSL rise (up to 1990) improved modeling of the GIA process and that the correction of the eclipse record for a signal due to angular momentum exchange between the fluid outer core and the mantle reconciles all three Earth rotation observations. This resolution adds confidence to recent estimates of individual contributions to 20th century sea-level change and to projections of GMSL rise to the end of the 21st century based on them. …

      • JCH | December 21, 2015 at 9:32 pm |
        We demonstrate that the combination of lower estimates of the 20th century GMSL rise (up to 1990) …

        https://i.imgur.com/FngGLXx.png

        You are claiming that 2 < 1. Bull. From 1965 to 1998 the average was slightly more than 2. Since 1998 the average has been less than 1.

        Nice try. The earth is slowing less than it used to when the earth was warming more.

        Further, your study ends at 1990, where the green arrow ends and the red arrow (less slowing, thus less sea level rise) begins.

        The earth is slowing less than it used to when the earth was warming more.

        NO!

        Shorter length of day does mean the Earth Spin Rate is faster than it was 40 years ago. That does mean that sea level did drop. Increasing sea level would have slowed the earth and increased Length of Day. That did not happen.

      • Popes, I think you misunderstood PA’s chart. As long as it’s above zero, it means days are getting longer and the Earth is spinning slower. So the deceleration in the Earth’s rotation continues, just not as strongly as before.

        Funnily enough this is easy to measure, simply by looking at how often leap seconds are being added.

        So:
        -Either the increase in sea level rise has been exaggerated (instrument changes, going from tide gauges to satellites, whatever)
        -Or the water isn’t coming from the poles

      • tonyb,
        thanks for the Kennedy paper.

        You and ristvan always provide good information and links. Plus turbo has joined the enlightening group with professionalism and courtesy.
        Scott

      • Scott

        Just the man. I have been looking out for you for a couple of days.

        Someone sent me some very interesting original work on the Roman and MWP sea level rise. It fits in pretty well with the dates I gave you for glacial advances and retreats, but what is especially interesting are the comments on land movements. I have been given permission to send it to others, so if you want a copy please email

        tonyATclimatereasonDOTcom

        tonyb

      • As long as it’s above zero, it means days are getting longer and the Earth is spinning slower.

        The Y axis is the LOD, not the rate of change, so Pope is correct.

      • tonyb,
        I will e-mail you when I get home tonight.
        Regards, Richard

      • http://www.nist.gov/pml/div688/grp50/leapsecond.cfm

        List of leap seconds added.

        There were 22 leap seconds added in the 70s, 80s, 90s.

        There have been only 4 leap seconds added in the 21st century.

      • I predict an imminent change of tack

      • an article

        Of course, all that dam building of the past sped up rotation, so some slowing will be necessary to get us back to pre-dam-building levels.

      • peter3172 | December 22, 2015 at 12:35 pm |
        As long as it’s above zero, it means days are getting longer and the Earth is spinning slower.

        The Y axis is the LOD, not the rate of change, so Pope is correct.

        The leap seconds follow the LOD in milliseconds, so it is a displacement, not a rate. However, the 1.7 milliseconds/day/century due to the moon and other drag is a rate; therefore the LOD due to drag alone in 2015 should be 0.765 milliseconds longer than the day in 1970.

        So… The earth is displaying a significantly smaller moment of inertia in the 21st century than it did in the 20th.
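
        A sketch of the bookkeeping behind those leap-second counts: each day that runs longer than 86,400 SI seconds adds its excess LOD to the UT1-UTC drift, and a leap second is inserted roughly once the drift nears a full second. The excess-LOD values below are illustrative round numbers, not measurements:

```python
# Relate mean excess length-of-day (ms/day) to leap-second frequency.

def years_per_leap_second(excess_lod_ms_per_day):
    days_per_leap = 1000.0 / excess_lod_ms_per_day  # ms accumulated per day
    return days_per_leap / 365.25

# ~2.5 ms/day (1970s-90s ballpark): one leap second every ~1.1 years,
# consistent with 22 leap seconds over 1972-1999.
print(round(years_per_leap_second(2.5), 2))
# ~0.7 ms/day (2000s ballpark): one every ~3.9 years, consistent with
# only 4 leap seconds between 2000 and 2015.
print(round(years_per_leap_second(0.7), 2))
```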

      • So:
        -Either the increase in sea level rise has been exaggerated (instrument changes, going from tide gauges to satellites, whatever)
        -Or the water isn’t coming from the poles

        There are other things that can be in play. I cannot find a single definitive paper that makes the claim that is being made here.

  15. Officially, an El Niño is an ocean temperature rise of 0.5°C above the historical baseline for three months in a row. What constitutes ‘history’, you may ask? That would be 1950 to 2015.

    Given that the Earth and therefore the oceans have been heating for thousands of years, any ‘baseline’ in a truly historical context would be lower. Accordingly, as the Earth cools over the next ~30 years, there could be a La Niña that may well be the equivalent of an El Niño of yesteryear.
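
    The stated rule is easy to make concrete; here is a minimal sketch of the three-consecutive-months test as described above, with invented anomaly values:

```python
# Flag months inside a run of >= 3 consecutive months at or above +0.5C.

def el_nino_months(anomalies, threshold=0.5, run_length=3):
    """Indices of months inside a qualifying run above the threshold."""
    flagged, run = set(), []
    for i, a in enumerate(anomalies):
        if a >= threshold:
            run.append(i)
        else:
            if len(run) >= run_length:
                flagged.update(run)
            run = []
    if len(run) >= run_length:
        flagged.update(run)
    return sorted(flagged)

print(el_nino_months([0.2, 0.6, 0.7, 0.9, 0.4, 0.6, 0.6]))  # -> [1, 2, 3]
```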

  16. I keep getting excited by that circa 1908 to 1940 temp upturn. Looks pretty radical, but it’s a conversation-ender, by and large. Like the unmissable PIG ash sheet and active volcanism, it’s there, but one does not discuss. Try to talk about it and someone goes to the bathroom, someone else has to make a call or check the roast.

    The 1908 upturn does bring out a few rare if half-hearted sunspot enthusiasts from among the warmistas, and they might poke around listlessly for some volcanism to explain the temp downturn prior to that date. But their hearts aren’t in it. Where’s the boil, the melt, the sizzle?

    As for early 20th century warming as shown on so many of those guess-graphs… it’s eyes right, and stand to attention!

    • Mosomo: Here’s a guy who discovered what all the warmunists missed:
      http://ozonedepletiontheory.info/climate-trends-and-depletion.html

      • Well…

        22 PPM = 0.2 W/m2 says that something other than CO2 was responsible for most of the warming.

        Warmunists aren’t interested in attribution because that would attribute most of the warming to something else.

        I don’t have a pet theory. It could be a combination of ALW (anthropogenic local warming), other gases or aerosols, high multiple solar forcing, or just bad bookkeeping and data mismanagement by the climate centers.

        Ozone theory as a part of the solution is cool. The disinterest by warmunists means none of the other explanations is ruled out.

  17. Why do we calculate Sea Surface Temperature at all? As a metric, what is it supposed to represent? What do we hope to learn from Sea Surface Temperatures compared to past Sea Surface Temperatures? — other than their usefulness in predicting some of the climate cycles, like El Nino?

    It certainly tells us nothing about the heat content of the oceans and nothing whatever about the concepts involved in GHG warming.

    What is hypothesized about Sea Surface Temperatures that makes us want to track them over time?

    • Kip Hansen,

      You must be a heretic. You better watch it, mate, otherwise you’ll have your Warmist Magic Decoder Ring taken away, and stomped on!

      The sea surface temperature is not exactly the sea surface temperature. It’s exactly the same as the surface air temperature, but different. Both are completely pointless. As a means of divining the future, one might as well claim that measuring the distance between the runes with ever increasing accuracy makes the future ever so much easier to predict.

      Predicting El Niño, or even El Morhino, is a useless waste of time. Oh dear, the science is settled, El Niño is coming. Sometime, somewhere, maybe! Let us all panic and gnash our teeth, rend our garments, and all the rest of the Warmist nonsense! Or have a nice cup of tea, keep calm, and carry on.

      You ask why anybody would bother with measuring something of such supreme indifference, and even less precise definition. It’s simple. Money, prestige, ignorance, better than having to look for a real job . . .

      Cheers.

    • isn’t it because there was a big pile of haphazard and inconsistently gathered historical data
      thrown together and given a fancy name “Sea Surface Temperature”
      because fancy names are good for guiding unproven hypotheses down the road to Dogma

    • What an original thought.

      Scott

    • Kip: It is the desire to get a figure for the 3D Temperature Field everybody talks about as Global Temperature. Without that 70% of data, the claims made are rather silly.

      So no choice really. (Doesn’t mean they got it right though).

  18. Sometimes backing away a bit makes the picture clearer.

    Carbon dioxide has been erroneously suspected of being a forcing on global temperature. Compelling evidence that CO2 has no effect on climate requires only (1) understanding that temperature changes with the integral of the net forcing (not directly with the instantaneous value of the forcing itself), and (2) recognizing that all life depends ultimately on photosynthesis, which requires CO2.

    The 542 million years of evolution on land required substantial atmospheric CO2. The integral of CO2 (or a function thereof) for 542 million years could not consistently result in today’s temperature. Documented in a peer-reviewed paper at Energy & Environment, vol. 26, no. 5, 841-845, and also at http://agwunveiled.blogspot.com, which also identifies the two factors that explain climate change (97% match since before 1900).

  19. Maybe something needs to be distinguished, because in Figure 2B of Karl et al., there is a very large 1945 adjustment that warms the early century, so it seems they accounted for this, and they attribute it to Smith and Reynolds (2006). In what way is the Thompson adjustment different, and why use that instead? The post is too long to decipher on this point.

    • Curious George

      Adjustments are my love.

      • OK, explain the difference between Smith and Reynolds used by Karl, and what Tisdale wants them to use. That would be useful, because Tisdale is far from clear on this.

    • Jim D;

      I thought the Thompson adjustment was based upon the post-WWII (not pre-1945) shift from US to British ship measurements.

      • The largest shift in Smith and Reynolds was around 1945. Is that a different one from Thompson?

  20. Stephen Hawking said –

    “Many scientists are like Einstein, in that they have a deep emotional attachment to determinism.”

    They also appear to have a deep emotional aversion to reality. Earlier on this thread Michel provided a graph of data “smoothed in various ways.” Now most scientists have to reject the reality of data, by smoothing, or other methods, because their brains cannot cope with reality. Smoothing allows the emergence of nice smooth patterns, which fit the preconception that “God does not play dice with the universe . . . ” Chaos is purely unacceptable.

    If an examination of raw data shows no pattern, maybe there is no pattern. Resorting to statistical methods to “show” that patterns exist might be mere self-delusion. A square wave can be analysed to show that it is the sum of an infinite number of odd harmonics added to the fundamental.

    Or it could be a switch opening and closing at regular intervals.
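
    (For reference, the series in question is the standard Fourier expansion of a unit square wave, containing only the odd harmonics:

    square(t) = (4/π) · [ sin(ωt) + sin(3ωt)/3 + sin(5ωt)/5 + … ].)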

    Without having knowledge of the mechanisms generating the data, it is silly to place our own assumptions on what the graph “should” look like, and bend the data to our will.

    If the data looks chaotic, maybe it is. Derive the fractal dimension, examine scale invariance, and maybe all the smoothed patterns fly off into the realm of fantasy, rather than fact.

    Cheers.

    • John Carpenter

      Based on what you think, the use of statistical process control wouldn’t work at all, yet it is a useful tool used by countless manufacturing companies to refine and improve their manufacturing processes. Design of experiments and the use of statistical analysis would be useless because chaotic-looking data may not yield any trends or patterns.

      Good thing you don’t get to decide.

    • John Carpenter,

      Statistical process control does work, and there is precisely nothing in what I said that claims otherwise.

      Chaotic data by definition does not yield useful trends or patterns. Warmists and many others deny this reality. “Tell me it ain’t so,” they cry, “Surely God does not play dice.”

      The use of statistical analysis to divine the future is as effective as a naive persistence assumption in general. Try me if you wish.

      Cheers.

  21. There appears to be a non-paywalled version of Thompson et al. (2008) here.

  22. Take home message
    “Cowtan and Way (2014): Kriging can also create a spatial pattern that bears no resemblance to known phenomena, like their El Niño that runs diagonally, from the northwest to the southeast across the eastern tropical Pacific.”

    Thanks Bob.

    I would like every denizen to paste this comment whenever they comment on Cowtan and Way for the next year [2016].

    That should get Nick Stokes’s attention, maybe even Zeke’s.
    There should be skeptics out there doing their own kriging to show what a sorrowful set of “perfect” modelling has been attempted by the Skeptical Science boys.
    Mind you, they did it to bust the pause as well, and first.
    Funny that they wasted their time when their mates found another method.
    33+ explanations for the pause, each accounting for half a degree rise of temp that did not occur.
    Why not call them to account, Judy?
    We should be 16.5 degrees warmer by this logic.
    [usual caveats, T to 4th power etc]

    • angech wrote:

      Take home message
      “Cowtan and Way (2014): Kriging can also create a spatial pattern that bears no resemblance to known phenomena, like their El Niño that runs diagonally, from the northwest to the southeast across the eastern tropical Pacific.”

      Thanks Bob.

      I would like every denizen to paste this comment whenever they comment on Cowtan and Way for the next year [2016].

      Cowtan and Way’s comments are regrettable. It appears that their knowledge of the subject at the time was superficial and narrow, and it misinforms. I hope they can explain things better now. Tisdale compounds matters by also not being adequately informed on the topic. Tisdale’s critique pertains to block kriging, used by many including C&W, but angech incorrectly generalizes. Not that it matters….

      Doesn’t anyone even try to understand the fundamentals of the toys they use and critique?

  23. As a non-scientific follower of CE, I read nearly every post and glean a great deal from most. I found this post very difficult to follow. I’m guilty of wishing Bob had an editor, though I understand that resources are not free.

    • Yes, I got one of his books on Oceans. It is tedious stuff; I don’t know what an editor could do.

      • I have great respect for Bob Tisdale and read everything he writes. He digs for every ounce of data he can find and subjects it to analysis. I don’t think there is a way to reduce the volume without leaving out data that may at some point matter. Tedious? Absolutely, and not for the faint of heart. I fear we are so used to “talking points” and fractious conclusions that we forget how complex the atmosphere/ocean science really is. It is natural to try to reduce information to as simple an understanding as possible, but there is danger in doing so. Keep chugging Bob. I learn from you!

      • Yes, after reading his ocean stuff I found I learned more than I thought, and am better able to understand posts regarding that. Thanks Bob!

  24. nobodysknowledge

    I am impressed with all the work laid down in this presentation. What catches my mind is the close relationship between changes in upper ocean and air temperatures. I would like to see some graphs with absolute temperatures rather than anomalies, to see the differences between air and ocean temperatures.

  25. Judicial Watch Sues for Documents Withheld From Congress in New Climate Data Scandal

    DECEMBER 22, 2015
    (Washington, DC) – Judicial Watch announced today that it filed a lawsuit on December 2, 2015, in the U.S. District Court for the District of Columbia seeking records of communications from National Oceanic and Atmospheric Administration (NOAA) officials regarding methodology for collecting and interpreting data used in climate models (Judicial Watch v. U.S. Department of Commerce (No 1:15-cv-02088)). The lawsuit sought the same documents unsuccessfully subpoenaed by a House committee. Less than a week after Judicial Watch served its lawsuit on NOAA, the agency finally turned over the targeted documents to Congress.

    http://www.judicialwatch.org/press-room/press-releases/judicial-watch-sues-for-documents-withheld-from-congress-in-new-climate-data-scandal/

  26. So Cowtan & Way has diagonal El Ninos?

    Does BEST also have this issue?

    • Geoff Sherrington

      Just as there are good singers and poor, there are those who are good at the Krige and those who are not so good. It is not the fault of the known Krige math. It is a problem of less than optimum application.
      One cannot establish proficiency without, for example, the multi-year process of estimating a pre-mining ore resource and then validating it against measured production.
      Doing less robust validations, like holding back data and then recalculating it, might be part of the path to confidence estimation, but it does not cut the full mustard.
      (My colleagues from last century spent months with people like Michel and Agterberg and other pioneers of geostatistics, including kriging. Some rubbed off.)

      • Geoff,

        Just as there are good singers and poor, there are those who are good at the Krige and those who are not so good. It is not the fault of the known Krige math. It is a problem of less than optimum application.

        Thank you, thank you, thank you.

      • “It is a problem of less than optimum application.”
        No, it’s a problem in the data. December 1941 was the month of Pearl Harbour. Data is not abundant. Look at Fig 9 in the post. Top left, there is just one track across the Nino area, in a NW direction. Next plot down, HADSST3 shows data along just that track. That is what they are infilling. So they interpolate the data to a few neighboring cells. They can’t recreate the proper El Nino. The data just isn’t there. They have done what infilling can do.

      • Nick Stokes,

        No, it’s a problem in the data. December 1941 was the month of Pearl Harbour. Data is not abundant.

        And so the method is being applied where it should not be, i.e., at distances greater than the range of a semi-variogram or a correlation length of a correlation function.

        Data? Application? Six in one hand and half a dozen in the other. A key element missing is the estimation or simulation of local errors.

      • “And so the method is being applied where it should not be”

        I see no indication of that. If you compare HADSST3 and C&W, there are a few colored cells adjacent to the data, then a uniform color, which is, I believe, zero anomaly. The purpose of C&W is to provide a better global average. Any average implicitly infills with something; zero anomaly where you have no better knowledge is reasonable. Nothing to do with kriging.

        But C&W was about using infilling for non-SST regions such as Arctic. For SST, oiSST will do much better. That’s not what C&W is about.

        Anyway my point here is simply that the diagonal is not an artefact of kriging. It’s right there in HADSST3, to which kriging is applied.

      • Geoff Sherrington

        Nick,
        ” … To which Kriging is applied.”
        Yes, inappropriately. Thus the poor vocalist.

      • Nick,

        We agree it is not an artifact of kriging. The ‘misapplication’ is that the big idea in kriging is to use a theoretical model for the pair correlation as a function of pair separation distance. When that distance is exceeded for all the points in a search neighborhood, 1) the local estimate approaches the neighborhood mean, and 2) really, what’s the point of applying the correlation model when the autocorrelation is essentially zero? Clearly the utility of neighborhood means as local estimates in these situations is not beyond argument.* When I write, “And so the method is being applied where it should not be”, I mean at locations where it (local estimation) should not be, and not that the use of kriging in the overall approach is inappropriate. Hope that clarifies the matter from my perspective. So, again, “Data? Application? Six in one hand and half a dozen in the other.”

        I always resisted the temptation to fill when I did not have any search neighborhood points in range. However my concerns were invariably local estimates.

        ——————–
        * For a search neighborhood where some points are in range and others are out… well, that can be a different matter.
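
        To make the range effect concrete, an illustrative 1-D ordinary-kriging sketch (invented covariance model and data, not C&W’s code): once the target lies beyond the correlation range of every neighbor, the weights flatten toward 1/n and the estimate collapses to the neighborhood mean.

```python
# 1-D ordinary kriging with an exponential covariance C(h) = sill*exp(-h/L).
import numpy as np

def ok_estimate(x_data, z_data, x0, corr_len=1.0, sill=1.0):
    """Ordinary-kriging estimate at x0 and the kriging weights."""
    n = len(x_data)
    h = np.abs(x_data[:, None] - x_data[None, :])
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = sill * np.exp(-h / corr_len)  # data-data covariances
    A[:n, n] = A[n, :n] = 1.0                 # unbiasedness constraint
    b = np.append(sill * np.exp(-np.abs(x_data - x0) / corr_len), 1.0)
    w = np.linalg.solve(A, b)[:n]             # kriging weights
    return w @ z_data, w

x = np.array([0.0, 10.0, 20.0])   # data widely spaced relative to corr_len=1
z = np.array([10.0, 12.0, 17.0])
for x0 in (0.5, 500.0):           # near a datum, then far beyond the range
    est, w = ok_estimate(x, z, x0)
    print(x0, round(float(est), 2), w.round(3))
# At x0=0.5 the nearest datum dominates (~11.2); at x0=500 the weights are
# ~[0.333, 0.333, 0.333] and the estimate is just the mean, 13.0.
```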

      • This graph of Bob Tisdale’s comes from files on their website, in which they simply gave the kriged grids for all years from 1880. There is no claim for fitness for any particular purpose. Their paper dealt with the years from 1997 onward, where SST coverage is anyway good. They say in the paper:
        “For short extrapolation ranges (e.g. one cell in latitude or 550 km), the difference between kriging and the hybrid method is small and at midlatitudes the unobserved regions in the SST data tend to be small and isolated; thus the choice of infilling method makes little difference.”

        They also point out that the only SST with poor coverage (in these years) is where there is also intermittent ice, which requires special treatment.

      • Nick,

        One last clarification, if you will:

        “…thus the choice of infilling method makes little difference.”

        between the result [of the comparison of the two methods with one another(?) ]

        Thanks.

      • Well, the paper is here. Sec 5.2 is on SST. Here is the full text:
        “SSTs are better modelled by ordinary kriging. When the calculation is performed with SSTs only, the range of the variogram is 915 km (compared with 830 km with land–ocean data) and the optimum cross-validation results are obtained for s = 0.0 (i.e. ordinary kriging) or s = 0.2; this is consistent with the poor correlation between SST and satellite temperatures (section 3). However, the kriging results are only marginally better than the null reconstruction.

        Given the difference in the optimum value of s, is it reasonable to use a single approach for land and ocean data? For short extrapolation ranges (e.g. one cell in latitude or 550 km), the difference between kriging and the hybrid method is small and at midlatitudes the unobserved regions in the SST data tend to be small and isolated; thus the choice of infilling method makes little difference.

        The only large contiguous unobserved regions in the SST data are in the Arctic and Southern Oceans. These regions are also characterized by seasonal or perennial sea ice, which must be considered separately.”

      • Nick,

        I have the paper and I read the quoted material. I was looking for your read of the quote. Why? Because it is not immediately apparent (to me anyway) that a comparison of two methods, each of which incorporates some form of infilling, speaks much to the quality of the infill estimates.

      • mw,
        Yes, I don’t think that part tells you anything about the quality. They aren’t going to the trouble of kriging to improve the SST component of the average. In their period, 1997-, any interpolation would do for the gaps in SST. They are kriging to deal with the non-SST areas of poor coverage, especially Arctic. My own view is that even there, any reasonable interpolation will get most of the benefits.

      • Geoff Sherrington

        Nick,
        ” …. any interpolation would do for the gaps in SST”
        What are your comments on ” …. any extrapolation”?
        We seem to be talking about different objectives.
        I am looking to the use of stats to provide systematics of the system, which includes the question of whether it is necessary, desirable or irrelevant to use statistics for this type of analysis. There might not be an answer, except one that is terribly bad and possibly misleading.
        What is the objective with your comments?

      • Geoff,
        C&W are dealing with global averages. To get that average, you have to estimate each point and add them up. That’s usually done by estimating grid cell values, then assuming constant within cell, though you can do fancier. There is an assumed value for each cell – if you don’t make one explicitly, averaging will assign to them the average value of the rest, but usually you know better. Interpolating means that you give a missing cell a value consistent with neighbors. Kriging is a way of doing that. But if there is plenty of neighbor information, any reasonable interpolation scheme will give much the same result.
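
        A small sketch of that point (invented grid values; cos-latitude weights for equal-angle cells): skipping an empty cell implicitly assigns it the average of the observed cells, while infilling from neighbors makes the assumption explicit.

```python
# Area-weighted average with a missing cell: skip vs. crude neighbor infill.
import numpy as np

lats = np.array([-45.0, -15.0, 15.0, 45.0])
anom = np.array([0.2, np.nan, 0.5, 0.8])  # one missing cell
w = np.cos(np.radians(lats))

obs = ~np.isnan(anom)
avg_skip = np.average(anom[obs], weights=w[obs])  # missing cell ignored
filled = anom.copy()
filled[~obs] = 0.5 * (anom[0] + anom[2])          # crude neighbor infill
avg_fill = np.average(filled, weights=w)
print(round(float(avg_skip), 3), round(float(avg_fill), 3))  # 0.5 vs ~0.457
```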

      • Nick

        The trouble is that often there isn’t ‘neighbour information’, or if there is, it’s very thin on the ground. The SST data can be extremely sparse and certainly shouldn’t be termed ‘global’ until well into the 20th century.

        tonyb

      • Tony,
        Cowtan and Way’s paper dealt with trends from 1997 onward. As they say, outside ice areas there was good coverage.

      • Nick

        It was YOU who mentioned 1880 on this sub-thread. The idea that we had accurate global SSTs for 20 years before Dickens died warrants just one response. Bah! Humbug!

        A merry Christmas to you Nick, with appreciation for your calm and reasoned voice.

        tonyb

      • “Cowtan and Way’s paper dealt with trends from 1997 onward.”

        Well, any ‘trend’ they can establish can only accurately contain periods shorter than 18 years. Anything over that and they have no power to resolve it (especially in the presence of noise). Don’t believe me? Consult Nyquist.

      • I’ll try that again.

        “Cowtan and Way’s paper dealt with trends from 1997 onward.”

        Well, any periods they can observe in the data are only going to be <18 years long; too short for climate, surely?

        Don't believe me

        https://climatedatablog.wordpress.com/2015/12/26/nyquist-doesnt-apply/

      • Tony,
        Thanks for the kind thoughts and wishes – reciprocated.

        As said, C&W published a paper on the period since 1997. They did so because of claims that you may have heard that there had been a pause in that time. And they showed that for HADCRUT, a big contributor to this “pause” was undercounting the Arctic warmth. One way they showed this was by kriging, and that (or other interpolation) was appropriate for that coverage gap.

        Having developed the kriging analysis, they ran it back to 1880 and put the result on the web. That extension wasn’t mentioned in their paper, and AFAIK no claims have been made for it. For my part, I think kriging does no harm for 1880 SST, but little benefit either.

        But the detail demands of averaging are not great. While spatially there are gaps, it (from 1880) is still a fair sample. With political polls, for example, quite good predictions can be made with a sample of a thousand out of millions, without any benefit of spatial structure that can be used for interpolation. I showed here (more graphics here) that just 60 land-based stations gave quite a good approximation to the global average. With SST there is a strong spatial structure that is used by OiSST. But HADSST3 does pretty well too.

      • Nick

        Polling is a poor analogy. The polls got our recent general election result wrong.

        Secondly, the SST average is taken by inexperienced people from numerous depths, the water temperature sampled by numerous uncalibrated thermometers not at ambient temperature, with the bucket water left out in the warm sun or cold night for varying times, to cool or cook.

        Would you take an average of land thermometers to be correct if they were all from different altitudes, some in the sun, some in the shade, mostly taken in the summer and not at night, read by untrained observers, and with the vast majority of the world not being sampled anyway?

        tonyb

      • Geoff Sherrington

        Nick,
        To end this, and to wish you a nice 2016, I’ll simply say that if mining-industry geostatistical standards were applied to getting an average SST or similar variant, one would do an initial evaluation whose outcome, I suggest here, would quite promptly be “Do not even think about doing it.”
        My argument is that the progress of this global-warming monster has caused harm in the wider world by trying to add respectability to dubious data and to dubious selection criteria for treating it.
        No average is better than a bad average for this SST material. There is no compelling reason even to compute such an average; it is a global fiction.
        Geoff.

      • Geoff

        I have a lot of time for Nick, but like too many in this science he uses data that is little better than fiction.

        Global SSTs to 1850 are a waste of time. We would be far better off agreeing the times when well-travelled trade routes were sampled well enough to yield worthwhile information, and when (or if) that could be used as global data.

        Using interpolation from already highly vague adjacent grid cells provides little of scientific value.

        tonyb

      • Nick

        “I showed here (more graphics here) that just 60 land-based stations gave quite a good approximation to the global average.”

        Now, about the estimation errors, global and local? Well… fuhgedaboudit! Curiously, the word ‘error’ occurs only once on the first linked page (in the last comment, in a paper title) and not at all on the second linked page. Add to this the fact that comparison with other estimates is what defines ‘good’, and I remain unconvinced of ‘goodness’.

        Also, based on what I have seen with semi-variograms for the US and for physiographic areas within the US… no way! Respectfully, I think you are fooling yourself.

        Over time, the more I have thought about the problem of estimating a global average temperature, the more issues there seem to be.* It is folly: folly at a conceptual level deeper than selecting an interpolation or averaging method. I think Geoff has an arguable position that attempting to calculate global estimates has done more harm than good.
        ———–
        * In the past I have argued that it is valid to calculate global temperature estimates as a metric. That has not changed for me. What has changed over time is my perception of the issues to be overcome in making usefully rigorous estimates: representativeness, coverage, error analysis, time dependence of correlation. The way is yet long.

      • “and not at all on the second linked page”
        An ambition is mentioned on that page. The process of station reduction can lead to quite different subsets. The best measure of the error of the integration is the scatter of results across the subsets. I did promise that as my next project – thanks for the reminder.
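
        A sketch of that error measure, on the same kind of synthetic field (toy numbers only): draw many different 60-cell subsets and take the scatter of their means as the integration error.

          import numpy as np

          rng = np.random.default_rng(2)
          nlat, nlon = 36, 72
          lats = np.linspace(-57.5, 57.5, nlat)
          w = np.cos(np.radians(lats))[:, None] * np.ones((1, nlon))
          lat = np.radians(lats)[:, None]
          lon = np.linspace(0.0, 2.0 * np.pi, nlon, endpoint=False)[None, :]
          field = (np.sin(lat) + 0.5 * np.cos(2.0 * lon + lat)
                   + 0.2 * rng.standard_normal((nlat, nlon)))

          true_mean = np.sum(field * w) / np.sum(w)

          # Repeat the subsetting; the spread of the subset means estimates
          # the error of the spatial integration.
          means = []
          for _ in range(500):
              idx = rng.choice(nlat * nlon, size=60, replace=False)
              i, j = np.unravel_index(idx, (nlat, nlon))
              means.append(np.sum(field[i, j] * w[i, j]) / np.sum(w[i, j]))

          print(true_mean, np.mean(means), np.std(means))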

      • Nick,

        Well ambition I can understand. There is a lot on the plate. Best wishes for the New Year. I shall stay tuned.

        mw

        “It was YOU who mentioned 1880 on this sub-thread. The idea that we had accurate global SSTs for 20 years before Dickens died warrants just one response: Bah! Humbug!”

        Tony.

        The SST measures are just as good as, if not better than, CET.

      • mw,
        “Best wishes for the New Year.”
        Thanks, and to you. I shall make a resolution.

      • Mosh

        You must be VERY impressed with global SSTs to 1850 then, as you have previously said about CET:

        “At the extreme you can take one reliable station ( CET) and estimate the whole globe. …’

        http://judithcurry.com/2013/08/13/impact-of-climate-population-and-co2-on-water-resources/#comment-364794

        AND also

        Steven Mosher | October 4, 2013 at 12:13 am |
        “Odd the way your crappy BEST moving average is so far outside the data in 1780. ”

        Another idiot comment from greg.

        The BEST Data at 1780 is the average of the entire field. the small patch of england tony refers to as well as most of europe and some of north america.

        Tony is comparing CET ( a few square miles) to a much larger area.

        That location (CET) along with a few others has reasonable correlation with the entire globe, although with CET ( and others) you will find years in which it is at odds with the rest of the world.”

        A happy new year Mosh.

        Tonyb

        “The SST measures are just as good as, if not better than, CET.”

        Based on what data? For the whole globe? Seriously?

      • ““Best wishes for the New Year.”
        Thanks, and to you. I shall make a resolution.”

        And a good New Year to you also.
        I too will make a resolution. :-)

  27. In the 1970s, there was little doubt among professional climatologists that the coolest temperatures of the century were being experienced throughout the globe. In fact, some even raised the alarm of slipping into the next ice age.

    Through various devices, the agenda-driven present-day “climate science” establishment has succeeded in virtually eliminating that deep mid-century cooling from the GSAT record, bringing it into closer accord not with any genuine empirical benchmark, but with the simplistic a priori assumption of white-noise-plus-trend temperature variation. Just as the descent from the Medieval Warm Period into the “Little” Ice Age was being minimized decades ago, the massaging of the SST data discussed here is but another manifestation of the tendentious eradication of natural temperature swings in order to promote the notion that there is a dangerous secular warming trend at work.

  28. Late comment on a mostly dead thread. Mostly for the CE record.

    Like everyone else, I do not always get it right. I missed on Eemian SLR in the original guest post By Land or by Sea, on O’Leary’s academic misconduct concerning rapid SLR. Denizens rapidly fixed that, and Judith graciously put out a revision. I forgot that I was relying here on McKitrick’s analysis of Kennedy, rather than directly on the two archived Kennedy papers themselves. Been busy with due diligence selling one of my companies, and all that. But for NS above to dismiss the McKitrick issue of SE vs. SD is either disingenuous or reflective of a lack of statistical chops.

    To Mosher (12/22 at 00:41), who asserted that I wasted energy on my three ebooks (which took collectively 6 years to complete), energy better spent in his opinion doing ‘science’ (to prove/disprove BEST?), two observations.
    1. The ebooks were NOT aimed at scientists. They were aimed at making a contribution to the general discourse in ways voting laymen could understand. Because that is what matters for policy. Judith herself was gracious enough to say so in her foreword to Blowing Smoke. Don’t you just hate it when some of us venture from the scientific to the political arena? In your name (plus a bit more), I just gifted an Amazon Kindle version of Blowing Smoke to Senator Cruz. Seemed the right thing to do under the Data/Dogma circumstances. Markey did not deserve same. I am still pondering Admiral Titley. Such a waste of my $8 unless he reads it.
    2. I have done real science. I am co- or sole inventor on 13 issued US patents in four different areas: hard experimental science like RFID (consequences of Maxwell’s equations), patient monitoring (FDA), and energy storage (consequences of the Helmholtz double layer). My problem is, I find most climate science (not Judith’s) rather ‘squishy’, rather more akin to Feynman’s discussion of rat mazes in his famous Cargo Cult commencement address. When all the climate science ‘projections’ are being falsified by Ma Nature, it is hard not to find a big squishy uncertainty monster at the core of what you so firmly believe and espouse is long since settled.

    BTW, Mosher, how many issued patents do you hold as indicia of doing hard science? Just curious, since proceeds from a few of mine helped enable the 6 long years spent researching and writing my three (per you) ‘waste of time’ ebooks.

    • Don’t you just hate it when some of us venture from the scientific to the political arena?

      I think Dr. Curry calls them “advocates” and unethical, but I am not sure what that makes you. I will keep your admission in mind when I read your posts, though.

    • Rud,

      For what it’s worth, “Blowing Smoke” is a regular go to for me.

      Nobody gets it right all the time. Putting it out there takes guts.

      The warmunists are finding fewer and fewer places to hide. They are running scared.

      Keep up the good work and your incredible energy and dedication.

      • You could not be more wrong. Nobody is running scared. Confidence soaring. Nobody is even remotely afraid of Lamar Smith and his baseless investigation.

      • Rud, I don’t always appreciate what sometimes seems like self promotion of your books, but I always appreciate the time you take to re-express your quality thoughts here for free.

      • You could not be more wrong

        Global warmers can’t prove:
        1. The sea level is rising significantly based on rotational anomalies.
        2. That more warming is bad.
        3. That more CO2 is bad
        4. That warming is going to be significant.
        5. That the TSR is even as high as the low end of IPCC estimates based on empirical measurement.
        6. That the average annual CO2 increase will ever be much over 2.2 PPM/Y

        You can dance, dance, dance that we are doomed all you like; you haven’t made any sort of case. By 2020 we need to fold the “Climate Change” tent and go back to funding real science.

      • richardswarthout

        JCH

        “You could not be more wrong. Nobody is running scared. Confidence soaring.”

        Sign of yet another weakness of our human condition.

        Merry Christmas

        Richard

      • “Nobody is running scared. Confidence soaring.”

        Pride before a fall?

      • “Nobody is even remotely afraid of Lamar Smith and his baseless investigation.”

        It has a basis in whistleblowers’ disclosures to him.

      • Deadenders in their last trench… it does not end well.

      • I had to look at the sig to see which side had said this.

    • Rud:

      Late comment on a mostly dead thread.

      If this thread could quote Monty Python, I think it would say it’s not dead yet.

      https://youtu.be/UPatfgoNBRo

    • Like I said, Rud: with all that brain power, you wasted your time on ebooks that no policy maker took notice of.

      As smart as you are, as gifted, as talented, and as accomplished as you are, you did us all a disservice by wasting those talents on ebooks.

      You should have joined Nic Lewis and Judith, or McIntyre, or Watts. You should have waged war on the climate science battlefield.

      I’m serious, Rud. I shake my head and think: if I had Rud’s skills, and connections, and money, and gift for communication, I would never waste them on doing ebooks for “non-deciders”.

  29. I have mentioned the hiatus in today’s email to over 100 Australian politicians, as copied below …

    To Hons Greg Hunt, Malcolm Turnbull and other politicians

    THIS IS A VERY SERIOUS MATTER

    I have proved beyond reasonable doubt, based on the laws of physics (in which I am qualified and experienced over 50 years) that the radiative forcing greenhouse hypothesis is false. Carbon dioxide does not and cannot warm Earth’s surface.

    What I say is backed up by the work of a brilliant professor of applied mathematics in his paper Mathematical Physics of BlackBody Radiation* as well as by data from all planets, a study of temperature data for Earth and experiments with relevant devices as well. More importantly, it is based on correct physics, whereas what climatologists (unqualified in physics) teach themselves and their pal-reviewers is fictitious, fiddled physics.

    I am happy to discuss this with you, and at any enquiry you initiate, as you should do. Your government has failed to exercise due diligence in checking the false science it promulgates. You have not checked with physicists who understand thermodynamics, as I do. I will prove anyone else wrong if they try to claim the current “explanation” is right. I have done so with many hundreds on climate blogs and never been proven wrong. I’ve even offered $10,000 if anyone can do so.

    Note also that, whilst carbon dioxide levels continue to rise, there has been no warming at all since the maximum in 1998. It has now been proven that the Americans are tampering with weather station records and ignoring the artificial “warming” caused by urban sprawl, air conditioners near weather stations, etc., thus creating apparent warming that is not shown in the satellite measurements for the lower troposphere. In fact, as the past shows, there will be a period of nearly 500 years of long-term cooling starting before the year 2100, and temperatures will not rise more than about half a degree before that cooling starts.

    If you fail to look into what I have explained at http://climate-change-theory.com and in my linked papers, my videos and my book “Why It’s Not Carbon Dioxide After All” then I will start arranging a huge class action by major companies against the government aimed at forcing changes in any law or governmental requirement that is in place purely because of the false assumption that carbon dioxide warms, when in fact it can only cool by a minuscule amount.

  30. My feeling is, given the current state of public education, Jesus would be a skeptic.

    • For Jesus, you need to be a believer, not a sceptic. For science you need to be impartial.

    • Wasn’t the comment “I think any good scientist ought to be sceptical”?

      • Freeman Dyson just does not believe climatologists “understand the climate,” and says, ”their computer models are full of fudge factors.”

      • Computer models for something as complex as Earth’s climate must, of necessity, contain “fudge factors” for the known unknowns and unknown unknowns, alongside the variables the modelers think are known. The modelers’ effort is to replace each of these fudge factors with knowns as they become available.

        Until a known is available for a given fudge factor, that factor should be adjusted so that the model agrees with observations. Typically one would vary the fudge factors that give the greatest response; this, in turn, provides clues as to where to spend effort in making sure those factors are as well known as possible. JCH clamors about the PDO, but I suspect, and hear, that clouds may be the major unknown. Until clouds are properly addressed and included, varying the other major variables might be secondary.

        Having done some simple modeling myself, including a fudge factor until my model matched the data and then looking to see what that factor had to be, I can appreciate the tremendous effort that goes into the main coding and sub-coding required for climate modeling. I would expect, however, more flexibility in making those models match reality. Models should be directing the research needed to nail down these factors, as well as being flexible themselves.
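
        In miniature, that tuning loop looks something like the sketch below: a toy one-parameter model (nothing like a real GCM) whose single fudge factor k is swept until the model best matches a synthetic observation series.

          import numpy as np

          rng = np.random.default_rng(3)
          t = np.arange(50.0)                                  # toy time axis, years
          obs = (0.02 * t + 0.1 * np.sin(2.0 * np.pi * t / 11.0)
                 + 0.05 * rng.standard_normal(t.size))         # made-up 'observations'

          def model(k):
              # k plays the role of a single poorly known parameter (a 'fudge factor')
              return k * 0.02 * t

          # Adjust the fudge factor until the model best matches the observations;
          # how sharply the misfit responds to k hints where effort pays off.
          ks = np.linspace(0.0, 2.0, 201)
          rmse = np.array([np.sqrt(np.mean((model(k) - obs) ** 2)) for k in ks])
          print(ks[rmse.argmin()], rmse.min())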

      • Joel: I think that there are some things that we can quantify now.

        How big is the uncertainty when the point-sampled data (thermometers) are extrapolated/interpolated to approximate the underlying 3D temperature field?

        The spacing and data length are horrible. And Nyquist applies over it all, in all dimensions.

      • RichardLH,
        I am sure there are things that can be handled well now. The question is whether there are variables/factors with larger impacts that are not being varied, such that these “well handled” ones appear to be the major controlling factors when, in fact, they are not.

      • Ah. The problem with multi-input clockwork engines: it’s a bit difficult to tell which lever to move to speed things up.

    • I think I understand Freeman Dyson’s position. I thought that his original comment was broader than you implied. Still sceptical, just not necessarily a sceptic.

      • True, true. I’m sure you are right insofar as Dyson’s position on skepticism is not limited simply to AGW theory. “It was similar in the Soviet Union,” Dyson observed. “Who could doubt Marxist economics was the future?”

  31. Does anyone understand why we need to fund two government agencies (NOAA and GISS) to continually turn the crank on surface temperature measurements? This seems like it should be NOAA’s domain, and GISS (the Goddard Institute for SPACE Studies, a branch of NASA) should be focused on satellite data.

    Why do we need two government-funded interpretations of the same low-quality measurements? Why does GISS/NASA apparently disown the satellite data?

    • Mark

      I would be interested in reading a technical article explaining just why we should take no notice of satellites that take global temperature readings, whilst at the same time explaining just why the satellite readings for sea level and sea ice are supposed to be so accurate.

      tonyb

      • Tony,

        I spent some time on the GISS/NASA website and could not find anything relating to satellite temperatures. I found tons of stuff on climate and modeling, though.

        I ended up in the FAQ section of the website, where again there was nothing relating to satellite temperatures. There was a contact for asking additional questions (“Please email any questions to Dr. Reto Ruedy.”)

        So I submitted this question: “Does GISS use the atmospheric temperature data derived from satellite measurements to verify climate models? If so, how is this done, and if not, why not?” If I get a response, I’ll post it.

      • Mark

        Excellent. A reply would be very interesting to read

        tonyb

      • I received an almost immediate response from Dr. Ruedy saying that they use the satellite temperature data and have been since it became available. He provided links to 2 references dated 1995.
        http://pubs.giss.nasa.gov/abs/sh01000b.html
        http://pubs.giss.nasa.gov/docs/1995/1995_Hansen_etal_3.pdf

        The punch line of the Hansen paper is:

        “We conclude that surface and satellite temperature change data are really not at odds. Rather they have the potential to provide, in combination, a powerful tool for analyzing global change. Temperature change need not be the same at all altitudes, and deviations from uniformity contain invaluable information on the mechanisms of change. Exploitation of this potential requires good data for the profile of temperature change, as can be provided by radiosondes, as well as the global coverage attainable from satellites, and absolutely calibrated surface measurements. Interpretation of measured temperature changes will require monitoring of global climate forcings and feedbacks (Hansen et al., 1993b).
        Although we have pointed out uncertainties regarding MSU data at the level of several hundredths of a degree Celsius per decade, this is not meant as a negative comment regarding the value of the MSU data. On the contrary, our interest is sparked by the great potential of these data, and it is hoped that highlighting these matters will only make the data all the more valuable. Spencer and Christy are to be commended for their hard work in preparing the MSU data set and for the care and completeness with which they have assessed and presented statistics for several possible sources of error. Undoubtedly the MSU data will be an important component of future global change studies.”

        I have asked for something more current than 1995.

        https://notalotofpeopleknowthat.files.wordpress.com/2015/09/trend4.png
        The NALOPKT site has an interesting post.

        Apparently GISS and UAH match trends up to 2005, then they part company.

        One would think someone would be interested in figuring out why.

      • “…the author could explain just why the satellite readings for sea level and sea ice are supposed to be so accurate.”

        tonyb, it now seems that some satellite readings for sea ice are more equal than others.

        Many have been aware for some time of a certain situation in the southern hemisphere (the world’s leading hemisphere). Seems there’s just too much ice down here, according to satellite data. That’s the very same sat data which has brought us the great Arctic melt, which is the greatest most record-breakingest – need more superlatives! – melt evah since 1979.

        Talking about the melty bits in Western Antarctica (which is volcanic, but sssh) has been a good way to distract public attention, but Antarctic sea ice levels have been getting rather too large and inescapable for academic comfort.

        So “scientists”, according to Science last year, decided to “dig deeper”. (The digging has not required removal to southern climes with shovels, rest assured. That sort of physical thing is so last Enlightenment.)

        “Despite global warming, the fringe of sea ice around Antarctica is expanding slightly, in contrast to the marked decline of sea ice in the Arctic. Scientists have blamed this curious fact on various forces, from shifting winds to smaller waves, but a new study suggests a more mundane culprit: an error in the way the satellite data have been processed. The miscalculation, the authors say, might be making the sea ice increase appear larger than it is.” – Science. (Note how Antarctic sea ice is a “fringe”.)

        I always knew that there would be a “new study” to this effect. Don’t know how I knew. I just knew. Can’t you just sense these things coming? I can.

        Of course, the scrambled article was mainly an opportunity for Science to publish a big stock image of some melty-looking ice while sowing vague doubt verbally. So it’s not about false information and speculation. It’s about subliminal messaging. Advertising, if you like. Show business, as it were.

        Think Science, think Ethel Merman.

      • Tony,

        So far no further response from Dr. Ruedy of GISS. However since it’s that time of year when all goes quiet, I’ll give him the benefit of the doubt. I’m on it though and will not let it go until I get an answer.

      • Geoff Sherrington

        Mosomoso.
        “Western Antarctica (which is volcanic, but sssh)”
        I doubt that the volcanic factor deserves a mention, because of options like –
        1. There has been no change in volcanic heat output in the last 1,000 years, so uniformitarianism can be assumed.
        2. There has been change but the magnitude cannot be known.
        3. We can infer change because there is more ice melting now than before, except we do not know before.
        4. The forces typically associated with volcanism of the slow moving type are low in comparison with other heating sources.
        Which is all quite academic because we don’t have the data to tell which option is best or worst.

    • Mark

      Good question. Mission creep, which affects most agencies. It’s the same reason there are several dozen jobs programs across many departments.

      • cerescokid,

        Yeah, I get it. The end of the Cold War, the curtailment of manned space flight and overall budget sensitivity have driven NASA, NOAA and most of the National Labs to chase the easy climate-alarmism money.

        This is the sort of waste that Ted Cruz should be chasing in his Senate subcommittee.

    • Does anyone understand why we need to fund 2 government agencies (NOAA and GISS) to continually turn the crank on surface temperature measurements?

      We don’t. And NOAA is supposed to be reporting temperature, not creating it.

      However:
      http://www.giss.nasa.gov/projects/
      Surface air temperature change is a primary measure of global climate change. The GISTEMP project started in the late 1970s to provide an estimate of the changing global surface air temperature which could be compared with the estimates obtained from climate models simulating the effect of changes in atmospheric carbon dioxide, volcanic aerosols, and solar irradiance. The continuing analysis updates global temperature change from the late 1800s to the present.

      So NASA is deliberately creating a surface temperature set to make their lousy climate models look better.

      The other temperature sets, which in theory have more honest objectives, should not resemble GISS. GISS should be considered invalid for determining the “warmest year”, since honest rendering of historic temperatures isn’t its objective.

  32. Somebody used this the other day with respect to balloons and RSS/UAH6.X: ground truth.

    This might be called water truth.

  33. Rather than limited to El Niño/La Niña & PDO oscillations, this could be a true observation for all climate modeling, the 97% consensus and AGW theory in general:

    “The idea… is based on very well-established physics, so it’s appealing to think that nature works this way. But our analysis shows that it’s not that simple,” ~Julien Emile-Geay (see Twitter link)

  34. It takes little more than an understanding of photosynthesis to recognize the compelling evidence that CO2 is absolutely necessary for all life on the planet, and always has been.

    If CO2 made the planet warmer (i.e., was a forcing on planet temperature), it would have been doing so cumulatively for at least 542 million years (the computational mandate is that temperature change occurs as a transient in response to the time-integral of net forcing, or the time-integral of a function thereof, not directly with the instantaneous value of the net forcing itself). But average global temperature has gone up and down over the eon. The only way this could consistently result is if CO2 has no effect on temperature and temperature change is caused by something else. http://agwunveiled.blogspot.com
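
    Written out, the integral rule above amounts to a few lines (a toy sketch of the rule itself, with made-up constants; this is the commenter’s claimed rule, not standard forcing-response physics, in which temperature relaxes toward an equilibrium):

      import numpy as np

      # The claimed rule: temperature change follows the running time-integral
      # of net forcing, not its instantaneous value.
      forcing = np.full(100, 0.5)           # constant net forcing (toy units)
      response = 0.01 * np.cumsum(forcing)  # integral rule: monotonic rise

      # Under any persistent positive forcing this integral only grows, which
      # is the basis of the argument that an up-and-down temperature record
      # implies zero net CO2 forcing.
      print(response[0], response[-1])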

  35. Here in El Nino Central on the mid-coast of NSW, El Nino has failed to perform. I hear from our precious MSM that he’s doing all sorts of mischief elsewhere, but here it’s cool and damp. (Maybe Nino’s gone drag and is dressing up as sis Nina. Who’s not trans these days?)

    Temps are hanging in the low 20sC… but don’t worry! The globe is not cooling. Even though today will enter the local record forever as cool, it’s only because of CLOUD. Take away the CLOUD and you get hot. Bring on the CLOUD and you get cool. Works every time. Because nobody can distort temperatures like CLOUD.

    Merry Christmas to all.

    – ATTC

    • mosomoso

      I am spiritually gifting you a membership of the Cloud Appreciation Society.

      https://cloudappreciationsociety.org/

      Perhaps you can join and write off the cost as research using the constant stream of funds you get from big Bamboo?

      A happy Christmas to you; it’s only hours away for you.

      Me? I still have to negotiate the Christingle service with our granddaughter and then the midnight carol service. All with a hacking cough. That should delight the congregation.

      tonyb

      • I really agree with many of the aims of that society. Don’t like cloudless conditions. Those years when we get strong westerlies starting in late winter and you can’t buy a cloud till late September are the pits. That’s why Spring can be so dangerous in these parts: dry heat and inland winds after frost. The dead bladey grass and bracken are like napalm waiting to go off. It was supposed to happen this year for Super-Nino…but squelch!

        I remember spending late Spring in Siena and being horrified by the lack of afternoon cloud let alone thunder after baking hot days. The Tuscan sky just stayed the same all week. No wind, either. Wouldn’t happen here between the Great Divide and the Tasman Sea! (Well, not often.)

        I’m guessing cloud has cooled today by about ten degrees centigrade. Adjust accordingly! If you call it eleven or nine, nobody will notice once the dodgy max reading goes into the pot with all the other dodgy max readings.

        Merry Christmas, tonyb!

  36. My final comment for the year.

    Alex Pope thinks Holocene climate is bounded by changes in the ice and evaporation cycle, and I find myself agreeing that this would explain some of the negative feedbacks keeping climate within remarkably stable bounds.

    Mosomoso finds that clouds also have a significant effect on local weather, and again I agree.

    My own observations of local and regional temperatures in Australia bring me to conclude that the prevailing winds (in response to the movement of high- and low-pressure cells, troughs and ridges) have a far more critical influence on daily maxima and minima. If the wind blows from inland, temperatures rise; if the wind blows from the sea, temperatures fall, regardless of cloud cover. Heatwaves result from repeating patterns of hot winds blowing from inland combined with weak sea breezes, and these patterns are chaotic in origin and impossible to predict.

    Happy Christmas to Judith and all who visit her pages and may 2016 bring better economic, social and weather conditions for people from all areas of the world.

    • Peter

      In my last article last month I said this:

      “A future area of my research relates to updating work by Hubert Lamb, who created an interesting graphic of the number of days in a year that the UK (and generally Europe) has south-westerly winds. The data begins in 1340 and ends in around 1978. I have asked the Met Office for the source of the data whereby Lamb’s graphic can be updated on a like-for-like basis, as the wind direction in recent decades may have a bearing on the current temperature decline.”

      The Met Office have been unable to directly supply the information I want but have been helpful in pointing me to other potential sources of information.

      Winds and jet streams are of crucial importance in determining our daily weather. The sun and clouds are closely related to our weather also. They all seem much more logical candidates than CO2 for the changing weather/climate we can observe over the centuries.

      tonyb.

    • Correcting the record Peter M, 2nd last comment fer
      the year. ) Plus one fer Xmas and New Year, Peter.

    • “Winds and jet streams are of crucial importance in determining our daily weather. The sun and clouds are closely related to our weather also. They all seem much more logical candidates than CO2 for the changing weather/climate we can observe over the centuries.”

      Indeed, as is being evidenced at present. A few km of movement of the jet stream east or south, and the heavy rain we have would in all probability be very thick snow.

      I am not sure how CO2 affects that.

  37. Bob, there is a link to the Thompson paper in my article here on Climate Etc. a couple of years ago about the wartime adjustments to HadSST:
    http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/
    Thompson et al 2008
    http://www.atmos.colostate.edu/ao/ThompsonPapers/Thompson_etal_Nature2008.pdf

    You may like to add it to your refs.

  38. I am very sceptical of the accuracy of all these adjustments. Sure, there are data sampling problems, but I’m not sure that the error bars, before or after, are realistic.

    I do not accept that you can chop 0.5 degrees out of the data as a speculative adjustment and then claim to be +/- 0.1 degree accurate on what is left.

    However, one thing that does give me some confidence that Hadley are possibly not too far wrong, at least in the N. Atlantic, is a comparison to accumulated cyclone energy (ACE).
    https://climategrog.files.wordpress.com/2013/04/amo_ace-no_detrend-1925adj.png?w=640
    https://climategrog.wordpress.com/?attachment_id=215

    I have never had the time to find out what is behind the 0.1 deg hack I needed to do to align the two around 1920. However, the coverage of samples shown in this post may be a clue.

  39. There were also quotes from Hadley saying that pre-1886 data are probably too warm. That is not supported by the comparison to ACE. Here are the adjustments made by the Met Office (Hadley):
    https://climategrog.files.wordpress.com/2012/03/plots_hadsst-icoads.png

  40. Well, hello progress! Growth rarely comes alone. Clearly, when plants start growing faster, they are going to consume nutrients faster. Growth in any one location will diminish over successive years; that’s why crop rotation has been practised for millennia. A richer atmosphere seems a good reason, though, to feed them whatever else they need to take advantage of their carbon bounty.

  41. Pingback: Weekly Climate and Energy News Roundup #211 | Watts Up With That?

  42. Pingback: NOAA’s Climate Change Lies | Tai-Chi Policy

  43. Reblogged this on Climate Collections.

  44. Pingback: Summary of the Oddities in NOAA’s New “Pause-Buster” Sea Surface Temperature Product | Bob Tisdale – Climate Observations

  45. Pingback: The Oddities in NOAA’s New “Pause-Buster” Sea Surface Temperature Product – An Overview of Past Posts | Watts Up With That?