Understanding adjustments to temperature data

by Zeke Hausfather

There has been much discussion of temperature adjustment of late in both climate blogs and in the media, but not much background on what specific adjustments are being made, why they are being made, and what effects they have. Adjustments have a big effect on temperature trends in the U.S., and a modest effect on global land trends. The large contribution of adjustments to century-scale U.S. temperature trends lends itself to an unfortunate narrative that “government bureaucrats are cooking the books”.

Figure 1. Global (left) and CONUS (right) homogenized and raw data from NCDC and Berkeley Earth. Series are aligned relative to 1990-2013 means. NCDC data is from GHCN v3.2 and USHCN v2.5 respectively.

Having worked with many of the scientists in question, I can say with certainty that there is no grand conspiracy to artificially warm the earth; rather, scientists are doing their best to interpret large datasets with numerous biases such as station moves, instrument changes, time of observation changes, urban heat island biases, and other so-called inhomogeneities that have occurred over the last 150 years. Their methods may not be perfect, and are certainly not immune from critical analysis, but that critical analysis should start out from a position of assuming good faith and with an understanding of what exactly has been done.

This will be the first post in a three-part series examining adjustments in temperature data, with a specific focus on U.S. land temperatures. This post will provide an overview of the adjustments done and their relative effect on temperatures. The second post will examine Time of Observation adjustments in more detail, using hourly data from the pristine U.S. Climate Reference Network (USCRN) to empirically demonstrate the potential bias introduced by different observation times. The final post will examine automated pairwise homogenization approaches in more detail, looking at how breakpoints are detected and how algorithms can be tested to ensure that they are equally effective at removing both cooling and warming biases.

Why Adjust Temperatures?

There are a number of folks who question the need for adjustments at all. Why not just use raw temperatures, they ask, since those are pure and unadulterated? The problem is that (with the exception of the newly created Climate Reference Network), there is really no such thing as a pure and unadulterated temperature record. Temperature stations in the U.S. are mainly operated by volunteer observers (the Cooperative Observer Network, or co-op stations for short). Many of these stations were set up in the late 1800s and early 1900s as part of a national network of weather stations, focused on measuring day-to-day changes in the weather rather than decadal-scale changes in the climate.

Figure 2. Documented time of observation changes and instrument changes by year in the co-op and USHCN station networks. Figure courtesy of Claude Williams (NCDC).

Nearly every single station in the network has been moved at least once over the last century, with many having 3 or more distinct moves. Most of the stations have changed from using liquid in glass thermometers (LiG) in Stevenson screens to electronic Minimum Maximum Temperature Systems (MMTS) or Automated Surface Observing Systems (ASOS). Observation times have shifted from afternoon to morning at most stations since 1960, as part of an effort by the National Weather Service to improve precipitation measurements.

All of these changes introduce (non-random) systematic biases into the network. For example, MMTS sensors tend to read maximum daily temperatures about 0.5 C colder than LiG thermometers at the same location. There is a very obvious cooling bias in the record associated with the conversion of most co-op stations from LiG to MMTS in the 1980s, and even folks deeply skeptical of the temperature network like Anthony Watts and his coauthors add an explicit correction for this in their paper.

Figure 3. Time of Observation over time in the USHCN network. Figure from Menne et al 2009.

Time of observation changes from afternoon to morning can also add a cooling bias of up to 0.5 C, affecting maximum and minimum temperatures similarly. The reasons why this occurs, how it is tested, and how we know that documented times of observation are correct (or not) will be discussed in detail in the subsequent post. There are also significant positive minimum temperature biases from urban heat islands that add a trend bias of up to 0.2 C nationwide to raw readings.

Because the biases are large and systematic, ignoring them is not a viable option. Since some corrections to the data are necessary, there is a need for systems that make these corrections in a way that does not introduce more bias than it removes.

What are the Adjustments?

Two independent groups, the National Climatic Data Center (NCDC) and Berkeley Earth (hereafter Berkeley), start with raw data and use differing methods to create a best estimate of global (and U.S.) temperatures. Other groups like the NASA Goddard Institute for Space Studies (GISS) and the Climatic Research Unit at the University of East Anglia (CRU) take data from NCDC and other sources and perform additional adjustments, like GISS’s nightlight-based urban heat island corrections.

Figure 4. Diagram of processing steps for creating USHCN adjusted temperatures. Note that TAvg temperatures are calculated based on separately adjusted TMin and TMax temperatures.

This post will focus primarily on NCDC’s adjustments, as they are the official government agency tasked with determining U.S. (and global) temperatures. The figure below shows the four major adjustments (including quality control) performed on USHCN data, and their respective effect on the resulting mean temperatures.

Figure 5. Impact of adjustments on U.S. temperatures relative to the 1900-1910 period, following the approach used in creating the old USHCN v1 adjustment plot.

NCDC starts by collecting the raw data from the co-op network stations. These records are submitted electronically for most stations, though some continue to send paper forms that must be manually keyed into the system. A subset of the 7,000 or so co-op stations are part of the U.S. Historical Climatology Network (USHCN), and are used to create the official estimate of U.S. temperatures.

Quality Control

Once the data has been collected, it is subjected to an automated quality control (QC) procedure that looks for anomalies like repeated entries of the same temperature value, minimum temperature values that exceed the reported maximum temperature of that day (or vice-versa), values that far exceed (by five sigma or more) expected values for the station, and similar checks. A full list of QC checks is available here.

Daily minimum or maximum temperatures that fail quality control are flagged, and a raw daily file is maintained that includes original values with their associated QC flags. Monthly minimum, maximum, and mean temperatures are calculated using daily temperature data that passes QC checks. A monthly mean is calculated only when nine or fewer daily values are missing or flagged. A raw USHCN monthly data file is available that includes both monthly values and associated QC flags.
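
To make the mechanics concrete, here is a minimal sketch in Python of the kinds of checks and the monthly aggregation rule described above. The function names and specific thresholds are illustrative assumptions, not NCDC's actual QC code:

```python
import numpy as np

def qc_flag_days(tmin, tmax, clim_mean, clim_std, max_repeat=5):
    """Boolean mask of daily values failing simple QC checks.

    Illustrative only: loosely modeled on the checks described above,
    not taken from NCDC's QC code.
    """
    tmin = np.asarray(tmin, dtype=float)
    tmax = np.asarray(tmax, dtype=float)
    bad = np.zeros(tmax.size, dtype=bool)

    bad |= tmin > tmax                                # min exceeds max
    bad |= np.abs(tmax - clim_mean) > 5.0 * clim_std  # five-sigma outliers

    # Flag long runs of identical reported maxima (repeated entries).
    run = 1
    for i in range(1, tmax.size):
        run = run + 1 if tmax[i] == tmax[i - 1] else 1
        if run >= max_repeat:
            bad[i - max_repeat + 1 : i + 1] = True
    return bad

def monthly_mean(tavg, bad):
    """Monthly mean from daily values; NaN if more than 9 days are missing or flagged."""
    tavg = np.asarray(tavg, dtype=float)
    ok = ~bad & ~np.isnan(tavg)
    if tavg.size - ok.sum() > 9:
        return float("nan")
    return float(tavg[ok].mean())
```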

The impact of QC adjustments is relatively minor. Apart from a slight cooling of temperatures prior to 1910, the trend is unchanged by QC adjustments for the remainder of the record (see the red line in Figure 5).

Time of Observation (TOBs) Adjustments

Temperature data is adjusted based on its reported time of observation. Each observer is supposed to report the time at which observations were taken. While some variation is expected, as observers won’t reset the instrument at exactly the same time every day, these departures should be mostly random and won’t necessarily introduce systematic bias. The major sources of bias are introduced by system-wide decisions to change observing times, as shown in Figure 3. The gradual network-wide switch from afternoon to morning observation times after 1950 has introduced a CONUS-wide cooling bias of about 0.2 to 0.25 C. The TOBs adjustments are outlined and tested in Karl et al 1986 and Vose et al 2003, and will be explored in more detail in the subsequent post. The impact of TOBs adjustments is shown in Figure 6, below.

Figure 6. Time of observation adjustments to USHCN relative to the 1900-1910 period.

TOBs adjustments affect minimum and maximum temperatures similarly, and are responsible for slightly more than half the magnitude of total adjustments to USHCN data.
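
As a preview of the empirical approach in the next post, the effect is easy to sketch given hourly data: emulate a min/max instrument that is read and reset at a given hour, then compare afternoon and morning observers on the same series. The sketch below is my own illustration of that emulation, not the Karl et al or Vose et al methodology:

```python
import numpy as np

def mean_extremes(hourly, obs_hour):
    """Mean daily max and min for a min/max instrument reset at obs_hour.

    `hourly` is a 1-D array of hourly temperatures. Each observational
    "day" is the 24 hours ending at obs_hour. With a late-afternoon
    reset, the warm hours on either side of the reset fall into two
    successive observational days, so one hot afternoon can set the
    maximum of both (a warm bias); a morning reset does the analogous
    thing with cold mornings (a cool bias).
    """
    hourly = np.asarray(hourly, dtype=float)
    n_days = hourly.size // 24
    days = hourly[obs_hour : obs_hour + (n_days - 1) * 24].reshape(-1, 24)
    return days.max(axis=1).mean(), days.min(axis=1).mean()
```

Comparing `mean_extremes(series, 17)` against `mean_extremes(series, 0)` on the same hourly series gives a rough estimate of the warm bias of a 5 pm observer relative to a midnight observer.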

Pairwise Homogenization Algorithm (PHA) Adjustments

The Pairwise Homogenization Algorithm was designed as an automated method of detecting and correcting localized temperature biases due to station moves, instrument changes, microsite changes, and meso-scale changes like urban heat islands.

The algorithm (whose code can be downloaded here) is conceptually simple: it assumes that climate change forced by external factors tends to happen regionally rather than locally. If one station is warming rapidly over a period of a decade a few kilometers from a number of stations that are cooling over the same period, the warming station is likely responding to localized effects (instrument changes, station moves, microsite changes, etc.) rather than a real climate signal.

To detect localized biases, the PHA iteratively goes through all the stations in the network and compares each of them to its surrounding neighbors. It calculates difference series between each station and its neighbors (separately for min and max) and looks for breakpoints that show up in the record of one station but none of the surrounding stations. These breakpoints can take the form of both abrupt step changes and gradual trend inhomogeneities that move a station’s record further away from its neighbors. The figures below show histograms of all the detected breakpoints (and their magnitudes) for both minimum and maximum temperatures.
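
The core of the detection step can be sketched in a few lines. The toy detector below scans a station-minus-neighbor difference series for the single most prominent step change; the real PHA applies formal significance tests and reconciles candidate breakpoints across many station pairs before attributing them to a particular station, so treat this only as an illustration of the idea:

```python
import numpy as np

def difference_series(station, neighbor):
    """Station minus neighbor: the shared regional climate signal cancels,
    so a step that survives differencing is local to one of the pair."""
    return np.asarray(station, dtype=float) - np.asarray(neighbor, dtype=float)

def largest_step(diff, min_seg=24):
    """Toy single-changepoint detector for a monthly difference series.

    Scans candidate split points and returns (index, offset) where the
    shift in means, scaled by its standard error, is largest.
    """
    diff = np.asarray(diff, dtype=float)
    best_i, best_score, best_offset = None, 0.0, 0.0
    for i in range(min_seg, diff.size - min_seg):
        a, b = diff[:i], diff[i:]
        se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size) + 1e-12
        score = abs(b.mean() - a.mean()) / se
        if score > best_score:
            best_i, best_score, best_offset = i, score, b.mean() - a.mean()
    return best_i, best_offset
```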

Figure 7. Histogram of all PHA changepoint adjustments for versions 3.1 and 3.2 of the PHA for minimum (left) and maximum (right) temperatures.

While fairly symmetric in aggregate, there are distinct temporal patterns in the PHA adjustments. The largest of these are positive adjustments in maximum temperatures to account for transitions from LiG instruments to MMTS and ASOS instruments in the 1980s, 1990s, and 2000s. Other notable PHA-detected adjustments are minimum (and more modest maximum) temperature shifts associated with a widespread move of stations from inner-city rooftops to newly constructed airports or wastewater treatment plants after 1940, as well as gradual corrections of urbanizing sites like Reno, Nevada. The net effect of PHA adjustments is shown in Figure 8, below.

Figure 8. Pairwise Homogenization Algorithm adjustments to USHCN relative to the 1900-1910 period.

The PHA has a large impact on max temperatures post-1980, corresponding to the period of transition to MMTS and ASOS instruments. Max adjustments are fairly modest pre-1980, and are presumably responding mostly to the effects of station moves. Minimum temperature adjustments are more mixed, with no real century-scale trend impact. These minimum temperature adjustments do seem to remove much of the urban-correlated warming bias in minimum temperatures, even if only rural stations are used in the homogenization process to avoid any incidental aliasing-in of urban warming, as discussed in Hausfather et al. 2013.

The PHA can also effectively detect and deal with breakpoints associated with Time of Observation changes. When NCDC’s PHA is run without doing the explicit TOBs adjustment described previously, the results are largely the same (see the discussion of this in Williams et al 2012). Berkeley uses a somewhat analogous relative difference approach to homogenization that also picks up and removes TOBs biases without the need for an explicit adjustment.

With any automated homogenization approach, it is critically important that the algorithm be tested with synthetic data with various types of biases introduced (step changes, trend inhomogeneities, sawtooth patterns, etc.), to ensure that the algorithm deals with biases in both directions even-handedly and does not create any new systematic biases when correcting inhomogeneities in the record. This was done initially in Williams et al 2012 and Venema et al 2012. There are ongoing efforts to create a standardized set of tests to which various groups around the world can submit homogenization algorithms for evaluation, as discussed in our recently submitted paper. This process, and other detailed discussion of automated homogenization, will be covered in part three of this series of posts.
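
The benchmarking idea is itself straightforward to sketch: generate synthetic series with a known trend, insert breaks of known timing, size, and sign, and score how well the algorithm recovers them. The toy generator below illustrates the setup with made-up parameters; it is not the Williams et al 2012 or Venema et al 2012 benchmark code:

```python
import numpy as np

def synthetic_station(n_months=1200, n_breaks=3, noise=0.3, seed=1):
    """Monthly anomaly series with known step inhomogeneities.

    Returns the corrupted series plus the ground-truth breaks, so a
    homogenization algorithm can be scored on how well it recovers
    their timing, size, and sign for both warming and cooling shifts.
    """
    rng = np.random.default_rng(seed)
    series = 0.0005 * np.arange(n_months)          # imposed "climate" trend
    series = series + rng.normal(0.0, noise, n_months)
    breaks = []
    candidates = np.arange(120, n_months - 120)    # keep breaks off the ends
    for t in sorted(rng.choice(candidates, size=n_breaks, replace=False)):
        size = float(rng.normal(0.0, 0.5))         # random sign and size
        series[t:] += size
        breaks.append((int(t), size))
    return series, breaks
```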

Infilling

Finally we come to infilling, which has garnered quite a bit of attention of late due to some rather outlandish claims of its impact. Infilling occurs in the USHCN network in two different cases: when the raw data is not available for a station, and when the PHA flags the raw data as too uncertain to homogenize (e.g. in between two station moves when there is not a long enough record to determine with certainty the impact that the initial move had). Infilled data is marked with an “E” flag in the adjusted data file (FLs.52i) provided by NCDC, and it’s relatively straightforward to test the effects it has by calculating U.S. temperatures with and without the infilled data. The results are shown in Figure 9, below:

Figure 9. Infilling-related adjustments to USHCN relative to the 1900-1910 period.

Apart from a slight adjustment prior to 1915, infilling has no effect on CONUS-wide trends. These results are identical to those found in Menne et al 2009. This is expected, because the way NCDC does infilling is to add the long-term climatology of the station that is missing (or not used) to the average spatially weighted anomaly of nearby stations. This is effectively identical to any other form of spatial weighting.
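
In code form, the infilled value for a given station-month reduces to a single line. This is a sketch of the procedure as just described, with variable names of my own choosing rather than NCDC's implementation:

```python
import numpy as np

def infill_value(station_climatology, neighbor_anomalies, neighbor_weights):
    """Infilled monthly value: the station's own long-term climatology for
    that calendar month plus the distance-weighted mean anomaly of its
    neighbors. The anomaly term is exactly what spatial weighting of the
    neighbors would contribute anyway, which is why infilling this way
    cannot move the area-averaged trend."""
    anomaly = np.average(np.asarray(neighbor_anomalies, dtype=float),
                         weights=np.asarray(neighbor_weights, dtype=float))
    return station_climatology + anomaly
```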

To elaborate, temperature stations measure temperatures at specific locations. If we are trying to estimate the average temperature over a wide area like the U.S. or the globe, it is advisable to use gridding or some more complicated form of spatial interpolation to ensure that our results are representative of the underlying temperature field. For example, about a third of the available global temperature stations are in the U.S. If we calculated global temperatures without spatial weighting, we’d be treating the U.S. as 33% of the world’s land area rather than ~5%, and end up with a rather biased estimate of global temperatures. The easiest way to do spatial weighting is gridding, e.g. assigning all stations to grid cells that have the same area (as NASA GISS used to do) or the same lat/lon size (e.g. 5×5 lat/lon, as HadCRUT does). Other methods include kriging (used by Berkeley Earth) or a distance-weighted average of nearby station anomalies (used by GISS and NCDC these days).
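
A minimal gridded average might look like the sketch below, which bins stations into fixed lat/lon boxes and combines the box means with cosine-of-latitude area weights, in the spirit of the HadCRUT-style 5×5 approach (empty-cell handling and land masking are glossed over):

```python
import numpy as np

def gridded_mean(lats, lons, anoms, cell=5.0):
    """Area-weighted mean anomaly from stations via lat/lon gridding.

    Stations are binned into cell-degree boxes, each box is averaged,
    and boxes are combined with cos(latitude) weights so that
    station-dense regions like the U.S. count by area, not by
    station count.
    """
    lats = np.asarray(lats, dtype=float)
    lons = np.asarray(lons, dtype=float)
    anoms = np.asarray(anoms, dtype=float)

    boxes = {}
    for lat, lon, a in zip(lats, lons, anoms):
        key = (int((lat + 90.0) // cell), int((lon + 180.0) // cell))
        boxes.setdefault(key, []).append(a)

    means = np.array([np.mean(v) for v in boxes.values()])
    lat_centers = np.array([-90.0 + (k[0] + 0.5) * cell for k in boxes])
    weights = np.cos(np.radians(lat_centers))
    return float(np.average(means, weights=weights))
```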

As shown above, infilling has no real impact on temperature trends vs. not infilling. The only way you get in trouble is if the composition of the network is changing over time and if you do not remove the underlying climatology/seasonal cycle through the use of anomalies or similar methods. In that case, infilling will give you a correct answer, but not infilling will result in a biased estimate since the underlying climatology of the stations is changing. This has been discussed at length elsewhere, so I won’t dwell on it here.
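
A two-station toy example (entirely synthetic numbers) shows the failure mode: averaging absolute temperatures across a network whose composition changes manufactures a spurious step, while averaging anomalies does not:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(100)
trend = 0.01 * years  # identical 0.01 C/yr trend at both sites
valley = 15.0 + trend + rng.normal(0, 0.2, 100)    # warm low-elevation site
mountain = 5.0 + trend + rng.normal(0, 0.2, 100)   # cold high-elevation site

# The cold station drops out of the network at year 50.
absolute = np.where(years < 50, (valley + mountain) / 2, valley)
anomaly = np.where(
    years < 50,
    ((valley - valley.mean()) + (mountain - mountain.mean())) / 2,
    valley - valley.mean(),
)

# The absolute average jumps by roughly 5 C at year 50 even though the
# climate signal is the same smooth trend at both stations; the anomaly
# average shows only the trend.
print(absolute[45:55])
print(anomaly[45:55])
```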

I’m actually not a big fan of NCDC’s choice to do infilling, not because it makes a difference in the results, but rather because it confuses things more than it helps (witness all the sturm und drang of late over “zombie stations”). Their choice to infill was primarily driven by a desire to let people calculate a consistent record of absolute temperatures by ensuring that the station composition remained constant over time. A better (and more accurate) approach would be to create a separate absolute temperature product by adding a long-term average climatology field to an anomaly field, similar to the approach that Berkeley Earth takes.

Changing the Past?

Diligent observers of NCDC’s temperature record have noted that many of the values change by small amounts on a daily basis. This includes not only recent temperatures but those in the distant past as well, and has created some confusion about why, exactly, the recorded temperatures in 1917 should change day-to-day. The explanation is relatively straightforward. NCDC assumes that the current set of instruments recording temperature is accurate, so any time of observation changes or PHA adjustments are done relative to current temperatures. Because breakpoints are detected through pairwise comparisons, new data coming in may slightly change the magnitude of recent adjustments by providing a more comprehensive difference series between neighboring stations.

When breakpoints are removed, the entire record prior to the breakpoint is adjusted up or down depending on the size and direction of the breakpoint. This means that slight modifications of recent breakpoints will shift all past temperatures at the station in question through a constant offset. The alternative to this would be to assume that the original data is accurate, and adjust any new data relative to the old data (e.g. adjust everything in front of breakpoints rather than behind them). From the perspective of calculating trends over time, these two approaches are identical, and it’s not clear that there is necessarily a preferred option.
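
In code, the adjustment itself is a one-line operation, which is why a revised breakpoint shifts every earlier value by the same constant. A sketch of my own, not NCDC's implementation:

```python
import numpy as np

def remove_breakpoint(series, index, offset):
    """Adjust a station series for one detected break.

    The segment after the break (the current instrumentation) is taken
    as the reference, so everything *before* the break is shifted by the
    estimated offset. Revising a recent breakpoint therefore moves all
    earlier archived values at the station by the same constant, without
    changing the shape of the station's trend.
    """
    adjusted = np.asarray(series, dtype=float).copy()
    adjusted[:index] += offset
    return adjusted
```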

Hopefully this post (and the two that follow) will help folks gain a better understanding of the issues in the surface temperature network and the steps scientists have taken to try to address them. These approaches are likely far from perfect, and it is certainly possible that the underlying algorithms could be improved to provide more accurate results. Hopefully the ongoing International Surface Temperature Initiative, which seeks to have different groups around the world send their adjustment approaches in for evaluation using common metrics, will help improve the general practice in the field going forward. There is also a week-long conference at NCAR next week on these issues which should yield some interesting discussions and initiatives.

2,044 responses to “Understanding adjustments to temperature data”

  1. Adjustments to data ought always be explained in an open and transparent manner, especially adjustments to data that become the basis for expensive policy decisions.

    • David Springer

      Good faith was undermined about the time James Hansen sabotaged the air conditioning and opened the windows to scorching outside temperatures in the congressional hearing room in 1988. Good faith collapsed completely with the Climategate emails two decades later.

      Good faith my ass.

I realised HADCRUT couldn’t be trusted when I started realising that each and every cold month was delayed (I think it was 1 day per 0.05C), whereas each and every hot month was rushed out.

        I realised HADCRUT could be trusted, when I went back to check my figures a year later and found that nothing was the same any longer.

        I realised HACRUT couldn’t be trusted, when I found out that phil Jones couldn’t use a spreadsheet

        I realised HACRUT couldn’t be trusted when I saw the state of their code.

        I realised HADCRUT couldn’t be trusted when I realised the same guys were doing it as those scoundrels “hiding the decline”.

        And I still know I can’t trust it … when academics like Judith Curry still don’t know the difference between “Quality” as in a system to ensure something is correct and “Quality” as in “we check it”.

        This is not a job for academics. They just don’t have the right mind set. Quality is not a matter of figures but an attitude of mind — a focus on getting it right for the customer.

I doubt Judith even knows who the customer is … I guess she just thinks it’s a vague idea of “academia”.

      • Of course, none of that contradicts anything Zeke said. Do you have a substantive argument to make?

Quality is all those features and characteristics of a product or service that bear upon the ability to meet stated or implied needs.

The problem with this definition is the word “needs”: if there is a need to confuse or give rise to false conclusions, then tinkering with the data may well give rise to quality data, i.e. it achieves its purpose.

        Quality in terms of data does not imply accuracy or truth.

      • David may appreciate new Earth-shattering insight into global warming:

        http://stevengoddard.wordpress.com/2014/07/07/my-latest-earth-shattering-research/

      • David Springer wrote:

        Good faith was undermined about the time James Hansen sabotaged the air conditioning and opened the windows to scorching outside temperatures in the congressional hearing room in 1988.

Such a claim sounds nuts to me. How is James Hansen supposed to have sabotaged the air conditioning at such an event in such a building? If you don’t want to be called a liar who spreads libelous accusations, how about you provide the evidence for such an assertion?

      • David Springer

A noob who didn’t know. Precious. The air conditioning was sabotaged by opening all the windows the night before so the room was filled with hot, muggy air when the congressional testimony took place. The testimony was scheduled on the historically hottest day of the year. One of the co-conspirators, Senator Wirth, admitted to all of it in an interview.

        http://www.washingtonpost.com/wp-dyn/content/article/2008/06/22/AR2008062201862.html

        http://www.pbs.org/wgbh/pages/frontline/hotpolitics/interviews/wirth.html

        PBS: And did you also alter the temperature in the hearing room that day?

Wirth: What we did it was went in the night before and opened all the windows, I will admit, right? So that the air conditioning wasn’t working inside the room and so when the, when the hearing occurred there was not only bliss, which is television cameras in double figures, but it was really hot.

        So Hansen’s giving this testimony, you’ve got these television cameras back there heating up the room, and the air conditioning in the room didn’t appear to work. So it was sort of a perfect collection of events that happened that day, with the wonderful Jim Hansen, who was wiping his brow at the witness table and giving this remarkable testimony.

      • David Springer: “James Hansen sabotaged the air conditioning and opened the windows to scorching outside temperatures in the congressional hearing room in 1988.”

        No, it most assuredly was not James Hansen who switched off the air conditioning. And no doubt if somebody had closed the windows, instead of opening them, you’d be making the same claim it was done purposely to trap heat in the room. Nothing Hansen said that day hinges on whether the windows were open or closed. All very silly.

      • It wasn’t Hansen himself, it was (then) US Senator Timothy Wirth, who boasted of doing so on the PBS program “Frontline” —

        And did you also alter the temperature in the hearing room that day?

… What we did it was went in the night before and opened all the windows, I will admit, right? So that the air conditioning wasn’t working inside the room and so when the, when the hearing occurred there was not only bliss, which is television cameras in double figures, but it was really hot. …

        So Hansen’s giving this testimony, you’ve got these television cameras back there heating up the room, and the air conditioning in the room didn’t appear to work. So it was sort of a perfect collection of events that happened that day, with the wonderful Jim Hansen, who was wiping his brow at the witness table and giving this remarkable testimony. …

        http://www.pbs.org/wgbh/pages/frontline/hotpolitics/interviews/wirth.html

        “Wirth served as a U.S. Senator from Colorado until 1993, when he left the Senate to serve under President Clinton in the State Department. He is now president of the United Nations Foundation. Wirth organized the 1988 Senate hearing at which James Hansen addressed global warming, and he led the U.S. negotiating team at the Kyoto Summit. In this interview, Wirth describes the debate surrounding global warming within the Bush I and the Clinton administrations, including his experience of the Kyoto negotiations, and asserts that partisan politics, industry opposition and prominent skeptics have prevented action from being taken. This is an edited transcript of an interview conducted Jan. 17, 2007.”

      • David Springer

        Senator Wirth said WE opened the windows the night before. He wasn’t alone. The “we” was purportedly him and Al Gore. Hansen was the originator of the idea that if the hearing was scheduled during hot weather it would be more effective.

        http://www.aip.org/history/climate/public2.htm

        The trigger came that summer. Already by June, heat waves and drought had become a severe problem, drawing public attention to the climate. Many newspaper, magazine, and television stories showed threatened crops and speculated about possible causes. Hansen raised the stakes with deliberate intent. “I weighed the costs of being wrong versus the costs of not talking,” he later recalled, and decided that he had to speak out. By arrangement with Senator Timothy Wirth, Hansen testified to a Congressional hearing on June 23. He had pointed out to Wirth’s staff that the previous year’s November hearings might have been more effective in hot weather. Wirth and his staff decided to hold their next session in the summer, although that was hardly a normal time for politicians who sought attention.

      • Wirth also boasted (same interview) of how they intentionally picked the hottest day/week of the year in DC, how the weather co-operated, and how the original campaign was integral to politics of the Democratic Party and to that year’s (unsuccessful) presidential campaign by Michael Dukakis. So, whatever Hansen thought he was doing, he certainly allowed himself to be the political tool of manipulative and dishonest political partisans of the Democratic Party:

        What else was happening that summer? What was the weather like that summer?

        Believe it or not, we called the Weather Bureau and found out what historically was the hottest day of the summer. Well, it was June 6 or June 9 or whatever it was, so we scheduled the hearing that day, and bingo: It was the hottest day on record in Washington, or close to it. It was stiflingly hot that summer. [At] the same time you had this drought all across the country, so the linkage between the Hansen hearing and the drought became very intense.

        Simultaneously [Mass. Gov. Michael] Dukakis was running for president. Dukakis was trying to get an edge on various things and was looking for spokespeople, and two or three of us became sort of the flacks out on the stump for Dukakis, making the separation between what Democratic policy and Republican policy ought to be. So it played into the presidential campaign in the summer of ’88 as well.

        So a number of things came together that, for the first time, people began to think about it. I knew it was important because there was a big article in, I believe, the Swimsuit Issue of Sports Illustrated on climate change. [Laughs.] So there was a correlation. You figure, well, if we’re making Sports Illustrated on this issue, you know, we’ve got to be making some real headway.

      • Greg Goodman

        Scottish sceptic says: “I realised HACRUT couldn’t be trusted, when I found out that phil Jones couldn’t use a spreadsheet”

And why would a competent programmer want or need to use a spreadsheet for data processing?!

Spreadsheets are for accountants. It is pretty amateurish to use one for data processing. However, most amateurs that manage to lash up a “chart” in a spreadsheet for some reason think they are then qualified to lay into anyone who is capable of programming and has never needed to rely on point-and-click, cut-and-paste tools to process data.

You’d also look a lot more credible if you could at least get the name of the dataset right and realised that it is the work of two separate groups.

        There’s plenty to be criticised at CRU, at least try to make credible criticisms.

      • Skiphil wrote: “Wirth also boasted (same interview) of how they intentionally picked the hottest day/week of the year in DC”

        How could Timothy Wirth have known it was going to be the hottest day of the week–let alone the entire summer–weeks in advance of the hearing having been scheduled? Seriously, show a modicum of scepticism. It transpires the air conditioning wasn’t even switched off; it was simply made less effectual because a senator had opened some windows the night before. People believe this diminishes Hansen’s testimony. It does not. Enough distraction. Can we move forward now?

      • thisisnotgoodtogo

        Anon said:

        “How could Timothy Wirth have known it was going to be the hottest day of the week–let alone the entire summer–weeks in advance of the hearing having been scheduled?”

        They checked the records and found the most-often hottest day of the year in the city.

      • thisisnotgoodtogo

        “People believe this diminishes Hansen’s testimony.”

        No, people believe it “embellished” it.
        Hansen’s testimony itself was bogus. It needs no diminishing.
        He used part of a hot year to make his point about anthro warming.

I agree that they defenestrated “good faith” when East Anglia lost the original climate data they had collected, at the same time writing in the Climategate emails that they would rather destroy the data than hand it over to skeptics.

So they need to keep all versions of the data. It is not like it wouldn’t fit on $50 worth of hard drive. Except they don’t. They keep it hidden, and the only way people find out about adjustments is if they take their own snapshots.

        The time for “assuming good faith” is long gone, “trust but verify” is more what is needed today.

      • Anon,

        can you READ?? It is Wirth who boasted of seeking the hottest day… of course he couldn’t be sure he would get the very very hottest, but that is what he sought and that is (according to him) what he got. As for distraction, when people like you can explain how Wirth and co. are honest and competent, then we can move on.

      • Jan,

While sabotaged is not the best term (the air conditioning was turned down or off), it doesn’t change the overall point. Steps were taken to ensure the hearing room was hotter than it normally would have been in order to emphasize the point Wirth and Hansen wanted to get across.

      • timg56,

        the accusation was made by David Springer specifically against James Hansen. Regardless, whether you call it “sabotaging” or “turning off”, I am still waiting for the evidence to back up this accusation. So far, nothing.

      • David Springer

        Opening up windows the night before on the historically hottest day of the year overwhelmed the air conditioner. Sabotage is exactly the right word. It was Hansen’s suggestion to Wirth to hold the hearing on the hottest day of the year so there’s collusion in black & white. Wirth admitted “we” opened up the windows the night before. The only question is whether “we” included Hansen whose idea it was to stage the hearing in hot weather to be more effective.

      • Don Monfort

        Please continue to wait, perlie. Watching you make a fool of yourself over a throwaway comment that you want to blow up into libel is very amusing. Are you going to hold your breath? And stamp your little feet? We can tell that you are not a lawyer, perlie.

      • David Springer wrote:

        A noob who didn’t know. Precious.

        Noob? We will see who has the last laugh.

        The air conditioning was sabotaged by opening all the windows the night before so the room was filled hot muggy air when the congressional testimony took place. The testimony was scheduled on the historically hottest day of the year. One of the co-conspirators, Senator Wirth, admitted to all of it in an interview.

        http://www.washingtonpost.com/wp-dyn/content/article/2008/06/22/AR2008062201862.html

        http://www.pbs.org/wgbh/pages/frontline/hotpolitics/interviews/wirth.html

I can do even better, thanks to Anthony Watts with his junk science blog. Here is a video excerpt of the TV broadcast, where the opening of the windows and the AC issue is addressed and Wirth is asked about this. Watts had tried this one on me already some time ago, and linked the video himself, apparently totally delusional about what it would prove.

        Not a single word in there that implicates James Hansen in the matter. Neither by Wirth, nor by the narrator. So how does this work with such an accusation in “skeptic” land? By some “skeptic” assigning of guilt by association?

        It’s all just about throwing dirt, isn’t it? Facts don’t matter.

As someone else has already correctly pointed out, the windows and AC thing is irrelevant to the content of Hansen’s statement anyway.

      • David Springer

Like I pointed out with links, Hansen suggested to Wirth that his November testimony would have been more effective in hot weather. Wirth then says in an interview “we” (maybe his staff, maybe a climatologist) determined that June 23rd was on average the hottest day of the year in Washington and scheduled the hearing on that day. Then “we” (Wirth and unnamed others) opened up all the windows the night before so the hot humid air overwhelmed the air conditioning. I don’t know, but usually the way these things work is Hansen would have flown in the day before and spent some face time with those in the senate on his side. Al Gore was US Senator from Tennessee so almost certainly all three were in town that night and no one is going to question two United States senators prepping a hearing room. It went off like a frat club stunt. Given the heat was Hansen’s idea in the first place and knowing how guys behave, probably all three of them were in on it and not exactly sober either. But hey, that’s just a guess. Wirth knows and didn’t say.

      • Let’s see, Jan Perlwitz!

        “A Climate Hero: The Testimony

        Worldwatch Institute is partnering with Grist to bring you this three-part series commemorating the 20-year anniversary of NASA scientist James Hansen’s groundbreaking testimony on global climate change next week. Read part one here.

“The greenhouse effect has been detected, and it is changing our climate now,” James Hansen told the Senate Energy Committee in 1988. An unprecedented heat wave gripped the United States in the summer of 1988. Droughts destroyed crops. Forests were in flames. The Mississippi River was so dry that barges could not pass. Nearly half the nation was declared a disaster area.

        The record-high temperatures led growing numbers of people to wonder whether the climate was in some way being unnaturally altered.

        Meanwhile, NASA scientist James Hansen was wrapping up a study that found that climate change, caused by the burning of fossil fuels, appeared inevitable even with dramatic reductions in greenhouse gases. After a decade of studying the so-called greenhouse effect on global climate, Hansen was prepared to make a bold statement.

        Hansen found his opportunity through Colorado Senator Tim Wirth, who chose to showcase the scientist at a Congressional hearing. Twenty years later, the hearing is regarded as a turning point in climate science history.

        To build upon Hansen’s announcement, Wirth used the summer’s record heat to his advantage. “We did agree that we should figure out when it’d be really hot in Washington,” says David Harwood, a legislative aide for Wirth. “People might be thinking of things like what’s the climate like.”

        They agreed upon June 28. When the day of the hearing arrived, the temperature in the nation’s capital peaked at 101 degrees Fahrenheit (38 degrees Celsius). The stage was set.

        Seated before the Senate Committee on Energy and Natural Resources, 15 television cameras, and a roomful of reporters, Hansen wiped the sweat from his brow and presented his findings. The charts of global climate all pointed upward. “The Earth is warmer in 1988 than at any time in the history of instrumental measurements,” he said. “There is only a 1 percent chance of an accidental warming of this magnitude…. The greenhouse effect has been detected, and it is changing our climate now.”

        Oh, a one percent chance of a heat wave.

        Great science testimony too, Jan!

  2. A fan of *MORE* discourse

    Question  Why does the Daily Racing Form publish “adjusted” Beyer speed figures for each horse? Why not just the raw times?

    Answer  Because considering *ALL* the available information yields *FAR* better betting strategies.

    Question  Why does the strongest climate science synthesize historical records, paleo-records, and thermodynamical constraints??

    Answer  Because considering *ALL* the available information yields *FAR* better assessments of climate-change risk.

These realities are *OBVIOUS* to *EVERYONE* — horse-bettors and climate-science students alike — eh Climate Etc readers?

Why do climate scientists hide the raw data? Why do they use anomalies and 5-year smoothing to hide the data?

You can’t spell anomalies without LIES.

    • Why does the Daily Racing Form publish “adjusted” Beyer speed figures for each horse? Why not just the raw times?

      Because that is what the customer wants.

Now answer me this … would you be happy with a bank statement with “adjusted” figures for each and every transaction?

And what would you say if they said, “Because considering *ALL* the available information yields a *FAR* better assessment”?

      • Matthew R Marler

Scottish Sceptic and A fan of *MORE* discourse: Why does the Daily Racing Form publish “adjusted” Beyer speed figures for each horse? Why not just the raw times?

        Because that is what the customer wants.

Now answer me this … would you be happy with a bank statement with “adjusted” figures for each and every transaction?

And what would you say if they said, “Because considering *ALL* the available information yields a *FAR* better assessment”?

        The issue relates to how accurately the fundamental data have been recorded in the first place. There are people, including auditors, who do sample financial records and perform Bayesian hierarchical modeling in order to assess the overall effects of errors, and their likely prevalence.

      • Don’t give the banksters any ideas, Scottish. ;)

    • The Beyer speed analogy got to me. It succeeds at what it was designed to do. Kudos.

As I am oft wont to do, lay curiosity (in climate science and horse betting) forced an immediate investigation into Beyer speed.

      As a thought and pattern matching exercise, the Beyer speed analogy is quite good. However, within a few minutes, I found an erudite bettor who supplies a different take on the underlying premise that Beyer speed, while working as designed, furnishes reliable data on which to bet one’s wad of cash. He wrote:

      “The theory:
      Horses that can win races are the ones that can significantly IMPROVE their previous race speed figure. Today’s winner is not the horse with the highest figure from its last race but the horse that is most likely to REACH its highest figure today. Bold-face Beyer figures function essentially as mirages, optical illusions that distort racing reality. Yes, they are more than reasonably accurate most of the time. But they are not worth their face value, for an accurate rendering of the past is not the same thing as an objective prediction of the future. Better stated, the past performances are something that should be seen dynamically, as if they were part of a moving process.”

      It seems climate science and horse betting share more than one initially thinks.

      I enjoyed the analogy. As we attempt to understand scientific research, numskulls like me could use more of them.

    • Fan,
      What is your favorite conspiracy today?

      • He is too busy working on his “Climate Youth” project to bother answering a question like that.

  3. Rob Bradley

    The author states: “Their methods may not be perfect, and are certainly not immune from critical analysis, but that critical analysis should start out from a position of assuming good faith and with an understanding of what exactly has been done.”

    But surely incentives matter. Peer pressure matters. Government funding matters. Beware of the ‘romantic’ view of science in a politicized area.

    • When an auditor checks accounts, they do not assume bad faith.

Instead they just verify that the figures are right.

      So, why then when skeptics try to audit climate figures do they immediately assume we are acting in bad faith?

Because academics don’t have a culture of having their work checked by outsiders.

The simple fact is that academics cannot stomach having outsiders look over their figures. And this is usually a symptom of an extremely poor quality regime.

      • Here’s an audit of HADCRUT3

        In July 2011, Lubos Motl did an analysis of HADCRUT3 that neatly avoided all the manipulations. He worked with the raw data from 5000+ stations with an average history of 77 years. He calculated for each station the trend for each month of the year over the station lifetime. The results are revealing. The average station had a warming trend of +0.75C/century +/- 2.35C/century. That value is similar to other GMT calculations, but the variability shows how much homogenization there has been. In fact 30% of the 5000+ locations experienced cooling trends.

        What significance can you claim for a 0.75C/century claim when the standard deviation is 3 times that?

        Conclusions:

        “If the rate of the warming in the coming 77 years or so were analogous to the previous 77 years, a given place XY would still have a 30% probability that it will cool down – judging by the linear regression – in those future 77 years! However, it’s also conceivable that the noise is so substantial and the sensitivity is so low that once the weather stations add 100 years to their record, 70% of them will actually show a cooling trend.

Isn’t it remarkable? There is nothing “global” about the warming we have seen in the recent century or so. The warming vs cooling depends on the place (as well as the month, as I mentioned) and the warming places only have a 2-to-1 majority while the cooling places are a sizable minority.
        Of course, if you calculate the change of the global mean temperature, you get a positive sign – you had to get one of the signs because the exact zero result is infinitely unlikely. But the actual change of the global mean temperature in the last 77 years (in average) is so tiny that the place-dependent noise still safely beats the “global warming trend”, yielding an ambiguous sign of the temperature trend that depends on the place.”

        http://motls.blogspot.ca/2011/07/hadcrut3-30-of-stations-recorded.html

      • Steven Mosher

“So, why then when skeptics try to audit climate figures do they immediately assume we are acting in bad faith?”

We don’t.

        But imagine this.

        Imagine an auditor came into your company

A = auditor
S = Scottish

        A: Can I see your books.
        S: Yes here they are.
        A: (ignoring the books). Here is a chart I found on the internet showing
        your bogus adjustments to income.
        S: please look at our books.
        A: no first explain this random stuff I found on the internet.
        S: here are the books, can you just audit us?
        A: you should be audited
        S: I thought thats what you were doing, here are the books. please look.
        A: What are your interests in this company?
        S: I own it. I make money
        A: AHHHH, so how can I trust these books
        S: can you just look at the books.
        A: first I want to talk about this youtube video. See this chart, the red is really red.
S: I didn’t make that video, can you just look at the books.
        A: do you have an internal audit.
        S: ya, here are some things we published, you can read them.
        A: Ahhh, who reviewed this.
        S: It was anonymous, just read the paper.
A: How do I know your friends didn’t review that, I don’t trust those papers.
        S: well, read them and ask me questions.
A: I’m giving the orders here; tell me what is in the papers.
        A: and where are your books?
        S; I gave you the books.
        A: who is your accountant?
        S: my wife, she does all the books
        A…. Ahhh the plot thickens… you need to be audited.
        S: err, here are the books.
A: oh, trying to make it my job huh… I’m here in good faith
        S: ah ya, to audit, here are the books.
A: not so fast, you’re trying to shift the burden of proof

    • Steven Mosher

      “But surely incentives matter. Peer pressure matters. Government funding matters. Beware of the ‘romantic’ view of science in a politicized area.”

      When JeffId and RomanM ( skeptics) started to look at temperature series the incentive was to
      A) find a better method
      B) Show where GISS and CRU went wrong

      Their results showed more warming.

When I first started looking at temperatures my incentive was simple. I wanted to find something wrong, specifically with adjustments. Seven years later I can only report that I could find nothing of substance wrong with them.

When Muller and Berkeley started to look at this matter their incentive was to build a better method and correct any mistakes they found. Koch and others found this goal laudable and funded them. With this incentive what did Berkeley find? Well, the better method extended the record, gave you a higher spatial resolution and showed that the NOAA folks basically get the adjustments correct.

Many people, all with the incentive to find some glaring error, some mistake that would overturn the science, all came to the same conclusion. While NOAA isn’t perfect, while we can make improvements at the margin, the record is reliable. The minor issues identified don’t change the fundamental facts: It has been warming since the LIA. There are no more frost fairs in London. The estimates of warming since that time using some of the data, or all of the data, using multiple methods (CAM, RSM, Kriging, Least Squares, IDW) all fall within narrow bounds. The minor differences are important to specialists or to very narrow questions (see Cowtan and Way), but the big picture remains the same.

      • Steve Fitzpatrick

        Yup, that is right. Small changes (a la Cowtan and Way) at the margins do happen. But nothing fundamental has changed. Is there still some uncertainty? Sure, at the margins, but the data are quite clear: there has been average warming in the range of 0.8C to 0.9C since the mid 19th century.

Mosh: One of the things that personally gives me faith in some of the newer temperature records is that skeptics like you, Roman, Jeff and then Muller et al get similar results. Unfortunately, dealing with people like Goddard is now prompting you to say dubious things like: “Well, the better method extended the record, gave you a higher spatial resolution and showed that the NOAA folks basically get the adjustments correct”. Several years ago, you would have recognized that no one knows the “correct adjustments”. You would remember that the half-dozen reconstructions that “reproduced” Mann’s hockey stick did not make Mann “correct”. Pairwise adjustments are hypotheses that make assumptions about the nature of the events that produced undocumented breakpoints, not tested theories. More than half of US warming and about a quarter of global warming can be traced back to breakpoint corrections, and the total number of breakpoints identified has risen to about one per decade (if I remember correctly). Only a modest fraction of these breakpoints are due to properly-studied phenomena like TOB and instrumental changes. Any undocumented breakpoint could represent a return to earlier observing conditions (which had gradually deteriorated) or a shift to new conditions. Worst of all, temperature change still appears to be reported as if all the uncertainty arises from scatter in the raw data and none from systematic errors that could arise from processing the data.

      • Steven Mosher | July 7, 2014 at 4:16 pm |

“So, why then when skeptics try to audit climate figures do they immediately assume we are acting in bad faith?”
We don’t.
        But imagine this.
        Imagine an auditor came into your company

A = auditor
S = Scottish

        A: Can I see your books.
        S: Yes here they are.

        This also happens:
        A: Can I see your books?
        S: No – you just want to find something wrong with them.

        Trust is not a part of the game and hasn’t been for some time. About the time cordiality disappeared from the landscape.

      • @Frank 5:14 pm
Pairwise adjustments are hypotheses that make assumptions about the nature of the events that produced undocumented breakpoints, not tested theories. ….. Worst of all, temperature change still appears to be reported as if all the uncertainty arises from scatter in the raw data and none from systematic errors that could arise from processing the data.

        Agree. Every adjustment adds error.

Undocumented breakpoints derived from differences to a kriged fuzzy surface (one with error bar thickness) defined by uncertain control points in an iterative process is a source for huge amounts of error.

        But is temperature uncertainty reported as if it derives from the average anomaly and not derived from the measured daily Tmin and Tmax? If a month’s mins and maxes are 10 degrees C apart, the Trmse (mean standard error) of the month’s Tave is a minimum of 0.67 deg C.

      • Matthew R Marler

        Stephen Rasey: Every adjustment adds error.

        That is not true. Errors and random variation are in the data, but the best adjustments (like the BEST adjustments) do the best job of reducing the error. This is proved mathematically for some cases, and it has been shown computationally by simulations where the “true” values and “errors” and “random variation” are known by fiat. I put some references in my comments to Rud Istvan.

      • Steven Mosher

“Undocumented breakpoints derived from differences to a kriged fuzzy surface (one with error bar thickness) defined by uncertain control points in an iterative process is a source for huge amounts of error.”

        Proof by assertion.

        Not backed up by any example, any data, or any analysis showing what is claimed.

        Typical skeptic.

      • Matthew R Marler,

        “…but the best adjustments (like the BEST adjustments) do the best job of reducing the error.”

If a parasite trend affects the raw data, for example the increase in UHI, BEST uses the worst methods. Indeed, BEST very effectively removes the fixes present in the raw data in the form of discontinuities.

        For this reason the average of absolute temperature is a better method than anomalies.

      • @Matthew R Marler at 11:42 am |
        Stephen Rasey: Every adjustment adds error.
        That is not true. Errors and random variation are in the data, but the best adjustments (like the BEST adjustments) do the best job of reducing the error.

It is true. Every adjustment, even the subtraction of the mean to create the anomaly, is the addition of an estimated parameter. Error is always added.

What may be confusing is that adjustments can improve signal to noise as you add error. Or more precisely, the act of improving signal to noise must add error in the process, but in some circumstances the signal adds faster than the error.

A case in point is the seismic common depth point move-out correction. It is a process by which a recorded signal, offset by a known distance from the source, is variably compressed in the time domain to estimate an adjusted record equivalent to a zero-offset source-receiver pair. The velocity used in the move-out is estimated, an average of subsurface velocities, but the right estimate increases coherence of events that arrive at different times in the raw data. When you get it right, it greatly increases the signal/noise ratio. But high signal to noise doesn’t prove it is right. It is possible to make noise coherent, too.

Homogenization could act in much the same way as seismic stacking. It is possible that “stacking” temperature anomalies will improve the signal to noise ratio as it adds error to the process. The question is, does it? It adds error — of that there is no doubt. Does signal improve faster than error? Or are we just making coherence out of noise and added error?

      • (reposted, first attempt was at the wrong parent in the thread)
        @Steven Mosher at 11:57 am |
        Rasey: “Undocumented breakpoints derived from differences to a krigged fuzzy surface (one with error bar thickness) defined by uncertain control points in an iterative process is a source for huge amounts of error.”
        Proof by assertion.
        Not backed up by any example, any data, or any analysis showing what is claimed.

        Please argue any of the following points by methods that exclude ad hominem.
1. Breakpoints are derived from something.
2. Breakpoints are created where documentation of changes to the station does not exist.
3. BEST, and others, use kriging to create a regional field to compare to the station under study.
4. Breakpoints, empirical undocumented breakpoints, can be created from a function of differences between the station and the kriged field.
5. The kriged regional field is defined by control points.
6. These control points are other temperature record stations.
7. Every temperature record contains error and thus contains some uncertainty. (I will expand on this in a following comment)
8. When at least one control point of a kriged surface has uncertainty, i.e. error bars, the kriged surface itself is fuzzy — every point of the surface influenced by the uncertain control point gains uncertainty.
9. All stations have uncertainty, so all control points of the kriged surface have uncertainty. Therefore the kriged surface is fuzzy at all points.
        10. Zeke himself said it was an iterative process.

the PHA iteratively goes through all the stations in the network and compares each of them to its surrounding neighbors. It calculates difference series between each station and its neighbors

11. “a source for huge amounts of error.” Well, now there you have me… I didn’t define “huge”. Huge in this case means “at least on the order of or larger than the signal sought.”

      • A C Osborn

        What I find absolutely amazing about the people making the adjustments and the people defending the adjustments is their belief that it is “Better”.
Better for what? Certainly not the historic record.
How can declaring old temperatures “WRONG” by replacing them with “calculated temperatures” be right?
The people that lived through the 30s in the USA did not experience “calculated” temperatures; they experienced the real thing as reported by the thermometers of the day. They experienced the real effects of the temperatures and the Dust Bowl droughts.
In Australia in the 1800s they experienced temperatures so high that Birds & Bats fell out of the air dead of Heat Exhaustion; in the early 1900s they had the biggest natural fires in the world, and yet according to the Climate experts after adjustments it is hotter now than then.

        It is like historians going back to the Second World War and changing the number of Allied soldiers who died, making it far less than the real numbers. Try telling that to their families and see how far you would get.

        Based on these CRAP adjustments we hear the “Hottest” this and “Unprecedented” that, the most powerful storms, hurricanes and typhoons, more tornadoes, faster sea level rise, when anyone over 60 knows, based on their own experience, that they are lies.
        I remember as a child in Kent in the UK during the 50s and 60s the tar in the road melting in the summers due to the heat, followed by a major thunderstorm and flooding, with cars washed down the streets and manhole covers thrown up by the water. It is no hotter in the UK now than it was then.

        THE ADJUSTMENTS DO NOT MAKE IT A MORE ACCURATE ACCOUNT OF HISTORY.
        It is not REAL; that is why the work that Steve Goddard does with historic data is so important. It SHOULD keep scientists straight, but it doesn’t.

      • Matthew R Marler

        Stephen Rasey: What may be confusing is that adjustments can improve signal to noise as you add error. Or more precisely, the act of improving signal to noise must add error in the process, but in some circumstances the signal adds faster than the error.

        I think that you are going in circles. The Bayesian hierarchical model procedure produces the estimates that have the smallest aggregate mean square error. They do not add error to the data, or add error to the estimate.

      • Matthew R Marler

        A C Osborn: What I find absolutely amazing about the people making the adjustments and the people defending the adjustments is their belief that it is “Better”.
        Better for what? Certainly not the historic record.

        The procedure used by the BEST team produces estimates that have the smallest attainable mean square error. There is a substantial literature on this topic.

      • Matthew R Marler

        phi: If a parasite trend affects the raw data, for example the increase in UHI, BEST uses the worst methods.

        How is that known? The BEST team and others have made extensive efforts to estimate and account for UHI effects, and they are not the major source of warming in the instrumental record.

      • More on Point 7 above:
        7. Every temperature record contains error and thus contains some uncertainty.

        Let us list the sources of uncertainty in each temperature record:
        1. Systematic temperature miscalibration of the instrument.
        2. Weathering of the instrument as a function of time.
        3. Instrumental drift away from calibration.
        4. Precision of the daily reading.
        5. Accuracy of the daily reading (including transposition in the record).
        6. Instrument min-max reset error resulting from Time of Observation policy.
        7. Data gaps from vacation, instrument failure, etc.

        There are others, but I want to turn to the big errors that occur in processing.
        A great deal of the temperature record used is based upon the station’s average monthly temperature anomaly. What are the sources of uncertainty involved with it? What is the temperature anomaly “mean standard error” (TArmse)?
        First we must find the Trmse of the month’s average temperature.
        Trmse(month i) = StDev(30 daily ave. temps) / sqrt(30)
        Right?
        Wrong. We never measure a daily average temperature. We measure instead a min and a max. So instead,
        Trmse(month i) = StDev(30 daily mins + 30 daily maxes) / sqrt(60)
        Assume a flat, constant average temperature of 10 deg C for the month, coming from thirty 5 deg C min readings and thirty 15 deg C max readings. Then
        Trmse = 0.645 deg C.
        So the mean for the month is 10.000 deg C, but the 90% confidence range is 8.92 to 11.08 deg C. That is a big error bar when you are looking for 0.1 to 0.3 deg C/decade.

        You want to convert Tave(month) to an anomaly, TAavg.
        Well, that’s just a bulk shift of the data; there is no uncertainty.
        Wrong.
        A bulk shift would apply if and only if each station and each month received the same bulk shift. But we don’t do that. Each station-month is adjusted by an estimate of the mean for that month and that station.

        Ok. Suppose we have 30 years of the very same month: 30 days of 5 deg low and 15 deg high. The 30-year mean is 10 deg C. What is Trmse(30 year, month i)? It is Trmse(month i) / sqrt(30). In this case
        Trmse(30 year, month i) = 0.645 / sqrt(30) = 0.118 deg C.

        So the 30-year Tavg for a month is known to +/- 0.193 deg C at 90% confidence.

        But we are going to create the anomaly for the month: that quantity is (Tave(month), Trmse(month)) + (-Tave(30 year, month), Trmse(30 year, month)).
        The temperature anomaly mean is a nice fat zero,
        but the rmse of the anomaly = sqrt(0.645^2 + 0.118^2), so
        TArmse(month, 30-year base) = 0.656 deg C, or +/- 1.079 deg C at 90% confidence.

        The uncertainty in the 30-year mean did not add much to the TArmse of the month, but it never reduces it. Furthermore, in this discussion of breakpoints, if we make segments short, say 5 years, then the uncertainty of the mean is Trmse(5 year, month) = 0.289 deg C. Adjusting by a 5-year mean between breakpoints would yield
        TArmse(month, 5-year base) = sqrt(0.645^2 + 0.289^2) = 0.707 deg C,
        or +/- 1.163 deg C at a 90% confidence interval.

        So more breakpoints and shorter segments increase the uncertainty in the temperature anomaly data stream. If you want to tease out climate signals of a fraction of a degree, you need long segments.
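
        The arithmetic above is easy to check; a short script (my own verification, using the same flat 5/15 deg C toy month and a normal 90% quantile) follows:

        import numpy as np

        readings = np.array([5.0] * 30 + [15.0] * 30)        # 30 daily mins + 30 daily maxes
        trmse_month = readings.std(ddof=0) / np.sqrt(60)     # ~0.645 deg C
        trmse_30yr = trmse_month / np.sqrt(30)               # ~0.118 deg C
        trmse_5yr = trmse_month / np.sqrt(5)                 # ~0.289 deg C

        ta_rmse_30 = np.hypot(trmse_month, trmse_30yr)       # ~0.656 deg C
        ta_rmse_5 = np.hypot(trmse_month, trmse_5yr)         # ~0.707 deg C

        z90 = 1.645                                          # two-sided 90% normal quantile
        for name, rmse in [("30-yr base", ta_rmse_30), ("5-yr base", ta_rmse_5)]:
            print(f"TArmse ({name}) = {rmse:.3f} deg C, +/- {z90 * rmse:.3f} at 90%")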

      • Matthew R Marler,
        Excuse me, but you write a lot on this thread while you do not seem to have mastered the subject. I suggest some literature:

        http://onlinelibrary.wiley.com/doi/10.1029/2001JD000354/pdf
        http://onlinelibrary.wiley.com/doi/10.1002/joc.689/pdf

        Good reading.

      • @Matthew R Marler at 1:59 pm |
        I think that you are going in circles

        No. I don’t deny that you can reduce the mean standard error, or the mean squared error, by increasing the sample size when errors are random. But in the process the errors (the variances, to be more specific) add at each step. The mean error can be reduced by an increase in the number of samples.

        You cannot subtract error, at least not when the error is random. Errors accumulate. Every estimate and adjustment contains error.

      • Matthew R Marler,
        I specify that Hansen et al. 2001 will show you why the BEST method is inadequate in the case of an increasing UHI. Regarding Böhm et al. 2001, you will find an interesting evaluation of the UHI effect on the Alpine network at the end of the nineteenth century (greater than 0.5 °C).

      • Windchasers

        Stephen Rasey says:

        You cannot subtract error, at least not when the error is random.

        Well, it’s a good thing that the errors aren’t random! =D

        Seriously, though. TOB is a systematic error, not random.

      • Matthew R Marler

        phi: http://onlinelibrary.wiley.com/doi/10.1029/2001JD000354/pdf
        http://onlinelibrary.wiley.com/doi/10.1002/joc.689/pdf

        I have written enough for one thread, but I do thank you for the link to the paper.

      • Matthew R Marler

        phi, I read the paper that you linked to, and here is a quote from the summary: This paper discusses the methods used to produce an Alpine-wide dataset of homogenized monthly temperature series. Initial results should illustrate the research potential of such regional supra-national climate datasets in Europe. The difficulties associated with the access of data in Europe, i.e. related to the spread of data among a multitude of national and sub-national data-holders, still greatly limits climate variability research. The paper should serve as an example of common activities in a region that is rich in climate data and interesting in terms of climatological research. We wanted to illustrate the potential of a long-term regional homogenized dataset mainly in three areas:
        (i) the high spatial density, which allows the study of small scale spatial variability patterns;
        (ii) the length of the series in the region, which shows clear features concerning trends starting early in the pre-industrial period; and
        (iii) the vertical component in climate variability up to the 700-hPa level.
        All these illustrate the advantage of using carefully homogenized data in climate variability research.

        Not only did they “homogenize”, but they worked with deviations rather than restricting themselves to absolute temps, and they estimated breakpoints. They were able to identify a trend “like” UHI, despite your assertion that such methods were the worst when such trends are present. I don’t see how it supports your original claim: “If a parasite trend affects the raw data, for example the increase in UHI, BEST uses the worst methods. Indeed, BEST removes very effectively the fixes present in the raw data in the form of discontinuities. For this reason the average of absolute temperature is a better method than anomalies.”

        The main obvious difference is that the BEST team used an explicitly Bayesian hierarchical model, whereas this team seems not to have.

      • @Windchasers at 4:41 pm |
        Well, it’s a good thing that the errors aren’t random! =D
        Seriously, though. TOB is a systematic error, not random.

        I agree. Systematic corrections can be added, as long as they carry the uncertainty in the magnitude of the correction. That flows back to the move-out example I used above. It is a real effect whose magnitude must be estimated, perhaps by looking for the value that maximizes coherence.

        TOB is a valid correction under some circumstances. (Personally, I think it is overrated, but valid.) The magnitude of the correction can only be estimated, even if it is a Bayesian estimation. But the mean standard error of the estimated TOBS correction is not zero and could be more than half the size of the correction itself. We must estimate how much to apply at that station, at that month, at that year (when the time of the change is not documented).

        To apply a TOBS correction to data recorded AFTER the recording time policy actually changed is certainly adding error.

      • Matthew R Marler,

        To remove discontinuities is a bad method if these discontinuities are in fact corrections. The results of Böhm and BEST are identically bad, since both remove these fixes and thus recover the bias in its full amplitude. I proposed Böhm because he explains the bias of discontinuities by a large UHI effect on the network in the nineteenth century. If it was important at that time, it can only have progressed until today.

        Otherwise, I can only encourage you to read chapter 4 of Hansen et al. 2001. You will read, for example: “…if the discontinuities in the temperature record have a predominance of downward jumps over upward jumps, the adjustments may introduce a false warming, as in Figure 1.”

        This character is actually present in the raw temperature data worldwide.

      • RE: Stephen Rasey at 5:34 pm |
        TOB is a valid correction under some circumstances. …. The magnitude of the correction can only be estimated, even if it is a Bayesian estimation. But the mean standard error of the estimated TOBS correction is not zero and could be more than half the size of the correction itself.

        I must add that the error associated with the uncertain estimate of the magnitude and timing of the TOBS correction is also a systematic, non-random error. If you over- or under-estimate the TOBS correction for one month, you will do so systematically for many other months. So we cannot assume the error will decrease by the sqrt(number of months it is applied).

        Likewise, when we create the temperature anomaly, we must add the negative of the mean for the month, with its mean standard error. The errors applied for May 2013 and June 2013 come from different estimates of the mean, and so those errors add randomly. But the errors added to TA(May 2013) and TA(May 2012) come from the same estimate of the mean, so the mean standard error is NOT random between years for the same month, though it is likely random between stations.
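
        A quick numerical illustration of that distinction (my own sketch, with arbitrary error sizes): averaging many months beats down a fresh random error, but leaves a shared systematic offset untouched.

        import numpy as np

        rng = np.random.default_rng(1)
        n_months = 600
        true_anomaly = 0.0

        random_err = rng.normal(0.0, 0.5, n_months)   # a fresh error every month
        systematic_err = rng.normal(0.0, 0.5)         # one mis-estimated correction, reused

        after_avg_random = (true_anomaly + random_err).mean()   # shrinks like 1/sqrt(N)
        after_avg_system = true_anomaly + systematic_err        # does not shrink at all

        print(f"residual error, random case:     {abs(after_avg_random):.3f} deg C")
        print(f"residual error, systematic case: {abs(after_avg_system):.3f} deg C")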

      • Windchasers

        To apply a TOBS correction AFTER the recording time policy was really changed is certainly adding error.

        No, I don’t think so. The TOB creates an ongoing bias – a hot bias if temperatures are recorded near the hottest part of the day, and a cold bias if temperatures are recorded near the coldest part.

        If we switched from recording in the afternoon to recording in the morning, I’d rather see us adjust for both biases, not just one. It seems more logically consistent that way.

      • @Windchasers at 6:03 pm |
        If we switched from recording in the afternoon to recording in the morning, I’d rather see us adjust for both biases, not just one. It seems more logically consistent that way.

        I cannot argue it wouldn’t be more consistent.
        If you want to apply a different TOBS(morning), a TOBS(Afternoon), a TOBS(Noon), and a TOBS(late evening), I have no theoretical objection —– Provided the mean standard error of the adjustment is applied and another error source is added to account for the probabilistic uncertainty that the wrong adjustment is used.

        You want to apply a 0.05 deg C TOBS(morning) adjustment with a 0.15 deg C mean standard error uncertainty? Knock yourself out.

  4. “The large contribution of adjustments to century-scale U.S. temperature trends lends itself to an unfortunate narrative that “government bureaucrats are cooking the books”.

    I used to laugh at accusations of conspiracy among establishment climate scientists. Then I read the climate-gate emails. I’m not laughing anymore.

    If there is an “unfortunate narrative,” these guys have no one to blame but themselves.

    • Steven Mosher

      please do not tar the NOAA people with the same brush as the CRU people.

      You know, early on in Climategate when the focus was on CRU, I used to get mails from right-wing organizations and people telling me that “we” had to find a way to turn this into a NOAA scandal.

      needless to say they got an earful from me.

      Climategate is not an indictment of the whole profession.
      People’s attempts to make Climategate about the temperature series, or about all climate scientists, are part of the reason why the investigations were botched.

      • Steve: Surely you don’t believe the Climategate investigation was botched ONLY because of a need to protect the validity of CRUTemp? The profession had other temperature records to fall back upon. Has the profession even recognized the mistakes that were made? What actions have been taken to ensure that problems don’t occur again? How about releasing all data and processing programs with publication? (You might wish to re-read your own book.)

      • Matthew R Marler

        Steven Mosher: please do not tar the NOAA people with the same brush as the CRU people.

        the difficulty there is that some NOAA people (including writers at RealClimate) defended the bad practices revealed in the CRU emails. So the NOAA people tarred themselves.

        I have to make this my last post, so if you reply you’ll have the last word. Your tenacity in defense of Zeke’s post and the BEST team is admirable, though I disagree with you here and there.

    • => “If there is an ‘unfortunate narrative,” these guys have no one to blame but themselves.”

      Indeed. They made you do it.

      • Hold on there, big fella.

      • They could have published all adjustments, with original data, and justifications based on the literature, instead of having skeptics discover it in the worst possible way: suspecting something was up, recording a snapshot, then watching the data change unannounced, always in ways that increased the warming trend. So yeah, they made skeptics do it.

      • Windchasers

        They could have published all adjustments, with original data, and justifications based on the literature, instead of…

        The adjustments and justification are right there in the literature, in papers ranging from 10-30 years old. And the data, justifications, adjustments, and explanations are available on the NCDC website:
        http://www.ncdc.noaa.gov/monitoring-references/faq/temperature-monitoring.php

        How much longer were they supposed to wait for you to do your due diligence?

        Don’t blame the scientists for your laziness.

      • Since the antics of Phil Jones and the CRU data, there is a certain Caesar’s wife expectation of historical climate data, on which depends decisions regarding trillions of dollars.

        Every time published data is modified, it should be noted as modified where it is published, along with a link to the previous data, and a link to the peer reviewed justification for the change.

        I am just suggesting strategies for coping with the appearance of a “thumb on the scale” since the apparent fact that the adjustments strongly trend in a single direction already looks bad enough.

        You guys are just trying to make skeptics, I swear. Take the steam out of these criticisms up front. Treat this data as transparently as if it were a bank statement to the owner of the money, because it is far more important than that.

        “Trust us”, plus name-calling or questioning the motivations of anybody who doesn’t automatically trust such important data on the say-so of obviously politically motivated climate scientists like Hansen, is simply no longer an option.

    • thisisnotgoodtogo

      Mosher said:

      “Climategate is not an indictment of the whole profession.”

      Oh, so the profession took care of it in a timely, open and transparent manner.

      Thanks for bringing truth, Steven

      • Mark Lewis

        That is an interesting question. How much can we hold the profession responsible for the actions of some of its prominent members?

        Mosher – how do you rate the profession’s response to CRU emails?

        For me, how the profession reacts to their outing is critical. Certainly, my information about the response by the profession was partial and probably biased, but the reaction of the climate/temperature profession to the CRU emails as a whole did not bolster my confidence in it.

      • Most in the profession were probably either a) doing climate science and not paying attention or b) frightened by the furore and decided to keep their heads down.

        Climategate is an indictment–of about half a dozen people who chose one of the worst times possible to act like complete bozos. It is in no way an indictment of climate science or the overwhelming majority of climate scientists.

      • And the whitewash of the Climategate investigations is an indictment – of what?

      • thisisnotgoodtogo

        Tom Fuller,

        Keeping your head down and being too fearful or too busy is an offense, and an indictment of the profession. Who spoke out publicly?

        A rotten bunch for sure.

      • When somebody who is purported to be a responsible scientist and the custodian and curator of a central repository of historic temperature data writes “I would rather destroy the data than hand it over to skeptics”, and then, amazingly, like the IRS, the very data in question is destroyed, I would say that the ‘profession’ has taken a severe black eye and has some serious reputation restoration work to do.

  5. How could you have written this article without once mentioning error analysis?

    Data, real original data, has some margin of error associated with it. Every adjustment to that data adds to that margin of error. Without proper error analysis, and without reporting that margin of error with the adjusted data, it is all useless. What the hell do they teach hard science majors these days?

    • Steven Mosher

      the error analysis for TOBS, for example, is fully documented in the underlying papers referenced here.

      first rule. read the literature before commenting.
      looking at the time of your response I have to wonder what you were taught.
      you didnt read all the references

      • I read about TOBS. They had a set of station data to analyze from the 50s and 60s (no hourly data was stored on mag tape after 64 or 65).

        One station moved 20 km and one moved 5 km, and other moves were “allowed” up to 1500 m … but they broke the rules for those two stations.

        How many stations were in the same place from the beginning to the end of the data?

        It could be zero.

      • Steven Mosher

        Bruce,

        Looks like you didnt read the papers. read the original paper and then the 2006 paper.

        And then do your own TOBS study.. oh ya, dont make the same mistakes you made with Environment Canada data

      • Zeke posted a link at WUWT to the papers.

        ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/

        The stations moved. The height of the thermometers changed.

        What a crappy “reference” collection …

        Karl 1986

        “For these reasons seven years of hourly data (1958–64) were used at 107 first order stations in the United States to develop equations which can be used to predict the TOB (Fig. 4). Of these 107 stations, 79 were used to develop the equations, and 28 were reserved as an independent test sample. The choice of stations was based on their spatial distribution and their station histories.

        Spatial station relocations were limited to less than 1500 m except for two stations—Asheville, North Carolina, and Tallahassee, Florida.

        These stations had relatively large station moves, 20 km and 5 km respectively, but they were retained because of their strategic location with respect to topography and major water bodies.

        At 72 of the 79 stations used to develop the TOB equations, temperature was recorded very close to 2 m above the surface.

        At the remaining seven stations, the instruments were repositioned from heights in excess of 5 m to those near 2 m sometime between 1958 and 1964.

        Changes in instrument heights from the 28 independent stations were more frequent: at nearly 50% of these stations the height of the instruments was reduced to 2 m above the ground from heights in excess of 5 m sometime in the same period”

      • “first rule. read the literature before commenting.”

        The great thing about this site is that it is not nanny-moderated like some of the other climate sites…

      • Steven Mosher

        two papers bruce.

        read them both.

        post your code

        stop calling people dishonest unless you have proof.

        Mosher, aren’t you going to thank me for reading the first TOBS paper and pointing out the serious problems with the data?

        What’s the name of the 2nd paper?

    • Matthew R Marler

      Patrick B: Every adjustment to that data adds to that margin of error.

      That is not true. The best adjustments reduce the error the most, whereas naive adjustments do not do a good job at all (an example of a “naive” adjustment: concluding that a data point is “bad” and omitting it from the analysis is computationally equivalent to a second-rate method of adjustment). This is explained in the vast quantity of mathematics and simulation analysis of diverse types of estimation, including the methods used by the BEST team. The papers of the BEST team explain their analyses in good detail, with supporting references. I put some references in comments on the posts by the estimable Rud Istvan.

  6. Congratulations, you’ve written a long post, managing to avoid mentioning all the main issues of current interest.

    “Having worked with many of the scientists in question”
    In that case, you are in no position to evaluate their work objectively.

    “start out from a position of assuming good faith”
    I did that. Two and a half years ago I wrote to the NCDC people about the erroneous adjustments in Iceland (the Iceland Met Office confirmed there was no validity to the adjustments) and the apparently missing data that was in fact available. I was told they would look into it and to “stay tuned for further updates” but heard nothing. The erroneous adjustments (a consistent cooling in the 1960s is deleted) and bogus missing data are still there.
    So I’m afraid good faith has been lost and it’s going to be very hard to regain it.

    • Hi Paul,

      Iceland is certainly an interesting case. Berkeley doesn’t get nearly the same scale of 1940s adjustments in their record: http://berkeleyearth.lbl.gov/stations/155459

      I wonder if it’s an issue similar to what we saw in the Arctic? http://www.skepticalscience.com/how_global_warming_broke_the_thermometer_record.html

      GHCN-M v4 (which hopefully will be out next year) and ISTI both contain many more stations than GHCN-M v3, which will help resolve regional artifacts due to homogenization in the presence of sparse station availability.

      • Zeke, I have no idea who you are, but posting links to sks, a website that still doggedly defends the Hockey Stick in public while trashing it when they thought nobody could see, is just over the top.

        How are we supposed to know what they really think when their editorial positions, when exposed, showed that they value propaganda over true “skeptical science”?

    • Steven Mosher

      Paul

      I dont see how good faith is lost.

      Like you I’ve reported any number of errors to NCDC.
      remember NCDC collects data supplied by sources.
      In some cases the errors have been corrected: NCDC informs the source and the change is made upstream.
      in some cases NCDC informs the source and changes are not made.
      in one case the change was made upstream and then in the next report the mistake was back in the record.

      you assume bad faith on one data point.

      bad science.

        “I dont see how good faith is lost.” – Steven Mosher

        “Data storage availability in the 1980s meant that we were not able to keep the multiple sources for some sites, only the station series after adjustment for homogeneity issues. We, therefore, do not hold the original raw data but only the value-added (i.e. quality controlled and homogenized) data.” – CRU

        They couldn’t have printed it?

        ” If they ever hear there is a Freedom of Information Act now in the UK, I think I’ll delete the file rather than send it to anyone.”” – Phil Jones.

        Nope, nothing to see here with Phil “Rosemary Woods” Jones.

        Honestly, pretending that Climategate never happened and so no good faith has been lost is lunacy.

    • Paul, even if they cocked up Iceland data completely, it’s kind of a postage stamp in terms of global temps, isn’t it? And of course you could legitimately reply that the entire globe is made up of postage stamps, but I would then ask if you have noticed similar problems elsewhere.

      If it were a conspiracy to drive temp records in one direction, wouldn’t they choose to fiddle with statistics in a wider region on smaller scales?

      • What if it is not a conspiracy, but bungling? They design a bad adjustment algorithm, run it, and it gives them data that looks like what they expect to see. So they declare it good and publish it in a journal with scant review and no data to speak of. Then, when you look under the hood, you find that the actual adjustments don’t fit reality, that the errors aren’t uniform but are most prevalent where data is less dense, and that the data was never tested at the station level. It was just compared to the expected result, and since it confirmed the expected result, the details were never looked at or understood. Then people will defend it, saying it is based on 30-year-old published results, failing to notice that it gives the ‘correct’ answer by getting it all wrong.

    • Matthew R Marler

      Paul Matthews: Congratulations, you’ve written a long post, managing to avoid mentioning all the main issues of current interest.

      That is unfair.

      Could you mention specifically one of the main issues of current interest that he managed to avoid mentioning? Clearly, he couldn’t address every issue of current interest in a posting of finite length, but perhaps you have a specific issue he might bring up next time, relevant to adjustments to the temperature data.

    • Don Monfort

      An issue of current interest:

      http://wattsupwiththat.com/2014/06/29/noaas-temperature-control-knob-for-the-past-the-present-and-maybe-the-future-july-1936-now-hottest-month-again/

      Anthony Watts:
      “This isn’t just some issue with gridding, or anomalies, or method, it is about NOAA not being able to present historical climate information of the United States accurately. In one report they give one number, and in another they give a different one with no explanation to the public as to why.

      This is not acceptable. It is not being honest with the public. It is not scientific. It violates the Data Quality Act.”

  7. Why are you still using anomalies? There are only 50 US stations with relatively complete monthly data from 1961 to 1990 in USHCN. The “anomaly” baseline is corrupted.

    Secondly, why not use Tmin and Tmax temperatures? Tmin is corrupted by UHI and therefore so is Tavg.

    Thirdly … a 5-year smooth? Quit tampering, Zeke.

    https://sunshinehours.wordpress.com/2014/07/03/ushcn-tmax-hottest-july-histogram-raw-vs-adjusted/

    • Steven Mosher

      Smoothing is not tampering.
      I suggest you go to JoNova and tell David Evans that smoothing TSI is tampering.

      dare you.

      • Smoothing is misleading in this case since we are trying to determine relatively small changes in trends.

        Smoothing removes data pertinent to this discussion.

      • Steven Mosher

        bruce,

        go to jonova. accuse them of being dishonest.
        prove you have principles.
        post your code.

      • If Zeke posts his R code for his infill graph, I’ll fix it and add trend lines and do one graph per month. And I’ll post his code.

        I have a bunch of USHCN data already downloaded.

      • Matthew R Marler

        sunshine hours: Smoothing removes data pertinent to this discussion.

        That is not true. Smoothing does not “remove” data. Do you perhaps have evidence that Zeke Hausfather has “removed” data? You are not disputing that they preserve their original raw data, and write out the adjustments and many other supporting statistics in separate files, are you?

    • Hi Bruce,

      Anomalies only use infilled data in the fourth case examined (QC + TOBs + PHA + infilling). In all other cases missing months during the baseline period are simply ignored. They are rare enough that the effect will be negligible.

      The reason I used a 5-year smooth on the first graph is that using monthly or annual data makes the difference between adjusted and raw data too difficult to see due to monthly and annual variability in temperatures. Smoothing serves to accentuate the difference if anything. The rest of the graphs show annual differences (though I could have been clearer in stating this in the text).
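
      In code, the baseline handling described above amounts to something like this (a sketch with a made-up array layout, not the actual processing code): average whatever 1961-1990 values exist for each calendar month, and subtract:

      import numpy as np

      def monthly_anomalies(temps, years, base=(1961, 1990)):
          """temps: one value per month, January first, with np.nan for missing;
          years: the year of each entry. Missing baseline months are simply
          ignored by nanmean rather than infilled."""
          temps = np.asarray(temps, dtype=float)
          years = np.asarray(years)
          anoms = np.empty_like(temps)
          for m in range(12):                      # each calendar month separately
              sel = slice(m, None, 12)
              in_base = (years[sel] >= base[0]) & (years[sel] <= base[1])
              climatology = np.nanmean(temps[sel][in_base])
              anoms[sel] = temps[sel] - climatology
          return anoms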

      • The mean # of Estimated values for tmax December 1961-1990 is 3.14.

        A little over 10%. Not rare.

        I haven’t checked for distribution by Elevation or Lat/Long.

      • Steven Mosher

        when you do bruce, post your code.
        we want to ISO9000 audit you.
        given your mistake with Env canada..

      • Mosher, you really are a bitter man. Just ask Zeke to redo his infilling graph to bolster his claim infilling doesn’t change the trends.

        You read way too many climategate emails. You just want to be as bloody-minded as them.

      • Estimated data is about 30m in elevation higher than non-estimated for the 1961-1990 period.

      • The reason I used a 5-year smooth

        Hopefully the frequency response of your smoothing method doesn’t have large side lobes.

        The question of how to smooth was discussed here at length some months ago in a post by Greg Goodman. For smoothing as a low-pass filter, a Gaussian filter can be taken as a good starting point. The many comments at Greg’s post by a number of contributors considered variants of the Gaussian filter with different criteria for how to minimize side lobes. No one spoke up in defense of moving-average smoothing.

        More sophisticated methods get into band-pass filters, for which even-order derivatives of the basic Gaussian filter are good, starting with the so-called Mexican hat or Ricker filter.
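
        If anyone wants the side-lobe point in numbers, a few lines suffice (my own sketch; the widths and the stopband threshold are arbitrary): compare the frequency response of a 60-month moving average with a Gaussian of comparable width.

        import numpy as np

        n = 60
        boxcar = np.ones(n) / n                         # plain moving average
        x = np.arange(n) - (n - 1) / 2
        gauss = np.exp(-0.5 * (x / (n / 6)) ** 2)       # Gaussian of comparable width
        gauss /= gauss.sum()

        freqs = np.fft.rfftfreq(1024)                   # cycles per month
        H_box = np.abs(np.fft.rfft(boxcar, 1024))
        H_gauss = np.abs(np.fft.rfft(gauss, 1024))

        # Past its first few nulls the boxcar's side lobes decay only slowly
        # (its first lobe sits near -13 dB); the Gaussian falls off monotonically.
        stop = freqs > 2.4 / n
        print(f"boxcar worst stopband leakage:   {20 * np.log10(H_box[stop].max()):.1f} dB")
        print(f"gaussian worst stopband leakage: {20 * np.log10(H_gauss[stop].max()):.1f} dB")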

      • A C Osborn

        Zeke Hausfather | July 7, 2014 at 12:14 pm |

        Hi Bruce,

        Anomalies only use infilled data in the fourth case examined (QC + TOBs + PHA + infilling). In all other cases missing months during the baseline period are simply ignored. They are rare enough that the effect will be negligible.

        The very thing Steve Goddard was slated for.

  8. David in Cal

    OK, but I still have two concerns:
    1. Can purely formula adjustments be fully adequate? That is, wouldn’t it be better to look at the actual characteristics of each weather station over time? (Granted, that’s a big job.)
    2. How much variation is added by the adjustment process? Is this variation reflected in various models? My impression is that this source of variation is ignored; that models take the adjusted values as if they were actual certain readings.


    • David, you said at WUWT, “If you want to understand temperature changes, you should analyze temperature changes, not temperatures.” You are right, and that is what Motl did on the HADCRUT3 dataset.

      http://motls.blogspot.ca/2011/07/hadcrut3-30-of-stations-recorded.html

    • Steven Mosher

      “OK, but I still have two concerns:
      1. Can purely formula adjustments be fully adequate? That is, wouldn’t it be better to look at the actual characteristics of each weather station over time? (Granted, that’s a big job.)”

      Be more specific.
      A) instrument changes. A side by side test was conducted on the LIG versus MMTS. MMTS was demonstrated to introduce a bias. That bias has a mean value and an uncertainty. This correction is applied uniformly to every station that has the bias. What would you suggest?
      B) how do you handle stations that started in 1880 and ended in 1930? time travel to investigate the station?
      C) yes, formula adjustments are adequate.

      2. How much variation is added by the adjustment process? Is this variation reflected in various models? My impression is that this source of variation is ignored; that models take the adjusted values as if they were actual certain readings.

      A) what models?
      B) what do you mean by “variation added”? the best estimate of the bias is calculated. It is added or subtracted from the record.
      Roy Spencer does the same thing for UAH; ask him how it works.
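
      The mechanics of such a uniform correction are simple to write down (a sketch with made-up numbers; the real MMTS offsets and their uncertainties are in the papers referenced in the post):

      import numpy as np

      # Hypothetical side-by-side result: the new sensor reads cooler by a fixed amount.
      bias = -0.4      # deg C, mean offset from the paired test (made-up value)
      bias_se = 0.1    # standard error of that offset (made-up value)

      def correct_instrument_change(series, change_index):
          """Shift readings taken after the instrument change by the estimated
          bias, and carry the correction's uncertainty alongside the data."""
          series = np.asarray(series, dtype=float)
          corrected = series.copy()
          corrected[change_index:] -= bias            # remove the estimated cool bias
          added_se = np.zeros_like(series)
          added_se[change_index:] = bias_se           # uncertainty introduced by adjusting
          return corrected, added_se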

      • Absolutely, I vote for time travel. Then we can educate all those farmers about ISO 9000.

        Steven Mosher | July 7, 2014 at 11:35 am |

        A) instrument changes. A side by side test was conducted on the
        LIG versus MMTS. MMTS was demonstrated to introduce a bias.

        This made me curious, and these are probably rhetorical rather than actual questions. Did they not calibrate them in a metrology department? And did they compare more than one? You could be adding a half-degree adjustment for an issue with potentially only a subset of the actually deployed thermometers.

        Which is yet another reason why, IMO, any adjustment after the fact is based on less information than was available when the record was recorded, at least generally. I understand why you want to correct the data, but as I tell my data customers, at some point, after enough changes, it’s not your data anymore; it’s made up. I’ll even go as far as saying it’s probably more accurate, but the error of that data is larger. It has to be.

  9. Why does figure 5 use 1900-1910 as the reference period when the graph it is trying to emulate uses 1900 to 1999?

  10. It all sounds very logical except for the assumptions, e.g. assuming current measurements are more accurate. And what I can see from studying this for close to a decade now is that the ‘revisions’ always seem to make the past colder, to the point that they are now in conflict with non-NOAA and non-NASA temperature records. There is no way I would believe that the data is not being manipulated to some degree without an ‘independent’ and openly published study.

    • Steven Mosher

      “There is no way I would believe that the data is not being manipulated to some degree without an ‘independent’ and openly published study.”

      See BerkeleyEarth.

      “It all sounds very logical except for the assumptions e.g. assuming current measurements are more accurate.”

      There are 114 pristine stations, called CRN, that have been in operation for a decade. These stations are stamped with a gold seal by WUWT.

      Guess what happens when you compare these 114 to the rest of the stations: NO DIFFERENCE.

    • Steven Mosher

      where is your code bruce.
      ISO9000 for you.. get crackin.

      • Are these comments really necessary? They seem more like a past issue that Mr. Mosher cannot get over.

      • Matthew R Marler

        FTA: Are these comments really necessary? They seem more like a past issue that Mr. Mosher cannot get over.

        There is that problem that “seems” is in the mind of the beholder. It seems to me that sunshinehours1 and some other people are posing the same misunderstandings over and over (ignoring the substantial statistical literature on methods of estimation and their error rates), forcing Steven Mosher and some others to make the same statistical points over and over.

      • Steven Mosher

        No FTA.

        I hold all people to the same standard.
        where were you when we badgered Hansen for code?
        in your mother’s basement?

        Matthew, then wouldn’t it simply be better to refer readers to that fact? The comments from Mosher simply don’t help the dialogue along and instead turn it combative and nonproductive.

        Mosher – you don’t seem capable of being civil, from my perspective as a newcomer to this topic. I’ll note you as an ideologue and focus my interest in learning towards others such as Zeke (who presents an excellent article and continues to answer professionally).

  11. It appears there should be a limited number of stations that did not change their TOBS. How does the trend of those stations, assuming they wouldn’t require a TOBS adjustment, compare to the trend of the stations in the same region where the adjustment has been made? Has this analysis been done? If there is no difference, the TOBS corrections are probably accurate. If not, why don’t they match up?
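
    The bookkeeping for such a check is straightforward once you have a list of stations with no documented observation-time change; a sketch follows (hypothetical column names, not a real USHCN loader):

    import numpy as np
    import pandas as pd

    def trend_per_decade(g):
        """OLS slope of annual anomalies, in deg C per decade."""
        return np.polyfit(g["year"], g["anomaly"], 1)[0] * 10

    def compare_tobs_groups(df):
        """df columns (hypothetical): station_id, year, anomaly, tobs_changed.
        Returns the mean station trend for changed vs. never-changed stations."""
        result = {}
        for changed, grp in df.groupby("tobs_changed"):
            trends = grp.groupby("station_id").apply(trend_per_decade)
            result["changed" if changed else "unchanged"] = trends.mean()
        return result

    # If TOBS-adjusted stations and never-changed stations in the same region
    # give the same mean trend, the correction is probably doing its job.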

  12. David Springer

    Stepwise differences due to USHCN adjustments.

    As one can clearly see in this breakdown, straight from the horse’s mouth, without TOBS and SHAP adjustments there is no warming trend in the US instrument record.

  13. Cool! Thanks for writing this. I look forward to working through it.

  14. “The large contribution of adjustments to century-scale U.S. temperature trends lends itself to an unfortunate narrative that “government bureaucrats are cooking the books”.

    I think the genie is out of the bottle. At best, we can conclude that the fact that adjustments are being made means there hasn’t been, and is not, a good process in place for measuring and reporting temperature.

    Andrew

    • Without making too much of it, I would have to agree. Unfortunate for the hotties that the stations switched time of day.
      TOB has to be difficult. Living in Colorado tells me that. Without thunderstorm knowledge, the temperature adjustment has got to be incredibly difficult. I can’t reach the ftp site yet. Interested in the further discussion of TOB.

    • Matthew R Marler

      Bad Andrew: I think the genie is out of the bottle. At best, we can conclude that the fact that adjustments are being made means there hasn’t been, and is not, a good process in place for measuring and reporting temperature.

      That is just plain ignorance.

    • Steven Mosher

      sadly you dont post your code so I cant find your error.
      unlike the time you botched the Environment Canada data, when your error was obvious.

  15. In view of all you’ve written, Zeke, should the record ever be used to make press releases saying ‘warmest on record’ or ‘unprecedented’ when, no matter how honest the endeavour, the result has to be somewhat of a best guess? Especially when the differences between high scorers are so small.

    • If I ran the organisation doing these stats and anyone even so much as implied anything “good” or “bad” about the temperature, I’d kick them out so fast that their feet would not touch the ground.

      That is what you need in an organisation doing these stats. Instead, it is utterly beyond doubt that those involved are catastrophists using every possibility to portray the stats in the worst possible light.

      That is why I’d kick the whole lot out. The principal aim, indeed perhaps the sole aim, should be to get the most impartial judgement of the climate.

      Instead we seem to have people who seem no better than Greenpeace activists trying to tell us “it’s worse than we thought”.

      Yes, it’s always worse than they thought – but not in the way they suggest. It’s worse because nothing a bunch of catastrophists say about these measurements can ever be trusted.

    • Steven Mosher

      Press releases claiming “warmest” or “coolest” are rather silly in my mind,
      precisely for the reason you state.

      now, back to the science.

        Steven, in what I think must have been March 2007, whilst I was waiting for the February HADCRUT figure to come out, there was a deluge of climate propaganda, so that nightly the news was full of climate-related stories. Then eventually (I would guess more than a week late) the figure came out, and it showed the coldest February in 14 years. Of course there was no official press release, and in retrospect it was obvious that the propaganda and the late release of the data were meant to saturate the media with stories so that they would not pick up on the story that global warming had come to an end (at least for that month).

        Over the next few months and years that figure has “warmed”. For anyone working in a quality environment, that kind of creeping change is a total anathema. For those producing climate data, it seems to be a given that they can constantly change the data in the past without so much as an explanation.

        That February 2007 was the point I realised the figures are so bound up in propaganda that, even with the best will in the world, the people involved could not be trusted. Climategate proved me right.

        Now 7 years later, nothing really has changed. We still have people making excuses for poor quality work. And to see the difference between “trying your best” and “fit for purpose”, see the image on my article:
        https://scottishsceptic.wordpress.com/wp-admin/post.php?post=3657&action=edit&message=6&postpost=v2

        None of them are accused of not “trying their best” – it was just that they didn’t produce something that met the requirements of the customer.

      • Matthew R Marler

        Scottish Sceptic: For those producing climate data it seems to be a given that they can constantly change the data in the past without so much as an explanation.

        Given the plethora of explanations, why the claim that there has not been an explanation?

      • Steven Mosher

        More pr comments about pr.
        Back to the science

  16. Having worked with many of the scientists in question, I can say with certainty that there is no grand conspiracy to artificially warm the earth; rather, scientists are doing their best

    Well it isn’t good enough.

    You sound like someone talking about a charity where no one quite knows where the money has gone and some are claiming “they are doing their best”.

    We don’t need the academics “best”, what we need is the standard of qualify, accountability and general professionalism you see in the world outside academia.

    So:

    1. Fully audited methodology and systems
    2. Quality assurance to ISO9000
    3. Some comeback, WHEN we find out they weren’t doing the job to the standard required, that doesn’t involve putting them in jail.
    4. Accountability to the public – that is to say – they stop saying “we are doing our best” and start saying “what is it you need us to do”.

    • ISO 9000 on readings taken by farmers 100 years ago?

      • Think of it as repairing cars – the cars may be junk, but that does not mean you can’t do a good job.

        ISO9000 cannot improve the original data, but it will create a system which ensures quality in handling that data and the key to the system is the internal auditing, fault identification and correction.

        Instead, the present system is:
        1. Pretend it’s perfect
        2. Reluctantly let skeptics get data – “only because you want to find fault”.
        3. Deny anything skeptics find
        4. When forced to admit they have problems – deny it is a problem and claim “we are only trying our best”.

        Basically: Never ever admit there is any problem – because admitting problems shows “poor quality”.

        In contrast to ISO9000 … only by searching for problems and admitting them can you improve quality.

    • Steven Mosher

      “We don’t need the academics “best”, what we need is the standard of qualify, accountability and general professionalism you see in the world outside academia.”

      standard of “qualify”?

      stones and glass houses.

      The data is all open
      The code is all there.

      Yes, in a perfect world everyone would be ISO9000. But as you know, you are very often faced with handling data that was generated before the existence of ISO9000.

      According to ISO9000, how are these situations handled?

      Be specific, site the standard.

      • @ Steve Mosher

        “The data is all open
        The code is all there.”

        And as Zeke went to great lengths to point out, the actual data stinks. Without going into motivations, the simple fact is that the actual data is being heavily massaged and used to produce headline after headline that states some variation of ‘Year X or Month Y is the hottest year/month of the last thousand years (or some other long period), beating the old record by a small fraction of a degree, and proving that we need to take action now to control ACO2 to avoid catastrophic climate change.’. And no amount of correcting, kriging, infilling, adjusting, estimating, or any other manipulation of sow’s ear data is going to turn it into silk purse data capable of detecting actual century or multi-century anomalies in the ‘temperature of the Earth’, whatever that is, with reliable hundredth or even tenth of a degree precision. The actual instrumentation system and the data collected by it is not ‘fixable’, no matter how important it is to have precision data, how hard the experts are trying to massage it, or how noble their intentions are in doing so. Using the previous analogy of the auditor, if the company to be audited kept its books on napkins, when they felt like it, and lost half of the napkins, no auditor is going to be able to balance the books to the penny. Nor dollar.

        We are told that anthropogenic climate change is the most important problem facing the human race at this time and for the foreseeable future. If so, why don’t the climate experts act like it?

        Want to convince me that it is important? Develop a precision weather station with modern instrumentation and deploy a bunch of them world wide.

        Forget the 19th-century max/min, read-them-by-hand thermometers and deploy precision modern instruments that collect data electronically, every minute if necessary, buffer it, and send it back to HQ at least daily for archiving. Make sure that they include local storage for at least a year or two of backup, in case of comms failure. Storage is cheap, in the field and at HQ.

        Deploy the stations in locations where urban heat is not a factor and in a distribution pattern that guarantees optimum geographic coverage. It is no longer necessary to have humans visit the stations for anything other than routine maintenance or, for really remote sites where electronic data forwarding is not feasible (Where would that be nowadays?), periodic data collection.

        Set up a calibration program; follow it religiously. Ensure that the ACTUAL data collected is precise enough for its intended purpose and is handled in a manner that guarantees its integrity. If data is missing or corrupted, it is missing or corrupted. It cannot be ‘recreated’ through some process like the EDAC on a disk drive. It’s gone. If precise data can be generated through kriging, infilling, or whatever, why deploy the collection station in the first place?

        Collect data for a long enough period to be meaningful. Once collected, don’t adjust, correct, infill, krig, or estimate the data. It is either data or it isn’t.

        Oh, and give up the fiction that atmospheric CO2 is the only important factor in climate variability, the climate models that assume that it is, and the idea that we can ‘adjust the thermostat of the Earth’ by giving the government, any government, taxing and regulatory authority over every human activity with a ‘carbon signature’

      • k scott denison

        You know, Mosher, your repeating “the data is open, the code is all there” doesn’t relieve you of responsibility. You act as if this absolves you and your colleagues. Those of us out here in the real (regulated) world find that attitude arrogant and counterproductive. My advice to you is to develop an ISO9000 QMS system and have it audited. That would buy a lot of credibility. Until then, your snide remarks are undoing what credibility you may have had.

      • site the standard?
        stones and glass houses indeed!

      • Matthew R Marler

        Bob Ludwick: the simple fact is that the actual data is being heavily massaged and used to produce headline after headline that states some variation of ‘Year X …’ and blah, blah, blah.

        The BEST team is doing the best possible with the records that exist. Silk purses and sow’s ears are not in the picture. That some people may be motivated to prove global warming and others may be motivated to prove there is no global warming, there is no justification for ignoring the temperature record outright or using purely naive methods.

        Whether CO2 is important or not, getting the best inferences possible out of the data that exist is the best approach.

        You are not advocating that the whole temperature record be ignored, are you? If not, what exactly is wrong with the BEST team using the best methods?

      • Steven Mosher

        “You know, Mosher, your repeating “the data is open, the code is all there” doesn’t relieve you of responsibility. You act as if this absolves you and your colleagues. Those of us out here in the real (regulated) world find that attitude arrogant and counterproductive. My advice to you is to develop an ISO9000 QMS system and have it audited. That would buy a lot of credibility. Until then, your snide remarks are undoing what credibility you may have had.”

        1. Who said we were relieved of responsibility?
        2. you find it arrogant. boo frickin hoo. your job is to find the mistake. you dont like my attitude, see your therapist. get some meds.
        3. What makes you think that ISO9000 is even the right standard?
        4. No amount of process will change your mind. You are not the least bit interested in understanding. Look, you could be a skeptical hero. go do your own temperature series.
        5. credibility. Whether or not you believe me is immaterial. You dont matter. get that yet? when you do work and find the problems, then you matter. or rather your work matters. Appealing to credibility is the flip side of an appeal to authority.

      • Is the current product worth the price paid?

        “2. you find it arrogant. boo frickin hoo. your job is to find the mistake. you dont like my attitude, see your therapist. get some meds.”

        Okay, Mr. “go read a book”. Go read this book:

        http://www.amazon.com/How-Sell-Yourself-Winning-Techniques/dp/1564145859/ref=sr_1_3?ie=UTF8&qid=1404838524&sr=8-3&keywords=selling+yourself

        Some relevant quotes:

        “Communication is the transfer of information from one mind to another mind…. Whatever the medium, if the message doesn’t reach the other person, there’s no communication or there’s miscommunication….
        We think of selling as being product oriented….Even when there’s a slight price difference, we rarely buy any big-ticket item from someone we really dislike.
        Ideas aren’t much different. The only time we pay close attention to an idea being communicated by someone we don’t like is when we have a heavy personal investment in the subject….
        Don’t waste your time with people on your side. They’re already yours…Forget about trying to convince the people on the other side. You’re not likely to make a convert with a good presentation. They’re already convinced that you’re wrong, or a crackpot, or worse. The only people who matter are the folks who haven’t made up their minds. The undecided. And how do you win them? By presenting yourself as a competent and likable person.”

        You can thank me later.

        @ Matthew R Marler

        “……….headline that states some variation of ‘Year X …’ and blah, blah, blah.

        The BEST team is doing the best possible with the records that exist. Silk purses and sow’s ears are not in the picture. That some people may be motivated to prove global warming and others may be motivated to prove there is no global warming, there is no justification for ignoring the temperature record outright or using purely naive methods.

        Whether CO2 is important or not, getting the best inferences possible out of the data that exist is the best approach.

        You are not advocating that the whole temperature record be ignored, are you? If not, what exactly is wrong with the BEST team using the best methods?”

        WHY is the BEST team doing the ‘best possible with the records that exist’? Why is it important that multi-century-old data, collected by hand using data handling procedures that in general would earn a sophomore physics student a D- at best, using instruments wholly unsuited to the task, be massaged, corrected, infilled, kriged, zombied, and otherwise tortured beyond recognition in order to tease out ‘anomalies’ of small fractions of a degree per decade, if NOT for the headline that states some variation of ‘Year X …’ and blah, blah, blah? What OTHER purpose justifies the billions of dollars and thousands of man-years of effort? Were it not for the headlines, and the accompanying demands for immediate political action to control ACO2 to stave off the looming catastrophe that it will cause if we don’t control it, all citing the output of the ‘best efforts’ of the BEST team and others as evidence, would anyone notice that we are, as we speak, being subjected to the ongoing ravages of ACO2-driven climate catastrophe?

        Are you actually claiming that the ‘best efforts’ of the data massagers are able not only to tease out temperature anomalies with hundredth-degree resolution for the ‘annual temperature of the Earth’ going back a thousand years or more (all but the most recent couple of hundred years based solely on a variety of ‘proxies’), but, having teased them out, to successfully attribute them to some specific ‘driver’, like ACO2?

      • Steven Mosher

        Nickels. You are not the customer.
        I am not interested in selling to you or anyone else.
        Folks who want the data get it for free.
        Psst
        You did a bad job of selling the book.
        Perhaps you should reread it

      • k scott denison

        Sorry Mosher, I didn’t realize your efforts were all mental masturbation. By all means, carry on both with your efforts to create information from data that isn’t up to the task and with trying to convince whomever it is you are trying to convince of whatever it is you are trying to convince them of. Because honestly, most of the scientific world doesn’t believe you or your data.

        Good luck with that.

      • k scott denison wrote:

        “My advice to you is to develop an ISO9000 QMS system and have it audited. That would buy a lot of credibility.”

        I think anyone who has worked in the regulated world has an appreciation for that comment but also can see the fleeting sardonic smile on your face when you wrote the above.

  17. Thanks for the sensible post Zeke…you may not get the kindest reaction here for suggesting there’s no massive conspiracy.

    • Zeke is doing a good enough job of proving there is a small conspiracy to mislead.

      • Steven Mosher

        Lewandowsky loves skeptics like you.

      • Mosher, Zeke has had since June 5th to prove me wrong by emulating my graphs.

        http://rankexploits.com/musings/2014/how-not-to-calculate-temperature/

        If I’m wrong I will apologize.

      • Sorry Bruce, averaging absolute temperatures when the network isn’t consistent gives you screwy results. The graphs in this post are nearly identical to those in Menne et al 2009, and use a method (anomalies + spatial weighting) used by pretty much every published paper examining surface temperature data.
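
        To make “screwy” concrete, here is a minimal Python sketch (entirely synthetic data, not the real USHCN network) of what happens when a cold station drops out and you average absolute temperatures instead of anomalies:

        import numpy as np

        # Two synthetic stations with the same 0.01 C/yr trend but different
        # baseline climates; the colder one stops reporting after 1950.
        years = np.arange(1900, 2000)
        trend = 0.01 * (years - 1900)
        warm = 15.0 + trend                  # valley station, full record
        cold = 5.0 + trend                   # mountain station, drops out

        # Averaging absolutes: the network change creates a spurious step.
        absolute = np.where(years < 1950, (warm + cold) / 2, warm)

        # Averaging anomalies (each station relative to its own 1900-1949 mean):
        warm_anom = warm - warm[years < 1950].mean()
        cold_anom = cold - cold[years < 1950].mean()
        anomaly = np.where(years < 1950, (warm_anom + cold_anom) / 2, warm_anom)

        print(absolute[49], absolute[50])    # ~5 C artificial jump at 1950
        print(anomaly[49], anomaly[50])      # smooth; the real trend survives

        The published methods add spatial weighting on top of this, but the anomaly step alone is what keeps a changing station mix from masquerading as a climate signal.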

      • Zeke, you wrote in this blog post: ” infilling has no effect on CONUS-wide trends.”

        Yet you won’t post a graph with trendlines or post the trend difference.

        And your graph has a -0.2 to 0.5 scale and the data barely gets away from 0.

        We could be arguing about the trends if your post had numbers.

      • The graph has a scale consistent with all the other graphs. The impact of infilling is pretty much trend-neutral (rather by definition since it mimics spatial interpolation). The big adjustments are TOBs and the PHA.
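
        For what it’s worth, putting a number on “trend-neutral” is a one-liner once you have the two CONUS series; the series below are synthetic placeholders, not the actual USHCN output:

        import numpy as np

        def trend_per_decade(years, temps):
            """Least-squares slope, converted to degrees per decade."""
            return 10 * np.polyfit(years, temps, 1)[0]

        # Placeholders: substitute the real infilled/non-infilled series here.
        years = np.arange(1895, 2014)
        rng = np.random.default_rng(0)
        with_infill = 0.007 * (years - 1895) + rng.normal(0, 0.2, years.size)
        without_infill = 0.007 * (years - 1895) + rng.normal(0, 0.2, years.size)

        print(trend_per_decade(years, with_infill)
              - trend_per_decade(years, without_infill))  # ~0 if trend-neutral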

      • Matthew R Marler

        sunshinehours1: Zeke is doing a good enough job of proving there is a small conspiracy to mislead.

        This is total ignorance. You plain and simply do not understand how the statistical analysis procedure works. And your evidence for a conspiracy to mislead is that your demonstrably inferior inferences are different in some cases?

    • Chris,

      From the outside looking in, the direction the adjustments almost always go seems pretty “convenient.” But the implication that most of us believe AGW is a “massive conspiracy” is also convenient. Seems the true conspiracy whack jobs are on your side of the fence.

      Would you care for a little cream and sugar with your straw man?

      • Steven Mosher

        see the sunshine.

      • Uh, when Obama says he is making $1 billion available to fight “climate change”, just who in academia do you think will get this through grants? Anything even REMOTELY skeptical will not even see the light of day. Yes, that computes to MASSIVE…..
        Should Marcott get more grant money?

      • Steven Mosher

        DAYHAY

        changes the subject. not interested in understanding science

      • Mosh, you don’t want to be accused of being only interested in those who change the subject ;-)

      • Steven Mosher

        phatboy

        I think I could build a bot to parse comments and classify them

    • So I assume my graphs are not wrong, you just disagree with me on their significance.

      What do you mean by “screwy results”, since you left the trend lines out of your infilling graphs?

      http://sunshinehours.wordpress.com/2014/07/07/misleading-information-about-ushcn-at-judith-currys-blog/.

    • stevefitzpatrick

      Chris Colose,
      In a development which is nearly as shocking as Nixon going to China, for once I agree with you; Zeke has done a good job of explaining a fairly messy process. I also agree he won’t convince some people of anything, but at least he has laid out a clear explanation. Let’s hope it influences the less strident.

    • You left-wing scientivists love conspiracy theories far more; e.g. that every skeptic apparently receives money from Exxon or the Koch Brothers (who?), or that the USA went to war with Iraq because of oil. I bet most of you believe some other big whoppers too. Where did the expression Big Pharma come from anyway? So physician, heal thyself!

      Of course conspiracies do actually happen but I don’t believe you are a conspiracist. I believe you and your fellows genuinely believe the planet is warming dangerously due to manmade emissions. The main problem is that nature fundamentally disagrees with you. This is actually a very common occurrence in the history of science and is perfectly normal, even necessary for science to progress. It is also perfectly normal to find it difficult to admit you have been teaching (or been taught) the wrong thing for years. So conspiracy no, cognitive dissonance hell yeah!

      We have now conducted the experiment of adding a large slug of manmade CO2 and planet earth just shrugged it off. This experiment tells us that CO2 is clearly no more than a minor feedback to the climate system. Never mind the skeptics, that is what the actual data is screaming at you. You and your cronies just refuse to believe it, for reasons that are likely nothing to do with climate should you bother to think about it objectively.

    • I agree that Zeke’s post is sensible and helpful. It underscores the absurd nature of the task of trying to make sense of massive amounts of data collected in a haphazard way over the course of many many years by a lot of different groups. To further assert that the results of analyzing the data are adequate to determine that CAGW is real and the most important problem facing mankind is troubling.

    • Matthew R Marler

      Chris Colose: Thanks for the sensible post Zeke…you may not get the kindest reaction here for suggesting there’s no massive conspiracy.

      Thank you for that.

  18. Zeke,
    I’m a bit confused by figure 3, the distribution of Tobs over the USHCN. There are now only ~900 actual stations reporting rather than ~1200. However, the total station count in figure 3 appears to remain constant near 1200. How can a Tobs be assigned to a non-reporting station?

  19. Zeke, which version of USHCN was used? Because USHCN recalculates a lot of its temperatures daily I always try to put version numbers on the graphs.

    http://sunshinehours.wordpress.com/2014/06/28/ushcn-2-5-omg-the-old-data-changes-every-day-updated/

    The changes tend to warm the present as usual.

  20. “Nearly every single station in the network has been moved at least once over the last century, with many having 3 or more distinct moves”

    What is the major cause of station moves?

    Is the general trend to move from a more urban environment to a more rural environment?

    Can we surmise that just after the move of a station the data is likely to be less wrong than at any other time in the station history?

    • In the 1940s there was a big transition from urban rooftops to more rural locations. When MMTS instruments were installed most stations had to move closer to a building to allow for an electric wired connection. Other station moves happen frequently for various other reasons.

      • Surely in this situation the adjustments to the raw data for an individual station should only apply at the point in time the change in location/instrument/TOBs took place?

      • Zeke, the fact that you mentioned you had worked with many of the people involved would prevent you from doing any analysis in the private sector. By definition, you are biased not only because of this, but also because you and Mosher have declared yourselves to be warmists/lukewarmers on multiple occasions. Did you honestly believe you’d be viewed as objective?

      • Steven Mosher

        “By definition, you are biased not only because of this, but also because you and Mosher have declared yourself to be warmists/lukewarmers on multiple occasions.”

        The problem with this is that you haven’t read any of my comments on the issue of adjustments between 2007 and 2010.
        In short I was highly skeptical of everything in the record.
        Until I looked at the data.

        Then again perhaps we should use your rule.
        Anthony is a non-warmist. He is not objective.
        Willis is a non-warmist. He is not objective.

        All humans have an interest. We cannot remove this.
        We can control for it.
        How?

        Publish your data. Publish your method. let others DEMONSTRATE
        how your interest changed the answer.

        Oh, two years ago WUWT published a draft study. no data. no code.
        and you probably believe it.

        Scafetta argues it’s the sun. no data. no code. you probably believe it.

      • bit chilly,

        They are only applied when and where the breakpoint is detected. However, because these breakpoints tend to add a constant offset going forward (e.g. 0.5 C max cooling when switching to MMTS), you need to either move everything before the breakpoint down 0.5 C or everything after the breakpoint up 0.5 C. NCDC chooses the former as they assume current instruments are more accurate than those in the past, though both approaches have identical effects on resulting anomaly fields.
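
        A small Python sketch of why the two conventions are interchangeable (synthetic series, with a hypothetical 0.5 C instrument step in 1985):

        import numpy as np

        # Hypothetical station: a smooth trend plus a 0.5 C cooling step
        # when an MMTS instrument is installed in 1985.
        years = np.arange(1950, 2014)
        truth = 0.01 * (years - 1950)
        raw = truth - np.where(years >= 1985, 0.5, 0.0)

        # NCDC convention: trust the current instrument, shift the PAST down.
        adj_past = raw - np.where(years < 1985, 0.5, 0.0)
        # Alternative: shift everything AFTER the breakpoint up instead.
        adj_future = raw + np.where(years >= 1985, 0.5, 0.0)

        # The two differ only by a constant, so their anomalies are identical.
        print(np.allclose(adj_past - adj_past.mean(),
                          adj_future - adj_future.mean()))  # True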

      • “you and Mosher have declared yourself to be warmists/lukewarmers on multiple occasions”

        I’ve pointed this out many times. The chances of them presenting information that contradicts their own declarations is zero.

        Andrew

      • andrew adams

        And by the same logic any chance of you accepting information which contradicts your own declarations is also zero. So basically none of us can ever really learn anything, or educate others, so we may as well give up on any hope of improving human knowledge.

      • Steven Mosher

        “I’ve pointed this out many times. The chances of them presenting information that contradicts their own declarations is zero.”

        Actually not.

        see my declarations about adjustments and UHI and microsite before I actually worked through the data. I used to be skeptical. I declared that.
        I was dead wrong.

        The chances of you looking at my past declarations is zero.

      • “see my declarations about adjustments and UHI and microsite before I actually worked through the data”

        Why don’t you post one in a comment and link a reference to it? Should be easy.

        Andrew

      • Steven Mosher

        easy
        start there
        http://climateaudit.org/2007/06/14/parker-2006-an-urban-myth/

        there are tons of others.

        read much.

      • Mosher,

        Why do I have to dig for it? Why don’t you just quote what you had in mind?

        Andrew

      • Andrew, why not grow a pair and do your own legwork.

      • I looked through Mosher’s link to CA and there are no “declarations” from him concerning adjustments and/or UHI.

        Thanks for nothin Mosher, as usual.

        Andrew

  21. Adjust this:

    [EPA chart; source data undocumented]

    Such is the Socio-Economics of Global Warming!

    • Steven Mosher

      Easy.

      Zeke shows you how in his paper. The sum total of UHI in the US is around 0.2C. Correctable.

      However, linking to a chart from the EPA that has no documentation of its source data, effectively one data point, is just the sort of science one expects from Wagathon.

      one data point. from an EPA chart. that doesn’t show its source…

      man, if you were Mann trying to pull that sort of stunt, Steyn would write a column about it

      • Kristen Byrnes (Ponder the Maunder) at 15 years old could figure this out. Making decisions based on a climate model that is a simple construct of “a climate system,” according to Pete Spotts of The Christian Science Monitor, “that is too sensitive to rising CO2 concentrations,” would be like running a complex free enterprise economy based on the outcome of a board game like Monopoly. There is a “systematic warm bias” that, according to Roger Pielke, Sr., “remains in the analysis of long term surface temperature trends.” Meanwhile, the oceans that store heat continue to cool.

      • Steven Mosher

        “Kristen Byrnes?”

        you realize that her work was really done by someone else…

        hmm maybe I should dig those emails up…

  22. US Temperatures – 5-year smooth chart.
    As a layman I cannot comprehend how “adjustments” can impose a 0.5C cooling on the raw recordings around 1935. Sorry, but I just do not believe it; I see it as an attempt to do away with the 1935 high temperatures and make the current period warmer, all in the “cause”. As stated above, it is suspicious that all adjustments end up cooling the past to make the present look warmer.

  23. To those of us who have been following the climate debate for decades, the next few years will be electrifying. There is a high probability we will witness the crackup of one of the most influential scientific paradigms of the 20th century, and the implications for policy and global politics could be staggering. ~Ross McKitrick

  24. This is entertaining, a tweet from Gavin:

    Gavin Schmidt ‏@ClimateOfGavin 1m
    A ray of sanity in an otherwise nonsensical discussion of temperature trends and you won’t believe where! http://wp.me/p12Elz-4cz #upworthy

    • Oh geez. You’ve poisoned the well by saying Gavin liked the post.

    • Judith, this is hardly a trivial matter. You are yet again trying to defend a culture which does not allow outside scrutiny to ensure it is producing quality work by saying “they are trying their best”.

      In my experience in industry almost everyone “tries their best”, but that in no way guarantees quality. But instead it is those in a culture that accepts rigorous inside and outside scrutiny and then have a system to identify and correct problems and then drive through improvement that ever achieves the highest quality.

      And in my experience, those that “sweep problems under the carpet” and have a general culture of excusing poor quality because they are “trying their best” are usually the ones with the greatest gap between the quality they think they are producing and the actual quality of what comes out.

      • Steven Mosher

        “defend a culture which does not allow outside scrutiny to ensure it is producing quality work by saying “they are trying their best”.

        outside scrutiny?

        Zeke doesn’t work for NOAA.

        They provided him (and you) access to their data.
        They provided him (and you) access to their code.

        you don’t work for NOAA.

        Zeke applied outside scrutiny.
        You can apply outside scrutiny and you are not even a customer.
        Zeke has the skill.
        You have the skill (if I believe what you write).

        Take the data
        Take the code.
        Do an Audit
        Be a hero.

    • The comments prove Gavin right, again.

      • Matthew R Marler

        Chris Colose: The comments prove Gavin right, again.

        Very droll. They are an instance of his not being wrong.

  25. I really hope sunshinehours1’s questions do not get lost in the comment thread. The answers to them should lead the discussion.

  26. Jeepers. The denizens are not showing their best side in the comments. “Consider that you may be mistaken.”

    • In the UK there is a sale of goods act that gives us the right to ask for our money back for goods or services that are “not fit for purpose”.

      We are just trying to exercise that right – except there is an academic cartel of like-minded catastrophists who are stopping a reliable and impartial organisation from coming in to do the job in a way that can be trusted.

      Let me put it this way. A cowboy builder comes in and puts up your house without proper foundations. They may well have done “the best they are able”, but that doesn’t mean it was good enough.

      We want people in charge of these temperature measurements who stop trying to excuse bad quality work, and instead an organisation that takes quality seriously.

      And to start – they have to understand what quality means – so Judith, go read up about ISO9000.

      Then tell me how many of those organisations doing these temperature figures even know what ISO9000 is, let alone have it.

      • Matthew R Marler

        Scottish Sceptic: We are just trying to exercise that right – except there is an academic cartel of like minded catastrophists who are stopping a reliable and impartial organisation coming in to do the job in a way that can be trusted.

        You continue to miss several important points. (1) the statistical methods used by BEST are in fact the best available; (2) they have described their methods in published papers and have made their data and code available to anyone who wishes to audit them; (3) no one is stopping anyone from coming in to do the job in a way that can be trusted.

    • Steven Mosher

      They are all experts.
      And they forget their Feynman about the ignorance of experts.
      Note how NONE of them address the science.
      Note how many commented before reading the papers zeke linked to.
      Note that none took time to look at the data or the code.

      Why?

      because they are not interested in understanding.
      period.

      • Actually I designed temperature control and monitoring systems, ran a factory with several thousand precision temperature sensors, and then went into meteorological weather stations for the wind industry.

        From that experience I learnt that it was impossible to reliably measure the temperature of a glass slide about 1 cm across to within 0.01C, let alone an enclosure a few tens of cm across.

        Then I came across a bunch of academics who told me the end of the world was nigh because they were absolutely certain global temperature had risen since the days of hand-held thermometers to the modern era of remote instrumentation.

        … and I laughed … until I realised they were serious … and worse … people actually took them seriously. And then I was downright despairing when I saw that, rather than the carefully planned sites I had imagined, there were sensors in parking lots.

        And then those responsible said that none of that mattered and started calling us “deniers” – in any other walk of life, ministers would resign and those responsible would go to prison.

      • Steven Mosher

        really sceptic?
        I don’t believe you.
        show your data and code.
        appeals to personal experience and authority by someone who calls themselves a sceptic…
        tsk tsk.
        also, your ISO9000 certs.
        thanks, I’ll wait

  27. One of the issues you’ve ignored is how the picture has been changed in the last few years. Back in 2000 the US temperature plots showed clearly that the 1930s were warmer than the 1990s, with 1936 0.5C warmer than 1998. Since then this cooling has been removed by the USHCN adjustments. This is Goddard’s famous blinking gif that appears regularly at his site. On the other hand it still seems to be acknowledged that most of the state record highs occurred in the 1930s (there are lists at various websites).

  28. HaroldW,

    Figure 3 ends in 2005, when about 1100 stations in the network were still reporting.

    • Zeke,
      I agree with your point that figure 3 goes only to 2005, but that doesn’t explain the situation. From figure 2, the station count in 2005 was between 1000 and 1100, say 1075.

      Reading the most recent (2005) values from figure 3:
      AM: 750
      PM: 350
      Midnight: 120
      Other: 10
      The total is over 1200. There’s a minimal error involved in reading these values under magnification, but it’s not large enough to reconcile this total with an active station count below 1100. Non-reporting stations were evidently still assigned a time of observation in Menne et al., which is puzzling.

  29. Pingback: Did NASA and NOAA dramatically alter US climate history to exaggerate global warming? | Fabius Maximus

  30. I am unconvinced of the need to “adjust” the data. There are thousands and thousands of data points and associated error margins. The results are by their very nature statistical.

    “Adjustments” invariably invite abuse, whether intended or not.

    • Mike, I think Zeke’s explanation for why the adjustments are absolutely essential for calculating temperature changes over space and time was clear and compelling. I find it difficult to think of a cogent argument against it.

  31. Pingback: Have the climate skeptics jumped the shark, taking the path to irrelevance? | Fabius Maximus

  32. Pingback: Comment threads about global warming show the American mind at work, like a reality-TV horror show | Fabius Maximus

  33. BS baffles brains….you can bet every apostrophe was double checked on this message to say as little as possible.
    “But I want to say one thing to the American people. I want you to listen to me. I’m going to say this again: we did not screw around with the temperature data”

  34. The Fig. 8 caption appears to be incorrect.

    Figure 8. Time of observation adjustments to USHCN relative to the 1900-1910 period.

    Shouldn’t it say Pairwise Homogenization Algorithm adjustments?

    • Good catch. Asking Judy to fix it.

      • Should the years also be the 1900-2010 period?

      • The adjustments are shown relative to the start of the record to better show their cumulative effects over time. This is following the convention from the USHCN v1 adjustment graph on the NCDC website to use a baseline period of 1900-1910. In reality, what matters is the impact of the adjustments on the trend, so the choice of baseline periods is somewhat irrelevant and only really impacts the readability of the graph.
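
        For the doubtful, the baseline-invariance claim is easy to check numerically; the series below is synthetic, not the actual adjustment curve:

        import numpy as np

        years = np.arange(1900, 2014)
        rng = np.random.default_rng(1)
        series = 0.008 * (years - 1900) + rng.normal(0, 0.15, years.size)

        def rebaseline(y, t, start, end):
            """Express a series as anomalies from its mean over [start, end]."""
            return t - t[(y >= start) & (y <= end)].mean()

        early = rebaseline(years, series, 1900, 1910)
        modern = rebaseline(years, series, 1981, 2010)

        # Same slope either way; only the vertical offset changes.
        print(np.isclose(np.polyfit(years, early, 1)[0],
                         np.polyfit(years, modern, 1)[0]))  # True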

  35. If someone could explain why, after the initial adjustments are made to raw data (assuming they are valid/correct, which may or may not be the case), additional adjustments are made on a nearly annual basis, I might accept that there is “good faith” in making these adjustments.

  36. Zeke’s graph fig 5 shows that the total effect of the adjustments is a warming of about 0.5C from the 1930s to now.
    There is a graph at Nick Stokes’s Moyhu blog, also at Paul Homewood’s, showing 0.9F, i.e. the same.
    And I think this is what Goddard says also, so maybe that’s something everyone agrees on?

  37. Using the current methodology, if the time series was extended by 500 years at either end, with random data from the existing data as input, would these adjustments even out to create a realistic manufactured temperature record, or would past temperatures continue to decline, and future temps continue to rise, at a similar rate?
    If so, whilst your current methodology may be the best mathematically possible, it would indicate a problem.

  38. Andy Skuce of SkS tweets:

    Andy Skuce ‏@andyskuce 13m
    Great piece by @hausfath at @curryja blog, but don’t read the crazy comments. https://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/

  39. An excellent and informative post. This is a “must read” by anyone who would hope to understand the complexities of this subject. Thanks for taking the time to write this, and thanks to Judith for providing the opportunity!

    • Steven Mosher

      Do a count of denizens who actually engage the science.
      you know, a count of those who want to understand.
      Then do a count of denizens who

      a) invoke conspiracy
      b) question Zeke’s motives
      c) derail the conversation
      d) say they don’t believe but provide no argument
      e) refuse to do any work with the data or code, and yet call themselves engineers, e.g. Springer.

      • Mosher, you spend a lot of time attacking me instead of the graphs I post.

        Maybe you should politely ask Zeke to add trendlines to this infilling graph. And change the scale a little. And do it by month.

      • Cut it out Mosher. Defensiveness is unbecoming. We didn’t say we didn’t believe Zeke. We said he is not in a position to be objective. Tell us you agree with that!

      • I’ve read the emails of a lot of denizens. I’ve read takedowns of the remarkably poor quality of their work. They are totally untrustworthy people. Anyone who relies upon or has endorsed their work, knowing that they are untrustworthy, is also untrustworthy.

      • Steven Mosher

        sunshine.

        your graphs come from your code.
        in the past you made boneheaded mistakes.
        I’ll comment on the graphs when I study the sources and methods.

        See. I treat every problem the same.
        Zeke makes a claim. I go to the sources. FIRST
        You make a claim. I want to go to the sources. FIRST

        So, cough up your code. I will audit you and let you know.

      • Steven Mosher

        “Cut it out Mosher. Defensiveness is unbecoming. We didn’t say we didn’t believe Zeke. We said he is not in a position to be objective. Tell us you agree that!”

        huh. I already said that.
        Every human including you has an interest.
        none of us are objective, none of us are free from interest.
        We CONTROL for this by releasing our data and code.
        that way you can look all you like to see if you can DEMONSTRATE
        any piece or part where our interest changed the result.

        Doing science means you accept that individuals are not objective.

        Now, can I be objective about my judgements about zekes objectivity?
        Can you be objective about your observations?

        there’s a paradox for you. go think about that.

      • No code yet Steve? No trend for infilling?

    • Matthew R Marler

      R. Gates: An excellent and informative post.

      I agree.

  40. Please don’t be afraid of space again.

  41. The root cause of the bias between MMTS and LIG measurements was not determined past some generalities: closer to buildings, wood temperature changed via coating type. I didn’t see any testing that swapped or paired the thermometers in the housings. Nor were housing maintenance and temperatures paired. It’s not unusual for some instrumental methods to have biases with some changes, for example, gas chromatography. However, there are methods to correct those biases. I didn’t see any of that here.

    I haven’t seen the description of QC procedures for the instruments. Were they calibrated to some traceable reference standard once or periodically? If the latter, then what adjustments and annotations have been made to the data based on calibration and drift corrections? If this hasn’t been done, then you don’t know the accuracy of the measurements. I’ve been required by government or customers to recertify NIST-traceable thermometers, including the master reference thermometer, at 2-5 year periods and check the ones I used for actual measurements periodically. Anything like that going on with these measurements?

    Continuously adjusting past data products to match some current activities? I think it is a poor practice and in some cases, such as environmental data, it could be quite problematic. The same goes for infilling missing data. You either have the data for that station or you do not. It may be a reasonable assumption that the temperature at stations 10-30km apart will be similar, but you don’t know that and the estimate has to add significantly to uncertainty.

    • “It may be a reasonable assumption that the temperature at stations 10-30km apart will be similar”

      Actually, the Pielkes studied a region that included multiple stations and found that sites even a few km apart show very different climate records. And none of the stations replicated the regional averages.

    • Hi Bob,

      I dug into the MMTS issue in much more detail a few years back here: http://rankexploits.com/musings/2010/a-cooling-bias-due-to-mmts/

      The best way to analyze the effect of the transition is to look at pairs of otherwise similar stations, one of which transitioned to MMTS and the other of which remained LiG. There is a pretty clear and consistent drop in maximum temperatures. The rise in minimum temperatures is less clear, as there is significant heterogeneity across pairs. I’ve suggested in the past that the difference in min temperature readings might be a result of the station move rather than the instrument change, as many MMTS stations are located closer to buildings than their LiG predecessors.
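
      A toy version of that pairing logic, in Python (synthetic annual data; the real analysis uses many pairs and monthly series):

      import numpy as np

      # One station switches to MMTS in 1985; a nearby LiG station does not.
      years = np.arange(1970, 2000)
      rng = np.random.default_rng(2)
      regional = rng.normal(0, 0.5, years.size)      # shared weather signal
      lig = 14.0 + regional + rng.normal(0, 0.1, years.size)
      mmts = 14.2 + regional + rng.normal(0, 0.1, years.size)
      mmts -= np.where(years >= 1985, 0.5, 0.0)      # max-temp cooling step

      # Differencing the pair cancels the shared regional variability,
      # so the instrument step stands out.
      diff = mmts - lig
      step = diff[years >= 1985].mean() - diff[years < 1985].mean()
      print(round(step, 2))                          # recovers roughly -0.5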

      • Thanks Zeke,
        I read that from the link in your post. It sounds like a reasonable way to estimate a bias in the absence of basic QC validation of the equipment change. For all the data massaging going on with this, I’d expect the adjustments to be made using a higher level of QC. Instrument/method validation is a pretty standard QC practice. Did they put the MMTS out thinking any difference was minor for the purpose (agriculture), and now we are trying to force-fit it into something more serious, like a data source for rearranging economies?

      • Bob,

        The MMTS transition was dictated by the desire of the National Weather Service to improve hydrologic monitoring and forecasting. The climate folks at the time were very unhappy with this choice, as they wanted a consistent record, but climate monitoring was presumably less of a priority than weather monitoring back in the 1980s, and the stations were used for both.

      • Also, Bob, here is a good side-by-side study conducted after the transition: http://ams.confex.com/ams/pdfpapers/91613.pdf

      • Did no one think to run MMTS and LiG measurements in parallel at the same location for a few years (hell, days) to estimate the bias?

    • Steven Mosher

      “The root cause of the bias between MMTS and LIG measurements was not determined past some generalities: closer to buildings, wood temperature changed via coating type. I didn’t see any testing that swapped or paired the thermometers in the housings”

      read Qualye and then comment.

      • Specific link or should I just find the first Qualye on google?

      • Steven Mosher

        Bob proves that he did not read zeke.
        had bob read zeke and followed all the references
        he would have found Qualye
        instead bob wants me to do his homework

        Here is the link that zeke provided
        http://rankexploits.com/musings/2010/a-cooling-bias-due-to-mmts/

        read everything there. you are qualye hunting now.

      • Zeke, thanks for the Hubbard-Lin link, clarifies for me what has been done
        Mosher, (a) I had “read zeke” (b) It’s “Quayle” not “Qualye” (c) I suppose it is easier to make cryptic remarks than actually put up a link and discuss the what you consider important.
        My questions on root cause analysis of the differences seems to be answered. It wasn’t done. Instead, comparisons were made using large numbers of stations and only one proximate set (CSU). In Quayle, mention was made of some stations having both CRS and MMTS for a while, but the data were ignored for months 0-5. I assume, but don’t believe it was mentioned, that they may not have been recording both. The differences between the stations are conjecture: liquid separation (but no documentation of readings with this), differences between heating of shelters (but no documentation), siting (but no documentation). No discussions of instrument drift, calibrations or any of those messy QA/QC things.

        I’m late to this game and my questions were an attempt to form an opinion on the quality of this high-quality dataset and the adjustments. As has been said, the system wasn’t designed for what it is being used for.

    • Mr. Greene, these measuring instruments were not put into place to monitor climate change, as Zeke explains. They were pressed into service decades later. This has caused problems, obviously. Many of those problems have been cited by skeptics for a decade now. I think Zeke in this post has gone a long way toward answering the questions posed by most and does, in my opinion, serve as an honest guide for anyone with an open mind.

  42. From: Tom Wigley
    To: Phil Jones
    Subject: 1940s
    Date: Sun, 27 Sep 2009 23:25:38 -0600
    Cc: Ben Santer

    “It would be good to remove at least part of the 1940s blip,
    but we are still left with ‘why the blip.'” and

    ‘So … why was the SH so cold around 1910? Another SST problem?
    (SH/NH data also attached.)’

    So they “fixed” the Southern Hemisphere as well.

    Well that certainly proves “good intentions” to me.

    • The early 1940’s blip was related to precautions taken by ships to avoid getting blown out of the water by u-boats and kamikazes.

      • thisisnotgoodtogo

        And there was no land blip

      • There is a blip in the land-only data too, and both blips occur around 1940. It seems to be a robust feature of the data, even if we do have to make a bucket correction for some of the SST measurements.

      • Wood for trees comparison:

        BEST, CRUTEM3 and HadSST2

        The argument that it’s an artifact does not seem to be a plausible one.

      • thisisnotgoodtogo

        Hi Carrick.

        “There is a blip in the land-only data too, and both blips occur around 1940. It seems to be a robust feature of the data, even if we do have to make a bucket correction for some of the SST measurements.”

        Yes, there is. WHUTTY was trying to slide stuff by again.

        We see that Tom and Phil were confabulating on how to adjust by figuring how much they wanted to take away from appearances. Like this: “Must leave some because there is a land blip, how much removal can we get away with?”

      • WebHubTelescope

        WWII was nasty. It affected measurements in ways that we will never quite figure out. The SST bias is well known and the data is patchy; the land measurements are possibly biased as well. But since the ocean is 70% of the global temperature signal, that is the one that clearly stands out.

      • WHT wrote
        “The early 1940’s blip was related to precautions taken by ships to avoid getting blown out of the water by u-boats and kamikazes”

        As noted by Tom and Phil, and circumlocuted by WHT, that does not explain the land blip.

        His response:
        “since the ocean is 70% of the global temperature signal, that is the one that clearly stands out”

        Clearly! And getting rid of it by off-the-cuff figurings on what they could get away with would affect the global average so much more! Perfect.

      • ClimateGuy commented, quoting WHT:

        “The early 1940’s blip was related to precautions taken by ships to avoid getting blown out of the water by u-boats and kamikazes”

        The early 40’s blip was due to a warm AMO and a warm PDO overlapping.

      • “The early 40′s blip was due to a warm AMO and a warm PDO overlapping.”

        Partly, and that is accounted for in the natural variability. There is still a tenth of a degree bias due to mis-calibration as military vessels took over from commercial vessels during WWII.

    • Steven Mosher

      Chuck, the mail is about SST.
      This post is about SAT.

      Note another skeptic who can’t stay on the topic of adjustments to the LAND data.

      doesn’t want to understand.

      When Zeke shows up to discuss land temps, change the topic to SST.

      • Matthew R Marler

        Steven Mosher: doesnt want to understand.

        Assume good faith, and a range of intensities in “want”. Point out the error and then stop.

      • Steven Mosher

        Matthew,

        How about this.
        How about YOU police the skeptics.
        Spend some time directing them to what the real technical issues are.

      • Yea Marler, during WWII the navy and merchant marine took over the responsibility for collecting SST measurements. Do you have any clue as to the calibration issues that resulted from that action?

        What are they supposed to say in emails? That Hitler and Hirohito really messed things up?

      • Matthew R Marler

        steven mosher: How about YOU police the skeptics.

        I read most of your posts and I skip most of the posts of some others. I’d rather not be distracted by the junk that you write.

        “Assume good faith” was taken from Zeke Hausfather. I guess you don’t think it’s a good recommendation.

      • I know what the post is about. I am questioning whether some of the players have “good intentions.” (No aspersions are being cast on what Zeke and even you, despite your drive-by cryptic arrogance, are doing.)

      • Mosh, these deniers see exactly what they want to see. Amazing that they can put blinders on to WWII — it’s almost a reverse Godwin’s law.

      • Matthew R Marler

        WebHubTelescope: That Hitler and Hirohito really messed things up?

        Well they did, dontcha know?

      • Steven Mosher

        Matthew

        Again,

        how about you police the skeptics.
        give it a shot.
        show your chops.
        it’s good practice to call out BS wherever you see it.
        be a hero.

      • Matthew R Marler

        Mosher: it’s good practice to call out BS wherever you see it.

        I can’t do it everywhere. In particular, I try to ignore people who are always wrong. There are a couple who are right or informative just barely often enough, but others whom I never read.

      • Hey Steve–the skeptics don’t need to be policed. Some of them might benefit from being ignored a bit…


      • Tom Fuller | July 8, 2014 at 4:29 am |

        Hey Steve–the skeptics don’t need to be policed.

        That’s right, you don’t “police” little kids that make a mess of the house and get chocolate all over their face.

      • Steven Mosher

        “Hey Steve–the skeptics don’t need to be policed. Some of them might benefit from being ignored a bit…”

        yes you ignore them and they show up to say that their questions were never answered, their demands never met, that Zeke is hiding something, blah blah blah.

        I suggest that people who suggest ignoring should start by ignoring me as I play whack-a-mole.

        It’s fun

        I get to have fun.

    • Matthew R Marler

      Chuck L. :From: Tom Wigley
      To: Phil Jones
      Subject: 1940s
      Date: Sun, 27 Sep 2009 23:25:38 -0600
      Cc: Ben Santer

      Why exactly is that relevant to Zeke Hausfather and Steven Mosher and the BEST team?

  43. Pingback: Misleading Information About USHCN At Judith Curry’s Blog | sunshine hours

  44. Zeke

    Well done for writing this long and informative post. It warrants several readings before I would want to make a comment. I do not subscribe to the grand conspiracy theory, nor that scientists are idiots or charlatans, nor that a giant hoax is being perpetrated on us. Which is not to say that I always agree with the interpretation of data, or that often extremely scant and dubious data isn’t given far more credence than it should be.

    I will read your piece again and see if I have anything useful to say, but thanks for taking the time and effort to post this.

    tonyb

  45. In my opinion, Zeke and Mosh are just two more “scientists” who are trying to change history by waving their hands. Leave the 1930s alone! You are no better than Mann and Hansen.

    • Steven Mosher

      don’t address the science, attack the man.

      sceptical Lysenkoism

      • Leonard Weinstein

        Steve,
        I appreciate what Zeke has done here, and consider both him and you basically reasonable and trying to be honest. However, this last comment is strange, since 99% of those that attack the scientists are attacking skeptics (Lindzen, Christy, Spencer, etc.), and do exactly attack the man, not the science. It is a fact that many skeptics (including myself) started out accepting the possibility of a problem, and by studying the facts in depth came to the conclusion that CO2 effects are almost certainly small, dominated by natural variation, and mainly desirable. I agree that there has been warming in the last 150 years, and a small part of that is likely due to man’s activity. I really don’t care if it was 0.5C or 0.8C total warming, and if man contributed 0.1C or 0.4C of this. The flat-to-down temperature trend of the last 17 or so years, and likely continued down trend, clearly demonstrate the failure of the only part of CAGW that is used to scare us: the models. I think the use of data adjustment, and then making an issue of 0.01C as a major event, is what bugs many of the skeptics here.

      • Steven Mosher

        leonard.

        good comment.

        here is the problem.

        there is all this skeptical energy. it should be focused on the issue that matters.

        how can I put this. After 7 years of looking at this stuff… this ain’t where the action is, baby.

      • I totally agree. The focus on the “measured” temperature record is akin to mental mas…bation.

        So where do you think the action is?

      • Matthew R Marler

        Leonard Weinstein: The flat to down temperature trend of the last 17 or so years, and likely continued down trend clearly demonstrate the failure of the only part of CAGW that is used to scare us: The models. I think the use of data adjustment and then making an issue of 0.01C as a major event is the bug in many of the skeptics here.

        It is useful to address the measurement and temperature problems, and then to address the modeling and theoretical problems separately. Some of the people who have posted “skeptical” comments here clearly (imo) do not understand the statistical methods that have been employed in the attempt to create the best attainable temperature record. That’s independent of whether the same people or different people understand any of the CO2 theory or its limitations.

        This thread initiated by Zeke Hausfather is very informative about the temperature record and the statistical analyses. His next two promise more information about the temperature record and the statistical analyses.

      • David Springer

        “dont address the science, attack the man”

        Ah. Like you did earlier attacking me for calling myself an engineer? Actually it was my employers since 1981 who insisted on calling me an engineer. I prefer to call myself “Lord and Master of all I survey.”

        You are such a putz, Mosher. Of course you know that already.

  46. Regardless of whether these adjustments are made in good faith or not, I would like NASA to run some experiments. Take the pre-global-warming-scare algorithms, and run them against the 1979 – current temperatures. Compare these to UAH. Then take today’s algorithms. Compare them to UAH. At least then the amount of adjusting that’s going on would be known.

    • Hi Ed,

      You can do one better: compare raw data and adjusted data to UAH. Turns out that over the U.S., at least, UAH agrees much better with adjusted data than raw: http://rankexploits.com/musings/wp-content/uploads/2013/01/uah-lt-versus-ushcn-copy.png
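
      The comparison is straightforward to reproduce once you have annual anomalies on a common period; synthetic stand-ins are used below in place of the three real datasets:

      import numpy as np

      years = np.arange(1979, 2014)
      rng = np.random.default_rng(3)
      shared = 0.017 * (years - 1979) + rng.normal(0, 0.15, years.size)
      uah = shared + rng.normal(0, 0.08, years.size)
      adjusted = shared + rng.normal(0, 0.08, years.size)
      raw = shared + rng.normal(0, 0.25, years.size)  # extra inhomogeneity

      print(np.corrcoef(uah, adjusted)[0, 1])         # higher agreement
      print(np.corrcoef(uah, raw)[0, 1])              # lower agreement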

    • Steven Mosher

      Well ed?

      Zeke answered your complaint.

      Are you interested in understanding? Can you change your mind based on evidence?

      It was your question..

      Second, you realize that UAH is highly adjusted.
      right?
      you realize that the UAH record has changed many times by adjusting for
      instrument changes…
      right?

      • Zeke,

        Yeah, absolute temperatures are interesting, but I’m mostly interested in the change in the shape of the graphs. If modern-day adjustments more closely follow the UAH shape than the algorithms of ten or twenty years ago did, then that gives food for thought. Specifically, UAH methodology is completely different from NASA’s, so it’s unlikely errors in one are identically reflected in errors in the other. If the modern-day adjustments more closely reflect UAH, that’s a good indication the approach is getting better. On the other hand, if modern algorithms yield cooling in the 1980s and warming in the 2000s vs. UAH, and this effect is pronounced compared to earlier NASA algorithms, then that could indicate bias in either the NASA or UAH algorithms, though probably the NASA algorithms, since that would require both the previous NASA algorithms and UAH to be wrong.

        Why look at previous NASA algorithms? In my view bias is a subtle thing, and even people with very solid credentials and the best of intentions can get snookered.

        Mosher:

        What complaint am I making?

      • Steven Mosher

        Ed
        UAH and SAT are two different things.

        Suppose I had a method for calculating unemployment
        Suppose I had a method for calculating CPI

        both methods require and use adjustments.

        You dont learn anything by comparing them.

    • “You dont learn anything by comparing them.”

      How then do you interpret Zeke’s comment through the prism of your claim?

      “You can do one better: compare raw data and adjusted data to UAH. Turns out that over the U.S., at least, UAH agrees much better with adjusted data than raw:”

      I can think of several interpretations, one being that he doesn’t agree with you. Here, Zeke is using UAH to bolster the idea that NASA adjustments make for a better temperature record. If you agree with that, then these are comparable data-sets. If not, take it up with Zeke.

      Meanwhile, I’m still waiting for you to explain my “complaint.”

  47. Pingback: The Skeptic demands: temperature data (draft) | ScottishSceptic

  48. Good post Zeke, but I’m curious: if you have several readings per day and average them to a mean temperature, wouldn’t that wipe out any need for a Tmax or Tmin adjustment?

    • Dale,

      If you had hourly readings you would no longer need TOBs adjustments. You would still have to do something about station moves and instrument changes, however. I’m a bit more of a fan of the Berkeley approach of treating breakpoints as the start of a new station record, rather than trying to conglomerate multiple locations and instruments into a single continuous station record.

    • Dale, hourly data is what is used to estimate the TOBS correction.

      See for example this post from John Daly’s site:

      http://www.john-daly.com/tob/TOBSUMC.HTM
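
      The mechanics are easy to demonstrate with synthetic hourly data: reset a max/min thermometer in the late afternoon and hot afternoons can get counted twice, once for each observational day they straddle. A rough Python sketch:

      import numpy as np

      rng = np.random.default_rng(4)
      hours = np.arange(24)
      diurnal = 5 * np.cos(2 * np.pi * (hours - 14) / 24)  # peak near 2 pm
      daily = rng.normal(15, 4, 365)                       # hot and cold days
      hourly = (daily[:, None] + diurnal).ravel()

      def mean_daily_max(hourly, reset_hour):
          """Mean of daily maxima when the instrument resets at reset_hour."""
          x = hourly[reset_hour:]
          n = x.size // 24
          return x[:n * 24].reshape(n, 24).max(axis=1).mean()

      print(mean_daily_max(hourly, 0))   # midnight reset: unbiased reference
      print(mean_daily_max(hourly, 17))  # 5 pm reset: noticeably warm-biased

      A morning reset produces the mirror-image cold bias in the minima, which is why the historical shift from afternoon to morning observation times left a cool bias in the raw record.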

  49. Judith Curry

    When I have had to change instruments, I’ve run concurrent outputs for the same experiment to see if the results are the same: i.e., overlap.

    When I see that there has been a change from Liquid in Glass to two automated systems, which necessitated physically moving the automated systems closer to buildings, as well as Time of Observation changes, I am curious as to how long the readings ran concurrently so that there was overlap across all of the instruments.

    For example: TOB, when there was a switch to AM from afternoon, how long (and I am assuming there were overlapping observations) was the observation period that had both morning and afternoon recorded: a season? a year? a decade? ongoing?

    When the switch from LiG to MMTS or ASOS happened, how long was the overlapping field observation? Or was this another in-lab experiment?

    “NCDC assumes that the current set of instruments recording temperature is accurate.” Electronics don’t drift? go haywire? Issues with my computer tell me otherwise.

    I am first concerned with the fundamentals/integrity of the observations vs the fiddling with the outputs. Output fiddling is the game of statisticians, on whose integrity I am dependent.

    • RiH008:

      There have been a number of papers published looking at differences between side-by-side instruments of different types. This one for example: http://ams.confex.com/ams/pdfpapers/91613.pdf

      The NCDC folks unfortunately had no say over instrument changes; it was driven by the national weather service’s desire to improve hydrologic monitoring and forecasting. Per Doesken 2005:

      “At the time, many climatologists expressed concern about this mass observing change. Growing concern over potential anthropogenic climate change was stimulating countless studies of long-term temperature trends. Historic data were already compromised by station moves, urbanization, and changes in observation time. The last thing climatologists wanted was another potential source for data discontinuities. The practical reasons outweighed the scientific concerns, however, and MMTS deployment began in 1984.”

      • Zeke Hausfather

        Thank you for your response. As I understand it, NWS made the decision to change the instrumentation and in some cases location of the observing stations.

        I did not see anywhere how the transition took place.

        A 20-year retrospective analysis of one station in Colorado:

        “Is it possible that with aging and yellowing of the MMTS radiation shield there is slightly more interior daytime heating, causing recent MMTS readings to be more similar to LIG temperatures? But in a larger perspective, these changes are very small and would be difficult to detect and explain, except in a controlled co-located environment. Very small (less than 0.1 deg F) changes in MMTS-LIG minimum temperatures have also been observed, with MMTS slightly cooler with respect to LIG. The mean annual MMTS-LIG temperature differences are unchanged.
        Just as in the early years of the intercomparison, we continue to see months with larger and smaller differences than the average. These are likely a function of varying meteorological conditions, particularly variations in wind speed, cloud cover and solar radiation. These are the factors that influence the effectiveness of both the MMTS and LIG radiation shields.”

        If I am understanding what the article you provided said: there were no side-by-side comparisons of LiG and electronic observers done in a prescribed way. There may have been some side-by-side, and there are anecdotes, but the transition was not geared for climate research, particularly longitudinal research. The instrument-period observations are influenced by meteorological conditions that were not quantified.

        It appears to me that the instrument period, at least from the transition onward, is spurious because of that transition. The adjustment mechanisms are ill designed and ill suited to this data set, and there are ~0.5 C adjustments based upon a best… estimate. This is all here in the USA. What happened around the world?

        I am still curious.

      • RiH008,

        There is no prescribed 0.5 C adjustment for MMTS transitions. It’s handled by the PHA, which looks for breakpoints relative to neighbor difference series. Instrument changes tend to be really easy to pick up using this approach, as they involve pretty sharp step changes up or down in min/max temperatures.

        In that particular case it’s pretty clear that there is a ~0.5 C difference in max temp readings between the instruments. I looked at many other examples of pairs of stations here: http://rankexploits.com/musings/2010/a-cooling-bias-due-to-mmts/
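
        A toy version of the neighbor-difference idea, in Python (a simple mean-shift scan on synthetic data; the operational PHA uses more sophisticated statistics and many neighbors):

        import numpy as np

        rng = np.random.default_rng(5)
        n = 120
        climate = np.cumsum(rng.normal(0, 0.1, n))  # shared regional signal
        neighbor = climate + rng.normal(0, 0.1, n)
        candidate = climate + rng.normal(0, 0.1, n)
        candidate[70:] -= 0.5                       # undocumented step change

        # Differencing removes the shared climate; then scan for the split
        # point with the largest shift in means on either side.
        diff = candidate - neighbor
        scores = [abs(diff[:k].mean() - diff[k:].mean())
                  for k in range(10, n - 10)]
        k = 10 + int(np.argmax(scores))
        print(k, round(diff[k:].mean() - diff[:k].mean(), 2))  # ~70, ~-0.5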

    • Steven Mosher

      See the follow-up paper.

      • Which one? Is the data any better?

      • Steven Mosher

        read zeke again.
        take notes.
        write down the references.
        read the papers
        get the data.
        get the code.
        write your own code.
        compare the results.
        write a paper.
        be a hero.

      • Second paper

        Ooops:

        “Data for the analysis were extracted from the Surface Airways Hourly database [Steurer and Bodosky, 2000] archived at the National Climatic Data Center. The analysis employed data from 1965-2001 because the adjustment approach itself was developed using data from 1957-64. The Surface Airways Hourly database contains data for 500 stations during the study period; the locations of these stations are depicted in Figure 2. The period of record varies from station to station, and no minimum record length was required for inclusion in this analysis”

        Wow. The stations could and would have moved spatially and in elevation.

      • Steven Mosher

        yes bruce.
        and the station moves are part of the reason why the error of prediction is what it is.

        If you had been reading my comments from 2007 to 2010, you’d know how important the error of prediction is.

        It’s not that hard to understand.

        give it a try.

        you could actually go through the records and find the stations that moved. it’s pretty simple.

        show us your chops.

        Oh, when you do, tell Roy Spencer he uses the same data without accounting for the moves.

  50. After the appalling comment by Judith that “they are only trying their best”, it seemed to me that rather than saying what is currently wrong with the present system, what I really wanted to do was to say what we need instead. So, I’ve decided to “list my demands” on my own website. I would welcome any comments or additions.

    https://scottishsceptic.wordpress.com/wp-admin/post.php?post=3657&action=edit&message=6&postpost=v2

    • ScottishSceptic: I had a problem with your link

    • Steven Mosher

      read ISO9000 for starters. that’s my advice.

    • Scottishsceptic

      Your link goes to a place which asks for my email AND a password.

      I have no wish to create yet more passwords. When I bought underlay for my carpet online I was required to create a password so these days I tend to steer clear of new places that require one for no good reason.

      Tonyb

  51. Mr Hausfather,

    I tend to agree with some comments regarding the lack of credibility caused by the “scientific community’s” bad apples as they try to evolve into “scientific manipulators”. I can see they are giving you a headache.

    The problem, as I see it, is that data manipulation is quite evident. They do tend to treat the public with a certain contempt.

    And I’m not referring to the temperature adjustments. I’m referring to the use of the red color palette by NOAA to display worldwide temperatures, and similar issues, or the use of tricked graphs and similar behaviors. You know, if we use a search engine and start searching for climate change graphs and maps, there’s a really interesting decrease in the number of products after 2010. It seems they realized the world’s surface wasn’t warming, and they stopped publishing material. This is seen in particular in US government websites. Is the US President’s “science advisor’s” political power reflected in the science they show us?

    Anyway, I realize this thread is about temperature adjustments in the USA. But I do wonder, does anybody have a record of the temperature adjustments by independent parties, for example Russia and China? Do you talk to personnel in the WMO Regional Climate Center in Moscow?

    • Steven Mosher

      If you don’t like the colors, download the data and change the palette.

      • Mr Mosher, I’m sophisticated enough to catch “palette bias”. I don’t need to download the data. However, US government websites intended for the general public do have a significant bias. And it’s not reasonable to expect individual members of the public to understand there’s a bias, download the data, and plot it using software most of them lack.

        I’m extremely cynical when it comes to honesty by government leaders in general, and this applies to the whole spectrum. Thus my social criticism isn’t aimed at a particular population of politicians (although I do admit I have an issue with real red flag waving communists).

        Take US politics. Those of us who are smart enough realize we got lied to about the Tonkin Gulf Incident, that Clinton lied about genocide in Kosovo, that Bush lied about WMD in Iraq, etc etc etc.

        Therefore I´m not really surprised to see government agencies toe the line and use deceit to plug the party line du jour. On the other hand, I do write and talk to explain these deceptions do go on. During the Tonking Gulf Incident I was sort of innocent and I wasn´t too aware of what went on out there. Later, as I realized things were being distorted, i made it my hobby to research what really went on. And what I found wasn´t so nice.

        This climate warming issue is peanuts. How do you like the fact that we spent $1 trillion invading Iraq looking for those fake WMD and here we are 11 years later watching a Shia thug allied with Iran fighting a civil war against a bunch of Sunni radicals? This climate warming issue is peanuts compared to the lies and the mistakes the US government makes when it lies to the people to justify making irrational moves.

      • Steven Mosher

        “Mr Mosher, I’m sophisticated enough to catch ‘palette bias’. I don’t need to download the data. However, US government websites intended for the general public do have a significant bias.”

        Show me the experiment you did to prove the bias.

        If you don’t like the palette, do what I do:
        change it.

      • David Springer

        Fernando – excellent. It went completely over Mosher’s head, of course, so his instinct was simply to repeat the unreasonable demand.

    • Rud Istvan

      Jennifer Marohasy has documented “cool the past and/or warm the present” adjustments for specific stations in Australia by its BOM (equivalent to NCDC), in their so-called High Quality (HQ) data set. The bias was so obvious that a national audit of HQ was demanded under an Australian law. The BOM response was to drop HQ and commence with a new homogenization program.
      In New Zealand, NIWA has aggressively and apparently unjustifiably cooled the past. A lawsuit was filed seeking technical disclosure. It got rebuffed at the highest court level on dubious legal grounds similar to Mass. v. EPA. Appeals courts are not well positioned to determine matters of fact rather than law, and depending on how laws are written they have to defer to fact finders like EPA or NIWA even if biased.
      Frank Lansner’s RUTI project has similarly documented at least regional warming bias in HadCRUT.
      Steirou and Koutsoyiannis documented warming homogenization bias in global GHCN using a sample of 163 stations. The paper was presented at EGU 2012 and is available online from them. Quite a read.

  52. Mosher and Zeke,

    After all the adjustments, how do you determine if the information is more accurate than before the adjustments?

    Andrew

    • Steven Mosher

      Simple: out-of-sample testing.

      With TOBS, what you do is this (this is how it was developed):

      You take 200 stations.

      You make two piles.

      You use 100 to develop the correction.

      You predict what the other 100 should be recording.

      You compare your prediction to the observations.

      You see that your prediction was solid.

      You publish that paper, years ago.

      Then you watch skeptics avoid reading the paper, and you watch them demand proof.

      When you point them at the proof, they change the subject.

      When you point out that they are avoiding reading the proof they demanded, they get nasty and attack Zeke’s motives.
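      To make the split-sample test concrete, here is a minimal sketch of the idea in Python. Everything in it is synthetic and illustrative: the 0.45 C “true bias”, the noise level, and the station counts are stand-ins, not values from the TOBS literature.

```python
# Minimal sketch of the split-sample (out-of-sample) test described above.
# All numbers are synthetic; "true_bias" stands in for a real TOB effect.
import numpy as np

rng = np.random.default_rng(0)

n_stations = 200
true_bias = 0.45  # degrees C; a made-up afternoon-reading bias

# Synthetic per-station bias estimates: the shared signal plus station noise.
station_bias = true_bias + rng.normal(0.0, 0.15, n_stations)

# Make two piles of 100 stations each.
rng.shuffle(station_bias)
develop, holdout = station_bias[:100], station_bias[100:]

# Develop the correction on the first pile only.
correction = develop.mean()

# Predict what the second pile "should be recording" and compare.
errors = holdout - correction
print(f"correction from pile 1:      {correction:.3f} C")
print(f"mean error on held-out pile: {errors.mean():+.3f} C")
print(f"RMS error on held-out pile:  {np.sqrt((errors ** 2).mean()):.3f} C")
```

      The point is only that a correction fitted on one pile can be scored against stations it never saw; a small error on the held-out pile is what “your prediction was solid” means in practice.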

      • “You take 200 stations.

        You make two piles.

        You use 100 to develop the correction.”

        Doesn’t sound very scientific to me. It just sounds like you are making group A more like group B. There is no scientific reason to do this.

        Andrew

      • Tom Scharf

        This assumes the stations are independent of each other and not affected by common external variables, which is not always the case. If the in-sample and out-of-sample data consistently read incorrectly in the same way, a “confirmation” could still occur. Out-of-sample testing can be very useful, but there are many ways to do it wrong and sometimes no way to do it right, depending on the data sets available. I am not saying it was done wrong here, only that stating OOS testing was done is not a blanket confirmation. It is certainly better than not doing it at all.

        Another example: if one claimed the post-1980 divergence issue in tree rings was out-of-sample confirmation data, then it would fail and clearly invalidate the tree-ring proxy record. So we have an OOS failure, but the reconstruction still holds for many.

      • “Then you watch skeptics avoid reading the paper, and you watch them demand proof.

        When you point them at the proof, they change the subject.

        When you point out that they are avoiding reading the proof they demanded, they get nasty and attack Zeke’s motives.”

        All the good work Zeke is doing to help improve communication on this issue…
        Another “just saying”…

      • “You make two piles.
        You use 100 to develop the correction.
        You predict what the other 100 should be recording.
        You compare your prediction to the observations.”

        It seems to me the only way to actually verify a “correction” for a change in equipment, location or procedure is to continue taking temps at the same location(s) using both methods/instruments over an extended period of time. If you do that with enough stations, and the change in each is the same within a certain range, that gives you your correction, with error bars, for that change. (You could then use it to “predict” the change in temps at other sites, but I don’t see the purpose. How do you know the temps/average temps/trends of the other stations remained the same?)

        Is this what “develop the correction” means?

        If, on the other hand, you are making a statistical “correction” based on assumptions and then comparing it against other stations to see if your “predictions” are correct, I don’t see the value in that at all.

      • David Springer

        The time of observation ate my global warming.

        Priceless.

  53. If Zeke is to be allowed three long guest posts here, how about allowing Goddard to write one?

  54. Pingback: Adjustments To Temperature Data | Transterrestrial Musings

  55. Skeptics are better off barking up another tree than the temperature record.

    I trust they can read a thermometer without letting their political activism get in the way. This is one measurement area where attempting to corrupt the record would be easy to identify, as opposed to the paleo record, which is a mess of assumptions, guesses, and questionable statistics.

    One problem I have with the temperature record is when it is presented without a vertical scale in the media, which seems to happen much more often than one would expect. The same goes for sea level rise.

    Another issue is when it is shown only from 1950 or 1980, which hides the fact that the first half of the 20th century had significant warming which was not AGW-based. This is such old news that it is never discussed anymore, but I think it is significant relative to how much natural forces may be responsible for the last 50 years of warming.

    Presenting the magnitude of the temperature change over the past century relative to how much the temperature changes on a daily or yearly basis can be quite an eye-opener for many people who seem to believe this warming is “dramatic”.

    • There’s no problem with the temperature record. The problem is with the ‘adjustments’, which with each revision add in more and more warming. The first USHCN version added in 0.5F warming, now they are adding 0.9F. It’s the so-called scientists who can’t read a thermometer without their political activism getting in the way.

      • Steven Mosher

        “It’s the so-called scientists who can’t read a thermometer without their political activism getting in the way.”

        I’m a libertarian.

        Your theory is that liberal scientists are making stuff up because of their activism.

        TOBS was first done in 1986, before the IPCC.
        I’m a libertarian; where is my activism?

        So much for your theory.

        More bad science from you.

      • You have excelled yourself here. It’s all about you! As with climategate, you seem to have a delusional view of your own importance.

      • “So-called scientists”? No ad homs here. No sirree.

      • Paul Matthews:

        There’s no problem with the temperature record.

        I thought you were smarter and better informed than this.

        Of course there are problems with the (raw) temperature record. Given the manner in which the data were collected, the issue isn’t whether the data should be adjusted to correct for the errors, but whether sufficiently good adjustments could ever be made, and whether we could know that they had been made.

      • @Carrick
        … and how much error is added to the data with each estimated correction and adjustment and how much uncertainty flows to the results of the analysis.

      • Steven Mosher

        ” and how much error is added to the data with each estimated correction and adjustment and how much uncertainty flows to the results of the analysis.”

        That is a good question.

        One thing that I droned on about for maybe 3 years was the propagation of errors due to adjustment.

        It’s one reason I like the Berkeley approach better. It’s top-down,
        AND we have much larger errors than Jones.

        He flipped out when he read this and could not understand the math.

    • Matthew R Marler

      Tom Scharf: Skeptics are better off barking up another tree than the temperature record.

      I agree, but I am glad that other people are watching this with energy and alertness.

    • Tom Scharf

      I would certainly say that, “miraculously,” many temperature adjustments seem to make the past colder and the present warmer, and the adjustments mostly trend in that direction over time. This certainly brings confirmation bias into question, but you have to look into what they actually did, and I don’t see any authentic corruption here.

      Enough people have looked into it (particularly BEST in my opinion) that it seems good enough to me and not likely to get much better, or change much from here on out.

    • Nice point about the presentation. I’d thought of that but your link was the first time I had seen the presentation in a normal scaling… Telling, eh?

  56. Pingback: More Globaloney from NOAA - Page 4 - US Message Board - Political Discussion Forum

  57. Thanks, Zeke and Judith, for this post. It is exactly the kind of thing I look for on climate blogs: basic information to better my own personal understanding (and with less hype, even if the lower hype makes it a bit less exciting than the latest post ostensibly threatening to up-end the field).

  58. Don Monfort

    The USCRN doesn’t seem to be working properly:

    http://www.forbes.com/sites/jamestaylor/2014/06/25/government-data-show-u-s-in-decade-long-cooling/

    Adjustments will be needed.

  59. Matthew R Marler

    Zeke Hausfather, thank you for your post, and the responses to comments. I look forward to your next posts.

    Steven Mosher, thank you for your comments as well.

    From Zeke: Their methods may not be perfect, and are certainly not immune from critical analysis, but that critical analysis should start out from a position of assuming good faith and with an understanding of what exactly has been done.

    Yes to understanding exactly what has been done.

    “Assuming good faith” is a problem. The assumption should be that errors have been committed, and that the people who made the errors will be very defensive about having them pointed out. Sorry. It’s hard to avoid thinking that a check of your work (or my work) is an assault on your integrity or value as a person (or mine). Assuming good faith is why journal editors generally have trouble detecting actual fraud; everybody makes mistakes, and academia’s reputation is that it does not do as good a job checking for errors in programs as the pharmaceutical companies, which have independent contractors test their programs. “Assuming good faith” ought to be reciprocal, about equal, and equally conditioned.

    Should FOIA requests be granted the “assumption of good faith”, however conditioned or qualified? Say the FOIA requests made to the U of VA by news organizations and self-appointed watchdogs for the emails of Michael Mann? Or perhaps the re-analyses by Stephen McIntyre of data sets that have had papers published about them? It’s a tangent from your post, which is a solid contribution to our understanding.

    • Steven Mosher

      ““Assuming good faith” is a problem. The assumption should be that errors have been committed, and that the people who made the errors will be very defensive about having them pointed out.”

      Err, no.

      Assuming good faith is not a problem.
      You do work for me. I assume you will make mistakes. That is not bad faith.
      You do work for me. I claim you must have made mistakes because you are self-interested, and because someone across the ocean made mistakes in a totally different field, and I refuse to look at your evidence until you prove you are a virgin. That is what most skeptics do.

    • Matthew R Marler

      Mosher: Err, no.

      Assuming good faith is not a problem.
      You do work for me. I assume you will make mistakes. That is not bad faith.
      You do work for me. I claim you must have made mistakes because you are self-interested, and because someone across the ocean made mistakes in a totally different field, and I refuse to look at your evidence until you prove you are a virgin. That is what most skeptics do.

      How you do go on.

      There are professionals whose work is always audited. I mentioned the pharmaceutical companies, whose programs are always checked by outsiders. Financial institutions have their work audited; professional organizations like AAAS and ASA have their finances audited; pharmaceutical and other scientific research organizations maintain data audit trails and they are subject to audits by internal and external auditors.

      Whether the auditors assume good faith or not, mistakes are so prevalent that it ought to be assumed by auditors that they are present.

    • Steven Mosher

      “Whether the auditors assume good faith or not, mistakes are so prevalent that it ought to be assumed by auditors that they are present.”

      I can tell you with CERTAINTY that there are mistakes in our product.
      It is not a can of Pringles.

      Let’s start from the top.

      1. De-duplication of stations.
      We decide algorithmically when two stations are the same or different.
      Starting with over 100K stations, we reduce this to 40K unique.
      There WILL BE errors in this, even if our algorithm were 99% perfect.
      Central Park was a nightmare of confused source data.
      Another user pointed out an error that led to a correction of 400 stations.
      There are errors in the EU where the metadata indicates two stations
      and some users insist that historically there was only one.
      These errors don’t affect the global answer, but the local detail will
      not be the best you could do with a hand check of every station record.

      2. The climate regression. We regress the temperature against elevation
      and latitude. This captures over 90% of the variation. However, these
      two variables don’t capture all of the climate. Specifically, if a station is in an area of cold-air drainage, the local detail will be wrong in certain seasons.
      Next, because SST can drive temps for coastal stations and because the
      regression does not extract this, there will be stations where the local detail is wrong. However, adding distance to coast doesn’t remove any
      variance on the whole, so the global answer doesn’t change. If you’re really interested in the local detail, then you would take that local area and do
      a targeted modelling effort.

      3. Slicing. The slicing can over-slice and under-slice. It relies on metadata
      and statistical analysis, so there will be cases of over-slicing and under-slicing. This is one area where we can turn the slicing knob and see the effect. There will be a local effect and a global effect.

      4. Local detail. One active question under research is how high a resolution we can drive to. Depending on the choices we make, we can oversmooth the local detail or undersmooth it. Some groups, like PRISM, drive the resolution down to sub-30-minute grids; this tends to give answers that are thermodynamically suspect. On the other hand you have CRU, which works at 5 degrees.
      Now, you can play with this resolution, from 5 degrees down to 1/4 degree.
      What you find is that the global answer is stable, but the local detail increases.
      The question is: “is this local detail accurate?”

      The question of bad faith is this: are these errors, which we freely admit, the result of my libertarian political agenda? Or Zeke’s more liberal political agenda? Please decide which one of our agendas created these errors that we freely admit to.
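      To give a rough feel for the climate regression in point 2, here is a sketch that regresses synthetic station temperatures on latitude and elevation. The coefficients (0.6 C per degree of latitude, 6.5 C per km of elevation) and the noise level are invented for the example; they are not Berkeley Earth’s actual fit.

```python
# Sketch of the climate regression in point 2: regress station temperature
# against latitude and elevation. All data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 500
lat = rng.uniform(25, 50, n)      # degrees north
elev = rng.uniform(0, 3000, n)    # meters

# Invented "truth": cooler poleward and with altitude, plus local noise.
temp = 30.0 - 0.6 * lat - 0.0065 * elev + rng.normal(0, 1.0, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), lat, elev])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)

resid = temp - X @ beta
r2 = 1 - resid.var() / temp.var()
print("intercept, lat, elev coefficients:", beta.round(4))
print(f"variance explained (R^2): {r2:.3f}")  # well over 0.9, as in the comment
```

      Whatever the residuals leave behind (cold-air drainage, coastal SST influence) is exactly the local detail such a regression misses.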

  60. New Rule: Anyone who doesn’t trust the temperature data can’t use that data as evidence for the Pause.

    • Don Monfort

      It’s like this, davey: if the Soviet Union admitted one year that production of cement had declined, you could believe them.

    • Steven Mosher

      David, you are expecting consistency from skeptics.
      They will question the record when it fits their agenda;
      they will endorse the record when it fits their agenda.

      They will ignore that the very first skeptical attempt to construct a record
      (JeffId and RomanM) actually showed more warming.

      • Don Monfort

        Pointing out that their record shows no warming is not necessarily endorsing their record. You know that.

      • Steven Mosher

        Don, citing the record AS PROOF of a pause,
        or citing the record AS PROOF that CO2 is not the cause,
        requires, logically, endorsement.

        Merely pointing is one thing. Citing as proof is another.

        I own a gun.
        You find your enemy dead.
        The bullet matches my gun.
        You argue against the match; you raise doubts.
        You find your dog dead.
        The bullet matches my gun.
        You argue I killed your dog.

      • Mosher will be denying the Pause any moment now.

      • Steven Mosher

        No, Bruce.

        I’m pretty clear on the pause.

        Wishful thinking.

        1. If you assume that the underlying data-generating model is linear,
        2. and you fit a straight-line model to the data,
        3. the model will have a trend. Not the data; data just is.
        4. The trend in that model will have an uncertainty.
        5. Depending on the dataset you select and the time period, you can find a period in the recent past where the trend of the assumed model is “flat”.

        Some people refer to this as a pause, hiatus, slowing, etc.

        It’s just math.
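        For concreteness, a minimal sketch of points 1-4: fit the assumed straight-line model to a synthetic anomaly series and read the trend, with its uncertainty, off the model. The series is invented, and the white-noise error bar here understates what an autocorrelated climate series would give.

```python
# The "trend" is the slope of a model we chose to fit, and it carries an
# uncertainty. Synthetic annual anomalies, roughly 0.1 C/decade plus noise.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1995, 2015) + 0.5
anom = 0.01 * (years - 1995) + rng.normal(0, 0.12, years.size)

# Fit the assumed linear model y = a + b*t.
X = np.column_stack([np.ones(years.size), years])
beta, *_ = np.linalg.lstsq(X, anom, rcond=None)
resid = anom - X @ beta

# OLS standard error of the slope (white-noise residuals assumed; real
# temperature series are autocorrelated, which widens this interval).
dof = years.size - 2
s2 = (resid ** 2).sum() / dof
se_b = np.sqrt(s2 / ((years - years.mean()) ** 2).sum())

print(f"trend: {10 * beta[1]:+.3f} +/- {10 * 2 * se_b:.3f} C/decade (2-sigma)")
```

        Change the window or the assumed model and the “trend” changes with them, which is the whole point.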

      • David Wojick

        “The data just is” with no properties? What a strange concept of reality!

      • Don Monfort

        You are making sweeping generalizations about skeptics, Steven. Maybe you should say ‘some skeptics’ blah…blah…blah. That’s to differentiate yourself from apple and the rest of that mob.

      • Steven Mosher

        “David Wojick | July 7, 2014 at 3:25 pm |
        ‘The data just is’ with no properties? What a strange concept of reality!”

        Yes, David. Data doesn’t have trends;
        the data is what it is.

        You produce a trend by making an assumption and doing math.

        Hmm, go read Briggs. Then come back.
        No link for you; you have to do the work for yourself.

        Hint: you have to choose a model to get a trend. The trend is not ‘in’ the data.

        The trend is in the model applied to the data.

      • “They will question the record when it fits their agenda;
        they will endorse the record when it fits their agenda.”

        Yeah, that’s why I put “pause” in quotes, and refer to the “reported” temperature record.

        A fair number of skeptics I have read doubt, as I do, that anyone knows what the global average temperature/heat content is with the accuracy and specificity claimed. Let alone knows past averages and can predict future temps with the same precision.

        It is a totally different thing to show the flaws in the reported averages (i.e., UHI, uniformity of adjustments, etc.). The argument “Even assuming you are right about A, you are clearly wrong about B” does not admit that you are correct on A.

        “Your reported temperature trends are garbage, but even your own reports undermine your overall theory, because they don’t show the warming you all uniformly predicted.” See how it works?

        But of course, you know all that. You’re just being an obscurantist.

      • Steve Mosher,
        Stop feigning indignation! When did marketing presentations get accepted without argument?

        I enjoy you, but Zeke’s effort speaks for itself. A damn good effort so far (though you weaken his argument by being so over the top). The methods are worth discussing (some questions are fair and some are not), but what is new about that in climate discussion? As R.G.B. at Duke points out continually to everyone (you, on several occasions), there are weaknesses in the physics and arguments on both sides. Deniers?

        Business as usual?
        The 1988 projections for CAGW (the “science is settled” talking heads) were pretty tough on everyone (even those much smarter than themselves: F. Dyson, etc.). Zeke is doing just fine without winning every point.

      • Steven Mosher

        Don,

        seriously, you should note that Zeke is answering every good question
        with patience and good humor. He amazes me.
        Me? I get to police skeptics who are out of line.
        You could always do that;
        you could be nice and gentle about it.

        But quite frankly, Zeke puts a lot of effort into this stuff. Normally at Lucia’s
        there is 1 troll for every 10 good questioners. But here the ratio is reversed.

        If me pounding on a few off-topic people bugs you, then pull them aside and do it yourself.

      • Don Monfort

        Steven, Zeke is doing fine. He is answering almost every question with plausible explanations. You are not helping him by echoing davey apple’s tarring of skeptics with a broad brush. Isn’t that an off-topic distraction?

        Nobody has answered my question on why the warmest month on record changed from July 1936 to July 2012 with great fanfare, then changed back to July 1936, without a peep from NOAA. Have their own algorithms stabbed them in the back while they remain blissfully unaware?

        https://www.google.com/search?q=july+2012+hottest+month+ever&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a&channel=sb

        http://wattsupwiththat.com/2014/06/29/noaas-temperature-control-knob-for-the-past-the-present-and-maybe-the-future-july-1936-now-hottest-month-again/

        I didn’t see any comment on Anthony’s post by you or Zeke. Would either of you care to comment on the unreported flip flop?

      • Don Monfort

        According to NOAA, one state set a record high temperature in July 2012, while 14 states had their record high temperatures recorded in July 1936. Yet when homogenized and anomalized, July 2012 was declared the warmest month on record.

        http://www.ncdc.noaa.gov/extremes/scec/records

    • Matthew R Marler

      David Appell: New Rule: Anyone who doesn’t trust the temperature data can’t use that data as evidence for the Pause.

      Why? If the pause persists despite (possibly motivated) adjustments, does that not warrant greater credence in the pause?

    • Tom Scharf

      Does the inverse of this rule also apply?

      “Anyone who trusts the temperature data can’t deny this as evidence for the Pause.”

      • Tom Scharf wrote:

        “Anyone who trusts the temperature data can’t deny this as evidence for the Pause.”

        No, not quite. The temperature data by themselves aren’t the evidence. You will have to provide some analysis of them, like demonstrating that there is a “Pause” based on some statistical metric. No?

      • Don Monfort

        I have an analysis for you, perlie. The pause is killing the cause.

      • And that is all fake skeptics have to offer.

      • Don Monfort

        I don’t have time to waste on pause deniers, perlie. That’s all you get.

      • I know, since you are actually not interested in the scientific question at hand. You are just an ideologue, like fake skeptics in general, who try to further their anti-science propaganda whatever their particular economic interest or political or religious motivation is for doing so.

      • Truth.

      • The temperature data by themselves aren’t the evidence. You will have to provide some analysis of them

        The anomaly data already represent an analysis

      • phatboy wrote:

        The anomaly data already represent an analysis

        So, tell me then: how do you derive the assertion about the alleged “pause” from the anomaly data themselves? How do you recognize the “pause”? You don’t need any trend analysis, any statistical metrics, nothing?

      • Don Monfort

        That’s right, perlie. We are all motivated by some combination of ideology, profit and religion. Very scientific. You are going to save the world with that crap.

      • Don Monfort

        Perlie hasn’t heard:

        google.com/search?q=the+climate+pause&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a&channel=sb

      • A graph of the anomaly data is effectively a trend in itself. So you just need to use your eyeballs. Trying to get a trend of what is effectively a trend produces all sorts of wonderful results, as you would know from following the comments of certain individuals.

      • Tom Scharf

        I can look at the temperature trend over the past century and state this trend is increasing over the past 100 years.

        I can look at the same trend over the past 20 years and say this same trend is essentially flat.

        Can you not bring yourself to do that? At all?

        Arguments that 20 years is too short for this analysis, or that other forces are causing this phenomenon, are worth debating, but simply ignoring the trend slowdown (when it was supposed to be accelerating with BAU CO2) is not a very convincing argument.

        Equivocating that the pause means something other than the flat temperature trend-line in the much-monitored and accepted global trend(s) is moving the goalposts.

      • Don Monfort wrote:

        Perlie hasn’t heard:

        google.com/search?q=the+climate+pause&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a&channel=sb

        Obviously, monfie thinks one can prove an assertion regarding a scientific question as true by being able to present a list of links from a Google search for a combination of keywords related to the question. He should try to get a paper published, applying such an approach.

        I know one too:
        https://www.google.com/search?client=ubuntu&channel=fs&q=alien%2Babduction%2Banal%2Bprobe&ie=utf-8&oe=utf-8#channel=fs&q=aliens%2Babduction%2Bprobe

      • Don Monfort

        The pause is a reality, perlie. We don’t have to show you no stinking trends. Everybody knows about it. Google it. Try to catch up, perlie. Stop being a nuisance.

      • Tom Scharf wrote:

        I can look at the temperature trend over the past century and state this trend is increasing over the past 100 years.

        I suspect here you actually mean “positive” instead of “increasing”? The more important fact is that the trend of the surface temperature over the last 100 years is not just positive (ca. 0.073-0.085 K/decade); it is also statistically significant at more than 13 standard deviations.

        I can look at the same trend over the past 20 years and say this same trend is essentially flat.

        Can you not bring yourself to do that? At all?

        To do what? To state falsehoods? Why would I do that? The trend over the last 20 years is not flat. These are the trends (in K per decade) over the last 20 years for the various surface temperature data sets together with the 2-sigma intervals:

        GISTEMP: 0.116+/-0.096
        NOAA: 0.097+/-0.089
        HadCRUT4: 0.101+/-0.094
        Berkeley: 0.126+/-0.094 (ends in 2013)
        HadCRUT4 krig v2: 0.143+/-0.099
        HadCRUT4 hybrid v2: 0.15+/-0.109
        (http://www.ysbl.york.ac.uk/~cowtan/applets/trend/trend.html)

        All positive, and all even statistically significant with more than 2 standard deviations.

        but simply ignoring the trend slowdown (when it was supposed to be accelerating with BAU CO2)

        Who are supposed to be the ones who allegedly said that the trend for every 20-year period would always be larger than the previous one, moving forward year by year? Please provide a quote and proof of source.

        The temperature trends over same-length time periods, e.g. 20 years, have a frequency distribution, too. The individual trends will lie around a median value. In about 50% of the cases they will be larger than the median value, and in the other cases they will be smaller (or about equal to the median value). The shorter the time interval, the wider the distribution (with sufficiently short time periods, zero or even negative trends will be part of the distribution also). No one has claimed that the trends will always only be increasing, just as no one has claimed that CO2 is the only factor influencing temperature variability. This is the next “skeptic” strawman often presented in this context, and it is also hinted at by you here.
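        A quick sketch of that sampling distribution, with invented numbers: a fixed 0.1 C/decade underlying trend plus white noise. Real series are autocorrelated, which widens the spread further.

```python
# Sketch of the point above: same-length (20-year) trends have a spread.
# Simulate many synthetic series with a fixed underlying trend and white
# noise, and look at the distribution of fitted 20-year trends.
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(20)
true_trend = 0.01  # C/yr, i.e. 0.1 C/decade

fitted = []
for _ in range(2000):
    y = true_trend * t + rng.normal(0, 0.12, t.size)
    b = np.polyfit(t, y, 1)[0]  # slope of the fitted straight line
    fitted.append(10 * b)       # convert to C/decade

fitted = np.array(fitted)
print(f"median 20-yr trend:  {np.median(fitted):+.3f} C/decade")
print(f"5th-95th percentile: {np.percentile(fitted, 5):+.3f} "
      f"to {np.percentile(fitted, 95):+.3f} C/decade")
print(f"fraction at or below zero: {(fitted <= 0).mean():.1%}")
```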

        Equivocating that the pause means something other than the flat temperature trend-line in the much-monitored and accepted global trend(s) is moving the goalposts.

        The logical fallacy of “moving the goalposts” applies only if the one who is allegedly doing it had previously defined a normative criterion for something, and then changed it once it was fulfilled. Have I done that? Otherwise, your accusation that I am committing this logical fallacy is false.

    • thisisnotgoodtogo

      Appell, if you say there is no pause, then it’s you who can’t use temp data to say temps rose.

    • thisisnotgoodtogo

      Mosher, cluttering the thread with very many similar comments, says:
      “David, you are expecting consistency from skeptics.”

      While showing his own inconsistency by directing his criticism only at skeptics, as per his agenda, and ignoring Appell’s position that there has been no pause.

      • Steven Mosher

        I beat on David Appell all the time.
        Today is his lucky day.
        It’s pretty simple: police your own team.

      • thisisnotgoodtogo

        Yet in this thread you chose not to notice what he did.
        Instead you chose to protect your investment.
        Police yourself, Mosher.

      • thisisnotgoodtogo

        I thought you said you don’t have a team, Mosher. Why police the team you aren’t on?
        I don’t have one.
        Who could you be talking at?

    • Wrong. The pattern of the global temperature indices is probably roughly correct; only the trend, especially the late-20th-century warming (the AGW period), may be exaggerated. Furthermore, as Don Monfort correctly says, pointing out that the record shows no warming is not necessarily endorsing the record.

      Example:
      http://www.woodfortrees.org/plot/hadcrut4gl/plot/hadcrut4gl/from:1950/detrend:0.4

  61. Don Monfort

    Do we know what the warmest year on record for the U.S. is, today?

  62. Zeke:

    First, I want to thank you for your posts – here and elsewhere. I always read them and learn something, and I really appreciate the time you are contributing.

    I have a couple of questions.

    1. On the issue of adjusting for MMTS versus LiG – I was not clear on whether the LiG (the older style) is being adjusted to conform to the MMTS or vice versa. Could you clarify?

    Also, is one type of instrument more accurate than the other?

    One would assume the MMTS is more accurate than the LiG (just because it is newer) – however I am just guessing that.

    It would seem to make sense to adjust the less accurate to conform to the more accurate, but I just want to clarify which way the adjustment runs.

    2. Time of Observation. This is probably a stupid question – but are the measurements being taken more than once per day? Moving the time of observation from afternoon to morning sounds like we are shifting the time we look at the temperature (like one time) – but that doesn’t make sense to me. I assume we want to capture the minimum and maximum temperature at each site daily – which would seem to require more measurements (hourly or even more frequent). So could you clarify that point?

    In a perfect world – with automated stations, going forward – I would assume we would capture data fairly frequently. In 100 years, with data every minute (or 5 minutes or whatever), we would capture the min/max – is that where we are going?

    3. As to the “changing the past” issue – that is deeply unsettling to me and I assume many others. What is the point of comparing current temperatures to past temperatures if the past changes daily?

    How about doing it both ways and providing a second set of data files where they adjust the new relative to the old, in addition to the old relative to the new? I would love to see the data over time adjusted the other way, just to see the difference.

    4. UHI adjustment. When you write your third post could you perhaps explain the philosophy of this adjustment. I don’t get it. From my point of view we pick a spot and decide to plop a station down. For years it is rural and we have one trend. Then over a decade or so, that spot goes urban and there is a huge warming trend, then once it is urban the trend settles back down and is what it is (just warmer than rural).

    Why do we adjust for that? That station did get warmer during that decade – so what are we adjusting it to? Are we trying to forever make that station be adjusted to read rural even though it is now urban? Or change the rural past to read urban? I just don’t understand the reason for this adjustment if the instrument was accurately reading the temperature throughout its history.

    What if something (like a hot spring or a caldera forming) were to change the reading – would we adjust for that also?

    Anyway – thanks in advance for looking at my lay person questions and hopefully responding.

    Rick

    • Rick,

      NCDC makes a general assumption that current temperature readings are accurate. Any past breakpoints detected in the temperature record (e.g. due to an instrument change) are removed such that the record prior to the breakpoint is aligned with the record after the breakpoint. In this sense, MMTS instruments are assumed to be more accurate than liquid in glass thermometers for min/max temperature readings.
      .
      As far as TOBs go, both LiG and MMTS instruments collect a single maximum and minimum temperature since they were last reset. The issue with TOBs isn’t so much that you are reading the temperature at 10 AM vs. 4 PM, but rather that when you are reading the temperature at 4 PM you are looking at the max/min temps for 12 AM to 4 PM on the current day and 4 PM to 11:59 PM on the prior day. This doesn’t sound like much, but it actually has a notable impact when there is a large temperature shift (e.g. a cold front coming through) between days. I’m writing a follow-up post to look at TOBs in much more detail, but for the time being Vose et al 2003 might be instructive: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/vose-etal2003.pdf

      TOBs isn’t relevant to modern instruments that record hourly temperatures, and certainly not to the new Climate Reference Network that records temperatures every 5 minutes or so.
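      A toy illustration of that mechanism, using made-up hourly temperatures (a hot day followed by a cold front) rather than any real station’s data:

```python
# Toy illustration of the TOBs mechanism described above: with a late-afternoon
# reset, the warm evening of a hot day leaks into the next observation day's Tmax.
# Purely synthetic hourly temperatures; not any station's real data.
import numpy as np

hours = np.arange(48)  # hours 0-23 = calendar day 1 (hot); 24-47 = day 2 (cold front)
temps = np.where(
    hours < 24,
    20 + 15 * np.exp(-((hours - 15) ** 2) / 18.0),  # day 1 peaks at 35 C around 3 PM
    10 + 6 * np.exp(-((hours - 39) ** 2) / 18.0),   # day 2 peaks at 16 C
)

def tmax(start, end):
    """Max temperature recorded over the window [start, end) in hours."""
    return temps[start:end].max()

# True calendar-day maxima (midnight-to-midnight windows):
print("midnight reader:", round(tmax(0, 24), 1), round(tmax(24, 48), 1))

# A 5 PM observer's "day 2" window runs from 5 PM day 1 to 5 PM day 2,
# so the ~32 C reading just after the day-1 peak is credited to day 2.
print("5 PM reader:   ", round(tmax(0, 17), 1), round(tmax(17, 41), 1))
# The hot day is effectively counted twice (35.0 and 32.0), warming the
# monthly mean Tmax relative to a midnight reader.
```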
      .
      For “changing the past”, either way results in identical temperature trends over time, which is what folks studying climate are mostly interested in. It’s not a bad idea to provide both sets of approaches, though it might prove confusing for folks.
      .
      UHI is fairly complicated. The way it’s generally detected is that if one station (say, Reno, NV) is warming much faster than its more rural neighboring stations, it gets identified as anomalous through neighbor comparisons and adjusted back down once it diverges too far from its neighbors. Menne et al 2009 has a good example of this further down in the paper: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/menne-etal2009.pdf

      Our recent paper looked in more detail at the effect of pairwise homogenization on urban-rural differences. It found that homogenization effectively removed trend differences across four different definitions of urbanity, at least after 1930 or so, and did so even when we only used rural stations to homogenize (to reduce any chance of aliasing in an urban signal).
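      For readers who want the flavor of the neighbor-comparison step, here is a minimal sketch. To be clear, this is not NCDC’s pairwise homogenization algorithm (see Menne et al 2009 for that); it only shows how a step change at one station stands out in a target-minus-neighbors difference series. The 0.5 C step in 1980 and the noise levels are invented.

```python
# Minimal sketch of the neighbor-comparison idea behind pairwise
# homogenization. NOT NCDC's actual PHA algorithm (see Menne et al 2009);
# it only shows how a step at one station stands out against neighbors.
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1950, 2014)
regional = 0.02 * (years - 1950) + rng.normal(0, 0.1, years.size)  # shared climate

neighbors = np.array([regional + rng.normal(0, 0.1, years.size) for _ in range(10)])
target = regional + rng.normal(0, 0.1, years.size)
target[years >= 1980] += 0.5  # hypothetical urban-related step in 1980

# Differencing against the neighbor average removes the shared signal.
diff = target - neighbors.mean(axis=0)

# Crude breakpoint scan: find the split that maximizes the shift in means.
def shift(k):
    return abs(diff[k:].mean() - diff[:k].mean())

k = max(range(5, years.size - 5), key=shift)
print(f"breakpoint detected near {years[k]}, "
      f"step of {diff[k:].mean() - diff[:k].mean():+.2f} C")
```

      A real implementation tests many station pairs, requires statistical significance, and attributes each break to the right station; the difference-series trick is the common core.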

      • Zeke:

        Thanks for the answers.

        So for UHI – it sounds like it only gets adjusted relative to its neighbors during the transition from rural to urban and then once fully urban, assuming its trend is similar to its neighbors no further adjustments would need to be made. Is that correct?

      • RickA,

        Yes and no. If a switch from rural to urban introduces a step change relative to neighbors, that will be corrected. If an urban-located station has a higher trend than its rural neighbors due to micro- or meso-scale changes, that will also generally be picked up and corrected. It’s not perfect, however, and some folks (like NASA GISS) add additional urban corrections. For the U.S., at least, it seems to do a reasonably good job of dealing with UHI.

      • Zeke, I don’t suppose there were stations with overlapping max/min thermometer + LiG readings, and then overlapping LiG and MMTS readings?

      • Thanks Zeke. So even a correction for LiG versus MMTS is non-trivial; this is not a simple offset problem, as the two instruments give different Tmax/Tmin offsets in different months.

  63. Conspiracy theorists wonder:
    “Can anyone reach either rankexploits or http://ftp.ncdc.noaa.gov?
    I can’t.
    I’d like to read this stuff…”

      • ???? rankexploits seems to have a hyperactive IP blocker…

      • I guess unless some other people can’t reach them, we’ll just assume it’s my setup here…

      • I’ve had my IP blocked by Lucia’s blog a number of times.

        She uses a blacklist to block IP addresses associated with malicious behavior. Unfortunately, those IP addresses often belong to ranges owned by ISPs who serve many customers. Since any customer can get any (dynamic) IP address within the ISP’s IP range for their area, people can often wind up using IP addresses which have previously been responsible for malicious behavior.

      • Blocking IPs isn’t so great. Tor can get you an IP anywhere in the world you’d like, and anyone really up to something…

      • nickels, Lucia also blocks Tor connections.

      • Ah, nifty. Cleverer than your general IP blocker!

      • But not a very useful site for links since they are blocked…. :(

      • nickels, you can e-mail lucia. She’s pretty good about helping legitimate users access her site.

        As for the other site, it may be a coincidence, but you provided the link as http://ftp.ncdc.noaa.gov. Zeke responded by saying ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/ works for him. As you’ll note, his link begins with ftp, not http. That’s because the link is to an FTP server. That may be why you are having trouble. (Of course, you could have used the right link but typed the wrong one here.)

      • I realize I should have been more careful typing that one:
        ncftp -u anonymous 205.167.25.101
        fails.
        It must be something weird in my firewall. My main intent for the post was just to check in case something was down, in which case others would have chimed in. I do need to get the TOB papers, but I guess I’ll wait until that post comes out and email someone!

      • nickels, there are lots of sites that don’t trust or permit raw FTP. You might try a proxy server for that.

      • Lucia blocks anonymizers.

    • I’m not into conspiracy theory, but links that don’t work don’t help….???

    • I couldn’t reach it either, but the other one I could

    • stevefitzpatrick

      If you contact Lucia by email and explain, she may unblock your IP address. She’s done it for me a couple of times when I have been overseas in ‘bad’ regions. She says: “If you need to contact me: my name is lucia. The domain name is ‘rankexploits.com’. Stuff an @ in between and send me an email.”

      • Given the fragmented references I find to rankexploits, that would be a bit of a pain… but it’s a nice offer… if it’s a critical paper I’ll do it.

  64. If you have two piles of stations, a la Mosher’s comment, and you compare the two, let’s say they give similar results.

    So, it could be that all the thermometers have no problem, and they compare well.

    Or, it could be that many thermometers in both piles have similar problems, but they still compare well.

    So, what degree of confidence does this sort of testing give me? Only that the results from the two piles are similar, not that the overall result from all of them is accurate or meaningful.

    • Windchasers

      jim2, now that we have accurate hourly/daily records, we can predict what the time of observation bias (TOB) would be, if we were still recording temps the same way today as 50 years ago. Which means we can build a model for the TOB *just* off of the high quality CRN stations, if we want. Then we can apply that model to the old records, get the adjusted temps, and compare those temps to those of the gold-standard stations. They should match up.

      This is a pretty good test, since the ‘piles’ are different. It’s *out*-of-sample testing, not in-sample testing.

      There are other ways to test the TOB adjustments. One way the TOB shows up in the temperature record is with a fingerprint of reduced intradaily variability. It’s from max or min temperatures effectively being counted twice, and how often the double-counting occurs depends on the time of day that temperatures were recorded, as well as how quickly temperatures change from one day to the next.

      Based on the modern hourly/daily records, we can say that if the temperature was recorded at, say, 4 pm every day, then we should have X number of double-recorded days. So look at the historical data for days recorded at 4pm. Do we see a number close to X? Yes.

      Or we can turn it around: can we just look at X and infer the time of day that the data was recorded? Also yes.

      There may be other ways of checking the TOB adjustments that I’m missing. These are just a few off the top of my head and from reading some of the papers that Zeke linked.
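      Here is a sketch of the double-count fingerprint described above, with synthetic hourly data; the diurnal shape, the day-to-day variability, and the 5 PM observation hour are all assumptions for illustration.

```python
# Sketch of the "double-count fingerprint": simulate a year of synthetic
# hourly temperatures, then count how often a 5 PM observer's daily Tmax
# is really a carry-over from the previous day's late afternoon.
import numpy as np

rng = np.random.default_rng(4)
n_days, reset = 365, 17  # hypothetical 5 PM observation time

# Daily mean with a seasonal cycle plus synoptic (front-to-front) noise.
daily_mean = 15 + 10 * np.sin(2 * np.pi * np.arange(n_days) / 365)
daily_mean += rng.normal(0, 3, n_days)

hour = np.arange(24)
diurnal = 5 * np.exp(-((hour - 15) ** 2) / 18.0)  # afternoon peak
temps = (daily_mean[:, None] + diurnal[None, :]).ravel()

double_counted = 0
for d in range(1, n_days):
    # Observation day d covers the reset hour of day d-1 up to the reset
    # hour of day d.
    start = (d - 1) * 24 + reset
    window = temps[start:start + 24]
    # The first (24 - reset) entries of the window belong to day d-1's
    # evening; a max there means yesterday's heat is counted again today.
    if window.argmax() < 24 - reset:
        double_counted += 1

print(f"{double_counted} of {n_days - 1} observation days at a {reset}:00 "
      f"reading re-use the previous day's heat")
```

      Changing the assumed reset hour changes the count, which is exactly why the double-count frequency can be used to infer the observation time.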

  65. Thanks, Zeke, your efforts are appreciated by at least some of us.

  66. Zeke: Have there been any adjustments to the USHCN data based on USCRN observations?

    • Nope. While the full co-op network is used in the pairwise homogenization process, the USCRN network is not. However, from a CONUS-wide standpoint, USCRN and USHCN have been identical since USCRN achieved U.S.-wide coverage in 2004/2005: http://rankexploits.com/musings/wp-content/uploads/2014/06/Screen-Shot-2014-06-05-at-1.25.23-PM.png

      • @ Nick Stokes

        I followed your link to Watts’ 2011 post and found this comment, which is a pretty good summary of my opinion of climate data and the analysis thereof. I might add that the adjusting, infilling, correcting, kriging, etc. described by Zeke are superimposed on the basic problem described by Ms. Gray in her comment to Watts: we have no intrinsic method of separating signal and noise, even given pristine temperature records, and the existing records are anything BUT pristine. And they can’t be made so.

        “Pamela Gray says:
        March 6, 2011 at 8:16 am
        When I was searching for a signal in noisy data, I knew that I was causing it. The system was given a rapidly firing regular signal at particular frequencies. By mathematically removing random brain noise, I did indeed find the signal as it coursed through the auditory pathway and it carried with it the signature of that particular frequency. The input was artificial, and I knew what it would look like. It was not like finding a needle in a haystack, it was more like finding a neon-bright pebble I put in a haystack.

        Warming and cooling signals in weather noise is not so easy to determine as to the cause. Does the climate hold onto natural warming events and dissipate it slowly? Does it do this in spurts or drips? Or is the warming caused by some artificial additive? Or both? It is like seed plots allowed to just seed themselves from whatever seed or weed blows onto the plot from nearby fields. If you get a nice crop, you will not be able to say much about it. If you get a poor crop, again, you won’t have much of a conclusion piece to your paper. And forget about statistics. You might indeed find some kind of a signal in noise, but I dare you to speak of it.

        This is my issue with pronouncements of warming or cooling trends. Without fully understanding the weather pattern variation input system, we still have no insight into the theoretical cause of trends, be they natural or anthropogenic. We have only correlations, and those aren’t very good.

        So just because someone is cleaning up the process, doesn’t mean that they can make pronouncements as to the cause of the trend they find. What goes in is weather temperature. The weather inputs may be various mixes of natural and anthropogenic signals and there is no way to comb it all out via the temperature data alone before putting it through the “machine”.

        In essence, weather temperature is, by its nature, a mixed bag of garbage in. And you will surely get a mixed bag of garbage out.”

  67. Curious George

    Zeke, thank you for an explanation of what is going behind the scenes. I’ll need time to digest your text. Meanwhile, one question is in my mind: Is the treatment of data that you describe a standard statistical technique? Can you estimate how many professional statisticians are involved?

    • Hi Curious,

      David Brillinger was involved in the design of the Berkeley approach. Ian Jolliffe and Robert Lund are involved in the benchmarking process for homogenization through the International Surface Temperature Initiative. I’m sure there are a few more folks who are “professional statisticians”; I know a number of the scientists have degrees in mathematics but aren’t professional statisticians.

  68. Oops, I had an unfortunate typo in the article. When I said “There are also significant positive minimum temperature biases from urban heat islands that add a trend bias up to 0.2 C nationwide to raw readings”, I should have said “There are also significant positive minimum temperature biases from urban heat islands, with urban stations warming up to 0.2 C faster than rural stations”. The two are not the same, as not all the stations in the network are urban.

    • A fan of *MORE* discourse

      Scottish Sceptic gets juvenile   “Would you be happy with a bank statement with ‘adjusted’ figures?”

      Matthew R Marler wears blinders  “There are people, including auditors, who do sample financial records …”

      Climate Etc readers are invited to verify for themselves that auditors require a 180-page code of ethics to even *BEGIN* to grapple with the ‘adjustment practices’ of the financial world that are *ACCEPTED* and *LEGAL*.

      In a nutshell, nowhere in business or finance or insurance do we *EVER* encounter numbers that are “unadjusted.”

      Conclusion Skilled climate-auditors like Zeke Hausfather and Steven Mosher — and team efforts like Berkeley Earth (BEST) and the International Surface Temperature Initiative (ISTI)  — deserve all of our appreciation, respect, and thanks … for showing us a world whose warming is real, serious, and accelerating.

      Of course, there are *PLENTY* of Climate Etc conspiracy theorists and/or astroturfers who *INSIST* that Zeke and Steve and BEST and ISTI are one-and-all agents of a vast conspiracy.

      Of course, no amount of reason and evidence *EVER* convinces a conspiracy theorist, eh Climate Etc readers?

      But what Zeke and Steve and BEST and ISTI are showing us *is* enough to convince the next generation of young scientists. And in the long run, that’s what matters, eh?

      Good on `yah, Zeke and Steve and BEST and ISTI!


      • Rud Istvan

        Real, yes, though to what degree is debated here concerning USHCN. Partly unreal owing to homogenization, also debated here with respect to the quality thereof. That’s what happens when the world emerges from an LIA caused by natural variation.
        Serious depends on other context, not debated here.
        Accelerating, no. That darned pause again, even showing up in BEST.

      • Steven Mosher

        FAN, ISTI is really cool.

        It is everything we asked for after Climategate.

        Even more cool: they have 2000 stations that we don’t have.

        So,

        our approach makes a prediction about what “would have been recorded” in every location where we had no data.

        Now, thanks to data recovery, ISTI has additional sources,
        sources that we did not use in constructing our prediction.

        Do you think skeptics will make their own predictions about what this out-of-sample data says?

        I think not.

      • “In a nutshell, nowhere in business or finance or insurance do we *EVER* encounter numbers that are ‘unadjusted.’”

        Yes, ENRON adjusted its financial numbers, and so does Berkshire Hathaway.

        Saying everybody adjusts numbers tells you precisely nothing about how accurate the adjustments are.

        The primary problem skeptics have in temperature trends is that virtually every reported adjustment in trends results in lower figures for the past, and warmer figures for the present. The most famous “adjustment” being the hokey stick. (Yes that’s paleo, not temp measurements, but the principle seems to work the same in both.)

        As an industry, the CAGW consensus is always “discovering” that “it’s worse than we thought,” including in temperature reports. And the apparent total lack of any skeptic involved in generating these adjustments just makes that less acceptable as mere coincidence.

        But the alternatives are not an evil conspiracy of BEST, NOAA, et al, and pure, pristine, accurate, precise temperature trends. Confirmation bias, faulty shared assumptions, shared over confidence in the raw data and the accuracy of the adjustments are more likely to cause bad results than any conspiracy.

        For example, I don’t see Mosher as being willing to engage in any conspiracy even if there were one. But I also know that he has tied his entire sense of self to defending climate models and temperature reports. He has spent years ridiculing those who disagreed with him or questioned the results he defends. So I simply do not see him as a credible check on his fellow tribesmen.

      • Windchasers

        But the alternatives are not an evil conspiracy of BEST, NOAA, et al, and pure, pristine, accurate, precise temperature trends. Confirmation bias, faulty shared assumptions, shared over confidence in the raw data and the accuracy of the adjustments are more likely to cause bad results than any conspiracy.

        So challenge those biases and assumptions. Point out flaws in the methodology. Improve it!

        This is how science progresses. Get educated on the problem, then make it better. Don’t just sit around wringing your hands and talking about potential biases.

        I was unconvinced, so I got educated on the subject. I read the literature, checked the data, checked the calculations, and now I’m pretty satisfied with the adjustments. But that takes work, and most people aren’t going to bother doing it. It’s far easier to just be suspicious than it is to do your DD.

        And the apparent total lack of any skeptic involved in generating these adjustments just makes that less acceptable as mere coincidence.

        No one’s saying that the one-sidedness of adjustments are the result of coincidence. They’re the result of how we recorded data in the past and how we record it now, and the well-documented biases that result.

      • “Mosher
        Do you think skeptics will make their own predictions about what this out of sample data says?

        I think not.”

        I expect to see waves of heat crashing into the Western and Eastern seaboards, matching the Atlantic and Pacific ocean warming/cooling cycles.

        You can go to the McDonald’s website and enter a ZIP code, and it will give you the nearest 5 McDonald’s and the distances.
        My guess is that if you prepare a McDonald’s index, Dist1/1 + Dist2/2 + Dist3/3, you will find that the areas with the highest McDonald’s index have the greatest level of warming.

      • Steven Mosher

        “The primary problem skeptics have in temperature trends is that virtually every reported adjustment in trends results in lower figures for the past, and warmer figures for the present.”

        Yes. Exactly as they should.

        For example: when you change instruments from type A to type B,
        you can expect there to be a bias. The bias will be up, or the bias will
        be down. If the bias is zero, well then that’s no bias. Duh.

        So the change to MMTS caused a bias.
        How much?
        What direction?
        Easy: test them side by side.
        Yup, that science was done.

        Read it for a change of pace.

      • Matthew R Marler

        A fan of *MORE* discourse: Of course, there are *PLENTY* of Climate Etc conspiracy theorists and/or astroturfers who *INSIST* that Zeke and Steve and BEST and ISTI are one-and-all agents of a vast conspiracy.

        Plenty?

      • Fan once again brings the AICPA ethics links. Keep those coming! Many adjustments are of the kind: no, you’re not worth quite as much as you think, and no, you didn’t make quite as much as you thought. Some of these are timing differences. If the client is asked to show a bit less income in the current period, generally at least some of that income will simply be pushed into the following time period, though this is an extremely simplified example and each client has a unique situation. This conservative approach has served them well for a long time.

  69. Absence of correlation = absence of causation.

    There is no correlation between planetary climate (Earth’s paleoclimate, or Venus) and CO2 concentration. Your theory (and your models) may say that there “should” be warming, but the real world says it ain’t happening.

    The hypothesis “CO2 causes warming” is falsified by this lack of correlation (except in reverse — warming driving increased CO2). This is why I rule in favor of those -protesting- data diddling — no matter how noble the purposes or intentions of the data-diddlers.

    Data-diddling to try to show that CO2 causes warming is AT BEST some true believer trying to salvage his or her career claims with fancy hand waving. (“At worst” is left as an exercise for the reader.)

    A scientist worthy of the name says, “Oh, look at that, the hypothesis was wrong” and moves on.

    • Windchasers

      mellyrn,

      The temperature record should stand on its own, regardless of any imputed effects from CO2 or anything else. It’s a non-sequitur to say that the adjustments are wrong because scientists are trying to show that CO2 causes warming.

      You should either find a legitimate problem with the adjustments, or you should accept them.. but your acceptance of the temperature data should not be based on what you think about CO2.

      Just focus on the data. That’s how science is done.

    • Steven Mosher

      Off topic.

      This is about adjustments to the temperature record.

      people who don’t want to understand change the topic

  70. “TOBs adjustments affect minimum and maximum temperatures similarly, and are responsible for slightly more than half the magnitude of total adjustments to USHCN data.”

    I’m just a novice at this stuff, but how is this possible?

    If you take a reading at 5 PM, I can understand how a hot day might be double counted, and thus influence the average Tmax for the month. But how would the Tmin for the month possibly be affected?

    If you take a reading at 7 AM, I can understand how a cool morning might be double counted, and thus influence the average Tmin for the month. But how could Tmax for the month possibly be affected?

    For a station that switched observation time from late afternoon to morning, there should be a TOBS adjustment to reduce the Tmax prior to the switch, and a TOBS adjustment to raise the Tmin after the switch. Once a station is reading at 7 AM, there should be NO additional TOBS adjustment applied to Tmax. Likewise, there should be NO TOBS adjustments applied to Tmin prior to the switch.

    • Windchasers

      Once a station is reading at 7 AM, there should be NO additional TOBS adjustment applied to Tmax. Likewise, there should be NO TOBS adjustments applied to Tmin prior to the switch.

      Yep, that’s right. We used to record temps in the afternoon back in the ’30s, and later that was changed to the morning. So the raw data had a hot bias in the past, and a cold bias now.

      “TOBs adjustments affect minimum and maximum temperatures similarly”

      I’d wager this means that the hot bias from measuring near the hottest part of the day is about as big as the cold bias from measuring near the coldest part of the day. Same magnitude, opposite sign of bias.

      • It seems that a much simpler and more logical way to estimate the trend over time would be to track the change in Tmin temps prior to the switch, then the Tmax temps after the switch, where NO ADJUSTMENT would be necessary.

        Why pollute the dataset by using averages that require adding in temps that are clearly biased by the time readings are being taken?

      • Steven Mosher

        write it up KTM
        get it published
        be a hero

    • Steven Mosher

      how is this possible?

      1. Read the posts on the skeptical site run by John Daly. It’s explained.
      2. Read the posts on CA. It’s explained.
      3. Read the papers Zeke linked to. It’s explained.
      4. Wait for the second post in the series, where it will be demonstrated for the umpteenth time.

      • I guess my main critique is how the data is being presented. According to the graph, the Tmax TOBS adjustment was near zero in the past, and is currently near +0.2C. This makes no sense: Tmax TOBS adjustments should be large in the past and near zero today.

        I think it would be much more informative and accurate to show what the actual TOBS adjustments are for Tmin and Tmax over time. The two curves would not overlap, since they are being applied very differently over time.

        I also question the logic behind making all these adjustments, since it is possible that even at a midnight reading you could get double-counting of cold temps on two consecutive days. Why set the standard for USHCN at midnight when the vast majority of observations are being made at other times?

        Also, where are the error bars for these graphs?

      • Steven Mosher

        “This makes no sense,”
        it does make sense.
        read harder.

  71. Alexej Buergin

    Now is the time to repost this:

    “A C Osborn | July 2, 2014 at 2:34 pm | Reply
    You jest, BEST Summaries show Swansea on the South West Coast of Wales in the UK a half a degree C WARMER than LONDON.
    Now anybody living in the UK knows that is not correct due to location and UHI in London.
    It also shows Identical Upward Trends for both areas of over 1.0C since 1975, obviously BEST doesn’t know that the west coast Weather is controlled by the Ocean and London by European weather systems.
    So what does the Met office say about the comparison, well they show that on average Swansea is 0.6 degrees COOLER than London.
    So who do you believe, The people who live in the UK and the Met Office or BEST who have changed the values by 1.1 degrees C?”

    • Steven Mosher

      The values are not changed.
      you are looking at an expected value, not data.

      next, this post is about NOAA.

      stay on topic.

  72. Alexej Buergin

    According to the Icelandic WXmen, the adjusted average temperature in Reykjavik 1940 was 5°C. According to GISS, it was 3°C.

  73. Alexej Buergin

    I never look at the numbers from GISS, and I do not read the (very long) posts by Mr. Hausfather.

    • Alexej

      If you do not read the information, I hope you will not complain if they show something that you do not agree with?

      Zeke has gone to a lot of trouble to post information, the least denizens can do is read it

      Tonyb

      • Alexej Buergin

        On NASA-GISS I would refer to Astronauts Schmitt and Cunningham.
        On the “lot of trouble” I agree, but my experience is this: If you really, really understand something, you can explain it in one paragraph.

      • Matthew R Marler

        Alexei Buergin: If you really, really understand something, you can explain it in one paragraph.

        You can be terse, clear, accurate, and complete, but generally not more than 2 at a time. Zeke Hausfather achieved an excellent balance: not too long, real clear, accurate, and with links to more complete details.

      • No. You can really understand something but be unable to describe it adequately because of poor communication skills.
        Equally, you can be a good communicator but have a poor understanding of your subject.

    • Steven Mosher

      another example of a denizen who does not want to understand.

      • Alexej Buergin

        Actually, I would like to understand how anybody could get the results mentioned (Reykjavik, Swansea/London). But nobody wants to (or can) explain that, and they are obviously wrong.

      • Steven Mosher

        huh I explained.
        go read harder.

      • Alexej Buergin

        Your “explanations”:

        Reykjavik: “GISS is not NCDC.”
        We agree that GISS is producing Dreck?

        Swansea/London: “expected value, not data”.
        If by “expected value” you mean the sum of T(i)*p(i), that should not change the fact that Swansea is cooler than London (and the ridiculously named BEST is nonsense here).

      • Steven Mosher

        “Swansea/London: “expected value, not data”.
        If by “expected value” you mean the sum of T(i)*p(i), that should not change the fact that Swansea is cooler than London ”

        No.

        there is no changing of the fact.

        We create a model to estimate the temperature WHERE IT WASN’T MEASURED. To do that we create a model:

        T = C + W + e

        The climate (C) of a place is estimated via regression as a function of latitude, altitude, and time of year (season).

        The raw data is used to fit this surface.

        The surface is subtracted from the raw data to create a residual.

        The residual is W: the weather.

        Now, since the model is simple (lat, alt, and season), the residual WILL contain some structure that is not weather but is actually climate.

        These cases can be handled two ways:

        A) increase the terms in the regression, e.g. coastal/non-coastal
        B) keep the simple regression, because these cases are small in number
        and zero-biased.

        We do B. That means you will find a small number of cases
        where the expected value of the model deviates from the raw data.
        This happens in places where the climate is NOT dominated by latitude, altitude, and season: for example, places where coastal/seasonal effects dominate.

        To test this we add a variable for coastal to the regression.
        Yes, we see local changes, BUT the R^2 stays the same: no additional variance is explained, so adding it to the model doesn’t change the overall performance of the estimate.

        We have a couple of ideas for how to squeeze some more explanatory power out of the regression, but we would only be fiddling with local detail and not the global answer.
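
        For concreteness, a minimal sketch of this decomposition in Python, with entirely invented stations, coefficients, and noise levels; the real Berkeley Earth method (kriging, seasonality, outlier weighting) is far more elaborate:

          import numpy as np

          rng = np.random.default_rng(0)
          n = 500
          lat = rng.uniform(25, 50, n)     # hypothetical station latitudes (deg N)
          alt = rng.uniform(0, 3000, n)    # hypothetical elevations (m)
          # Synthetic "truth": cooler with latitude and altitude, plus weather noise
          temp = 30.0 - 0.6 * lat - 0.0065 * alt + rng.normal(0, 1.5, n)

          # Fit the climate surface C as a least-squares regression on lat and alt
          X = np.column_stack([np.ones(n), lat, alt])
          coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
          climate = X @ coef
          weather = temp - climate         # the residual W (plus model error e)

          print("fitted lat/alt coefficients:", coef[1], coef[2])
          print("residual std (the 'weather'):", weather.std())

        The handful of stations whose climate is not well described by latitude and altitude (coastal sites, cold-drainage valleys) will sit systematically off the fitted surface, which is exactly the expected-value-versus-raw deviation described above.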

  74. Speaking of adjusters, does Gavin Schmidt still believe that the MWP did not really exist…?

    • As a global phenomenon happening all over the world at the same time?

      • In terms of a global phenomenon, it seems that rather than regions (which have always cooled and warmed during global warming or cooling trends), the metric used should be rising sea levels, which have been occurring throughout our current interglacial period (roughly 10,000 years).

        So one could compare rates of sea level rise during the MWP, the LIA, and the current period in which we are recovering from the Little Ice Age: the time period after 1850.

      • Claimsguy

        Is the modern warming period happening synchronously everywhere in the world?

        Tonyb

      • “Before the most recent Ice Age, sea level was about 4 – 6 meters (13 – 20 feet) higher than at present. Then, during the Ice Age, sea level dropped 120 meters (395 ft) as water evaporated from the oceans precipitated out onto the great land-based ice sheets. The former ocean water remained frozen in those ice sheets during the Ice Age, but began being released 12,000 – 15,000 years ago as the Ice Age ended and the climate warmed. Sea level increased about 115 meters over a several thousand year period, rising 40 mm/year (1.6″/yr) during one 500-year pulse of melting 14,600 years ago. The rate of sea level rise slowed to 11 mm/year (0.43″/yr) during the period 7,000 – 14,000 years ago (Bard et al., 1996), then further slowed to 0.5 mm/yr 6,000 – 3,000 years ago. About 2,000 – 3,000 years ago, the sea level stopped rising, and remained fairly steady until the late 1700s (IPCC 2007). One exception to this occurred during the Medieval Warm Period of 1100 – 1200 A.D., when warm conditions similar to today’s climate caused the sea level to rise 5 – 8″ (12 – 21 cm) higher than present (Grinsted et al., 2008). This was probably the highest the sea has been since the beginning of the Ice Age, 110,000 years ago. There is a fair bit of uncertainty in all these estimates, since we don’t have direct measurements of the sea level.”
        http://www.wunderground.com/blog/JeffMasters/sea-level-rise-what-has-happened-so-far

    • Steven Mosher

      changing the subject.
      doesn’t want to understand.

    • A fan of *MORE* discourse

      Wagathon “[smears Gavin Schmidt]”

      Wagathon, your personal endorsement of the Harold Faulkner/Save America Foundation climate-change worldview and the novel economic theories of its associated Asset Preservation Institute are enthusiastically supported by the world’s carbon-asset oligarchs and billionaires.

      That’s how it comes about that *EVERYONE* appreciates the focus of your unflagging efforts, wagathon!


      • I did find Gavin’s comment a little amusing because in fact 8,000 years ago, at a peak of warming much higher than today, you know what the climate people call it? The climate optimum. In other words it’s actually perceived as more optimal in terms of vegetation and other factors. ~Philip Stott

  75. It should be noted that there are at least two uses of the word “data”:
    – information output by a sensing device or organ that includes both useful and irrelevant or redundant information and must be processed to be meaningful.
    – information in numerical form that can be digitally transmitted or processed.

    The raw temperature measurements, along with instrument quality information and locations, are data in the first and second senses. Adjusted temperatures and anomalies are data only in the second sense.

    To adjust historic instrument readings seems sloppy measurement practice and will produce poor scientific thinking; e.g., the adjusted results are estimates of the temperature record, not the record itself. These estimates should be reported with error bounds, which had better span the measurements too.

    • Windchasers

      The raw temperature measurements, along with instrument quality information and locations, are data in the first and second senses.

      Sure, but it’s rather useless data by itself, sans adjustment. Even a spatial average of temperature is some sort of “adjustment”; to get to the national temperature chart we have to start applying math. And once you start using math, it’s math all the way down. ;-)

      Basically, you can’t take something like a 7am temperature reading in New Jersey and two 4pm temperature readings in Illinois and build a national temperature out of them. You have to adjust for spatial weighting of the records, for the time of day that the temperature was observed, etc. Otherwise it’s an apples-and-oranges comparison of data.
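
      A toy illustration of the spatial-weighting point, with an invented two-region network: a plain mean over-weights whichever region has more stations, while averaging regions first gives each area one vote.

        import numpy as np

        # Ten stations in oversampled region A (anomaly +1.0), one in region B (-1.0)
        anoms = np.array([1.0] * 10 + [-1.0])
        region = np.array(["A"] * 10 + ["B"])

        naive = anoms.mean()  # ~0.82, dominated by the oversampled region
        gridded = np.mean([anoms[region == r].mean() for r in ["A", "B"]])  # 0.0

        print(naive, gridded)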

      To adjust historic instrument readings seems sloppy measurement practice and will produce poor scientific thinking — eg, the adjusted results are estimates of the temperature record, not the record itself

      It’d be far worse to not adjust them.

      But “estimate” vs “record”? Not really relevant. Any record is itself just an estimate, as no data-recording equipment is completely perfect. We aim for good enough, not perfect. We don’t need to be measuring temperature to millionths of a degree and every few milliseconds in order to get pretty solid data about the temperature.

      So there’s not much point in being pedantic about whether something is an “estimate”. It’s all estimates. The question is always “how good are they?” And here, they’re pretty good.

      • Your reply “all are estimates” shows that you could use several courses in experimental physics.

        Of course measurements are estimates — now who’s being pedantic? The point missed is that measurements occupy a unique position in physics reasoning. Measurements should not be changed, but that doesn’t mean they must all be treated alike in estimating the past. It means only that estimates and their error bounds should reconcile with the measurements they estimate.

      • Windchasers

        Measurements should not be changed, but that doesn’t mean they must all be treated alike in estimating the past. It means only that estimates and their error bounds should reconcile with the measurements they estimate.

        I agree. And as far as I can tell, that’s being done. They identify the errors, they derive the adjustments, and they test the adjustments, giving a range on the errors for the adjusted data.

        I appreciate the BEST work, which generally includes error bars on their temperature charts. I’d love to see that done more consistently by the other groups, though, and not just see the adjustment error estimates left in the literature.

        But I don’t think it makes a lot of difference for the big picture. The errors in the adjustments are relatively small.

    • Steven Mosher

      Phillip.
      you do realize that many of the raw records are not information output by a sensing device.

      Prior to the automation of reporting, a human walked out to a thermometer looked, rounded, and wrote a number down.

      and none of the reports actually report the physical property of temperature.

      • Mosh,
        I think you must be confused about what is a measurement, unless you think an old physics professor of mine at Ga Tech was wrong to teach measuring length of objects using the human eyeball and a meter-stick to record rounded values with estimates of error. Humans can be and were part of the sensing and recording of measurements and these measurements are what you have, so work with them.

        As to whether ” the reports actually report the physical property of temperature,” I have no idea what you mean. Is it that thermometers don’t really measure the same property that today’s device do? If so, we need a whole new discussion.

      • Steven Mosher

        no Phillip, I was just trying to make sure you actually understand what the records are.

        as for what they measure.

        tell me how an LIG thermometer works.

  76. “If one station is warming rapidly over a period of a decade a few kilometers from a number of stations that are cooling over the same period, the warming station is likely responding to localized effects (instrument changes, station moves, microsite changes, etc.) rather than a real climate signal.”

    Until the 1970s, there were fewer than 1000 stations in the US according to this NOAA chart, and less than 2000 until about 2005. (I don’t know if all of these were used in generating the USHCN data sets, but if there were fewer, my following questions remain).

    http://www.ncdc.noaa.gov/oa/climate/isd/caption.php?fig=station-chart

    In how many locations are there “a number of stations” “a few kilometers” from one another?

    There are approximately 9.6 million square kilometers in the continental US. (If Alaska is included in the network, the number obviously goes up.)
    By my rudimentary math, with 1000 stations, that’s 9,600 square kilometers per station. (Whew, I need a nap.) With 2000 stations (here the math gets hard), that’s 4,800 square kilometers per station.

    If you have one station “within a few kilometers” of several other stations, and being generous and defining “a few” as four, then you have three or more stations within a roughly 16-square-kilometer area.
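
    As a quick check of this arithmetic (the implied typical station spacing is the square root of the area per station), a short sketch:

      conus_km2 = 9.6e6  # the commenter's figure for the area in question
      for n_stations in (1000, 2000, 7000):
          area = conus_km2 / n_stations
          print(n_stations, "stations:", round(area), "km^2 each,",
                round(area ** 0.5), "km typical spacing")
      # 1000 -> 9600 km^2 (~98 km spacing); 7000 -> ~1371 km^2 (~37 km spacing)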

    Now I can see how that would happen in the real world; you want measurements where the people are. But it raises a couple of questions that may well have been answered, but I have not seen the answers and am curious.

    Here in the Chicago area, a few kilometers can make a real difference in temperature regardless of time of day. And the differences are not uniform – Skokie is not always warmer than O’Hare; the Chicago lakefront is not always cooler than Schaumburg.

    So:

    Question 1: Are the numbers above correct, or even close, as far as area covered per station?

    Question 2: Don’t urban stations require more and broader adjustments, not just for UHI, but in general?

    Question 3: Are urban stations weighted differently because of their proportionally greater number in determining trends?

    Question 4: Are stations not “within a few kilometers” of several others, ever similarly adjusted, and if so, how?

    • Steven Mosher

      “Until the 1970s, there were fewer than 1000 stations in the US according to this NOAA chart, and less than 2000 until about 2005. ”

      WRONG.

      those are just ISD stations.

      the entire population of stations is substantially larger.

      If you seek understanding do not pull random charts from the internet.

      Go to sources.
      All the sources.

      • At least Zeke Hausfather mentioned that the larger number of stations is used for homogenization, rather than your obscurantist tack of implying they are included in the average.

        His figure 1 in the main post referenced ” Global (left) and CONUS (right) homogenized and raw data from NCDC and Berkeley Earth.” That is why I sought the number of NCDC stations.

        I missed this reference in the post: “A subset of the 7,000 or so co-op stations are part of the U.S. Historical Climatological Network (USHCN), and are used to create the official estimate of U.S. temperatures.”

        But while the number of stations changes the math, it does not answer the underlying question. Whether 2,000, 7,000 or 10,000, I do not see how all the stations, as he says elsewhere in this thread, have several others within “a couple kilometers” of them.

      • The first sentence should have been deleted, poor editing. I saw that the reported average does include 7,000 stations.

      • Steven Mosher

        “At least Zeke Hausfather mentioned that the larger number of stations is used for homogenization, rather than your obscurantist tack of implying they are included in the average.”

        I implied no such thing

        in a discussion about USHCN, you linked to an unverified chart of a different dataset entirely.

        obfuscator.

      • Mosher,

        That point I caught myself, as I noted in my second comment. I just failed to delete the snark before posting. My bad.

        But of the 4 questions I asked, Zeke Hausfather half answered one and neither of you addressed the other 3. Which is fine. No one is under any obligation to respond. But I read this thread as an attempt to address the concerns skeptics have regarding reported temps. An admirable goal. Sort of like Gavin Schmidt agreeing to answer all questions at Keith Kloor’s…once.

        But no answers are of course required.

        I am guessing his claim that each of the stations is “within a couple kilometers” was just a bit of hyperbole. I just don’t see that sort of coverage given the numbers.

    • Windchasers

      Here in the Chicago area, a few kilometers can make a real difference in temperature regardless of time of day.

      Heck, you can get big changes in temperature over just a few hundred feet, if the elevation change is big enough. I grew up at the base of a hill in Florida, and the top of the hill was consistently warmer than the bottom.

      But the real question is the temperature anomaly. Do the temperatures at the top and the bottom of the hill change in sync? Yeah, pretty well. The correlation between them is pretty high.

      And that holds across most of the country. Temperature stations that are a few hundred miles apart still have very well-correlated anomalies, though I expect things like lakes and mountain ranges may tend to interfere with this.

      Also, in searching for data on this, I found this past post from Zeke:
      http://rankexploits.com/musings/2013/correlations-of-anomalies-over-distance/
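
      And in the same spirit as that linked post, a small synthetic sketch: two hypothetical stations sharing a regional signal differ by several degrees in absolute temperature yet show near-perfectly correlated anomalies.

        import numpy as np

        rng = np.random.default_rng(1)
        months = 240
        regional = np.cumsum(rng.normal(0, 0.3, months))        # shared signal
        hilltop = 10.0 + regional + rng.normal(0, 0.5, months)  # cooler site
        valley = 14.0 + regional + rng.normal(0, 0.5, months)   # warmer site

        a1 = hilltop - hilltop.mean()  # each station's own-baseline anomaly
        a2 = valley - valley.mean()
        print("mean temps:", hilltop.mean(), valley.mean())       # differ by ~4 C
        print("anomaly correlation:", np.corrcoef(a1, a2)[0, 1])  # close to 1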

      • Windchaser,

        I am not sure using anomalies as proxies for temperature simplifies the matter. I understand they give results more in line with what the consensus measuring them expects, but I think the prospect of determining the actual average temperature in one given location is more complex than plotting anomalies.

        In prior blog threads some time ago I asked whether there was any experimentation to determine the accuracy and precision of anomalies as a proxy for temperature. Did anyone ever take actual hourly temperature readings at a range of sites over a period of time and compare them to the average inferred from the anomalies? How do you know how accurate the long-term temp trend against which you are calculating the anomaly is?

        At any rate, my questions are not about what is the best way to determine temperature trends. My questions are about whether any of the methods give the accuracy and precision claimed by those reporting them.

      • Windchasers

        GaryM:

        I am not sure using anomalies as proxies for temperature simplifies the matter. I understand they give results more in line with what the consensus measuring them expects

        We don’t use anomalies as a proxy for temperature. Rather, we use the anomalies to show how the temperature has changed.

        It’s actually somewhat difficult to define the average temperature of a region, because of things like the changes in temperature with elevation over even short distances. But it’s a bit easier to define the average anomaly, and besides, this shows us what we’re concerned with – how the temperature changes over time.

        How do you know how accurate the long term temp trend against which you are calculating the anomaly is?

        Whoa, anomalies aren’t calculated against long-term trends, but against a baseline temperature.

        If you subtract some temperature X from the temperature record, you get the anomaly: the temperature relative to some baseline temperature X. If you subtract the linear trend, though, you get something else entirely: the detrended data, which shows you how the temperature diverges from the trend. It’s not really that useful in comparison.
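
        A short sketch of that distinction with made-up numbers: subtracting a baseline shifts the series and preserves the trend, while subtracting a fitted trend removes it.

          import numpy as np

          t = np.arange(100)
          temps = 15.0 + 0.01 * t + np.random.default_rng(2).normal(0, 0.2, 100)

          baseline = temps[:30].mean()   # reference-period mean (e.g. first 30 steps)
          anomaly = temps - baseline     # shifted down, trend preserved

          slope, intercept = np.polyfit(t, temps, 1)
          detrended = temps - (slope * t + intercept)  # trend removed entirely

          print("anomaly trend:  ", np.polyfit(t, anomaly, 1)[0])    # ~0.01/step
          print("detrended trend:", np.polyfit(t, detrended, 1)[0])  # ~0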

      • Windchasers,

        The results are reported as “average temperature” according to figure 1 in the main post. The plot shows a trend, but it is a trend of temperatures.

        As for what is used to determine an anomaly, I know Mosher hates it when people link to those dang internet sites, but:

        “The term temperature anomaly means a departure from a reference value or long-term average.”

        http://www.ncdc.noaa.gov/monitoring-references/faq/anomalies.php

        The underlying question is not whether anomalies are consistent over large distances, but whether average temperatures are, because that is what is being sold to the public as the basis for public policy. That is why I refer to anomalies as a proxy for average temperatures, and why I ask if there is any research confirming their accuracy and precision as proxies.

        I have read the arguments behind their use, but I have not seen any testing to verify them. Not saying there isn’t any, just that I haven’t seen it. (And I don’t mean statistical comparisons to model generated data, I mean comparisons to actual temp measurements.)

      • Windchasers

        The results are reported as “average temperature” according to figure 1 in the main post. The plot shows a trend, but it is trend of temperatures.

        Aye. It’s the spatial average, and it shows a temporal trend, with temporal anomalies. (Note the y-axis label.)

        “The term temperature anomaly means a departure from a reference value or long-term average.”

        Aye. So you get the anomaly by subtracting a reference value. I just want to distinguish that from subtracting the trend.

        The underlying question is not whether anomalies are consistent over large distances, but whether average temperatures are,

        The temporal averages definitely aren’t consistent over long distances. The average yearly temperature in Winnipeg is pretty different from the average yearly temperature in Miami.

        The spatial averages? Well, they’re spatial averages, so it doesn’t make sense to talk about how they vary in space. The number is derived for an entire region. The average US temperature is the same no matter where you go. You could be in Moscow, and the average US temperature would still be the same.

        I feel like I’m missing your point. The anomalies aren’t proxies for temperature in the same way that, say, the tree ring data is. The anomalies are just the temperature data with some number subtracted from the entire temporal series. Calculating the anomaly just shifts the entire temperature chart up or down, and doesn’t change how the temperature changes with time.

    • Steven Mosher

      Gary.

      Are your questions about NOAA or BEST?

      If you can be specific, then Zeke or I can answer or get an answer.

      • Steven,

        Either one. I would be interested in the answers as to any data set.

      • Steven Mosher

        I will answer on BEST

        Question 1: Are the numbers above correct, or even close, as far area covered per station?

        The area “covered” by a station varies widely across the surface of the earth.
        In some places the stations are dense (say, on average 20 km apart); in other places (the South Pole) they are sparsely sampled.

        Question 2: Don’t urban stations require more and broader adjustments, not just for UHI, but in general?

        The UHI effect (ON AVERAGE) is much smaller than people imagine.
        Part of the reason is that the media and literature have focused on UHI max rather than UHI mean.
        In terms of adjustments, I haven’t looked at the number of adjustments for urban versus rural. More generally I just eliminate all urban stations and look for a difference.

        Question 3: Are urban stations weighted differently because of their proportionally greater number in determining trends?

        “Urban stations” is a misnomer. There isn’t a clear or validated way of categorizing urban versus rural. Several methods have been tried.
        Rather than a categorical scale I prefer a continuous scale.
        For example, rather than saying, as Hansen does, that urban = population greater than X and rural = population less than X, it makes more sense to just use population as a continuous variable.
        So there isn’t any specific weighting applied on the basis of “urbanity”.
        What we did was A/B testing: two piles, one urban, the other rural.
        No difference. (A toy sketch of this test follows below, after Question 4.)

        Question 4: Are stations not “within a few kilometers” of several others, ever similarly adjusted, and if so, how?

        There isn’t an adjustment.
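
        A stylized sketch of the A/B test mentioned under Question 3, with synthetic stations; the population split and trend numbers are invented for illustration and are not BEST’s actual procedure:

          import numpy as np

          rng = np.random.default_rng(3)
          n, years = 200, 50
          t = np.arange(years)
          population = rng.lognormal(8, 2, n)  # hypothetical station metadata
          # Synthetic truth: the same 0.02 deg/yr trend everywhere
          series = 10 + 0.02 * t + rng.normal(0, 0.5, (n, years))

          trends = np.array([np.polyfit(t, s, 1)[0] for s in series])
          urban = population > np.median(population)  # two piles, by population

          print("urban pile mean trend:", trends[urban].mean())
          print("rural pile mean trend:", trends[~urban].mean())
          # A material difference between the piles would flag urban contamination.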

  77. GaryM,

    To answer some of your questions, the homogenization process uses the full co-op network (~8,000 total stations) rather than just the USHCN stations (1218 total) to detect breakpoints. It also only covers the conterminous U.S. (not Alaska and Hawaii). For all but the very early part of the record (pre-1930s), there are multiple nearby analogues for pretty much every station.

    • Zeke Hausfather,

      Thanks for the answer. Even using 10,000 stations, that seems like it would be an average of about 900 square kilometers per station.
      I still don’t see how each station can have several others within a few kilometers, other than urban stations.

      And are you saying that stations that are not suitable for inclusion in the reported average are used to homogenize those that are?

      • Well, USHCN is a subset of the larger co-op network, where the primary criterion for inclusion is simply a long continuous record. There is nothing wrong with the rest of the co-op network per se; most of the stations just have much shorter records. Still quite useful for breakpoint detection.

        “A few kilometers” might be putting it a bit too strongly, but there are generally many stations within, say, 50 kilometers of any given station. You don’t really expect long-term climate changes to manifest as localized effects unless they are related to some change in the local conditions. In that case, they are best not used to create a regional average, as you’d end up overweighting some localized change, be it due to vegetation change, instrument change, station moves, etc.
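
        As a rough illustration of how neighbor comparisons expose localized breaks, here is a bare-bones synthetic sketch; real pairwise homogenization (e.g., NCDC’s PHA) uses significance tests and many neighbors, not this single scan:

          import numpy as np

          rng = np.random.default_rng(4)
          months = 360
          regional = np.cumsum(rng.normal(0, 0.2, months))  # shared climate signal
          neighbor = regional + rng.normal(0, 0.3, months)
          station = regional + rng.normal(0, 0.3, months)
          station[200:] -= 0.8            # simulated station move at month 200

          diff = station - neighbor       # the regional signal cancels out
          # Scan candidate breakpoints for the largest shift in the mean difference
          scores = [abs(diff[:k].mean() - diff[k:].mean())
                    for k in range(24, months - 24)]
          k_hat = int(np.argmax(scores)) + 24
          step = diff[k_hat:].mean() - diff[:k_hat].mean()
          print("detected breakpoint near month", k_hat, "step", round(step, 2))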

    • Is there a unique signature of thermometer type, say (Tmax-Tmin)/(Tmax+Tmin), that independently shows when transitions occurred?
      I don’t know how you can make these adjustments to individual stations unless you know when the transitions occurred.

  78. Thanks Zeke Hausfather. I found the post valuable. I paused at this though:
    “…If one station is warming rapidly over a period of a decade a few kilometers from a number of stations that are cooling over the same period, the warming station is likely responding to localized effects (instrument changes, station moves, microsite changes, etc.) rather than a real climate signal.”

    I see it’s an attempt to find errors. However, what if the suspect station is a boundary one, or affected by geography? Coastal compared to inland, high elevation compared to not so high, river valley compared to flat plain, many lakes versus few, forest land versus farmland, or on the Canadian border. Figure 7 shows a balanced result and is good for illustrating homogenization.

    A station location may react differently to non-error changing conditions and that’s where it gets interesting. The rich variety of the system meets the average and we’re after the real climate signal.

    • Steven Mosher

      “I see it’s an attempt to find errors. However, what if the suspect station is a boundary one, or affected by geography? Coastal compared to inland, high elevation compared to not so high, river valley compared to flat plain, many lakes versus few, forest land versus farmland, or on the Canadian border.”

      Very good question.

      In some approaches the procedure that finds the errors is sensitive to
      geographical differences.

      I can speak to Berkeley earth

      A) Coastal compared to inland: this is a potential issue. We are
      working on an improvement; however, the cases where it could cause
      a problem are small. The biggest effect of being by the coast
      is a suppression of variance. The effect drops off exponentially and is
      gone by about 50 km or so.
      B) High elevation compared to not so high: fully accounted for.

      C) River valleys, etc.: the more important geomorphic types to be concerned
      about are mountain valleys and cold-drainage areas. It’s a nasty problem,
      as the DEM required is huge. Luckily these areas are small and isolated,
      but users find them and complain to me.
      D) Lakes: I looked at this extensively and could not find any statistically meaningful effect. I know it’s there. haha. just can’t find it.
      E) Land type: I have the data to assess this. Nothing has jumped out,
      but the historical metadata is low-res (5-minute data).

  79. Zeke

    Thanks for the effort you have put into this. I will look forward to the next two articles so that the material in this post can be put in context.

    When you have completed all three would it be possible to then issue the series as one PDF suitably topped and tailed?
    Tonyb

  80. Trying to follow this narrative is like playing a game of intellectual “whack-a-mole.”
    I am willing to accept the Occam’s Razor claim and not impute malicious motives. However, I know of no other field (e.g., medicine or flight test) that would allow the use of infilled, estimated, or “zombie” data, particularly without either identifying it as such or putting error bands around it.
    Adjusting data more than once indicates there is little confidence in the adjustments that were originally made. If you no longer believe in the original adjustments, why should I have any confidence in the latest adjustments?
    My 40 years’ experience in data acquisition and analysis leads me to believe that “best practices” are not being used.
    Can the defenders find nothing wrong with the methods and processes being used?
    Perhaps the strident defenders of the status quo should also accept an Occam’s Razor claim and not impute malicious motive to those not satisfied with the explanations they are being given.

    • Steven Mosher

      Up until recently I would have agreed that best practices were not being used.

      However, the testing regime currently in use and the papers being published on the process have made me change my mind.

      See Zeke’s forthcoming paper.

      • Do you believe that those “best practices” in the climate arena would qualify as “best practice” in the medical or flight test world?

      • Rud Istvan

        I follow all the math and statistics arguments. I cannot fault Zeke’s logic. But it is still possible to challenge underlying assumptions, which you do not, since the outcomes (more than just USHCN) do not pass common-sense tests. For a graphic example, see Joe D’Aleo’s Maine history from NCDC Drd964x 2013 compared to the newly revised nClimDiv 2014, posted at WUWT per this kerfuffle last week. HUH?!? Both charts are officially NOAA labeled, and less than one year apart. Maine went from no AGW to lots of AGW on ‘official’ government-provided charts.
        For a closer-to-home example, BEST station 166900 was changed from basically no trend raw to modest ‘expected’ warming. Your only reply has been to distinguish actuals from BEST ‘expectations’ and not to explain why 26 months of cold extremes were rejected by your BEST algorithm (according to BEST’s own information) because they did not agree with your modeled ‘regional expectation’. To repeat again, 166900 is the US Amundsen-Scott research station established in 1957. The most expensive, scientifically important station in the world. Your algorithm rejected 26 months of its reported temps because they did not agree with your model ‘expectation’. Those words are off your website. Now, the nearest equivalent Antarctic station to compare is US McMurdo, roughly 1300 km away and roughly 2700 meters lower, along the Antarctic coastline where it can be resupplied by icebreaking ships. Your notion of a region is flawed, as is your BEST process. It only takes one example to falsify an algorithm. There it is. Deal with it, preferably in less than cryptic brush-off ‘read the literature, all of it’ style. Because I have read it all. And you still fail.

      • Matthew R Marler

        Rud Istvan: But it is still possible to challenge underlying assumptions, which you do not, since the outcomes (more than just USHCN) do not pass common-sense tests. For a graphic example, see Joe D’Aleo’s Maine history from NCDC Drd964x 2013 compared to the newly revised nClimDiv 2014, posted at WUWT per this kerfuffle last week. HUH?!?

        It’s the “expected value” of the conditional distribution of the true values for that locality, given all of the data, evidence about each thermometer’s bias and random variation, and testable assumptions about the site-to-site random variation (heteroskedastic Gaussian, most likely). Nothing in statistics is common-sensical, because neither the generalities of nature nor the random variation are closely matched by our common sense. The elements of Bayesian inference are explained in the text by Francisco Samaniego called “A Comparison of Frequentist and Bayesian Methods of Estimation”, and in the text by Rob Kass, Uri Eden, and Emery Brown called “Analysis of Neural Data” (which has a larger exposition of analyses of time series records).

        In short, the Bayesian posterior mean has the smallest achievable expected squared error ([true value - estimate]^2) of all estimators, the exact improvement depending on how much data there are, how accurate the individual records are, and how closely the distributions of the random components are approximated by the mathematical assumptions.

        D’Aleo’s selection of a seemingly bad outcome expresses the same naive view that lots of people have when viewing statistics: a treatment that improves almost everyone’s symptoms will seem to have made some selected person worse. Does the drug work as desired or not? Well, the existence of contrary cases shows that there is more to be learned, not that the statistical analysis was wrong or that the drug does not work. Same here: the Bayesian hierarchical modeling improves the estimate of the nationwide trend and of almost every local trend. That some trends do not seem to have been improved is, in this case, evidence that there is more to be learned, probably something about that locale.
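
        That squared-error point in miniature, for the simplest normal-normal case with invented numbers (a hypothetical regional prior and measurement noise):

          import numpy as np

          rng = np.random.default_rng(5)
          # Hypothetical local values drawn from a regional prior N(0.5, 0.3^2)
          true_vals = rng.normal(0.5, 0.3, 100_000)
          obs = true_vals + rng.normal(0, 0.4, true_vals.size)  # noisy measurements

          # Normal-normal posterior mean: precision-weighted shrinkage to the prior
          prior_mu, prior_var, obs_var = 0.5, 0.3 ** 2, 0.4 ** 2
          w = prior_var / (prior_var + obs_var)
          posterior = prior_mu + w * (obs - prior_mu)

          print("MSE of raw obs:       ", np.mean((obs - true_vals) ** 2))       # ~0.16
          print("MSE of posterior mean:", np.mean((posterior - true_vals) ** 2)) # ~0.06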

        You asserted that someone (Mosher?) did not challenge the underlying assumptions. Actually, the BEST team have reported lots of challenges.

      • Matthew R Marler

        Rud Istvan: It only takes one example to falsify an algorithm.

        That is false. The most that one example can show is that the knowledge is not perfectly reliable, not that the algorithm used doesn’t achieve the best attainable estimate.

        If you have substantial evidence that the uncorrected record of a locale is exceptionally reliable, you can change the algorithm by fiat: reduce the size of the variance estimate of that site. But you need substantial evidence. Merely declaring yourself satisfied with the uncorrected version isn’t sufficient. I should note that if the error variance in one locale is sufficiently close to 0, the algorithm will not change its value by much: the posterior mean will be nearly exactly equal to the data.

        More detail can be found in: Kass, R.E. and Steffey, D. (1989). Approximate Bayesian inference in conditionally independent hierarchical models (parametric empirical Bayes models). Journal of the American Statistical Association, 84: 717-726.

        Plus, you can look up “Kriging” in many books that cover spatial statistics or multivariate time series.

      • Steven Mosher

        “It only takes one example to falsify an algorithm. There it is. ”

        ah no. The algorithm is a prediction with error bounds. With 40,000 stations and millions of data points, we fully expect a bunch of them, but not too many, to be outside the error bounds.

        simple stats rud.
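
        That expectation is easy to quantify for a nominal 95% bound:

          import math

          n, p = 40_000, 0.05  # stations, out-of-bounds rate for a 95% interval
          mean = n * p
          sd = math.sqrt(n * p * (1 - p))
          print(f"expect about {mean:.0f} +/- {sd:.0f} stations outside the bounds")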

      • Steven Mosher

        “PMHinSC | July 7, 2014 at 5:43 pm |
        Do you believe that those “best practices” in the climate arena would qualify as “best practice” in the medical or flight test world?”

        No. Fields develop best practices over time based on interaction with customers.

        different fields, different customers, different practice.

        of course you can learn things from others

  81. As temperatures have warmed, the past has cooled. Temperatures have been level for a long time, and may be beginning to cool. So, if current temperatures cool, will the past warm back up? Just curious.

  82. All very interesting, but it seems with so much data and so many adjustments, there still isn’t much credibility in CAGW science, and therefore no need for the war on CO2 or wasting trillions of dollars.

  83. RobertInAz

    Has anyone done a study of the independent proxies that would validate the US temperature record? I read a lot of anecdotes. Where have growing seasons changed? Where has the agricultural mix changed?

    I would think there would be ample independent confirmation the current temperature is significantly warmer than the early 20th century.

  84. Alexej Buergin

    A thermometer shows Tmax and Tmin during the last period of observation, usually 24 hours. It is an easy job to determine which number belongs to which day. When TOBS is changed, an additional effort may be needed on the day of the change. But afterwards it is just the same as before.
    So where is the problem?

    • Steven Mosher

      1. read the papers
      2. get some hourly data and study the problem
      3. wait for the rest of the series.

      Not that hard.

    • Windchasers

      When TOBS is changed, an additional effort may be needed on the day of change. But afterwards it is just the same as before.

      There are two types of TOB.

      1) Changing the time of observation. Can result in an extra half-day or so of temperatures being ascribed to the wrong day and month, which can be particularly significant during spring and autumn, when temps are changing the fastest.

      2) Bias from double-counting a Tmax or Tmin. E.g., if you record the temperature at 4pm today and it’s 100 degrees, then reset the thermometer, the Tmax on the thermometer will still be set to 100 degrees. Say you come out the following day and the high is only 90 degrees; the thermometer will still show 100 degrees as Tmax, carried over from the previous day. The Tmin will be unaffected. You’ve double-counted the max temperature.

      Recording in the afternoon makes for double-counting hot Tmaxes, while recording in the morning makes for double-counting cold Tmins. Reading halfway in between is best (around noon or midnight).
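
      A toy simulation of this double-counting, using a synthetic hourly series (a sine-wave diurnal cycle plus one random offset per day; all numbers invented):

        import numpy as np

        rng = np.random.default_rng(6)
        days = 31
        hours = np.arange(24)
        # Diurnal cycle peaking mid-afternoon, plus one noise offset per day
        daily_cycle = 10 * np.sin((hours - 9) * np.pi / 12)
        temps = 15 + daily_cycle + rng.normal(0, 3, (days, 1))  # shape (31, 24)

        def mean_tmax(reset_hour):
            # Each observation "day" runs from one reset to the next
            flat = temps.ravel()
            return np.mean([flat[s:s + 24].max()
                            for s in range(reset_hour, flat.size - 24, 24)])

        true_tmax = temps.max(axis=1).mean()  # calendar-day maxima
        print("true mean Tmax:     ", round(true_tmax, 2))
        print("4pm-reset mean Tmax:", round(mean_tmax(16), 2))  # runs warm
        print("midnight-reset Tmax:", round(mean_tmax(0), 2))   # ~unbiased

      The afternoon-reset mean runs warm because right after the reset the thermometer still holds the afternoon’s heat, which can exceed the next (cooler) day’s maximum; the midnight window coincides with the calendar day and is essentially unbiased.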

      • Alexej Buergin

        You are so right, and it is so obvious; surely the people doing the job in, say, the year 1900 were already aware of it.

      • Alexej Buergin

        And the easiest way to correct (or not to have) the problem would be to reset the maximum thermometer in the morning and the minimum thermometer in the evening; it needs two visits, though.
        Fahrenheit must have thought of that.

      • Steven Mosher

        windchaser has read the papers.

        you guys should listen to him.

        he is fluent in these matters.

  85. Zeke, there do appear to be times when adjustments seem to go too far, such as converting a cooling trend into a warming trend. Is this incompetence? Fraud? A berserk computer program? Can you justify this?

  86. Scott Basinger

    Great post, thanks for doing this Zeke.

  87. Having worked with many of the scientists in question, I can say with certainty that there is no grand conspiracy to artificially warm the earth.

    And, I can say, there is conspiracy in consensus. “Lamont made the same statement; you don’t use consensus if you have a proof.” ~Richard Lindzen

    • A fan of *MORE* discourse

      wagathon proclaims “There is conspiracy in consensus.”

      One thing is for *SURE*: scientists of all ages, genders, nationalities, and persuasions are united in wanting *NO* part of Wagathon’s Consensus-Conspiracy.

      Question: How many Climate Etc readers have ever visited wagathon’s web-site “evilincandescentbulb”? Yikes. There is abhorrent material there.

      Conclusion: Scientists and (rational) skeptics and voters alike utterly reject the anti-science, willfully ignorant, extreme-ideology consensus of “Planet Wagathon”.

      Fortunately.


  88. catweazle666

    that critical analysis should start out from a position of assuming good faith and with an understanding of what exactly has been done.

    Assuming?

    I thought you lot were supposed to be scientists.

    • OK, I’ll bite. Assume the contrary: that every word Zeke says, and all the other climate scientists as well, is a lie. And why stop there? Why isn’t Dr. Curry a liar, too? Why assume her good faith? Heck, why assume that your ISP hasn’t altered the contents of your incoming and outgoing messages? How can you know that the words I am responding to are even the ones you wrote?

      You can assume bad faith and conspiracy theories and drive yourself nuts (if you aren’t already there), or you can assume that a conspiracy as vast as the one that would be needed to cook the climate science books would have overtly manifested itself by now.

      Your choice. Lewandowsky awaits.

    • If I am thinking of buying shares in a company, the last thing that I want to hear is that the auditors started from an assumption of good faith.

      Auditors should start with no assumptions about “faith”, good or bad.

      • Steven Mosher

        I think what Zeke means is clear.

        Skeptics start with a belief that adjustments DEFINED IN 1986 are somehow suspect because of Climategate.

        Good faith means: assume no evil intention.

        You have no evidence that these guys, these PARTICULAR GUYS,
        had evil intentions.

        So look at the work, not who did it.

        good faith

  89. The essence of temperature adjustment is to create a model of the temperature data. There is nothing inherently wrong with doing this IF the model is validated against actual temperature measurements. It should be standard practice to regularly collect samples of actual temperature data and compare them to the estimated values. Clearly, if there are errors, one must question the accuracy of the model. At a minimum these discrepancies should be reported so there is full awareness of them. Science is the investigation of what is true. It is not, and should not be, a process of fabricating what one wants to justify.

    • Steven Mosher

      To validate estimations, we hold out samples.
      To test the robustness of correction algorithms, they are tested against synthetic series in a double-blind fashion.
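
      A minimal sketch of the hold-out idea, with a synthetic network and a deliberately crude nearest-neighbor predictor; this is not the actual NCDC or Berkeley Earth code:

        import numpy as np

        rng = np.random.default_rng(7)
        n, months = 100, 120
        lon = rng.uniform(-100, -80, n)   # hypothetical station coordinates
        lat = rng.uniform(30, 45, n)
        regional = np.cumsum(rng.normal(0, 0.2, months))
        series = regional + rng.normal(0, 0.4, (n, months))

        held_out = rng.choice(n, 20, replace=False)     # withheld stations
        train = np.setdiff1d(np.arange(n), held_out)

        rmses = []
        for i in held_out:
            d = np.hypot(lon[train] - lon[i], lat[train] - lat[i])
            nearest = train[np.argsort(d)[:5]]          # 5 closest kept stations
            pred = series[nearest].mean(axis=0)         # crude neighbor prediction
            rmses.append(np.sqrt(np.mean((pred - series[i]) ** 2)))

        print("mean out-of-sample RMSE:", np.mean(rmses))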

      • Why not test estimations against actual data? I asked a similar question above.

        If there is a change in equipment, leave the old equipment in place for a year at, say, 100 sites. Keep series of measurements from both sets of equipment. Have one researcher prepare the correction based on the difference between the two types of equipment at 50 sites. Use that correction on the data of the old equipment at the other 50 sites, and then compare it to the actual measurements there from the new equipment. If they match, you know you can have real confidence in your correction. The same could be done with time of observation changes, station moves, etc.

        The real, essential problem I have with temperature reports, GCMs, paleo-climate and much of the rest of climate science is that they rely on statistics for validation, rather than comparison to actual data. If you once test a proposition, correction or model against actual data, and it proves accurate and precise enough, it would inspire a lot more confidence when you use it elsewhere.

        (Actually, I should correct myself in one instance. We are now able to test the consensus’ GCMs against 17 years of the consensus’ temperature data. And the results are not impressive. That divergence is starting to make the one hidden by Mike’s Nature trick look like a hiccup policywise.)

        But statistics is full of assumptions, Bayesian priors, estimated trends and the like. Such a process might well provide results useful for some purposes. But climate science is being used to push for massive public policy initiatives with enormous costs and negative economic impacts.

        “Trust us, we compared our results against our synthetic data,” is not good enough under those circumstances. The policy question isn’t whether your corrections and algorithms are the best available. It’s whether they are as precise as you claim, for the purpose for which you offer them.

      • Windchasers

        GaryM,

        Why not test estimations against actual data? … The same could be done with time of observation changes, station moves, etc.

        Essentially, that’s what’s being done (at least for TOB; not sure about the others). Once we have hourly readings, we can actually verify what the TOB is, rather than just estimating it. Then we apply it to the old data.

        see a different comment on this thread:
        https://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-605662

        When it comes to station moves, it’s generally sufficient just to show that the anomalies are well-correlated. If the temperature at the bottom of a hill is f(t) and the temperature at the top of the hill is f(t) + X, i.e., a correlation of 1, either one will do just fine for use in constructing the national anomaly.

      If you only had those two data points, and you move the station sharply, with no overlap, yeah, you’re going to have problems correlating their anomalies. But IIUC, there’s usually another station nearby with a contiguous record, and you can use it to cross-check the hill and valley stations against each other. We’ve over-sampled the US for temperature measurements, so this isn’t a big problem.

      • Steven Mosher

        “If there is a change in equipment, leave the old equipment in place for a year at say 100 sites. Keep series of measurements from both sets of equipment. ”

        That was done for the MMTS change. It’s also being done for CRN.

        Also, you don’t understand what holding out data means,

        and you don’t understand why you have to test with synthetic data AS WELL.

  90. Zeke,

    “This post will focus primarily on NCDC’s adjustments, as they are the official government agency tasked with determining U.S. (and global) temperatures. The figure below shows the four major adjustments (including quality control) performed on USHCN data, and their respective effect on the resulting mean temperatures.”

    Thank you for effort. This is a very helpful post.

    I do have one ‘curiosity’ question at this point: considering the tasked responsibility quoted here and considering the process as outlined in your Figure 4, does the NCDC have a formal quality assurance program for this task? If there is a (USHCN) quality assurance program, is there a site or repository where that material is brought together? In a sensitive, high-profile endeavor such as USHCN I would expect QA to be very visible, for example, touting the quality implementation of an appropriate and structured approach to getting what is needed. ;o) [I am aware of USHCN ORNL/CDIAC-118, NGP-070, and TD-3200. While they certainly provide some of the information one expects to find in a QA program, they do not constitute one, and leave open/evoke questions about QA, e.g., extent, content, and frequency of human review, signatures, external audits, QA program documentation, etc.]

    Thanks again for your effort.

    • All I am looking for is a simple yes or no: is there a formal USHCN quality assurance program in place?

    • Moot question: times are changing. Really excellent recent postings, Zeke and Mosh. Made me think, search, and read for a couple of days on a topic I’ve been happy to ignore (geostats are much more interesting). USHCN is a convenient sandbox dataset with which to hone tools and to explore, but it has its limits.

  91. Zeke,

    Thanks for the detailed analysis!

    I had to stop reading at Figure 8, though. You talk about the Tmax adjustment pre-1980, and the Tmin adjustment for the entire graph, but don’t really mention the apparent +0.4 degree adjustment from 1980 to present. I’d rather read a couple of paragraphs on that than two more parts to the series.

    That seems to address the entire debate in a single graph on a small part of the overall topic: a Tmin adjustment dip in the 1940s would lower the past’s average, while a Tmax adjustment that zooms upwards from the 1980s would obviously raise the present’s average substantially. A conspiracist’s dream in a single graph, and so few words spent explaining it.

    • Hi Wayne,

      The big post-1980s adjustment in maximum temperatures in figure 8 is mostly a correction for the ~0.5 C cooling bias introduced by moving from LiG to MMTS or ASOS thermometers. As I mentioned in the article:

      “While fairly symmetric in aggregate, there are distinct temporal patterns in the PHA adjustments. The single largest of these are positive adjustments in maximum temperatures to account for transitions from LiG instruments to MMTS and ASOS instruments in the 1980s, 1990s, and 2000s.”
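
      For readers wondering what such a correction looks like mechanically, a minimal sketch; the 0.5 C offset and the changepoint index are illustrative stand-ins for values that would come from side-by-side instrument comparisons and station metadata:

        import numpy as np

        tmax = np.array([20.1, 20.3, 20.2, 19.7, 19.6, 19.8])  # toy annual means
        changepoint = 3     # index where the new sensor was installed (assumed known)
        cooling_bias = 0.5  # offset from a hypothetical side-by-side comparison

        adjusted = tmax.copy()
        adjusted[changepoint:] += cooling_bias  # put both segments on one basis
        # (Equivalently, the earlier segment could be lowered; the trend is the same.)
        print(adjusted)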

      • Zeke, thanks, I hadn’t focused on that. I did some reading on NOAA’s site, and it seems to indicate that: 1) many of the discrepancies were associated with snow cover on the ground, 2) lows were raised almost as much as highs were lowered (something not indicated in Figure 8), and 3) the highs were lowered by less than 0.5 C.

        A more detailed discussion of the whole LiG to MMTS/ASOS move correction still seems to be the crux of the matter. If lows truly were raised almost as much as highs were lowered, the average would be nearly the same, or perhaps more like a +0.1 C correction.

  92. I don’t believe in global or national temperatures – raw, cooked or improved – and don’t want to be shown them, especially in graphic format. They’re like Dr Johnson’s walking dog, interesting because attempted, not because they serve.

    Graphs can be handy but they are naturally rigged for facile belief. Give me what you have, and, in the case of Australia, don’t connect and average together massively diverse climatic zones just because of current political boundaries. And don’t assume about huge areas of ice or desert which had no measurements because they had no people to measure. I find all that particularly silly.

    In the language we share, just tell me what you know, however shabby, however poor. And tell me what you don’t know, however vast.

    • +1.

      The BoM’s mysterious and freshly minted “national temperature” metric, is a case in point.

      Doing some sort of calculation based on where weather stations were historically located, across an entire continent, is BS which is still in the air before hitting the ground.

    • “I don’t believe in global or national temperatures.”

      I don’t either, and for many of the same reasons, at least as far as the tenths of a degree precision claimed. I don’t think anyone can tell the average temperature of Illinois on a given day, let alone CONUS or the entire Earth. And I reject the notion that you can get a trend to within tenths of a degree when you can’t get your starting data points with that precision.

      But I appreciate the efforts of Zeke Hausfather to explain why he disagrees. Mosher too when he wakes up on the right side of the bed. I’ve been wrong about enough important things in my life that I always am open to the possibility of being shown I am wrong again. It’s not happening here so far to my mind, but I’m open to listening.

      The fact that progressives, including the warmists, are incapable of critical analysis of their own positions is no reason for us to follow suit. So as long as they continue arguing for their knowledge of “global average temperature,” I’ll keep listening and asking questions.

      I disagree with the claims made about average temperatures. But I find that engaging on the issue and listening to the other side is the best way of addressing it. It helps me understand my opponents’ position. If what he says does not change my mind, I am better equipped to argue against him in the future. If he does, well the benefits of that are obvious, I can simply change my position. Either way, it helps to understand the other side.

      This is how I work as a litigator as well. I try to understand, make my opponent’s arguments, and criticize my own positions as I think he should. It is a practice that has resulted in the settlement of many cases; and I can’t remember the last time I was surprised by an opponent in court. I even oftentimes find myself disappointed that the other side did not make the arguments, or present the evidence, that I would have if I had been representing them. It is a useful practice.

      (This is not about the dishonesty of a Gleick or Mann, or the data sets Joanna is referring to, but rather the subject of this thread, and the efforts of Zeke Hausfather in particular.)

      • I guess what I’m saying is that numbers are pretty dumb on their own and of limited value at other times. A Great Australian Temperature is a pretty silly thing for reasons which should be screamingly obvious, but I guess it’s a harmless enough bit of fluff compared to other confections.

        Talk all you like about the terrors of ENSO, Eastern Australia’s deadliest year for heat was a La Nina year (1939) flanked by neutral years. In spite of assumptions about PDO our longest drought (though not our worst) occurred between the late 1950s and late 1960s. This does not mean that the work of Walker and Mantua is not valid or of great value. It just means that data is pretty useless unless you use your loaf while handling. Putting data in the hands of the mechanists and literalists has proven to be an intellectual catastrophe.

        Got one of those great lumps of meat called a human brain? Drop the joystick and use it.

      • Windchasers

        GaryM,
        I appreciate your process. That’s actual critical thinking, when you think through both your position and your opponents’, and find the flaws in each.

        And I reject the notion that you can get a trend to within tenths of a degree when you can’t get your starting data points with that precision.

        If you have a solid understanding of the distribution of the errors and you have enough data points, it’s actually pretty straightforward. Though of course, the greater the range on the errors, the less certain the trend will be. But the fact that we’re averaging over a large area helps quite a bit, reducing the error of the average substantially compared to the errors of the individual stations.
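
        To make the square-root-of-N effect concrete, here is a minimal Python sketch (my own toy construction, not anyone’s production pipeline; it assumes independent station errors, which real networks only approximate):

```python
# Toy demonstration: averaging many noisy stations shrinks the error of
# the network mean roughly as sigma / sqrt(N) (assuming independent errors).
import numpy as np

rng = np.random.default_rng(0)

true_anomaly = 0.5   # hypothetical regional anomaly, deg C
station_sd = 1.0     # assumed per-station measurement error, deg C

for n in (1, 10, 100, 1000):
    # 10,000 trials: average n noisy stations, look at the spread of the mean
    means = true_anomaly + rng.normal(0, station_sd, (10_000, n)).mean(axis=1)
    print(f"{n:5d} stations: sd of network mean = {means.std():.3f} "
          f"(theory: {station_sd / np.sqrt(n):.3f})")
```

        With spatially correlated errors the improvement is smaller than this ideal case, but the qualitative point stands.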

        I don’t think anyone can tell the average temperature of Illinois on a given day, let alone CONUS or the entire Earth.

        Finding the average temperature is a different, harder problem. But usually it’s not relevant, so it’s ignored (USHCN notwithstanding). If we mostly care about the trend – and we do – then we don’t need the average temperature.

        Re: average temperature, I’ll go back to my earlier example with a hill. Say you want to know the average temperature of a hilly square mile. And let’s say (for the sake of argument) that the temperature is perfectly correlated across this area – everywhere in this square mile, the temperatures move in lockstep up or down. What’s the average temperature?
        Where the elevation is higher, temperatures tend to be lower. And the type and amount of vegetation can change the temperature: shrub vs grass vs trees vs dirt. There are plenty of different sampling techniques you could apply to try to get the average temperature across all the terrain and vegetation changes, but suffice it to say that finding the average temperature is going to be a pain.

        But what about the anomaly? Because the temp is perfectly correlated within this area, you only need 1 measurement location to get the temporal anomaly. Considerably easier.

        The point is that getting the average surface temperature requires a lot more sampling, and requires accounting for local spatial changes (topo, vegetative, etc.) that getting the surface temperature anomaly does not.
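
        Here is the hill example as a minimal Python sketch (my own construction; the offsets and the perfect correlation are assumptions made purely for illustration):

```python
# Toy hill: 50 sites with different absolute temperatures (elevation,
# vegetation, ...) that all move in perfect lockstep over time.
import numpy as np

rng = np.random.default_rng(1)

n_sites, n_days = 50, 365
offsets = rng.uniform(-3.0, 3.0, n_sites)   # hypothetical per-site offsets, deg C
shared = 15 + 8 * np.sin(2 * np.pi * np.arange(n_days) / 365)  # common signal

temps = offsets[:, None] + shared[None, :]  # perfectly correlated sites

area_mean = temps.mean(axis=0)              # getting this right needs all 50 sites
one_station_anom = temps[0] - temps[0].mean()
area_anom = area_mean - area_mean.mean()

print(np.allclose(one_station_anom, area_anom))  # True: one site recovers the anomaly
print(temps[0].mean(), area_mean.mean())         # but the absolute means differ
```

        In reality correlation decays with distance, which is why real products still use many stations; the point is only that anomalies demand far less sampling than absolutes do.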

      • Windchasers,

        “And let’s say (for the sake of argument) that the temperature is perfectly correlated across this area – everywhere in this square mile, the temperatures move in lockstep up or down.”

        Average temperature for an area is hard, but average anomaly for the same area is easy? If I am reading you correctly.

        Your answer parallels what the NOAA site I linked to earlier says. (Item 7 in the list.)

        http://www.ncdc.noaa.gov/monitoring-references/faq/anomalies.php

        The fact that it is easier does not convince me it is more accurate. What you and the NOAA site both indicate is that my concern is correct: determining average temperature locally, let alone globally, is extremely difficult.

        I have a lot of difficulty accepting the argument that a trend in anomalies gives you the same result with more accuracy. Just as I have difficulty accepting the consensus argument that it is easier to more accurately predict temperatures 100 years out than 10.

        To calculate an anomaly, you need an average to start with. I don’t see how you avoid dealing with the difficulties in finding an average temperature, when calculation of your anomalies requires you to do so as a first step.

        If it were one station, I don’t suppose it would make much difference. Even if you got the initial average wrong, at least you would be comparing future data against a norm.

        But for numerous stations over a wide area, your initial average must be based on numerous assumptions about the average temp in the first place. And the average would certainly be different in different areas. Which brings us back to the same place we started at. If it is so difficult to determine average temperature for a single location, how is it “easier” to determine the appropriate average for a larger area to compute anomalies from that average?

        The fact that the statistics work out does not convince me that the process is accurate or precise. In fact, Mosher has made the statement in the past that it doesn’t matter if you subtract warming stations, or cooling stations, or stations based on seemingly any other factor. The trend in anomalies stays the same.

        To this layman, this sounds remarkably similar to the fact that Mann’s original model always gave a hockey stick, no matter what data was input.

        The primary problem is that the entire global warming movement is being sold based on telling people that the global average temperature of the Earth is increasing at a dangerous rate. And that this rate is detectable to within tenths of a degree per year, per decade, per century.

        You write “If we mostly care about the trend – and we do – then we don’t need the average temperature.” But average temperature is what is sold to the public. And average temperature is what you need to calculate anomalies, and therefore a trend in anomalies.

        There seem to be just too many assumptions in the whole process to claim that precision.

        I would have no problem if the public were told that “we estimate that the country’s average of interpolated, estimated, kriged, infilled anomalies is increasing by one tenth of a degree per decade,” because then it would be clear that there is a lot more than measurement of temperature going on. And the argument would properly be over the validity of the various assumptions, corrections and estimations. Just as is occurring in this thread.

        But that has not been the public debate. They are told simply that “the average temperature of the US has increased by x tenths of a degree per decade.” Or “this year’s average temperature is one tenth a degree higher than last year’s.” And anyone who dissents from the claims of precision is labelled a denier.

      • GaryM,
        “To calculate an anomaly, you need an average to start with.”
        I don’t know how they do it, but I determine a daily anomaly for each station on min/max temps. Once I do that, I don’t have to calculate an average till I aggregate my station list.

      • Windchasers

        GaryM,
        Average temperature for an area is hard, but average anomaly for the same area is easy? If I am reading you correctly.
        Aye, that’s right. Or at least, the temperature anomaly is easier. It varies less in space than the absolute temperature does.

        The fact that it is easier does not convince me it is more accurate.
        When I say that it’s “easier”, of course I mean that the accuracy is higher, the errors smaller and it’s easier to verify the accuracy. I don’t actually mean that the calculations are necessarily easier.

        Just as I have difficulty accepting the consensus argument that it is easier to more accurately predict temperatures 100 years out than 10.
        It’s not actually easier to predict temperatures 100 years out than 10 years out. That’s not right. It’d be better to say that the error bars on our predictions grow quite rapidly as you look past a few years, and then they settle down into a range bounded by the climatic conditions.
        So 10 years out is easier than 100 years out, though neither has great accuracy. IOW, both will have substantial error bars.

        Of course, it may be easier to predict the average temperature for 70-100 years from now, than it is to predict the exact temperature 10 years from today. But that’s an apples-and-oranges comparison; weather and climate. Over similar time periods and similar areas, shorter-term predictions will be better than long-term (though maybe not much better).

        To calculate an anomaly, you need an average to start with.
        Not as I understand it.
        If I were constructing the US temperature trend, I’d start by identifying the offset that gives the best anomaly correlation between nearby stations. This gives you the best estimate of the normal temperature difference between a pair of stations. Then you can use that to get the average, but you’ve already started by calculating the anomalies first. You have to.

        Why do it this way? Well, if any of your stations move/start/stop, you can’t just average together the temperature data. Going back to the example of a hill, let’s say a station moves from valley to hilltop, with a slight overlap in time, and both locations have a flat temperature trend, like this:
        Valley: 2 2 2 2 x x x
        Hilltop: x x x 4 4 4 4
        Then the average of the two is: 2 2 2 3 4 4 4. That’s not right. We specified that it was a flat trend at the start.

        If we get the mean-based anomalies before we average the stations, then we get:
        Valley anomaly: 0 0 0 0 x x x
        Hilltop anomaly: x x x 0 0 0 0
        Average anomaly: 0 0 0 0 0 0 0.
        Then you can average the offsets (2 and 4), add them back in, and get the actual average temperature: 3 3 3 3 3 3 3. This answer makes sense, at least.
        So stations being added or dropping out is one reason we start by working with the anomalies, not absolute temperatures.
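
        The same arithmetic as a short Python sketch (just the toy valley/hilltop numbers above, nothing from a real dataset):

```python
# Averaging raw temperatures across a station move invents a trend;
# averaging anomalies (then restoring the mean offsets) does not.
import numpy as np

valley  = np.array([2, 2, 2, 2, np.nan, np.nan, np.nan], dtype=float)
hilltop = np.array([np.nan, np.nan, np.nan, 4, 4, 4, 4], dtype=float)

naive = np.nanmean([valley, hilltop], axis=0)
print(naive)  # [2. 2. 2. 3. 4. 4. 4.] -- a spurious step; the trend was flat

v_anom = valley - np.nanmean(valley)        # zeros where the valley reports
h_anom = hilltop - np.nanmean(hilltop)      # zeros where the hilltop reports
mean_anom = np.nanmean([v_anom, h_anom], axis=0)
offset = np.mean([np.nanmean(valley), np.nanmean(hilltop)])  # (2 + 4) / 2
print(mean_anom + offset)  # [3. 3. 3. 3. 3. 3. 3.] -- flat, as specified
```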

        The primary problem is that the entire global warming movement is being sold based on telling people that the global average temperature of the Earth is increasing at a dangerous rate.
        Yep. And that’s the point – we care about how quickly the Earth is warming, not what its average temperature is.
        If you have a function plus a constant, f(x) + c, it increases at the same rate regardless of the value of the constant. Derivatives do not “care” about constant offsets. IOW, the rate of change calculated from the anomaly will be exactly the same as the rate of change calculated from the average.

        And anyone who dissents from the claims of precision is labelled a denier.
        Not that I’ve seen. Your reasoning and approach are what matter, more than the conclusions you reach.

        If someone hasn’t read the literature and didn’t know about the adjustments, then finds out about them, still puts no effort into understanding the science, but says the scientists are fraudsters and the adjustments are wrong, he may well get called a “denier”.
        If someone hears about the adjustments, reads up on them, studies the statistical techniques involved, and finds an error or a missed assumption in the adjustments, and this provides the basis of his skepticism, then I applaud him and thank him for his contribution to the science.

        The difference:
        The first person developed his opinions without sound information, shot his mouth off, and didn’t apply any critical thinking to test his own beliefs.
        The second person went and got educated, thought about the problem, and formed his beliefs on the basis of the best available data.

        I really don’t see too many people take the second approach. But man, those people are a lot more fun to argue with, since they approach the problem rationally and generally have some data behind whatever their beliefs. I learn a helluva lot more from them.

  93. stevefitzpatrick

    Zeke,
    Good post. You have more energy than I do.

  94. John B. Lomax

    Zeke Hausfather: Thank you very much for your excellent article (Understanding adjustments to temperature data) and the references to significant published articles providing more details. To the extent that I could understand most of it, it would appear to me that the adjustments have been well conceived, each having a specific goal to correct what is perceived as an error (not the desired/expected temperature).
    I do not understand the correction for time of measurement. I followed the procedures used and they appeared to achieve a result that met the analysts’ expectations. However, it would seem to me that a raw maximum or minimum temperature is, within the measurement accuracy of the equipment, by definition correct. It needs no adjustment. What is in error is the time of measurement. Even if the station has correctly reported when the readings were taken, we do not know when, in the previous 24 hours, those temperatures occurred. If we are looking over a century in time, does anyone really care which hour or day? Yes, an extreme measurement taken on January 1 could be reported as the extreme for that year when it was actually in the previous year; again do we care?
    Lastly, does everyone else understand that the Tavg probably has no sensible meaning? Sorry, I’m just an engineer.

    John B. Lomax

    • Hi John,

      Time of observation corrections are hard to intuitively understand, and involve essentially double-counting hot (or cold) days in the min or max temperature. The next post in this series (hopefully some time next week) will look at some in-depth examples, taking hourly data from the pristine Climate Reference Network and looking at how the daily and monthly means change based on the observation time.

  95. bit chilly

    many thanks to both zeke and steve for the replies here, this discussion has certainly opened my eyes to some of the issues involved in measuring something that to this layman initially seemed fairly straightforward.

    i will certainly think long and hard before commenting again (it may not alter the stupidity level of my post, but you will know i tried).

    i asked a question up thread that probably got lost in the discussion; now the thread appears to be calming down i will try it again with a slight difference. sorry if this does not make sense, 8 schools in 3 different countries and the attention span of a gnat do not consistent coherence make.

    if the time series was expanded by splitting it down the middle and placing a 500 year manufactured data set in the middle, with 1910 to 1960 data being the first 50 years and 1960 to 2010 the last 50 years, and with the infilled data averaging the mean of the current trend, would the resultant trend begin and end at the same levels after the homogenization, tobs and pha calculations?

    • Not really, because you’d have 500 years of random data in between that (if truly random) would have zero trend. What you are talking about, testing how the algorithms work with synthetic data, was done quite well in Williams et al 2012, which might be worth a read if you are interested: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/williams-etal2012.pdf
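
      For a flavor of what such testing looks like, here is a deliberately tiny Python sketch (my own construction, vastly simpler than the actual pairwise homogenization benchmarks in Williams et al. 2012): build series with a known trend, insert a known break, and check whether a crude difference-series test recovers it.

```python
# Synthetic benchmark in miniature: can a simple neighbor-difference test
# find a break we deliberately inserted?
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1900, 2011)
trend = 0.007 * (years - years[0])           # known "true" trend, deg C

target = trend + rng.normal(0, 0.2, years.size)
target[years >= 1985] -= 0.5                 # inserted break: MMTS-like cool shift
neighbor = trend + rng.normal(0, 0.2, years.size)

diff = target - neighbor                     # the shared climate signal cancels
# crude detection: the split that maximizes the shift in the difference mean
scores = [abs(diff[:i].mean() - diff[i:].mean())
          for i in range(10, diff.size - 10)]
print("break detected at:", years[10 + int(np.argmax(scores))])  # expect ~1985
```

      A real benchmark runs many such synthetic worlds, with breaks of both signs, and scores how symmetrically the algorithm removes them.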

      • thanks again for the reply zeke. that was an interesting read, a bit heavy going for a layman but informative nonetheless, particularly as to the scale of the task faced.

        initial points would be, in terms of tobs and instrument changes: surely in these cases what we are looking at is an absolute change only to the data at the point-in-time of the change. so for each station the raw numbers would change by the difference resulting from the tobs change and instrument change, but not the trend?

        whereas in the case of uhi, the trend would indeed change and manifest in an accelerated warming trend?

        i can understand the creation of analog cases, but from gcms? as the gcms appear to be following a far greater trend than observed, this suggests inputs that bear no relation to what happens in the real world. to be fair this point was addressed, but i did detect an underlying feeling in the paper’s conclusions that the “real” trend would be found to be closer to gcm output if all the issues could be resolved, which hints at confirmation bias at some level.

        whether significant or not would be for those at least capable of proper technical investigation of the methods used, and well beyond me. another small point of note, again to a layman: there are a lot of assumptions being made, understandable in the situation, but hard to reconcile with the apparent confidence levels shown.

        again i appreciate the time and effort required to respond to laymen as well as the more informed posters, very different to the position maintained by others who would do well to follow the example.

  96. WhyNotAdmitYouCannotControlTheClimate

    I stood on the Gulf coast as the hurricane sent bullets of rain against my face and the wind swept me off my feet, I dug in the sand with my fingers trying to hold my ground, I prayed “dear God save me from this monster”. And God answered ” Be of good cheer, I’ll just take a little CO2 out of this mess”, and he did and behold the seas were calm.

  97. So what we have in this thread is someone making a lot of comments defending a Just So Climate Story. All the adjustments are correct, so the squiggly line describes the climate in some meaningful way, and skeptics are just trying to change the subject, and he was against the adjustments before he was for them, and his objectivity is beyond question, and and and…

    Right.

    Andrew

    • michael hart

      Something like that, Andrew. I stopped reading and, out of curiosity, scrolled down to this point just to see if he’d ever shut up. After long enough of him diluting the points Zeke or others might be making, I imagine a few other readers gave up too.

    • Matthew R Marler

      Bad Andrew: So what we have in this thread is someone making a lot of comments defending a Just So Climate Story.

      Do you have a specific claim that it is a “Just So Climate Story”? Most specific criticisms have been adequately addressed many times (in fact, all specific criticisms that I have read so far), leaving nothing but a sort of anti-intellectual residue of bias of some kind.

      • “Do you have a specific claim that it is a “Just So Climate Story”?

        Sure. We have BEST Climate Product Team Spokespeople who claim they can reduce understanding of the history of earth’s complex climates into a squiggly line drawing… of course the only way to do it is complicated and full of assumptions, after-the-fact adjustments and exclusion of adverse data, and the only way it can be done is the way they do it.

        Right.

        Andrew

  98. I look forward to the future TOBS entry.

    Based on my understanding of the issue, past years have been cooled by making negative adjustments to Tmax while modern years have been warmed by making positive adjustments to Tmin.

    Past Tmin should be fine without TOBS adjustments, and modern Tmax should be fine without TOBS adjustments.

    So, as a sort of sanity check, perhaps you could produce a comparison for pre-1980 data between TOBS adjusted average temps and Tmin (unadjusted) temps. And then you could produce another comparison between post-1990 TOBS adjusted final data and modern Tmax (unadjusted) temps.
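
    Sketched in Python, that comparison might look like the following (a minimal sketch only; raw and tobs stand for hypothetical per-year DataFrames of unadjusted and TOBS-adjusted CONUS averages with 'tmin'/'tmax' columns, and those names and that layout are my inventions, not USHCN’s actual file format):

```python
import numpy as np
import pandas as pd

def trend_per_decade(s: pd.Series) -> float:
    """Least-squares slope of an annual series, in degrees per decade."""
    return np.polyfit(s.index.to_numpy(dtype=float), s.to_numpy(), 1)[0] * 10.0

def tobs_sanity_check(raw: pd.DataFrame, tobs: pd.DataFrame) -> dict:
    """The check proposed above: compare adjusted averages against the raw
    extreme that should be insensitive to observation time in each era."""
    return {
        "pre-1980 TOBS-adjusted Tavg":
            trend_per_decade(tobs.loc[:1979, ["tmin", "tmax"]].mean(axis=1)),
        "pre-1980 unadjusted Tmin":
            trend_per_decade(raw.loc[:1979, "tmin"]),
        "post-1990 TOBS-adjusted Tavg":
            trend_per_decade(tobs.loc[1990:, ["tmin", "tmax"]].mean(axis=1)),
        "post-1990 unadjusted Tmax":
            trend_per_decade(raw.loc[1990:, "tmax"]),
    }
```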

  99. Zeke: I want to thank you very much for taking the time to post here and at Lucia’s about the temperature record. I’m sorry you need to deal with so much dubious thinking about this subject. Anomalies, TOB adjustment and instrument change adjustments make perfect sense to me, as long as you include the uncertainty in the adjustment in the overall uncertainty of the adjusted output. Perhaps you can answer some of these concerns in your next posts.

    I’m most concerned about the fact that you find a breakpoint that needs correction about once every ten(?) years and that the average breakpoint is 0.5-1.0 degC in magnitude (your Figure 7), each comparable in size to the 20th-century warming you are trying to detect. If a breakpoint is caused by slow deterioration of observing conditions, followed by maintenance that restores original observing conditions, that breakpoint shouldn’t be corrected. For example, FWIW Wikipedia tells me that a Stevenson screen needs to be painted every two years to keep a constant high albedo so that the temperature inside is in equilibrium with air at 2 meters, and not perturbed by some sort of radiative equilibrium with SWR. If all stations were suffering from a slow warming bias and only some of them were being maintained frequently enough to prevent a significant bias from accumulating, pairwise homogenization will transfer that bias to all stations. If you’ve got 10 breakpoint corrections and neighboring stations transfer 0.02 degC of bias with each correction, you’ve got a serious problem.

    I’m also concerned about misapplying the lessons from the US to global record. If I understand correctly, we don’t have much information about TOB and instruments outside the US. Are all adjustments to the global record pairwise homogenization? How many adjustments are being made? How big is the average adjustment? How much does that contribute to 20th-century warming? Do the adjustments create a better or worse fit to the satellite record?

    Thanks.

    • Steven Mosher

      The US is somewhat unique in a systematic change in TOBS.
      The other countries that have a few stations affected are
      Japan, Australia, Norway and Canada.
      But in the US it was systematic.

      good question.

    • ” How big is the average adjustment? “
      I’ve done a post on that here. US GHCN adjustments, in terms of effect on trend, are about 50% bigger than non-US.

      • Alexej Buergin

        How come the people in other countries could (and can?) do better measurements?

      • Some might say that the US has gone for quantity rather than quality.

        But really, the answer is TOBS. Other countries mostly prescribed reset times and stuck to it.

      • Steven Mosher

        Alexej.

        As Nick points out, it’s historical.
        the US started with volunteers.
        TOB was not uniform.
        They changed that.
        Other countries had a better process.
        american exceptionalism

  100.  
    Do we really want to know the truth?

    How badly do we want to know?

    One ‘sensed’ that there was something wrong. But you see, sensing isn’t knowing. One hears things which make one feel uncomfortable, without being able to put one’s finger on anything specific. It’s almost an atmosphere — a way people talk, their conduct, or perhaps their gestures or even just their tone of voice. It is so subtle. How can one explain it to anyone who hasn’t experienced that time, those small first doubts, that kind of unease, for want of a better word? We couldn’t have found words to explain what we felt was wrong. But to find out, to look for an explanation for that… that ‘hunch’, well, that would have been very dangerous… One did know very early on that there were dangers in knowledge.

    (Sereny 1996, 458; my emphasis, as taken from, Thomas S. Kubarych, Self-Deception and Peck’s Analysis of Evil)

  101. Funny thing happened while adjusting global temperatures—e.g.,

    These energy-deprived people [in India, Africa and elsewhere around the globe] do not merely suffer abject poverty. They must burn wood and dung for heating and cooking, which results in debilitating lung diseases that kill a million people every year. They lack refrigeration, safe water and decent hospitals, resulting in virulent intestinal diseases that send almost two million people to their graves annually. The vast majority of these victims are women and children.

    The energy deprivation is due in large part to unrelenting, aggressive, deceitful eco-activist campaigns against coal-fired power plants, natural gas-fueled turbines, and nuclear and hydroelectric facilities in India, Ghana, South Africa, Uganda and elsewhere. The Obama Administration joined Big Green in refusing to support loans for these critically needed projects, citing climate change and other claims.

    ~Paul Driessen

    • A fan of *MORE* discourse

      Wagathon emits the usual “rollin` coal” clouds of anti-science propaganda.

      Ain’t yah got the memo, wagathon?

      Solar has won.
      Even if coal were free to burn,
      power stations couldn’t compete

      Last week, for the first time in memory, the wholesale price of electricity in Queensland fell into negative territory – in the middle of the day … largely because of the influence of one of the newest, biggest power stations in the state – rooftop solar.

      Get checks from utilities? No more writing checks?

      *EVERYONE* likes *THAT* energy-economy, eh Climate Etc readers!

      Good on `yah, Green Energy!


      • Imagine living under conditions endured by impoverished, malnourished, diseased Indians and Africans whose life expectancy is 49 to 59 years. And then dare to object to their pleas and aspirations, especially on the basis of “dangerous manmade global warming” speculation and GIGO computer models.

        ~Paul Driessen

      • “few coal generators in Australia made a profit last year”

        And when they go out of business solar will take over. At night. Right?

        Solar subsidies are killing off baseline = blackouts.

      • A fan of *MORE* discourse

        sunshinehours1 foresees libertarianism’s demise: “Solar subsidies kill-off baseline = blackouts.”

        Charles Koch! Is that *YOU*?

        “Solar power enjoys bipartisan support across the country and any ostensible attack on renewable energy is going to have the effect of showing the attacker’s interests to be misaligned with the American public as a people, the United States as a country, and our future as a planet.”

        Good on `yah, Green Power!


      • “Britain could be at risk of blackouts by next winter, the boss of one of the Big Six energy companies has warned, as old power plants are closed and have not yet been replaced.

        “Keith Anderson said the green levy will force coal-fired plants to close too quickly.

        http://www.dailymail.co.uk/news/article-2520633/Npower-boss-warns-energy-blackouts-NEXT-WINTER-closures-coal-fired-power-plants.html

    • The reality in Queensland is very different. Power bills have doubled in the past few years, chasing industry such as aluminium smelting offshore. The returns on solar installations are not sufficient to pay interest on the costs of installation even at hugely advantageous tariffs – and the bills keep coming in. A far bigger burden falls on those who don’t have solar panels for network costs that are fixed. Coal stations continue to operate in the background – simply shedding load until needed again.

      Australia has gone from having some of the lowest energy prices in the world to having some of the highest. It is an utter disaster for everyone driven by distorting energy subsidies.

  102. Re TOBS, the Schall and Dale paper describes it in nice layman’s terms. It was written in the 1970s, before things got political, and it backs up Zeke’s fine post here.
    Here it is: http://journals.ametsoc.org/doi/pdf/10.1175/1520-0450%281977%29016%3C0215%3ATOOTBA%3E2.0.CO%3B2

    • I saw there were papers clearly documenting TOBS in the ’30s, the ’50s and the ’70s.
      Innocent question: since this was a known issue for a very long time, are we certain that we are not now adjusting temp records that were already adjusted for a known issue?
      In other words, did a reasonable adjustment get applied twice?

  103. It’s interesting reading Mosh and Zeke’s “good guy-bad guy” routine. The mere fact that there is so much argument about how to measure and adjust temp readings leads me to the conclusion that in spite of all your graphs and codes you do not have a clue. Lots of money involved in trying to make a silk purse out of a sow’s ear. Give me medical research any day of the week. After all the billions spent, all the years researching, nothing has changed when it comes to predicting the weather. Why is nobody in the media questioning how much money is being wasted on futile science? I see medical breakthroughs in the news all the time; meanwhile paper after paper is published in climate science, then disputed, then rehashed, then disputed, and so it goes on and on and on.

    • Steven Mosher

      “It’s interesting reading Mosh and Zeke’s “good guy-bad guy” routine.”

      Damn give that person a prize.

      I wondered how long it would take.

      Note that everybody who asks a good science question gets a nice answer from zeke.

      object lesson over.

    • Matthew R Marler

      Noelen: in spite of all your graphs and codes you do not have a clue

      Sorry, but on this issue you are clueless.

      I see medical breakthroughs in the news all the time,

      Studies by Ioannidis and others show that 40% of the results published in the medical journals can’t be reproduced. There isn’t the evidence to determine whether climate science does better on the whole, but some cases of climate science analyses have gotten a lot of press.

  104. This answers my question why some of the adjustments make the trend more positive. Thanks, and I look forward to the next posts. I haven’t had time to read all the links or all the comments, but so far, so good.

    • That being said, if I want to know what month was the hottest ever, I’ll still use satellite temp series due to more uniform and consistent sampling – this in spite of satellite and sensor changes.

  105. I think we understand very well what it’s all about. No matter how you dress up the pig… it’s still a Left versus right issue and there are no more useful explanations. Simply take sides and get it over with: allowing what is going on to continue is a vote for racism based on access to energy.

    Poverty, in the sense of deprivation of basic goods and services, in very large part is a result of insufficient access to energy. Access to energy means electricity for our homes, businesses and computers; it means transportation, in the form of automobiles, trains and planes; it means heating in cold weather and cooling in hot weather; it means functioning hospitals and health care facilities; it means mechanized agricultural methods that ameliorate the effects of bad weather and pests; it means access to information; and many other things equally important. Without access to energy, people are trapped in local areas to lead a life of basic subsistence if not periodic hunger and starvation.

    ~Francis J. Menton, Jr. (The Looking Glass World of “Climate Injustice”)

    • A fan of *MORE* discourse

      Wagathon claims it’s simple: “It’s a Left versus Right issue and there are no more useful explanations.”

      Love yer weblog, waggy!

      `Cuz yer blog makes it real simple to show young scientists: This is what anti-science ideology looks like.

      “The schoolteachers that peddle climate p**n in the nations’ classrooms are **** about their underlying motives and don’t know **** from **** about global warming or what it takes to earn a living in the real world.

      It’s a pleasure to advise Climate Etc readers to carefully and thoughtfully contrast “Waggy-World” with a grown-up world-view.”


      • The Left dissembles while, “even today,” says Menton, “over 1.2 billion people, 20% of the world’s population, lack access to electricity.”

      • A fan of *MORE* discourse

        Wagathon is worried: “20% of the world’s population lack access to electricity.”

        Where Solar’s ALREADY making inroads … well ahead of Old Fossil!


      • Menton says, “Here is the World Bank’s description of what it means to lack access to electricity:

        Without access to energy service, the poor will be deprived of the most basic of human rights and of economic opportunities to improve their standard of living. People cannot access modern hospital services without electricity, or feel relief from sweltering heat. Food cannot be refrigerated and businesses cannot function. Children cannot go to school in rainforests where lighting is required during the day. The list of deprivation goes on.

        “The World Bank,” says Menton, “actually projects that the number of people in Africa without access to electricity will increase, not decrease, between now and 2030!”

      • Access to potable water is a bigger issue. So is access to vaccines, even if some of the same warming-doubting munchkins within the US think “vaccines are bad for you, m’kay!?”

        It turns out it takes no particular effort to be totally ignorant, just a belief system that overrides one’s intellectual abilities.

  106. ‘Observation and reanalysis-based studies have shown that moist enthalpy (TE) is more sensitive to surface vegetation properties than is air temperature (T). Davey et al. (11) found that over the eastern United States from 1982 to 1997, TE trends were similar or slightly cooler than T trends at predominantly forested and agricultural sites, and significantly warmer at predominantly grassland and shrubland sites. Results from Fall et al. (12) indicate that TE (i) is larger than T in areas with higher physical evaporation and transpiration rates (e.g. deciduous broadleaf forests and croplands) and (ii) shows a stronger relationship than T to vegetation cover, especially during the growing season (biomass increase). These moist enthalpy-related studies confirm previous results showing that changes in vegetation cover, surface moisture and energy fluxes generally lead to significant climatic changes (e.g. 41-43) and responses which can be of a similar magnitude to that projected for future greenhouse gas concentrations (44, 45). Therefore, it is not surprising that TE, which includes both sensible and latent heat, more accurately depicts surface and near-surface heating trends than T does.’

    http://pielkeclimatesci.files.wordpress.com/2011/11/nt-77.pdf

    There is no potential to obtain an artifact free record from surface temperature data. Evaporation varies seasonally – with surface type – decadally and longer for many reasons. Even if it is accepted that ‘adjustments’ provide a better surface T record – it is still far from adequate for climate purposes.

  107. “The alternative to this would be to assume that the original data is accurate”

    No, an alternative would be to question the hypothesis of an ever increasing temperature trend.

    The oil drop experiment of Millikan and Fletcher is illustrative. It is said the original experiment measured the charge of an electron within 1% of its currently accepted value. There is also dispute about the legitimacy of some of Millikan’s tests and whether he massaged the data to fit his hypothesis. It is also known that it took many years and many experiments to refine the value of an electron charge. Richard Feynman argues this is so because scientists were biased to ignore results that differed from the accepted value. Scientists fooled themselves because they were ashamed to report results that were too far outside the “consensus”.

    Unlike the charge of an electron a historic temperature data point can never be remeasured. There will never be another July 25, 1936 in Lincoln Nebraska and the temperature on that date can never be retested. The same applies to any other historic temperature measurement. Lacking proof of instrument or recording error it is hubris beyond comprehension that one would alter the temperature record. To do so is presumptuous. It would only be done to impress one’s bias on the science.

    Yet presumption is exactly the position Zeke and Mosher and others defend. Rather than acknowledge the limitations of their temperature model, they alter the data so that the trend will work. That is the key not to be missed. What is most important in this exercise is the trend. Because the trend cannot be questioned. So either the past temperature must be lowered or the present temperature must be increased.

    But the present temperature cannot be changed. Given the ubiquity of current temperature data it would be too much of a lie to change the present. So the past is altered and the trend is protected.

    What we need, in Feynman’s words, are scientists who refuse to allow themselves to be fooled. What is needed are scientists who are not ashamed to trust the data as it is and to refute the consensus when the data does not support it.

    • I’m not sure I understand. Are you saying that after making whatever adjustments they think appropriate, the NCDC people are destroying the original temperature data and thereby making it impossible for inquisitive investigators to check or recreate their work? I would agree that doing so would be bad behavior. But I don’t think that’s the reality of the situation.

      • Don Monfort

        He did not say anything about destroying the original data. Nothing. You just made that up. Where do you people come from?

      • ==> “Where do you people come from?”

        Too funny. So much for the notion that “skeptics” don’t doubt that the Earth has warmed and that ACO2 is partially responsible (the only question is the magnitude of the effect).

        Eh?

        But don’t worry. As soon as Judith puts up another post, you can pretend that all these “people” don’t exist.

      • Don Monfort

        Just for fun joshie, can you elaborate on wtf it is you are talking about? You are always finding irony in all the wrong places, runt.

    • “Lacking proof of instrument or recording error it is hubris beyond comprehension that one would alter the temperature record.”

      No-one is altering the temperature record. You will find the data as recorded in the GHCN Daily file. In fact, if you really want authenticity re July 25, 1936 in Lincoln, Nebraska, NOAA will provide you with the handwritten record.

      With TOBS, at least, they are redoing the calculation that goes from the reading of the max-min markers at a particular time (which is the real record) to the calculation of a monthly average. That requires knowledge of diurnal variation, and we now have lots of hourly data, unavailable when it was first done. But the original record is what they work from.

      • Wouldn’t the diurnal variation change with conditions? For example, if there’s a drought, the variation would probably be greater due to less humidity. So using modern records to correct older ones might not work out? Do you agree?

      • Don Monfort

        Do you consider the handwritten stuff the official temperature record, nicky? Is that what they present to the public as the temperature record? The stuff as it was written down? Then why the f does it keep changing? Where do you people come from? You characters are manufacturing straw men out of Dan’s very coherent statement. Why are you compelled to try to make everybody who doesn’t think exactly like you do look like a dunce?

      • Steven Mosher

        Don yes, the written records will make it into ISTI as level 0 data.
        part of the public record

      • Don Monfort

        Steven, nobody here is claiming that they don’t keep the original data in a file somewhere. Can we at least get that straight?

      • Steven Mosher

        Sure Don.

        But some guys seem to be insisting that every monthly pronouncement of temperature

        A) be done in a color scheme to their liking
        B) be annotated with every detail for every calculation done.
        C) be ISO9000.

        You know, even in business we let people report topline numbers with pointers to the entire justification.

        having lost the science battle, guys are shifting to the PR frame.

        That’s ok, as long as they clearly state “the record is good”; now let’s talk about the presentation.

      • Don Monfort

        Steven, we have been talking about Dan W’s comment. He didn’t say anything about color schemes, or destroying data. Can we agree on those two things, at least?

        Why not address what Dan actually said. I will repeat what I wrote below:

        “What Dan is talking about is the fact that the data that is revealed to the public and trumpeted on the 6 o’clock news is the adjusted data. The warmed over data. The hot stuff. Please explain how July 1936 was the warmest month for a long time, then it wasn’t, then it was again. And when it was again, it was done very quietly.”

        Are we talking misplaced decimal points and station moves? How many decades does it take to figure that out?

        You know in business, if a company keeps adjusting prior years’ financial data, investors assume incompetence and/or dishonesty.

      • I like your handwritten record.

        I noticed that NONE of the Tmax values recorded during the month were duplicated from one day to the next.

        Since the entire basis for making TOBS adjustments to reduce the Tmax of temperatures recorded back in years like 1934 is that a very high reading might get double counted for two days due to recording temps in the afternoon, doesn’t the lack of a single double-value utterly refute the rationale?

        If there are no duplicate values, there is no need for a TOBS adjustment to “correct” the data.

      • “doesn’t the lack of a single double-value utterly refute the rationale?”
        No. “Double counting” simply means that two max readings were taken on the same afternoon. They won’t usually be the same. On hot Monday, the max was at 3pm, but it was still warm enough at 5 pm to be the Tuesday max.
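
        That mechanism is easy to demonstrate with synthetic hourly data. A minimal Python sketch (my own toy diurnal cycle, not real station data): resetting the max/min thermometer at 5 pm lets one hot afternoon set the max for two successive observation “days”, which typically warms the mean Tmax relative to a midnight reset.

```python
# Toy TOBS demonstration: 31 days of synthetic hourly temperatures.
import numpy as np

rng = np.random.default_rng(3)
days, hours = 31, 24
base = rng.normal(25, 4, days)                    # day-to-day variation, deg C
diurnal = -6 * np.cos(2 * np.pi * (np.arange(hours) - 2) / 24)  # peak ~2 pm
temps = (base[:, None] + diurnal[None, :]).ravel()  # one long hourly series

def monthly_mean_tmax(reset_hour):
    # carve the hourly record into observation "days" bounded by reset times
    t = temps[reset_hour:]
    n = t.size // 24
    return t[:n * 24].reshape(n, 24).max(axis=1).mean()

print("midnight reset:", round(monthly_mean_tmax(0), 2))
print("5 pm reset:    ", round(monthly_mean_tmax(17), 2))  # typically warmer
```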

      • Steven Mosher

        Don, you want me to defend stupid PR in a post about math methods?

        Let me stipulate. Every public statement about warmest month is stupid.

        Now back to science

      • Don Monfort

        Steven,

        “Please explain how July 1936 was the warmest month for a long time, then it wasn’t, then it was again.”

        It’s about math methods. It’s also about credibility. Does BEST know which month is the warmest on record?

        Please note that I promise I am not talking about destroying data, color schemes, the price of potatoes…

      • Don Monfort

        Zeke,

        Can you help Steven?

        Please explain how July 1936 was the warmest month for a long time, then it wasn’t, then it was again.

        It’s about math methods. It’s also about credibility. Does BEST know which month is the warmest on record?

      • Steven Mosher

        “Steven,

        “Please explain how July 1936 was the warmest month for a long time, then it wasn’t, then it was again.”

        1. You won’t find us saying anything about July 1936 being the warmest month.
        2. If you are asking about NOAA, then ask NOAA.
        3. I try to spend as little time as I can wondering about or trying to explain why NOAA does some of the things they do.

        In general the warmest month will change because it’s an estimate.
        It’s not scientifically interesting to me. If you are interested, then waste your time on it, not mine.

    • Bill Illis

      We are going to be having the same argument for another 9 decades.

      They need to ramp up the adjustments from the current +0.9F (surprising that number hasn’t come up so far) …

      … to +5.8F in the next 86 years in order to meet the projected temperatures.

      Did you know that there was no corn crop in Minnesota in 1903? It was too cold to reach full maturity.

      • Bill Illis wrote:

        They need to ramp up the adjustments from the current +0.9F (surprising that number hasn’t come up so far) …

        … to +5.8F in the next 86 years in order to meet the projected temperatures.

        Alternatively, the physics behind the projections is sufficiently correct and the forcings by the real world climate drivers don’t deviate too much from what has been prescribed for the model simulations.

      • David Springer

        “Alternatively, the physics behind the projections is sufficiently correct”

        Good one! LOL

        Far more likely technology progresses to the point where we can reverse deleterious effects in the unlikely event they should arise.

      • Matthew R Marler

        Jan P. Perlwitz: Alternatively, the physics behind the projections is sufficiently correct and the forcings by the real world climate drivers don’t deviate too much from what has been prescribed for the model simulations.

        That is definitely one of the alternatives. It can be discussed independently of whether the temperature record, with the best possible statistical analysis, is a reliable estimate of climate change.

    • Steven Mosher

      ” Lacking proof of instrument or recording error it is hubris beyond comprehension that one would alter the temperature record. To do so is presumptuous. It would only be done to impress one’s bias on the science.”

      Nobody alters the record.
      It is still there.

      However, we answer the following question.

      What is your best estimate of the temperature on July 16, 1936 in Grand Rapids, Michigan?

      Suppose you look at that record

      it says:
      July 15: 15C
      July 16: 155C
      July 17: 15C

      Oops. How does the official record show 155C? Oops, they moved the decimal point.

      So, we offer a “corrected” dataset. One that provides an estimate, our best estimate, of what actually should have been recorded.

      Or suppose the station moved from 50 meters above the ground to the ground.

      You can estimate the effect of that as well.

      So, the record is there, intact.

      When you want to do monthly averages, when you want an accurate prediction of what should have been recorded, then you do an estimate.

      When people do these estimates they call the data “adjusted”.
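
      The decimal-point case can even be sketched in a few lines of Python (a toy rule of my own invention, not NCDC’s actual QC procedure): the raw value is kept untouched, and a separate adjusted series carries the best estimate.

```python
# Toy QC: keep the raw record as-is; publish a best estimate alongside it.
raw = {"1936-07-15": 15.0, "1936-07-16": 155.0, "1936-07-17": 15.0}

def best_estimate(value, neighbors, limit=60.0):
    """Flag physically impossible readings; try a slipped-decimal repair."""
    if abs(value) <= limit:
        return value                 # plausible: keep the raw reading
    candidate = value / 10.0         # e.g. 155 -> 15.5 (misplaced decimal)
    if abs(candidate) <= limit and neighbors:
        lo, hi = min(neighbors) - 10, max(neighbors) + 10
        if lo <= candidate <= hi:    # consistent with adjacent days
            return candidate
    return None                      # unrepairable: mark as missing

adjusted = {"1936-07-16": best_estimate(155.0, neighbors=[15.0, 15.0])}
print(raw["1936-07-16"], "->", adjusted["1936-07-16"])  # 155.0 -> 15.5
```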

      • Don Monfort

        “Nobody alters the record.
        It is still there.”

        We know that. Can’t we at least get that straight? What Dan is talking about is the fact that the data that is revealed to the public and trumpeted on the 6 o’clock news is the adjusted data. The warmed over data. The hot stuff. Please explain how July 1936 was the warmest month for a long time, then it wasn’t, then it was again. And when it was again, it was done very quietly.

      • “So, we offer a “corrected” dataset. One that provides an estimate”

        That is not what it is being sold as.
        It is being sold as the “global temperature” not an “estimate” of global temperature. Climate “science” is the only field I know of where it is acceptable to “estimate” data. If you don’t believe the data you should discard it. Aside from the issue that the data is now compromised, you have lost creditability. Once you justify “estimating” data one has to wonder what else you are “estimating” (perhaps with good intentions). One poor habit leads to another poor habit.
        Correct me if I am wrong but in another thread I believe it was argued that it doesn’t change the result. You are defending doing something that theoretically doesn’t change the result. Occam’s Razor argues against “estimating.”

      • “trumpeted on the 6 o’clock news is the adjusted data”
        Hardly ever. Today’s temps aren’t adjusted. Nor if they say “hottest day for city X”. What is likely adjusted is a monthly or annual average. That isn’t raw data. It’s a calculation. People figure out more stuff and re-do calculations.

  108. WordPress: My comments just disappeared for no reason before I posted them.

    So I can now only summarise what they were. Figure 8 would indicate a drop in Tavg in 1940, whereas Figure 1 shows Tavg increased, as indeed do all other records. This requires explanation.

  109. Whoever is careless with the truth in small matters cannot be trusted with important matters.
    Albert Einstein.
    ___________________

    Albert E. summed up quite nicely why a lot of us out here in the climate-interested, street-level public no longer have any trust left in the “we are the experts and therefore don’t need to answer questions from those low-level ignorants” posturing of climate science advocates, nor believe most of what they try and push as the “adjusted out of its cotton-picking mind”, so-called “climate science”.

    • First, bogus accusations against the scientists, whether singled-out ones or scientists in general, are made; then those accusations are used as a pretext to dismiss the science.

  110. At the risk of being damned as a conspiracy theorist, may I just ask if there is an easy way to observe the ‘raw’ data, as in hand-written records, alongside the digitized ‘raw’ data we have come to know and love?

    • Hi DocMartyn,

      The new International Surface Temperature Initiative is trying to archive photocopies of all the original handwritten records that they can get their hands on. It’s a slow process though, as there are literally millions of pages of logs.

      • Zeke, can you do me a favor?
        I went to BEST and looked up Portland, Oregon:
        Berkeley ID#: 174154
        % Primary Name: PORTLAND PORTLAND-TROUTDALE A
        % Record Type: TAVG
        % Country: United States
        % State: OR
        % Latitude: 45.55412 +/- 0.02088
        % Longitude: -122.39996 +/- 0.01671

        http://berkeleyearth.lbl.gov/auto/Stations/TAVG/Text/174154-TAVG-Data.txt

        Then looked at the same station’s written records, for 1950.

        http://www.ncdc.noaa.gov/IPS/lcd/lcd.html?_page=1&state=OR&stationID=24229&_target2=Next+%3E

        The numbers for the monthly average in the official record (in F) do not match the Berkeley Earth database after applying (F - 32)*(5/9).

        Am I doing something very stupid here?
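
        For reference, the arithmetic being applied there is just the standard conversion (nothing dataset-specific), e.g. in Python:

```python
def f_to_c(f: float) -> float:
    """Standard Fahrenheit-to-Celsius conversion."""
    return (f - 32.0) * 5.0 / 9.0

print(round(f_to_c(38.8), 2))  # 3.78: a 38.8 F monthly mean should appear
                               # as roughly 3.8 C in a Celsius dataset
```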

      • David Springer

        I spot checked February 1950. Reporting station 38.8F vs. BEST 38.1.

        What you’re doing wrong is expecting unpaid amateurs to produce good working code.

      • David Springer

        Seriously, that’s about right, Doc. REAL raw data from 1950 found on station records is cooled by 0.7F by TOBS and SHAP (or equivalent) before BEST presents it as raw data. What BEST calls raw data is not what you thought it was.

        Isn’t that just precious? Now you know.

      • David, I want to know if the stations are the same. If they are the same we can then investigate why the first station and first year I examined have different ‘raw data’.

      • David Springer

        They’re the same station: Portland Troutdale Airport. The old NCDC report mentions in the notes that it’s the airport, and that the 2nd of February saw the lowest temperature recorded since its establishment in 1940.

        http://www1.ncdc.noaa.gov/pub/orders/IPS/IPS-BD3C4874-3F13-4A16-B1DF-CFFA20B93FB5.pdf

      • David Springer

        I already told you why the data is different. What you consider RAW isn’t what BEST considers RAW. Funny Mosher didn’t mention that when he was ranting about RAW vs. EXPECTED huh? I’m pretty sure he knows RAW is not the figures taken straight off the printed page turned in from the weather station.

      • Steven Mosher

        Doc they are different stations

        The written record for the FSO is an hourly station.
        TOBS won’t apply as Springer surmises.

      • Steven Mosher

        “I already told you why the data is different. What you consider RAW isn’t what BEST considers RAW. Funny Mosher didn’t mention that when he was ranting about RAW vs. EXPECTED huh? I’m pretty sure he knows RAW is not the figures taken straight off the printed page turned in from the weather station.

        ########################

        1. The raw records we consider are those that are in files, not photocopies of written records.
        2. If you have a way of reading in PDFs and sucking out the numbers reliably, then knock yourself out.

        3. The records Doc pointed to are just ONE of multiple records for that location, apparently an hourly FSO. Wait for the series of posts that explains how all the various sources are prioritized (hint: how do you merge hourly and daily?).

        Knock yourself out. When you find the problem report it.

      • Steven Mosher

        Doc.

        First you have to determine WHICH Portland station you are talking about.

        One Berkeley station you linked to was TROUTDALE airport.

        Your written record is not for TROUTDALE.

        There are multiple Portland stations, and multiple sources for each station.

        http://berkeleyearth.lbl.gov/auto/Stations/TAVG/Text/164883-TAVG-Data.txt

      • David Springer

        The 1950 scanned report is for Portland-Troutdale. Don’t make up mistruths.

        The problem is that your system is a big steaming heap of spaghetti code and the 1950 Troutdale data is one strand of it and you can’t phucking follow one strand because of the mess you made. Amateur.

  111. Speaking of USCRN …
    From the article:
    Government Data Show U.S. in Decade-Long Cooling

    The National Oceanic and Atmospheric Administration’s most accurate, up-to-date temperature data confirm the United States has been cooling for at least the past decade. The NOAA temperature data are driving a stake through the heart of alarmists claiming accelerating global warming.

    Responding to widespread criticism that its temperature station readings were corrupted by poor siting issues and suspect adjustments, NOAA established a network of 114 pristinely sited temperature stations spread out fairly uniformly throughout the United States. Because the network, known as the U.S. Climate Reference Network (USCRN), is so uniformly and pristinely situated, the temperature data require no adjustments to provide an accurate nationwide temperature record. USCRN began compiling temperature data in January 2005. Now, nearly a decade later, NOAA has finally made the USCRN temperature readings available.

    http://www.forbes.com/sites/jamestaylor/2014/06/25/government-data-show-u-s-in-decade-long-cooling/

    • jim2 wrote:

      The National Oceanic and Atmospheric Administration’s most accurate, up-to-date temperature data confirm the United States has been cooling for at least the past decade. The NOAA temperature data are driving a stake through the heart of alarmists claiming accelerating global warming.

Apparently, the author of this piece doesn’t know that the contiguous US covers only about 1.5% of the globe. He probably also has never heard of statistical significance and things like that. I am not surprised about this kind of statement considering where it is coming from, though. Forbes and Heartland are such reliable sources for arguments about these topics. It couldn’t be less based on science.

      • Jan – What??? From the quote on this page …
        The National Oceanic and Atmospheric Administration’s most accurate, up-to-date temperature data confirm the United States has been cooling for at least the past decade.

      • If you aren’t careful, Jan, you’ll get moshered for unscientific speculation about the author.

      • jim2, my comment is to be understood also with the following sentence I quoted:

        “The NOAA temperature data are driving a stake through the heart of alarmists claiming accelerating global warming.”

This statement is a non-sequitur. It doesn’t follow from the alleged cooling of the United States (“alleged” because the “cooling” is not statistically significant), since the United States covers only a small fraction of the globe.

        jim2 also wrote:

        If you aren’t careful, Jan, you’ll get moshered for unscientific speculation about the author.

I am not worried if I make an “unscientific speculation” here. This is just an opinion blog, isn’t it?

The alternative to my previous “speculation” would be that Taylor of the Heartland Institute is deliberately talking nonsense. That could be a valid explanation too. Or a combination of both.

      • jim2, this amused me:

        If you aren’t careful, Jan, you’ll get moshered for unscientific speculation about the author.

        As Steven Mosher apparently feels perfectly comfortable calling me a liar. Earlier this year, he said “no skeptic has seen fit to test the hypothesis” the UHI effect “MUST make its way into the global average.” I pointed out I had done so. That’s when he said:

        knowing Brandon,I would say he is lying.

        and if he is asked to publish his test he will quicky cobble something together and back date it.

        I’m not sure how Mosher justifies portraying me as completely dishonest based upon nothing but his “knowledge” of me. He had far less of a case than anyone claiming dishonesty on this topic has.

• Is this the same person who earlier wrote
        “First bogus accusations against the scientists, single out ones or in general, are being made, then those accusations are used as pretext to dismiss the science.”

• Yes, and? Have I made any bogus accusations against any scientists who published results from scientific research I didn’t like? No. Then I also can’t have used this as a pretext to dismiss any science.

As for Taylor of the Heartland Institute: so you tell me, is it what I wrote first, or the alternative, that he was deliberately talking nonsense? Which one is it?

      • “This statement is a non-sequitur.”
Agreed. While I’m happy with the temperature results, their actual weight has many contexts to consider.
Also, so many stakes have been driven without effect; I’ve heard that phrase many times. The hypothesis has been falsified, it’s not a vampire.

      • Jan – more from that article:
        Second, for those who may point out U.S. temperatures do not equate to global temperatures, the USCRN data are entirely consistent with – and indeed lend additional evidentiary support for – the global warming stagnation of the past 17-plus years. While objective temperature data show there has been no global warming since sometime last century, the USCRN data confirm this ongoing stagnation in the United States, also.

jim2, the global data sets would indeed be at least the proper basis for claims about global warming.

        You quote following claim by Taylor:

        While objective temperature data show there has been no global warming since sometime last century,

Based on what data specifically is this claim about “no global warming” made? (At the surface?) These are the surface temperature trends in K per decade (with 2-sigma intervals) since 1997 (using 17-plus years, as claimed by Taylor):

GISTEMP: +0.078 ±0.114
NOAA: +0.05 ±0.104
HadCRUT4: +0.05 ±0.108
Berkeley: +0.084 ±0.113 (ends in 2013)
HadCRUT4 krig v2: +0.106 ±0.119
HadCRUT4 hybrid v2: +0.117 ±0.133
(http://www.ysbl.york.ac.uk/~cowtan/applets/trend/trend.html)

What is the scientific reasoning behind the assertion that these data show there wasn’t any global warming at the surface? Or what is the scientific reasoning for the claim, based on these data, that there was a “pause”?
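(For anyone who wants to reproduce numbers like these, a minimal sketch of the trend arithmetic: ordinary least squares with a naive 2-sigma interval. The anomaly series below is a random placeholder, not real data, and the York calculator linked above additionally corrects for autocorrelation, which widens the intervals.)

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1997, 2015)
# Placeholder annual anomalies -- substitute real GISTEMP/HadCRUT4/etc. values.
anoms = 0.005 * (years - 1997) + rng.normal(0.0, 0.1, years.size)

# OLS fit: anomaly = slope * (year - mean year) + intercept
A = np.vstack([years - years.mean(), np.ones_like(years, dtype=float)]).T
coef, ssr, *_ = np.linalg.lstsq(A, anoms, rcond=None)
se_slope = np.sqrt((ssr[0] / (years.size - 2)) / np.sum(A[:, 0] ** 2))

print(f"trend: {10*coef[0]:+.3f} +/- {10*2*se_slope:.3f} K/decade "
      "(2 sigma, no autocorrelation correction)")
```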

112. Wouldn’t it be interesting to compare the ten-year temperature record of the 114 pristinely sited stations with all the other temperature stations, and then use the closely correlating ones to check the earlier period, a decade before, to see whether it shows non-cooling or warming? Seeing you seem to have a quality-control benchmark here?
jest-a-serf.

• Yes, that would be VERY interesting. Just use the highest-quality, pristinely sited stations, even if it’s just 10 or 20 (the more the better, of course), but as evenly distributed as possible. No adjustments at all, not even gridding (it’s just a temperature index after all).
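A minimal sketch of what such an index could look like (the station matrix here is a random placeholder; swap in the real USCRN annual means):

```python
import numpy as np

# Placeholder: 20 stations x 10 years of annual mean temperatures.
rng = np.random.default_rng(42)
temps = 12.0 + rng.normal(0, 2, (20, 1)) + rng.normal(0, 0.3, (20, 10))

# Anomalies relative to each station's own mean remove the station
# climatology, so stations at different elevations/latitudes can be mixed.
anomalies = temps - temps.mean(axis=1, keepdims=True)

# The index is just the unweighted mean anomaly per year -- no gridding,
# no adjustments, exactly as proposed above.
index = anomalies.mean(axis=0)
print(np.round(index, 3))
```

With an evenly distributed network the unweighted mean is a reasonable index; with a clustered network it over-weights the cluster, which is the gridding issue discussed elsewhere in this thread.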

113. Steven and others, regarding “assuming good faith”: the fact that people like myself find the word of the consensus difficult to accept can be traced to the actions of a small but influential group of people. When Steve McIntyre, Judith Curry, and the Pielkes are derided, the same kind of questions you are asking, sunshine guy, come to mind. People will assume good faith when people like Mike Mann and Jim Hansen show a bit themselves. There are wingnuts on both sides of this debate. There are others (yourself and Nick Stokes on the consensus side, Anthony and Steve M. on the skeptic side) who engage, look first, and argue when they are right. That said, to assume that we have not been given ample cause for assuming bad faith is to be willfully obtuse. I was once neutral in this debate. A visit and question at RealClimate put me on the path of the skeptic. Reading the posts of your telescope friend and others like him keeps me on that path. Not for the content, but for the tone: derisive, condescending, poorly argued, and lacking any assumption of good faith.

• Well said. Same experience.
The only redeeming thing I get from some of these guys is that, as a self-acknowledged poor communicator, at least I know there are some in the world orders of magnitude worse! Comforting.
Zeke is an excellent communicator.

      • Neckels communicates that Claes Johnson should be listened to. Kind of bizarre considering Claes is a lead skyyydragon.

      • WebHubTelescope | July 8, 2014 at 7:48 pm |
        Neckels communicates that Claes Johnson should be listened to. Kind of bizarre considering Claes is a lead skyyydragon.

Telescope dude. Thanks for showing up! Could not have forged a better reply from you to illustrate your tendency to be “derisive, condescending, poorly argued and lacking any assumption of good faith”, assuming that “Zeke” and “Claes” are one person. Please explain for the class how Zeke, being a “skyyydragon”, detracts from Nickels’ assertion that Zeke is an excellent communicator. Indeed, where does he assert that Zeke / Claes should be listened to?

      • Assuming that “Zeke” and “Claes” are one person.
        Pretty sure they are not.
        http://www.yaleclimateconnections.org/author/zhausfather/

Claes had good references for a posteriori error estimation, which is a useful tool for exploring the computability of diffeqs. I don’t know much about skyyyyypedragon ’cause I’m waiting till the third movie comes out so I can see them all together.

      • Oh, that must be a different movie then, I was thinking about the one with the dwarf king.

• yes, one of the main things that got my attention and keeps my light skepticism going (skepticism mainly about uncertainties being understated, plus thinking there is some risk of massive group-think having led the CC science seriously astray) is seeing how some prominent AGW scientists circle the wagons and trash folks like Curry, the Pielkes, etc. It is a huge red flag for the two concerns I mentioned (there are other concerns too).

    • Steven Mosher

You have no cause to assume bad faith. None.
The original TOBS work was done in 1986.
The first skeptic to look at it vindicated the work.
Then others attacked.
Another verification was done. Successful.
Then more attacks, basically people saying “I don’t understand.”
More explanation.
More verification.

There is bad faith, but not by the guys who did the work in 1986.
The bad faith is here and now, practiced by the likes of you.

114. One day all these suspect, ill-advised, and mainly spurious “adjustments” to the global and national temperature records, and all the arguments going on about them, will be looked upon as today’s equivalent of the medieval thesis “How many angels can dance on the head of a pin?”.

Just give us, the public, the real as-measured data, warts and all, right from the very beginning of records, and let those of us who are interested sort through it and make of it what we will.
The other grossly “adjusted” data you climate scientists can go and play with, and keep right on adjusting to your heart’s content, or at least until the money runs out.
Given human nature over that hundred-plus years of records, it is highly likely that the foibles and faults of those thousands of observers and their measuring equipment will, through sheer numbers and bulk, have about evened out to a neutral point around which the real actual temperature will be centered.
Consequently the record would not need anything in the way of the doubtful, unverifiable, unprovable-for-accuracy adjustments so beloved of the climate data manipulators.

Of course, if the actual recorded-at-the-time temperature data were released as a full database with official blessing, it would probably be only a matter of quite a short time before the climate-interested public and politicians decided that all that morphed-out-of-reality, adjusted data the scientists were playing with on their play stations wasn’t really needed, as it bore no resemblance to reality nor had any perceptible impact on society, and their funding should and would consequently cease.

The CET, to my knowledge, has not been “adjusted” in any way (we hope), yet it is accepted as a reasonable proxy for the temperatures in central England from 1659 to today.

So why all those, from the public’s viewpoint mainly illogical, “adjustments”, particularly to temperature recordings more than about 5 years old?

What was recorded was recorded to the best of the abilities of those many thousands of usually conscientious observers over those tens of decades past.
Just leave it at that and use that original data, warts and all. I suspect that the true average from all those past temperature recordings will one day be found to be much closer to the reality of those past temperatures than all those ever-so-clever, highly sophisticated, and totally useless temperature adjustments, which seemingly are only of use for the disinformation dissemination so beloved of today’s catastrophe-advocating climate scientists.

    • Dear ROM,

      I recommend that you read Manley’s original CET papers. There are two of them and they’re both available free from the Wikipedia page on their author:

      http://en.wikipedia.org/wiki/Gordon_Manley

      Cheers,
      John

    • ROM

Manley’s papers are included in my article that compared CET to the historic reconstructions by Mann and Lamb, and which included my own to 1538.

      https://judithcurry.com/2011/12/01/the-long-slow-thaw/

Last year I was at the Met Office and saw David Parker, who created the CET record from 1772. The latter relies on daily instrumental readings. Manley’s are monthly ones and also rely on other data, which to my mind makes it stronger, as it helps to corroborate the record.

      By themselves instrumental readings can have lots of flaws which range from observer error to instrumental error to the basic problem that very often the maximum and minimum temperatures were not captured.

The Manley record has somewhat fallen out of favour, not least because of the astonishing rise in temperatures from 1695 to 1739, when they were brought to a screeching halt by a very severe winter.

      Many people have looked at this period including Phil Jones who admitted that it had caused him to believe that natural variability was much greater than he had previously thought.

We must bear in mind with all reconstructions Lamb’s wise words that ‘we can understand the tendency but not the precision’.

We place far too much reliance on believing that data such as is discussed here is of a standard that allows it to be put through the computer and come out the other end as an extremely accurate database.

To paraphrase the motto about models:

‘All temperatures are wrong but some are useful.’

I have it straight from the horse’s mouth that the historic temperature record is not given much credence these days due to scientific uncertainties, and that these are somewhat played down.

      tonyb

      • tonyb
Thank you both for this and for helpful past comments correcting some of my previous misunderstandings. Your ID “climatereason” is well chosen.

115. Kindly tell us:
1. Who initiated this project?
2. When did it start?
3. Who is doing the adjustments?
4. What are their qualifications?
5. What justification has been documented?
6. How does this change conclusions drawn from the previous curve?
7. What official organizations are involved?
8. Are they the only organizations involved?
9. Are any of them global warming advocacy groups?
10. Where is data about before-and-after temperatures available?
11. What peer-reviewed articles have appeared?
12. What other sources of publication exist?
13. Cite author affiliations for all of the above.
14. In 200 words, explain the purpose of this project.

116. The TOBS issue seems to have been the reason Watts pulled his paper at the last minute two years ago. Watts doesn’t believe in the TOBS adjustment, being skeptical that the official recording times were the actual recording times, so that a systematic adjustment assuming those times can’t be made. However, McIntyre was added to Watts’s paper and held it up to do the due diligence with TOBS that he viewed as necessary. I only get this part of the story from here, which ends with McIntyre going off to check Watts’s results using TOBS two years ago. Not sure how that ended up or how things stand today between McIntyre and Watts on TOBS, but the paper is still in limbo.
    http://climateaudit.org/2012/07/31/surface-stations/

• McIntyre has recently gone on Paul Homewood’s page and said things to try to lead him in the right direction on TOBS, ending with:
“I got results that look similar to those shown in the NOAA graphic. I think that most of your comments and allegations are incorrect and should be withdrawn.”
Homewood, you may recall, is one of those trying to criticize the NOAA temperature record.
      You can read through McIntyre’s comments to Homewood here
      http://notalotofpeopleknowthat.wordpress.com/2014/07/01/temperature-adjustments-in-alabama-2/
      Other comments include
      “I certainly do not think that the evidence that you have adduced warrants the somewhat overheated rhetoric on this topic and urge you to dial back your language.”
      This is fairly strong coming from McIntyre, and these show his thinking within the last 4 days. A lot of skeptics listen to what McIntyre says, so maybe they will take this into account.

    • Steven Mosher

      Jim

Yes, it was seen as a major gaffe when they forgot to use the TOBS data.

Fixing it was a one-day job: just switch datasets.

Instead, Anthony, who doesn’t trust the metadata (except when he does), decided to use only those stations that have no TOBS changes (trusting the data he doesn’t trust).

      We will see what results come out.

117. Judith, I and others I’m sure would like to do a more formal rebuttal of Zeke’s approach, if allowed, and only if well written and argued.
Mine would focus on 3 key points:
the first, the adjustment of past temperatures from current ones;
the second, a possible flaw in TOBS as used;
the third, the number of what are referred to as Zombie stations.
1. Zeke says this is incremental and unavoidable, using current temperatures as the best guide and adjusting backwards:
“NCDC assumes that the current set of instruments recording temperature is accurate, so any time of observation changes or PHA-adjustments are done relative to current temperatures. Because breakpoints [TOBS] are detected through pair-wise comparisons, new data coming in may SLIGHTLY change the magnitude of recent adjustments by providing a more comprehensive difference series between neighboring stations.

When breakpoints are removed, the entire record prior to the breakpoint is adjusted up or down depending on the size and direction of the breakpoint. This means that slight modifications of recent breakpoints …”
The incremental changes add up to WHOPPING changes of over 1.5 degrees over 100 years to past records, and 1.0 degree to 1930 records. Zeke says the TOBS changes at the actual times are only in the range of 0.2 to 0.25 degrees. This would mean a cumulative change of 1.3 degrees colder in the distant past, on his figures, everywhere.
Note he is only technically right to say this “will impact all past temperatures at the station in question through a constant offset.”
But he is not changing the past by just 0.2 degrees. Each new breakpoint alters all the past TOBS changes, which causes the massive up-to-1.5-degree change in only 100 years.

    • Steven Mosher

      “The first of adjustment of past temperatures from current ones.
      The second of a possible flaw in TOBS as used.
      The third on the number of what are referred to a Zombie stations”

      I would suggest 3 different papers.

      Start with TOBS.

      Focus is better.

118. Continuing the rebuttal outline above (same three key points):
2. TOBS and break adjustments are made on stations which do not have data taken at the correct time.
The process is automated in the PHA.
Infilling is done on stations missing data, i.e. data not at the correct time. Zombie stations have made-up data, i.e. not at the correct time.
This means that potentially half the 1218 stations, the zombies and the ones missing data, have an automatic cooling of the past done every day, compounding the alterations to past temperature levels.
This should not be allowed to happen.
Once a TOBS change has been made in the past (e.g. 1900 should have been 0.2 warmer), that altered estimate should stay forever and not be affected by future changes.

  119. The Models are wrong and the stop/pause is the cause. How many statistical angels can dance on the head of a pin? Man made CO2 will not cause cAGW, no matter how you re-arrange those statistical chairs on the cAGW Titanic’s deck.

    • Cult members are reassuring themselves by an endless repetition of their talking points to each other (like “the models are wrong”, “stop/pause”, “fraud/lies”).

120. Finally got the TOB papers. Concepts make sense. A lot of loosey-goosey quadrature is a concern, which could add pernicious bias, but I can’t say which way it might go. More study warranted…

• And the next question will be how the 1958-64 training data performs through all years.
By quadrature I’m referencing the tendency to ‘average at will’ without much consideration of the error introduced…
If I understand right, with the fit done the TOB is just a simple function? Is it available? Unless the calendar stuff is a pain…

  121. Wow, the number of oxen this post has Gored. 500 comments in 12 hours.

  122. The post comes at a good time. Regardless of where you stand — skeptic or convinced — we all need to see the raw data.

    I imagine a useful table would contain fields like:
station_id,
lat,
long,
elevation,
created_timestamp,
modified_ts,
temperature_reading_ts,
temperature

And another table for adjustments (a concrete sketch of both tables appears at the end of this comment):
created_ts,
adjusted_temp,
adjustment_reason_code,
adjustment_reason_text,
station_id

Maybe someone with access can populate a SQL db and share it, or dump a flat file of every station reading ever taken and stick it on DropBox.

    The bad news is there could be millions of records. The good news is it’s not 1980 anymore and we have database systems that process records in the billions.

    When we have the data, we can easily do clever things like produce infographics of a station whose data was adjusted for TOB, how many times it was adjusted, how much it was adjusted, etc.

    I assume the adjustments are entirely necessary because I choose to believe that folks steeped in this stuff every day know what they’re doing. However, I’d still like to see the raw data at the most granular level possible with all applicable metadata.
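One possible concrete rendering of those two tables, as a sketch (the column names are the ones proposed in this comment, not anything NCDC actually publishes):

```python
import sqlite3

# In-memory database just to demonstrate the schema; point this at a
# file to make it persistent.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE station_reading (
    station_id             TEXT,
    lat                    REAL,
    long                   REAL,
    elevation              REAL,
    created_timestamp      TEXT,
    modified_ts            TEXT,
    temperature_reading_ts TEXT,
    temperature            REAL
);
CREATE TABLE adjustment (
    created_ts             TEXT,
    adjusted_temp          REAL,
    adjustment_reason_code TEXT,
    adjustment_reason_text TEXT,
    station_id             TEXT
);
""")
con.commit()
```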

    • I may have found the data:
      http://cdiac.ornl.gov/ftp/ushcn_daily/

      There’s also a slick web interface for pulling data and basic reporting:
      http://cdiac.esd.ornl.gov/epubs/ndp/ushcn/ushcn_map_interface.html

I pulled the daily data for Alabama, imported it into Excel, and have data for almost every day of every month from 1926 up to 2012. Awesome.

I don’t see the actual vs. adjusted temps data, only the end result. If these flat files containing state data were updated daily, storing “yesterday’s” data and checking it against “today’s” data would be easy. Unfortunately the files’ last-mod dates are Feb 27, 2013.

I’m a climate science novice and more of a listener/reader than a contributor (to the science, not to AGW ;-). I encourage anyone interested enough to form an opinion on AGW to pull the source data used to “prove” it in the USA and do some basic analysis on their own.

• I noticed that all TMAX and TMIN records for Alabama are rounded to whole numbers. Why is this? Since we’re talking about climate-change anomalies of fractions of a degree, I expected decimal places. What am I not thinking about correctly?

    • Steven Mosher

go to our site.
download the data,
or get the links to all 14 sources.

and for God’s sake don’t put it in SQL.

the metadata, yeah, we have that in MySQL.

But if you want to put time series with incomplete data into a normalized table, be my guest. That’s gonna be ugly.

      • David Springer

        The author imported into Excel. That’s not a database. Not sequel. Get a clue.

      • A flattened warehouse structure that can be sliced and diced by a BI system may work better.

        Thank you for the links!

      • WebHubTelescope


David Springer | July 10, 2014 at 10:22 am |
“The author imported into Excel. That’s not a database. Not sequel. Get a clue.”

The guy said he wanted an “SQL db”. Springer is wrong on cue.

    • If you follow the URL in my name, I have sql, reports (in csv), and where to get the raw data I used.

123. Statistics is probably not one of my strong points :-)
I would suggest the following for the US, UK, and other countries with established temperature records:
1. Provide anomaly graphs of the unadjusted readings, min and max.
2. Ditto the above for readings taken at the same time of day, for stations which have not been moved and where UHI has little influence.
3. Ditto the above where UHI has a lot of influence.
Then take a look?

124. A good post, fair comments by Matthew R Marler, too many repetitive comments from certain parties; eventually I just skipped to the end.

One post by F Leanme asked about stations in, e.g., Russia. I don’t offer this as scientific evidence, but there was a strange yet intense and gripping Russian film in 2010 called “How I Ended This Summer,” set in a very remote two-man weather station. The one “ending” (doing an out-of-uni project to finalise his degree) got fed up with battling out in the weather to read instrument values, and made them up or repeated them. The long-term observer got pretty wild when he found out. Worth a look if you’re interested in variety in weather stations.

    • In my youth I worked casually for a government utility. We were supposed to keep records of public attendance, though nobody ever wanted to define what constituted attendance, nor was it easy to keep count. So we made it up, often days late. Even the conscientious people were making it up, since they had no guidelines.

      We made it up…for years! Yet the records became official figures somewhere and were used in budgets, policies, department and government politics, council split-ups, media releases etc.

      Hey, maybe you were one of the policy makers, Faustino. If so, sorry about that.

    • nottawa rafter

I am not asserting anything about individual actions, but when any system relies on thousands of individuals to perform a task, the possibility of errors, for whatever reason, has to be taken into consideration. Regardless of how simple the instructions and the task, some humans will screw it up.

  125. The Daily Mail (second highest selling newspaper in the UK) today says:
    “American ‘climate change experts’ have been exposed for fiddling temperature records to make it appear the past was colder than it actually was.”

    • Matthew R Marler

      Paul Matthews: “American ‘climate change experts’ have been exposed for fiddling temperature records to make it appear the past was colder than it actually was.”

      Perhaps now you can understand why the Daily Mail was wrong to publish that.

  126. son of mulder

    “Adjustments have a big effect on temperature trends in the U.S., and a modest effect on global land trends.”

Why is this? Have practices not changed in other parts of the world, e.g. replacing LiGs with MMTS and changing times of observation? Either way, shouldn’t the rest of the world’s historic data also be reduced by amounts similar to the US’s, as LiGs and reading practices would have been similar in the past? At least to make US and rest-of-world measurements like for like.

Then, to keep history consistent, wouldn’t the 0.4 deg C reduction be expected to ripple back through time to the Middle Ages and earlier, because the calibration of proxies would have been against temperatures under the unreduced regime? Little Ice Age 0.4 deg cooler, MWP 0.4 deg cooler.

    What about adjustments required to the Central England record of a similar type?

That much-cooler history would then have to be squared against the qualitative descriptions of the times. Has that work been done?

    • “Why is this, have not practices changed in other parts of the world eg replacing LIGs with MMTS and changing times of observations?”
      MMTS is relatively small. TOBS is an issue in US because COOPs were mostly volunteers, and if they wanted to change they could (by agreement). Elsewhere observers were mostly employees and observed as directed.

      • son of mulder

        ” Elsewhere observers were mostly employees and observed as directed.”

Has this been quantified formally, so we know it is less of an issue, as opposed to an assumption? Is it fair to assume, as it looks from the global 5-year-smooth graph above, that the global past has been reduced by about 0.25 deg C? And as the US is 5% of the globe, only 0.4/20 = 0.02 deg C of the global reduction is due to the US; is the rest, i.e. 0.23 deg C, down to global LiG-to-MMTS changes, or are there other reasons why the global past is down?

      • “And as US is 5% of global only 0.4/20=0.02 deg C of the global reduction is due to US and the rest ie 0.23 deg C is down to global LIG to MMTS or is there other reasons why global is down?.”

I have numbers on that here. It depends on how you average. On a simple average by stations reporting, the US can be 20% or more.

But the key graph is Zeke’s Fig 1. Using unadjusted or adjusted data for the globe gives essentially the same result for the last 50 years or so.

      • son of mulder

        “On simple average by stations reporting, US can be 20% or more.”

Surely it must be area-weighted in the global picture. So you haven’t answered why the global adjustment down, 60-plus years ago, was around 0.25 deg C. What was the cause? Anyone?

127. Zeke, regardless of how you explain it, the thing that outsiders find hard to accept is that the supposed errors don’t balance out as would be expected from normal data collection and especially of TOBS. Why do all adjustments increase the trend? It makes sense only if the adjusters systematically ignore any warming biases. In fact, due to the well-observed increase in populations around the measuring sites (UHI), we would really expect adjustments to go the other way.

Hence we don’t get the overall impression that the adjusters are making things better, just warmer: a zero trend for the US has effectively been converted into a warming trend purely by adjustments. Everyone should be naturally skeptical of that! Sure, that doesn’t affect the global temperature much, but it does influence policy in the US. And then other purely-warming adjustments are further added to the global trend. It just smells really bad! Nobody would care if we weren’t jeopardizing our future by ditching old energy sources, before we have decent replacements, based on these numbers and even more iffy models.

Like others here, I’ll assume good faith again when I hear more critical voices from within the climate community about those in their ranks who just make stuff up, call it irrefutable fact, and then denigrate anyone who legitimately disagrees, from a pretence of highly dubious moral superiority. When we are truly worried about the cure being far worse than the putative disease, it really grates to be called childish names. When scientists can debate like adults, they might regain my respect. For now, though, they seem to be the enemies of industrial progress and hence the enemies of prosperity.

    • Steven Mosher

      “Zeke, regardless of how you explain it, the thing that outsiders find hard to accept is that the supposed errors don’t balance out as would be expected from normal data collection and especially of TOBS. ”

Then you have not listened to the explanation or read the papers.

The errors WOULD balance out if the time-of-observation changes were random.

      But the change to TOBS is not random. See figure 3.

For example: there are 24 hours in the day.
If the stations had observation times that were uniformly distributed over these 24 hours, and then you changed the TOB randomly, THEN you would expect the biases to sum to zero.

      BUT that is not what you have. See Figure 3.
If you had, for example, all the stations reporting at NOON, and then they ALL switched to morning, then you DON’T expect the change to sum to zero.

So the premise you guys have is wrong from the start. The stations’ change in observation time is NOT random. It is highly skewed. As a result the bias will be in one direction; or rather, you should not be surprised to see that it tends to be in one direction more than another.
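To see the mechanism in a toy setting, here is a minimal sketch: a synthetic sine diurnal cycle plus day-to-day weather noise, with the min/max thermometer reset at different hours. This is an illustration only, not the actual NCDC procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days = 5000
hours = np.arange(n_days * 24)

# Synthetic hourly temperatures: sine diurnal cycle peaking at 15h,
# plus day-to-day weather noise (with no noise there is no TOB bias).
diurnal = 5.0 * np.sin(2 * np.pi * ((hours % 24) - 9) / 24)
weather = np.repeat(rng.normal(0.0, 3.0, n_days), 24)
temps = 15.0 + diurnal + weather

def minmax_mean(obs_hour):
    """Mean of (Tmax+Tmin)/2 when each 'day' is a 24 h window cut at obs_hour."""
    days = temps[obs_hour: obs_hour + (n_days - 1) * 24].reshape(-1, 24)
    return 0.5 * (days.max(axis=1) + days.min(axis=1)).mean()

for h, label in [(0, "midnight"), (7, "morning"), (17, "afternoon")]:
    print(f"reset at {label:9s}: {minmax_mean(h):6.2f} C")
```

An afternoon reset double-counts hot afternoons into two successive “days” and reads warm; a morning reset double-counts cold mornings and reads cool; a midnight reset is close to unbiased here. The sign and size of the drift depend entirely on where the resets sit in the diurnal cycle, which is why a network-wide shift from afternoon to morning observation produces a systematic bias rather than a cancelling one.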

128. Like others here, I’ll assume good faith again when I hear more critical voices from within the climate community about those in their ranks who just make stuff up, call it irrefutable fact, and then denigrate anyone who legitimately disagrees, from a pretence of highly dubious moral superiority.

    Should they lie to agree better with your prejudices?

It’s funny to see how impossible it is for many to accept the obvious truth about the analysis of the instrumental temperature data.

• I don’t have prejudices, I observe! I was merely explaining why there is continued skepticism despite all explanations. As for lying, I have observed that many scientists seem to have no difficulty with it when they claim, without a shred of evidence, supportive modeling, data, or often even theory, such things as: extreme weather is getting worse or is linked to CO2; wet areas will get wetter and dry areas drier; the ocean swallowed the ‘missing heat’; using a proxy upside down doesn’t matter; the models are still adequate for policy even after such a huge divergence from reality; coral die-back is due to manmade warming rather than fishing; all warming must be bad rather than only beyond a certain threshold; etc., etc., etc.

As for obvious truth: when I have seen every single hockey stick graph turn out to be phoney, and every adjustment make the trend warmer, then there is no such thing as obvious truth. If you accept there is, despite the reservations (above) of even the people who compile these graphs, then you are the prejudiced one here! As I said, none of this trivia would even matter if policy, and generally bad policy at that, were not being based upon it.

My only particular bugbear here is with the TOBS adjustment because it makes zero sense, and contrary to some statements made above, even Karl’s paper, from which this adjustment is derived, admits it is largely guesswork. Rather than guess, I’d have left it alone, especially since it makes little difference to the global temperature anyway! But adding just that adjustment changes which was warmer, the 1930s or the present day: quite important, then!

      • verytallguy

        My only particular bugbear here is with the TOBS adjustment because it makes zero sense

        Have a read of the following, report back afterwards?

        there is a bias, and it’s a scientific duty to estimate and allow for its effect. The objectors want to say it is zero. That’s an estimate, baseless and bad. We can do much better.

        http://moyhu.blogspot.co.uk/2014/06/tobs-nailed.html

• Nick did nail that one. I do wish people would attempt to understand the issues before making baseless accusations of lying. It isn’t a particularly difficult thing to grasp, so this speaks either to intelligence (which I doubt) or to a lack of honesty on the part of the accuser (which I do suspect).

      • Steven Mosher

“I don’t have prejudices, I observe!”

Your first prejudice is your belief that you have none.
Witness your inability to understand Figure 3.

You DIDN’T observe. You are prejudiced into thinking that you do, but you don’t actually observe. Neither did you understand what was written.
And I bet you didn’t observe the actual data (made available) or the code.

You chose to stop observing before you finished the job. Why?
Because you have a prejudice.

  129. Weighting

I understand, of course, that if you’re trying to give a global average then you need some degree of weighting, if you wish to avoid the bias effects of spatial clustering.

Some of the gridding methods employed by some groups only work at the grid resolution (such as box averaging), since each cell (box) is victim to the underlying clustering within it. Change the box sizes or reference position and you’ll get a different result (see the toy illustration at the end of this comment).

Other methods, such as kriging, deal with clustering implicitly and are therefore better at this sort of thing. However, you must first model the underlying structure of the data. Kriging is then applied to the resulting residuals (residual = observation - spatial-model prediction) before the structural model is added back into the gridded series. If your spatial model is based on the raw station positions, then any benefit from the kriging applies only to the random component! The underlying structural model is still more sensitive toward stations in poorly sampled regions.
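A toy illustration of the box-size/position sensitivity (a sketch with a made-up smooth field and a deliberately clustered network; nothing here is any group’s actual method):

```python
import numpy as np

rng = np.random.default_rng(1)

# Clustered network: 50 stations in one corner, 5 spread over a 10x10 domain.
cluster = rng.uniform(0, 2, size=(50, 2))
sparse = rng.uniform(0, 10, size=(5, 2))
pts = np.vstack([cluster, sparse])
# A smooth made-up "temperature field": warm where the cluster sits.
vals = 20.0 - 0.8 * pts.sum(axis=1)

def box_average(pts, vals, box, origin=0.0):
    """Average station values within each box, then average the box means."""
    idx = np.floor((pts - origin) / box).astype(int)
    means = {}
    for key, v in zip(map(tuple, idx), vals):
        means.setdefault(key, []).append(v)
    return np.mean([np.mean(v) for v in means.values()])

print("plain station mean :", vals.mean())   # dominated by the warm cluster
print("2x2 boxes          :", box_average(pts, vals, 2.0))
print("5x5 boxes          :", box_average(pts, vals, 5.0))
print("5x5 boxes, shifted :", box_average(pts, vals, 5.0, origin=-1.0))
```

The plain mean is pulled toward the cluster; the box averages partially correct that, but the answer moves again when the box size or origin changes, which is exactly the sensitivity described above.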

  130. peter azlac

    Zeke Hausfather | July 7, 2014 at 11:18 am | says:

    “Ooh, can I try? :-p

If you average absolutes and the composition of the network is changing over time you will be absolutely wrong because the change in underlying climatology will swamp any signal you are looking for.”

That is the real problem with BEST and the other series: the composition of the network changes over time, but it does not need to. I have seen it stated (Mosher?) that only 26 stations are required to reproduce the BEST and other global temperature series, and have certainly seen claims that the CET record’s five-year smooth is a good proxy for global temperature anomalies:
http://www.metoffice.gov.uk/hadobs/hadcet/ParkerHorton_CET_IJOC_2005.pdf

CET is made up of only four stations at a time, though over the period 1878 to date there have been seven involved for differing periods. It is difficult to understand why, as Oxford (Radcliffe), Stonyhurst, Ringway, Rothamsted and Ross-on-Wye have continuous records over the complete period, and whilst Ringway may have been removed because of urbanization and development of the airport, Rothamsted, which remains, has been subject to the same effects. It should be noted that whilst BEST claims no discernible effect of UHI on its record, the UK Met Office acknowledges corrections of up to 1.5 C for this, largely to the minimum temperatures, where most global warming is found.

If CET compiled in this way can be used as a global proxy, then why not use the sixty-plus equally long-term stations with data from pre-1880 to 2013, from Europe, Russia, China, Japan, Australia, New Zealand, South America and Canada, to compile a global series without all the statistical manipulations of BEST? Instead we see a number of these long-term series ‘corrected’ with data from adjacent stations dating only from the 1960s onwards. For example, Berlin Dahlem, with continuous data from 1769, and Berlin Tempelhof, with data from 1701, are ‘corrected’ using data from the airports at Tegel and Schonefeld, dating from 1953/63, where there was heavy military and civilian air traffic, and from Alexanderplatz, from 1991, all of which introduce a large UHI effect that shows in the ‘corrected’ data as +0.12 C. That is a serious underestimate of the UHI effect in my opinion, as Tempelhof (an airport) already shows an increase over Dahlem (semi-rural) of 0.15 C. There are other examples of this type of ‘correction’ with very local data, and the other 10,000 BEST ‘stations/scalpel bits’ only make it worse, no matter what statistical tricks are used.

In the days before ‘post-normal science’, when hypotheses were falsified or not with real empirical data, it was expected that if one wanted to determine a change in some factor (for example, the response of corn yields to different rates or types of fertiliser) the test was done on the same soil type in the same years. The same should be true for climate change: we should evaluate the changes in temperature (not anomalies) over time at the same stations, and present the data as a spaghetti graph showing any differing trends, and not assume that regional climates or climates in gridded areas are the same, which they are not, as is obvious from the climate zones that exist, or from microclimates due to changes in precipitation, land use, etc. Most, if not all, of these long-term stations are run by scientists, and metadata must exist showing any changes and their results, so we do not need to guess or use scalpeling, kriging, homogenisation, and certainly not gridding, to arrive at a completely useless global value that does not allow any meaningful analysis of responses to solar inputs, ocean cycles, etc.

Note that I am not saying that warming has not taken place, just that it is not global. BEST admits that 30% of the stations have cooled, and that is true of several of these long-term stations. We should concentrate on finding a useful set of temperature trends in regional and zonal areas that reflect the impacts of climate change, as for example the Sahel, and understand the true reasons, without assuming carbon dioxide to be the culprit.

    • Steven Mosher

“That is the real problem with BEST and the other series: the composition of the network changes over time, but it does not need to. I have seen it stated (Mosher?) that only 26 stations are required to reproduce the BEST and other global temperature series, and have certainly seen claims that the CET record’s five-year smooth is a good proxy for global temperature anomalies.”

err, no.

Shen’s paper on this question suggests that 60 OPTIMALLY placed stations will suffice.

We don’t have 60 optimally placed. But playing around with this over the years, you do get good answers at 60, better at 100, even better with 300, and so forth.

So, first start with a definition of what is “good enough”.
Century trend to ±0.1 C? Century trend to ±0.15 C?

Start with your definition of what is “good enough”, and then, given the data, the answer can be computed.

Theoretically (see Shen) it wouldn’t be fewer than 60.

  131. Ian Blanchard

    As a bit of an aside, it is probably worth noting that the US has probably one of the most reliable historical (raw) temperature records available – large country with lots of measurements in rural areas, technologically advanced and reasonably to very wealthy throughout its history (so with good equipment maintenance relative to most other areas) and probably most importantly, no conflicts on its own soil since the 1860s, so there should generally be a long archive of records and stations.

    Compare with continental Europe, which was majorly disrupted by two world wars in the 20th century (so destruction of many archival documents) and even worse the ‘developing’ world, where equipment and record keeping are probably the biggest drawbacks to a reliable extended historic record.

I look forward to Zeke’s post on TOBS. I think I understand the concepts (principally that changing the time of readings influences the risk of double-counting extreme values), but intuitively I suspect the size of the adjustment is too large. How frequently do these double-counting issues actually come about for mid-morning or mid-evening measurements?

  132. What would it look like measured in Fahrenheit? Just for fun.

133. Joe D’Aleo had a paper which showed that if they didn’t make these dubious TOBS adjustments (which by themselves produce the warming trend), then all the solar reconstructions match perfectly to the US data as well as to the Arctic data, i.e. the only two ‘good’ datasets we have. Food for thought!

    • Steven Mosher

Joe D’Aleo had a paper?

No, he wrote a paid-for piece.
And he was wrong.
And his co-author uses the TOB adjustment.

Nice appeal to an uncited, unreviewed, wrong “paper” whose co-author does not practice what that paper preaches.

Excellent.

134. I like what Zeke and Mosh are doing. I think they are trying very hard at doing very tough, data-intensive work in a rigorous and honest manner. I don’t think their motives should be questioned.

If they have an unconscious bias, join the crowd; we all do. That can never be helped, and I don’t think any unconscious bias they might have is affecting their analysis, as far as I can tell. I recall that Mosh posted several months ago that in his view El Nino would cause global temps to set a record this year. The latest evidence is that El Nino is going bust. So maybe Mosh has an unconscious bias about temps. So what? Even if he does, if you think it might affect his analysis, show that the analysis is wrong in some way. It doesn’t look that way to me.

    I do think that Mosh goes a little too lightly on the issue of why the climate change community basically reacted with silence over Climategate. Yes, some scientists probably had their noses deep in their work and were only vaguely aware, at best. But many such scientists were only too aware, and with very few exceptions (thanks, Judy), they either did nothing, or in some cases attacked the (skeptical) messengers.

    My take is that climate science has been mostly politics for the last 15 years. It is warfare, tribal warfare, and it isn’t about the science, it is about the interpretation of science and whether you are on the right team. If you are on the climate change team, you defend your team, you don’t give the other side ammunition, as Mann (among others) famously said. If a university’s research depends on government money, the university’s PR department makes sure there is some dire implication in their press releases about their research findings. If an individual scientist thinks that climategate was a scientific fraud, it will do his career, and funding, no good to say so. If you are in the government, and the government has made it clear what its position is, you don’t rock the boat.

So if Mosh has some unconscious biases, and if they affect his perception of things (as my unconscious biases no doubt do as well), it doesn’t play out in his rigorous assessment of the temperature record. But it may play out in what seems to me to be a bit of a lack of recognition that the mainstream science community has multiple and converging non-scientific reasons to keep its mouth shut about climategate.

    • Steven Mosher

      “I do think that Mosh goes a little too lightly on the issue of why the climate change community basically reacted with silence over Climategate.”

I think they remained silent for some of the same reasons skeptics remain silent when Goddard makes mistakes, or when Scafetta refused to release code, or when denizens here say stupid stuff.

I think they get defensive for the same reason commenters at WUWT or JoNova get defensive.

      They are humans.

As an experiment (I love doing these), go criticize someone on your own team. Watch what happens. Go criticize a friend’s science. See what happens.
Willis and I are friends, but to people on the outside we look like enemies.

Now, people have this idealized vision of the scientist. He’s the objective one, the one who operates with no allegiance; well, his allegiance is to the truth. Sorry, I’m not buying it. He’s a human. He has interests and feelings and biases and quirks and blind spots.

      So what do we do.

I do some science. I show you; I give you my data. I show you; I give you my method. That allows you to CONTROL for the researcher BIAS.

      Your job is to FIND and DEMONSTRATE the ACTUAL BIAS.

You don’t DO this by arguing.
You don’t do this by questioning.
You don’t do this by MERELY doubting.

      You DO this by actually DOING THE WORK of DEMONSTRATING the bias
      with data or with a method.

      Until you can SHOW the BIAS, you have nothing but PHILOSOPHICAL objections.

Science ain’t philosophy.

• Mosh, I have mostly criticized commenters, or articles, at WUWT. Don’t confuse me with other people. Below, see my latest comments on the thread that questions whether disposal wells have caused earthquakes in Oklahoma. Perhaps you are thinking of someone else.

That said, as someone who has been on the receiving end of denigration when I responded with science to a friend’s view that sea levels would be 3 feet higher by 2060, I stand by my view that a lot of the failure of the climate community to address climategate is because of tribalism: don’t give ammunition to the opposition. I reported on that incident here about 10 months ago.

        ——
        Here are my latest comments on WUWT, just so you will know.

        John says:

        July 5, 2014 at 7:54 am

        We need to distinguish between earthquakes caused by fracking, and those caused by high volume disposal of liquid waste products. The largest earthquakes by far are those caused by disposal. There has been an earthquake as high as 5.7 on the Richter scale caused by disposal wells in Oklahoma. That big, and you can have several thousand dollars of damage to your house. The ones caused by actual fracking are usually between 1 and 2, barely noticeable if you are right on top. Big difference.

        If wastewater was recycled more, there would be much less need for disposal wells. And places like Oklahoma and Texas often don’t have all that much water to spare. If the industry wants to avoid a PR disaster the first time someone is killed by an earthquake caused by disposal, they have to recycle water more. It will cost a bit more, but it will be worth it.

Face it, none of us would want a magnitude 5 earthquake near our house. Fracking is very good for the US. It makes tons of tax money for cash-starved states (Pennsylvania in particular), provides many jobs, and reduces our imports. The industry can afford to recycle water a lot more to reduce the bigger earthquakes caused by disposal wells.

        John says:

        July 5, 2014 at 7:56 am

        Here is the link for the 5.7 earthquake near Prague, Oklahoma caused by disposal wells, not by fracking:

        http://www.reuters.com/article/2014/03/11/energy-earthquake-oklahoma-idUSL2N0M80SP20140311

      • Matthew R Marler

        Steven Mosher: As an experiment ( I love doing these ) go criticize someone on your own team.

        On this topic, you and I are on the same team. We were also on the same team when this topic (or a related topic) was discussed at WUWT.

        Since I criticized you, fairly I think, let me say that in reading this thread I am favorably impressed by your willingness to answer the same questions over and over again.

        Also, you spelled my name correctly, which I appreciate. I think things like that make a favorable impression on those readers who never comment.

      • A C Osborn

I did demonstrate BIAS in BEST, and you agreed that BEST can’t handle island and coastal data.

It doesn’t matter how much you prove that the “maths” are good; the adjusted data does not reflect reality. Instead of changing the past, which should be set in stone because it is what human beings experienced at the time, adjust the present to fit.

      • A C Osborn

Let me quote Mosher from a previous post about BEST:
“If you want to know what the temperature was, use THE RAW DATA.
If you want the best estimate, use the ESTIMATED FIELD.”

      • Mosh, please take another, closer look at my comment.

        I didn’t criticize the science that you and Zeke do, to the contrary I said I liked it. I didn’t criticize your data gathering or the way you handle it, or your results. Period.

        I said we all have UNCONSCIOUS biases, myself included. That shouldn’t be controversial.

I thought perhaps, from your prediction a couple of months ago that we would have a new temperature record this year because of El Nino, that you might have such a bias in terms of when temperatures would rise again; perhaps you think (consciously or unconsciously) that the pause will soon come to an end, and model forecasts in a few years’ time won’t look as bad as they do now. That was speculation. I didn’t say that such an unconscious bias, should this particular one exist, affected your science.

        I did think then, and do think now, that tribalism is a major reason why the climate change science community has not criticized the climategate emails and perpetrators: we can’t give ammunition to the other side, we can’t suggest to our funders that we aren’t fully committed. You and I may have to agree to disagree on this point. But even if we do disagree on this point, it isn’t a criticism of your science.

        So – please read my email a bit more carefully next time!

      • Skeptics didn’t produce a false record used to influence massive policy decisions that required correction by anyone with any pretense to morality and ethics. Skeptics hadn’t accepted enormous sums in research funding that was exposed as questionable. Skeptics didn’t have a stake in maintaining the integrity of the institutions of science.

        Big difference. Huge.

      • Steven Mosher

        AC you didn’t demonstrate bias

      • @ Matthew Marler

        Upthread you asked the following, which I never directly answered:

        “You are not advocating that the whole temperature record be ignored, are you? ”

        As justification for political action to ‘control climate change/control global warming/control climate weirding/control ACO2’ or for any other ‘climate policy’, that is exactly what I am advocating.

After reading Zeke’s explanation of the data processing (an excellent job, by the way, along with his follow-ups to other commenters), Mosh’s continuing efforts to educate us on BEST’s work, and a host of other data-related posts and commentary that have appeared here over the years, it is patently apparent that the historical data record is simply not able to support the conclusions being so heroically extracted from it. It lacks precision, it lacks geographic coverage, it lacks any semblance of QC, it lacks continuity, ad infinitum. And no amount of heroic adjusting, infilling, kriging, or correcting, no matter the ‘need’ for precision data, is going to convert historical temperature data into a database from which the monthly temperature of the Earth can be compared year to year with a precision that justifies press releases like the following:

        “The National Oceanic and Atmospheric Administration Monday said May’s average temperature on Earth of 15.54 C beat the old record set four years ago. In April, the globe tied the 2010 record for that month. Records go back to 1880. ”

        especially since the ‘record’ was broken by 0.02 C. Do you, Zeke, Mosh, or anyone else believe that the planetary temperature records going back to 1880, no matter how carefully massaged, can support the above as a statement of scientific fact?

        Scottish Sceptic made the following statement earlier: “From that experience I learnt that it was impossible to reliably measure the temperature of a glass slid about 1cm across to within 0.01C let alone an enclosure a few tens of cm.”

He is right; you can’t make a meaningful measurement of room temperature with 0.01 C precision, never mind the monthly or yearly surface temperature of the planet. Anyone who has ever tried to measure temperature knows it, and is instantly suspicious when faced with breathless headlines saying otherwise, especially when the headlines are based on century-old, hand-written data, collected from sub-optimally distributed locations using uncalibrated mercury thermometers by untrained observers, that has been heavily massaged by government-funded scientists and is cited as justification for political action by the politicians who provided the funding.

      • k scott denison

        Bob: +1000

      • Steven Mosher | July 8, 2014 at 5:08 pm |

        AC you didn’t demonstrate bias

        The Swansea final BEST trend is approximately 1.25 degrees, while the raw BEST data show approximately 0.75 degrees, so the BEST final BIAS is 0.5 degrees.

        Plus, the starting point of the trend in the final data is 1.0 degree higher than in the raw.

        Like you said: if you want the temperatures, use RAW; if you want BIASED climate-scientist modelled fantasy, use “Expected” values.

      • Matthew R Marler

        Bob Ludwick: “The National Oceanic and Atmospheric Administration Monday said May’s average temperature on Earth of 15.54 C beat the old record set four years ago. In April, the globe tied the 2010 record for that month. Records go back to 1880. ”

        I agree that some people are claiming more precision and accuracy for some of the estimates than is warranted.

        Sorry I took so long getting back to you, but I am trying to “cut down” on my intrusions.

    • Again Bob Ludwick +1000.

  135. It is always a challenge to sort through a climate paper and try to discover mathematically what is happening. I think I see it:
    Average monthly temp is a double integral over time (month) and space on a manifold (Earth) of a somewhat nasty function ( http://www.eol.ucar.edu/cgi-bin/weather.cgi?site=fl&fields=tdry&site=fl&units=metric&period=monthly e.g. for the time-varying part). A saving grace is that the integral is then divided by a month’s time and the surface area of the Earth.
    Now ideally one would have stations at a nice set of Gauss points in time and space. Instead we have some very sparse set of samples for T, unevenly spaced.
    So the approach is to reconstruct T with scattered spatial interpolation (gridding), and to use training data and the periodicity of T to reconstruct in time (TOBs). Then we integrate and divide to average.
    A problem I have with the TOBs papers is that the statement of the method is not clearly posed; the integration and interpolation are done simultaneously, and hence any sort of standard quadrature error estimation is unavailable.
    This is the challenge of interdisciplinary work. Math guys could help with this in a major way, but climate culture is too proud to involve them.

    There is surely a formal development of this problem from a statistician’s point of view as well. I would love to hear it and the associated (standard) error estimates.
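
    A minimal formal statement of the averaging problem sketched above (the notation here is illustrative, not drawn from any particular paper):

        \bar{T}_{\mathrm{month}} \;=\; \frac{1}{\Delta t \, A} \int_{t_0}^{t_0+\Delta t} \int_{S} T(\mathbf{x}, t) \, dA \, dt

    where S is the Earth’s surface, A its total area, and \Delta t one month. In practice the spatial integral becomes an area-weighted sum over grid cells filled by scattered interpolation, and the time integral becomes a sum over the available, unevenly timed observations – which is exactly where the TOBs question enters.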

  136. I think Zeke did an excellent job explaining how and why adjustments to temperature data were made. To me, it makes sense. The next questions are – given the questionable reliability of much of the raw data (especially the historical data), the gaps in coverage, and the number of adjustments that have been applied to the raw data, what is the confidence level that 1) a significant rise in temperature has been observed 2) that the trend is unprecedented 3) that the trend is accelerating 4) that any rise in temperature is directly attributable only to CO2 increases? Put another way, what is the confidence level that if we stopped burning fossil fuels tomorrow, we would see a decline in temperature and how long would it take for the decline to occur?

    Other questions I can think of that are not directly related to this post are 1) is a decline in temperature desirable 2) are increases in CO2 actually beneficial and 3) is now the time to impose legislation that will cripple our economy and limit our ability to adapt to severe weather events and changes in climate that will occur no matter what we do?

    • @ Barnes

      Thank you; have asked the same questions, and similar ones, often, and got no coherent answers. Maybe you’ll have better luck.

      • plutarchnet

        When the answerer is barraged with assertions of bad faith, personal attacks, vaporings of ‘you’re wrong’ — unsupported by evidence, ‘questions’ that arise from the questioner having not read the article they’re supposedly asking a question about, and so forth, it’s surprising that you get any answers at all.

        If you want answers to non-gutter questions, standing in the gutter isn’t a good place to ask from.

    • Barnes

      You asked an excellent set of questions.

      ‘… The next questions are – given the questionable reliability of much of the raw data (especially the historical data), the gaps in coverage, and the number of adjustments that have been applied to the raw data, what is the confidence level that 1) a significant rise in temperature has been observed 2) that the trend is unprecedented 3) that the trend is accelerating 4) that any rise in temperature is directly attributable only to CO2 increases? Put another way, what is the confidence level that if we stopped burning fossil fuels tomorrow, we would see a decline in temperature and how long would it take for the decline to occur?’

      —– ——

      I have looked at many historic sets of temperature readings and wrote about the difficulties with them in a previous article. The basic raw data – each individual temperature reading – is often more like a rough stone that cannot be turned into a useful and reliable record than a gold nugget which, carefully prepared, has some value. You certainly wouldn’t bet your house on their reliability to anything better than plus or minus half a degree C.

      In answer to your questions

      1) A rise in temperatures can be observed which, taken with other records, can be traced back some 350 years. The glaciers first started melting again around 1750.

      2) The trend is unprecedented in the last 50 years. However, our records are very short, and a global average is of dubious value as it disguises the regional nuances. In this context I would say the trend is likely to be similar to the ones going from the Dark Ages cold period into the MWP, and from the LIA into the modern warming period, so in human terms it is not unprecedented.

      3) Even the Met Office admits to the pause in land temperatures, so the trend can only be seen to be accelerating if it resumes its upward curve over the next 50 years. The MWP lasted 400 years; the modern warm period has so far run around 30 years at similar levels, with a hiatus, so it may well have a long time to run.

      4) CO2 must have an effect, but whether that effect tails off at 30 ppm, 300 ppm or much higher needs resolving. Looking at historic temperatures, CO2 appears to be one of many passengers on the climate coach but is not the driver.

      To answer your other questions: we cannot dial up a perfect temperature to order. This current warm period is very benign, and I would go with that as being more desirable than any of the others we can be confident of.

      It would take centuries before we saw any temperature decline (assuming CO2 to be responsible), even if we cut emissions today.

      tonyb

      • But this is all just your personal opinion, nothing more, isn’t it?

      • Barnes

        Further to my reply to you,

        It must be said this is a good post by Zeke. We must wait for the other two in order to be able to put it into context. In my reply I was not implying that Zeke or Mosh are in any way trying to pull the wool over our eyes. I also do not believe in hoaxes or conspiracy theories.

        However, much of climate science revolves around data that are more rough stones than potential gold nuggets.

        tonyb

      • Tony – thank you for your reply. Frankly, I ask those questions due in part to work of yours that I have read. If I recall correctly, your examination of historical temperature records shows that abrupt climate changes are more the norm than the exception, and that we don’t readily know why.

        I think the work that Zeke and Mosher/BEST are doing is valuable, but I question the fidelity of the data WRT making drastic policy decisions that will clearly have a negative impact on our economy and quality of life, and on our ability to help those that the left claim to care so much about – the poor. I am clearly on the side of the “deniers” and think we have a lot to learn before we can attribute changes in climate to anything beyond natural variability, with some minor influences by humans – and those influences include things other than just the burning of fossil fuels.

      • Tony – just saw your second post and agree. I don’t see anywhere that Zeke claims this post demonstrates anything beyond explaining how and why adjustments were made. I think that may be why Mosher plays the bad cop through much of this thread.

        However, the warmists (like FOMBS) will hyperventilate over the results, claiming proof of CAGW, and demand immediate and drastic action. Unfortunately, it’s not just the likes of FOMBS; it’s also too many of our political leaders, and virtually all of the MSM.

      • Steven Mosher

        Barnes is wise.
        Zeke is explaining what is done.
        For that mere action people attack his motives.
        Skeptics who demand attention to the data
        Attack the man.

      • > I think that may be why Mosher plays the bad cop through much of this thread.

        Some call it grooming.

    • Tonyb wrote: “However, much of climate science revolves around data that are more rough stones than potential gold nuggets.”

      Scientists like Zeke and others have made heroic efforts to extract the most reliable global warming signal from the inadequate data we have. They have polished your “rough stones”. Unfortunately, we are left with several mysteries: 1) What has happened at the average station (producing breakpoints about once a decade) that has caused it to report, on average, cooler temperatures after such events? Does station “maintenance” produce breakpoints? 2) Some stations must be biased warm by urban heat islands, but their influence on the global trend can’t be detected with any of the techniques available for separating urban and non-urban stations. How do we identify stations biased by UHI so we can prove they haven’t affected the global record? 3) What is the best way to present the uncertainty arising from re-processing historic data? The uncertainty in calculating a mean global temperature anomaly from homogenized data from thousands of stations is probably much smaller than the possibility of systematic errors from homogenization.

      • @ Frank

        ‘Scientists like Zeke and others have made heroic efforts to extract the most reliable global warming signal from the inadequate data we have.’

        You make the point that I and others have been trying to make for a long time, with no obvious success to date:

        The purpose of the thousands of man-years and billions of dollars that have been spent torturing the patently inadequate historical climate data we have has nothing to do with understanding how the Earth’s climate works.

        The purpose IS as you said: to ‘extract the most reliable global warming signal’, which is POSTULATED, not theorized, to exist; to certify that it is caused by ACO2; and to provide a laundry list of undesirable-to-catastrophic consequences which ARE befalling us (present tense) and which will continue and escalate unless political action is taken to drastically curb our use of fossil fuels.

        By the way, the latest ‘bad thing on the laundry list’ (never any ‘good things’) caused by CAGW is apparently this: “Climate change could lead to the extinction of redheads in Scotland, a DNA expert has claimed.”, which made headlines around the world. Instantly.

    • I don’t think anyone has put serious resources into the “what would happen if we stopped all CO2 emissions tomorrow” because it isn’t a serious question worth devoting time and money to answering. It isn’t going to happen, so what’s the point?

      On the other hand, many models are devoted to many different (and more realistic) emissions scenarios, and they are publicly available if you are really interested.

  137. Without clearly defining the impact of collective political groupthink, as opposed to the straw-man response contained in the article (“There is no conspiracy”), it’s difficult to communicate at all with AGW advocates, “believers” and followers. I appreciate Zeke’s post, but his minimization of the agenda-driven culture surrounding climate research reduces his credibility.

    Political bias requires no “conspiracy”. IRS, EPA, academia, the NY Times… NOAA… NCDC… do you seriously think that millions who generally “hope” the “evidence” fits their narratives don’t impact a result?

    Forget “conspiracy” and look at the total culture of climate “research” before such an arbitrary claim is made; “the books aren’t cooked”. It’s pretty clear a good section of the rank-and-file climate research community is at least sympathetic to the warming narrative. We should explore all the people involved, and their underlying political views, if they were or are involved in sensitive and abstract data “adjusting”. Disclosure builds confidence.

  138. What these adjustments boil down to is being able to produce a graphic that shows Warming. Without it, there is no Sciencey-Looking Climate Change Marketing to the masses. Hence the desperate Warmer defense.

    Andrew

  139. What it means is that we don’t have a temperature record.

    • It’s always been a simple-minded affair, surface temp records and relating them to “climate”. Most of the ocean isn’t measured, and the standards of even 20 years ago were very primitive, let alone the claims of ice cores and tree rings.

      One unfortunate outcome of these discussions is the false validation of surface temperature as the exclusive climate driver. To my mind most credible climate “scientists” would denounce this concept, but almost all go along for the ride. From all this noise people try to dictate “policy” and demand control over vast private and national interests.

    • True, in the sense that even though the average temperature reconstruction ‘makes sense’, there is zero formal error estimation of either the interpolation error of the global surface temperature reconstruction or the quadrature thereof; hence the uncertainty in the temperature record is completely unknown (save maybe sound extreme bounds that one could probably work out on a napkin).

    • @ rhhardin

      “What it means is that we don’t have a temperature record.”

      Of course we do, rh. And a very fine record it is, too.

      Otherwise, how would our temperature experts be able to justify press releases like this one?

      “Driven by exceptionally warm ocean waters, Earth smashed a record for heat in May and is likely to keep on breaking high temperature marks, experts say. The National Oceanic and Atmospheric Administration Monday said May’s average temperature on Earth of 15.54 C beat the old record set four years ago.”

      It is worth noting that the record that was ‘smashed’ was 15.52 C, proving that the temperature records are able to resolve year-to-year variations in the ‘monthly temperature of the Earth’ with hundredths-of-a-degree precision.

      If that doesn’t prove the quality of our temperature records, what would it take? After all, if NOAA didn’t have a pretty high level of confidence that their records were accurate to at least 10-millidegree precision, would they be reporting that a four-year-old record was ‘smashed’ by 20 millidegrees?

      • Bob

        On the last thread John Kennedy of the Met Office said that, due to uncertainties, the May 2014 figure was certainly in the top 10, but they could not be more certain than that.

        Perhaps NOAA, or more likely their press department, are more certain than the Met Office that the record has been ‘smashed’ by the huge amount cited.

        This certainty over fractions of a degree does no one any favours, does it?

        Tonyb

      • @ Tony

        “This certainty over fractions of a degree does no one any favours does it?”

        Well, it certainly does a favor to the CAGW cause, in that the headlines that reach ‘Average Joe’ did say that the record was smashed.

        It is also worth noting that neither the headlines nor the reporting in general mentioned the previous record, or by HOW MUCH it was smashed.

        It took a bit of digging for me to find the smashed record and confirm that the margin of smashing was actually 0.02 C.

      • Bob

        ‘Smashing’ certainly suggests a much, much bigger margin than can possibly have occurred. The Met Office are right to be circumspect. NOAA really ought to issue a clarification, or be accused of hubris.

        Good digging btw
        Tonyb

      • +0.36 degrees Fahrenheit, or close to it.

      • Let me try again: 0.036 degrees Fahrenheit; it was only off by one zero.
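
        For the record, the arithmetic behind that correction: a temperature difference converts by the 9/5 factor alone, with no 32-degree offset, so 0.02 °C × 9/5 = 0.036 °F.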

    • Think of all the space they saved though.

  140. … under the banner of so-called “climate justice,” the U.N. is doing exactly the opposite. It is doing its best to hobble, hinder and obstruct development of the cheapest and most reliable sources of energy in the third world. ~Francis Menton

  141. “The large contribution of adjustments to century-scale U.S. temperature trends lends itself to an unfortunate narrative that ‘government bureaucrats are cooking the books’.”
    A point of view, surely, since when adjusted data is portrayed as the real temperature of the past, that is exactly cooking the books.
    “no grand conspiracy to artificially warm the earth”
    “I really have no clue why people keep harping on this ‘exact number of active real stations’ question when it’s trivial to answer”… No answer given, evah.

    So, to be clear:
    there were 1218 real stations (USHCN) in the late 1980s
    There are now approximately 609 original real stations left
    There are 870 total real stations
    There are 161 new real stations, all at airports or in cities
    There are 348 made-up stations and 161 selected new stations.

    You are using 348 made-up stations, infilling others that are not reporting,
    using an algorithm which puts past temperatures down, and passing it off as real historical data.
    Plus you say you do not see why you have to label the crockery as an estimate, not real data, for people using the graph.
    Well, it is damn important when you present it as historical fact and let it be used to promote the idea of global warming due to CO2.
    It becomes a conspiracy when you refuse to acknowledge that the real past temperatures were at most 0.2 degrees C higher, and that at some sites only.
    When you cannot see the basic flaw you are perpetrating on everyone, not just yourself, you are not a conspirator, just badly leading yourself up a garden path.

  142. Zeke:

    This article has been very helpful to me. I am sure the next two will be helpful also.

    On the issue of pair-wise comparison (which I am jumping ahead of your post on) – would we still do that in a perfect future world?

    Say we have identical weather stations every square kilometer. They take readings every 5 minutes. They are constantly calibrated. We do this for 100 years.

    Is there still a reason to compare each station to its nearest 10 stations and adjust if the trend of one is different than the trend of the nearest 10?

    Is that not really just averaging the 10 nearest stations and spreading that average over their area?

    In a perfect future world – with these identical stations every kilometer, after 100 years of data gathering – it seems like we would want to retain the micro-climate data and just use it all as-is, rather than do the pair-wise homogenization step.

    What are your thoughts on this issue?

    Thanks in advance.
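
    A rough numerical sketch of the neighbor-comparison intuition asked about above (the ten-station setup and all the data are invented for illustration; this is the difference-series idea behind pairwise homogenization, not the actual NCDC algorithm):

        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(1950, 2010)

        # A shared regional climate signal: a slow trend plus weather noise.
        regional = 0.01 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

        # Ten hypothetical neighbor stations, each seeing the regional signal
        # plus its own local noise.
        neighbors = np.array([regional + rng.normal(0.0, 0.05, years.size)
                              for _ in range(10)])

        # A target station with an artificial +0.5 C step in 1980 (e.g. a station move).
        target = regional + rng.normal(0.0, 0.05, years.size)
        target[years >= 1980] += 0.5

        # Differencing against the neighbor mean cancels the shared signal,
        # so the local break stands out as a clean step of about 0.5 C.
        diff = target - neighbors.mean(axis=0)
        print(f"mean difference before 1980: {diff[years < 1980].mean():+.3f} C")
        print(f"mean difference after  1980: {diff[years >= 1980].mean():+.3f} C")

    Note that this is not simply averaging the ten neighbors and spreading the result over their area: the difference series is used only to find and remove the step, so the target station keeps its own micro-climate variability.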

  143. Changing the Past? by Zeke
    ” The alternative to this would be to assume that the original data is accurate,
    and adjusted any new data relative to the old data (e.g. adjust everything
    in front of breakpoints rather than behind them). From the perspective of
    calculating trends over time, these two approaches are identical, and its
    not clear that there is necessarily a preferred option.”

    Go for it, Zeke – the morally right approach.
    The correct scientific approach, and the past is left unchanged.
    Gee, I would even give you a one-off 0.2 degree TOBS adjustment to 1934 if you did this, and we could all go home.
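
    The quoted claim – that adjusting in front of breakpoints rather than behind them gives identical trends – is easy to check numerically. A minimal sketch with invented data:

        import numpy as np

        years = np.arange(1900, 2000)
        temps = 0.005 * (years - 1900) + np.sin(years / 7.0)  # synthetic series
        temps[years >= 1950] += 0.3  # artificial 0.3 C break at 1950

        # Option A: pull the pre-break past up to match the present segment.
        adjust_past = temps.copy()
        adjust_past[years < 1950] += 0.3

        # Option B: pull the post-break present down to match the past segment.
        adjust_present = temps.copy()
        adjust_present[years >= 1950] -= 0.3

        # The two adjusted series differ only by a constant 0.3 C offset,
        # so the fitted linear trend is identical to machine precision.
        slope_a = np.polyfit(years, adjust_past, 1)[0]
        slope_b = np.polyfit(years, adjust_present, 1)[0]
        print(f"trend, past adjusted:    {slope_a:.6f} C/yr")
        print(f"trend, present adjusted: {slope_b:.6f} C/yr")

    The choice matters only for presentation: option B leaves the published historical values fixed, which is what this comment is asking for.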

    • Angech:

      I suggested above that we could even do both approaches. Zeke said that would be possible – but might be confusing to some people. However, not to the readers of this blog (probably).

      I find the changing of the past (or at least the changing of the estimated past) very unsettling and would much prefer to see the present change relative to the past – or at least have the option to see that.

      I would even like to see some of the classic graphs shown with and without each of the four adjustments Zeke is talking about (and also both ways: changing the past relative to the present and changing the present relative to the past) – just to see the classic graph with the raw data, with the QA step, with the TOBS correction, and with the pair-wise homogenization. Since all those files exist, the scientists could easily show all four (or five) each time – just for fun! (and for people like me who just want to gauge the difference all of these processing steps make to the raw data).

      That would be the best of all worlds – the processed data is there for the scientists who like to work with the tweaked data (because it is probably the most accurate). However, we could see the difference between the processed data and the data at each stage of the processing, all the way back to raw.

      Then, with worldwide distribution of really good automated weather stations, after 100 years we would have really, really good data and might not need so much processing.
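
      Something like the stage-by-stage comparison suggested here could be assembled in a few lines, assuming per-stage exports existed (the file names and column layout below are hypothetical placeholders, not actual USHCN product names):

          import matplotlib.pyplot as plt
          import pandas as pd

          # Hypothetical per-stage exports, each assumed to hold columns: year, anomaly.
          stages = {
              "raw": "ushcn_raw.csv",
              "TOB-adjusted": "ushcn_tob.csv",
              "homogenized": "ushcn_final.csv",
          }

          for label, path in stages.items():
              df = pd.read_csv(path)
              plt.plot(df["year"], df["anomaly"], label=label)

          plt.xlabel("Year")
          plt.ylabel("Temperature anomaly (C)")
          plt.title("CONUS anomalies at each processing stage (illustrative)")
          plt.legend()
          plt.show()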

      • Steven Mosher

        “I suggested above that we could even do both approaches. Zeke said that would be possible – but might be confusing to some people. However, not to the readers of this blog (probably).”

        You have to be kidding

        If we changed the present, then people would say

        HEY! I was in Dallas; no way was it 14.2 C. They are changing the PRESENT.

        And then Goddard would do charts comparing the adjusted present to the ‘real’ present and argue that it’s colder now.

        There isn’t a single one of you who would call these people to task if Zeke made the change you suggest.

        Do you think angech or you would go around on blogs and dispel that nonsense?
        Not on your life.

        Do you think you’d go around and say, “Wait guys, I asked Zeke to do that”? Not on your life.

        You like to give busy work and then walk away.
        Seen it before. And I seriously doubt that either you or angech would clean up the mess such a change would cause.

      • Steven Mosher

        Hell, angech can’t even be bothered to count the dang stations for himself.

      • Mosher:

        Look – I merely suggested that, if both approaches are equivalent and a lot of people don’t like the past changing daily, it might be a good idea to add a file where the present is changed relative to the past.

        If you don’t like that, then ignore my suggestion.

        I think it is a good idea.

        I am not assigning busy work to anybody – merely dropping a suggestion in the suggestion box.

        Sure – people will complain no matter what is done.

        So what.

        The question is whether it would be better to show it both ways – I say YES.

        Mosher, Zeke said he could do it, not me.
        Read his introduction, 887 comments back,
        under “Changing the Past”.
        He said it was valid, kosher, doable, real.
        Do your own reading.
        I only said it was a very good idea.

  144. Meanwhile, the advocates of climate justice look to as their leaders the likes of Al Gore, who preach abstinence for others while living in multiple massive high-carbon-footprint mansions (http://www.snopes.com/politics/business/gorehome.asp ) [and] (http://www.huffingtonpost.com/2010/05/17/photos-al-goree-new-8875_n_579286.html ) and flying around the world on private jets. It is time for the advocates of climate justice to recognize the immorality of their campaign to keep the poor poor. ~Francis Menton


  145. My suggestion for your conference at NCAR would be to focus on a more formal and simple statement of the problem, and let the complicated methodologies spawn from that simpler framework. Either pick a standard deterministic reconstruction/interpolation/quadrature formalism or a statistical one.
    The problem with trying to sort through the literature on the uncertainty in this problem is that, without any simple formal statement of the problem and a well-known statistical or mathematical approach, it feels like death by “is this paper even relevant? it’s so frigging weirdly complicated”.

    Simplicity. Good luck!

  146. “Adjustments have a big effect on temperature trends in the U.S., and a modest effect on global land trends”

    Golly. So by changing the figures in a spreadsheet one can ACTUALLY CHANGE GLOBAL TEMPERATURES?

    Is there a Nobel prize for Applyde Magick? Because this man rates one.