Understanding adjustments to temperature data

by Zeke Hausfather

There has been much discussion of temperature adjustment of late in both climate blogs and in the media, but not much background on what specific adjustments are being made, why they are being made, and what effects they have. Adjustments have a big effect on temperature trends in the U.S., and a modest effect on global land trends. The large contribution of adjustments to century-scale U.S. temperature trends lends itself to an unfortunate narrative that “government bureaucrats are cooking the books”.

Figure 1. Global (left) and CONUS (right) homogenized and raw data from NCDC and Berkeley Earth. Series are aligned relative to 1990-2013 means. NCDC data is from GHCN v3.2 and USHCN v2.5 respectively.

Having worked with many of the scientists in question, I can say with certainty that there is no grand conspiracy to artificially warm the earth; rather, scientists are doing their best to interpret large datasets with numerous biases such as station moves, instrument changes, time of observation changes, urban heat island biases, and other so-called inhomogeneities that have occurred over the last 150 years. Their methods may not be perfect, and are certainly not immune from critical analysis, but that critical analysis should start out from a position of assuming good faith and with an understanding of what exactly has been done.

This will be the first post in a three-part series examining adjustments in temperature data, with a specific focus on U.S. land temperatures. This post will provide an overview of the adjustments done and their relative effect on temperatures. The second post will examine Time of Observation adjustments in more detail, using hourly data from the pristine U.S. Climate Reference Network (USCRN) to empirically demonstrate the potential bias introduced by different observation times. The final post will examine automated pairwise homogenization approaches in more detail, looking at how breakpoints are detected and how algorithms can be tested to ensure that they are equally effective at removing both cooling and warming biases.

Why Adjust Temperatures?

There are a number of folks who question the need for adjustments at all. Why not just use raw temperatures, they ask, since those are pure and unadulterated? The problem is that (with the exception of the newly created Climate Reference Network), there is really no such thing as a pure and unadulterated temperature record. Temperature stations in the U.S. are mainly operated by volunteer observers (the Cooperative Observer Network, or co-op stations for short). Many of these stations were set up in the late 1800s and early 1900s as part of a national network of weather stations, focused on measuring day-to-day changes in the weather rather than decadal-scale changes in the climate.

Figure 2. Documented time of observation changes and instrument changes by year in the co-op and USHCN station networks. Figure courtesy of Claude Williams (NCDC).

Nearly every single station in the network has been moved at least once over the last century, with many having 3 or more distinct moves. Most of the stations have changed from using liquid in glass thermometers (LiG) in Stevenson screens to electronic Minimum Maximum Temperature Systems (MMTS) or Automated Surface Observing Systems (ASOS). Observation times have shifted from afternoon to morning at most stations since 1960, as part of an effort by the National Weather Service to improve precipitation measurements.

All of these changes introduce (non-random) systematic biases into the network. For example, MMTS sensors tend to read maximum daily temperatures about 0.5 C colder than LiG thermometers at the same location. There is a very obvious cooling bias in the record associated with the conversion of most co-op stations from LiG to MMTS in the 1980s, and even folks deeply skeptical of the temperature network like Anthony Watts and his coauthors add an explicit correction for this in their paper.

Figure 3. Time of Observation over time in the USHCN network. Figure from Menne et al 2009.

Time of observation changes from afternoon to morning also can add a cooling bias of up to 0.5 C, affecting maximum and minimum temperatures similarly. The reasons why this occurs, how it is tested, and how we know that documented times of observation are correct (or not) will be discussed in detail in the subsequent post. There are also significant positive minimum temperature biases from urban heat islands that add a trend bias of up to 0.2 C nationwide to raw readings.

Because the biases are large and systematic, ignoring them is not a viable option. If some corrections to the data are necessary, there is a need for methods that make those corrections without introducing more bias than they remove.

What are the Adjustments?

Two independent groups, the National Climatic Data Center (NCDC) and Berkeley Earth (hereafter Berkeley), start with raw data and use differing methods to create a best estimate of global (and U.S.) temperatures. Other groups like NASA Goddard Institute for Space Studies (GISS) and the Climatic Research Unit at the University of East Anglia (CRU) take data from NCDC and other sources and perform additional adjustments, like GISS’s nightlight-based urban heat island corrections.

Figure 4. Diagram of processing steps for creating USHCN adjusted temperatures. Note that TAvg temperatures are calculated based on separately adjusted TMin and TMax temperatures.

This post will focus primarily on NCDC’s adjustments, as it is the official government agency tasked with determining U.S. (and global) temperatures. The figure below shows the four major adjustments (including quality control) performed on USHCN data, and their respective effect on the resulting mean temperatures.

Figure 5. Impact of adjustments on U.S. temperatures relative to the 1900-1910 period, following the approach used in creating the old USHCN v1 adjustment plot.

NCDC starts by collecting the raw data from the co-op network stations. These records are submitted electronically for most stations, though some continue to send paper forms that must be manually keyed into the system. A subset of the 7,000 or so co-op stations are part of the U.S. Historical Climatology Network (USHCN), and are used to create the official estimate of U.S. temperatures.

Quality Control

Once the data has been collected, it is subjected to an automated quality control (QC) procedure that looks for anomalies like repeated entries of the same temperature value, minimum temperature values that exceed the reported maximum temperature of that day (or vice-versa), values that far exceed (by five sigma or more) expected values for the station, and similar checks. A full list of QC checks is available here.

Daily minimum or maximum temperatures that fail quality control are flagged, and a raw daily file is maintained that includes original values with their associated QC flags. Monthly minimum, maximum, and mean temperatures are calculated using daily temperature data that passes QC checks. A monthly mean is calculated only when nine or fewer daily values are missing or flagged. A raw USHCN monthly data file is available that includes both monthly values and associated QC flags.
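
To make these checks and the nine-missing-days rule concrete, here is a minimal sketch in Python (the function names and the simple five-sigma test are my own illustrative simplifications, not NCDC's actual QC code):

```python
import numpy as np

def qc_flags(tmax, tmin, n_sigma=5.0):
    """Flag daily values that fail simple consistency checks.
    An illustrative subset of the checks described above, not NCDC's code."""
    tmax = np.asarray(tmax, dtype=float)
    tmin = np.asarray(tmin, dtype=float)
    flags = np.zeros(tmax.shape, dtype=bool)
    flags |= tmin > tmax                        # minimum must not exceed maximum
    for t in (tmax, tmin):                      # crude five-sigma outlier check
        mu, sd = np.nanmean(t), np.nanstd(t)
        flags |= np.abs(t - mu) > n_sigma * sd
    return flags

def monthly_mean(tmax, tmin, flags, max_missing=9):
    """Monthly mean of (Tmax + Tmin)/2, computed only when nine or fewer
    daily values are missing or flagged, per the rule described above."""
    tavg = (np.asarray(tmax, float) + np.asarray(tmin, float)) / 2.0
    bad = flags | np.isnan(tavg)
    return np.nan if bad.sum() > max_missing else tavg[~bad].mean()

# Example: a 30-day month with one gross Tmax outlier and one Tmin > Tmax day.
tmax = np.full(30, 25.0)
tmax[10] = 80.0
tmin = np.full(30, 15.0)
tmin[20] = 30.0
flags = qc_flags(tmax, tmin)
print(flags.sum(), monthly_mean(tmax, tmin, flags))   # 2 flagged days, mean of 20.0
```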

The impact of QC adjustments is relatively minor. Apart from a slight cooling of temperatures prior to 1910, the trend is unchanged by QC adjustments for the remainder of the record (see the red line in Figure 5).

Time of Observation (TOBs) Adjustments

Temperature data is adjusted based on its reported time of observation. Each observer is supposed to report the time at which observations were taken. While some variation in observation time is expected, as observers won’t reset the instrument at the same time every day, these departures should be mostly random and won’t necessarily introduce a systematic bias. The major sources of bias are introduced by system-wide decisions to change observing times, as shown in Figure 3. The gradual network-wide switch from afternoon to morning observation times after 1950 has introduced a CONUS-wide cooling bias of about 0.2 to 0.25 C. The TOBs adjustments are outlined and tested in Karl et al 1986 and Vose et al 2003, and will be explored in more detail in the subsequent post. The impact of TOBs adjustments is shown in Figure 6, below.

Figure 6. Time of observation adjustments to USHCN relative to the 1900-1910 period.

TOBs adjustments affect minimum and maximum temperatures similarly, and are responsible for slightly more than half the magnitude of total adjustments to USHCN data.
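
As a rough illustration of why observation time matters at all, the toy sketch below (my own construction in Python, not the Karl et al 1986 method) builds a synthetic hourly temperature series and computes the mean of (Tmax+Tmin)/2 for different once-a-day reset times. An afternoon reset lets a hot afternoon set the maximum for two consecutive observational days and so reads warm; a morning reset similarly double-counts cold mornings and reads cool:

```python
import numpy as np

def tob_mean(hourly, reset_hour):
    """Mean of (Tmax + Tmin)/2 when the min/max thermometer is read and reset
    once per day at `reset_hour`, so each observational 'day' is the 24-hour
    window beginning at that hour.  Purely illustrative; this is not the
    actual TOBs adjustment."""
    hourly = np.asarray(hourly, dtype=float)
    shifted = np.roll(hourly, -reset_hour)
    days = shifted[: shifted.size // 24 * 24].reshape(-1, 24)
    return np.mean((days.max(axis=1) + days.min(axis=1)) / 2.0)

# Toy series: a diurnal cycle peaking mid-afternoon plus persistent day-to-day
# weather anomalies (the persistence is what allows double-counting).
rng = np.random.default_rng(0)
n_days = 3650
diurnal = 8 * np.sin(2 * np.pi * (np.arange(24) - 9) / 24)
weather = np.repeat(rng.normal(0.0, 4.0, n_days), 24)
hourly = 10 + np.tile(diurnal, n_days) + weather

for label, hr in [("midnight", 0), ("morning 07:00", 7), ("afternoon 17:00", 17)]:
    print(f"{label:16s} reset: {tob_mean(hourly, hr):6.2f} C")
```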

Pairwise Homogenization Algorithm (PHA) Adjustments

The Pairwise Homogenization Algorithm was designed as an automated method of detecting and correcting localized temperature biases due to station moves, instrument changes, microsite changes, and meso-scale changes like urban heat islands.

The algorithm (whose code can be downloaded here) is conceptually simple: it assumes that climate change forced by external factors tends to happen regionally rather than locally. If one station is warming rapidly over a period of a decade a few kilometers from a number of stations that are cooling over the same period, the warming station is likely responding to localized effects (instrument changes, station moves, microsite changes, etc.) rather than a real climate signal.

To detect localized biases, the PHA iteratively goes through all the stations in the network and compares each of them to their surrounding neighbors. It calculates difference series between each station and their neighbors (separately for min and max) and looks for breakpoints that show up in the record of one station but none of the surrounding stations. These breakpoints can take the form of both abrupt step-changes and gradual trend inhomogeneities that move a station’s record further away from its neighbors. The figures below show histograms of all the detected breakpoints (and their magnitudes) for both minimum and maximum temperatures.

Figure 7. Histogram of all PHA changepoint adjustments for versions 3.1 and 3.2 of the PHA for minimum (left) and maximum (right) temperatures.

While fairly symmetric in aggregate, there are distinct temporal patterns in the PHA adjustments. The largest of these are the positive adjustments to maximum temperatures that account for the transitions from LiG instruments to MMTS and ASOS instruments in the 1980s, 1990s, and 2000s. Other notable PHA-detected adjustments are minimum (and more modest maximum) temperature shifts associated with a widespread move of stations from inner city rooftops to newly-constructed airports or wastewater treatment plants after 1940, as well as gradual corrections of urbanizing sites like Reno, Nevada. The net effect of PHA adjustments is shown in Figure 8, below.

Figure 8. Pairwise Homogenization Algorithm adjustments to USHCN relative to the 1900-1910 period.

The PHA has a large impact on max temperatures post-1980, corresponding to the period of transition to MMTS and ASOS instruments. Max adjustments are fairly modest pre-1980, and are presumably responding mostly to the effects of station moves. Minimum temperature adjustments are more mixed, with no real century-scale trend impact. These minimum temperature adjustments do seem to remove much of the urban-correlated warming bias in minimum temperatures, even when only rural stations are used in the homogenization process to avoid incidentally aliasing in urban warming, as discussed in Hausfather et al. 2013.

The PHA can also effectively detect and deal with breakpoints associated with Time of Observation changes. When NCDC’s PHA is run without doing the explicit TOBs adjustment described previously, the results are largely the same (see the discussion of this in Williams et al 2012). Berkeley uses a somewhat analogous relative difference approach to homogenization that also picks up and removes TOBs biases without the need for an explicit adjustment.
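
To make the difference-series idea concrete, here is a heavily simplified sketch of pairwise breakpoint detection (the names and the single-step detector are my own illustration; the real PHA handles multiple breakpoints, station metadata, and significance testing far more carefully):

```python
import numpy as np

def difference_series(target, neighbors):
    """Target station minus the average of its neighbors (all as anomalies).
    Regional climate variability largely cancels in the difference, leaving
    local artifacts such as station moves or instrument changes."""
    target = target - target.mean()
    neighbor_mean = np.mean([n - n.mean() for n in neighbors], axis=0)
    return target - neighbor_mean

def find_step(diff, min_seg=12):
    """Locate the single most likely step change in `diff` by maximizing a
    t-like statistic for the shift in means before/after each candidate point.
    A toy stand-in for the pairwise homogenization machinery."""
    best_i, best_score, best_offset = None, 0.0, 0.0
    for i in range(min_seg, diff.size - min_seg):
        left, right = diff[:i], diff[i:]
        offset = right.mean() - left.mean()
        se = np.sqrt(left.var(ddof=1) / left.size + right.var(ddof=1) / right.size)
        score = abs(offset) / se
        if score > best_score:
            best_i, best_score, best_offset = i, score, offset
    return best_i, best_offset

# Toy data: 40 years of monthly anomalies; the target has a -0.5 C jump at
# month 300 (an LiG-to-MMTS style change) that its five neighbors don't share.
rng = np.random.default_rng(1)
regional = rng.normal(0, 0.5, 480)
neighbors = [regional + rng.normal(0, 0.3, 480) for _ in range(5)]
target = regional + rng.normal(0, 0.3, 480)
target[300:] -= 0.5

i, offset = find_step(difference_series(target, neighbors))
print(f"breakpoint near month {i}, estimated offset {offset:+.2f} C")
# Homogenizing relative to the present would then add `offset` to all
# target values before month i.
```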

With any automated homogenization approach, it is critically important that the algorithm be tested with synthetic data containing various types of introduced biases (step changes, trend inhomogeneities, sawtooth patterns, etc.), to ensure that it deals with biases in both directions equally well and does not create any new systematic biases when correcting inhomogeneities in the record. This was done initially in Williams et al 2012 and Venema et al 2012. There are ongoing efforts to create a standardized set of tests to which various groups around the world can submit homogenization algorithms for evaluation, as discussed in our recently submitted paper. This process, and other detailed discussion of automated homogenization, will be covered in part three of this series of posts.

Infilling

Finally we come to infilling, which has garnered quite a bit of attention of late due to some rather outlandish claims of its impact. Infilling occurs in the USHCN network in two different cases: when the raw data is not available for a station, and when the PHA flags the raw data as too uncertain to homogenize (e.g. in between two station moves when there is not a long enough record to determine with certainty the impact that the initial move had). Infilled data is marked with an “E” flag in the adjusted data file (FLs.52i) provided by NCDC, and it’s relatively straightforward to test the effects it has by calculating U.S. temperatures with and without the infilled data. The results are shown in Figure 9, below:

Figure 9. Infilling-related adjustments to USHCN relative to the 1900-1910 period.

Apart from a slight adjustment prior to 1915, infilling has no effect on CONUS-wide trends. These results are identical to those found in Menne et al 2009. This is expected, because the way NCDC does infilling is to add the long-term climatology of the station that is missing (or not used) to the average spatially weighted anomaly of nearby stations. This is effectively identical to any other form of spatial weighting.
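
A minimal sketch of that infilling calculation (the function name and the inverse-distance weights are illustrative assumptions, not NCDC's implementation):

```python
import numpy as np

def infill_month(station_climatology, neighbor_anomalies, neighbor_weights):
    """Estimate a missing monthly value as the station's own long-term
    climatology for that month plus the spatially weighted mean anomaly
    of nearby stations, as described above."""
    w = np.asarray(neighbor_weights, dtype=float)
    anom = np.asarray(neighbor_anomalies, dtype=float)
    return station_climatology + np.sum(w * anom) / np.sum(w)

# Example: a July climatology of 24.1 C at the missing station; three neighbors
# report anomalies of +0.8, +0.5 and +0.6 C, weighted here by inverse distance.
print(infill_month(24.1, [0.8, 0.5, 0.6], [1 / 20, 1 / 35, 1 / 50]))
```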

To elaborate, temperature stations measure temperatures at specific locations. If we are trying to estimate the average temperature over a wide area like the U.S. or the globe, it is advisable to use gridding or some more complicated form of spatial interpolation to ensure that our results are representative of the underlying temperature field. For example, about a third of the available global temperature stations are in the U.S. If we calculated global temperatures without spatial weighting, we’d be treating the U.S. as 33% of the world’s land area rather than ~5%, and end up with a rather biased estimate of global temperatures. The easiest way to do spatial weighting is gridding, e.g. assigning all stations to grid cells of equal area (as NASA GISS used to do) or equal lat/lon extent (e.g. 5×5 degrees, as HadCRUT does). Other methods include kriging (used by Berkeley Earth) or a distance-weighted average of nearby station anomalies (used by GISS and NCDC these days).
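
Here is a minimal sketch of gridded, area-weighted averaging of station anomalies, a toy version of the 5×5 lat/lon approach described above rather than any group's production code:

```python
import numpy as np

def gridded_mean(lats, lons, anomalies, cell_deg=5.0):
    """Average stations within each lat/lon cell first, then weight cells by
    cos(latitude), so a station-dense region (like the U.S.) is not over-counted."""
    lats, lons, anomalies = map(np.asarray, (lats, lons, anomalies))
    lat_bin = np.floor((lats + 90) / cell_deg).astype(int)
    lon_bin = np.floor((lons + 180) / cell_deg).astype(int)
    cells = {}
    for la, lo, a in zip(lat_bin, lon_bin, anomalies):
        cells.setdefault((la, lo), []).append(a)
    total, weight = 0.0, 0.0
    for (la, _), values in cells.items():
        cell_lat = (la + 0.5) * cell_deg - 90        # cell-centre latitude
        w = np.cos(np.radians(cell_lat))             # approximate area weight
        total += w * np.mean(values)
        weight += w
    return total / weight

# Two U.S. stations and one Australian station: the plain mean (0.8) is dominated
# by the U.S.; with gridding, each occupied cell counts once.
print(gridded_mean([40.0, 41.0, -25.0], [-105.0, -104.0, 135.0], [1.0, 1.2, 0.2]))
```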

As shown above, infilling has no real impact on temperature trends vs. not infilling. The only way you get in trouble is if the composition of the network is changing over time and if you do not remove the underlying climatology/seasonal cycle through the use of anomalies or similar methods. In that case, infilling will give you a correct answer, but not infilling will result in a biased estimate since the underlying climatology of the stations is changing. This has been discussed at length elsewhere, so I won’t dwell on it here.

I’m actually not a big fan of NCDC’s choice to do infilling, not because it makes a difference in the results, but rather because it confuses things more than it helps (witness all the Sturm und Drang of late over “zombie stations”). Their choice to infill was primarily driven by a desire to let people calculate a consistent record of absolute temperatures by ensuring that the station composition remained constant over time. A better (and more accurate) approach would be to create a separate absolute temperature product by adding a long-term average climatology field to an anomaly field, similar to the approach that Berkeley Earth takes.

Changing the Past?

Diligent observers of NCDC’s temperature record have noted that many of the values change by small amounts on a daily basis. This includes not only recent temperatures but those in the distant past as well, and has created some confusion about why, exactly, the recorded temperatures in 1917 should change day-to-day. The explanation is relatively straightforward. NCDC assumes that the current set of instruments recording temperature is accurate, so any time of observation changes or PHA-adjustments are done relative to current temperatures. Because breakpoints are detected through pair-wise comparisons, new data coming in may slightly change the magnitude of recent adjustments by providing a more comprehensive difference series between neighboring stations.

When breakpoints are removed, the entire record prior to the breakpoint is adjusted up or down depending on the size and direction of the breakpoint. This means that slight modifications of recent breakpoints will impact all past temperatures at the station in question through a constant offset. The alternative would be to assume that the original data is accurate, and adjust any new data relative to the old data (e.g. adjust everything after breakpoints rather than before them). From the perspective of calculating trends over time, these two approaches are identical, and it’s not clear that there is necessarily a preferred option.
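
A tiny worked example (with made-up numbers) of why the two conventions yield identical trends:

```python
import numpy as np

# Toy series: a 0.01 C/month trend plus noise, with an artificial +0.3 C step
# introduced at index 60.
rng = np.random.default_rng(2)
t = np.arange(120)
series = 0.01 * t + rng.normal(0, 0.1, t.size)
series[60:] += 0.3

# NCDC-style: trust the present, shift everything *before* the break up by 0.3.
adjust_past = series.copy()
adjust_past[:60] += 0.3
# Alternative: trust the past, shift everything *after* the break down by 0.3.
adjust_future = series.copy()
adjust_future[60:] -= 0.3

# The two adjusted series differ only by a constant, so their trends are equal.
print(np.polyfit(t, adjust_past, 1)[0], np.polyfit(t, adjust_future, 1)[0])
```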

Hopefully this post (and the two that follow) will help folks gain a better understanding of the issues in the surface temperature network and the steps scientists have taken to try to address them. These approaches are likely far from perfect, and it is certainly possible that the underlying algorithms could be improved to provide more accurate results. Hopefully the ongoing International Surface Temperature Initiative, which seeks to have different groups around the world send their adjustment approaches in for evaluation using common metrics, will help improve the general practice in the field going forward. There is also a week-long conference at NCAR next week on these issues which should yield some interesting discussions and initiatives.

2,044 responses to “Understanding adjustments to temperature data”

  1. Adjustments to data ought always be explained in an open and transparent manner, especially adjustments to data that become the basis for expensive policy decisions.

    • David Springer

      Good faith was undermined about the time James Hansen sabotaged the air conditioning and opened the windows to scorching outside temperatures in the congressional hearing room in 1988. Good faith collapsed completely with the Climategate emails two decades later.

      Good faith my ass.

      • I realised HADCRUT couldn’t be trusted when I started realising that each and every cold month was delayed (I think it was 1 day per 0.05C), whereas each and every hot month was rushed out.

        I realised HADCRUT could be trusted, when I went back to check my figures a year later and found that nothing was the same any longer.

        I realised HADCRUT couldn’t be trusted, when I found out that Phil Jones couldn’t use a spreadsheet

        I realised HADCRUT couldn’t be trusted when I saw the state of their code.

        I realised HADCRUT couldn’t be trusted when I realised the same guys were doing it as those scoundrels “hiding the decline”.

        And I still know I can’t trust it … when academics like Judith Curry still don’t know the difference between “Quality” as in a system to ensure something is correct and “Quality” as in “we check it”.

        This is not a job for academics. They just don’t have the right mind set. Quality is not a matter of figures but an attitude of mind — a focus on getting it right for the customer.

        I doubt Judith even knows who the customer is … I guess she just thinks it’s a vague idea of “academia”.

      • Of course, none of that contradicts anything Zeke said. Do you have a substantive argument to make?

      • Quality are all those features and characteristics of a product or service that bear upon the ability to meet stated or implied needs.

        The problem with this definition is the word “needs”: if there is a need to confuse or give rise to false conclusions, then tinkering with the data may well give rise to quality data, i.e. it achieved its purpose.

        Quality in terms of data does not imply accuracy or truth.

      • David may appreciate new Earth-shattering insight into global warming:

        http://stevengoddard.wordpress.com/2014/07/07/my-latest-earth-shattering-research/

      • David Springer wrote:

        Good faith was undermined about the time James Hansen sabotaged the air conditioning and opened the windows to scorching outside temperatures in the congressional hearing room in 1988.

        Such a claim sounds very nuts to me. How is James Hansen supposed to have sabotaged the air conditioning at such an event in such a building? If you don’t want to be called a liar who spreads libelous accusations, what about you provide the evidence for such an assertion?

      • David Springer

        A noob who didn’t know. Precious. The air conditioning was sabotaged by opening all the windows the night before so the room was filled with hot muggy air when the congressional testimony took place. The testimony was scheduled on the historically hottest day of the year. One of the co-conspirators, Senator Wirth, admitted to all of it in an interview.

        http://www.washingtonpost.com/wp-dyn/content/article/2008/06/22/AR2008062201862.html

        http://www.pbs.org/wgbh/pages/frontline/hotpolitics/interviews/wirth.html

        PBS: And did you also alter the temperature in the hearing room that day?

        Wirth: What we did it was went in the night before and opened all the windows, I will admit, right? So that the air conditioning wasn’t working inside the room and so when the, when the hearing occurred there was not only bliss, which is television cameras in double figures, but it was really hot.

        So Hansen’s giving this testimony, you’ve got these television cameras back there heating up the room, and the air conditioning in the room didn’t appear to work. So it was sort of a perfect collection of events that happened that day, with the wonderful Jim Hansen, who was wiping his brow at the witness table and giving this remarkable testimony.

      • David Springer: “James Hansen sabotaged the air conditioning and opened the windows to scorching outside temperatures in the congressional hearing room in 1988.”

        No, it most assuredly was not James Hansen who switched off the air conditioning. And no doubt if somebody had closed the windows, instead of opening them, you’d be making the same claim it was done purposely to trap heat in the room. Nothing Hansen said that day hinges on whether the windows were open or closed. All very silly.

      • It wasn’t Hansen himself, it was (then) US Senator Timothy Wirth, who boasted of doing so on the PBS program “Frontline” —

        And did you also alter the temperature in the hearing room that day?

        … What we did it was went in the night before and opened all the windows, I will admit, right? So that the air conditioning wasn’t working inside the room and so when the, when the hearing occurred there was not only bliss, which is television cameras in double figures, but it was really hot. …

        So Hansen’s giving this testimony, you’ve got these television cameras back there heating up the room, and the air conditioning in the room didn’t appear to work. So it was sort of a perfect collection of events that happened that day, with the wonderful Jim Hansen, who was wiping his brow at the witness table and giving this remarkable testimony. …

        http://www.pbs.org/wgbh/pages/frontline/hotpolitics/interviews/wirth.html

        “Wirth served as a U.S. Senator from Colorado until 1993, when he left the Senate to serve under President Clinton in the State Department. He is now president of the United Nations Foundation. Wirth organized the 1988 Senate hearing at which James Hansen addressed global warming, and he led the U.S. negotiating team at the Kyoto Summit. In this interview, Wirth describes the debate surrounding global warming within the Bush I and the Clinton administrations, including his experience of the Kyoto negotiations, and asserts that partisan politics, industry opposition and prominent skeptics have prevented action from being taken. This is an edited transcript of an interview conducted Jan. 17, 2007.”

      • David Springer

        Senator Wirth said WE opened the windows the night before. He wasn’t alone. The “we” was purportedly him and Al Gore. Hansen was the originator of the idea that if the hearing was scheduled during hot weather it would be more effective.

        http://www.aip.org/history/climate/public2.htm

        The trigger came that summer. Already by June, heat waves and drought had become a severe problem, drawing public attention to the climate. Many newspaper, magazine, and television stories showed threatened crops and speculated about possible causes. Hansen raised the stakes with deliberate intent. “I weighed the costs of being wrong versus the costs of not talking,” he later recalled, and decided that he had to speak out. By arrangement with Senator Timothy Wirth, Hansen testified to a Congressional hearing on June 23. He had pointed out to Wirth’s staff that the previous year’s November hearings might have been more effective in hot weather. Wirth and his staff decided to hold their next session in the summer, although that was hardly a normal time for politicians who sought attention.

      • Wirth also boasted (same interview) of how they intentionally picked the hottest day/week of the year in DC, how the weather co-operated, and how the original campaign was integral to politics of the Democratic Party and to that year’s (unsuccessful) presidential campaign by Michael Dukakis. So, whatever Hansen thought he was doing, he certainly allowed himself to be the political tool of manipulative and dishonest political partisans of the Democratic Party:

        What else was happening that summer? What was the weather like that summer?

        Believe it or not, we called the Weather Bureau and found out what historically was the hottest day of the summer. Well, it was June 6 or June 9 or whatever it was, so we scheduled the hearing that day, and bingo: It was the hottest day on record in Washington, or close to it. It was stiflingly hot that summer. [At] the same time you had this drought all across the country, so the linkage between the Hansen hearing and the drought became very intense.

        Simultaneously [Mass. Gov. Michael] Dukakis was running for president. Dukakis was trying to get an edge on various things and was looking for spokespeople, and two or three of us became sort of the flacks out on the stump for Dukakis, making the separation between what Democratic policy and Republican policy ought to be. So it played into the presidential campaign in the summer of ’88 as well.

        So a number of things came together that, for the first time, people began to think about it. I knew it was important because there was a big article in, I believe, the Swimsuit Issue of Sports Illustrated on climate change. [Laughs.] So there was a correlation. You figure, well, if we’re making Sports Illustrated on this issue, you know, we’ve got to be making some real headway.

      • Greg Goodman

        Scottish sceptic says: “I realised HACRUT couldn’t be trusted, when I found out that phil Jones couldn’t use a spreadsheet”

        And why would a competent programmer want or need to use a spreadsheet for data processing?!

        Spreadsheets are for accountants. It is pretty amateurish to use one for data processing. However most amateurs that manage to lash up a “chart” in a spreadsheet for some reason think they are then qualified to lay into anyone who is capable of programming and has never needed to rely on point-and-click, cut-and-paste tools to process data.

        You’d also look a lot more credible if you could at least get the name of dataset right and realised that it is the work of two separate groups.

        There’s plenty to be criticised at CRU, at least try to make credible criticisms.

      • Skiphil wrote: “Wirth also boasted (same interview) of how they intentionally picked the hottest day/week of the year in DC”

        How could Timothy Wirth have known it was going to be the hottest day of the week–let alone the entire summer–weeks in advance of the hearing having been scheduled? Seriously, show a modicum of scepticism. It transpires the air conditioning wasn’t even switched off; it was simply made less effectual because a senator had opened some windows the night before. People believe this diminishes Hansen’s testimony. It does not. Enough distraction. Can we move forward now?

      • thisisnotgoodtogo

        Anon said:

        “How could Timothy Wirth have known it was going to be the hottest day of the week–let alone the entire summer–weeks in advance of the hearing having been scheduled?”

        They checked the records and found the most-often hottest day of the year in the city.

      • thisisnotgoodtogo

        “People believe this diminishes Hansen’s testimony.”

        No, people believe it “embellished” it.
        Hansen’s testimony itself was bogus. It needs no diminishing.
        He used part of a hot year to make his point about anthro warming.

      • I agree that they defenestrated “good faith” when East Anglia lost the original climate data they had collected, at the same time writing in the Climategate emails that they would rather destroy the data than hand it over to skeptics.

        So they need to keep all versions of the data. It is not like it wouldn’t fit on $50 worth of hard drive. Except they don’t. They keep it hidden and the only way people find out about adjustments is if they take their own snapshots.

        The time for “assuming good faith” is long gone, “trust but verify” is more what is needed today.

      • Anon,

        can you READ?? It is Wirth who boasted of seeking the hottest day… of course he couldn’t be sure he would get the very very hottest, but that is what he sought and that is (according to him) what he got. As for distraction, when people like you can explain how Wirth and co. are honest and competent, then we can move on.

      • Jan,

        While “sabotaged” is not the best term (the air conditioning was turned down or off), it doesn’t change the overall point. Steps were taken to ensure the hearing room was hotter than it normally would have been in order to emphasize the point Wirth and Hansen wanted to get across.

      • timg56,

        the accusation was made by David Springer specifically against James Hansen. Regardless, whether you call it “sabotaging” or “turning off”, I am still waiting for the evidence to back up this accusation. So far, nothing.

      • David Springer

        Opening up windows the night before on the historically hottest day of the year overwhelmed the air conditioner. Sabotage is exactly the right word. It was Hansen’s suggestion to Wirth to hold the hearing on the hottest day of the year so there’s collusion in black & white. Wirth admitted “we” opened up the windows the night before. The only question is whether “we” included Hansen whose idea it was to stage the hearing in hot weather to be more effective.

      • Don Monfort

        Please continue to wait, perlie. Watching you make a fool of yourself over a throwaway comment that you want to blow up into libel is very amusing. Are you going to hold your breath? And stamp your little feet? We can tell that you are not a lawyer, perlie.

      • David Springer wrote:

        A noob who didn’t know. Precious.

        Noob? We will see who has the last laugh.

        The air conditioning was sabotaged by opening all the windows the night before so the room was filled hot muggy air when the congressional testimony took place. The testimony was scheduled on the historically hottest day of the year. One of the co-conspirators, Senator Wirth, admitted to all of it in an interview.

        http://www.washingtonpost.com/wp-dyn/content/article/2008/06/22/AR2008062201862.html

        http://www.pbs.org/wgbh/pages/frontline/hotpolitics/interviews/wirth.html

        I can do even better, thanks to Anthony Watts with his junk science blog. Here is a video excerpt of the TV broadcast, where the opening of the windows and the AC issue is addressed and Wirth is asked about this. Watts had tried this one on me already some time ago, and linked the video himself, apparently totally delusional about what it would prove.

        https://www.youtube.com/watch?v=wXCfxxXRRdY

        Not a single word in there that implicates James Hansen in the matter. Neither by Wirth, nor by the narrator. So how does this work with such an accusation in “skeptic” land? By some “skeptic” assigning of guilt by association?

        It’s all just about throwing dirt, isn’t it? Facts don’t matter.

        As someone else has already correctly pointed out, the windows and AC thing is irrelevant for the content of Hansen’s statement anyway.

      • David Springer

        Like I pointed out with links, Hansen suggested to Wirth that his November testimony would have been more effective in hot weather. Wirth then says in an interview “we” (maybe his staff, maybe a climatologist) determined that June 23rd was on average the hottest day of the year in Washington and scheduled the hearing on that day. Then “we” (Wirth and unnamed others) opened up all the windows the night before so the hot humid air overwhelmed the air conditioning. I don’t know, but usually the way these things work is Hansen would have flown in the day before and spent some face time with those in the senate on his side. Al Gore was US Senator from Tennessee so almost certainly all three were in town that night and no one is going to question two United States senators prepping a hearing room. It went off like a frat club stunt. Given the heat was Hansen’s idea in the first place and knowing how guys behave probably all three of them were in on it and not exactly sober either. But hey, that’s just a guess. Wirth knows and didn’t say.

      • Let’s see, Jan Perlwitz!

        “A Climate Hero: The Testimony

        Worldwatch Institute is partnering with Grist to bring you this three-part series commemorating the 20-year anniversary of NASA scientist James Hansen’s groundbreaking testimony on global climate change next week. Read part one here.

        “The greenhouse effect has been detected, and it is changing our climate now,” James Hansen told the Senate Energy Committee in 1988. An unprecedented heat wave gripped the United States in the summer of 1988. Droughts destroyed crops. Forests were in flames. The Mississippi River was so dry that barges could not pass. Nearly half the nation was declared a disaster area.

        The record-high temperatures led growing numbers of people to wonder whether the climate was in some way being unnaturally altered.

        Meanwhile, NASA scientist James Hansen was wrapping up a study that found that climate change, caused by the burning of fossil fuels, appeared inevitable even with dramatic reductions in greenhouse gases. After a decade of studying the so-called greenhouse effect on global climate, Hansen was prepared to make a bold statement.

        Hansen found his opportunity through Colorado Senator Tim Wirth, who chose to showcase the scientist at a Congressional hearing. Twenty years later, the hearing is regarded as a turning point in climate science history.

        To build upon Hansen’s announcement, Wirth used the summer’s record heat to his advantage. “We did agree that we should figure out when it’d be really hot in Washington,” says David Harwood, a legislative aide for Wirth. “People might be thinking of things like what’s the climate like.”

        They agreed upon June 28. When the day of the hearing arrived, the temperature in the nation’s capital peaked at 101 degrees Fahrenheit (38 degrees Celsius). The stage was set.

        Seated before the Senate Committee on Energy and Natural Resources, 15 television cameras, and a roomful of reporters, Hansen wiped the sweat from his brow and presented his findings. The charts of global climate all pointed upward. “The Earth is warmer in 1988 than at any time in the history of instrumental measurements,” he said. “There is only a 1 percent chance of an accidental warming of this magnitude…. The greenhouse effect has been detected, and it is changing our climate now.”

        Oh, a one percent chance of a heat wave.

        Great science testimony too, Jan!

  2. A fan of *MORE* discourse

    Question  Why does the Daily Racing Form publish “adjusted” Beyer speed figures for each horse? Why not just the raw times?

    Answer  Because considering *ALL* the available information yields *FAR* better betting strategies.

    Question  Why does the strongest climate science synthesize historical records, paleo-records, and thermodynamical constraints??

    Answer  Because considering *ALL* the available information yields *FAR* better assessments of climate-change risk.

    These realities are *OBVIOUS* to *EVERYONE* — horse-betters and climate-science student alike — eh Climate Etc readers?

    • Why do climate scientists hide the raw data? Why do they use anomalies and 5-year smoothing to hide the data?

      You can’t spell anomalies without LIES.

    • Why does the Daily Racing Form publish “adjusted” Beyer speed figures for each horse? Why not just the raw times?

      Because that is what the customer wants.

      Now answer me this … would you be happy with a bank statement with “adjusted” figures for each and every transaction.

      And what would you say if they said “Because considering *ALL* the available information yields a *FAR* better assessment”?

      • Matthew R Marler

        Scottish Sceptic and A fan of *MORE* discourse: Why does the Daily Racing Form publish “adjusted” Beyer speed figures for each horse? Why not just the raw times?

        Because that is what the customer wants.

        Now answer me this … would you be happy with a bank statement with “adjusted” figures for each and every transaction.

        And what would you say if they said “Because considering *ALL* the available information yields a *FAR* better assessment”?

        The issue relates to how accurately the fundamental data have been recorded in the first place. There are people, including auditors, who do sample financial records and perform Bayesian hierarchical modeling in order to assess the overall effects of errors, and their likely prevalence.

      • Don’t give the banksters any ideas, Scottish. ;)

    • The Beyer speed analogy got to me. It succeeds at what it was designed to do. Kudos.

      As I am oft wont, lay curiosity (in climate science and horse betting) forced an immediate investigation into Beyer speed.

      As a thought and pattern matching exercise, the Beyer speed analogy is quite good. However, within a few minutes, I found an erudite bettor who supplies a different take on the underlying premise that Beyer speed, while working as designed, furnishes reliable data on which to bet one’s wad of cash. He wrote:

      “The theory:
      Horses that can win races are the ones that can significantly IMPROVE their previous race speed figure. Today’s winner is not the horse with the highest figure from its last race but the horse that is most likely to REACH its highest figure today. Bold-face Beyer figures function essentially as mirages, optical illusions that distort racing reality. Yes, they are more than reasonably accurate most of the time. But they are not worth their face value, for an accurate rendering of the past is not the same thing as an objective prediction of the future. Better stated, the past performances are something that should be seen dynamically, as if they were part of a moving process.”

      It seems climate science and horse betting share more than one initially thinks.

      I enjoyed the analogy. As we attempt to understand scientific research, numskulls like me could use more of them.

    • Fan,
      What is your favorite conspiracy today?

      • He is too busy working on his “Climate Youth” project to bother answering a question like that.

  3. Rob Bradley

    The author states: “Their methods may not be perfect, and are certainly not immune from critical analysis, but that critical analysis should start out from a position of assuming good faith and with an understanding of what exactly has been done.”

    But surely incentives matter. Peer pressure matters. Government funding matters. Beware of the ‘romantic’ view of science in a politicized area.

    • When an auditor checks accounts, they do not assume bad faith.

      Instead they just verify that the figures are right.

      So, why then when skeptics try to audit climate figures do they immediately assume we are acting in bad faith?

      Because academics don’t have a culture of having their work checked by outsiders.

      The simple fact is that academics cannot stomach having outsiders look over their figures. And this is usually a symptom of an extremely poor quality regime.

      • Here’s an audit of HADCRUT3

        In July 2011, Lubos Motl did an analysis of HADCRUT3 that neatly avoided all the manipulations. He worked with the raw data from 5000+ stations with an average history of 77 years. He calculated for each station the trend for each month of the year over the station lifetime. The results are revealing. The average station had a warming trend of +0.75C/century +/- 2.35C/century. That value is similar to other GMT calculations, but the variability shows how much homogenization there has been. In fact 30% of the 5000+ locations experienced cooling trends.

        What significance can you claim for a 0.75C/century claim when the standard deviation is 3 times that?

        Conclusions:

        “If the rate of the warming in the coming 77 years or so were analogous to the previous 77 years, a given place XY would still have a 30% probability that it will cool down – judging by the linear regression – in those future 77 years! However, it’s also conceivable that the noise is so substantial and the sensitivity is so low that once the weather stations add 100 years to their record, 70% of them will actually show a cooling trend.

        Isn’t it remarkable? There is nothing “global” about the warming we have seen in the recent century or so. The warming vs cooling depends on the place (as well as the month, as I mentioned) and the warming places only have a 2-to-1 majority while the cooling places are a sizable minority.
        Of course, if you calculate the change of the global mean temperature, you get a positive sign – you had to get one of the signs because the exact zero result is infinitely unlikely. But the actual change of the global mean temperature in the last 77 years (in average) is so tiny that the place-dependent noise still safely beats the “global warming trend”, yielding an ambiguous sign of the temperature trend that depends on the place.”

        http://motls.blogspot.ca/2011/07/hadcrut3-30-of-stations-recorded.html

      • Steven Mosher

        “So, why then when skeptics try to audit climate figures do they immediately assume we are acting in bad faith?”

        We dont.

        But imagine this.

        Imagine an auditor came into your company

        A= auditor
        S= Scottish

        A: Can I see your books.
        S: Yes here they are.
        A: (ignoring the books). Here is a chart I found on the internet showing
        your bogus adjustments to income.
        S: please look at our books.
        A: no first explain this random stuff I found on the internet.
        S: here are the books, can you just audit us?
        A: you should be audited
        S: I thought thats what you were doing, here are the books. please look.
        A: What are your interests in this company?
        S: I own it. I make money
        A: AHHHH, so how can I trust these books
        S: can you just look at the books.
        A: first I want to talk about this youtube video. See this chart, the red is really red.
        S: I didnt make that video, can you just look at the books.
        A: do you have an internal audit.
        S: ya, here are some things we published, you can read them.
        A: Ahhh, who reviewed this.
        S: It was anonymous, just read the paper.
        A: How do I know your friends didnt review that, I dont trust those papers.
        S: well, read them and ask me questions.
        A: I’m giving the orders here tell me what is in the papers.
        A: and where are your books?
        S; I gave you the books.
        A: who is your accountant?
        S: my wife, she does all the books
        A…. Ahhh the plot thickens… you need to be audited.
        S: err, here are the books.
        A: oh trying to make it my job huh.. Im here in good faith
        S: ah ya, to audit, here are the books.
        A.not so fast, youre trying to shift the burden of proof

    • Steven Mosher

      “But surely incentives matter. Peer pressure matters. Government funding matters. Beware of the ‘romantic’ view of science in a politicized area.”

      When JeffId and RomanM ( skeptics) started to look at temperature series the incentive was to
      A) find a better method
      B) Show where GISS and CRU went wrong

      Their results showed more warming.

      When I first started looking at temperatures my incentive was simple.
      I wanted to find something wrong, specifically with adjustments.
      7 years later I can only report that I could find nothing of substance
      wrong with them.

      When Muller and Berkeley started to look at this matter their incentive
      was to build a better method and correct any mistakes they found.
      Koch and others found this goal laudable and funded them.
      With this incentive what did Berkeley find? Well, the better method
      extended the record, gave you a higher spatial resolution and showed
      that the NOAA folks basically get the adjustments correct.

      Many people, all with the incentive to find some glaring error, some mistake that would overturn the science, all came to the same conclusion.
      While NOAA isnt perfect, while we can make improvements at the margin,
      the record is reliable. The minor issues identified dont change the fundamental facts: It has been warming since the LIA. There are no more
      frost fairs in London. The estimates of warming since that time using
      some of the data, or all of the data, using multiple methods
      ( CAM, RSM, Kriging, Least Squares, IDW ) all fall within narrow bounds.
      The minor differences are important to specialists or to very narrow
      questions ( see Cowtan and Way ), but the big picture remains the same

      • Steve Fitzpatrick

        Yup, that is right. Small changes (a la Cowtan and Way) at the margins do happen. But nothing fundamental has changed. Is there still some uncertainty? Sure, at the margins, but the data are quite clear: there has been average warming in the range of 0.8C to 0.9C since the mid 19th century.

      • Mosh: One of the things that personally gives me faith in some of the newer temperature records is that skeptics like you, Roman, Jeff and then Muller et al get similar results. Unfortunately, dealing with people like Goddard is now prompting you to say dubious things like: “Well, the better method extended the record, gave you a higher spatial resolution and showed that the NOAA folks basically get the adjustments correct”. Several years ago, you would have recognized that no one knows the “correct adjustments”. You would remember that the half-dozen reconstructions that “reproduced” Mann’s hockey stick did not make Mann “correct”. Pairwise adjustments are hypotheses that make assumptions about the nature of the events that produced undocumented breakpoints, not tested theories. More than half of US warming and about a quarter of global warming can be traced back to breakpoint corrections and the total number of breakpoints identified has risen to about one per decade (if I remember correctly). Only a modest fraction of these breakpoints are due to properly-studied phenomena like TOB and instrumental changes. Any undocumented breakpoint could represent a return to earlier observing conditions (which had gradually deteriorated) or a shift to new conditions. Worst of all, temperature change still appears to be reported as if all the uncertainty arises from scatter in the raw data and none from systematic errors that could arise from processing the data.

      • Steven Mosher | July 7, 2014 at 4:16 pm |

        “So, why then when skeptics try to audit climate figures do they immediately assume we are acting in bad faith?
        We dont.
        But imagine this.
        Imagine an auditor came into your company

        A= auditor
        S= Scottish

        A: Can I see your books.
        S: Yes here they are.

        This also happens:
        A: Can I see your books?
        S: No – you just want to find something wrong with them.

        Trust is not a part of the game and hasn’t been for some time. About the time cordiality disappeared from the landscape.

      • @Frank 5:14 pm
        Pairwise adjustments are hypotheses that make assumptions about the nature of the events that produced undocumented breakpoints, not tested theories. ….. Worst of all, temperature change still appears to be reported as if all the uncertainty arises from scatter in the raw data and none from systematic errors that could arise from processing the data.

        Agree. Every adjustment adds error.

        Undocumented breakpoints derived from differences to a krigged fuzzy surface (one with error bar thickness) defined by uncertain control points in an iterative process is a source for huge amounts of error.

        But is temperature uncertainty reported as if it derives from the average anomaly and not derived from the measured daily Tmin and Tmax? If a month’s mins and maxes are 10 degrees C apart, the Trmse (mean standard error) of the month’s Tave is a minimum of 0.67 deg C.

      • Matthew R Marler

        Stephen Rasey: Every adjustment adds error.

        That is not true. Errors and random variation are in the data, but the best adjustments (like the BEST adjustments) do the best job of reducing the error. This is proved mathematically for some cases, and it has been shown computationally by simulations where the “true” values and “errors” and “random variation” are known by fiat. I put some references in my comments to Rud Istvan.

      • Steven Mosher

        “Undocumented breakpoints derived from differences to a krigged fuzzy surface (one with error bar thickness) defined by uncertain control points in an iterative process is a source for huge amounts of error.”

        Proof by assertion.

        Not backed up by any example, any data, or any analysis showing what is claimed.

        Typical skeptic.

      • Matthew R Marler,

        “…but the best adjustments (like the BEST adjustments) do the best job of reducing the error.”

        If a parasite trend affects the raw data, for example the increase in UHI, BEST uses the worst methods. Indeed, BEST removes very effectively the fixes present in the raw data in the form of discontinuities.

        For this reason the average of absolute temperature is a better method than anomalies.

      • @Matthew R Marler at 11:42 am |
        Stephen Rasey: Every adjustment adds error.
        That is not true. Errors and random variation are in the data, but the best adjustments (like the BEST adjustments) do the best job of reducing the error.

        It is true. Every adjustment, even the subtraction of the mean to create the anomaly, is the addition of an estimated parameter. Error is always added.

        What may be confusing is that adjustments can improve signal to noise as you add error. Or more precisely, the act of improving signal to noise must add error in the process, but in some circumstances the signal adds faster than the error.

        A case in point is the seismic common depth point move-out correction. It is a process by which a recorded signal, offset by a known distance from the source, is variably compressed in the time-domain to estimate an adjusted record equivalent to a zero-offset source-receiver pair. The velocity used in the move-out is estimated, an average of subsurface velocities, but the right estimate increases coherence of events that arrive at different times in the raw data. When you get it right, it greatly increases signal/noise ratio. But high signal to noise doesn’t prove it is right. It is possible to make noise coherent, too.

        Homogenization could act in much the same way as seismic stacking. It is possible that “stacking” temperature anomalies will improve the signal to noise ratio as it adds error to the process. The question is, does it? It adds error — of that there is no doubt. Does signal improve faster than error? Or are we just making coherence out of noise and added error?

      • (reposted, first attempt was at the wrong parent in the thread)
        @Steven Mosher at 11:57 am |
        Rasey: “Undocumented breakpoints derived from differences to a krigged fuzzy surface (one with error bar thickness) defined by uncertain control points in an iterative process is a source for huge amounts of error.”
        Proof by assertion.
        Not backed up by any example, any data, or any analysis showing what is claimed.

        Please argue any of the following points by methods that exclude ad hominem.
        1. Breakpoints are derived from something.
        2. Breakpoints are created where documentation of changes to the station does not exist.
        3. BEST, and others, use krigging to create a regional field to compare to the station under study.
        4. Breakpoints, empirical undocumented breakpoints, can be created from a function of differences between the station and the krigged field.
        5. The krigged regional field is defined by control points.
        6. These control points are other temperature record stations.
        7. Every temperature record contains error and thus contains some uncertainty. (I will expand on this in a following comment)
        8. When at least one control point of a krigged surface has uncertainty, i.e. error bars, the krigged surface itself is fuzzy — every point of the surface influenced by the uncertain control point gains uncertainty.
        9. All stations have uncertainty, so all control points of the krigged surface have uncertainty. Therefore the krigged surface is fuzzy at all points.
        10. Zeke himself said it was an iterative process.

        the PHA iteratively goes through all the stations in the network and compares each of them to their surrounding neighbors. It calculates difference series between each station and their neighbors

        11. “a source for huge amounts of error.” Well, now there you have me…. I didn’t define “huge”. Huge in this case means “at least on the order of or larger than the signal sought.”

      • A C Osborn

        What I find absolutely amazing about the people making the adjustments and the people defending the adjustments is their belief that it is “Better”.
        Better for what, certainly not the historic record.
        How can declaring old temperatures “WRONG” by replacing them with “calculated temperatures” be right?
        The people that lived through the 30s in the USA did not experience “calculated” temperatures, they experienced the real thing as reported by the thermometers of the day. They experienced the real effects of the temperatures and the Dust Bowl droughts.
        In Australia in the 1800s they experienced temperatures soo high that Birds & Bats fell out of the air dead of Heat Exaustion, in the early 1900s they had the biggest natural fires in the world and yet according to the Climate experts after adjustments it is hotter now than then.

        It is like historians going back to the second world war and changing the number of Allied Soldiers who died, making it far less than the real numbers. Try telling that to their families and see how far you would get.

        Based on these CRAP adjustments we hear the “Hottest” this and “Unprecedented” that, the most powerful storms, Hurricanes & Typhoons, more tornadoes, faster sea level rise, when anyone even over 60 knows, based on their own experiences, that they are Lies.
        I remember as a child in Kent in the UK during the 50s & 60s the Tar in the road melting in the summers due to the heat, followed by a major thunderstorm and flooding with cars washed down the streets and manhole covers thrown up by the water. It is no hotter in the UK now than it was then.

        THE ADJUSTMENTS DO NOT MAKE IT A MORE ACCURATE ACCOUNT OF HISTORY.
        It is not REAL; that is why the work that Steve Goddard does with Historic Data is so important. It SHOULD keep scientists straight, but it doesn’t.

      • Matthew R Marler

        Stephen Rasey: What may be confusing is that adjustments can improve signal to noise as you add error. Or more precisely, the act of improving signal to noise must add error in the process, but in some circumstances the signal adds faster than the error.

        I think that you are going in circles. The Bayesian hierarchical model procedure produces the estimates that have the smallest aggregate mean square error. They do not add error to the data, or add error to the estimate.

      • Matthew R Marler

        A C Osborn: What I find absolutely amazing about the people making the adjustments and the people defending the adjustments is their belief that it is “Better”.
        Better for what, certainly not the historic record.

        The procedure used by the BEST team produces estimates that have the smallest attainable mean square error. There is a substantial literature on this topic.

      • Matthew R Marler

        phi: If a parasite trend affects the raw data, for example the increase in UHI, BEST uses the worst methods.

        How is that known? The BEST team and others have made extensive efforts to estimate and account for UHI effects, and they are not the major source of warming in the instrumental record.

      • More on Point 7 above:
        7. Every temperature record contains error and thus contains some uncertainty.

        Let us list the sources of uncertainty in each temperature record:
        1. Systematic temperature miscalibration of the instrument.
        2. Weathering of the instrument as a function of time.
        3. Instrumental drift away from calibration.
        4. Precision of the daily reading.
        5. Accuracy of the daily reading (including transposition errors in the record).
        6. Instrument min-max reset error resulting from Time of Observation policy.
        7. Data gaps from vacation, instrument failure, etc.

        There are others, but I want to turn to the big errors that occur in processing.
        A great deal of the temperature record used is based upon the station’s average monthly temperature anomaly. What are the sources of uncertainty involved with it? What is the Temp Anomaly “Mean Standard Error” (TArmse)?
        First we must find the Trmse of the month’s avg temp.
        Trmse(Month i) = StDev(30 Daily Ave. Temps) / sqrt(30)
        Right?
        Wrong. We never measure a Daily Ave. Temp. We measure instead a min and a max. Instead,
        Trmse(Month i) = StDev(30 Daily Mins and 30 Daily Maxes, combined) / sqrt(60)
        If we assume a flat constant avg temp of 10 deg C for the month, coming from thirty 5 deg C min readings and thirty 15 deg C max readings, then
        Trmse = 0.645 deg C.
        So the mean for the month is 10.000 deg C, but the 90% confidence range is 8.94 to 11.06 deg C. That is a big error bar when you are looking for 0.1 to 0.3 deg C/decade.

        You want to convert Tave(month) to an anomaly TAavg.
        Well that’s just a bulk shift of the data. There is no uncertainty.
        Wrong.
        A bulk shift would apply if and only if each station and each month received the same bulk shift. But we don’t do that. Each station-month is adjusted by an estimate of the mean for that month and that station.

        Ok. Suppose we have 30 years of the very same month: 30 days of 5 deg low and 15 deg high. The 30 year mean is 10 deg C. What is the Trmse(30 year, month i)? It is Trmse(month i) / sqrt(30). In this case
        Trmse(30 year, month i) = 0.645 / sqrt(30) = 0.118 deg C.

        So, the 30 year Tavg for a month is known to +/- 0.193 deg C at a 90% confidence.

        But, we are going to create the anomaly for the month: that quantity is (Tave(month), Trmse(month)) + (-Tave(30 year, month), Trmse(30 year, month)).
        The temp anomaly mean is a nice fat zero,
        but the rmse of the anomaly = sqrt(0.645^2 + 0.118^2):
        TArmse(month, 30 year base) = 0.656 deg C, or +/- 1.079 deg C at 90% confidence.

        The uncertainty in the 30 year mean did not add much to the TArmse of the month, but it never reduces it. Furthermore, in this discussion of breakpoints, if we make segments short, say 5 years, then the uncertainty of the mean, Trmse(5 year, month), is 0.289 deg C. Adjusting by a 5 year mean between breakpoints would yield a
        TArmse(month, 5 year base) = sqrt(0.645^2 + 0.289^2) = 0.707 deg C
        or +/- 1.16 deg C at a 90% confidence interval.

        So more breakpoints and shorter segments increase the uncertainty in the temperature anomaly data stream. If you want to tease out climate signals of a fraction of a degree, you need long segments.
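
        For what it’s worth, the arithmetic above is easy to reproduce; a short script (my own sketch of the calculation, treating the 60 daily readings as independent, which real min/max readings are not, and using 1.645 for the 90% interval):

        ```python
        import numpy as np

        readings = np.array([5.0] * 30 + [15.0] * 30)    # thirty 5 C mins and thirty 15 C maxes
        trmse_month = readings.std() / np.sqrt(60)       # population SD = 5.0  ->  ~0.645 deg C

        trmse_30yr = trmse_month / np.sqrt(30)           # 30-year baseline mean  ->  ~0.118
        trmse_5yr = trmse_month / np.sqrt(5)             # 5-year segment mean    ->  ~0.289

        anom_rmse_30 = np.hypot(trmse_month, trmse_30yr) # ~0.656 deg C
        anom_rmse_5 = np.hypot(trmse_month, trmse_5yr)   # ~0.707 deg C

        z90 = 1.645
        for name, r in [("monthly mean", trmse_month),
                        ("anomaly, 30-yr base", anom_rmse_30),
                        ("anomaly, 5-yr base", anom_rmse_5)]:
            print(f"{name:20s} rmse = {r:.3f}   90% CI = +/- {z90 * r:.2f} deg C")
        ```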

      • Matthew R Marler,
        Excuse me, but you write a lot on this thread while not seeming to have mastered the subject. I suggest some literature:

        http://onlinelibrary.wiley.com/doi/10.1029/2001JD000354/pdf
        http://onlinelibrary.wiley.com/doi/10.1002/joc.689/pdf

        Good reading.

      • @Matthew R Marler at 1:59 pm |
        I think that you are going in circles

        No. I don’t deny that you can reduce the mean standard error or mean squared error by increasing the sample size when errors are random. But in the process the errors (the variances, to be more specific) add at each step. The mean error can be reduced by an increase in the number of samples.

        You cannot subtract error, at least not when the error is random. Errors accumulate. Every estimate and adjustment contains error.

      • Matthew R Marler,
        I specify that Hansen et al. 2001 will show you why the BEST method is inadequate in the case of increasing UHI. Regarding Böhm et al. 2001, you will find an interesting evaluation of the UHI effect on the Alpine network at the end of the nineteenth century (greater than 0.5 °C).

      • Windchasers

        Stephen Rasey says:

        You cannot subtract error, at least not when the error is random.

        Well, it’s a good thing that the errors aren’t random! =D

        Seriously, though. TOB is a systematic error, not random.

      • Matthew R Marler

        phi: http://onlinelibrary.wiley.com/doi/10.1029/2001JD000354/pdf
        http://onlinelibrary.wiley.com/doi/10.1002/joc.689/pdf

        I have written enough for one thread, but I do thank you for the link to the paper.

      • Matthew R Marler

        phi, I read the paper that you linked to, and here is a quote from the summary: This paper discusses the methods used to produce an Alpine-wide dataset of homogenized monthly temperature series. Initial results should illustrate the research potential of such regional supra-national climate datasets in Europe. The difficulties associated with the access of data in Europe, i.e. related to the spread of data among a multitude of national and sub-national data-holders, still greatly limits climate variability research. The paper should serve as an example of common activities in a region that is rich in climate data and interesting in terms of climatological research. We wanted to illustrate the potential of a long-term regional homogenized dataset mainly in three areas:
        (i) the high spatial density, which allows the study of small scale spatial variability patterns;
        (ii) the length of the series in the region, which shows clear features concerning trends starting early in the pre-industrial period; and
        (iii) the vertical component in climate variability up to the 700-hPa level.
        All these illustrate the advantage of using carefully homogenized data in climate variability research.

        Not only did they “homogenize”, but they worked with deviations rather than restricting themselves to absolute temps, and they estimated breakpoints. They were able to identify a trend “like” UHI, despite your assertion that such methods were the worst when such trends are present. I don’t see how it supports your original claim: If a parasite trend affects the raw data, for example the increase in UHI, BEST uses the worst methods. Indeed, BEST removes very effectively the fixes present in the raw data in the form of discontinuities.

        For this reason the average of absolute temperature is a better method than anomalies.

        The main obvious difference is that the BEST team carried out an explicitly Bayesian hierarchical model, whereas this team seems not to have.

      • @Windchasers at 4:41 pm |
        Well, it’s a good thing that the errors aren’t random! =D
        Seriously, though. TOB is a systematic error, not random.

        I agree. Systematic corrections can be applied, as long as they carry the uncertainty in the magnitude of the correction. That flows back to the move-out example I used above. It is a real effect whose magnitude must be estimated, perhaps by looking for the value that maximizes coherence.

        TOB is a valid correction under some circumstances (personally, I think it is overrated, but valid). The magnitude of the correction can only be estimated, even if it is a Bayesian estimation. But the mean standard error of the estimated TOBS correction is not zero and could be more than half the size of the correction itself. We must estimate how much to apply at that station, at that month, at that year (when the time of the change is not documented).

        To apply a TOBS correction AFTER the recording time policy was really changed is certainly adding error.

      • Matthew R Marler,

        To remove discontinuities is a bad method if these discontinuities are in fact corrections. The results of Böhm and BEST are identically bad, since both remove these fixes and thereby recover the bias in its full amplitude. I proposed Böhm because he explains the bias of the discontinuities by a large UHI effect on the network in the nineteenth century. If it was important at that time, it can only have progressed until today.

        Otherwise, I can only encourage you to read chapter 4 of Hansen et al. 2001. You will read, for example: “…if the discontinuities in the temperature record have a predominance of downward jumps over upward jumps, the adjustments may introduce a false warming, as in Figure 1.”

        This character is actually present in the raw temperature data worldwide.

      • RE: Stephen Rasey at 5:34 pm |
        TOB is a valid correction under some circumstances. … The magnitude of the correction can only be estimated, even if it is a Bayesian estimation. But the mean standard error of the estimated TOBS correction is not zero and could be more than half the size of the correction itself.

        I must add that the error associated with the uncertain estimation of the magnitude and time of application of the TOBS correction is also a systematic, non-random error. If you over- or under-estimate the TOBS correction for one month, you will do so systematically for many other months. So we cannot assume the error will decrease with the square root of the number of months over which it is applied.

        Likewise, when we create the temperature anomaly, we must add the negative of the mean for the month, with its mean standard error. The errors applied for May 2013 and June 2013 come from different estimates of the mean, and so those errors add randomly. But the error added to TA(May 2013) and TA(May 2012) comes from the same estimate of the mean, so that mean standard error is NOT random between years for the same month, though it is likely random between stations.
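
        A small Monte Carlo of that last point (my own illustration with invented numbers, following the framing above of a baseline error that is shared across years but independent of the individual monthly values): the independent part of the anomaly error shrinks when you average many years of the same month, while the shared baseline part does not.

        ```python
        import numpy as np

        rng = np.random.default_rng(2)
        n_sims, n_years = 20000, 30

        meas_err = rng.normal(0, 0.65, (n_sims, n_years))    # independent error in each yearly "May"
        base_err = rng.normal(0, 0.12, (n_sims, 1))           # one shared baseline error per simulation

        anomaly_err = meas_err - base_err                     # error in each May anomaly
        avg_err = anomaly_err.mean(axis=1)                    # error of the 30-year mean of May anomalies

        print("sd of the averaged anomaly error:", round(float(avg_err.std()), 3))     # ~0.17
        print("independent part alone          :", round(0.65 / np.sqrt(n_years), 3))  # ~0.119
        print("shared baseline part            : 0.12  (does not average out)")
        ```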

      • Windchasers

        To apply a TOBS correction AFTER the recording time policy was really changed is certainly adding error.

        No, I don’t think so. The TOB creates an ongoing bias – a hot bias if temperatures are recorded near the hottest part of the day, and a cold bias if temperatures are recorded near the coldest part.

        If we switched from recording in the afternoon to recording in the morning, I’d rather see us adjust for both biases, not just one. It seems more logically consistent that way.
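
        That double-counting is easy to see with synthetic hourly data (a toy sketch of my own, not the Karl 1986 method and not real observations): with a reset near the afternoon maximum, one hot afternoon can set the recorded maximum for two consecutive observational days, and the mean of (max+min)/2 drifts warm; a reset near the morning minimum drifts cool.

        ```python
        import numpy as np

        rng = np.random.default_rng(3)
        n_days, hours = 3650, 24

        # Day-to-day "weather": an AR(1)-style series of daily mean temperatures around 15 C.
        daily_mean = np.empty(n_days)
        daily_mean[0] = 15.0
        for d in range(1, n_days):
            daily_mean[d] = 15.0 + 0.7 * (daily_mean[d - 1] - 15.0) + rng.normal(0, 2.0)

        # Hourly series: a diurnal sinusoid (max near 4 pm, min near 4 am) around each daily mean.
        hour = np.arange(n_days * hours) % 24
        temps = np.repeat(daily_mean, hours) + 5.0 * np.sin(2 * np.pi * (hour - 10) / 24)

        def mean_of_minmax(reset_hour):
            # Each observational "day" is the 24 h block starting (and ending) at the reset hour.
            block = temps[reset_hour:reset_hour + (n_days - 1) * hours].reshape(n_days - 1, hours)
            return ((block.max(axis=1) + block.min(axis=1)) / 2).mean()

        print("midnight reset:", round(mean_of_minmax(0), 2))    # unbiased reference, ~15.0
        print("5 pm reset    :", round(mean_of_minmax(17), 2))   # warm bias from double-counted maxima
        print("7 am reset    :", round(mean_of_minmax(7), 2))    # cool bias from double-counted minima
        ```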

      • @Windchasers at 6:03 pm |
        If we switched from recording in the afternoon to recording in the morning, I’d rather see us adjust for both biases, not just one. It seems more logically consistent that way.

        I cannot argue it wouldn’t be more consistent.
        If you want to apply a different TOBS(morning), a TOBS(afternoon), a TOBS(noon), and a TOBS(late evening), I have no theoretical objection, provided the mean standard error of the adjustment is applied and another error source is added to account for the probabilistic uncertainty that the wrong adjustment is used.

        You want to apply a 0.05 deg C TOBS(morning) adjustment with a 0.15 deg C mean standard error uncertainty? Knock yourself out.

  4. “The large contribution of adjustments to century-scale U.S. temperature trends lends itself to an unfortunate narrative that “government bureaucrats are cooking the books”.

    I used to laugh at accusations of conspiracy among establishment climate scientists. Then I read the climate-gate emails. I’m not laughing anymore.

    If there is an “unfortunate narrative”, these guys have no one to blame but themselves.

    • Steven Mosher

      please do not tar the NOAA people with the same brush as the CRU people.

      You know, early on in Climategate when the focus was on CRU, I used to get mails from right wing organizations and people telling me that “we” had to find a way to turn this into a NOAA scandal.

      needless to say they got an earful from me.

      Climategate is not an indictment of the whole profession.
      people’s attempts to make Climategate about the temperature series or about all climate scientists are part of the reason why the investigations were botched

      • Steve: Surely you don’t believe the Climategate investigation was botched ONLY because of a need to protect the validity of CRUTemp? The profession had other temperature records to fall back upon. Has the profession even recognized the mistakes that were made? What actions have been taken to ensure that problems don’t occur again? How about releasing all data and processing programs with publication? (You might wish to re-read your own book.)

      • Matthew R Marler

        Steven Mosher: please do not tar the NOAA people with the same brush as the CRU people.

        the difficulty there is that some NOAA people (including writers at RealClimate) defended the bad practices revealed in the CRU emails. So the NOAA people tarred themselves.

        I have to make this my last post, so if you reply you’ll have the last word. Your tenacity in defense of Zeke’s post and the BEST team is admirable, though I disagree with you here and there.

    • => “If there is an ‘unfortunate narrative’, these guys have no one to blame but themselves.”

      Indeed. They made you do it.

      • Hold on there, big fella.

        They could have published all adjustments, with original data, and justifications based on the literature, instead of having skeptics discover it in the worst possible way: suspecting something was up, recording a snapshot, then watching the data change unannounced, always in ways that increased the warming trend. So yeah, they made skeptics do it.

      • Windchasers

        They could have published all adjustments, with original data, and justifications based on the literature, instead of…

        The adjustments and justification are right there in the literature, in papers ranging from 10-30 years old. And the data, justifications, adjustments, and explanations are available on the NCDC website:
        http://www.ncdc.noaa.gov/monitoring-references/faq/temperature-monitoring.php

        How much longer were they supposed to wait, for you to do your DD?

        Don’t blame the scientists for your laziness.

        Since the antics of Phil Jones and the CRU data, there is a certain Caesar’s wife expectation of historical climate data, on which depend decisions regarding trillions of dollars.

        Every time published data is modified, it should be noted as modified where it is published, along with a link to the previous data, and a link to the peer reviewed justification for the change.

        I am just suggesting strategies for coping with the appearance of a “thumb on the scale” since the apparent fact that the adjustments strongly trend in a single direction already looks bad enough.

        You guys are just trying to make skeptics, I swear. Take the steam out of these criticisms up front. Treat this data as transparently as if it were a bank statement to the owner of the money, because it is far more important than that.

        “Trust us”, together with name-calling or questioning the motivation of anybody who doesn’t automatically trust such important data on the say-so of obviously politically motivated climate scientists like Hansen, for example, is simply no longer an option.

    • thisisnotgoodtogo

      Mosher said:

      “Climategate is not an indictment of the whole profession.”

      Oh, so the profession took care of it in a timely, open and transparent manner.

      Thanks for bringing truth, Steven

      • Mark Lewis

        That is an interesting question. How much can we hold the profession responsible for the actions of some of its prominent members?

        Mosher – how do you rate the profession’s response to CRU emails?

        For me, how the profession reacts to their outing is critical. Certainly, my information about the response by the profession was partial and probably biased, but the reaction of the climate/temperature profession to the CRU emails as a whole did not bolster my confidence in it.

      • Most in the profession were probably either a) doing climate science and not paying attention or b) frightened by the furore and decided to keep their heads down.

        Climategate is an indictment–of about half a dozen people who chose one of the worst times possible to act like complete bozos. It is in no way an indictment of climate science or the overwhelming majority of climate scientists.

      • And the whitewash of the climategate investigations is an indictment – of what?

      • thisisnotgoodtogo

        Tom Fuller,

        Keeping your head down and being too fearful/busy is an offense and an indictment of the profession. Who spoke out publicly?

        A rotten bunch for sure.

      • When somebody who is purported to be a responsible scientist and the custodian and curator of a central repository of historic temperature data writes “I would rather destroy the data than hand it over to skeptics” then, amazingly, like the IRS, the very data in question is destroyed, I would say that the ‘profession’ has taken a severe black eye and has some serious reputation restoration work to do.

  5. How could you have written this article without once mentioning error analysis?

    Data, real original data, has some margin of error associated with it. Every adjustment to that data adds to that margin of error. Without proper error analysis and reporting that margin of error with the adjusted data, it is all useless. What the hell do they teach hard science majors these days?

    • Steven Mosher

      the error analysis for TOBS for example is fully documented in the underlying papers referenced here.

      first rule: read the literature before commenting.
      looking at the time of your response I have to wonder what you were taught.
      you didn’t read all the references

      • I read about TOBS. They had a set of station data to analyze from the 50s and 60s (no hourly data was stored on mag tape after 64 or 65).

        One station moved 20 km and one moved 5 km, and other moves were “allowed” up to 1500 m … but they broke the rules for those two stations.

        How many stations were in the same place from the beginning to the end of the data?

        It could be zero.

      • Steven Mosher

        Bruce,

        Looks like you didn’t read the papers. Read the original paper and then the 2006 paper.

        And then do your own TOBS study.. oh yeah, don’t make the same mistakes you made with the Environment Canada data

      • Zeke posted a link at WUWT to the papers.

        ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/

        The stations moved. The height of the thermometers changed.

        What a crappy “reference” collection …

        Karl 1986

        “For these reasons seven years of hourly data (1958–64) were used at 107 first order stations in the United States to develop equations which can be used to predict the TOB (Fig. 4). Of these 107 stations, 79 were used to develop the equations, and 28 were reserved as an independent test sample. The choice of stations was based on their spatial distribution and their station histories.

        Spatial station relocations were limited to less than 1500 m except for two stations—Asheville, North Carolina, and Tallahassee, Florida.

        These stations had relatively large station moves, 20 km and 5 km respectively, but they were retained because of their strategic location with respect to topography and major water bodies.

        At 72 of the 79 stations used to develop the TOB equations, temperature was recorded very close to 2 m above the surface.

        At the remaining seven stations, the instruments were repositioned from heights in excess of 5 m to those near 2 m sometime between 1958 and 1964.

        Changes in instrument heights from the 28 independent stations were more frequent: at nearly 50% of these stations the height of the instruments was reduced to 2 m above the ground from heights in excess of 5 m sometime in the same period”

      • “first rule. read the literature before commenting.”

        The great thing about this site is that it is not nanny-moderated like some of the other climate sites…

      • Steven Mosher

        two papers bruce.

        read them both.

        post your code

        stop calling people dishonest unless you have proof.

        Mosher, aren’t you going to thank me for reading the first TOBS paper and pointing out the serious problems with the data?

        What’s the name of the 2nd paper?

    • Matthew R Marler

      Patrick B: Every adjustment to that data adds to that margin of error.

      That is not true. The best adjustments reduce the error the most, whereas naive adjustments do not do a good job at all. (An example of a “naive” adjustment: concluding that a data point is “bad” and omitting it from the analysis, which is computationally equivalent to a second-rate method of adjustment.) This is explained in the vast quantity of mathematics and simulation analysis of diverse types of estimation, including the methods used by the BEST team. The papers of the BEST team explain their analyses in good detail, with supporting references. I put some references in comments on the posts by the estimable Rud Istvan.

  6. Congratulations, you’ve written a long post, managing to avoid mentioning all the main issues of current interest.

    “Having worked with many of the scientists in question”
    In that case, you are in no position to evaluate their work objectively.

    “start out from a position of assuming good faith”
    I did that. Two and a half years ago I wrote to the NCDC people about the erroneous adjustments in Iceland (the Iceland Met Office confirmed there was no validity to the adjustments) and the apparently missing data that was in fact available. I was told they would look into it and to “stay tuned for further updates” but heard nothing. The erroneous adjustments (a consistent cooling in the 1960s is deleted) and bogus missing data are still there.
    So I’m afraid good faith has been lost and it’s going to be very hard to regain it.

    • Hi Paul,

      Iceland is certainly an interesting case. Berkeley doesn’t get nearly the same scale of 1940s adjustments in their record: http://berkeleyearth.lbl.gov/stations/155459

      I wonder if it’s an issue similar to what we saw in the Arctic? http://www.skepticalscience.com/how_global_warming_broke_the_thermometer_record.html

      GHCN-M v4 (which hopefully will be out next year) and ISTI both contain many more stations than GHCN-M v3, which will help resolve regional artifacts due to homogenization in the presence of sparse station availability.

      • Zeke, I have no idea who you are, but posting links to sks, a website that still doggedly defends the Hockey Stick in public while trashing it when they thought nobody could see, is just over the top.

        How are we supposed to know what they really think when their editorial positions, when exposed, showed that they value propaganda over true “skeptical science”?

    • Steven Mosher

      Paul

      I don’t see how good faith is lost.

      Like you, I’ve reported any number of errors to NCDC. Remember, NCDC collects data supplied by sources. In some cases the errors have been corrected: NCDC informs the source and the change is made upstream. In some cases NCDC informs the source and changes are not made. In one case the change was made upstream, and then in the next report the mistake was back in the record.

      you assume bad faith on one data point.

      bad science.

        “I don’t see how good faith is lost.” – Steven Mosher

        “Data storage availability in the 1980s meant that we were not able to keep the multiple sources for some sites, only the station series after adjustment for homogeneity issues. We, therefore, do not hold the original raw data but only the value-added (i.e. quality controlled and homogenized) data.” – CRU

        They couldn’t have printed it?

        “If they ever hear there is a Freedom of Information Act now in the UK, I think I’ll delete the file rather than send it to anyone.” – Phil Jones.

        Nope, nothing to see here with Phil “Rosemary Woods” Jones.

        Honestly, pretending that Climategate never happened and so no good faith has been lost is lunacy.

    • Paul, even if they cocked up Iceland data completely, it’s kind of a postage stamp in terms of global temps, isn’t it? And of course you could legitimately reply that the entire globe is made up of postage stamps, but I would then ask if you have noticed similar problems elsewhere.

      If it were a conspiracy to drive temp records in one direction, wouldn’t they choose to fiddle with statistics in a wider region on smaller scales?

      • What if it is not a conspiracy, but bungling? They design a bad adjustment algorithm, run it, and it gives them data that looks like what they expect to see. So they declare it good and publish it in a journal with scant review and no data to speak of. Then when you look under the hood you find that the actual adjustments don’t fit reality, that the errors aren’t uniform but are most prevalent where data is less dense, and that the data was never tested at the station level, just compared to the expected result; and since it confirmed the expected result the details were never looked at or understood. Then people defend it by saying it is based on 30-year-old published results, failing to notice that it gives the ‘correct’ answer by getting it all wrong.

    • Matthew R Marler

      Paul Matthews: Congratulations, you’ve written a long post, managing to avoid mentioning all the main issues of current interest.

      That is unfair.

      Could you mention specifically one of the main issues of current interest that he managed to avoid mentioning? Clearly, he couldn’t address every issue of current interest in a posting of finite length, but perhaps you have a specific issue he might bring up next time, relevant to adjustments to the temperature data.

    • Don Monfort

      An issue of current interest:

      http://wattsupwiththat.com/2014/06/29/noaas-temperature-control-knob-for-the-past-the-present-and-maybe-the-future-july-1936-now-hottest-month-again/

      Anthony Watts:
      “This isn’t just some issue with gridding, or anomalies, or method, it is about NOAA not being able to present historical climate information of the United States accurately. In one report they give one number, and in another they give a different one with no explanation to the public as to why.

      This is not acceptable. It is not being honest with the public. It is not scientific. It violates the Data Quality Act.”

  7. Why are you still using anomalies? There are only 50 US stations in USHCN with relatively complete monthly data from 1961 to 1990. The “anomaly” baseline is corrupted.

    Secondly, why not use Tmin and Tmax temperatures? Tmin is corrupted by UHI and therefore so is Tavg.

    Thirdly … a 5-year smooth? Quit tampering, Zeke.

    https://sunshinehours.wordpress.com/2014/07/03/ushcn-tmax-hottest-july-histogram-raw-vs-adjusted/

    • Steven Mosher

      Smoothing is not tampering.
      I suggest you go to Jo Nova and tell David Evans that smoothing TSI is tampering.

      dare you.

      • Smoothing is misleading in this case since we are trying to determine relatively small changes in trends.

        Smoothing removes data pertinent to this discussion.

      • Steven Mosher

        bruce,

        go to Jo Nova. accuse them of being dishonest.
        prove you have principles.
        post your code.

      • If Zeke posts his R code for his infill graph, I’ll fix it and add trend lines and do one graph per month. And I’ll post his code.

        I have a bunch of USHCN data already downloaded.

      • Matthew R Marler

        sunshine hours: Smoothing removes data pertinent to this discussion.

        That is not true. Smoothing does not “remove” data. Do you perhaps have evidence that Zeke Hausfather has “removed” data? You are not disputing that they preserve their original raw data, and write out the adjustments and many other supporting statistics in separate files, are you?

    • Hi Bruce,

      Anomalies only use infilled data in the fourth case examined (QC + TOBs + PHA + infilling). In all other cases missing months during the baseline period are simply ignored. They are rare enough that the effect will be negligible.

      The reason I used a 5-year smooth on the first graph is that using monthly or annual data makes the difference between adjusted and raw data too difficult to see due to monthly and annual variability in temperatures. Smoothing serves to accentuate the difference if anything. The rest of the graphs show annual differences (though I could have been clearer in stating this in the text).
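
      For what it’s worth, “missing months during the baseline period are simply ignored” just means the 1961-1990 climatology for each station-month is taken over whatever years are present; a minimal sketch (illustrative only, not the actual USHCN code), using a NaN-aware mean:

      ```python
      import numpy as np

      rng = np.random.default_rng(4)
      years = np.arange(1951, 2011)
      # 60 years x 12 months of monthly means for one station, with ~5% of months missing.
      temps = 10 + 8 * np.sin(2 * np.pi * (np.arange(12) - 3) / 12) + rng.normal(0, 1, (60, 12))
      temps[rng.random((60, 12)) < 0.05] = np.nan

      base = (years >= 1961) & (years <= 1990)
      climatology = np.nanmean(temps[base], axis=0)     # station-month means, missing months ignored
      anomalies = temps - climatology                   # anomaly for every station-month

      print("1961-1990 monthly climatology:", np.round(climatology, 1))
      ```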

      • The mean # of Estimated values for tmax December 1961-1990 is 3.14.

        A little over 10%. Not rare.

        I haven’t checked for distribution by Elevation or Lat/Long.

      • Steven Mosher

        when you do bruce, post your code.
        we want to ISO9000 audit you.
        given your mistake with Env canada..

      • Mosher, you really are a bitter man. Just ask Zeke to redo his infilling graph to bolster his claim infilling doesn’t change the trends.

        You read way too many climategate emails. You just want to be as bloody-minded as them.

      • Estimated data is about 30m in elevation higher than non-estimated for the 1961-1990 period.

      • The reason I used a 5-year smooth

        Hopefully the frequency response of your smoothing method doesn’t have large side lobes.

        The question of how to smooth was discussed here at length some months ago in a post by Greg Goodman. For smoothing as a low-pass filter, a Gaussian filter can be taken as a good starting point. The many comments at Greg’s post by a number of contributors considered variants of the Gaussian filter with different criteria for how to minimize side lobes. No one spoke up in defense of moving-average smoothing.

        More sophisticated methods get into band-pass filters, for which even-order derivatives of the basic Gaussian filter are good, starting with the so-called Mexican hat or Ricker filter.
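
        A quick way to see the side-lobe point numerically (a sketch using numpy’s FFT; the kernel lengths and sigma are arbitrary choices, picked so the two kernels have roughly the same width):

        ```python
        import numpy as np

        n = 512                                    # FFT length used to sample the response
        box = np.ones(5) / 5                       # 5-point flat moving average ("5-year smooth")

        x = np.arange(-6, 7)                       # 13-point Gaussian kernel, sigma ~ boxcar width
        gauss = np.exp(-0.5 * (x / 1.5) ** 2)
        gauss /= gauss.sum()

        f = np.fft.rfftfreq(n)                     # frequency in cycles per sample (per year here)
        rb = np.abs(np.fft.rfft(box, n))           # boxcar frequency response
        rg = np.abs(np.fft.rfft(gauss, n))         # Gaussian frequency response

        for freq in (0.1, 0.2, 0.3):
            i = np.argmin(np.abs(f - freq))
            print(f"f = {freq:.1f}/yr   boxcar = {rb[i]:.3f}   gaussian = {rg[i]:.3f}")
        # The boxcar nulls near f = 0.2 and then bounces back up near f = 0.3 (a side lobe);
        # the Gaussian trades a gentler roll-off for a response that decays monotonically.
        ```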

      • A C Osborn

        Zeke Hausfather | July 7, 2014 at 12:14 pm

        Hi Bruce,

        Anomalies only use infilled data in the fourth case examined (QC + TOBs + PHA + infilling). In all other cases missing months during the baseline period are simply ignored. They are rare enough that the effect will be negligible.

        The very thing Steve Goddard was slated for.

  8. David in Cal

    OK, but I still have two concerns:
    1. Can purely formula adjustments be fully adequate? That is, wouldn’t it be better to look at the actual characteristics of each weather station over time? (Granted, that’s a big job.)
    2. How much variation is added by the adjustment process? Is this variation reflected in various models? My impression is that this source of variation is ignored; that models take the adjusted values as if they were actual certain readings.


    • David, you said at WUWT, “If you want to understand temperature changes, you should analyze temperature changes, not temperatures.” You are right, and that is what Motl did on the HADCRUT3 dataset.

      http://motls.blogspot.ca/2011/07/hadcrut3-30-of-stations-recorded.html

    • Steven Mosher

      “OK, but I still have two concerns:
      1. Can purely formula adjustments be fully adequate? That is, wouldn’t it be better to look at the actual characteristics of each weather station over time? (Granted, that’s a big job.)

      Be more specific.
      A) Instrument changes. A side-by-side test was conducted on the LiG versus MMTS. MMTS was demonstrated to introduce a bias. That bias has a mean value and an uncertainty. This correction is applied uniformly to every station that has the bias. What would you suggest?
      B) How do you handle stations that started in 1880 and ended in 1930? Time travel to investigate the station?
      C) Yes, formula adjustments are adequate.

      2. How much variation is added by the adjustment process? Is this variation reflected in various models? My impression is that this source of variation is ignored; that models take the adjusted values as if they were actual certain readings.

      A) What models?
      B) What do you mean by “variation added”? The best estimate of the bias is calculated. It is added to or subtracted from the record. Roy Spencer does the same thing for UAH; ask him how it works.
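
      To make (A) and the “variation added” question concrete, a toy sketch (my own, with invented numbers; not NOAA’s code) of applying a uniform instrument-change correction that carries both a mean and an uncertainty:

      ```python
      import numpy as np

      rng = np.random.default_rng(5)
      years = np.arange(1950, 2015)
      raw = 12 + 0.01 * (years - 1950) + rng.normal(0, 0.3, years.size)   # fake station series

      # Documented LiG -> MMTS change in 1985; a side-by-side test gives a bias
      # estimate with its own uncertainty (all numbers here are invented).
      change_year = 1985
      bias_mean, bias_sd = -0.4, 0.1            # instrument reads 0.4 C low, +/- 0.1 C (1 sigma)

      adjusted = raw.copy()
      adjusted[years >= change_year] -= bias_mean       # remove the estimated bias

      # The adjustment uncertainty attaches to every adjusted value; unlike measurement
      # noise it is fully correlated across the adjusted years, so it does not shrink
      # as more years are averaged.
      adjustment_sigma = np.where(years >= change_year, bias_sd, 0.0)
      print("post-1985 mean:", round(adjusted[years >= change_year].mean(), 2),
            "+/-", bias_sd, "C (systematic adjustment uncertainty)")
      ```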

      • Absolutely, I vote for time travel. Then we can educate all those farmers about ISO 9000.

      • Steven Mosher | July 7, 2014 at 11:35 am

        A) instrument changes. A side by side test was conducted on the
        LIG versus MMTS. MMTS was demonstrated to introduce a bias.

        This made me curious, and these are probably more rhetorical than actual questions. Did they not calibrate them in a metrology department? And did they compare more than one? You could be adding a half-degree adjustment for an issue with potentially only a subset of the actually deployed thermometers.

        Which is yet another reason why, IMO, any adjustment after the fact is generally based on less information than was available when the record was recorded. I understand why you want to correct the data, but as I tell my data customers, at some point, after enough changes, it’s not your data anymore; it’s made up. I’ll even go as far as saying it’s probably more accurate, but the error of that data is larger; it has to be.

  9. Why does figure 5 use 1900-1910 as the reference period when the graph it is trying to emulate uses 1900 to 1999?

    • Steven Mosher

      It’s not the “reference period”.

      1900-1910 is used to show the difference over the whole series, so you can clearly see the change from the beginning.

  10. It all sounds very logical except for the assumptions, e.g. assuming current measurements are more accurate. And what I can see from studying this for close to a decade now is that the ‘revisions’ always seem to make the past colder, to the point that they are now in conflict with temperature records from outside NOAA & NASA. There is no way I would believe that the data is not being manipulated to some degree without an ‘independent’ and openly published study.

    • Steven Mosher

      “There is no way I would believe that the data is not being manipulated to some degree without an ‘independent’ and openly published study.”

      See BerkeleyEarth.

      “It all sounds very logical except for the assumptions e.g. assuming current measurements are more accurate.”

      There are 114 pristine stations called CRN that have been in operation for a decade. These stations are stamped with a gold seal by WUWT.

      Guess what happens when you compare these 114 to the rest of the stations: NO DIFFERENCE.

    • Steven Mosher

      where is your code, Bruce?
      ISO9000 for you.. get crackin’.

      • Are these comments really necessary? They seem more like a past issue that Mr. Mosher cannot get over.

      • Matthew R Marler

        FTA: Are these comments really necessary? They seem more like a past issue that Mr. Mosher cannot get over.

        There is that problem that “seems” is in the mind of the beholder. It seems to me that sunshinehours1 and some other people are posing the same misunderstandings over and over (ignoring the substantial statistical literature on methods of estimation and their error rates), forcing Steven Mosher and some others to make the same statistical points over and over.

      • Steven Mosher

        No FTA.

        I hold all people to the same standard.
        where were you when we badgered Hansen for code?
        in your mother’s basement?

        Matthew, then wouldn’t it simply be better to refer readers to that fact? The comments from Mosher simply don’t help the dialogue along and instead turn it combative and nonproductive.

        Mosher – you don’t seem capable of being civil, from my perspective as a newcomer to this topic. I’ll note you as an ideologue and focus my interest in learning towards others such as Zeke (who presents an excellent article and continues to answer professionally).

  11. It appears there should be a limited number of stations that did not change their TOBS. How does the trend of those stations, assuming they wouldn’t require a TOBS adjustment, compare to the trend of the stations in the same region where the adjustment has been made? Has this analysis been done? If there is no difference the TOBS corrections are probably accurate. If not why don’t they match up?

    • Steven Mosher

      tobs has been validated by out of sample testing TWICE. see the references.

  12. David Springer

    Stepwise differences due to USHCN adjustments.

    http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_pg.gif

    As one can clearly see in this breakdown, straight from the horse’s mouth, without TOBS and SHAP adjustments there is no warming trend in the US instrument record.

    • Steven Mosher

      Zeke addressed that. Read harder.

    • That’s the old graph for USHCN version 1 that I was trying to update in my Figure 5. It’s fairly old and refers to adjustments (SHAP, for example) that are no longer made.

      • David Springer

        The raw data didn’t change so it remains true that there is no temperature trend in the raw data.

    • David Springer

      I can’t seem to find where Zeke said “There is no temperature trend in the raw data”.

  13. Cool! Thanks for writing this. I look forward to working through it.

  14. “The large contribution of adjustments to century-scale U.S. temperature trends lends itself to an unfortunate narrative that “government bureaucrats are cooking the books”.

    I think the genie is out of the bottle. At best, we can conclude that the fact that adjustments are being done means there hasn’t been, and is not, a good process in place for measuring and reporting temperature.

    Andrew

    • Without making too much of it, I would have to agree. Unfortunate for the hotties that the stations switched time of day.
      TOB has to be difficult. Living in Colorado tells me that. Without thunderstorm knowledge the temperature adjustment has got to be incredibly difficult. I can’t reach the ftp site yet. Interested in the further discussion of TOB.

    • Matthew R Marler

      Bad Andrew: I think the genie is out of the bottle. At best, we can conclude that the fact that adjustments are being done means there hasn’t been, and is not, a good process in place for measuring and reporting temperature.

      That is just plain ignorance.

    • Steven Mosher

      sadly you don’t post your code so I can’t find your error.
      unlike the time you botched the Environment Canada data, when your error was obvious.

  15. In view of all you’ve written Zeke, should the record ever be used to make press releases saying ‘warmest on record’ or unprecedented when no matter how honest the endeavour, the result has to be somewhat of a best guess? Especially when the differences between high scorers are so small.

    • If I ran the organisation doing these stats and anyone even so much as implied anything “good” or “bad” about the temperature, I’d kick them out so fast that their feet would not touch the ground.

      That is what you need in an organisation doing these stats. Instead, it is utterly beyond doubt that those involved are catastrophists using every possibility to portray the stats in the worst possible light.

      That is why I’d kick the whole lot out. The principal aim, indeed perhaps the sole aim, should be to get the most impartial judgement of the climate.

      Instead we seem to have people who seem no better than Greenpeace activists trying to tell us “it’s worse than we thought”.

      Yes, it’s always worse than they thought – but not in the way they suggest. It’s worse, because nothing a bunch of catastrophists say about these measurements can ever be trusted.

    • Steven Mosher

      Press releases claiming “warmest” or “coolest” are rather silly in my mind.
      precisely for the reason you state.

      now, back to the science.

        Steven, in what I think must have been March 2007, whilst I was waiting for the February HADCRUT figure to come out, there was a deluge of climate propaganda, so that nightly the news was full of climate-related stories. Then eventually (I would guess more than a week late) the figure came out and it showed the coldest February in 14 years. Of course there was no official press release, and in retrospect it was obvious the propaganda and the late release of the data were intended to saturate the media with stories so that they would not pick up on the story that global warming had come to an end (at least for that month).

        Over the next few months/years that figure has “warmed”. For anyone working in a quality environment, that kind of creeping change is a total anathema. For those producing climate data it seems to be a given that they can constantly change the data in the past without so much as an explanation.

        That February 2007 was the point I realised the figures are so bound up in propaganda that even with the best will in the world, the people involved could not be trusted. Climategate proved me right.

        Now 7 years later, nothing really has changed. We still have people making excuses for poor quality work. And to see the difference between “trying your best” and “fit for purpose”, see the image on my article:
        https://scottishsceptic.wordpress.com/wp-admin/post.php?post=3657&action=edit&message=6&postpost=v2

        None of them are accused of not “trying their best” – it was just that they didn’t produce something that met the requirements of the customer.

      • Matthew R Marler

        Scottish Sceptic: For those producing climate data it seems to be a given that they can constantly change the data in the past without so much as an explanation.

        Given the plethora of explanations, why the claim that there has not been an explanation?

      • Steven Mosher

        More PR comments about PR.
        Back to the science.

  16. Having worked with many of the scientists in question, I can say with certainty that there is no grand conspiracy to artificially warm the earth; rather, scientists are doing their best

    Well it isn’t good enough.

    You sound like someone talking about a charity where no one quite knows where the money has gone and some are claiming “they are doing their best”.

    We don’t need the academics “best”, what we need is the standard of qualify, accountability and general professionalism you see in the world outside academia.

    So:

    1. Fully audited methodology and systems
    2. Quality assurance to ISO9000
    3. Some comeback WHEN we find out they weren’t doing the job to the standard required, one that doesn’t involve putting them in jail.
    4. Accountability to the public – that is to say – they stop saying “we are doing our best” and start saying “what is it you need us to do”.

    • ISO 9000 on readings taken by farmers 100 years ago?

      • Think of it as repairing cars – the cars may be junk, but that does not mean you can’t do a good job.

        ISO9000 cannot improve the original data, but it will create a system which ensures quality in handling that data and the key to the system is the internal auditing, fault identification and correction.

        Instead, the present system is:
        1. Pretend its perfect
        2. Reluctantly let skeptics get data – “only because you want to find fault”.
        3. Deny anything skeptics find
        4. When forced to admit they have problems – deny it is a problem and claim “we are only trying our best”.

        Basically: Never ever admit there is any problem – because admitting problems shows “poor quality”.

        In contrast to ISO9000 … only by searching for problems and admitting them can you improve quality.

    • Steven Mosher

      “We don’t need the academics “best”, what we need is the standard of qualify, accountability and general professionalism you see in the world outside academia.”

      standard of “qualify”?

      stones and glass houses.

      The data is all open
      The code is all there.

      Yes in a perfect world everyone would be ISO9000. But as you know you are very often faced with handling data that was generated before the existence of ISO9000.

      According to ISO9000 how are these situations handled.

      Be specific, site the standard.

      • This is not the scientific standard as I understand it.

        http://www.nytimes.com/2014/07/07/us/how-environmentalists-drew-blueprint-for-obama-emissions-rule.html?_r=1

        What do you say?

      • @ Steve Mosher

        “The data is all open
        The code is all there.”

        And as Zeke went to great lengths to point out, the actual data stinks. Without going into motivations, the simple fact is that the actual data is being heavily massaged and used to produce headline after headline that states some variation of ‘Year X or Month Y is the hottest year/month of the last thousand years (or some other long period), beating the old record by a small fraction of a degree, and proving that we need to take action now to control ACO2 to avoid catastrophic climate change.’. And no amount of correcting, kriging, infilling, adjusting, estimating, or any other manipulation of sow’s ear data is going to turn it into silk purse data capable of detecting actual century or multi-century anomalies in the ‘temperature of the Earth’, whatever that is, with reliable hundredth or even tenth of a degree precision. The actual instrumentation system and the data collected by it is not ‘fixable’, no matter how important it is to have precision data, how hard the experts are trying to massage it, or how noble their intentions are in doing so. Using the previous analogy of the auditor, if the company to be audited kept its books on napkins, when they felt like it, and lost half of the napkins, no auditor is going to be able to balance the books to the penny. Nor dollar.

        We are told that anthropogenic climate change is the most important problem facing the human race at this time and for the foreseeable future. If so, why don’t the climate experts act like it?

        Want to convince me that it is important? Develop a precision weather station with modern instrumentation and deploy a bunch of them world wide.

        Forget the 19th-century max/min, read-them-by-hand thermometers and deploy precision modern instruments that collect data electronically, every minute if necessary, buffer it, and send it back to HQ at least daily for archiving. Make sure that they include local storage for at least a year or two of backup, in case of comms failure. Storage is cheap, in the field and at HQ.

        Deploy the stations in locations where urban heat is not a factor and in a distribution pattern that guarantees optimum geographic coverage. It is no longer necessary to have humans visit the stations for anything other than routine maintenance or, for really remote sites where electronic data forwarding is not feasible (Where would that be nowadays?), periodic data collection.

        Set up a calibration program; follow it religiously. Ensure that the ACTUAL data collected is precise enough for its intended purpose and is handled in a manner that guarantees its integrity. If data is missing or corrupted, it is missing or corrupted. It cannot be ‘recreated’ through some process like the EDAC on a disk drive. It’s gone. If precise data can be generated through kriging, infilling, or whatever, why deploy the collection station in the first place?

        Collect data for a long enough period to be meaningful. Once collected, don’t adjust, correct, infill, krig, or estimate the data. It is either data or it isn’t.

        Oh, and give up the fiction that atmospheric CO2 is the only important factor in climate variability, the climate models that assume that it is, and the idea that we can ‘adjust the thermostat of the Earth’ by giving the government, any government, taxing and regulatory authority over every human activity with a ‘carbon signature’.

      • k scott denison

        You know that Mosher, your repeating “the data is open, the code is all there” doesn’t relieve you of responsibility. You act as if this absolves you and your colleagues. Those of us out here in the real (regulated) world find that attitude arrogant and counterproductive. My advice to you is to develop an ISO9000 QMS system and have it audited. That would buy a lot of credibility. Until then, your snide remarks are undoing what credibility you may have had.

      • site the standard?
        stones and glass houses indeed!

      • Bob Ludwick
        +1000

      • Matthew R Marler

        Bob Ludwick: the simple fact is that the actual data is being heavily massaged and used to produce headline after headline that states some variation of ‘Year X or and blah, blah, blah.

        The BEST team is doing the best possible with the records that exist. Silk purses and sow’s ears are not in the picture. Though some people may be motivated to prove global warming and others may be motivated to prove there is no global warming, there is no justification for ignoring the temperature record outright or for using purely naive methods.

        Whether CO2 is important or not, getting the best inferences possible out of the data that exist is the best approach.

        You are not advocating that the whole temperature record be ignored, are you? If not, what exactly is wrong with the BEST team using the best methods?

      • Steven Mosher

        “You know that Mosher, your repeating “the data is open, the code is all there” doesn’t relieve you of responsibility. You act as if this absolves you and your colleagues. Those of us out here in the real (regulated) world find that attitude arrogant and counterproductive. My advice to,you is to develop an ISO9000 QMS system and have it audited. That would buy a lot of credibility. Until then, your snide remarks are undoing what credibility you may have had.”

        1. Who said we were relieved of responsibility.
        2. you find it arrogant. boo frickin hoo. your job is to find the mistake.
        you dont like my attitude, see your therapist. get some meds.
        3. What makes you think that ISO9000 is even the right standard?
        4. No amount of process will change your mind. You are not the least
        bit interested in understanding. Look you could be a skeptical hero.
        go do your own temperature series.
        5. credibility. Whether or not you believe me is immaterial. You dont matter. get that yet? when you do work and find the problems, then you matter. or rather your work matters. Appealing to credibility is the flip side of an appeal to authority.

      • Is the current product worth the price paid?

      • “2. you find it arrogant. boo frickin hoo. your job is to find the mistake.
        you dont like my attitude, see your therapist. get some meds.”

        Okay, Mr. Go-Read-a-Book. Go read this book:

        http://www.amazon.com/How-Sell-Yourself-Winning-Techniques/dp/1564145859/ref=sr_1_3?ie=UTF8&qid=1404838524&sr=8-3&keywords=selling+yourself

        Some relevant quotes:

        “Communication is the transfer of information from one mind to another mind…. Whatever the medium, if the message doesn’t reach the other person, there’s no communication or there’s miscommunication….
        We think of selling as being product oriented….Even when there’s a slight price difference, we rarely buy any big-ticket item from someone we really dislike.
        Ideas aren’t much different. The only time we pay close attention to an idea being communicated by someone we don’t like is when we have a heavy personal investment in the subject….
        Don’t waste your time with people on your side. They’re already yours…Forget about trying to convince the people on the other side. You’re not likely to make a convert with a good presentation. They’re already convinced that you’re wrong, or a crackpot, or worse. The only people who matter are the folks who haven’t made up their minds. The undecided. And how do you win them? By presenting yourself as a competent and likable person.”

        You can thank me later.

      • @ Mathew R. Marler

        “……….headline that states some variation of ‘Year X or and blah, blah, blah.

        The BEST team is doing the best possible with the records that exist. Silk purses and sow’s ears are not in the picture. That some people may be motivated to prove global warming and others may be motivated to prove there is no global warming, there is no justification for ignoring the temperature record outright or using purely naive methods.

        Whether CO2 is important or not, getting the best inferences possible out of the data that exist is the best approach.

        You are not advocating that the whole temperature record be ignored, are you? If not, what exactly is wrong with the BEST team using the best methods?”

        WHY is the BEST team doing the ‘best possible with the records that exist’? Why is it important that multi-century old data, collected by hand using data handling procedures that in general would earn a sophomore physics student a D-, at best, using instruments wholly unsuited to the task, be massaged, corrected, infilled, kriged, zombied, and otherwise tortured beyond recognition in order to tease out ‘anomalies’ of small fractions of a degree/decade, if NOT for the ‘……..headline that states some variation of ‘Year X or and blah, blah, blah……’? What OTHER purpose justifies the billions of dollars and thousands of man-years of effort? Were it not for the headlines, and the accompanying demands for immediate political action to control ACO2 to stave off the looming catastrophe that it will cause if we don’t control it, all citing the output of the ‘best efforts’ of the BEST team and others as evidence, would anyone notice that we are, as we speak, being subjected to the ongoing ravages of ACO2 driven climate catastrophe?

        Are you actually claiming that the ‘best efforts’ of the data massagers are able to not only tease out temperature anomalies with hundredth degree resolution for the ‘annual temperature of the Earth’ going back a thousand years or more, all but the most recent couple of hundred years based solely on a variety of ‘proxies’, but, having teased them out, are able to successfully attribute them to some specific ‘driver’, like ACO2?

      • Steven Mosher

        Nickels. You are not the customer.
        I am not interested in selling to you or anyone else.
        Folks who want the data get it for free.
        Psst
        You did a bad job of selling the book.
        Perhaps you should reread it

      • k scott denison

        Sorry Mosher, I didn’t realize your efforts were all mental masturbation. By all means, carry on both with your efforts to create information from data that isn’t up to the task and at trying to convince whomever it is you are trying to convince of whatever it is you are trying to convince them. Because honestly, most of the scientific world doesn’t believe you or your data.

        Good luck with that.

      • k scott denison wrote:

        “My advice to,you is to develop an ISO9000 QMS system and have it audited. That would buy a lot of credibility. “

        I think anyone who has worked in the regulated world has an appreciation for that comment but also can see the fleeting sardonic smile on your face when you wrote the above.

  17. Thanks for the sensible post Zeke…you may not get the kindest reaction here for suggesting there’s no massive conspiracy.

    • Zeke is doing a good enough job of proving there is a small conspiracy to mislead.

      • Steven Mosher

        Lewandowsky loves skeptics like you.

      • Mosher, Zeke has had since June 5th to prove me wrong by emulating my graphs.

        http://rankexploits.com/musings/2014/how-not-to-calculate-temperature/

        If I’m wrong I will apologize.

      • Sorry Bruce, averaging absolute temperatures when the network isn’t consistent gives you screwy results. The graphs in this post are nearly identical to those in Menne et al 2009, and use a method (anomalies + spatial weighting) used by pretty much every published paper examining surface temperature data.
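        To make the "screwy results" point concrete, here is a minimal Python sketch using made-up station values (not USHCN data, and not Zeke's actual code): averaging absolute temperatures over a network whose composition changes mid-record manufactures a spurious trend, while converting each station to anomalies from its own baseline before averaging does not.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2011)

# Two hypothetical stations with no climate trend but different mean climates,
# e.g. a cool mountain site and a warm valley site.
cool = 5.0 + rng.normal(0, 0.3, years.size)
warm = 15.0 + rng.normal(0, 0.3, years.size)

# The warm station only reports after 1980 (network composition changes).
warm_mask = years >= 1980

# Naive approach: average whatever absolute temperatures are available.
naive = np.where(warm_mask, (cool + warm) / 2, cool)

# Anomaly approach: express each station relative to its own 1981-2010 mean,
# then average the anomalies.
base = (years >= 1981) & (years <= 2010)
cool_anom = cool - cool[base].mean()
warm_anom = warm - warm[base].mean()
anom = np.where(warm_mask, (cool_anom + warm_anom) / 2, cool_anom)

def trend(y):
    return np.polyfit(years, y, 1)[0] * 10  # degrees per decade

print(f"naive absolute-average trend: {trend(naive):+.2f} C/decade (spurious)")
print(f"anomaly-average trend:        {trend(anom):+.2f} C/decade (~zero)")
```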

      • Zeke, you wrote in this blog post: ” infilling has no effect on CONUS-wide trends.”

        Yet you won’t post a graph with trendlines or post the trend difference.

        And your graph has a -0.2 to 0.5 scale and the data barely gets away from 0.

        We could be arguing about the trends if your post had numbers.

      • The graph has a scale consistent with all the other graphs. The impact of infilling is pretty much trend-neutral (rather by definition since it mimics spatial interpolation). The big adjustments are TOBs and the PHA.

      • Matthew R Marler

        sunshinehours1: Zeke is doing a good enough job of proving there is a small conspiracy to mislead.

        This is total ignorance. You plain and simply do not understand how the statistical analysis procedure works. And your evidence for a conspiracy to mislead is that your demonstrably inferior inferences are different in some cases?

    • Chris,

      From the outside looking in, the direction the adjustments almost always go seems pretty “convenient.” But the implication that most of us believe AGW is a “massive conspiracy” is also convenient. Seems the true conspiracy whack jobs are on your side of the fence.

      Would you care for a little cream and sugar with your straw man?

      • Steven Mosher

        see the sunshine.

      • Uh, when Obama says he is making $1 billion available to fight “climate change”, just who in academia do you think will get this through grants? Anything even REMOTELY skeptical will not even be allowed the light of day. Yes, that computes to MASSIVE…..
        Should Marcott get more grant money?

      • Steven Mosher

        DAYHAY

        changes the subject. not interested in understanding science

      • Mosh, you don’t want to be accused of being only interested in those who change the subject ;-)

      • Steven Mosher

        phatboy

        I think I could build a bot to parse comments and classify them

    • So I assume my graphs are not wrong, you just disagree with me on their significance.

      What do you mean by "screwy results", since you left the trend lines out of your infilling graphs?

      http://sunshinehours.wordpress.com/2014/07/07/misleading-information-about-ushcn-at-judith-currys-blog/.

    • stevefitzpatrick

      Chris Colose,
      In a development which is nearly as shocking as Nixon going to China, for once I agree with you; Zeke has done a good job of explaining a fairly messy process. I also agree he won't convince some people of anything, but at least he has laid out a clear explanation. Let's hope it influences the less strident.

    • You left-wing scientivists love conspiracy theories far more; eg every skeptic apparently receives money from Exxon or the Koch Brothers (who?) or about the USA going to war with Iraq because of oil. I bet most of you believe some other big whoppers too. Where did the expression Big Pharma come from anyway? So physician heal thyself!

      Of course conspiracies do actually happen but I don't believe you are a conspiracist. I believe you and your fellows genuinely believe the planet is warming dangerously due to manmade emissions. The main problem is that nature fundamentally disagrees with you. This is actually a very common occurrence in the history of science and is perfectly normal, even necessary for science to progress. It is also perfectly normal to find it difficult to admit you have been teaching (or been taught) the wrong thing for years. So conspiracy no, cognitive dissonance hell yeah!

      We have now conducted the experiment of adding a large slug of manmade CO2 and planet earth just shrugged it off. This experiment tells us that CO2 is clearly no more than a minor feedback to the climate system. Never mind the skeptics, that is what the actual data is screaming at you. You and your cronies just refuse to believe it, for reasons that are likely nothing to do with climate should you bother to think about it objectively.

    • I agree that Zeke’s post is sensible and helpful. It underscores the absurd nature of the task of trying to make sense of massive amounts of data collected in a haphazard way over the course of many many years by a lot of different groups. To further assert that the results of analyzing the data are adequate to determine that CAGW is real and the most important problem facing mankind is troubling.

    • Matthew R Marler

      Chris Colose: Thanks for the sensible post Zeke…you may not get the kindest reaction here for suggesting there’s no massive conspiracy.

      Thank you for that.

  18. Zeke,
    I’m a bit confused by figure 3, the distribution of Tobs over the USHCN. There are now only ~900 actual stations reporting rather than ~1200. However, the total station count in figure 3 appears to remain constant near 1200. How can a Tobs be assigned to a non-reporting station?

  19. Zeke, which version of USHCN was used? Because USHCN recalculates a lot of its temperatures daily I always try to put version numbers on the graphs.

    http://sunshinehours.wordpress.com/2014/06/28/ushcn-2-5-omg-the-old-data-changes-every-day-updated/

    The changes tend to warm the present as usual.

  20. “Nearly every single station in the network in the network has been moved at least once over the last century, with many having 3 or more distinct moves”

    What is the major cause of station moves?

    Is the general trend to move from a more urban environment to a more rural environment?

    Can we surmise that just after the move of a station the data is likely to be less wrong than at any other time in the station history?

    • In the 1940s there was a big transition from urban rooftops to more rural locations. When MMTS instruments were installed most stations had to move closer to a building to allow for an electric wired connection. Other station moves happen frequently for various other reasons.

      • Surely in this situation the adjustments to the raw data for an individual station should only apply at the point in time the change in location/instrument/TOBs took place?

      • Zeke, that you mentioned you had worked with many of the people involved would prevent you from any analysis in the private sector. By definition, you are biased not only because of this, but also because you and Mosher have declared yourself to be warmists/lukewarmers on multiple occasions. Did you honestly believe you’d be viewed as objective?

      • Steven Mosher

        “By definition, you are biased not only because of this, but also because you and Mosher have declared yourself to be warmists/lukewarmers on multiple occasions.”

        The problem with this is that you haven't read any of my comments on the issue of adjustments between 2007 and 2010.
        In short I was highly skeptical of everything in the record.
        until I looked at the data.

        Then again perhaps we should use your rule.
        Anthony is a non warmist. he is not objective
        Willis is a non warmist he is not objective.

        All humans have an interest. We cannot remove this.
        We can control for it.
        How?

        Publish your data. Publish your method. let others DEMONSTRATE
        how your interest changed the answer.

        Oh, two years ago WUWT published a draft study. no data. no code.
        and you probably believe it.

        Scafetta argues it's the sun. no data. no code. you probably believe it.

      • bit chilly,

        They are only applied when and where the breakpoint is detected. However, because these breakpoints tend to add a constant offset going forward (e.g. 0.5 C max cooling when switching to MMTS), you need to either move everything before the breakpoint down 0.5 C or everything after the breakpoint up 0.5 C. NCDC chooses the former as they assume current instruments are more accurate than those in the past, though both approaches have identical effects on resulting anomaly fields.
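        As an illustration of that bookkeeping, here is a minimal sketch with hypothetical numbers (not the actual PHA code): once the size of a break has been estimated, shifting everything before the breakpoint down or everything after it up differ only by a constant, so the resulting anomalies, and hence the trend, are identical either way.

```python
import numpy as np

years = np.arange(1970, 2001)
true_signal = 0.02 * (years - years[0])          # hypothetical underlying warming

# Hypothetical station: an instrument change in 1985 introduces a -0.5 C
# step in the raw record from that point onward.
break_year = 1985
raw = true_signal - 0.5 * (years >= break_year)

offset = 0.5  # estimated size of the break (assumed known here)

# Option A (NCDC-style): adjust the past, trusting the current instrument.
adj_past = raw.copy()
adj_past[years < break_year] -= offset

# Option B: adjust the present instead.
adj_present = raw.copy()
adj_present[years >= break_year] += offset

def anomalies(series):
    return series - series.mean()

# Both options give identical anomalies (they differ only by a constant),
# so the resulting trend is the same.
print(np.allclose(anomalies(adj_past), anomalies(adj_present)))  # True
```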

      • “you and Mosher have declared yourself to be warmists/lukewarmers on multiple occasions”

        I’ve pointed this out many times. The chances of them presenting information that contradicts their own declarations is zero.

        Andrew

      • andrew adams

        And by the same logic any chance of you accepting information which contradicts you own declarations is also zero. So basically none of us can ever really learn anything, or educate others, so we may as well give up on any hope of improving human knowledge.

      • Steven Mosher

        “I’ve pointed this out many times. The chances of them presenting information that contradicts their own declarations is zero.”

        Actually not.

        see my declarations about adjustments and UHI and microsite before I actually worked through the data. I used to be skeptical. I declared that.
        I was dead wrong.

        The chances of you looking at my past declarations is zero.

      • “see my declarations about adjustments and UHI and microsite before I actually worked through the data”

        Why don’t you post one in a comment and link a reference to it? Should be easy.

        Andrew

      • Steven Mosher

        easy
        start there
        http://climateaudit.org/2007/06/14/parker-2006-an-urban-myth/

        there are tons of others.

        read much.

      • Mosher,

        Why do I have to dig for it? Why don’t you just quote what you had in mind?

        Andrew

      • Andrew, why not grow a pair and do your own leg work.

      • I looked through Mosher’s link to CA and there are no “declarations” from him concerning adjustments and/or UHI.

        Thanks for nothin Mosher, as usual.

        Andrew

  21.  
     
    Adjust this:

    http://evilincandescentbulb.files.wordpress.com/2013/09/uhi-effect.jpg
     
     
    Such is, the Socio-Economics of Global Warming!

    • Steven Mosher

      Easy.

      Zeke shows you how in his paper. The sum total of UHI in the US is around
      .2C. Correctable.

      However, linking to a chart from the EPA that has no documentation of its source data, effectively one data point, is just the sort of science one expects from Wagonthon.

      one data point. from an EPA chart. that doesnt show its source..

      man, if you were Mann trying to pull that sort of stunt, Steyn would write a column about it

      • Kristen Barnes (Ponder the Maunder) at 15 years old could figure this out. Making decisions based on a climate model that is a simple construct of, “a climate system,” according to Pete Spotts of, The Christian Science Monitor, “that is too sensitive to rising CO2 concentrations,” would be like running a complex free enterprise economy based on the outcome of a board game like Monopoly. There is a “systematic warm bias” that, according to Roger Pielke, Sr., “remains in the analysis of long term surface temperature trends.” Meanwhile, the oceans that store heat continue to cool.

      • Steven Mosher

        “Kristen Barnes?”

        you realize that her work was really done by someone else..

        hmm maybe I should dig those emails up..

  22. US Temperatures – 5year smooth chart.
    As a layman I cannot comprehend how "adjustments" to the RAW recordings around 1935 can generate a 0.5C cooling. Sorry, but I just do not believe it and see it as an attempt to do away with the 1935 high temperatures and make the current period warmer, all in the "cause". As stated above, it is suspicious that all adjustments end up cooling the past to make the present look warmer.

  23. To those of us who have been following the climate debate for decades, the next few years will be electrifying. There is a high probability we will witness the crackup of one of the most influential scientific paradigms of the 20th century, and the implications for policy and global politics could be staggering. ~Ross McKitrick

  24. This is entertaining, a tweet from Gavin:

    Gavin Schmidt ‏@ClimateOfGavin 1m
    A ray of sanity in an otherwise nonsensical discussion of temperature trends and you won't believe where! http://wp.me/p12Elz-4cz #upworthy

    • Oh geez. You’ve poisoned the well by saying Gavin liked the post.

    • Judith, this is hardly a trivial matter. You are yet again trying to defend a culture which does not allow outside scrutiny to ensure it is producing quality work by saying “they are trying their best”.

      In my experience in industry almost everyone "tries their best", but that in no way guarantees quality. Instead, it is those in a culture that accepts rigorous inside and outside scrutiny, and that have a system to identify and correct problems and then drive through improvement, who ever achieve the highest quality.

      And in my experience, those that "sweep problems under the carpet" and have a general culture of excusing poor quality because they are "trying their best" are usually the ones with the greatest gap between the quality they think they are producing and the actual quality of what comes out.

      • Steven Mosher

        “defend a culture which does not allow outside scrutiny to ensure it is producing quality work by saying “they are trying their best”.

        outside scrutiny?

        Zeke doesnt work for NOAA

        They provided him ( and you) access to their data
        They provided him ( and you) access to their code.

        you dont work for NOAA.

        Zeke applied outside scrutiny
        You can apply outside scrutiny and you are not even a customer.
        Zeke has the skill
        You have the skill ( If I believe what you write)

        Take the data
        Take the code.
        Do an Audit
        Be a hero.

    • The comments prove Gavin right, again.

      • Matthew R Marler

        Chris Colose: The comments prove Gavin right, again.

        Very droll. They are an instance of his not being wrong.

  25. I really hope sunshinehours1's questions do not get lost in the comment thread. The answers to them should lead the discussion.

  26. Jeepers. The denizens are not showing their best side in the comments. “Consider that you may be mistaken.”

    • In the UK there is a sale of goods act that gives us the right to ask for our money back for goods or services that are “not fit for purpose”.

      We are just trying to exercise that right – except there is an academic cartel of like minded catastrophists who are stopping a reliable and impartial organisation coming in to do the job in a way that can be trusted.

      Let me put it this way. A cowboy builder comes in and puts up your house without proper foundations. They may well have done "the best they are able", but that doesn't mean it was good enough.

      We want people in charge of these temperature measurements who stop trying to excuse bad quality work, and instead an organisation that takes quality seriously.

      And to start – they have to understand what quality means – so Judith go read up about ISO9000

      Then tell me how many of those organisations doing these temperature figures even know what ISO9000 is let alone have it.

      • Matthew R Marler

        Scottish Sceptic: We are just trying to exercise that right – except there is an academic cartel of like minded catastrophists who are stopping a reliable and impartial organisation coming in to do the job in a way that can be trusted.

        You continue to miss several important points. (1) the statistical methods used by BEST are in fact the best available; (2) they have described their methods in published papers and have made their data and code available to anyone who wishes to audit them; (3) no one is stopping anyone from coming in to do the job in a way that can be trusted.

    • Steven Mosher

      They are all experts.
      And they forget their feynman about the ignorance of experts.
      Note how NONE of them address the science.
      Note how many commented before reading the papers zeke linked to.
      Note that none took time to look at the data or the code.

      Why?

      because they are not interested in understanding.
      period.

        Actually I designed temperature control and monitoring systems, ran a factory with several thousand precision temperature sensors, and then went into meteorological weather stations for the wind industry.

        From that experience I learnt that it was impossible to reliably measure the temperature of a glass slide about 1cm across to within 0.01C, let alone an enclosure a few tens of cm.

        Then I came across a bunch of academics who told me the end of the world was nigh because they were absolutely certain global temperature had risen since the days of hand-held thermometers to the modern era of remote instrumentation.

        … and I laughed … until I realised they were serious … and worse … people actually took them seriously. And then I was downright despairing when I saw that rather than the carefully planned sites I had imagined, there were sensors in parking lots.

        And then when those responsible said that none of that mattered and then started calling us “deniers” – in any other walk of life, ministers would resign and those responsible would go to prison.

      • Steven Mosher

        really sceptic?
        I dont believe you.
        show your data and code.
        appeals to personal experience and authority by someone who calls themselves a sceptic..
        tsk tsk.
        also, your iso9000 certs.
        thanks Ill wait

  27. One of the issues you’ve ignored is how the picture has been changed in the last few years. Back in 2000 the US temperature plots showed clearly that the 1930s were warmer than the 1990s, with 1936 0.5C warmer than 1998. Since then this cooling has been removed by the USHCN adjustments. This is Goddard’s famous blinking gif that appears regularly at his site. On the other hand it still seems to be acknowledged that most of the state record highs occurred in the 1930s (there are lists at various websites).

  28. HaroldW,

    Figure 3 ends in 2005, when there were still about 1100 stations in the network reporting.

    • Zeke,
      I agree with your point that figure 3 goes only to 2005, but that doesn’t explain the situation. From figure 2, the station count in 2005 was between 1000 and 1100, say 1075.

      Reading the most recent (2005) values from figure 3:
      AM: 750
      PM: 350
      Midnight: 120
      Other: 10
      The total is over 1200. There’s a minimal error involved in reading these values under magnification, and it’s not large enough to reconcile this total with an active station count below 1100. Non-reporting stations were associated in Menne with a time of observation, which is puzzling.

  29. Pingback: Did NASA and NOAA dramatically alter US climate history to exaggerate global warming? | Fabius Maximus

  30. I am unconvinced of the need to "adjust" the data. There are thousands and thousands of data points and associated error margins. The results are by their very nature statistical.

    “Adjustments” invariably invite abuse, whether intended or not.

    • Mike, I think Zeke’s explanation for why the adjustments are absolutely essential for calculating temperature changes over space and time was clear and compelling. I find it difficult to think of a cogent argument against it.

  31. Pingback: Have the climate skeptics jumped the shark, taking the path to irrelevance? | Fabius Maximus

  32. Pingback: Comment threads about global warming show the American mind at work, like a reality-TV horror show | Fabius Maximus

  33. BS baffles brains….you can bet every apostrophe was double checked on this message to say as little as possible.
    “But I want to say one thing to the American people. I want you to listen to me. I’m going to say this again: we did not screw around with the temperature data”

  34. The Fig. 8 caption appears to be incorrect.

    Figure 8. Time of observation adjustments to USHCN relative to the 1900-1910 period.

    Shouldn’t it say Pairwise Homogenization Algorithm adjustments?

    • Good catch. Asking Judy to fix it.

      • Should the years also be the 1900-2010 period?

      • The adjustments are shown relative to the start of the record to better show their cumulative effects over time. This is following the convention from the USHCN v1 adjustment graph on the NCDC website to use a baseline period of 1900-1910. In reality, what matters is the impact of the adjustments on the trend, so the choice of baseline periods is somewhat irrelevant and only really impacts the readability of the graph.
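        A tiny sketch of the baseline point, using arbitrary synthetic numbers rather than the USHCN series: re-baselining only subtracts a constant from the series, so the trend is unchanged whichever period is chosen.

```python
import numpy as np

years = np.arange(1900, 2011)
series = 0.007 * (years - 1900) + np.random.default_rng(1).normal(0, 0.1, years.size)

def rebaseline(y, yrs, start, end):
    sel = (yrs >= start) & (yrs <= end)
    return y - y[sel].mean()          # subtract the mean over the chosen baseline period

def trend(y, yrs):
    return np.polyfit(yrs, y, 1)[0] * 100   # degrees per century

a = rebaseline(series, years, 1900, 1910)
b = rebaseline(series, years, 1961, 1990)

# The two baselines offset the series differently but the slope is unchanged.
print(np.isclose(trend(a, years), trend(b, years)))  # True
```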

  35. If someone could explain why, after the initial adjustments are made to raw data (assuming they are valid/correct, which may or may not be the case), additional adjustments are made on a nearly annual basis, I might accept that there is "good faith" in making these adjustments.

  36. Zeke’s graph fig 5 shows that the total effect of the adjustments is a warming of about 0.5C from the 1930s to now.
    There is a graph at Nick Stokes’s Moyhu blog, also at Paul Homewood’s, showing 0.9F, ie the same.
    And I think this is what Goddard says also, so maybe that’s something everyone agrees on?

  37. Using current methodology, if the time series was extended by 500 years at either end, with random data from the existing data as input, would these adjustments even out to create a realistic manufactured temperature record, or would the past temperatures continue to decline, and future temps continue to rise, at a similar rate?
    If so, whilst your current methodology may be the best mathematically possible, it would indicate a problem.

  38. Andy Skuce of SkS tweets:

    Andy Skuce ‏@andyskuce 13m
    Great piece by @hausfath at @curryja blog, but don’t read the crazy comments. http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/

  39. An excellent and informative post. This is a “must read” by anyone who would hope to understand the complexities of this subject. Thanks for taking the time to write this, and thanks to Judith for providing the opportunity!

    • Steven Mosher

      Do a count of denizens who actually engage the science.
      you know a count of those who want to understand
      Do a count of denizens who

      A) invoke conspiracy
      b) question zeke’s motives.
      c) derail the conversation
      d) say they dont believe but provide no argument.
      e) refuse to do any work with the data or code, and yet call themselves engineers. eg springer.

      • Mosher, you spend a lot of time attacking me instead of the graphs I post.

        Maybe you should politely ask Zeke to add trendlines to this infilling graph. And change the scale a little. And do it by month.

      • Cut it out Mosher. Defensiveness is unbecoming. We didn’t say we didn’t believe Zeke. We said he is not in a position to be objective. Tell us you agree that!

      • I’ve read the emails of a lot of denizens. I’ve read takedowns of the remarkably poor quality of their work. They are totally untrustworthy people. Anyone who relies upon or has endorsed their work, knowing that they are untrustworthy, is also untrustworthy.

      • Steven Mosher

        sunshine.

        your graphs come from your code.
        in the past you made boneheaded mistakes.
        I’ll comment on the graphs when I study the sources and methods.

        See. I treat every problem the same.
        Zeke makes a claim. I go to the sources. FIRST
        You make a claim. I want to go to the sources. FIRST

        So, cough up your code. I will audit you and let you know.

      • Steven Mosher

        “Cut it out Mosher. Defensiveness is unbecoming. We didn’t say we didn’t believe Zeke. We said he is not in a position to be objective. Tell us you agree that!”

        huh. I already said that.
        Every human including you has an interest.
        none of us are objective, none of us are free from interest.
        We CONTROL for this by releasing our data and code.
        that way you can look all you like to see if you can DEMONSTRATE
        any piece or part where our interest changed the result.

        Doing science means you accept that individuals are not objective.

        Now, can I be objective about my judgements about zekes objectivity?
        Can you be objective about your observations?

        theres a paradox for you. go think about that.

      • No code yet Steve? No trend for infilling?

    • Matthew R Marler

      R. Gates: An excellent and informative post.

      I agree.

  40. Please don’t be afraid of space again.

  41. The root cause of the bias between MMTS and LIG measurements was not determined past some generalities: closer to buildings, wood temperature changed via coating type. I didn't see any testing that swapped or paired the thermometers in the housings. Nor were housing maintenance and temperatures paired. It's not unusual for some instrumental methods to have biases with some changes, for example, gas chromatography. However, there are methods to correct those biases. I didn't see any of that here.

    I haven’t seen the description of QC procedures for the instruments. Were they calibrated to some traceable reference standard once or periodically? If the latter, then what adjustments and annotations have were made to the data based on calibration and drift corrections? If this hasn’t been done, then you don’t know the accuracy of the measurements. I’ve been required by government or customers to recertify NIST traceable thermometers, including the master reference thermometer at 2-5 year periods and check the ones I used for actual measurements periodically. Anything like that going on with these measurements?

    Continuously adjusting past data products to match some current activities? I think it is a poor practice and in some cases, such as environmental data, it could be quite problematic. The same goes for infilling missing data. You either have the data for that station or you do not. It may be a reasonable assumption that the temperature at stations 10-30km apart will be similar, but you don’t know that and the estimate has to add significantly to uncertainty.

    • “It may be a reasonable assumption that the temperature at stations 10-30km apart will be similar”

      Actually, the Pielkes studied a region that included multiple stations and found that sites even a few km apart show very different climate records. And none of the stations replicated the regional averages.

    • Hi Bob,

      I dug into the MMTS issue in much more detail a few years back here: http://rankexploits.com/musings/2010/a-cooling-bias-due-to-mmts/

      The best way to analyze the effect of the transition is to look at pairs of otherwise similar stations, one of which transitioned to MMTS and the other of which remained LiG. There is a pretty clear and consistent drop in maximum temperatures. The rise in minimum temperatures is less clear, as there is significant heterogeneity across pairs. I’ve suggested in the past that the difference in min temperature readings might be a result of the station move rather than the instrument change, as many MMTS stations are located closer to buildings than their LiG predecessors.
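      To illustrate the paired-station idea (with entirely synthetic data, not the analysis in the linked post): difference a station that switched to MMTS against a nearby station that stayed LiG; the shared regional signal cancels, and a step in the difference series at the transition date estimates the bias.

```python
import numpy as np

rng = np.random.default_rng(2)
months = np.arange(240)                      # 20 years of monthly max temps
regional = 15 + 8 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.5, months.size)

switch = 120                                  # month the first station switched to MMTS
mmts_bias = -0.4                              # hypothetical cooling bias in Tmax

station_mmts = regional + rng.normal(0, 0.3, months.size) + mmts_bias * (months >= switch)
station_lig  = regional + rng.normal(0, 0.3, months.size)

# Difference series removes the shared regional climate signal;
# what is left is noise plus any relative step change.
diff = station_mmts - station_lig
estimated_bias = diff[months >= switch].mean() - diff[months < switch].mean()

print(f"estimated MMTS Tmax bias: {estimated_bias:+.2f} C (true value {mmts_bias:+.2f} C)")
```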

      • Thanks Zeke,
        I read that from the link in your post. It sounds like a reasonable way to estimate a bias in the absence of basic QC validation of the equipment change. For all the data massaging going on with this, I'd expect the adjustments to be made using a higher level of QC. Instrument/method validation is a pretty standard QC practice. Did they put the MMTS out thinking any difference was minor for the purpose (agriculture) and now we are trying to force fit it into something more serious, like a data source for rearranging economies?

      • Bob,

        The MMTS transition was dictated by the desire of the National Weather Service to improve hydrologic monitoring and forecasting. The climate folks at the time were very unhappy with this choice, as they wanted a consistent record, but climate monitoring was presumably less of a priority than weather monitoring back in the 1980s, and the stations were used for both.

      • Also, Bob, here is a good side-by-side study conducted after the transition: http://ams.confex.com/ams/pdfpapers/91613.pdf

      • Did no one think to run MMTS and LiG measurements in parallel at the same location for a few years (hell, days) to estimate the bias?

    • Steven Mosher

      “The root cause of the bias between MMTS and LIG measurements was not determined past some generalities: closer to buildings, wood temperature changed via coating type. I didn’t see any testing that swapped or paired the thermometers in the housings”

      read Qualye and then comment.

      • Specific link or should I just find the first Qualye on google?

      • Steven Mosher

        Bob proves that he did not read zeke.
        had bob read zeke and followed all the references
        he would have found Qualye
        instead bob wants me to do his homework

        Here is the link that zeke provided
        http://rankexploits.com/musings/2010/a-cooling-bias-due-to-mmts/

        read everything there. you are qualye hunting now.

      • Zeke, thanks for the Hubbard-Lin link, clarifies for me what has been done
        Mosher, (a) I had "read zeke" (b) It's "Quayle" not "Qualye" (c) I suppose it is easier to make cryptic remarks than actually put up a link and discuss what you consider important.
        My questions on root cause analysis of the differences seem to be answered. It wasn't done. Instead, comparisons were made using large numbers of stations and only one proximate set (CSU). In Quayle, mention was made of some stations having both CRS and MMTS for a while, but the data were ignored for months 0-5. I assume, but don't believe it was mentioned, that they may not have been recording both. The differences between the stations are conjecture: liquid separation (but no documentation of readings with this), differences between heating of shelters (but no documentation), siting (but no documentation). No discussions of instrument drift, calibrations or any of those messy QA/QC things.

        I’m late to this game and my questions were an attempt to form an opinion on the quality of this high-quality dataset and the adjustments. As has been said, the system wasn’t designed for what it is being used for.

    • Mr. Greene, these measuring instruments were not put into place to monitor climate change, as Zeke explains. They were pressed into service decades later. This has caused problems, obviously. Many of those problems have been cited by skeptics for a decade now. I think Zeke in this post has gone a long ways towards answering the questions posed by most and does, in my opinion, serve as an honest guide for anyone with an open mind.

  42. From: Tom Wigley
    To: Phil Jones
    Subject: 1940s
    Date: Sun, 27 Sep 2009 23:25:38 -0600
    Cc: Ben Santer

    “It would be good to remove at least part of the 1940s blip,
    but we are still left with ‘why the blip.'” and

    ‘So … why was the SH so cold around 1910? Another SST problem?
    (SH/NH data also attached.)’

    So they “fixed” the Southern Hemisphere as well.

    Well that certainly proves “good intentions” to me.

    • The early 1940’s blip was related to precautions taken by ships to avoid getting blown out of the water by u-boats and kamikazes.

      • thisisnotgoodtogo

        And there was no land blip

      • There is a blip in the land-only data too, and both blips occur around 1940. It seems to be a robust feature of the data, even if we do have to make a bucket correction for some of the SST measurements.

      • thisisnotgoodtogo, see this:

        http://www.columbia.edu/~mhs119/Temperature/

      • Wood for trees comparison:

        BEST, CRUTEM3 and HadSST2

        The argument that it’s an artifact does not seem to be a plausible one.

      • thisisnotgoodtogo

        Hi Carrick.

        “There is a blip in the land-only data too, and both blips occur around 1940. It seems to be a robust feature of the data, even if we do have to make a bucket correction for some of the SST measurements.”

        Yes, there is. WHUTTY was trying to slide stuff by again.

        We see that Tom and Phil were confabulating on how to adjust by figuring how much they wanted to take away from appearances. Like this: "Must leave some because there is a land blip, how much removal can we get away with?"

      • WebHubTelescope

        WWII was nasty. It affected measurements in ways that we will never quite figure out. The SST bias is well known and the data is patchy, the land measurements are possibly biased as well . But since the ocean is 70% of the global temperature signal, that is the one that clearly stands out.

      • WHT wrote
        “The early 1940’s blip was related to precautions taken by ships to avoid getting blown out of the water by u-boats and kamikazes”

        As noted by Tom and Phil , and circumlocuted by WHT, that does not explain the land blip.

        His response:
        “since the ocean is 70% of the global temperature signal, that is the one that clearly stands out”

        Clearly ! And getting rid of it by off-the-cuff figurings on what they could get away with, would affect Global average so much more ! Perfect.

      • ClimateGuy commented

        The early 1940’s blip was related to precautions taken by ships to avoid getting blown out of the water by u-boats and kamikazes.
        WHT wrote
        “The early 1940’s blip was related to precautions taken by ships to avoid getting blown out of the water by u-boats and kamikazes”

        The early 40’s blip was due to a warm AMO and a warm PDO overlapping.

      • “The early 40′s blip was due to a warm AMO and a warm PDO overlapping.”

        Partly, and that is accounted for in the natural variability. There is still a tenth of a degree bias due to mis-calibration as military vessels took over from commercial vessels during WWII.

    • Steven Mosher

      Chuck, the mail is about SST.
      This post is about SAT.

      Note another skeptic who cant stay on the topic of adjustments to the LAND data.

      doesnt want to understand.

      When Zeke shows up to discuss land temps, change the topic to SST.

      • Matthew R Marler

        Steven Mosher: doesnt want to understand.

        Assume good faith, and a range of intensities in “want”. Point out the error and then stop.

      • Steven Mosher

        mathew,

        How about this.
        How about YOU police the skeptics.
        Spend some time directing them to what the real technical issues are.

      • Yea Marler, during WWII the navy and merchant marine took over the responsibility for collecting SST measurements. Do you have any clue as to the calibration issues that resulted from that action?

        What are they supposed to say in emails? That Hitler and Hirohito really messed things up?

      • Matthew R Marler

        steven mosher: How about YOU police the skeptics.

        I read most of your posts and I skip most of the posts of some others. I’d rather not be distracted by the junk that you write.

        “Assume good faith” was taken from Zeke Hausfather. I guess you don’t think it’s a good recommendation.

      • I know what the post is about. I am questioning whether some of the players have “good intentions.” (No aspersions are being cast on what Zeke and even you, despite your drive-by cryptic arrogance, are doing.)

      • Mosh, these deniers see exactly what they want to see. Amazing that they can put blinders on to WWII — its almost a reverse Godwin’s law.

      • Matthew R Marler

        WebHubTelescope: That Hitler and Hirohito really messed things up?

        Well they did, dontcha know?

      • Steven Mosher

        Matthew

        Again,

        how about you police the skeptics.
        give it a shot.
        show your chops.
        it's good practice to call out BS wherever you see it.
        be a hero.

      • Matthew R Marler

        Mosher: it's good practice to call out BS wherever you see it.

        I can’t do everywhere. In particular, I try to ignore people who are always wrong. There are a couple who are right or informative just barely often enough, but others whom I never read.

      • Hey Steve–the skeptics don’t need to be policed. Some of them might benefit from being ignored a bit…


      • Tom Fuller | July 8, 2014 at 4:29 am |

        Hey Steve–the skeptics don’t need to be policed.

        That’s right, you don’t “police” little kids that make a mess of the house and get chocolate all over their face.

      • Steven Mosher

        “Hey Steve–the skeptics don’t need to be policed. Some of them might benefit from being ignored a bit…”

        yes you ignore them and they show up to say that their questions were never answered, their demands never met, that Zeke is hiding something, blah blah blah.

        I suggest that people who suggest ignoring should start by ignoring me as I play wack a mole.

        Its fun

        I get to have fun.

    • Matthew R Marler

      Chuck L. :From: Tom Wigley
      To: Phil Jones
      Subject: 1940s
      Date: Sun, 27 Sep 2009 23:25:38 -0600
      Cc: Ben Santer

      Why exactly is that relevant to Zeke Hausfather and Steven Mosher and the BEST team?

  43. Pingback: Misleading Information About USHCN At Judith Curry’s Blog | sunshine hours

  44. Zeke

    Well done for writing this long and informative post. It warrants several readings before I would want to make a comment. I do not subscribe to the grand conspiracy theory, nor that scientists are idiots or charlatans, or that a giant hoax is being perpetrated on us. Which is not to say that I always agree with the interpretation of data; often, extremely scant and dubious data is given far more credence than it should be.

    I will read your piece again and see if I have anything useful to say but thanks for taking the time and effort to post this.

    tonyb

  45. In my opinion, Zeke and Mosh are just two more "scientists" who are trying to change history by waving their hands. Leave the 1930s alone! You are no better than Mann and Hansen.

    • Steven Mosher

      dont address the science, attack the man.

      sceptical Lysenkoism

      • Leonard Weinstein

        Steve,
        I appreciate what Zeke has done here, and consider both he and you as basically reasonable and trying to be honest. However, this last comment is strange, since 99% of those that attack the scientists, are attacking skeptics (Lindzen, Christy, Spencer, etc,), and do exactly attack the man, not the science. It is a fact that many skeptics (including myself) started out accepting the possibility of a problem, and by studying the facts in depth came to the conclusion that CO2 effects are almost certainly small, dominated by natural variation, and mainly are desirable. I agree that there has been warming in the last 150 years, and a small part of that likely due to man’s activity. I really don’t care if it was 0.5C or 0.8C total warming, and if man contributed 0.1C or 0.4C of this. The flat to down temperature trend of the last 17 or so years, and likely continued down trend clearly demonstrate the failure of the only part of CAGW that is used to scare us: The models. I think the use of data adjustment and then making an issue of 0.01C as a major event is the bug in many of the skeptics here.

      • Steven Mosher

        leonard.

        good comment.

        here is the problem.

        there is all this skeptical energy. it should be focused on the issue that matters.

        how can I put this. After 7 years of looking at this stuff.. this aint where the action is baby.

      • +1 to Leonard

      • I totally agree. The focus on the “measured” temperature record is akin to mental mas…bation.

        So where do you think the action is?

      • Matthew R Marler

        Leonard Weinstein: The flat to down temperature trend of the last 17 or so years, and likely continued down trend clearly demonstrate the failure of the only part of CAGW that is used to scare us: The models. I think the use of data adjustment and then making an issue of 0.01C as a major event is the bug in many of the skeptics here.

        It is useful to address the measurement and temperature problems, and then to address the modeling and theoretical problems separately. Some of the people who have posted “skeptical” comments here clearly (imo) do not understand the statistical methods that have been employed in the attempt to create the best attainable temperature record. That’s independent of whether the same people or different people understand any of the CO2 theory or its limitations.

        This thread initiated by Zeke Hausfather is very informative about the temperature record and the statistical analyses. His next two promise more information about the temperature record and the statistical analyses.

      • David Springer

        “dont address the science, attack the man”

        Ah. Like you did earlier attacking me for calling myself an engineer? Actually it was my employers since 1981 who insisted on calling me an engineer. I prefer to call myself “Lord and Master of all I survey.”

        You are such a putz, Mosher. Of course you know that already.

  46. Regardless of whether these adjustments are made in good faith or not, I would like NASA to run some experiments. Take the pre global warming scare algorithms, and run them against the 1979 – current temperatures. Compare these to UAH. Then take today’s algorithms. Compare them to UAH. At least then the amount of adjusting that’s going on would be known.

    • Hi Ed,

      You can do one better: compare raw data and adjusted data to UAH. Turns out that over the U.S., at least, UAH agrees much better with adjusted data than raw: http://rankexploits.com/musings/wp-content/uploads/2013/01/uah-lt-versus-ushcn-copy.png

    • Steven Mosher

      Well ed?

      Zeke answered your complaint.

      Are you interested in understanding? Can you change your mind based on evidence?

      It was your question..

      Second, you realize that UAH is highly adjusted.
      right?
      you realize that the UAH record has changed many times by adjusting for
      instrument changes..
      right?

      • Zeke,

        Yeah, absolute temperatures are interesting, but I’m mostly interested in the change in the shape of the graphs. If modern day adjustments more closely follow the UAH shape than the algorithms of ten, or twenty years ago, then that gives food for thought. Specifically, I’m thinking UAH methodology is completely different from NASA’s, and so it’s unlikely errors in one are identically reflected in errors in the other. If the modern day adjustments more closely reflect UAH, that’s a good indication the approach is getting better. On the other hand, if modern algorithms yield cooling in the 1980s and warming in the 2000s vs. UAH and this effect is pronounced compared to earlier NASA algorithms, then that could indicate bias in either NASA or UAH algorithms, though probably in the NASA algorithms since now the previous NASA algorithms must be wrong and UAH too must be wrong.

        Why look at previous NASA algorithms? In my view bias is a subtle thing, and even people with very solid credentials and the best of intentions can get snookered.

        Mosher:

        What complaint am I making?

      • Steven Mosher

        Ed
        UAH and SAT are two different things.

        Suppose I had a method for calculating unemployment
        Suppose I had a method for calculating CPI

        both methods require and use adjustments.

        You dont learn anything by comparing them.

    • “You dont learn anything by comparing them.”

      How then do you interpret Zeke’s comment within the prism of your claim?

      “You can do one better: compare raw data and adjusted data to UAH. Turns out that over the U.S., at least, UAH agrees much better with adjusted data than raw:”

      I can think of several, one being he doesn’t agree with you. Here, Zeke is using UAH to bolster the idea that NASA adjustments make for a better temperature record. If you agree with that, then these are comparable data-sets. If not, take it up with Zeke.

      Meanwhile, I’m still waiting for you to explain my “complaint.”

  47. Pingback: The Skeptic demands: temperature data (draft) | ScottishSceptic

  48. Good post Zeke, but I'm curious: if you have several readings per day and average them to a temperature mean, wouldn't that wipe out any need for a TMax or Tmin adjustment?

    • Dale,

      If you had hourly readings you would no longer need TOBs adjustments. You would still have to do something about station moves and instrument changes, however. I’m a bit more of a fan of the Berkeley approach of treating breakpoints as the start of a new station record, rather than trying to conglomerate multiple locations and instruments into a single continuous station record.

    • Dale, hourly data is what is used to estimate the TOBS correction.

      See for example this post from John Daly’s site:

      http://www.john-daly.com/tob/TOBSUMC.HTM
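      For readers who want the intuition, here is a rough sketch of how hourly data can be used to estimate a time-of-observation bias (a synthetic diurnal cycle with made-up parameters, not USCRN data or the Karl method): simulate a min/max thermometer reset at a given hour and compare against true midnight-to-midnight days. An afternoon reset tends to double-count hot afternoons and a morning reset tends to double-count cold mornings, so the two observers end up with opposite biases; the magnitudes below depend entirely on the synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_days = 3650
hours = np.arange(n_days * 24)

# Synthetic hourly temperatures: a diurnal cycle (warmest ~3 pm, coldest ~3 am)
# plus day-to-day weather variability that persists through each day.
diurnal = 6 * np.cos(2 * np.pi * (hours % 24 - 15) / 24)
weather = np.repeat(rng.normal(0, 4, n_days), 24)
temps = 12 + diurnal + weather

def mean_for_obs_hour(obs_hour):
    """Long-run mean of (Tmax+Tmin)/2 when a min/max thermometer is reset daily at obs_hour."""
    means = []
    for d in range(1, n_days):                    # each reading covers the 24 h before the reset
        window = temps[(d - 1) * 24 + obs_hour : d * 24 + obs_hour]
        means.append(0.5 * (window.max() + window.min()))
    return np.mean(means)

# Reference: true calendar (midnight-to-midnight) days.
true_mean = np.mean([0.5 * (temps[d * 24:(d + 1) * 24].max() + temps[d * 24:(d + 1) * 24].min())
                     for d in range(n_days)])

for h, label in [(17, "afternoon (5 pm) observer"), (7, "morning (7 am) observer")]:
    print(f"{label}: bias {mean_for_obs_hour(h) - true_mean:+.2f} C vs midnight-to-midnight days")
```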

  49. Judith Curry

    When I have had to change instruments, I’ve run concurrent outputs for the same experiment to see if the results are the same: i.e., overlap.

    When I see that there has been a change using Liquid in Glass and two automated systems which necessitated physically moving the automated systems closer to buildings as well as Time of Observation changes, I am curious as to how long the readings ran concurrently so that there was overlap in using all of the instruments.

    For example: TOB, when there was a switch to AM from afternoon, how long (and I am assuming there were overlapping observations) was the observation period that had morning and afternoon recorded, a season? a year? a decade? ongoing?

    When the switch from LiG to MMTS or ASOS, how long was the overlap field observation? Or was this another in lab experiment?

    “NCDC assumes that the current set of instruments recording temperature is
    accurate,” Electronics don’t drift? go haywire? Issues with my computer tell me otherwise.

    I am first concerned with the fundamentals/integrity of the observations vs the fiddling with the outputs. Output fiddling is the game of statisticians on whom I am dependent for their own integrity.

    • RiH008:

      There have been a number of papers published looking at differences between side-by-side instruments of different types. This one for example: http://ams.confex.com/ams/pdfpapers/91613.pdf

      The NCDC folks unfortunately had no say over instrument changes; it was driven by the national weather service’s desire to improve hydrologic monitoring and forecasting. Per Doesken 2005:

      “At the time, many climatologists expressed concern about this mass observing change. Growing concern over potential anthropogenic climate change was stimulating countless studies of long-term temperature trends. Historic data were already compromised by station moves, urbanization, and changes in observation time. The last thing climatologists wanted was another potential source for data discontinuities. The practical reasons outweighed the scientific concerns, however, and MMTS deployment began in 1984.”

      • Zeke Hausfather

        Thank you for your response. As I understand it, NWS made the decision to change the instrumentation and in some cases location of the observing stations.

        I did not see anywhere how the transition took place.

        A 20 year retrospective analysis of one station in Colorado:

        “Is it possible that with aging and yellowing of the MMTS radiation
        shield that there is slightly more interior daytime
        heating causing recent MMTS readings to be more
        similar to LIG temperatures. But in a larger
        perspective, these changes are very small and
        would be difficult to detect and explain, except in a
        controlled co-located environment. Vary small
        (less than 0.1 deg F) changes in MMTS-LIG
        minimum temperatures have also been observed,
        with MMTS slightly cooler with respect to LIG. The
        mean annual MMTS-LIG temperature differences
        are unchanged.
        Just as in the early years of the
        intercomparison, we continue to see months with
        larger and smaller differences than the average.
        These are likely a function of varying
        meteorological conditions particularly variations in
        wind speed, cloud cover and solar radiation.
        These are the factors that influence the
        effectiveness of both the MMTS and LIG
        radiations shields.”

        If I am understanding what the article you provided said: there were no side-by-side comparisons of LiG and electronic observers in a prescribed way. There may have been some side-by-side, and there are anecdotes, but the transition was not geared for climate research, particularly longitudinal. The instrument period observations are influenced by meteorological conditions not quantitated.

        It appears to me that the instrument period, at least from the transition onward, is spurious because of that transition. The adjustment mechanisms are ill-designed and ill-suited to this data set, and there are ~0.5 C adjustments based upon a best….estimate. This is all here in the USA. What happened around the world?

        I am still curious.

      • RiH008,

        There is no prescribed 0.5 C adjustment for MMTS transitions. It's handled by the PHA, which looks for breakpoints relative to neighbor difference series. Instrument changes tend to be really easy to pick up using this approach, as they involve pretty sharp step changes up or down in min/max temperatures.

        In that particular case it's pretty clear that there is a ~0.5 C difference in max temp readings between the instruments. I looked at many other examples of pairs of stations here: http://rankexploits.com/musings/2010/a-cooling-bias-due-to-mmts/
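        A toy version of the neighbor-difference idea (synthetic series; the real PHA is considerably more sophisticated): subtract a composite of neighbors from the target so the shared climate signal cancels, then find the split point that best fits the difference series as two constant segments.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 360                                            # 30 years of monthly anomalies
climate = np.cumsum(rng.normal(0, 0.05, n))        # shared regional signal

neighbors = climate + rng.normal(0, 0.2, (5, n))   # five nearby stations
target = climate + rng.normal(0, 0.2, n)
target[200:] -= 0.5                                # hypothetical instrument change at month 200

# Difference series: target minus the neighbor composite removes the shared signal.
diff = target - neighbors.mean(axis=0)

def best_breakpoint(d):
    """Find the split minimizing the residual variance of a two-segment mean fit."""
    best_k, best_cost = None, np.inf
    for k in range(12, d.size - 12):               # require at least a year on each side
        cost = ((d[:k] - d[:k].mean())**2).sum() + ((d[k:] - d[k:].mean())**2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

k = best_breakpoint(diff)
step = diff[k:].mean() - diff[:k].mean()
print(f"breakpoint detected at month {k} (true: 200), estimated step {step:+.2f} C")
```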

  50. In case anyone wondered whether the Karl 1986 TOBS paper had good data …

    http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-605470

    • Steven Mosher

      See the follow up paper.

      • Which one? Is the data any better?

      • Steven Mosher

        read zeke again.
        take notes.
        write down the references.
        read the papers
        get the data.
        get the code.
        write your own code.
        compare the results.
        write a paper.
        be a hero.

      • Second paper

        Ooops:

        "Data for the analysis were extracted from the Surface Airways Hourly database [Steurer and Bodosky, 2000] archived at the National Climatic Data Center. The analysis employed data from 1965–2001 because the adjustment approach itself was developed using data from 1957–64. The Surface Airways Hourly database contains data for 500 stations during the study period; the locations of these stations are depicted in Figure 2. The period of record varies from station to station, and no minimum record length was required for inclusion in this analysis"

        Wow. The stations could and would have moved spatially and in elevation.

      • Steven Mosher

        yes bruce.
        and the station moves are part of the reason why the error of prediction is
        what it is.

        If you had been reading my comments from 2007 to 2010, you'd know
        how important the error of prediction is.

        Its not that hard to understand.

        give it a try.

        you could actually go through the records and find the stations that moved. its pretty simple.

        show us your chops.

        Oh, when you do, tell Roy Spencer he uses the same data without accounting for the moves.

  51. After the appalling comment by Judith that "they are only trying their best", it seemed to me that rather than saying what is currently wrong with the present system, what I really wanted to do was to say what we needed instead. So, I've decided to "list my demands" on my own website. I would welcome any comments or additions.

    https://scottishsceptic.wordpress.com/wp-admin/post.php?post=3657&action=edit&message=6&postpost=v2

    • ScottishSceptic: I had a problem with your link

    • Steven Mosher

      read ISO9000 for starters. thats my advice.

    • Scottishsceptic

      Your link goes to a place which asks for my email AND a password.

      I have no wish to create yet more passwords. When I bought underlay for my carpet online I was required to create a password so these days I tend to steer clear of new places that require one for no good reason.

      Tonyb

  52. Mr Hausfather,

    I tend to agree with some comments regarding the lack of credibility caused by the "scientific community's" bad apples as they try to evolve into "scientific manipulators". I can see they are giving you a headache.

    The problem, as I see it, is that data manipulation is quite evident. They do tend to treat the public with a certain contempt.

    And I'm not referring to the temperature adjustments. I'm referring to the use of the red color palette by NOAA to display worldwide temperatures, and similar issues, or the use of tricked graphs and similar behaviors. You know, if we use a search engine and start searching for climate change graphs and maps, there's a really interesting decrease in the number of products after 2010. It seems they realized the world's surface wasn't warming, and they stopped publishing material. This is seen in particular in US government websites. Is the US President's "science advisor's" political power reflected in the science they show us?

    Anyway, I realize this thread is about temperature adjustments in the USA. But I do wonder, does anybody have a record of the temperature adjustments by independent parties, for example Russia and China? Do you talk to personnel in the WMO Regional Climate Center in Moscow?

    • Steven Mosher

      If you dont like the colors download the data and change the palette.

      • Mr Mosher, I'm sophisticated enough to catch "palette bias". I don't need to download the data. However, US government websites intended for the general public do have a significant bias. And it's not reasonable to expect individual members of the public to understand there's a bias, download the data, and plot it using software most of them lack.

        I'm extremely cynical when it comes to honesty by government leaders in general, and this applies to the whole spectrum. Thus my social criticism isn't aimed at a particular population of politicians (although I do admit I have an issue with real red-flag-waving communists).

        Take US politics. Those of us who are smart enough realize we got lied to about the Tonkin Gulf Incident, that Clinton lied about genocide in Kosovo, that Bush lied about WMD in Iraq, etc etc etc.

        Therefore I'm not really surprised to see government agencies toe the line and use deceit to plug the party line du jour. On the other hand, I do write and talk to explain that these deceptions do go on. During the Tonkin Gulf Incident I was sort of innocent and I wasn't too aware of what went on out there. Later, as I realized things were being distorted, I made it my hobby to research what really went on. And what I found wasn't so nice.

        This climate warming issue is peanuts. How do you like the fact that we spent $1 trillion invading Iraq looking for those fake WMD and here we are 11 years later watching a Shia thug allied with Iran fighting a civil war against a bunch of Sunni radicals? This climate warming issue is peanuts compared to the lies and the mistakes the US government makes when it lies to the people to justify making irrational moves.

      • Steven Mosher

        "Mr Mosher, I'm sophisticated enough to catch "palette bias". I don't need to download the data. However, US government websites intended for the general public do have a significant bias"

        Show me the experiment you did to prove the bias.

        If you don't like the palette, do what I do:
        change it.

      • David Springer

        Fernando – excellent. It went completely over Mosher’s head of course so his instinct was to simply repeat the unreasonable demand.

    • Rud Istvan

      Jennifer Marohasy has documented "cool the past and/or warm the present" for specific stations in Australia by its BOM (equivalent to NCDC), in their so-called High Quality (HQ) data set. The bias was so obvious that a national audit of HQ was demanded under an Australian law. The BOM response was to drop HQ and commence with a new homogenization program.
      In New Zealand, NIWA has aggressively and apparently unjustifiably cooled the past. A lawsuit was filed seeking technical disclosure. It got rebuffed at the highest court level on dubious legal grounds similar to Mass. v. EPA. Appeals courts are not well positioned to determine matters of fact rather than law, and depending on how laws are written have to defer to fact finders like EPA or NIWA even if biased.
      Frank Lansner's RUTI project has similarly documented at least regional warming bias in HadCRUT.
      Steirou and Koutsoyiannis documented warming homogenization bias in global GHCN using a sample of 163 stations. The paper was presented at EGU 2012 and is available online from them. Quite a read.

  53. Mosher and Zeke,

    After all the adjustments, how do you determine if the information is more accurate than before the adjustments?

    Andrew

    • Steven Mosher

      simple. out of sample testing.

      With TOBS what you do is this (this is how it was developed):

      you take 200 stations

      you make two piles

      You use 100 to develop the correction.

      you predict what the other 100 should be recording.

      you compare your prediction to the observations.

      You see that your prediction was solid

      You publish that paper years ago.

      Then you watch skeptics avoid reading the paper, and you watch them demand proof.

      When you point them at the proof, they change the subject.

      When you point out that they are avoiding reading the proof they demand, they get nasty and attack Zeke's motives.
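
      For illustration, a minimal sketch of this kind of hold-out ("two piles") test in Python. Everything here is synthetic and assumed (the station count, the bias model, and the use of observation hour as the lone predictor); the actual TOBs correction was developed from hourly data, as described in the papers linked elsewhere in this thread.

      ```python
      # Minimal sketch of the "two piles" hold-out test described above.
      # All values are synthetic placeholders; the real TOBs correction was
      # developed from hourly observations, not from this toy predictor.
      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic stations: the hour of the daily reading and the bias it induces
      n_stations = 200
      obs_hour = rng.uniform(6, 18, n_stations)                              # reading time
      true_bias = 0.03 * (obs_hour - 12) + rng.normal(0, 0.05, n_stations)   # deg C

      # Two piles: develop the correction on one, test it on the other
      train, test = np.arange(0, 100), np.arange(100, 200)

      # "Develop the correction": fit bias as a function of observation hour
      coef = np.polyfit(obs_hour[train], true_bias[train], deg=1)

      # "Predict what the other 100 should be recording"
      predicted = np.polyval(coef, obs_hour[test])

      # "Compare your prediction to the observations"
      rmse = np.sqrt(np.mean((predicted - true_bias[test]) ** 2))
      print(f"out-of-sample RMSE of the predicted bias: {rmse:.3f} C")
      ```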

      • “you take 200 stations

        you make two piles

        You use 100 to develop the correction”

        Doesn’t sound very scientific to me. Just sounds like you are making group A more like group B. There is no scientific reason to do this.

        Andrew

      • Tom Scharf

        This assumes the stations are independent of each other and not affected by independent variables, which is not always the case. If the in sample and out of sample data consistently read incorrectly the same way, a “confirmation” could still occur. Out of sample testing can be very useful, but there are many ways to do it wrong and sometimes no way to do it right depending on the data sets available. Not saying it was done wrong here, only saying that stating OOS testing was done is not a blanket confirmation. Certainly better than not doing it at all.

        Another example, if one claimed the post 1980 divergence issue in tree rings was out of sample confirmation data, then it would fail and clearly invalidate the tree ring proxy record. So we have an OOS failure but the reconstruction still holds for many.

      • "Then you watch skeptics avoid reading the paper, and you watch them demand proof.

        When you point them at the proof, they change the subject.

        When you point out that they are avoiding reading the proof they demand, they get nasty and attack Zeke's motives."

        All the good work Zeke is doing to help improve communication on this issue…..
        another “just saying”….

      • “you make two piles
        You use 100 to develop the correction.
        you predict what the other 100 should be recording.
        you compare your prediction to the observations.”

        It seems to me the only way to actually verify a “correction” for a change in equipment, location or procedure, is to continue taking temps at the same location(s) using both methods/instruments over an extended period of time. If you do that, with enough stations, and the change in each is the same within a certain range, it seems to me that that gives you your correction with error bars for that change. (You could then use it to “predict” the change in temps at other sites, but I don’t see the purpose. How do you know the temps/average temps/trends of the other stations remained the same?)

        Is this what “develop the correction” means?

        If on the other hand, you are making a statistical “correction” based on assumptions and then comparing it against other stations to see if your “predictions” are correct, I don’t see the value in that at all.

      • David Springer

        The time of observation ate my global warming.

        Priceless.

  54. If Zeke is to be allowed three long guest posts here, how about allowing Goddard to write one?

  55. Pingback: Adjustments To Temperature Data | Transterrestrial Musings

  56. Skeptics are better off barking up another tree than the temperature record.

    I trust they can read a thermometer without letting their political activism get in the way. This is one measurement area where attempting to corrupt the record would be easy to identify, as opposed to the paleo record which is a mess of assumptions, guesses, and questionable statistics.

    One problem I have with the temperature record is when it is presented without a vertical scale in the media, which seems to happen much more often than one would expect. The same goes for sea level rise.

    Another issue is when it is shown only from 1950 or 1980, which hides the fact that the first half of the 20th century had significant warming which was not AGW-based. This is such old news that it is never discussed anymore, but I think it is significant relative to how much natural forces may be responsible for the last 50 years of warming.

    Presenting the magnitude of the temperature change over the past century relative to how much the temperature changes on a daily or yearly basis can be quite an eye opener to many people who seem to believe this warming is “dramatic”.

    http://instituteforenergyresearch.org/wp-content/uploads/2012/05/Nordhaus-4.png

    • There’s no problem with the temperature record. The problem is with the ‘adjustments’, which with each revision add in more and more warming. The first USHCN version added in 0.5F warming, now they are adding 0.9F. It’s the so-called scientists who can’t read a thermometer without their political activism getting in the way.

      • Steven Mosher

        “It’s the so-called scientists who can’t read a thermometer without their political activism getting in the way.”

        I'm a libertarian.

        Your theory is that liberal scientists are making stuff up because of their activism.

        TOBS was first done in 1986, before the IPCC.
        I'm a libertarian; where is my activism?

        So much for your theory.

        more bad science from you.

      • You have excelled yourself here. It’s all about you! As with climategate, you seem to have a delusional view of your own importance.

      • “So-called scientists”? No ad homs here. No sirree.

      • Paul Matthews:

        There’s no problem with the temperature record.

        I thought you were smarter and better informed than this.

        Of course there are problems with the (raw) temperature record. Given the manner in which the data were collected, the issue isn’t whether the data should be adjusted to correct for the errors, but whether sufficiently good adjustments could ever be made, and whether we could know that they had been made.

      • @Carrick
        … and how much error is added to the data with each estimated correction and adjustment and how much uncertainty flows to the results of the analysis.

      • Steven Mosher

        ” and how much error is added to the data with each estimated correction and adjustment and how much uncertainty flows to the results of the analysis.”

        That is a good question.

        One thing that I droned on about for maybe 3 years was the propagation of errors due to adjustment.

        It's one reason I like the Berkeley approach better. It's top-down,
        AND we have much larger errors than Jones.

        He flipped out when he read this and could not understand the math

    • Matthew R Marler

      Tom Scharf: Skeptics are better off barking up another tree than the temperature record.

      I agree, but I am glad that other people are watching this with energy and alertness.

    • Tom Scharf

      I would certainly say that "miraculously" many temperature adjustments seem to make the past colder and the present warmer, and the adjustments mostly trend in that direction over time. This certainly raises the question of confirmation bias, but you have to look into what they actually did, and I don't see any authentic corruption here.

      Enough people have looked into it (particularly BEST in my opinion) that it seems good enough to me and not likely to get much better, or change much from here on out.

    • Nice point about the presentation. I’d thought of that but your link was the first time I had seen the presentation in a normal scaling… Telling, eh?

  57. Pingback: More Globaloney from NOAA - Page 4 - US Message Board - Political Discussion Forum

  58. Thanks, Zeke and Judith, for this post. It is exactly the kind of thing I look for on climate blogs: basic information to better my own personal understanding (and with less hype, even if the lower hype makes it a bit less exciting than the latest post ostensibly threatening to up-end the field).

  59. Don Monfort

    The USCRN doesn’t seem to be working properly:

    http://www.forbes.com/sites/jamestaylor/2014/06/25/government-data-show-u-s-in-decade-long-cooling/

    Adjustments will be needed.

  60. Matthew R Marler

    Zeke Hausfather, thank you for your post, and the responses to comments. I look forward to your next posts.

    Steven Mosher, thank you for your comments as well.

    From Zeke: Their methods may not be perfect, and are certainly not immune from critical analysis, but that critical analysis should start out from a position of assuming good faith and with an understanding of what exactly has been done.

    Yes to understanding exactly what has been done.

    “Assuming good faith” is a problem. The assumption should be that errors have been committed, and that the people who made the errors will be very defensive about having them pointed out. Sorry. It’s hard to avoid thinking that a check of your work (or my work) is an assault on your integrity or value as a person (or mine). Assuming good faith is why journal editors generally have trouble detecting actual fraud; everybody makes mistakes, and the reputation of academia is that they do not do as good a job checking for errors in programs as do the pharmaceutical companies, who have independent contractors test their programs. “Assuming good faith” ought to be reciprocal and about equal, and equally conditioned.

    Should FOIA requests be granted the “assumption of good faith”, however conditioned or qualified? Say the FOIA requests made to the U of VA by news organizations and self-appointed watchdogs for the emails of Michael Mann? Or perhaps the re-analyses by Stephen McIntyre of data sets that have had papers published about them? It’s a tangent from your post, which is a solid contribution to our understanding.

    • Steven Mosher

      ““Assuming good faith” is a problem. The assumption should be that errors have been committed, and that the people who made the errors will be very defensive about having them pointed out.”

      Err no.

      Assuming good faith is not a problem.
      you do work for me. I assume you will make mistakes. that is not bad faith.
      you do work for me. I claim you must have made mistakes because you
      are self interested and because some one across the ocean made mistakes in a totally different field, and I refuse to look at your evidence
      until you prove you are a virgin. that is what most skeptics do.

    • Matthew R Marler

      Mosher: Err no.

      Assuming good faith is not a problem.
      you do work for me. I assume you will make mistakes. that is not bad faith.
      you do work for me. I claim you must have made mistakes because you
      are self interested and because some one across the ocean made mistakes in a totally different field, and I refuse to look at your evidence
      until you prove you are a virgin. that is what most skeptics do.

      How you do go on.

      There are professionals whose work is always audited. I mentioned the pharmaceutical companies, whose programs are always checked by outsiders. Financial institutions have their work audited; professional organizations like AAAS and ASA have their finances audited; pharmaceutical and other scientific research organizations maintain data audit trails and they are subject to audits by internal and external auditors.

      Whether the auditors assume good faith or not, mistakes are so prevalent that it ought to be assumed by auditors that they are present.

    • Steven Mosher

      “Whether the auditors assume good faith or not, mistakes are so prevalent that it ought to be assumed by auditors that they are present.”

      I can tell you with CERTAINTY that there are mistakes in our product.
      It is not a can of Pringles.

      Let's start from the top.

      1. De-duplication of stations.
      We decide algorithmically when two stations are the same or different;
      starting with over 100K stations, we reduce this to 40K unique.
      There WILL BE errors in this, even if our algorithm were 99% perfect.
      Central Park was a nightmare of confused source data.
      Another user pointed out an error that led to a correction of 400 stations.
      There are errors in the EU where the metadata indicate two stations
      and some users insist that historically there was only one.
      These errors don't affect the global answer, but the local detail will
      not be the best you could do with a hand check of every station record.

      2. The climate regression. We regress the temperature against elevation
      and latitude. This captures over 90% of the variation. However, these
      two variables don't capture all of the climate. Specifically, if a station is in an area of cold-air drainage, the local detail will be wrong in certain seasons.
      Next, because SST can drive temps for coastal stations and because the
      regression does not extract this, there will be stations where the local detail is wrong. However, adding distance to coast doesn't remove any
      variance on the whole, so the global answer doesn't change. If you're really interested in the local detail, then you would take that local area and do
      a targeted modelling effort.

      3. Slicing. The slicing can over-slice and under-slice. It relies on metadata
      and statistical analysis, so there will be cases of over-slicing and under-slicing. This is one area where we can turn the slicing knob and see the effect; there will be a local effect and a global effect.

      4. Local detail. One active question under research is how high a resolution we can drive to. Depending on choices we make, we can oversmooth the local detail or undersmooth it. Some products, like PRISM, drive the resolution down to sub-30-minute grids; this tends to give answers that are thermodynamically suspect. On the other hand you have CRU, which works at 5 degrees.
      Now, you can play with this resolution, from 5 degrees down to 1/4 degree;
      what you find is the global answer is stable, but the local detail is increased.
      The question is "is this local detail accurate?"

      The question of bad faith is this: are these errors, which we freely admit, the result of my libertarian political agenda, or Zeke's more liberal political agenda? Please decide which one of our agendas created these errors which we freely admit to.
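
      For point 2 above, a minimal sketch of what a regression of station climatology against latitude and elevation looks like. The station values and coefficients below are synthetic assumptions, not the Berkeley Earth fit; they simply illustrate how two fields can capture most of the spatial variation.

      ```python
      # Sketch of regressing station climatology on latitude and elevation.
      # Synthetic stations only; Berkeley Earth's actual regression sits inside
      # a larger least-squares/kriging framework.
      import numpy as np

      rng = np.random.default_rng(1)

      n = 500
      lat = rng.uniform(25, 50, n)        # degrees north
      elev = rng.uniform(0, 3000, n)      # metres
      # Assumed "true" climatology: cooler toward the pole and with altitude
      temp = 30.0 - 0.6 * lat - 0.0065 * elev + rng.normal(0, 1.0, n)

      # Ordinary least squares: temp ~ 1 + lat + elev
      X = np.column_stack([np.ones(n), lat, elev])
      beta, *_ = np.linalg.lstsq(X, temp, rcond=None)

      resid = temp - X @ beta
      r2 = 1 - resid.var() / temp.var()
      print(f"lat coef = {beta[1]:.3f} C/deg, elev coef = {beta[2] * 1000:.2f} C/km, "
            f"variance explained = {r2:.1%}")
      ```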

  61. New Rule: Anyone who doesn’t trust the temperature data, can’t use that data as evidence for the Pause.

    • Which set of data?

    • Don Monfort

      It’s like this davey, if the Soviet Union admitted one year that production of cement had declined, you could believe them.

    • Steven Mosher

      David, you are expecting consistency from skeptics.
      They will question the record when it fits their agenda;
      they will endorse the record when it fits their agenda.

      They will ignore that the very first skeptical attempt to construct a record
      (Jeff Id and RomanM) actually showed more warming.

      • Don Monfort

        Pointing out that their record shows no warming is not necessarily endorsing their record. You know that.

      • Steven Mosher

        Don, citing the record AS PROOF of a pause,
        citing the record AS PROOF that CO2 is not the cause,
        requires, logically, endorsement.

        Merely pointing is one thing. Citing as proof is another.

        I own a gun.
        you find your enemy dead.
        the bullet matches my gun.
        You argue against the match, you raise doubts.
        You find your dog dead.
        the bullet matches my gun.
        You argue I killed your dog

      • Mosher will be denying the Pause any moment now.

      • Steven Mosher

        no bruce.

        I’m pretty clear on the pause.

        wishful thinking.

        1. If you assume that the underlying data-generating model is linear
        2. And you fit a straight-line model to the data.
        3. The model will have a trend. Not the data; data just is.
        4. The trend in that model will have an uncertainty.
        5. Depending on the dataset you select and the time period, you can
        find a period in the recent past where the trend of the assumed model is "flat".

        some people refer to this as a pause, hiatus, slowing, etc.

        It's just math.
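
        A minimal sketch of steps 1-5: assume a linear model, fit it to an anomaly series, and report the trend of that model together with its uncertainty. The series below is synthetic, not any published dataset, and the simple OLS error bars ignore autocorrelation, which widens them in practice.

        ```python
        # Sketch: fit a straight-line model to an anomaly series and report the
        # modelled trend with a 2-sigma uncertainty. Synthetic data only.
        import numpy as np

        rng = np.random.default_rng(2)

        years = np.arange(1995, 2015)
        anoms = 0.01 * (years - years[0]) + rng.normal(0, 0.1, years.size)  # deg C

        # OLS fit: anomaly = a + b * t, with years centred for numerical stability
        t = years - years.mean()
        X = np.column_stack([np.ones(t.size), t])
        beta, res_ss, *_ = np.linalg.lstsq(X, anoms, rcond=None)

        # Standard error of the slope from the residual variance
        dof = t.size - 2
        sigma2 = res_ss[0] / dof
        cov = sigma2 * np.linalg.inv(X.T @ X)
        slope, slope_se = beta[1], np.sqrt(cov[1, 1])

        print(f"modelled trend = {slope * 10:.3f} +/- {2 * slope_se * 10:.3f} C/decade (2 sigma)")
        ```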

      • David Wojick

        “The data just is” with no properties? What a strange concept of reality!

      • Don Monfort

        You are making sweeping generalizations about skeptics, Steven. Maybe you should say ‘some skeptics’ blah…blah…blah. That’s to differentiate yourself from apple and the rest of that mob.

      • Steven Mosher

        “David Wojick | July 7, 2014 at 3:25 pm |
        “The data just is” with no properties? What a strange concept of reality!

        Yes, David. Data doesn't have trends;
        the data is what it is.

        You produce a trend by making an assumption and doing math.

        Hmm, go read Briggs. Then come back.
        No link for you; you have to do the work for yourself.

        Hint: you have to choose a model to get a trend. The trend is not 'in' the data.

        The trend is in the model you apply to the data.

      • "They will question the record when it fits their agenda;
        they will endorse the record when it fits their agenda."

        Yeah, that’s why I put “pause” in quotes, and refer to the “reported” temperature record.

        A fair number of skeptics I have read doubt, as I do, that anyone knows what the global average temperature/heat content is with the accuracy and specificity claimed. Let alone knows past averages and can predict future temps with the same precision.

        It is totally different to show the flaws in the reported averages (ie. UHI, uniformity of adjustments, etc.) The argument “Even assuming you are right about A, you are clearly wrong about B” does not admit that you are correct on A.

        "Your reported temperature trends are garbage, but even your own reports undermine your overall theory because they don't show the warming you all uniformly predicted." See how it works?

        But of course, you know all that. You’re just being an obscurantist.

      • Steve Mosher,
        Stop feigning indignation! When did marketing presentations get accepted without argument?

        I enjoy you, but Zeke's effort speaks for itself. A damn good effort so far (but you weaken his argument by being so over the top). The methods are worth discussing (some questions are fair and some are not), but what is new about that in climate discussion? Like R.G.B. at Duke points out continually to everyone (you on several occasions), there are weaknesses in the physics and arguments on both sides. Deniers?

        Business as Usual?
        The 1988 projections for CAGW (science-is-settled talking heads) were pretty tough on everyone (even those much smarter than themselves, F. Dyson, etc.). Zeke is doing just fine without winning every point.

      • Steven Mosher

        Don

        seriously, you should note that Zeke is answering every good question
        with patience and good humor. He amazes me.
        Me? I get to police skeptics who are out of line.
        You could always do that,
        and you could be nice and gentle about it.

        But quite frankly Zeke puts a lot of effort into this stuff. Normally at Lucia's
        there is 1 troll for every 10 good questioners. But here the ratio is reversed.

        If me pounding on a few off-topic people bugs you, then pull them aside and do it yourself.

      • Don Monfort

        Steven, Zeke is doing fine. He is answering almost every question with plausible explanations. You are not helping him by echoing davey apple’s tarring of skeptics with a broad brush. Isn’t that an off topic distraction?

        Nobody has answered my question on why the warmest month on record changes from July 1936 to July 2012 with great fanfare, then changes back to July 1936, without a peep from NOAA. Have their own algorithms stabbed them in the back while they remain blissfully unaware?

        https://www.google.com/search?q=july+2012+hottest+month+ever&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a&channel=sb

        http://wattsupwiththat.com/2014/06/29/noaas-temperature-control-knob-for-the-past-the-present-and-maybe-the-future-july-1936-now-hottest-month-again/

        I didn’t see any comment on Anthony’s post by you or Zeke. Would either of you care to comment on the unreported flip flop?

      • Don Monfort

        According to NOAA one state set a record high temperature in July 2012, while 14 states had their record high temperatures recorded in July 1936. Yet when homogenized and anomalized, July 2012 was declared the warmest month on record.

        http://www.ncdc.noaa.gov/extremes/scec/records

    • Matthew R Marler

      David Appell: New Rule: Anyone who doesn't trust the temperature data, can't use that data as evidence for the Pause.

      Why? If the pause persists despite (possibly motivated) adjustments, does that not warrant greater credence in the pause?

    • Tom Scharf

      Does the inverse of this rule also apply?

      “Anyone who trusts the temperature data can’t deny this as evidence for the Pause.”

      • Tom Scharf wrote:

        “Anyone who trusts the temperature data can’t deny this as evidence for the Pause.”

        No, not quite. The temperature data by itself aren’t the evidence. You will have to provide some analysis of it, like demonstrating that there is a “Pause” based on some statistical metric. No?

      • Don Monfort

        I have an analysis for you, perlie. The pause is killing the cause.

      • And that is all fake skeptics have to offer.

      • Don Monfort

        I don’t have time to waste on pause deniers, perlie. That’s all you get.

      • I know, since you are actually not interested in the scientific question at hand. You are just an ideologue, like fake skeptics in general, who try to further their anti-science propaganda whatever their particular economic interest or political or religious motivation is for doing so.

      • Truth.

      • The temperature data by itself aren’t the evidence. You will have to provide some analysis of it

        The anomaly data already represent an analysis

      • phatboy wrote:

        The anomaly data already represent an analysis

        So, tell me then: how do you derive the assertion about the alleged "pause" from the anomaly data themselves? How do you recognize the "pause"? You don't need any trend analysis, any statistical metrics, nothing?

      • Don Monfort

        That’s right, perlie. We are all motivated by some combination of ideology, profit and religion. Very scientific. You are going to save the world with that crap.

      • Don Monfort

        Perlie hasn’t heard:

        google.com/search?q=the+climate+pause&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a&channel=sb

      • A graph of the anomaly data is effectively a trend in itself. So you just need to use your eyeballs. Trying to get a trend of what is effectively a trend produces all sorts of wonderful results, as you would know from following the comments of certain individuals.

      • Tom Scharf

        I can look at the temperature trend over the past century and state this trend is increasing over the past 100 years.

        I can look at the same trend over the past 20 years and say this same trend is essentially flat.

        Can you not bring yourself to do that? At all?

        Arguments that 20 years is too short for this analysis, or that other forces are causing this phenomenon, are worth debating, but simply ignoring the trend slowdown (when it was supposed to be accelerating with BAU CO2) is not a very convincing argument.

        Equivocating that the pause means something other than the flat temperature trend-line in the much monitored and accepted global trend(s) is moving the goalposts.

      • Don Monfort wrote:

        Perlie hasn’t heard:

        google.com/search?q=the+climate+pause&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a&channel=sb

        Obviously, monfie thinks one can prove an assertion regarding a scientific question as true by being able to present a list of links from a Google search for a combination of keywords related to the question. He should try to get a paper published, applying such an approach.

        I know one too:
        https://www.google.com/search?client=ubuntu&channel=fs&q=alien%2Babduction%2Banal%2Bprobe&ie=utf-8&oe=utf-8#channel=fs&q=aliens%2Babduction%2Bprobe

      • Don Monfort

        The pause is a reality, perlie. We don’t have to show you no stinking trends. Everybody knows about it. Google it. Try to catch up, perlie. Stop being a nuisance.

      • Tom Scharf wrote:

        I can look at the temperature trend over the past century and state this trend is increasing over the past 100 years.

        I suspect, here you actually mean “positive” instead of “increasing”? The more important fact here is that the trend of the surface temperature over the last 100 years is not just positive (ca. 0.073-0.085 K/decade), it is also statistically significant with more than 13 standard deviations.

        I can look at the same trend over the past 20 years and say this same trend is essentially flat.

        Can you not bring yourself to do that? At all?

        To do what? To state falsehoods? Why would I do that? The trend over the last 20 years is not flat. These are the trends (in K per decade) over the last 20 years for the various surface temperature data sets together with the 2-sigma intervals:

        GISTEMP: 0.116+/-0.096
        NOAA: 0.097+/-0.089
        HadCRUT4: 0.101+/-0.094
        Berkeley: 0.126+/-0.094 (ends in 2013)
        HadCRUT4 krig v2: 0.143+/-0.099
        HadCRUT4 hybrid v2: 0.15+/-0.109
        (http://www.ysbl.york.ac.uk/~cowtan/applets/trend/trend.html)

        All positive, and all even statistically significant with more than 2 standard deviations.

        but simply ignoring the trend slow down (when it was supposed to be accelerating with BAU CO2)

        Who are supposed to be the ones who allegedly said that the trend for every 20-year period would always be larger than the previous one, moving forward year by year? Please provide a quote and proof of source.

        The temperature trends over same-length time periods, e.g. 20 years, have a frequency distribution, too. The individual trends will lie around a median value. In about 50% of cases they will be larger than the median value, and in the others they will be smaller (or about equal to the median value). The shorter the time interval, the wider the distribution (with sufficiently short time periods, zero or even negative trends will be part of the distribution also). No one has claimed that the trends will always only be increasing. Likewise, no one has claimed that CO2 is the only factor influencing temperature variability. This is the next "skeptic" strawman often presented in this context and also hinted at by you here.

        Equivocating that the pause means something other than the flat temperature trend-line in the much monitored and accepted global trend(s) is moving the goalposts.

        The logical fallacy of "moving the goalposts" only applies if the one who is allegedly doing it had previously defined a normative criterion about something, which is then changed once it is fulfilled. Have I done that? Otherwise, your accusation that I am committing the logical fallacy is false.

    • thisisnotgoodtogo

      Appell, if you say there is no pause then it's you who can't use temp data to say temps rose.

    • thisisnotgoodtogo

      Mosher, cluttering the thread with very many similar comments, says:
      “David, you are expecting consistency from skeptics.”

      While showing his own inconsistency by directing his criticism only to skeptics, as per his agenda, and ignoring Appell’s position that there has been no pause.

      • Steven Mosher

        I beat on David Appell all the time.
        Today is his lucky day.
        It's pretty simple: police your own team.

      • thisisnotgoodtogo

        Yet in this thread you chose to not notice what he did.
        Instead you chose to protect your investment.
        Police yourself, Mosher.

      • thisisnotgoodtogo

        I thought you said you don’t have a team, Mosher. Why police the team you aren’t on?
        I don’t have one.
        Who could you be talking at?

    • Wrong. The pattern of the global temperature indices is probably roughly correct; only the trend, especially the late-20th-century warming (the AGW period), may be exaggerated. Furthermore, as Don Monfort correctly says, pointing out that the record shows no warming is not necessarily endorsing the record.

      Example:
      http://www.woodfortrees.org/plot/hadcrut4gl/plot/hadcrut4gl/from:1950/detrend:0.4

  62. Don Monfort

    Do we know what the warmest year on record for the U.S. is, today?

  63. Zeke:

    First, I want to thank you for your posts – here and elsewhere. I always read them and learn something, and I really appreciate the time you are contributing.

    I have a couple of questions.

    1. On the issue of adjusting for MMTS versus LiG – I was not clear on whether the LiG (the older style) is being adjusted to conform to the MMTS or vice versa. Could you clarify?

    Also, is one type of instrument more accurate than the other?

    One would assume the MMTS is more accurate than the LiG (just because it is newer) – however I am just guessing that.

    It would seem to make sense to adjust the less accurate to conform to the more accurate, but I just want to clarify which way the adjustment runs.

    2. Time of Observation. This is probably a stupid question – but are the measurements being taken more than once per day? Moving the time of observation from afternoon to morning sounds like we are shifting the time we look at the temperature (like one time) – but that doesn't make sense to me. I assume we want to capture the minimum and maximum temperature at each site daily – which would seem to require more measurements (hourly or even more frequent). So could you clarify that point.

    In a perfect world – with automated stations, going forward, I would assume we would capture data fairly frequently. In 100 years, with data every minute (or 5 minutes or whatever), we would capture the min/max – is that where we are going?

    3. As to the “changing the past” issue – that is deeply unsettling to me and I assume many others. What is the point of comparing current temperatures to past temperatures if the past changes daily?

    How about doing it both ways and providing a second set of data files where they adjust the new relative to the old in addition to the old relative to the new. I would love to see how that would feel (see the data over time adjusted compared to the old) just to see the difference.

    4. UHI adjustment. When you write your third post could you perhaps explain the philosophy of this adjustment. I don’t get it. From my point of view we pick a spot and decide to plop a station down. For years it is rural and we have one trend. Then over a decade or so, that spot goes urban and there is a huge warming trend, then once it is urban the trend settles back down and is what it is (just warmer than rural).

    Why do we adjust for that? That station did get warmer during that decade – so what are we adjusting it to? Are we trying to forever make that station be adjusted to read rural even though it is now urban? Or change the rural past to read urban? I just don’t understand the reason for this adjustment if the instrument was accurately reading the temperature throughout its history.

    What if something (like a hot spring forming or a caldera forming) were to change the reading – would we adjust for that also?

    Anyway – thanks in advance for looking at my lay person questions and hopefully responding.

    Rick

    • Rick,

      NCDC makes a general assumption that current temperature readings are accurate. Any past breakpoints detected in the temperature record (e.g. due to an instrument change) are removed such that the record prior to the breakpoint is aligned with the record after the breakpoint. In this sense, MMTS instruments are assumed to be more accurate than liquid in glass thermometers for min/max temperature readings.
      .
      As far as TOBs go, both LiG and MMTS instruments collect a single maximum and minimum temperature since they were last reset. The issue with TOBs isn't so much that you are reading the temperature at 10 AM vs. 4 PM, but rather that when you are reading the temperature at 4 PM you are looking at the max/min temps for 12 AM to 4 PM on the current day and 4 PM to 11:59 PM on the prior day. This doesn't sound like much, but it actually has a notable impact when there is a large temperature shift (e.g. a cold front coming through) between days. I'm writing a follow-up post to look at TOBs in much more detail, but for the time being Vose et al 2003 might be instructive: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/vose-etal2003.pdf

      TOBs isn’t relevant to modern instruments that record hourly temperatures, and certainly not to the new Climate Reference Network that records temperatures every 5 minutes or so.
      .
      For "changing the past", either way results in identical temperature trends over time, which is what folks studying climate are mostly interested in. It's not a bad idea to provide both sets of approaches, though it might prove confusing for folks.
      .
      UHI is fairly complicated. The way it's generally detected is this: if one station (say, Reno NV) is warming much faster than its more rural neighboring stations, it gets identified as anomalous through neighbor comparisons and adjusted back down once it diverges too far from its neighbors. Menne et al 2009 has a good example of this further down in the paper: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/menne-etal2009.pdf

      Our recent paper looked in more detail at the effect of pairwise homogenization on urban-rural differences. It found that homogenization effectively removed trend differences across four different definitions of urbanity, at least after 1930 or so, and did so even when we only used rural stations to homogenize (to reduce any chance of aliasing in an urban signal).
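
      A minimal sketch of the neighbor-comparison idea above: a station whose difference series against the mean of its neighbors starts to drift stands out. The series below are synthetic and the test is deliberately crude; the operational pairwise algorithm in Menne et al 2009 uses formal changepoint tests over many station pairs.

      ```python
      # Sketch of neighbor-comparison detection: a target station with an extra
      # urban-style drift shows up as a trend in its difference series against
      # the neighbor mean. Synthetic data; not the NCDC pairwise algorithm.
      import numpy as np

      rng = np.random.default_rng(3)

      years = np.arange(1950, 2015)
      regional = 0.02 * (years - 1950) + rng.normal(0, 0.15, years.size)

      neighbors = np.array([regional + rng.normal(0, 0.1, years.size) for _ in range(5)])
      target = regional + rng.normal(0, 0.1, years.size)
      target[years >= 1980] += 0.015 * (years[years >= 1980] - 1980)   # assumed extra drift

      diff = target - neighbors.mean(axis=0)

      # Trend of the difference series before vs. after the candidate break year
      pre, post = years < 1980, years >= 1980
      trend_pre = np.polyfit(years[pre], diff[pre], 1)[0]
      trend_post = np.polyfit(years[post], diff[post], 1)[0]
      print(f"difference-series trend: {trend_pre * 10:+.3f} C/decade before 1980, "
            f"{trend_post * 10:+.3f} C/decade after")
      ```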

      • Zeke:

        Thanks for the answers.

        So for UHI – it sounds like it only gets adjusted relative to its neighbors during the transition from rural to urban and then once fully urban, assuming its trend is similar to its neighbors no further adjustments would need to be made. Is that correct?

      • RickA,

        Yes and no. If a switch from rural to urban introduces a step change relative to neighbors, that will be corrected. If an urban-located station has a higher trend than rural neighbors due to micro- or meso-scale changes, that will also generally be picked up and corrected. It's not perfect, however, and some folks (like NASA GISS) add additional urban corrections. For the U.S., at least, it seems to do a reasonably good job of dealing with UHI.

      • Zeke, I don't suppose that there were stations with overlapping max/min thermometer + LiG readings and then overlapping LiG and MMTS readings?

      • Thanks Zeke, so even a correction for the LiG-to-MMTS transition is non-trivial; this is not a simple offset problem, as the two instruments give different Tmax/Tmin offsets in different months.

  64. Conspiracy theorists wonder:
    "Can anyone reach either rankexploits or http://ftp.ncdc.noaa.gov?
    I can’t.
    I’d like to read this stuff….

      • ???? rankexploits seems to have a hyperactive ip blocker…

      • I guess unless some other people can't reach them we'll just assume it's my setup here…..

      • I’ve had my IP blocked by Lucia’s blog a number of times.

        She uses a blacklist to block IP addresses associated with malicious behavior. Unfortunately, those IP addresses often belong to ranges owned by ISPs who serve many customers. Since any customer can get any (dynamic) IP address within the ISP's IP range for their area, people can often wind up using IP addresses which have previously been responsible for malicious behavior.

      • Blocking IPs isn't so great. Tor can get you an IP anywhere in the world you would like, and anyone really up to something….

      • nickels, Lucia also blocks Tor connections.

      • ah, nifty. cleverer than your general ip blocker!

      • But not a very useful site for links since they are blocked…. :(

      • nickels, you can e-mail lucia. She’s pretty good about helping legitimate users access her site.

        As for the other site, it may be a coincidence, but you provided the link as http://ftp.ncdc.noaa.gov. Zeke responded by saying ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/ works for him. As you’ll note, his link begins with ftp, not http. That’s because the link is to an FTP server. That may be why you are having trouble. (Of course, you could have used the right link but typed the wrong one here.)

      • I realize I should have been more careful typing that one:
        ncftp -u anonymous 205.167.25.101
        fails.
        Must be something weird in my firewall. My main intent for the post was just in case something was down, in which case there would have been some chiming in. I do need to get the TOB papers, but I guess I'll wait until that post comes out and email someone!!

      • nickels there are lots of sites that don’t trust or permit raw ftp. You might try a proxy server for that.

      • Lucia blocks anonymizers.

    • I’m not into conspiracy theory, but links that don’t work don’t help….???

    • I couldn’t reach it either, but the other one I could

    • stevefitzpatrick

      If you contact Lucia by email and explain, she may unblock your IP address. She's done it for me a couple of times when I have been overseas in 'bad' regions. She says: "If you need to contact me: my name is lucia. The domain name is 'rankexploits.com'. Stuff an @ in between and send me an email."

      • given the fragmented references I find to rankexploits that would be a bit of a pain…. but it's a nice offer…. if it's a critical paper I'll do it.

  65. If you have two piles of stations, a la Mosher’s comment, and you compare the two and let’s say they give similar results.

    So, it could be that all the thermometers have no problem, and they compare well.

    Or, it could be that many thermometers in both piles have similar problems, but they still compare well.

    So, what degree of confidence does this sort of testing give me? Only that the results from the two piles are similar, not that the overall result from all of them is accurate or meaningful.

    • Windchasers

      jim2, now that we have accurate hourly/daily records, we can predict what the time of observation bias (TOB) would be, if we were still recording temps the same way today as 50 years ago. Which means we can build a model for the TOB *just* off of the high quality CRN stations, if we want. Then we can apply that model to the old records, get the adjusted temps, and compare those temps to those of the gold-standard stations. They should match up.

      This is a pretty good test, since the ‘piles’ are different. It’s *out*-of-sample testing, not in-sample testing.

      There are other ways to test the TOB adjustments. One way the TOB shows up in the temperature record is with a fingerprint of reduced intradaily variability. It’s from max or min temperatures effectively being counted twice, and how often the double-counting occurs depends on the time of day that temperatures were recorded, as well as how quickly temperatures change from one day to the next.

      Based on the modern hourly/daily records, we can say that if the temperature was recorded at, say, 4 pm every day, then we should have X number of double-recorded days. So look at the historical data for days recorded at 4pm. Do we see a number close to X? Yes.

      Or we can turn it around. Can we just look at the X, and infer the time of day that the data was recorded? Also yes.

      There may be other ways of checking the TOB adjustments that I’m missing. These are just a few off the top of my head and from reading some of the papers that Zeke linked.
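
      A minimal sketch of the first test described above: use hourly data to simulate a max/min thermometer that is reset at different hours of the day, and compare the resulting means against true midnight-to-midnight values. The hourly series below is synthetic, not USCRN data, but the sign of the biases (warm for late-afternoon resets, cool for morning resets) is the effect being discussed.

      ```python
      # Sketch of estimating time-of-observation bias from hourly data: simulate
      # daily max/min readings reset at a given hour and compare with the true
      # calendar-day values. Synthetic hourly series, not USCRN observations.
      import numpy as np

      rng = np.random.default_rng(4)

      n_days = 365
      hours = np.arange(n_days * 24)
      diurnal = 10 * np.sin(2 * np.pi * ((hours % 24) - 9) / 24)   # peak mid-afternoon
      weather = np.repeat(rng.normal(0, 4, n_days), 24)            # day-to-day swings
      hourly = 15 + diurnal + weather

      # "Truth": calendar-day (midnight-to-midnight) max/min means
      days = hourly.reshape(n_days, 24)
      true_mean = ((days.max(axis=1) + days.min(axis=1)) / 2).mean()

      def observer_mean(temps, reset_hour):
          """Mean of (Tmax+Tmin)/2 for a max/min thermometer reset daily at
          reset_hour. Each window shares its boundary reading with the next
          one, which is where the double-counting comes from."""
          vals = []
          for d in range(n_days - 1):
              start = reset_hour + 24 * d
              window = temps[start:start + 25]     # 25 samples: both reset instants
              vals.append((window.max() + window.min()) / 2)
          return float(np.mean(vals))

      for reset_hour in (7, 17):                   # morning vs late-afternoon observer
          bias = observer_mean(hourly, reset_hour) - true_mean
          print(f"reset at {reset_hour:02d}:00 -> mean bias {bias:+.2f} C")
      ```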

  66. Thanks, Zeke, your efforts are appreciated by at least some of us.

  67. Zeke: Have there been any adjustments to the USHCN data based on USCRN observations?

    • Nope, while the full co-op network is used in the pairwise homogenization process, the USCRN network is not. However, from a CONUS-wide standpoint USCRN and USHCN have been identical since USCRN achieved U.S.-wide coverage in 2004/2005: http://rankexploits.com/musings/wp-content/uploads/2014/06/Screen-Shot-2014-06-05-at-1.25.23-PM.png

      • @ Nick Stokes

        I followed your link to Watts 2011 post and found this comment, which is a pretty good summary of my opinion of climate data and the analysis thereof. I might add that the adjusting, infilling, correcting, kriging etc described by Zeke are superimposed on the basic problem described by Ms. Gray in her comment to Watts: We have no intrinsic method of separating signal and noise, even given pristine temperature records, and the existing records are anything BUT pristine. And can’t be made so.

        “Pamela Gray says:
        March 6, 2011 at 8:16 am
        When I was searching for a signal in noisy data, I knew that I was causing it. The system was given a rapidly firing regular signal at particular frequencies. By mathematically removing random brain noise, I did indeed find the signal as it coursed through the auditory pathway and it carried with it the signature of that particular frequency. The input was artificial, and I knew what it would look like. It was not like finding a needle in a haystack, it was more like finding a neon-bright pebble I put in a haystack.

        Warming and cooling signals in weather noise is not so easy to determine as to the cause. Does the climate hold onto natural warming events and dissipate it slowly? Does it do this in spurts or drips? Or is the warming caused by some artificial additive? Or both? It is like seed plots allowed to just seed themselves from whatever seed or weed blows onto the plot from nearby fields. If you get a nice crop, you will not be able to say much about it. If you get a poor crop, again, you won’t have much of a conclusion piece to your paper. And forget about statistics. You might indeed find some kind of a signal in noise, but I dare you to speak of it.

        This is my issue with pronouncements of warming or cooling trends. Without fully understanding the weather pattern variation input system, we still have no insight into the theoretical cause of trends, be they natural or anthropogenic. We have only correlations, and those aren’t very good.

        So just because someone is cleaning up the process, doesn’t mean that they can make pronouncements as to the cause of the trend they find. What goes in is weather temperature. The weather inputs may be various mixes of natural and anthropogenic signals and there is no way to comb it all out via the temperature data alone before putting it through the “machine”.

        In essence, weather temperature is, by its nature, a mixed bag of garbage in. And you will surely get a mixed bag of garbage out.”

  68. Curious George

    Zeke, thank you for an explanation of what is going behind the scenes. I’ll need time to digest your text. Meanwhile, one question is in my mind: Is the treatment of data that you describe a standard statistical technique? Can you estimate how many professional statisticians are involved?

    • Hi Curious,

      David Brillinger was involved in the design of the Berkeley approach. Ian Jolliffe and Robert Lund are involved in the benchmarking process for homogenization through the International Surface Temperature Initiative. I’m sure there are a few more folks that are “professional statisticians”; I know a number of the scientists have degrees in mathematics, but aren’t professional statisticians.

  69. Oops, I had an unfortunate typo in the article. When I said “There are also significant positive minimum temperature biases from urban heat islands that add a trend bias up to 0.2 C nationwide to raw readings”, I should have said “There are also significant positive minimum temperature biases from urban heat islands, with urban stations warming up to 0.2 C faster than rural stations”. The two are not the same, as not all the stations in the network are urban.

    • A fan of *MORE* discourse

      Scottish Sceptic gets juvenile   “Would you be happy with a bank statement with ‘adjusted’ figures?”

      Matthew R Marler wears blinders  “There are people, including auditors, who do sample financial records …”

      Climate Etc readers are invited to verify for themselves that auditors require a 180 page code of ethics to even *BEGIN* to grapple with ‘adjustment practices’ of the financial world that are *ACCEPTED* and *LEGAL*.

      In a nutshell, nowhere in business or finance or insurance do we *EVER* encounter numbers that are “unadjusted.”

      Conclusion Skilled climate-auditors like Zeke Hausfather and Steven Mosher — and team efforts like Berkeley Earth (BEST) and the International Surface Temperature Initiative (ISTI)  — deserve all of our appreciation, respect, and thanks … for showing us a world whose warming is real, serious, and accelerating.

      Of course, there are *PLENTY* of Climate Etc conspiracy theorists and/or astroturfers who *INSIST* that Zeke and Steve and BEST and ISTI are one-and-all agents of a vast conspiracy.

      Of course, no amount of reason and evidence *EVER* convinces a conspiracy theorist, eh Climate Etc readers?

      But what Zeke and Steve and BEST and ISTI are showing us *is* enough to convince the next generation of young scientists. And in the long run, that’s what matters, eh?

      Good on `yah, Zeke and Steve and BEST and ISTI!


      • Rud Istvan

        Real, yes, to some degree debated here concerning USHCN. Partly unreal owing to homogenization, also debated here with respect to the quality thereof. That’s what happens when the world emerges from an LIA caused by natural variation.
        Serious depends on other context, not debated here.
        Accelerating, no. That darned pause again, even showing up in BEST.

      • Steven Mosher

        FAN, ISTI is really cool.

        It is everything we asked for after Climategate.

        Even more cool is that they have 2000 stations that we don't have.

        So,

        our approach makes a prediction about what "would have been recorded" in every location where we had no data.

        Now, thanks to data recovery, ISTI has additional sources,
        sources that we did not use in constructing our prediction.

        Do you think skeptics will make their own predictions about what this out-of-sample data says?

        I think not.

      • “In a nutshell, nowhere in business or finance or insurance do we *EVER* encounter numbers that are ‘unadjusted.’”

        Yes, ENRON adjusted its financial numbers, and so does Berkshire Hathaway.

        Saying everybody adjusts numbers tells you precisely nothing about how accurate the adjustments are.

        The primary problem skeptics have in temperature trends is that virtually every reported adjustment in trends results in lower figures for the past, and warmer figures for the present. The most famous “adjustment” being the hokey stick. (Yes that’s paleo, not temp measurements, but the principle seems to work the same in both.)

        As an industry, the CAGW consensus is always “discovering” that “it’s worse than we thought,” including in temperature reports. And the apparent total lack of any skeptic involved in generating these adjustments just makes that less acceptable as mere coincidence.

        But the alternatives are not an evil conspiracy of BEST, NOAA, et al, and pure, pristine, accurate, precise temperature trends. Confirmation bias, faulty shared assumptions, shared over confidence in the raw data and the accuracy of the adjustments are more likely to cause bad results than any conspiracy.

        For example, I don’t see Mosher as being willing to engage in any conspiracy even if there were one. But I also know that he has tied his entire sense of self to defending climate models and temperature reports. He has spent years ridiculing those who disagreed with him or questioned the results he defends. So I simply do not see him as a credible check on his fellow tribesmen.

      • Windchasers

        But the alternatives are not an evil conspiracy of BEST, NOAA, et al, and pure, pristine, accurate, precise temperature trends. Confirmation bias, faulty shared assumptions, shared over confidence in the raw data and the accuracy of the adjustments are more likely to cause bad results than any conspiracy.

        So challenge those biases and assumptions. Point out flaws in the methodology. Improve it!

        This is how science progresses. Get educated on the problem, then make it better. Don’t just sit around wringing your hands and talking about potential biases.

        I was unconvinced, so I got educated on the subject. I read the literature, checked the data, checked the calculations, and now I’m pretty satisfied with the adjustments. But that takes work, and most people aren’t going to bother doing it. It’s far easier to just be suspicious than it is to do your DD.

        And the apparent total lack of any skeptic involved in generating these adjustments just makes that less acceptable as mere coincidence.

        No one's saying that the one-sidedness of the adjustments is the result of coincidence. They're the result of how we recorded data in the past and how we record it now, and the well-documented biases that result.

      • “Mosher
        Do you think skeptics will make their own predictions about what this out of sample data says?

        I think not.”

        I expect to see waves of heat crashing into the Western and Eastern seaboards, matching the Atlantic and Pacific ocean warming/cooling cycles.

        You can go to the McDonalds website and enter a Zip code and it will give you the nearest 5 McDonalds and the distance.
        My guess is that if you prepare a McDonald index, Dist 1/1 + Dist 2/2 +Dist 3/3, you will find that the areas with the highest McDonalds index have the greatest level of warming.

      • Steven Mosher

        "The primary problem skeptics have in temperature trends is that virtually every reported adjustment in trends results in lower figures for the past, and warmer figures for the present."

        Yes. Exactly as they should.

        For example: when you change instruments from type A to type B,
        you can expect there to be a bias. The bias will be up or the bias will
        be down. If the bias is zero, well then that's no bias. Duh.

        So the change to MMTS caused a bias.
        How much?
        What direction?
        Easy: test them side by side.
        Yup, that science was done.

        Read it for a change of pace.
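
        A minimal sketch of what such a side-by-side comparison yields: paired readings from co-located instruments give a mean offset with an uncertainty. The values, including the assumed 0.4 C LiG offset, are placeholders for illustration, not the published MMTS transition figures.

        ```python
        # Sketch of a side-by-side instrument comparison: paired daily Tmax readings
        # from co-located sensors give a mean offset with a confidence interval.
        # All values are synthetic placeholders.
        import numpy as np

        rng = np.random.default_rng(5)

        n_days = 730
        true_tmax = (20 + 10 * np.sin(2 * np.pi * np.arange(n_days) / 365)
                     + rng.normal(0, 3, n_days))

        lig = true_tmax + 0.4 + rng.normal(0, 0.3, n_days)    # assumed warm-reading LiG
        mmts = true_tmax + rng.normal(0, 0.3, n_days)         # assumed reference MMTS

        diff = lig - mmts
        se = diff.std(ddof=1) / np.sqrt(n_days)
        print(f"LiG minus MMTS Tmax offset: {diff.mean():+.2f} +/- {2 * se:.2f} C (2 sigma)")
        ```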

      • Matthew R Marler

        A fan of *MORE* discourse: Of course, there are *PLENTY* of Climate Etc conspiracy theorists and/or astroturfers who *INSIST* that Zeke and Steve and BEST and ISTI are one-and-all agents of a vast conspiracy.

        Plenty?

      • Fan once again brings the AICPA ethics links. Keep those coming! Many adjustments are of the kind, No you’re not worth quite as much as you think, and No you didn’t make quite as much as you thought. Some of these are timing differences. If the client is asked to show a bit less income in the current period, generally at least some of that income will simply be pushed into the following time period though this is an extremely simplified example and each client has a unique situation. This conservative approach has served them well for a long time.

  70. Absence of correlation = absence of causation.

    There is no correlation between planetary climate (Earth’s paleoclimate, or Venus) and CO2 concentration. Your theory (and your models) may say that there “should” be warming, but the real world says it ain’t happening.

    The hypothesis “CO2 causes warming” is falsified by this lack of correlation (except in reverse — warming driving increased CO2). This is why I rule in favor of those -protesting- data diddling — no matter how noble the purposes or intentions of the data-diddlers.

    Data-diddling to try to show that CO2 causes warming is AT BEST some true believer trying to salvage his or her career claims with fancy hand waving. (“At worst” is left as an exercise for the reader.)

    A scientist worthy of the name says, “Oh, look at that, the hypothesis was wrong” and moves on.

    • Windchasers

      mellyrn,

      The temperature record should stand on its own, regardless of any imputed effects from CO2 or anything else. It’s a non-sequitur to say that the adjustments are wrong because scientists are trying to show that CO2 causes warming.

      You should either find a legitimate problem with the adjustments, or you should accept them.. but your acceptance of the temperature data should not be based on what you think about CO2.

      Just focus on the data. That’s how science is done.

    • Steven Mosher

      Off topic.

      This is about adjustments to the temperature record.

      people who dont want to understand change the topic

  71. “TOBs adjustments affect minimum and maximum temperatures similarly, and are responsible for slightly more than half the magnitude of total adjustments to USHCN data.”

    I’m just a novice at this stuff, but how is this possible?

    If you take a reading at 5 PM, I can understand how a hot day might be double counted, and thus influence the average Tmax for the month. But how would the Tmin for the month possibly be affected?

    If you take a reading at 7 AM, I can understand how a cool morning might be double counted, and thus influence the average Tmin for the month. But how could Tmax for the month possibly be affected?

    For a station that switched observation time from late afternoon to morning, there should be a TOBS adjustment to reduce the Tmax prior to the switch, and a TOBS adjustment to raise the Tmin after the switch. Once a station is reading at 7 AM, there should be NO additional TOBS adjustment applied to Tmax. Likewise, there should be NO TOBS adjustments applied to Tmin prior to the switch.

    • Windchasers

      Once a station is reading at 7 AM, there should be NO additional TOBS adjustment applied to Tmax. Likewise, there should be NO TOBS adjustments applied to Tmin prior to the switch.

      Yep, that’s right. We used to record temps in the afternoon back in the ’30s, and later that was changed to the morning. So the raw data had a hot bias in the past, and a cold bias now.

      “TOBs adjustments affect minimum and maximum temperatures similarly”

      I’d wager this means that the hot bias from measuring near the hottest part of the day is about as big as the cold bias from measuring near the coldest part of the day. Same magnitude, opposite sign of bias.

      • It seems that a much simpler and more logical way to estimate the trend over time would be to track the change in Tmin temps prior to the switch, then the Tmax temps after the switch, where NO ADJUSTMENT would be necessary.

        Why pollute the dataset by using averages that require adding in temps that are clearly biased by the time readings are being taken?

      • Steven Mosher

        write it up KTM
        get it published
        be a hero

    • Steven Mosher

      how is this possible?

      1. read the posts on the skeptical site run by John Daly. its explained.
      2. read the posts on CA. its explained.
      3. read the papers zeke linked to. its explained.
      4. Wait for the second in the series, where it will be demonstrated for the umpteenth time.

      • I guess my main critique is how the data is being presented. According to the graph, the Tmax TOBS adjustment was near zero in the past and is currently near +0.2 C. This makes no sense; Tmax TOBS adjustments should be large in the past and near zero today.

        I think it would be much more informative and accurate to show what the actual TOBS adjustments are for Tmin and Tmax over time. The two curves would not overlap, since they are being applied very differently over time.

        I also question the logic behind making all these adjustments, since it is possible that even at a midnight reading you could get double-counting of cold temps on two consecutive days. Why set the standard for USHCN at midnight when the vast majority of observations are being made at other times?

        Also, where are the error bars for these graphs?

      • Steven Mosher

        “This makes no sense,”
        it does make sense.
        read harder.

  72. Alexej Buergin

    Now is the time to repost this:

    “A C Osborn | July 2, 2014 at 2:34 pm | Reply
    You jest, BEST Summaries show Swansea on the South West Coast of Wales in the UK a half a degree C WARMER than LONDON.
    Now anybody living in the UK knows that is not correct due to location and UHI in London.
    It also shows Identical Upward Trends for both areas of over 1.0C since 1975, obviously BEST doesn’t know that the west coast Weather is controlled by the Ocean and London by European weather systems.
    So what does the Met office say about the comparison, well they show that on average Swansea is 0.6 degrees COOLER than London.
    So who do you believe, The people who live in the UK and the Met Office or BEST who have changed the values by 1.1 degrees C?”

    • Steven Mosher

      The values are not changed.
      you are looking at an expected value, not data.

      next, this post is about NOAA.

      stay on topic.

  73. Alexej Buergin

    According to the Icelandic WXmen, the adjusted average temperature in Reykjavik 1940 was 5°C. According to GISS, it was 3°C.

  74. Alexej Buergin

    I never look at the numbers from GISS, and I do not read the (very long) posts by Mr. Hausfather.

    • Alexej

      If you do not read the information, I hope you will not complain if it shows something that you do not agree with?

      Zeke has gone to a lot of trouble to post information, the least denizens can do is read it

      Tonyb

      • Alexej Buergin

        On NASA-GISS I would refer to Astronauts Schmitt and Cunningham.
        On the “lot of trouble” I agree, but my experience is this: If you really, really understand something, you can explain it in one paragraph.

      • Matthew R Marler

        Alexei Buergin: If you really, really understand something, you can explain it in one paragraph.

        You can be terse, clear, accurate, and complete, but generally not more than 2 at a time. Zeke Hausfather achieved an excellent balance: not too long, real clear, accurate, and with links to more complete details.

      • No. You can really understand something but be unable to describe it adequately because of poor communication skills.
        Equally, you can be a good communicator but with poor understanding of your subject.

    • Steven Mosher

      another example of a denizen who does not want to understand.

      • Alexej Buergin

        Actually, I would like to understand how anybody could get the results mentioned (Reykjavik, Swansea/London). But nobody wants to (or can) explain that, and they are obviously wrong.

      • Steven Mosher

        huh I explained.
        go read harder.

      • Alexej Buergin

        Your “explanations”:

        Reykjavik: “GISS is not NCDC.”
        We agree that GISS is producing Dreck?

        Swansea/London: “expected value, not data”.
        If by “expected value” you mean the sum of T(i)*p(i), that should not change the fact that Swansea is cooler than London (and the ridiculously named BEST is nonsense here).

      • Steven Mosher

        “Swansea/London: “expected value, not data”.
        If by “expected value” you mean the sum of T(i)*p(i), that should not change the fact that Swansea is cooler than London ”

        No.

        there is no changing of the fact.

        We create a model to estimate the temperature WHERE IT WASN’T MEASURED.

        T = C + W + e

        The climate (C) of a place is estimated via regression as a function of latitude, altitude, and season.

        The raw data is used to create this climate surface.

        This surface is subtracted from the raw data to create a residual.

        The residual is W, the weather.

        Now, since the model is simple ( lat, alt and season ) the residual WILL contain some structure that is not weather but is actually climate

        these cases can be handled two ways

        A) increase terms in the regression — like coastal/non coastal
        B) keep a simple regression because these cases are small in number
        and zero biased.

        We do B. That means you will find that there are a small number of cases
        where the expected value of the model deviates from the raw data.
        This happens in places where the climate is NOT dominated by latitude, altitude, and season: for example, places where coastal/seasonal effects dominate.

        To test this we add a variable for coastal to the regression.
        Yes, we see local changes, BUT the R^2 stays the same: no additional variance is explained, so adding it to the model doesn’t change the overall performance of the estimate.

        We have a couple of ideas about how to squeeze some more explanatory power out of the regression, but we would only be fiddling with local detail and not the global answer.
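
        For readers who want the mechanics, here is a minimal sketch of that decomposition on synthetic data. It is only an illustration of the idea; the actual Berkeley Earth code uses kriging, a richer seasonal model, and outlier weighting.

        ```python
        # Minimal sketch of the T = C + W + e idea described above, on synthetic data.
        # This is NOT the Berkeley Earth implementation; it only shows a climate
        # surface fit by regression and a "weather" residual.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 500
        lat = rng.uniform(25, 49, n)                      # synthetic station latitudes
        alt = rng.uniform(0, 3000, n)                     # synthetic elevations (m)
        month = rng.integers(1, 13, n)
        season = np.cos(2 * np.pi * (month - 7) / 12)     # crude seasonal term

        # synthetic "observed" temperatures: cooler with latitude/altitude, plus noise
        temp = 35 - 0.6 * (lat - 25) - 6.5 * alt / 1000 + 8 * season + rng.normal(0, 2, n)

        # climate surface C: least-squares fit on latitude, altitude, season
        X = np.column_stack([np.ones(n), lat, alt, season])
        coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
        climate = X @ coef

        weather = temp - climate                          # residual W (plus unmodelled climate)
        print("residual std:", round(float(weather.std()), 2))
        ```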

  75. Speaking of adjusters, does Gavin Schmidt still believe that the MWP did not really exist…?

    • As a global phenomenon happening all over the world at the same time?

      • In terms of global phenomena, it seems that rather than regions (which have always cooled and warmed during global warming or cooling trends), the metric should be rising sea levels, which have been occurring throughout our current interglacial period (the last 10,000 years).

        So one could compare the rate of rising sea levels during the MWP, the LIA, and the current period in which we are recovering from the Little Ice Age, the time period after 1850.

      • Claimsguy

        Is the modern warming period happening synchronously everywhere in the world?

        Tonyb

      • “Before the most recent Ice Age, sea level was about 4 – 6 meters (13 – 20 feet) higher than at present. Then, during the Ice Age, sea level dropped 120 meters (395 ft) as water evaporated from the oceans precipitated out onto the great land-based ice sheets. The former ocean water remained frozen in those ice sheets during the Ice Age, but began being released 12,000 – 15,000 years ago as the Ice Age ended and the climate warmed. Sea level increased about 115 meters over a several thousand year period, rising 40 mm/year (1.6″/yr) during one 500-year pulse of melting 14,600 years ago. The rate of sea level rise slowed to 11 mm/year (0.43″/yr) during the period 7,000 – 14,000 years ago (Bard et al., 1996), then further slowed to 0.5 mm/yr 6,000 – 3,000 years ago. About 2,000 – 3,000 years ago, the sea level stopped rising, and remained fairly steady until the late 1700s (IPCC 2007). One exception to this occurred during the Medieval Warm Period of 1100 – 1200 A.D., when warm conditions similar to today’s climate caused the sea level to rise 5 – 8″ (12 – 21 cm) higher than present (Grinsted et al., 2008). This was probably the highest the sea has been since the beginning of the Ice Age, 110,000 years ago. There is a fair bit of uncertainty in all these estimates, since we don’t have direct measurements of the sea level.”
        http://www.wunderground.com/blog/JeffMasters/sea-level-rise-what-has-happened-so-far

    • Steven Mosher

      changing the subject.
      doesnt want to understand.

    • A fan of *MORE* discourse

      Wagathon “[smears Gavin Schmidt]”

      Wagathon, your personal endorsement of the Harold Faulkner/Save America Foundation climate-change worldview and the novel economic theories of its associated Asset Preservation Institute are enthusiastically supported by the world’s carbon-asset oligarchs and billionaires.

      That’s how it comes about that *EVERYONE* appreciates the focus of your unflagging efforts, wagathon!


      • I did find Gavin’s comment a little amusing because in fact 8,000 years ago, at a peak of warming much higher than today, you know what the climate people call it? The climate optimum. In other words it’s actually perceived as more optimal in terms of vegetation and other factors. ~Philip Stott

  76. It should be noted that there are at least the two uses of the word “data”
    – information output by a sensing device or organ that includes both useful and irrelevant or redundant information and must be processed to be meaningful.
    – information in numerical form that can be digitally transmitted or processed.

    The raw temperature measurements, along with instrument quality information and locations, are data in the first and second senses. Adjusted temperatures and anomalies are data only in the second sense.

    To adjust historic instrument readings seems sloppy measurement practice and will produce poor scientific thinking — e.g., the adjusted results are estimates of the temperature record, not the record itself. These estimates should be reported with error estimates that had better span the measurements too.

    • Windchasers

      The raw temperature measurements, along with instrument quality information and locations, are data in the first and second senses.

      Sure, but it’s rather useless data by itself, sans adjustment. Even a spatial average of temperature is some sort of “adjustment”; to get to the national temperature chart we have to start applying math. And once you start using math, it’s math all the way down. ;-)

      Basically, you can’t take something like a 7am temperature reading in New Jersey and two 4pm temperature readings in Illinois and build a national temperature out of them. You have to adjust for spatial weighting of the records, for the time of day that the temperature was observed, etc. Otherwise it’s an apples-and-oranges comparison of data.
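
      As a rough illustration of that kind of spatial weighting, here is a minimal sketch with invented station anomalies: average within grid cells first, then across cells, so a cluster of nearby stations does not dominate. (The real products use finer grids, anomalies from a common baseline, and more careful area weighting.)

      ```python
      # Minimal gridded-average sketch with made-up station anomalies.
      from collections import defaultdict

      # (lat, lon, anomaly in degC); the first two stations sit in the same 5-degree cell
      stations = [(40.1, -88.2, 0.5), (40.3, -88.4, 0.7), (44.8, -73.1, 0.2)]

      cells = defaultdict(list)
      for lat, lon, anom in stations:
          cells[(int(lat // 5), int(lon // 5))].append(anom)

      cell_means = [sum(v) / len(v) for v in cells.values()]
      national = sum(cell_means) / len(cell_means)
      print(round(national, 2))   # 0.4, not the naive station average of ~0.47
      ```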

      To adjust historic instrument readings seems sloppy measurement practice and will produce poor scientific thinking — eg, the adjusted results are estimates of the temperature record, not the record itself

      It’d be far worse to not adjust them.

      But “estimate” vs “record”? Not really relevant. Any record is itself just an estimate, as no data-recording equipment is completely perfect. We aim for good enough, not perfect. We don’t need to be measuring temperature to millionths of a degree and every few milliseconds in order to get pretty solid data about the temperature.

      So there’s not much point in being pedantic about whether something is an “estimate”. It’s all estimates. The question is always “how good are they?” And here, they’re pretty good.

      • Your reply “all are estimates” shows that you could use several courses in experimental physics.

        Of course measurements are estimates — now who’s being pedantic? The point missed is that measurements occupy a unique position in physics reasoning. Measurements should not be changed, but that doesn’t mean they must all be treated alike in estimating the past. It means only that your estimates and their errors should reconcile with the measurements they estimate.

      • Windchasers

        Measurements should not be changed, but that doesn’t mean they must all be treated alike in estimating the past. It means only that your estimates and their errors should reconcile with the measurements they estimate.

        I agree. And as far as I can tell, that’s being done. They identify the errors, they derive the adjustments, and they test the adjustments, giving a range on the errors for the adjusted data.

        I appreciate the BEST work, which generally includes error bars on their temperature charts. I’d love to see that done more consistently by the other groups, though, and not just see the adjustment error estimates left in the literature.

        But I don’t think it makes a lot of difference for the big picture. The errors in the adjustments are relatively small.

    • Steven Mosher

      Phillip.
      you do realize that many of the raw records are not information output by a sensing device.

        Prior to the automation of reporting, a human walked out to a thermometer, looked, rounded, and wrote a number down.

      and none of the reports actually report the physical property of temperature.

      • Mosh,
        I think you must be confused about what a measurement is, unless you think an old physics professor of mine at Ga Tech was wrong to teach measuring the length of objects using the human eyeball and a meter-stick to record rounded values with estimates of error. Humans can be and were part of the sensing and recording of measurements, and these measurements are what you have, so work with them.

        As to whether “the reports actually report the physical property of temperature,” I have no idea what you mean. Is it that thermometers don’t really measure the same property that today’s devices do? If so, we need a whole new discussion.

      • Steven Mosher

        No, Phillip, I was just trying to make sure you actually understand what the records are.

        as for what they measure.

        tell me how an LiG thermometer works.

  77. “If one station is warming rapidly over a period of a decade a few kilometers from a number of stations that are cooling over the same period, the warming station is likely responding to localized effects (instrument changes, station moves, microsite changes, etc.) rather than a real climate signal.”

    Until the 1970s, there were fewer than 1000 stations in the US according to this NOAA chart, and less than 2000 until about 2005. (I don’t know if all of these were used in generating the USHCN data sets, but if there were fewer, my following questions remain).

    http://www.ncdc.noaa.gov/oa/climate/isd/caption.php?fig=station-chart

    In how many locations are there “a number of stations” “a few kilometers” from one another?

    There are approximately 9.6 million square kilometers in the continental US. (If Alaska is included in the network, the number obviously goes up.)
    By my rudimentary math, with 1000 stations, that’s 9,600 square kilometers per station. (Whew, I need a nap.) With 2000 stations (here the math gets hard), that’s 4,800 square kilometers per station.

    If you have one station “within a few kilometers” of several other stations, and being generous and defining “a few” as four, then you have three or more stations within roughly a 16 square kilometer area.
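
    (For reference, the back-of-envelope numbers are areas rather than linear distances; something like:)

    ```latex
    \frac{9.6\times 10^{6}\ \mathrm{km^{2}}}{1000\ \text{stations}} = 9600\ \mathrm{km^{2}}\ \text{per station}
    \;\Rightarrow\; \sqrt{9600} \approx 98\ \mathrm{km}\ \text{typical spacing},
    \qquad
    \frac{9.6\times 10^{6}}{2000} = 4800\ \mathrm{km^{2}}
    \;\Rightarrow\; \sqrt{4800} \approx 69\ \mathrm{km}.
    ```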

    Now I can see how that would happen in the real world; you want measurements where the people are. But it raises a couple of questions that may well have been answered, but I have not seen the answers and am curious.

    Here in the Chicago area, a few kilometers can make a real difference in temperature regardless of time of day. And the differences are not uniform – Skokie is not always warmer than O’Hare; the Chicago lakefront is not always cooler than Schaumburg.

    So:

    Question 1: Are the numbers above correct, or even close, as far area covered per station?

    Question 2: Don’t urban stations require more and broader adjustments, not just for UHI, but in general?

    Question 3: Are urban stations weighted differently because of their proportionally greater number in determining trends?

    Question 4: Are stations not “within a few kilometers” of several others, ever similarly adjusted, and if so, how?

    • Steven Mosher

      “Until the 1970s, there were fewer than 1000 stations in the US according to this NOAA chart, and less than 2000 until about 2005. ”

      WRONG.

      those are just ISD stations.

      the entire population of stations is substantially larger..

      If you seek understanding do not pull random charts from the internet.

      Go to sources.
      All the sources.

      • At least Zeke Hausfather mentioned that the larger number of stations is used for homogenization, rather than your obscurantist tack of implying they are included in the average.

        His figure 1 in the main post referenced ” Global (left) and CONUS (right) homogenized and raw data from NCDC and Berkeley Earth.” That is why I sought the number of NCDC stations.

        I missed this reference in the post: “A subset of the 7,000 or so co-op stations are part of the U.S. Historical Climatological Network (USHCN), and are used to create the official estimate of U.S. temperatures.”

        But while the number of stations changes the math, it does not answer the underlying question. Whether 2,000, 7,000 or 10,000, I do not see how all the stations, as he says elsewhere in this thread, have several others within “a couple kilometers” of them.

      • The first sentence should have been deleted, poor editing. I saw that the reported average does include 7,000 stations.

      • Steven Mosher

        “At least Zeke Hausfather mentioned that the larger number of stations is used for homogenization, rather than your obscurantist tack of implying they are included in the average.”

        I implied no such thing

        In a discussion about USHCN, you linked to an unverified chart of a different dataset entirely.

        Obfuscator.

      • Mosher,

        That point I caught myself, as I noted in my second comment. I just failed to delete the snark before posting. My bad.

        But of the 4 questions I asked, Zeke Hausfather half answered one and neither of you addressed the other 3. Which is fine. No one is under any obligation to respond. But I read this thread as an attempt to address the concerns skeptics have regarding reported temps. An admirable goal. Sort of like Gavin Schmidt agreeing to answer all questions at Keith Kloor’s…once.

        But no answers are of course required.

        I am guessing his claim that each of the stations is “within a couple kilometers” was just a bit of hyperbole. I just don’t see that sort of coverage given the numbers.

    • Windchasers

      Here in the Chicago area, a few kilometers can make a real difference in temperature regardless of time of day.

      Heck, you can get big changes in temperature over just a few hundred feet, if the elevation change is big enough. I grew up at the base of a hill in Florida, and the top of the hill was consistently warmer than the bottom.

      But the real question is the temperature anomaly. Does the temperature at the top and the bottom of the hill change in sync? Yeah, pretty well. The correlation between them is pretty high.

      And that holds across most of the country. Temperature stations that are a few hundred miles apart still have very well-correlated anomalies, though I expect things like lakes and mountain ranges may tend to interfere with this.

      Also, in searching for data on this, I found this past post from Zeke:
      http://rankexploits.com/musings/2013/correlations-of-anomalies-over-distance/

      • Windchaser,

        I am not sure using anomalies as proxies for temperature simplifies the matter. I understand they give results more in line with what the consensus measuring them expect, but I think the prospect of determining actual average temperature in one given location is more complex than plotting anomalies.

        In prior blog threads some time ago I asked whether there was any experimentation to determine the accuracy and precision of anomalies as a proxy for temperature. Did anyone ever take actual hourly temperature readings at a range of sites over a period of time and compare them to the average inferred from the anomalies? How do you know how accurate the long term temp trend against which you are calculating the anomaly is?

        At any rate, my questions are not about what is the best way to determine temperature trends. My questions are about whether any of the methods give the accuracy and precision claimed by those reporting them.

      • Windchasers

        GaryM:

        I am not sure using anomalies as proxies for temperature simplifies the matter. I understand they give results more in line with what the consensus measuring them expect

        We don’t use anomalies as a proxy for temperature. Rather, we use the anomalies to show how the temperature has changed.

        It’s actually somewhat difficult to define the average temperature of a region, because of things like the changes in temperature with elevation over even short distances. But it’s a bit easier to define the average anomaly, and besides, this shows us what we’re concerned with – how the temperature changes over time.

        How do you know how accurate the long term temp trend against which you are calculating the anomaly is?

        Whoa, anomalies aren’t calculated against long-term trends, but against a baseline temperature.

        If you subtract some temperature X from the temperature record, you get the anomaly: the temperature relative to some baseline temperature X. If you subtract the linear trend, though, you get something else entirely: the detrended data, which shows you how the temperature diverges from the trend. It’s not really that useful in comparison.
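
        A minimal sketch of that distinction with made-up numbers (subtracting a fixed baseline versus subtracting a fitted trend):

        ```python
        # Anomaly vs. detrended data, on a synthetic temperature series.
        import numpy as np

        years = np.arange(1950, 2021)
        temp = 10 + 0.02 * (years - 1950) + np.random.default_rng(1).normal(0, 0.3, years.size)

        baseline = temp[(years >= 1961) & (years <= 1990)].mean()   # reference-period mean
        anomaly = temp - baseline                                    # shifts the series; trend is kept

        slope, intercept = np.polyfit(years, temp, 1)
        detrended = temp - (slope * years + intercept)               # removes the trend entirely

        print(round(np.polyfit(years, anomaly, 1)[0], 3))    # ~0.02 (trend preserved)
        print(round(np.polyfit(years, detrended, 1)[0], 3))  # ~0.0  (trend removed)
        ```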

      • Windchasers,

        The results are reported as “average temperature” according to figure 1 in the main post. The plot shows a trend, but it is trend of temperatures.

        As for what is used to determine an anomaly, I know Mosher hates it when people link to those dang internet sites, but:

        “The term temperature anomaly means a departure from a reference value or long-term average.”

        http://www.ncdc.noaa.gov/monitoring-references/faq/anomalies.php

        The underlying question is not whether anomalies are consistent over large distances, but whether average temperatures are, because that is what is being sold to the public as the basis for public policy. That is why I refer to anomalies as a proxy of average temperatures, and why I ask if there is any research confirming their accuracy and precision as proxies.

        I have read the arguments behind their use, but I have not seen any testing to verify them. Not saying there isn’t any, just that I haven’t seen it. (And I don’t mean statistical comparisons to model generated data, I mean comparisons to actual temp measurements.)

      • Windchasers

        The results are reported as “average temperature” according to figure 1 in the main post. The plot shows a trend, but it is trend of temperatures.

        Aye. It’s the spatial average, and it shows a temporal trend, with temporal anomalies. (Note the y-axis label.)

        “The term temperature anomaly means a departure from a reference value or long-term average.”

        Aye. So you get the anomaly by subtracting a reference value. I just want to distinguish that from subtracting the trend.

        The underlying question is not whether anomalies are consistent over large distances, but whether average temperatures are,

        The temporal averages definitely aren’t consistent over long distances. The average yearly temperature in Winnipeg is pretty different from the average yearly temperature in Miami.

        The spatial averages? Well, they’re spatial averages, so it doesn’t make sense to talk about how they vary in space. The number is derived for an entire region. The average US temperature is the same no matter where you go. You could be in Moscow, and the average US temperature would still be the same.

        I feel like I’m missing your point. The anomalies aren’t proxies for temperature in the same way that, say, the tree ring data is. The anomalies are just the temperature data with some number subtracted from the entire temporal series. Calculating the anomaly just shifts the entire temperature chart up or down, and doesn’t change how the temperature changes with time.

    • Steven Mosher

      Gary.

      Are your questions about NOAA or BEST?

      if you can be specific then zeke or I can answer or get an answer.

      • Steven,

        Either one. I would be interested in the answers as to any data set.

      • Steven Mosher

        I will answer on BEST

        Question 1: Are the numbers above correct, or even close, as far area covered per station?

        Area “covered” by a station varies widely across the surface of the earth.
        In some places the stations are dense (say on average 20 km apart); in other places (the South Pole) they are sparsely sampled.

        Question 2: Don’t urban stations require more and broader adjustments, not just for UHI, but in general?

        The UHI effect (ON AVERAGE) is much smaller than people imagine.
        Part of the reason is that the media and literature have focused on UHI max rather than UHI mean.
        In terms of adjustments I haven’t looked at the number of adjustments for urban versus rural. More generally I just eliminate all urban stations and look for a difference.

        Question 3: Are urban stations weighted differently because of their proportionally greater number in determining trends?

        “Urban station” is a misnomer. There isn’t a clear or validated way of categorizing urban versus rural. Several methods have been tried.
        Rather than a categorical scale I prefer a continuous scale.
        For example, rather than saying, as Hansen does, that urban = population greater than X, whereas rural = population less than X, it makes more sense to just use population as a continuous variable.
        So there isn’t any specific weighting applied on the basis of “urbanity”.
        What we did was A/B testing: two piles, one urban, the other rural.
        No difference. (A rough sketch of that kind of comparison follows at the end of this comment.)

        Question 4: Are stations not “within a few kilometers” of several others, ever similarly adjusted, and if so, how?

        There isn’t an adjustment.
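
        Here is the rough shape of the A/B comparison mentioned under Question 3, on synthetic data with a hypothetical population threshold; the real analysis is considerably more careful:

        ```python
        # Toy urban/rural A/B comparison on synthetic station data.
        import numpy as np

        rng = np.random.default_rng(2)
        n_st, n_yr = 200, 50
        years = np.arange(n_yr)
        population = rng.lognormal(8, 2, n_st)              # hypothetical station populations
        trend = rng.normal(0.02, 0.01, n_st)                # per-station trend, same distribution for all
        series = trend[:, None] * years + rng.normal(0, 0.3, (n_st, n_yr))

        urban = population > np.median(population)          # crude, hypothetical split
        trend_urban = np.polyfit(years, series[urban].mean(axis=0), 1)[0]
        trend_rural = np.polyfit(years, series[~urban].mean(axis=0), 1)[0]
        print(round(trend_urban, 3), round(trend_rural, 3)) # similar values: "no difference"
        ```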

  78. GaryM,

    To answer some of your questions, the homogenization process uses the full co-op network (~8,000 total stations) rather than just the USHCN stations (1218 total) to detect breakpoints. It also only covers the conterminous U.S. (not Alaska and Hawaii). For all but the very early part of the record (pre-1930s), there are multiple nearby analogues for pretty much every station.

    • Zeke Hausfather,

      Thanks for the answer. Even using 10,000 stations, that seems like it would be an average of roughly 960 square kilometers per station.
      I still don’t see how each station can have several others within a few kilometers, other than urban stations.

      And are you saying that stations that are not suitable for inclusion in the reported average are used to homogenize those that are?

      • Well, USHCN is a subset of the larger co-op network, where the primary criterion for inclusion is simply a long continuous record. There is nothing wrong with the rest of the co-op network per se; most of the stations just have much shorter records. Still quite useful for breakpoint detection.

        “A few kilometers” might be putting it a bit too strongly, but there are generally many stations within, say, 50 kilometers of any given station. You don’t really expect long-term climate changes to manifest as localized effects unless they are related to some change in the local conditions. In that case, they are best not used to create a regional average, as you’d end up overweighting some localized change, be it due to vegetation change, instrument change, station moves, etc.

    • Is there a unique signature of thermometer type, say (Tmax-Tmin)/(Tmax+Tmin), that independently shows when transitions occurred?
      I don’t know how you can make these adjustments to individual stations unless you know when the transitions occurred.
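
      One rough way to look for such a signature, sketched on synthetic data (the 0.5 degC step and its date are hypothetical), is to track the diurnal range over time and test for a shift:

      ```python
      # Toy instrument-transition check: does Tmax - Tmin step at a known date?
      import numpy as np

      rng = np.random.default_rng(3)
      tmax = 20 + rng.normal(0, 1, 600)          # 50 years of synthetic monthly Tmax
      tmin = 10 + rng.normal(0, 1, 600)          # and Tmin
      tmax[360:] -= 0.5                          # hypothetical MMTS-style step in Tmax only

      diurnal_range = tmax - tmin
      before, after = diurnal_range[:360].mean(), diurnal_range[360:].mean()
      print(round(float(before - after), 2))     # ~0.5: a candidate breakpoint signature
      ```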

  79. Thanks Zeke Hausfather. I found the post valuable. I paused at this though:
    “…If one station is warming rapidly over a period of a decade a few kilometers from a number of stations that are cooling over the same period, the warming station is likely responding to localized effects (instrument changes, station moves, microsite changes, etc.) rather than a real climate signal.”

    I see it’s an attempt to find errors. However what if the suspect station is a boundary one or affected by geography? Coastal compared to inland, high elevation compared to not so high, river valley compared to flat plain, many lakes versus few, forest land versus farmland, or on the Canadian border. Figure 7 shows a balanced result and is good for illustrating homogenization.

    A station location may react differently to non-error changing conditions and that’s where it gets interesting. The rich variety of the system meets the average and we’re after the real climate signal.

    • Steven Mosher

      “I see it’s an attempt to find errors. However what if the suspect station is a boundary one or affected by geography? Coastal compared to inland, high elevation compared to not so high, river valley compared to flat plain, many lakes versus few, forest land versus farmland, or on the Canadian border.”

      Very good question.

      In some approaches the procedure that finds the errors is sensitive to
      geographical differences.

      I can speak to Berkeley earth

      A) Coastal compared to inland: this is a potential issue.
      We are working on an improvement; however, the cases where it could cause
      a problem are small. The biggest effect of being by the coast
      is a suppression of variance. The effect drops off exponentially and is
      gone by about 50 km or so.
      B) High elevation compared to not so high: fully accounted for.

      C) River valley, etc.: the more important geomorphic types to be concerned
      about are mountain valleys and cold drainage areas. It’s a nasty problem
      as the DEM required is huge. Luckily these areas are small and isolated,
      but users find them and complain to me.
      D) Lakes: I looked at this extensively and could not find any statistically meaningful effect. I know it’s there. Haha. Just can’t find it.
      E) Land type: I have the data to assess this. Nothing has jumped out,
      but the historical metadata is low resolution (5 minute data).

  80. Zeke

    Thanks for the effort you have put into this. I will look forward to the next two articles so that the material in this post can be put in context.

    When you have completed all three would it be possible to then issue the series as one PDF suitably topped and tailed?
    Tonyb

  81. Trying to follow this narrative is like playing a game of intellectual “whack-a-mole.”
    I am willing to accept the Occam’s Razor claim and not impute malicious motives. However, I know of no other field (e.g., medicine or flight test) that would allow use of infilled, estimated, or “zombie” data, particularly without either identifying it as such or putting error bands around it.
    Adjusting data more than once indicates there is little confidence in the adjustments that were originally made. If you no longer believe in the original adjustments, why should I have any confidence in the latest adjustments?
    My 40 years experience in data acquisition and analysis leads me to believe that “best practices” are not being used.
    Can the defenders find nothing wrong with the methods and processes being used?
    Perhaps the strident defenders of the status quo should also accept an Occam’s Razor claim and not impute malicious motive to those not satisfied with the explanations they are being given.

    • Steven Mosher

      up until recently I would have agreed that best practices are not being used.

      however, the testing regime currently in use and the papers being published on the process have made me change my mind.

        See Zeke’s forthcoming paper.

      • Do you believe that those “best practices” in the climate arena would qualify as “best practice” in the medical or flight test world?

      • Rud Istvan

        I follow all the math and statistics arguments. I cannot fault Zeke’s logic. But it is still possible to challenge underlying assumptions, which you do not, since the outcomes (more than just USHCN) do not pass common sense tests. For a graphic example, see Joe D’Aleo’s Maine history from NCDC Drd964x 2013 compared to newly revised nClimDiv 2014, posted at WUWT per this kerfuffle last week. HUH?!? Both charts officially NOAA labeled, and less than one year apart. Maine went from no AGW to lots of AGW on ‘official’ government provided charts.
        For a closer to home example, BEST station 166900 was changed from basically no trend raw to modest ‘expected’ warming. Your only reply has been to distinguish actuals from BEST ‘expectations’ and not explain why 26 months of cold extremes were rejected by your BEST algorithm (according to BEST’s own information) because they did not agree with your modeled ‘regional expectation’. To repeat again, 166900 is the US Amundsen-Scott research station established in 1957. The most expensive, scientifically important station in the world. Your algorithm rejected 26 months of its reported temps because they did not agree with your model ‘expectation’. Words are off your website. Now, the nearest equivalent Antarctic station to compare is US McMurdo, roughly 1300 km away and roughly 2700 meters lower along the Antarctic coastline where it can be resupplied by icebreaking ships. Your notion of a region is flawed, as is your BEST process. It only takes one example to falsify an algorithm. There it is. Deal with it, preferably in less than a cryptic brush-off ‘read the literature, all of it’ style. Because I have read it all. And you still fail.

      • Matthew R Marler

        Rud Istvan: But it is still possible to challenge underlying assumptions, which you do not, since the outcomes (more than just USHCN) do not pass common sense tests. For a graphic example, see Joe D’Aleo’s Maine history from NCDC Drd964x 2013 compared to newly revised nClimDiv 2014, posted at WUWT per this kerfuffle last week. HUH?!?

        It’s the “expected value” of the conditional distribution of the true values for that locality, given all of the data, evidence about each thermometer’s bias and random variation, and testable assumptions about the site-to-site random variation (heteroskedastic Gaussian, most likely). Nothing in statistics is common-sensical, because neither the generalities of nature nor the random variation are closely matched by our common sense. The elements of Bayesian inference are explained in the text by Francisco Samaniego called “A comparison of Frequentist and Bayesian methods of estimation”; and in the text by Rob Kass, Uri Eden, and Emery Brown called “Analysis of Neural Data” (which has a larger exposition of analyses of time series records).

        In short, the Bayesian posterior mean has the smallest achievable expected squared error ( [true value – estimate]^2 ) of all estimators, the exact improvement depending on how much data there are, how accurate the individual records are, and how closely the distributions of the random components are approximated by the mathematical assumptions.
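
        For a single site with known variances, the textbook normal-normal case makes the shrinkage explicit (this is the generic formula, not the specific BEST implementation):

        ```latex
        % data y ~ N(theta, sigma^2), regional expectation (prior) theta ~ N(mu, tau^2)
        \hat{\theta}_{\mathrm{post}}
          = \frac{\tau^{2}}{\tau^{2}+\sigma^{2}}\, y
          + \frac{\sigma^{2}}{\tau^{2}+\sigma^{2}}\, \mu,
        \qquad
        \operatorname{Var}(\theta \mid y) = \left(\frac{1}{\sigma^{2}}+\frac{1}{\tau^{2}}\right)^{-1}.
        ```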

        D’Aleo’s selection of a seemingly bad outcome expresses the same naive view that lots of people have when viewing statistics: a treatment that improves almost everyone’s symptoms will seem to have made some selected person worse. Does the drug work as desired or not? Well, the existence of contrary cases shows that there is more to be learned, not that the statistical analysis was wrong or that the drug does not work. Same here: the Bayesian hierarchical modeling improves the estimate of the nationwide trend and of almost every local trend. That some trends do not seem to have been improved is, in this case, evidence that there is more to be learned, probably something about that locale.

        You asserted that someone (Mosher?) did not challenge the underlying assumptions. Actually, the BEST team have reported lots of challenges.

      • Matthew R Marler

        Rud Istvan: It only takes one example to falsify an algorithm.

        That is false. The most that one example can show is that the knowledge is not perfectly reliable, not that the algorithm used doesn’t achieve the best attainable estimate.

        If you have substantial evidence that the uncorrected record of a locale is exceptionally reliable, you can change the algorithm by fiat: reduce the size of the variance estimate of that site. But you need substantial evidence. Merely declaring yourself satisfied with the uncorrected version isn’t sufficient. I should note that if the error variance in one locale is sufficiently close to 0, the algorithm will not change its value by much: the posterior mean will be nearly exactly equal to the data.

        More detail can be found here: Kass, R.E. and Steffey, D. (1989). Approximate Bayesian inference in conditionally independent hierarchical models (parametric empirical Bayes models). Journal of the American Statistical Association, 84: 717–726.

        Plus, you can look up “Kriging” in many books that cover spatial statistics or multivariate time series.

      • Steven Mosher

        “It only takes one example to falsify an algorithm. There it is. ”

        Ah, no. The algorithm is a prediction with error bounds. With 40,000 stations and millions of data points, we fully expect a bunch of them, but not too many, to fall outside the error bounds.

        simple stats rud.

      • Steven Mosher

        “PMHinSC | July 7, 2014 at 5:43 pm |
        Do you believe that those “best practices” in the climate arena would qualify as “best practice” in the medical or flight test world?”

        No. fields develop best practice over time based on the interaction with customers.

        different fields, different customers, different practice.

        of course you can learn things from others

  82. As temperatures have warmed, the past has cooled. Temperatures have been level for a long time, and may be beginning to cool. So, if current temperatures cool, will the past warm back up? Just curious.

  83. All very interesting, but it seems with so much data and so many adjustments, there still isn’t much credibility in CAGW science, and therefore no need for the war on CO2 or wasting trillions of dollars.

  84. RobertInAz

    Has anyone done a study of the independent proxies that would validate the US temperature record? I read a lot of anecdotes. Where have growing seasons changed? Where has the agricultural mix changed?

    I would think there would be ample independent confirmation the current temperature is significantly warmer than the early 20th century.

  85. Alexej Buergin

    A thermometer shows Tmax and Tmin during the last period of observation, usually 24 hours. It is an easy job to determine which number belongs to which day. When TOBS is changed, an additional effort may be needed on the day of change. But afterwards it is just the same as before.
    So where is the problem?

    • Steven Mosher

      1. read the papers
      2. get some hourly data and study the problem
      3. wait for the rest of the series.

      Not that hard.

    • Windchasers

      When TOBS is changed, an additional effort may be needed on the day of change. But afterwards it is just the same as before.

      There are two types of TOB.

      1) Changing the time of observation. Can result in an extra half-day or so of temperatures being ascribed to the wrong day and month, which can be particularly significant during spring and autumn, when temps are changing the fastest.

      2) Bias from double-counting a Tmax or Tmin. E.g., if you record the temperature at 4 pm today and it’s 100 degrees, then reset the thermometer, the running Tmax for the new period starts out at that 100 degrees. Say you come out the following day and the actual high was only 90 degrees; the thermometer will still show 100 degrees as Tmax, carried over from the previous afternoon. The Tmin will be unaffected. You’ve double-counted the max temperature.

      Recording in afternoons makes for double-counting hot Tmax, while recording in mornings makes for double-counting cold Tmins. Counting halfway in between or so is best (around noon or midnight).
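
      A minimal simulation of that double-counting, using synthetic hourly temperatures and a hypothetical 5 pm reading time:

      ```python
      # Toy TOB simulation: compare mean Tmax over observation days that begin at an
      # afternoon reset versus calendar (midnight) days, on synthetic hourly data.
      import numpy as np

      rng = np.random.default_rng(4)
      hours = np.arange(24 * 60)                                    # 60 days of hourly data
      daily_base = 20 + rng.normal(0, 5, 60).repeat(24)             # day-to-day variability
      temps = daily_base + 8 * np.cos(2 * np.pi * ((hours % 24) - 15) / 24)  # peak near 3 pm

      def mean_tmax(reset_hour):
          # 24-hour "observation days" beginning at the reset; the warm reading left
          # on the instrument at an afternoon reset carries into the next day's max
          windows = np.roll(temps, -reset_hour)[: 24 * 59].reshape(59, 24)
          return windows.max(axis=1).mean()

      print(round(float(mean_tmax(17) - mean_tmax(0)), 2))   # positive: 5 pm resets run warm
      ```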

      • Alexej Buergin

        You are so right, and it is so obvious that surely the people doing the job in, say, the year 1900 were already aware of it.

      • Alexej Buergin

        And the easiest way to correct (or not to have) the problem would be to reset the maximum-thermometer in the morning and the minimum thermometer in the evening; needs two visits, though.
        Fahrenheit must have thought of that.

      • Steven Mosher

        windchaser has read the papers.

        you guys should listen to him.

        he is fluent in these matters.

  86. Zeke, there do appear to be times when adjustments seem to go too far, such as converting a cooling trend into a warming trend. Is this incompetence? Fraud? A berserk computer program? Can you justify this?

    https://wattsupwiththat.files.wordpress.com/2014/05/hansen-giss-1940-1980.gif

  87. Scott Basinger

    Great post, thanks for doing this Zeke.

  88. Having worked with many of the scientists in question, I can say with certainty that there is no grand conspiracy to artificially warm the earth.

    And, I can say, there is conspiracy in consensus. “Lamont made the same statement, you don’t use consensus if you have a proof.” ~Richard Lindzen

    • A fan of *MORE* discourse

      wagathon proclaims “There is conspiracy in consensus.” One thing is for *SURE* … scientists of all ages, genders, nationalities, and persuasions are united in wanting *NO* part of Wagathon’s Consensus-Conspiracy.

      Question  How many Climate Etc readers have ever visited wagathon’s web-site “evilincandescentbulb”? Yikes. There is abhorrent material there.

      Conclusion  Scientists and (rational) skeptics and voters alike utterly reject the anti-science willfully ignorant extreme-ideology consensus of “Planet Wagathon”.

      Fortunately.


    • A fan of *MORE* discourse

      Fixed indenting (hopefully) …

      wagathon proclaims  “There is conspiracy in consensus.

      “One thing is for *SURE* … scientists of all ages, genders, and nationalities want *NO* part of Wagathon’s Consensus-Conspiracy

      Question  How many Climate Etc readers have ever visited wagathon’s web-site “evilincandescentbulb”? There is abhorrent material there.

      Conclusion  Scientists and (rational) skeptics utterly reject the toxic extremist consensus of “Planet Wagathon”.

      `Cuz there are better planets.



  89. catweazle666

    that critical analysis should start out from a position of assuming good faith and with an understanding of what exactly has been done.

    Assuming?

    I thought you lot were supposed to be scientists.

    • OK, I’ll bite. Assume the contrary: that every word Zeke says, and all the other climate scientists as well, is a lie. And why stop there? Why isn’t Dr. Curry a liar, too? Why assume her good faith? Heck, why assume that your ISP hasn’t altered the contents of your incoming and outgoing messages? How can you know that the words I am responding to are even the ones you wrote?

      You can assume bad faith and conspiracy theories and drive yourself nuts (if you aren’t already there), or you can assume that a conspiracy as vast as the one that would be needed to cook the climate science books would have overtly manifested itself by now.

      Your choice. Lewandowsky awaits.

    • If I am thinking of buying shares in a company, the last thing that I want to hear is that the auditors started from an assumption of good faith.

      Auditors should start with no assumptions about “faith”, good or bad.

      • Steven Mosher

        I think what Zeke means is clear.

        skeptics start with a belief that adjustments DEFINED IN 1986 are somehow suspect because of climategate.

        good faith means assume no evil intention.

        you have no evidence these guys, these PARTICULAR GUYS,
        had evil intentions.

        so look at the work, not who did it

        good faith

  90. The essence of temperature adjustments is to create a model of temperature data. There is nothing inherently wrong with doing this IF the model is validated against actual temperature measurements. It should be standard practice to regularly collect samples of actual temperature data and compare them to the estimated values. Clearly if there are errors one must question the accuracy of the model. At a minimum these discrepancies should be reported so there is full awareness of them. Science is the investigation of what is true. It is not and should not be a process of fabricating what one wants to justify.

    • Steven Mosher

      To validate estimations we hold out samples.
      To test the robustness of correction algorithms, they are tested against synthetic series in a double-blind fashion.
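
      A toy illustration of both checks on synthetic data (not the actual NCDC or Berkeley test harness, which is documented in the benchmarking papers):

      ```python
      # (1) hold out a station and predict it from neighbours;
      # (2) plant a known artificial break and see whether a simple detector finds it.
      import numpy as np

      rng = np.random.default_rng(5)

      # (1) Hold-out check
      neighbours = rng.normal(0.5, 0.1, (10, 40))        # 10 nearby stations, 40 years of anomalies
      withheld = 0.5 + rng.normal(0, 0.1, 40)            # the station we pretend not to have
      prediction = neighbours.mean(axis=0)
      print("holdout RMSE:", round(float(np.sqrt(((prediction - withheld) ** 2).mean())), 3))

      # (2) Synthetic-break check: insert a -0.3 step and locate the largest mean shift
      series = rng.normal(0, 0.1, 80)
      series[40:] -= 0.3                                  # known, planted inhomogeneity
      shifts = [abs(series[:k].mean() - series[k:].mean()) for k in range(10, 70)]
      print("detected break at year:", 10 + int(np.argmax(shifts)))   # should be near 40
      ```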

      • Why not test estimations against actual data? I asked a similar question above.

        If there is a change in equipment, leave the old equipment in place for a year at say 100 sites. Keep series of measurements from both sets of equipment. Have one researcher prepare the correction based on the difference between the two types of equipment at 50 sites. Use that correction on the data of the old equipment at the other 50 sites, and then compare it to the actual measurements there of the new equipment. If they match, you know you can have real confidence in your correction. The same could be done with time of observation changes, station moves, etc.

        The real, essential problem I have with temperature reports, GCMs, paleo-climate and much of the rest of climate science is that they rely on statistics for validation, rather than comparison to actual data. If you once test a proposition, correction or model against actual data, and it proves accurate and precise enough, it would inspire a lot more confidence when you use it elsewhere.

        (Actually, I should correct myself in one instance. We are now able to test the consensus’ GCMs against 17 years of the consensus’ temperature data. And the results are not impressive. That divergence is starting to make the one hidden by Mike’s Nature trick look like a hiccup policywise.)

        But statistics is full of assumptions, Bayesian priors, estimated trends and the like. Such a process might well provide results useful for some purposes. But climate science is being used to push for massive public policy initiatives with enormous costs and negative economic impacts.

        “Trust us, we compared our results against our synthetic data,” is not good enough under those circumstances. The policy question isn’t whether your corrections and algorithms are the best available. It’s whether they are as precise as you claim, for the purpose for which you offer them.

      • Windchasers

        GaryM,

        Why not test estimations against actual data? … The same could be done with time of observation changes, station moves, etc.

        Essentially, that’s what’s being done (at least for TOB; not sure about the others). Once we have hourly readings, we can actually verify what the TOB is, rather than just estimating it. Then we apply it to the old data.

        see a different comment on this thread:
        http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-605662

        When it comes to station moves, it’s generally sufficient just to show that the anomalies are well-correlated. If the temperature at the bottom of a hill is f(t) and the temperature at the top of the hill is f(t) + X, i.e., a correlation of 1, either one will do just fine for use in constructing the national anomaly.

        If you only had those two data points, and you move the station sharply, with no overlap, yeah, you’re going to have problems correlating their anomalies. But IIUC, there’s usually another station nearby with a contiguous record, and you can cross-check the correlation of the hill and valley stations with the correlation of the stationary station, using that other station to check the correlation of the two stations with each other. We’ve over-sampled the US for temperature measurements, so this isn’t a big problem.

      • Steven Mosher

        “If there is a change in equipment, leave the old equipment in place for a year at say 100 sites. Keep series of measurements from both sets of equipment. ”

        That was done for the MMTS change. It’s also being done for CRN.

        Also, you don’t understand what holding out data means.

        And you don’t understand why you have to test with synthetic data AS WELL.

  91. Zeke,

    “This post will focus primarily on NCDC’s adjustments, as they are the official government agency tasked with determining U.S. (and global) temperatures. The figure below shows the four major adjustments (including quality control) performed on USHCN data, and their respective effect on the resulting mean temperatures.”

    Thank you for the effort. This is a very helpful post.

    I do have one ‘curiosity’ question at this point: considering the tasked responsibility quoted here and considering the process as outlined in your Figure 4, does the NCDC have a formal quality assurance program for this task? If there is a (USHCN) quality assurance program, then is there a site or repository where that material is brought together? In a sensitive, high profile endeavor such as USHCN I would expect QA to be very visible, for example, touting the quality implementation of an appropriate and structured approach to getting what is needed. ;o) [I am aware of USHCN ORNL/CDIAC-118 NGP-070 and TD-3200. While they certainly provide some of the information one expects to find in a QA program, they do not constitute one and leave open/evoke questions about QA, e.g., extent, content, and frequency of human review, signatures, external audits, QA program documentation, etc.]

    Thanks again for your effort.

    • All I am looking for is a simple yes or no: is there a formal USHCN quality assurance program in place?

    • Moot question…times are changing. Really excellent recent postings, Zeke and Mosh. Made me think, search and read for a couple of days on a topic I’ve been happy to ignore — geostats much more interesting. USHCN is a convenient sandbox dataset with which to hone tools and to explore but has its limits.

  92. Zeke,

    Thanks for the detailed analysis!

    I had to stop reading at Figure 8, though. You talk about the Tmax adjustment pre-1980, and the Tmin adjustment for the entire graph, but don’t really mention the apparent +0.4 degree adjustment from 1980 to present. I’d rather read a couple of paragraphs on that than two more parts to the series.

    That seems to address the entire debate in a single graph on a small part of the overall topic: a Tmin adjustment dip in the 1940’s would lower the past’s average, while a Tmax adjustment that zooms upwards from the 1980’s would obviously raise the present’s average substantially. A conspiracist’s dream in a single graph and so few words spent explaining it.

    • Hi Wayne,

      The big post-1980s adjustment in maximum temperatures in figure 8 is mostly a correction for the ~0.5 C cooling bias introduced by moving from LiG to MMTS or ASOS thermometers. As I mentioned in the article:

      “While fairly symmetric in aggregate, there are distinct temporal patterns in the PHA adjustments. The single largest of these are positive adjustments in maximum temperatures to account for transitions from LiG instruments to MMTS and ASOS instruments in the 1980s, 1990s, and 2000s.”

      • Zeke, Thanks, I hadn’t focused on that. I did some reading on NOAA’s site, and it seems to indicate that: 1) many of the discrepancies were associated with snow cover on the ground conditions, 2) lows were raised almost as much as highs were lowered — something not indicated in Figure 8 — and 3) the highs were lowered less than 0.5 C.

        A more detailed discussion of the whole LiG to MMTS/ASOS move correction still seems to be the crux of the matter. If lows truly were raised almost as much as highs were lowered, the average would pretty much be nearly the same, or perhaps more like a +0.1 C correction.

  93. I don’t believe in global or national temperatures – raw, cooked or improved – and don’t want to be shown them, especially in graphic format. They’re like Dr Johnson’s walking dog, interesting because attempted, not because they serve.

    Graphs can be handy but they are naturally rigged for facile belief. Give me what you have, and, in the case of Australia, don’t connect and average together massively diverse climatic zones just because of current political boundaries. And don’t assume about huge areas of ice or desert which had no measurements because they had no people to measure. I find all that particularly silly.

    In the language we share, just tell me what you know, however shabby, however poor. And tell me what you don’t know, however vast.

    • +1.

      The BoM’s mysterious and freshly minted “national temperature” metric is a case in point.

      Doing some sort of calculation based on where weather stations were historically located, across an entire continent, is BS which is still in the air before hitting the ground.

    • “I don’t believe in global or national temperatures.”

      I don’t either, and for many of the same reasons, at least as far as the tenths of a degree precision claimed. I don’t think anyone can tell the average temperature of Illinois on a given day, let alone CONUS or the entire Earth. And I reject the notion that you can get a trend to within tenths of a degree when you can’t get your starting data points with that precision.

      But I appreciate the efforts of Zeke Hausfather to explain why he disagrees. Mosher too when he wakes up on the right side of the bed. I’ve been wrong about enough important things in my life that I always am open to the possibility of being shown I am wrong again. It’s not happening here so far to my mind, but I’m open to listening.

      The fact that progressives, including the warmists, are incapable of critical analysis of their own positions is no reason for us to follow suit. So as long as they continue arguing for their knowledge of “global average temperature,” I’ll keep listening and asking questions.

      I disagree with the claims made about average temperatures. But I find that engaging on the issue and listening to the other side is the best way of addressing it. It helps me understand my opponents’ position. If what he says does not change my mind, I am better equipped to argue against him in the future. If he does, well the benefits of that are obvious, I can simply change my position. Either way, it helps to understand the other side.

      This is how I work as a litigator as well. I try to understand, make my opponent’s arguments, and criticize my own positions as I think he should. It is a practice that has resulted in the settlement of many cases; and I can’t remember the last time I was surprised by an opponent in court. I even oftentimes find myself disappointed that the other side did not make the arguments, or present the evidence, that I would have if I had been representing them. It is a useful practice.

      (This is not about the dishonesty of a Gleick or Mann, or the data sets Joanna is referring to, but rather the subject of this thread, and the efforts of Zeke Hausfather in particular.)

      • I guess what I’m saying is that numbers are pretty dumb on their own and of limited value at other times. A Great Australian Temperature is a pretty silly thing for reasons which should be screamingly obvious, but I guess it’s a harmless enough bit of fluff compared to other confections.

        Talk all you like about the terrors of ENSO, Eastern Australia’s deadliest year for heat was a La Nina year (1939) flanked by neutral years. In spite of assumptions about the PDO, our longest drought (though not our worst) occurred between the late 1950s and late 1960s. This does not mean that the work of Walker and Mantua is not valid or of great value. It just means that data is pretty useless unless you use your loaf while handling it. Putting data in the hands of the mechanists and literalists has proven to be an intellectual catastrophe.

        Got one of those great lumps of meat called a human brain? Drop the joystick and use it.

      • Windchasers

        GaryM,
        I appreciate your process. That’s actual critical thinking, when you think through both your position and your opponents’, and find the flaws in each.

        And I reject the notion that you can get a trend to within tenths of a degree when you can’t get your starting data points with that precision.

        If you have a solid understanding of the distribution of the errors and you have enough data points, it’s actually pretty straightforward. Though of course, the greater the range on the errors, the less certain the trend will be. But the fact that we’re averaging over a large area helps quite a bit, reducing the error of the average substantially compared to the errors of the individual stations.
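        To make that point about averaging concrete, here is a minimal toy sketch of my own (the station count, error size and anomaly value are made up, and real station errors are not fully independent), not anyone’s production code:

```python
# Toy demonstration (not any agency's method) that averaging many stations
# shrinks the error of the mean roughly like sigma / sqrt(N), assuming
# independent per-station errors.
import numpy as np

rng = np.random.default_rng(0)
true_anomaly = 0.6   # hypothetical regional anomaly, deg C
sigma = 0.5          # assumed per-station measurement error, deg C

for n_stations in (1, 10, 100, 1000):
    # 10,000 simulated realizations of an n-station network mean
    readings = true_anomaly + sigma * rng.standard_normal((10000, n_stations))
    spread = readings.mean(axis=1).std()
    print(f"{n_stations:4d} stations: spread of mean = {spread:.3f}, "
          f"theory ~ {sigma / np.sqrt(n_stations):.3f}")
```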

        I don’t think anyone can tell the average temperature of Illinois on a given day, let alone CONUS or the entire Earth.

        Finding the average temperature is a different, harder problem. But usually it’s not relevant, so usually it’s ignored (USHCN notwithstanding). If we mostly care about the trend – and we do – then we don’t need the average temperature.

        Re: average temperature, I’ll go back to my earlier example with a hill. Say you want to know the average temperature of a hilly square mile. And let’s say (for the sake of argument) that the temperature is perfectly correlated across this area – everywhere in this square mile, the temperatures move in lockstep up or down. What’s the average temperature?
        Where the elevation is lower, temperatures tend to be lower. And the type and amount of vegetation can change the temperature, shrub vs grass vs trees vs dirt. There are plenty of different sampling techniques you could apply, to try to get the average temperature across all the terrain and vegetation changes, but suffice it to say that finding the average temperature is going to be a pain.

        But what about the anomaly? Because the temp is perfectly correlated within this area, you only need 1 measurement location to get the temporal anomaly. Considerably easier.

        The point is that getting the average surface temperature requires a lot more sampling, and requires accounting for local spatial changes (topo, vegetative, etc.) that getting the surface temperature anomaly does not.

      • Windchasers,

        “And let’s say (for the sake of argument) that the temperature is perfectly correlated across this area – everywhere in this square mile, the temperatures move in lockstep up or down.”

        Average temperature for an area is hard, but average anomaly for the same area is easy? If I am reading you correctly.

        Your answer parallels what the NOAA site I linked to earlier says. (Item 7 in the list.)

        http://www.ncdc.noaa.gov/monitoring-references/faq/anomalies.php

        The fact that it is easier does not convince me it is more accurate. What you and the NOAA site both indicate is that my point that determining average temperature locally, let alone globally, is extremely difficult is correct.

        I have a lot of difficulty accepting the argument that a trend in anomalies gives you the same result with more accuracy. Just as I have difficulty accepting the consensus argument that it is easier to more accurately predict temperatures 100 years out than 10.

        To calculate an anomaly, you need an average to start with. I don’t see how you avoid dealing with the difficulties in finding an average temperature, when calculation of your anomalies requires you to do so as a first step.

        If it were one station, I don’t suppose it would make much difference, Even if you got the initial average wrong, at least you would be comparing future data against a norm.

        But for numerous stations over a wide area, your initial average must be based on numerous assumptions about the average temp in the first place. And the average would certainly be different in different areas. Which brings us back to the same place we started at. If it is so difficult to determine average temperature for a single location, how is it “easier” to determine the appropriate average for a larger area to compute anomalies from that average?

        The fact that the statistics work out does not convince me that the process is accurate or precise. In fact, Mosher has made the statement in the past that it doesn’t matter if you subtract warming stations, or cooling stations, or stations based on seemingly any other factor. The trend in anomalies stays the same.

        To this layman, this sounds remarkably similar to the fact that Mann’s original model always gave a hockey stick, no matter what data was input.

        The primary problem is that the entire global warming movement is being sold based on telling people that the global average temperature of the Earth is increasing at a dangerous rate. And that this rate is detectable to within tenths of a degree per year, per decade, per century.

        You write “If we mostly care about the trend – and we do – then we don’t need the average temperature.” But average temperature is what is sold to the public. And average temperature is what you need to calculate anomalies, and therefore a trend in anomalies.

        There seem to be just too many assumptions in the whole process to claim that precision.

        I would have no problem if the public were told that “we estimate that the country’s average of interpolated, estimated, krigged, infilled anomalies is increasing by one tenth of a degree per decade,” because then it would be clear that there is a lot more than measurement of temperature going on. And the argument would properly be over the validity of the various assumptions, corrections and estimations. Just as is occurring in this thread.

        But that has not been the public debate. They are told simply that “the average temperature of the US has increased by x tenths of a degree per decade.” Or “this year’s average temperature is one tenth a degree higher than last year’s.” And anyone who dissents from the claims of precision is labelled a denier.

      • GaryM,
        “To calculate an anomaly, you need an average to start with.”
        I don’t know how they do it, but I determine a daily anomaly for each station on min/max temps. Once I do that, I don’t have to calculate an average till I aggregate my station list.

      • Windchasers

        GaryM,
        Average temperature for an area is hard, but average anomaly for the same area is easy? If I am reading you correctly.
        Aye, that’s right. Or at least, the temperature anomaly is easier. It varies less in space than the absolute temperature does.

        The fact that it is easier does not convince me it is more accurate.
        When I say that it’s “easier”, of course I mean that the accuracy is higher, the errors smaller and it’s easier to verify the accuracy. I don’t actually mean that the calculations are necessarily easier.

        Just as I have difficulty accepting the consensus argument that it is easier to more accurately predict temperatures 100 years out than 10.
        It’s not actually easier to predict temperatures 100 years out than 10 years out. That’s not right. It’d be better to say that the error bars on our predictions grow quite rapidly as you look past a few years, and then they settle down into a range bounded by the climatic conditions.
        So 10 years out is easier than 100 years out, though neither has great accuracy. IOW, both will have substantial error bars.

        Of course, it may be easier to predict the average temperature for 70-100 years from now, than it is to predict the exact temperature 10 years from today. But that’s an apples-and-oranges comparison; weather and climate. Over similar time periods and similar areas, shorter-term predictions will be better than long-term (though maybe not much better).

        To calculate an anomaly, you need an average to start with.
        Not as I understand it.
        If I were constructing the US temperature trend, I’d start by identifying the offset that gives the best anomaly correlation between nearby stations. This gives you the best estimate of the normal temperature difference between a pair of stations. Then you can use that to get the average, but you’ve already started by calculating the anomalies first. You have to.

        Why do it this way? Well, if any of your stations move/start/stop, you can’t just average together the temperature data. Going back to the example of a hill, let’s say a station moves from valley to hilltop, with a slight overlap in time, and both locations have a flat temperature trend, like this:
        Valley: 2 2 2 2 x x x
        Hilltop: x x x 4 4 4 4
        Then the average of the two is: 2 2 2 3 4 4 4. That’s not right. We specified that it was a flat trend at the start.

        If we get the mean-based anomalies before we average the stations, then we get:
        Valley anomaly: 0 0 0 0 x x x
        Hilltop anomaly: x x x 0 0 0 0
        Average anomaly: 0 0 0 0 0 0 0.
        Then you can average the offsets (2 and 4), add them back in, and get the actual average temperature: 3 3 3 3 3 3 3. This answer makes sense, at least.
        So stations being added or dropping out is one reason we start by working with the anomalies, not absolute temperatures.
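        Here is a minimal sketch of that valley/hilltop arithmetic in Python (a toy of my own, not BEST’s or NCDC’s actual averaging code): naively averaging absolute temperatures produces a spurious step when the station moves, while averaging anomalies and adding the offsets back in recovers the flat series.

```python
# Toy illustration of the valley/hilltop example above: averaging absolute
# temperatures across a station move creates a spurious trend; averaging
# anomalies does not. Not any agency's actual method.
import numpy as np

valley  = np.array([2.0, 2.0, 2.0, 2.0, np.nan, np.nan, np.nan])
hilltop = np.array([np.nan, np.nan, np.nan, 4.0, 4.0, 4.0, 4.0])

naive = np.nanmean(np.vstack([valley, hilltop]), axis=0)
print("naive average of absolutes:", naive)          # 2 2 2 3 4 4 4 -> fake step

offsets = [np.nanmean(valley), np.nanmean(hilltop)]  # station climatologies (2 and 4)
anoms = np.vstack([valley - offsets[0], hilltop - offsets[1]])
mean_anom = np.nanmean(anoms, axis=0)
print("average anomaly:", mean_anom)                 # all zeros -> flat, as specified
print("reconstructed average temp:", mean_anom + np.mean(offsets))  # 3 3 3 3 3 3 3
```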

        The primary problem is that the entire global warming movement is being sold based on telling people that the global average temperature of the Earth is increasing at a dangerous rate.
        Yep. And that’s the point – we care about how quickly the Earth is warming, not what its average temperature is.
        If you have a function plus a constant, f(x) + c, it increases at the same rate regardless of the value of the constant. Derivatives do not “care” about constant offsets. IOW, the rate of change calculated from the anomaly will be exactly the same as the rate of change calculated from the average.
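        In symbols, that is just the observation that a constant offset drops out of the derivative:

```latex
\frac{d}{dt}\bigl[T(t) + c\bigr] \;=\; \frac{dT}{dt},
\qquad \text{so the trend of the anomaly } T(t)-\bar{T} \text{ equals the trend of } T(t).
```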

        And anyone who dissents from the claims of precision is labelled a denier.
        Not that I’ve seen. Your reasoning and approach are what matter, more than the conclusions you reach.

        If someone hasn’t read the literature, didn’t know about the adjustments, finds out about them, still puts no effort into understanding the science, but says the scientists are fraudsters and the adjustments are wrong, he may well get called a “denier”.
        If someone hears about the adjustments, reads up on them, studies the statistical techniques involved, and finds an error or a missed assumption in the adjustments, and this provides the basis of his skepticism, then I applaud him and thank him for his contribution to the science.

        The difference:
        The first person developed his opinions without sound information, shot his mouth off, and didn’t apply any critical thinking to test his own beliefs.
        The second person went and got educated, thought about the problem, and formed his beliefs on the basis of the best available data.

        I really don’t see too many people take the second approach. But man, those people are a lot more fun to argue with, since they approach the problem rationally and generally have some data behind whatever their beliefs. I learn a helluva lot more from them.

  94. stevefitzpatrick

    Zeke,
    Good post. You have more energy than I do.

  95. John B. Lomax

    Zeke Hausfather: Thank you very much for your excellent article (Understanding adjustments to temperature data) and the references to significant published articles providing more details. To the extent that I could understand most of it, it would appear to me that the adjustments have been well conceived, each having a specific goal to correct what is perceived as an error (not the desired/expected temperature).
    I do not understand the correction for time of measurement. I followed the procedures used and they appeared to achieve a result that met the analysts’ expectations. However, it would seem to me that a raw maximum or minimum temperature is, within the measurement accuracy of the equipment, by definition correct. It needs no adjustment. What is in error is the time of measurement. Even if the station has correctly reported when the readings were taken, we do not know when, in the previous 24 hours, those temperatures occurred. If we are looking over a century in time, does anyone really care which hour or day? Yes, an extreme measurement taken on January 1 could be reported as the extreme for that year when it was actually in the previous year; again do we care?
    Lastly, does everyone else understand that the Tavg probably has no sensible meaning? Sorry, I’m just an engineer.

    John B. Lomax

    • Hi John,

      Time of observation corrections are hard to intuitively understand, and involve essentially double-counting hot (or cold) days in the min or max temperature. The next post in this series (hopefully some time next week) will look at some in-depth examples, taking hourly data from the pristine Climate Reference Network and looking at how the daily and monthly means change based on the observation time.

  96. bit chilly

    many thanks to both zeke and steve for the replies here, this discussion has certainly opened my eyes to some of the issues involved in measuring something that to this layman initially seemed fairly straightforward.

    i will certainly think long and hard before commenting again (it may not alter the stupidity level of my post, but you will know i tried).

    i asked a question up thread that probably got lost in the discussion; now the thread appears to be calming down i will try it again with a slight difference. sorry if this does not make sense, 8 schools in 3 different countries and the attention span of a gnat do not consistent coherence make.

    if the time series was expanded by splitting it down the middle and placing a 500 year manufactured data set in the middle, with 1910 to 1960 data being the first 50 years and 1960 to 2010 the last 50 years, and with the infilled data averaging the mean of the current trend, would the resultant trend begin and end at the same levels after the homogenization, tobs and pha calculations?

    • Not really, because you’d have 500 years of random data in between that (if truly random) would have zero trend. What you are talking about, testing how the algorithms work with synthetic data, was done quite well in Williams et al 2012, which might be worth a read if you are interested: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/williams-etal2012.pdf
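      For anyone curious what “testing with synthetic data” means in miniature, here is a toy sketch of my own (vastly simpler than the benchmarking in Williams et al. 2012, and not the PHA algorithm; all numbers are made up): plant a known step in random data, then check whether a simple split-the-series test recovers it.

```python
# Toy version of testing a homogenization idea on synthetic data: insert a
# known step ("breakpoint") into random data and see whether a simple
# detector (largest difference of means on either side of each candidate
# point) recovers its location and size. Real benchmarking is far more
# elaborate than this.
import numpy as np

rng = np.random.default_rng(42)
n, true_break, true_shift = 120, 70, -0.5      # months, break index, deg C
series = rng.normal(0.0, 0.3, n)
series[true_break:] += true_shift              # the synthetic inhomogeneity

def find_break(x, min_seg=12):
    best, best_i = 0.0, None
    for i in range(min_seg, len(x) - min_seg):
        diff = abs(x[i:].mean() - x[:i].mean())
        if diff > best:
            best, best_i = diff, i
    return best_i, x[best_i:].mean() - x[:best_i].mean()

loc, shift = find_break(series)
print(f"detected break at index {loc}, shift {shift:+.2f} (truth: {true_break}, {true_shift:+.2f})")
```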

      • thanks again for the reply zeke. that was an interesting read, a bit heavy going for a layman but informative nonetheless, particularly as to the scale of the task faced.

        initial points would be, in terms of tobs and instrument changes, surely in these cases what we are looking at is an absolute change to the data only at the point in time of the change.
        so for each station the raw numbers would change by the difference resulting from the tobs change and instrument change, but not the trend?

        whereas in the case of uhi, the trend would indeed change and manifest in an accelerated warming trend?

        i can understand the creation of analog cases, but from gcm’s? as the gcm’s appear to be following a far greater trend than observed, this suggests inputs that bear no relation to what happens in the real world. to be fair this point was addressed, but i did detect an underlying feeling in the paper’s conclusions that the “real” trend would be found to be closer to gcm output if all the issues could be resolved, which hints at confirmation bias at some level.

        whether significant or not would be for those at least capable of proper technical investigation of the methods used, and well beyond me. another small point of note, again to a layman: there are a lot of assumptions being made, understandable in the situation, but hard to reconcile with the apparent confidence levels shown.

        again i appreciate the time and effort required to respond to laymen as well as the more informed posters, very different to the position maintained by others who would do well to follow the example.

  97. WhyNotAdmitYouCannotControlTheClimate

    I stood on the Gulf coast as the hurricane sent bullets of rain against my face and the wind swept me off my feet, I dug in the sand with my fingers trying to hold my ground, I prayed “dear God save me from this monster”. And God answered ” Be of good cheer, I’ll just take a little CO2 out of this mess”, and he did and behold the seas were calm.

    • Shorter version: “I have nothing of value to say.”

      • WhyNotAdmitYouCannotControlTheClimate

        Exactly, but just as valuable as arguing about ticks of temperature that prove absolutely zero. Not worth anyone’s time.

  98. So what we have in this thread is someone making a lot of comments defending a Just So Climate Story. All the adjustments are correct, so the squiggly line describes the climate in some meaningful way, and skeptics are just trying to change the subject, and he was against the adjustments before he was for them, and his objectivity is beyond question, and and and…

    Right.

    Andrew

    • michael hart

      Something like that, Andrew. I stopped reading and, out of curiosity, scrolled down to this point just to see if he’d ever shut up. After long enough diluting points that Zeke or others might be making, I imagine a few other readers may have given up too.

    • Matthew R Marler

      Bad Andrew: So what we have in this thread is someone making a lot of comments defending a Just So Climate Story.

      Do you have a specific claim that it is a “Just So Climate Story”? Most specific criticisms have been adequately addressed many times (in fact, all specific criticisms that I have read so far), leaving nothing but a sort of anti-intellectual residue of bias of some kind.

      • “Do you have a specific claim that it is a “Just So Climate Story”?

        Sure. We have BEST Climate Product Team Spokespeople who claim they can reduce understanding of the history of earth’s complex climates into a squiggly line drawing… of course the only way to do it is complicated and full of assumptions, after-the-fact adjustments and exclusion of adverse data, and the only way it can be done is the way they do it.

        Right.

        Andrew

  99. I look forward to the future TOBS entry.

    Based on my understanding of the issue, past years have been cooled by making negative adjustments to TMax while modern years have been warmed by making positive adjustments to Tmin.

    Past Tmin should be fine without TOBS adjustments, and modern Tmax should be fine without TOBS adjustments.

    So, as a sort of sanity check, perhaps you could produce a comparison for pre-1980 data between TOBS adjusted average temps and Tmin (unadjusted) temps. And then you could produce another comparison between post-1990 TOBS adjusted final data and modern Tmax (unadjusted) temps.

  100. Zeke: I want to thank you very much for taking the time to post here and at Lucia’s about the temperature record. I’m sorry you need to deal with so much dubious thinking about this subject. Anomalies, TOB adjustment and instrument change adjustments make perfect sense to me, as long as you include the uncertainty in the adjustment in the overall uncertainty adjusted output. Perhaps you can answer some of these concerns in your next posts.

    I’m most concerned about the fact that you find a breakpoint that needs correction about once every ten(?) years and that the average breakpoint is 0.5-1.0 degC in magnitude (your Figure 7), each comparable in size to the 20th-century warming you are trying to detect. If a breakpoint is caused by slow deterioration of observing conditions, followed by maintenance that restores original observing conditions, that breakpoint shouldn’t be corrected. For example, FWIW Wikipedia tells me that a Stevenson screen needs to be painted every two years to keep a constant high albedo so that the temperature inside is in equilibrium with air at 2 meters, and not perturbed by some sort of radiative equilibrium with SWR. If all stations were suffering from a slow warming bias and only some of them were being maintained frequently enough to prevent a significant bias from accumulating, pairwise homogenization will transfer that bias to all stations. If you’ve got 10 breakpoint corrections and neighboring stations transfer 0.02 degC of bias with each correction, you’ve got a serious problem.

    I’m also concerned about misapplying the lessons from the US to global record. If I understand correctly, we don’t have much information about TOB and instruments outside the US. Are all adjustments to the global record pairwise homogenization? How many adjustments are being made? How big is the average adjustment? How much does that contribute to 20th-century warming? Do the adjustments create a better or worse fit to the satellite record?

    Thanks.

    • Steven Mosher

      The US is somewhat unique in a systematic change in TOBS.
      The other countries that have a few stations affected are
      Japan, Australia, Norway and Canada.
      But in the US it was systematic.

      good question.

    • ” How big is the average adjustment? “
      I’ve done a post on that here. US GHCN adjustments, in terms of effect on trend, are about 50% bigger than non-US.

      • Alexej Buergin

        How come the people in other countries could (and can?) do better measurements?

      • Some might say that the US has gone for quantity rather than quality.

        But really, the answer is TOBS. Other countries mostly prescribed reset times and stuck to it.

      • Steven Mosher

        Alexej.

        As Nick points out, it’s historical.
        the US started with volunteers.
        TOB was not uniform.
        They changed that.
        Other countries had a better process.
        american exceptionalism

  101.  
    Do we really want to know the truth?

    How badly do we want to know?

    One ‘sensed’ that there was something wrong. But you see, sensing isn’t knowing. One hears things which make one feel uncomfortable, without being able to put one’s finger on anything specific. It’s almost an atmosphere — a way people talk, their conduct, or perhaps their gestures or even just their tone of voice. It is so subtle. How can one explain it to anyone who hasn’t experienced that time, those small first doubts, that kind of unease, for want of a better word? We couldn’t have found words to explain what we felt was wrong. But to find out, to look for an explanation for that… that ‘hunch’, well, that would have been very dangerous… One did know very early on that there were dangers in knowledge.

    (Sereny 1996, 458; my emphasis, as taken from Thomas S. Kubarych, Self-Deception and Peck’s Analysis of Evil)

  102. Funny thing happened while adjusting global temperatures—e.g.,

    These energy-deprived people [in India, Africa and elsewhere around the globe] do not merely suffer abject poverty. They must burn wood and dung for heating and cooking, which results in debilitating lung diseases that kill a million people every year. They lack refrigeration, safe water and decent hospitals, resulting in virulent intestinal diseases that send almost two million people to their graves annually. The vast majority of these victims are women and children.

    The energy deprivation is due in large part to unrelenting, aggressive, deceitful eco-activist campaigns against coal-fired power plants, natural gas-fueled turbines, and nuclear and hydroelectric facilities in India, Ghana, South Africa, Uganda and elsewhere. The Obama Administration joined Big Green in refusing to support loans for these critically needed projects, citing climate change and other claims.

    ~Paul Driessen

    • A fan of *MORE* discourse

      Wagathon emits the usual “rollin` coal” clouds of anti-science propaganda.

      Ain’t yah got the memo, wagathon?

      Solar has won.
      Even if coal were free to burn,
      power stations couldn’t compete

      Last week, for the first time in memory, the wholesale price of electricity in Queensland fell into negative territory – in the middle of the day … largely because of the influence of one of the newest, biggest power stations in the state – rooftop solar.

      Get checks from utilities? No more writing checks?

      *EVERYONE* likes *THAT* energy-economy, eh Climate Etc readers!

      Good on `yah, Green Energy!


      • Imagine living under conditions endured by impoverished, malnourished, diseased Indians and Africans whose life expectancy is 49 to 59 years. And then dare to object to their pleas and aspirations, especially on the basis of “dangerous manmade global warming” speculation and GIGO computer models.

        ~Paul Driessen

      • “few coal generators in Australia made a profit last year”

        And when they go out of business solar will take over. At night. Right?

        Solar subsidies are killing off baseline = blackouts.

      • A fan of *MORE* discourse

        sunshinehours1 foresees libertarianism’s demise  “Solar subsidies kill-off baseline = blackouts.”

        Charles Koch! Is that *YOU*?

        “Solar power enjoys bipartisan support across the country and any ostensible attack on renewable energy is going to have the effect of showing the attacker’s interests to be misaligned with the American public as a people, the United States as a country, and our future as a planet.”

        Good on `yah, Green Power!


      • “Britain could be at risk of blackouts by next winter, the boss of one of the Big Six energy companies has warned, as old power plants are closed and have not yet been replaced.

        “Keith Anderson said the green levy will force coal-fired plants to close too quickly.

        http://www.dailymail.co.uk/news/article-2520633/Npower-boss-warns-energy-blackouts-NEXT-WINTER-closures-coal-fired-power-plants.html

    • The reality in Queensland is very different. Power bills have doubled in the past few years, chasing industry such as aluminium smelting offshore. The returns on solar installations are not sufficient to pay interest on the costs of installation even at hugely advantageous tariffs – and the bills keep coming in. A far bigger burden falls on those who don’t have solar panels for network costs that are fixed. Coal stations continue to operate in the background – simply shedding load until needed again.

      Australia has gone from having some of the lowest energy prices in the world to having some of the highest. It is an utter disaster for everyone driven by distorting energy subsidies.

  103. Re TOBS, the Schaal and Dale paper describes it in nice layman’s terms. It’s written in the 1970s, before things got political, and it backs up Zeke’s fine post here.
    Here it is: http://journals.ametsoc.org/doi/pdf/10.1175/1520-0450%281977%29016%3C0215%3ATOOTBA%3E2.0.CO%3B2

    • I saw there were papers clearly documenting TOBS in the 1930s, the 1950s and the 1970s.
      Innocent question, since this was a known issue for a very long time, are we certain that we are not now adjusting temp records that were already adjusted for a known issue?
      In other words did a reasonable adjustment get applied twice?

  104. It’s interesting reading Mosh and Zeke’s “good guy-bad guy” routine. The mere fact that there is so much argument about how to measure and adjust temp readings leads me to the conclusion that in spite of all your graphs and codes you do not have a clue. Lots of money involved in trying to make a silk purse out of a sow’s ear. Give me medical research any day of the week. After all the billions spent, all the years researching, nothing has changed when it comes to predicting the weather. Why is nobody in the media questioning how much money is being wasted on futile science? I see medical breakthroughs in the news all the time; meanwhile paper after paper is published in climate science, then disputed, then rehashed, then disputed, and so it goes on and on and on.

    • Steven Mosher

      “It’s interesting reading Mosh and Zeke’s “good guy-bad guy”routine. ”

      Damn give that person a prize.

      I wondered how long it would take.

      Note that everybody who asks a good science question gets a nice answer from zeke.

      object lesson over.

    • Matthew R Marler

      Noelen: in spite of all your graphs and codes you do not have a clue

      Sorry, but on this issue you are clueless.

      I see medical breakthroughs in the news all the time,

      Studies by Ioannidis and others show that 40% of the results published in the medical journals can’t be reproduced. There isn’t the evidence to determine whether climate science on the whole does better, but some cases of climate science analyses have gotten a lot of press.

  105. This answers my question why some of the adjustments make the trend more positive. Thanks, and I look forward to the next posts. I haven’t had time to read all the links or all the comments, but so far, so good.

    • That being said, if I want to know what month was the hottest ever, I’ll still use satellite temp series due to more uniform and consistent sampling – this in spite of satellite and sensor changes.

  106. I think we understand very well what it’s all about. No matter how you dress up the pig… it’s still a Left versus right issue and there are no more useful explanations. Simply take sides and get it over with: allowing what is going on to continue is a vote for racism based on access to energy.

    Poverty, in the sense of deprivation of basic goods and services, in very large part is a result of insufficient access to energy. Access to energy means electricity for our homes, businesses and computers; it means transportation, in the form of automobiles, trains and planes; it means heating in cold weather and cooling in hot weather; it means functioning hospitals and health care facilities; it means mechanized agricultural methods that ameliorate the effects of bad weather and pests; it means access to information; and many other things equally important. Without access to energy, people are trapped in local areas to lead a life of basic subsistence if not periodic hunger and starvation.

    ~Francis J. Menton, Jr. (The Looking Glass World of “Climate Injustice”)

    • A fan of *MORE* discourse

      Wagathon claims it’s simple  “It’s Left versus Right issue and there are no more useful explanations”

      Love yer weblog, waggy!

      `Cuz yer blog makes it real simple to show young scientists  This is what anti-science ideology looks like.

      “The schoolteachers that peddle climate p**n in the nations’ classrooms are **** about their underlying motives and don’t know **** from **** about global warming or what it takes to earn a living in the real world.

      It’s a pleasure to advise Climate Etc readers to carefully and thoughtfully contrast “Waggy-World” with a grown-up world-view.”


      • The Left dissembles while, “even today,” says Menton, “over 1.2 billion people, 20% of the world’s population, lack access to electricity.”

      • A fan of *MORE* discourse

        Wagathon is worried!  “20% of the world’s population, lack access to electricity.”

        Where Solar’s ALREADY is making inroads … well ahead of Old Fossil!


      • Menton says, “Here is the World Bank’s description of what it means to lack access to electricity:

        Without access to energy service, the poor will be deprived of the most basic of human rights and of economic opportunities to improve their standard of living. People cannot access modern hospital services without electricity, or feel relief from sweltering heat. Food cannot be refrigerated and businesses cannot function. Children cannot go to school in rainforests where lighting is required during the day. The list of deprivation goes on.

        “The World Bank,” says Menton, “actually projects that the number of people in Africa without access to electricity will increase, not decrease, between now and 2030!”

      • Access to potable water is a bigger issue. So is access to vaccines, even if the same warming-doubting munchkins within the US think “vaccines are bad for you, m’kay!?”

        It turns out it takes no particular effort to be totally ignorant, just a belief system that overrides one’s intellectual abilities.

  107. ‘Observation and reanalysis-based studies have shown that moist enthalpy (TE) is more sensitive to surface vegetation properties than is air temperature (T). Davey et al. (11) found that over the eastern United States from 1982 to 1997, TE trends were similar or slightly cooler than T trends at predominantly forested and agricultural sites, and significantly warmer at predominantly grassland and shrubland sites. Results from Fall et al. (12) indicate that TE (i) is larger than T in areas with higher physical evaporation and transpiration rates (e.g. deciduous broadleaf forests and croplands) and (ii) shows a stronger relationship than T to vegetation cover, especially during the growing season (biomass increase). These moist enthalpy-related studies confirm previous results showing that changes in vegetation cover, surface moisture and energy fluxes generally lead to significant climatic changes (e.g. 41-43) and responses which can be of a similar magnitude to that projected for future greenhouse gas concentrations (44, 45). Therefore, it is not surprising that TE, which includes both sensible and latent heat, more accurately depicts surface and near-surface heating trends than T does.’

    http://pielkeclimatesci.files.wordpress.com/2011/11/nt-77.pdf

    There is no potential to obtain an artifact free record from surface temperature data. Evaporation varies seasonally – with surface type – decadally and longer for many reasons. Even if it is accepted that ‘adjustments’ provide a better surface T record – it is still far from adequate for climate purposes.

  108. “The alternative to this would be to assume that the original data is accurate”

    No, an alternative would be to question the hypothesis of an ever increasing temperature trend.

    The oil drop experiment of Millikan and Fletcher is illustrative. It is said the original experiment measured the charge of an electron within 1% of its currently accepted value. There is also dispute about the legitimacy of some of Millikan’s tests and whether he massaged the data to fit his hypothesis. It is also known that it took many years and many experiments to refine the value of an electron charge. Richard Feynman argues this is so because scientists were biased to ignore results that differed from the accepted value. Scientists fooled themselves because they were ashamed to report results that were too far outside the “consensus”.

    Unlike the charge of an electron a historic temperature data point can never be remeasured. There will never be another July 25, 1936 in Lincoln Nebraska and the temperature on that date can never be retested. The same applies to any other historic temperature measurement. Lacking proof of instrument or recording error it is hubris beyond comprehension that one would alter the temperature record. To do so is presumptuous. It would only be done to impress one’s bias on the science.

    Yet presumption is exactly the position Zeke and Mosher and others defend. Rather than acknowledge the limitations of their temperature model they alter the data so that the trend will work. That is the key not to be missed. What is most important in this exercise is the trend. Because the trend cannot be questioned. So either the past temperature must be lowered or the present temperature must be increased.

    But the present temperature cannot be changed. Given the ubiquity of current temperature data it would be too much of a lie to change the present. So the past is altered and the trend is protected.

    What we need, in Feynman’s words, are scientists who refuse to allow themselves to be fooled. What is needed are scientists who are not ashamed to trust the data as it is and to refute the consensus when the data does not support it.

    • I’m not sure I understand. Are you saying that after making whatever adjustments they think appropriate, the NCDC people are destroying the original temperature data and thereby making impossible for inquisitive investigators to check or recreate their work? I would agree that doing so would be bad behavior. But I don’t think that’s the reality of the situation.

      • Don Monfort

        He did not say anything about destroying the original data. Nothing. You just made that up. Where do you people come from?

      • ==> “Where do you people come from?”

        Too funny. So much for the notion that “skeptics” don’t doubt that the Earth has warmed and that ACO2 is partially responsible (the only question is the magnitude of the effect).

        Eh?

        But don’t worry. As soon as Judith puts up another post, you can pretend that all these “people” don’t exist.

      • Don Monfort

        Just for fun joshie, can you elaborate on wtf it is you are talking about? You are always finding irony in all the wrong places, runt.

    • “Lacking proof of instrument or recording error it is hubris beyond comprehension that one would alter the temperature record.”

      No-one is altering the temperature record. You will find the data as recorded in the GHCN Daily file. In fact, if you really want authenticity re July 25, 1936 in Lincoln, Nebraska, NOAA will provide you with the handwritten record.

      With TOBS, at least, they are redoing the calculation that goes from the reading of the max-min markers at a particular time (which is the real record) to the calculation of a monthly average. That requires knowledge of diurnal variation, and we now have lots of hourly data, unavailable when it was first done. But the original record is what they work from.

      • Wouldn’t the diurnal variation change with conditions? For example, if there’s a drought, the variation would probably be greater due to less humidity. So using modern records to correct older ones might not work out? Do you agree?

      • Don Monfort

        Do you consider the handwritten stuff the official temperature record, nicky? Is that what they present to the public as the temperature record? The stuff as it was written down? Then why the f does it keep changing? Where do you people come from? You characters are manufacturing straw men out of Dan’s very coherent statement. Why are you compelled to try to make everybody who doesn’t think exactly like you do to look like a dunce?

      • Steven Mosher

        Don, yes, the written records will make it into ISTI as level 0 data.
        part of the public record

      • Don Monfort

        Steven, nobody here is claiming that they don’t keep the original data in a file somewhere. Can we at least get that straight?

      • Steven Mosher

        Sure Don.

        But some guys seem to be insisting that every monthly pronouncement of temperature

        A) be done in a color scheme to their liking
        B) be annotated with every detail for every calculation done.
        C) be ISO9000.

        You know, even in business we let people report topline numbers with pointers to the entire justification.

        Having lost the science battle, guys are shifting to the PR frame.

        That’s ok, as long as they clearly state “the record is good”; now let’s talk about the presentation.

      • Don Monfort

        Steven, we have been talking about Dan W’s comment. He didn’t say anything about color schemes, or destroying data. Can we agree on those two things, at least?

        Why not address what Dan actually said. I will repeat what I wrote below:

        “What Dan is talking about is the fact that the data that is revealed to the public and trumpeted on the 6 o’clock news is the adjusted data. The warmed over data. The hot stuff. Please explain how July 1936 was the warmest month for a long time, then it wasn’t, then it was again. And when it was again, it was done very quietly.”

        Are we talking misplaced decimal points and station moves? How many decades does it take to figure that out?

        You know in business, if a company keeps adjusting prior years’ financial data, investors assume incompetence and/or dishonesty.

      • I like your handwritten record.

        I noticed that NONE of the Tmax values recorded during the month were duplicated from one day to the next.

        Since the entire basis for making TOBS adjustments to reduce the Tmax of temperatures recorded back in years like 1934 is that a very high reading might get double counted for two days due to recording temps in the afternoon, doesn’t the lack of a single double-value utterly refute the rationale?

        If there are no duplicate values, there is no need for a TOBS adjustment to “correct” the data.

      • “doesn’t the lack of a single double-value utterly refute the rationale?”
        No. “Double counting” simply means that two max readings were taken on the same afternoon. They won’t usually be the same. On hot Monday, the max was at 3pm, but it was still warm enough at 5 pm to be the Tuesday max.
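        A toy hourly example of that carry-over (my own illustration with made-up numbers, not NCDC’s TOBS procedure) shows how an afternoon reset lets one hot day set two daily maxima:

```python
# Toy illustration of time-of-observation double counting, not NCDC's method.
# A min/max thermometer reset in the late afternoon can let a hot afternoon
# "carry over" into the next observation day's maximum.
import numpy as np

hours = np.arange(48)                                    # two days of hourly temps
temps = 20 + 8 * np.sin((hours % 24 - 9) * np.pi / 12)  # diurnal cycle, peak mid-afternoon
temps[:24] += 6                                          # Monday is a hot day; Tuesday is cooler

def daily_maxes(series, reset_hour):
    """Max for each 'observation day', defined by when the thermometer is reset."""
    cuts = [i for i, h in enumerate(hours) if h % 24 == reset_hour]
    segments = np.split(series, cuts)
    return [round(float(seg.max()), 1) for seg in segments if len(seg) >= 12]  # skip stubs

print("midnight reset:", daily_maxes(temps, 0))    # [34.0, 28.0] -> hot Monday, cooler Tuesday
print("5 pm reset    :", daily_maxes(temps, 17))   # Monday evening warmth also sets "Tuesday's" max
```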

      • Steven Mosher

        Don you want me to defend stupid pr in a post about math methods?

        Let me stipulate. Every public statement about warmest month is stupid.

        Now back to science

      • Don Monfort

        Steven,

        “Please explain how July 1936 was the warmest month for a long time, then it wasn’t, then it was again.”

        It’s about math methods. It’s also about credibility. Does BEST know which month is the warmest on record?

        Please note that I promise I am not talking about destroying data, color schemes, the price of potatoes…

      • Don Monfort

        Zeke,

        Can you help Steven?

        Please explain how July 1936 was the warmest month for a long time, then it wasn’t, then it was again.

        It’s about math methods. It’s also about credibility. Does BEST know which month is the warmest on record?

      • Steven Mosher

        “Steven,

        “Please explain how July 1936 was the warmest month for a long time, then it wasn’t, then it was again.”

        1. You won’t find us saying anything about July 1936 being the warmest month.
        2. If you are asking about NOAA, then ask NOAA.
        3. I try to spend as little time as I can wondering about or trying to explain why NOAA does some of the things they do.

        In general the warmest month will change because it’s an estimate.
        It’s not scientifically interesting to me. If you are interested, then waste your time on it, not mine.

    • Bill Illis

      We are going to be having the same argument for another 9 decades.

      They need to ramp up the adjustments from the current +0.9F (surprising that number hasn’t come up so far) …

      … to +5.8F in the next 86 years in order to meet the projected temperatures.

      Did you know that there was no corn crop in Minnesota in 1903? It was too cold to reach full maturity.

      • Bill Illis wrote:

        They need to ramp up the adjustments from the current +0.9F (surprising that number hasn’t come up so far) …

        … to +5.8F in the next 86 years in order to meet the projected temperatures.

        Alternatively, the physics behind the projections is sufficiently correct and the forcings by the real world climate drivers don’t deviate too much from what has been prescribed for the model simulations.

      • David Springer

        “Alternatively, the physics behind the projections is sufficiently correct”

        Good one! LOL

        Far more likely, technology progresses to the point where we can reverse any deleterious effect in the unlikely event it should arise.

      • Matthew R Marler

        Jan P. Perlwitz: Alternatively, the physics behind the projections is sufficiently correct and the forcings by the real world climate drivers don’t deviate too much from what has been prescribed for the model simulations.

        That is definitely one of the alternatives. It can be discussed independently of whether the temperature record, with the best possible statistical analysis, is a reliable estimate of climate change.

    • Steven Mosher

      ” Lacking proof of instrument or recording error it is hubris beyond comprehension that one would alter the temperature record. To do so is presumptuous. It would only be done to impress one’s bias on the science.”

      Nobody alters the record.
      It is still there.

      However, we answer the following question.

      What is your best estimate of the temperature on July 16, 1936 in Grand Rapids, Michigan?

      Suppose you look at that record

      It says:
      July 15: 15C
      July 16: 155C
      July 17: 15C

      Oops. How does the official record show 155C? Oops, they moved the decimal point.

      So, we offer a “corrected’ dataset. One that provides an estimate, our best estimate of what actually should have been recorded.

      Or suppose the station moved from 50 meters above the ground to ground level.

      You can estimate the effect of that as well.

      So, the record is there. intact.

      When you want to do monthly averages, when you want an accurate prediction of what should have been recorded then you do an estimate.

      When people do these estimates, they call the data “adjusted”.
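      As a minimal illustration of that kind of gross-error check (a toy sketch of my own, not NCDC’s actual quality-control code), a simple plausibility range catches the misplaced decimal at once:

```python
# Toy gross-error check, in the spirit of the 155C example above; not
# NCDC's actual quality-control procedure.
raw = {"1936-07-15": 15.0, "1936-07-16": 155.0, "1936-07-17": 15.0}

def flag_impossible(obs, lo=-90.0, hi=60.0):
    """Flag values outside any physically plausible surface air temperature (deg C)."""
    return {day: t for day, t in obs.items() if not (lo <= t <= hi)}

print(flag_impossible(raw))   # {'1936-07-16': 155.0} -> candidate misplaced decimal (15.5?)
```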

      • Don Monfort

        “Nobody alters the record.
        It is still there.”

        We know that. Can’t we at least get that straight? What Dan is talking about is the fact that the data that is revealed to the public and trumpeted on the 6 o’clock news is the adjusted data. The warmed over data. The hot stuff. Please explain how July 1936 was the warmest month for a long time, then it wasn’t, then it was again. And when it was again, it was done very quietly.

      • “So, we offer a “corrected’ dataset. One that provides an estimate”

        That is not what it is being sold as.
        It is being sold as the “global temperature” not an “estimate” of global temperature. Climate “science” is the only field I know of where it is acceptable to “estimate” data. If you don’t believe the data you should discard it. Aside from the issue that the data is now compromised, you have lost credibility. Once you justify “estimating” data one has to wonder what else you are “estimating” (perhaps with good intentions). One poor habit leads to another poor habit.
        Correct me if I am wrong but in another thread I believe it was argued that it doesn’t change the result. You are defending doing something that theoretically doesn’t change the result. Occam’s Razor argues against “estimating.”

      • “trumpeted on the 6 o’clock news is the adjusted data”
        Hardly ever. Today’s temps aren’t adjusted. Nor if they say “hottest day for city X”. What is likely adjusted is a monthly or annual average. That isn’t raw data. It’s a calculation. People figure out more stuff and re-do calculations.

  109. WordPress: My comments just disappeared for no reason before I could post them.

    So I can now only summarise what they were. Figure 8 would indicate a drop in Tav in 1940, whereas Figure 1 shows Tav increased, as indeed do all other records. This requires explanation.

  110. Whoever is careless with the truth in small matters cannot be trusted with important matters.
    Albert Einstein.
    ___________________

    Albert E. summed up quite nicely how a lot of us out here in the climate-interested, street-level public feel: we no longer have any trust left in the “we are the experts and therefore don’t need to answer questions from those low level ignorants” posturing of climate science advocates, nor believe most of what they try and push as the “adjusted out of its cotton picking mind”, so-called “climate science”.

    • First, bogus accusations against the scientists, against singled-out individuals or in general, are made; then those accusations are used as a pretext to dismiss the science.

  111. At the risk of being damned as a conspiracy theorist, may I just ask if there is an easy way to observe the ‘raw’ data, as in handwritten records and the digitized ‘raw’ data we have come to know and love?

    • Hi DocMartyn,

      The new International Surface Temperature Initiative is trying to archive photocopies of all the original handwritten records that they can get their hands on. It’s a slow process, though, as there are literally millions of pages of logs.

      • Zeke, can you do me a favor?
        I went to Best and looked up Portland, Oregon
        Berkeley ID#: 174154
        % Primary Name: PORTLAND PORTLAND-TROUTDALE A
        % Record Type: TAVG
        % Country: United States
        % State: OR
        % Latitude: 45.55412 +/- 0.02088
        % Longitude: -122.39996 +/- 0.01671

        http://berkeleyearth.lbl.gov/auto/Stations/TAVG/Text/174154-TAVG-Data.txt

        Then looked at the same station’s written records, for 1950.

        http://www.ncdc.noaa.gov/IPS/lcd/lcd.html?_page=1&state=OR&stationID=24229&_target2=Next+%3E

        The numbers for the monthly average in the official record (in F) do not match the Berkeley Earth database after converting with (F - 32)*(5/9).

        Am I doing something very stupid here?

      • David Springer

        I spot checked February 1950. Reporting station 38.8F vs. BEST 38.1.

        What you’re doing wrong is expecting unpaid amateurs to produce good working code.

      • David Springer

        Seriously, that’s about right, Doc. REAL raw data from 1950 found on station records is cooled by 0.7F by TOBS and SHAP (or equivalent) before BEST presents it as raw data. What BEST calls raw data is not what you thought it was.

        Isn’t that just precious? Now you know.

      • David, I want to know if the stations are the same. If they are the same we can then investigate why the first station and first year I examined have difference ‘raw data’.

      • David Springer

        They’re the same station, Portland Troutdale Airport. The old NCDC report mentions in the notes that it’s the airport, and that the 2nd of February had the lowest temperature recorded since its establishment in 1940.

        http://www1.ncdc.noaa.gov/pub/orders/IPS/IPS-BD3C4874-3F13-4A16-B1DF-CFFA20B93FB5.pdf

      • David Springer

        I already told you why the data is different. What you consider RAW isn’t what BEST considers RAW. Funny Mosher didn’t mention that when he was ranting about RAW vs. EXPECTED huh? I’m pretty sure he knows RAW is not the figures taken straight off the printed page turned in from the weather station.

      • Steven Mosher

        Doc, they are different stations.

        The written record for the FSO is an hourly station.
        TOBS won’t apply as Springer surmises.

      • Steven Mosher

        “I already told you why the data is different. What you consider RAW isn’t what BEST considers RAW. Funny Mosher didn’t mention that when he was ranting about RAW vs. EXPECTED huh? I’m pretty sure he knows RAW is not the figures taken straight off the printed page turned in from the weather station.

        ########################

        1. The raw records we consider are those that are in data files, not photocopies of written records.
        2. If you have a way of reading in PDFs and sucking out the numbers reliably, then knock yourself out.

        3. The records Doc pointed to are just ONE of multiple records for that location, apparently an hourly FSO. Wait for the series of posts that explains how all the various sources are prioritized (hint: how do you merge hourly and daily data?).

        Knock yourself out. When you find the problem report it.

      • Steven Mosher

        Doc.

        First you have to determine WHICH Portland station you are talking about

        One Berkeley station you linked to was TROUTDALE airport.

        Your written record is not for TROUTDALE.

        There are multiple Portland stations, and multiple sources for each station.

        http://berkeleyearth.lbl.gov/auto/Stations/TAVG/Text/164883-TAVG-Data.txt

      • David Springer

        The 1950 scanned report is for Portland-Troutdale. Don’t make up mistruths.

        The problem is that your system is a big steaming heap of spaghetti code and the 1950 Troutdale data is one strand of it and you can’t phucking follow one strand because of the mess you made. Amateur.

  112. Speaking of USCRN …
    From the article:
    Government Data Show U.S. in Decade-Long Cooling

    The National Oceanic and Atmospheric Administration’s most accurate, up-to-date temperature data confirm the United States has been cooling for at least the past decade. The NOAA temperature data are driving a stake through the heart of alarmists claiming accelerating global warming.

    Responding to widespread criticism that its temperature station readings were corrupted by poor siting issues and suspect adjustments, NOAA established a network of 114 pristinely sited temperature stations spread out fairly uniformly throughout the United States. Because the network, known as the U.S. Climate Reference Network (USCRN), is so uniformly and pristinely situated, the temperature data require no adjustments to provide an accurate nationwide temperature record. USCRN began compiling temperature data in January 2005. Now, nearly a decade later, NOAA has finally made the USCRN temperature readings available.

    http://www.forbes.com/sites/jamestaylor/2014/06/25/government-data-show-u-s-in-decade-long-cooling/

    • jim2 wrote:

      The National Oceanic and Atmospheric Administration’s most accurate, up-to-date temperature data confirm the United States has been cooling for at least the past decade. The NOAA temperature data are driving a stake through the heart of alarmists claiming accelerating global warming.

      Apparently, the author of this piece of text doesn’t know that the contiguous US covers only about 1.5% of the globe. He probably also has never heard of statistical significance and things like that. I am not surprised about these kinds of statements considering where they are coming from, though. Forbes and Heartland are such reliable sources for arguments about these topics. It couldn’t be less based on science.

      • Jan – What??? From the quote on this page …
        The National Oceanic and Atmospheric Administration’s most accurate, up-to-date temperature data confirm the United States has been cooling for at least the past decade.

      • If you aren’t careful, Jan, you’ll get moshered for unscientific speculation about the author.

      • jim2, my comment is to be understood also with the following sentence I quoted:

        “The NOAA temperature data are driving a stake through the heart of alarmists claiming accelerating global warming.”

        This statement is a non-sequitur. It doesn’t follow from the alleged cooling (“alleged” because the “cooling” is not statistically significant) of the United States, because of the small area of the whole globe that is covered by the United States.

        jim2 also wrote:

        If you aren’t careful, Jan, you’ll get moshered for unscientific speculation about the author.

        I am not worried if I make an “unscientific speculation” at this place here. This is just an opinion blog, isn’t it?

        The alternative to my previous “speculation” would be that Taylor of the Heartland Institute is deliberately talking nonsense. This could be a valid explanation, also. Or a combination of both.

      • jim2, this amused me:

        If you aren’t careful, Jan, you’ll get moshered for unscientific speculation about the author.

        As Steven Mosher apparently feels perfectly comfortable calling me a liar. Earlier this year, he said “no skeptic has seen fit to test the hypothesis” the UHI effect “MUST make its way into the global average.” I pointed out I had done so. That’s when he said:

        knowing Brandon,I would say he is lying.

        and if he is asked to publish his test he will quicky cobble something together and back date it.

        I’m not sure how Mosher justifies portraying me as completely dishonest based upon nothing but his “knowledge” of me. He had far less of a case than anyone claiming dishonesty on this topic has.

      • Is this the same person who earlier wrote
        “First, bogus accusations against the scientists, singled-out ones or in general, are made; then those accusations are used as a pretext to dismiss the science.”

      • Yes, and? Have I made any bogus accusations against any scientists who published results from scientific research I didn’t like? No. Then I also can’t have used this as pretext to dismiss any science.

        As for Taylor of the Heartland Institute: so you tell me, is it what I wrote first, or the alternative, that he was deliberately talking nonsense? Which one is it?

      • “This statement is a non-sequitur.”
        Agreed. While I’m happy with the temperature results, their actual weight has many contexts to consider.
        Also, so many stakes have been driven without effect, I’ve heard that phrase many times, the hypothesis has been falsified, it’s not a vampire.

      • Jan – more from that article:
        Second, for those who may point out U.S. temperatures do not equate to global temperatures, the USCRN data are entirely consistent with – and indeed lend additional evidentiary support for – the global warming stagnation of the past 17-plus years. While objective temperature data show there has been no global warming since sometime last century, the USCRN data confirm this ongoing stagnation in the United States, also.

      • jim2, the global data sets would be at least the proper basis for claims about global warming indeed.

        You quote following claim by Taylor:

        While objective temperature data show there has been no global warming since sometime last century,

        Based on what data specifically is this claim made about “no global warming”? (At the surface?) These are the surface temperature trends in K per decade (with 2-sigma intervals) since 1997 (using 17-plus years as claimed by Taylor):

        GISSTEMP: plus 0.078+/-0.114
        NOAA: plus 0.05+/-0.104
        HadCRUT4: plus 0.05+/-0.108
        Berkeley: plus 0.084+/-0.113 (ends in 2013)
        HadCRUT4 krig v2: plus 0.106+/-0.119
        HadCRUT4 hybrid v2: plus 0.117+/-0.133
        (http://www.ysbl.york.ac.uk/~cowtan/applets/trend/trend.html)

        What is the scientific reasoning used for the assertion that these data showed there wasn’t any global warming at the surface? Or what is the scientific reasoning for the claim, based on these data, that there was a “pause”?
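        For readers who want to see how numbers like those above arise, here is a minimal sketch of an ordinary least-squares trend with a 2-sigma interval. The anomaly series is invented for illustration only, and a plain OLS interval is narrower than the figures quoted above because the linked trend calculator also accounts for autocorrelation:

            import numpy as np

            # Hypothetical annual global anomalies for 1997-2013 (deg C); not real data
            years = np.arange(1997, 2014)
            rng = np.random.default_rng(0)
            anoms = 0.30 + 0.008 * (years - 1997) + rng.normal(0, 0.09, years.size)

            # Ordinary least-squares slope and its standard error
            slope, intercept = np.polyfit(years, anoms, 1)
            resid = anoms - (slope * years + intercept)
            se = np.sqrt((resid**2).sum() / (years.size - 2) / ((years - years.mean())**2).sum())

            print(f"trend = {slope*10:+.3f} +/- {2*se*10:.3f} C/decade (2 sigma, no autocorrelation correction)")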

  113. Wouldn’t it be interesting to compare the ten-year temperature record of the 114 pristinely sited stations with all the other temperature stations, and then use those that correlate closely to check out the earlier period, a decade before, to see whether it was non-cooling or warming? Seeing as you seem to have a quality-control benchmark here?
    jest-a-serf.

    • Yes, that would be VERY interesting. Just use the highest-quality, pristinely sited stations, even if it’s just 10 or 20 of them (the more the better, of course), but as evenly distributed as possible. No adjustments at all, not even gridding (it’s just a temperature index after all).

  114. Steven and others. Regarding “assuming good faith”. The fact that people like myself find the word of the consensus difficult to accept can be traced to the actions of a small but influential group of people. When Steve McIntyre, Judith Curry, the Pielkes are derided, the same kind of questions you are asking sunshine guy come to mind. People will assume good faith when people like Mike Mann and Jim Hansen show a bit themselves. There are wingnuts on both sides of this debate. There are others, yourself and Nick Stokes on the consensus side, Anthony and Steve M. on the skeptic side who engage and look first and argue when they are right. That being said, to assume that we have not been given ample cause for assuming bad faith is to be willfully obtuse. I was once neutral in this debate. A visit and question at Realclimate put me on the path of the skeptic. Reading the posts of your telescope friend and others like him keep me on that path. Not for the content, but for the tone. Derisive, condescending, poorly argued and lacking any assumption of good faith.

    • Well said. Same experience.
      The only redeeming thing I get from some of these guys is that, as a self-acknowledged poor communicator, at least I know there are some in the world orders of magnitude worse! Comforting.
      Zeke is an excellent communicator.

      • Neckels communicates that Claes Johnson should be listened to. Kind of bizarre considering Claes is a lead skyyydragon.

      • WebHubTelescope | July 8, 2014 at 7:48 pm |
        Neckels communicates that Claes Johnson should be listened to. Kind of bizarre considering Claes is a lead skyyydragon.

        Telescope dude. Thanks for showing up! Could not have forged a better reply from you to illustrate your tendency to be “Derisive, condescending, poorly argued and lacking any assumption of good faith.” Assuming that “Zeke” and “Claes” are one person. Please explain for the class how Zeke, being a “skyyydragon” detracts from Nickels assertion that Zeke is an excellent communicator. Indeed. Where does he assert that Zeke / Claes should be listened to?

      • Assuming that “Zeke” and “Claes” are one person.
        Pretty sure they are not.
        http://www.yaleclimateconnections.org/author/zhausfather/

      • Claes had good references for a posteriori error estimation, which is a useful tool for exploring the computability of diffeqs. I don’t know much about skyyyyypedragon ’cause I’m waiting till the third movie comes out so I can see them all together.

      • Oh, that must be a different movie then, I was thinking about the one with the dwarf king.

    • yes, one of the main things that got my attention and keeps my light skepticism going (skepticism mainly about uncertainties being understated, plus thinking there is some risk of massive group-think having led the CC science seriously astray) is seeing how some prominent AGW scientists circle the wagons and trash folks like Curry, the Pielkes, etc. It is a huge red flag for the two concerns I mentioned (there are other concerns too).

    • Steven Mosher

      you have no cause to assume bad faith. none.
      the original TOBS work was done in 1986.
      the first skeptic to look at it vindicated the work.
      then others attacked.
      Another verification was done. successful
      then more attacks… basically people saying “I don’t understand”.
      More explanation,
      more verification.

      There is bad faith. but not by the guys who did the work in 1986.
      the bad faith is here and now. practiced by the likes of you

  115. One day all these suspect, ill-advised and, it is suspected, mainly spurious “adjustments” to the global and national temperature records, and all the arguments going on about them, will be looked upon as today’s equivalent in stupidity of the medieval thesis of “How many angels can dance on the head of a pin?”.

    Just give us, the public, the real as-measured data, warts and all, right from the very beginning of the records, and let those of us who are interested sort through it and make of it what we will.
    The other grossly “adjusted” data you climate scientists can go and play with, and keep right on adjusting to your heart’s content, or at least until the money runs out.
    Given human nature over that hundred-plus years of records, it is highly likely that all the foibles and faults of those thousands of observers and their measuring equipment will, through sheer numbers and bulk, have about evened out to a neutral point around which the real actual temperature will be centered.
    Consequently it would not then need anything in the way of the very doubtful, unverifiable and unprovable-for-accuracy adjustments so beloved of the climate data manipulators.

    Of course, if the actual recorded-at-the-time temperature data were released as a full database with official blessing, it would probably be only a matter of quite a short time before the climate-interested public and politicians decided that all that morphed-out-of-reality, adjusted data those scientists were playing with on their play stations wasn’t really needed, as it bore no resemblance to reality and had no perceptible impact or effect on society, and that their funding should and would consequently cease.

    The CET, to my knowledge, has not been “adjusted” in any way (we hope), yet it is accepted as a reasonable proxy for the temperatures in central England from 1659 to today.

    So why all those, from the public viewpoint, mainly illogical “adjustments”, particularly to any temperature recordings more than about 5 years old?

    What was recorded was recorded to the best of the abilities of those many thousands of usually conscientious observers over those tens of decades past.
    Just leave it at that and use that original data, warts and all, and I suspect that the true average from all those past temperature data recordings will one day be found to be much closer to the reality of those past temperatures than all those ever so clever and highly sophisticated and totally useless temperature adjustments, which seemingly are only of use for the dis-information dissemination so beloved of today’s catastrophe advocating climate scientists.

    • Dear ROM,

      I recommend that you read Manley’s original CET papers. There are two of them and they’re both available free from the Wikipedia page on their author:

      http://en.wikipedia.org/wiki/Gordon_Manley

      Cheers,
      John

    • ROM

      Manley’s papers are included in my article that compared CET to the historic reconstruction by Mann and Lamb and which included my own to 1538

      http://judithcurry.com/2011/12/01/the-long-slow-thaw/

      Last year I was at the Met Office and saw David Parker, who created the CET record from 1772. The latter relies on daily instrumental readings. Manley’s are monthly ones and also rely on other data, which to my mind makes it stronger, as it helps to correlate the record.

      By themselves instrumental readings can have lots of flaws which range from observer error to instrumental error to the basic problem that very often the maximum and minimum temperatures were not captured.

      The Manley record has somewhat fallen out of favour, not the least because of the astonishing rise in temperatures from 1695 to 1739 when they were brought to a screeching halt by a very severe winter.

      Many people have looked at this period including Phil Jones who admitted that it had caused him to believe that natural variability was much greater than he had previously thought.

      We must bear in mind, with all reconstructions, Lamb’s wise words that ‘we can understand the tendency but not the precision’.

      We place far too much reliance on believing that data such as is discussed here is of a standard that allows it to be put through the computer and come out the other end as an extremely accurate data base.

      To paraphrase the motto about models

      “All temperatures are wrong but some are useful.’

      I have it straight from the Horses mouth that the historic temperature record is not given much credence these days due to scientific uncertainties and they are somewhat played down.

      tonyb

      • tonyb
        Thank you both for this and for helpful past comments correcting some of my previous misunderstandings.
        Your ID “climatereason” is well chosen.

  116. Kindly tell us:
    1. Who initiated this project?
    2. When did it start?
    3. Who is doing the adjustments?
    4. What are their qualifications?
    5. What justification has been documented?
    6. How does this change conclusions drawn from the previous curve?
    7. What official organizations are involved?
    8. Are they the only organizations involved?
    9. Are any of them global warming advocacy groups?
    10. Where is data about before-and-after temperatures available?
    11. What peer-reviewed articles have appeared?
    12. What other sources of publication exist?
    13. Cite author affiliations for all of the above.
    14. In 200 words, explain the purpose of this project.

  117. The TOBS issue seems to have been the reason Watts pulled his paper at the last minute two years ago. Watts doesn’t believe in the TOBS adjustment, being skeptical that the official recording times were the actual recording times, so that a systematic adjustment assuming those times can’t be made. However, McIntyre was added to Watts’s paper and held it up to do the due diligence with TOBS that he viewed as necessary. I only get this part of the story from here, which ends with McIntyre going off to check Watts’s results using TOBS two years ago. Not sure how that ended up or how it is today between McIntyre and Watts on TOBS, but the paper is still in limbo.
    http://climateaudit.org/2012/07/31/surface-stations/

    • McIntyre has recently gone on Paul Homewood’s page and said things to try to lead him in the right direction on TOBS ending with
      ” I got results that look similar to those shown in the NOAA graphic. I think that most of your comments and allegations are incorrect and should be withdrawn.”
      Homewood you may recall is one of those trying to criticize the NOAA temperature record.
      You can read through McIntyre’s comments to Homewood here
      http://notalotofpeopleknowthat.wordpress.com/2014/07/01/temperature-adjustments-in-alabama-2/
      Other comments include
      “I certainly do not think that the evidence that you have adduced warrants the somewhat overheated rhetoric on this topic and urge you to dial back your language.”
      This is fairly strong coming from McIntyre, and these show his thinking within the last 4 days. A lot of skeptics listen to what McIntyre says, so maybe they will take this into account.

    • Steven Mosher

      Jim

      Yes, it was seen as a major gaffe when they forgot to use the TOBS data.

      Fixing it was a one-day job: just switch datasets.

      Instead, Anthony, who doesn’t trust the metadata (except when he does), decided to use only those stations that have no TOBS change (trusting the data he doesn’t trust).

      We will see what results come out.

  118. Judith, I and others I’m sure would like to do a more formal rebuttal of Zeke’s approach if allowed and only if well written and argued.
    Mine would focus on 3 key points.
    The first of adjustment of past temperatures from current ones.
    The second of a possible flaw in TOBS as used.
    The third on the number of what are referred to as Zombie stations.
    1. Zeke says this is incremental and unavoidable using current temperatures as the best guide and adjusting backwards.
    “NCDC assumes that the current set of instruments recording temperature is accurate, so any time of observation changes or PHA-adjustments are done relative to current temperatures. Because breakpoints [TOBS] are detected through pair-wise comparisons, new data coming in may SLIGHTLY change the magnitude of recent adjustments by providing a more comprehensive difference series between neighboring stations.

    When breakpoints are removed, the entire record prior to the breakpoint is adjusted up or down depending on the size and direction of the breakpoint. This means that slight modifications of recent breakpoints

    The incremental changes add up to WHOPPING changes of over 1.5 degrees over 100 years to past records and 1.0 degree to 1930 records. Zeke says the TOBS changes at the actual times are only in the range of 0.2 to 0.25 degrees. This would mean a cumulative change of 1.3 degrees colder in the distant past on his figures, everywhere.
    Note he is only technically right to say this “will impact all past temperatures at the station in question through a constant offset.”
    But he is not changing the past by 0.2 degrees. It alters all the past TOBS changes, which cause the massive up-to-1.5-degree change in only 100 years.

    • Steven Mosher

      “The first of adjustment of past temperatures from current ones.
      The second of a possible flaw in TOBS as used.
      The third on the number of what are referred to a Zombie stations”

      I would suggest 3 different papers.

      Start with TOBS.

      Focus is better.

  119. Judith, I and others I’m sure would like to do a more formal rebuttal of Zeke’s approach if allowed and only if well written and argued.
    Mine would focus on 3 key points.
    The first of adjustment of past temperatures from current ones.
    The second of a possible flaw in TOBS as used.
    The third on the number of what are referred to as Zombie stations.
    2. TOBS and break adjustments are made on stations which do not have data taken at the correct time.
    The process is automated in the PHA.
    Infilling is done on stations missing data, i.e. not at the correct time. Zombie stations have made-up data, i.e. not at the correct time.
    This means that potentially half the 1218 stations, the zombie ones and the ones missing data, have an automatic cooling of the past done every day, with the result of compounding the altered levels of past temperatures.
    This should not be allowed to happen.
    Once a TOBS change has originally been made in the past, e.g. 1900 should have been 0.2 warmer, then this altered estimate should stay forever and not be affected by future changes.

  120. The Models are wrong and the stop/pause is the cause. How many statistical angels can dance on the head of a pin? Man made CO2 will not cause cAGW, no matter how you re-arrange those statistical chairs on the cAGW Titanic’s deck.

    • Cult members are reassuring themselves by an endless repetition of their talking points to each other (like “the models are wrong”, “stop/pause”, “fraud/lies”).

      • Jan which of the CAGW are you defending since 1988? Or are you arguing something else?

  121. Finally got the TOB papers. Concepts make sense. A lot of loosey-goosey quadrature is a concern, which could add pernicious bias, but I can’t say which way they might go. More study warranted….

    • And the next question will be how the 1958-64 training data performs through all years.
      By quadrature I’m referencing the tendency to ‘average at will’ without much consideration of the error introduced…
      If I understand right, with the fit done, the TOB adjustment is just a simple function? Is it available? Unless the calendar stuff is a pain….

  122. Wow, the number of oxen this post has Gored. 500 comments in 12 hours.

  123. The post comes at a good time. Regardless of where you stand — skeptic or convinced — we all need to see the raw data.

    I imagine a useful table would contain fields like:
    station_id,
    lat,
    long,
    elevation,
    created_timestamp,
    modified_ts
    temperature_reading_ts
    temperature,

    And another table for adjustments:
    created_ts
    adjusted_temp,
    adjustment_reason_code,
    adjustment_reason_text,
    station_id

    Maybe someone with access can populate a SQL db and share it (a minimal schema sketch follows below), or dump a flat file of every station reading ever taken and stick it on DropBox.

    The bad news is there could be millions of records. The good news is it’s not 1980 anymore and we have database systems that process records in the billions.

    When we have the data, we can easily do clever things like produce infographics of a station whose data was adjusted for TOB, how many times it was adjusted, how much it was adjusted, etc.

    I assume the adjustments are entirely necessary because I choose to believe that folks steeped in this stuff every day know what they’re doing. However, I’d still like to see the raw data at the most granular level possible with all applicable metadata.
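    To make the two tables concrete, here is a minimal sketch in Python/SQLite using exactly the fields suggested above; the names and types are just this comment’s suggestion, not anything NCDC or Berkeley actually uses:

        import sqlite3

        conn = sqlite3.connect("station_readings.db")  # hypothetical local file
        conn.executescript("""
        CREATE TABLE IF NOT EXISTS station_reading (
            station_id             TEXT,
            lat                    REAL,
            long                   REAL,
            elevation              REAL,
            created_timestamp      TEXT,
            modified_ts            TEXT,
            temperature_reading_ts TEXT,
            temperature            REAL   -- raw value as recorded
        );
        CREATE TABLE IF NOT EXISTS adjustment (
            created_ts             TEXT,
            adjusted_temp          REAL,
            adjustment_reason_code TEXT,  -- e.g. TOBS, station move, instrument change
            adjustment_reason_text TEXT,
            station_id             TEXT
        );
        """)
        conn.commit()
        conn.close()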

    • I may have found the data:
      http://cdiac.ornl.gov/ftp/ushcn_daily/

      There’s also a slick web interface for pulling data and basic reporting:
      http://cdiac.esd.ornl.gov/epubs/ndp/ushcn/ushcn_map_interface.html

    • I pulled the daily data for Alabama, imported it into Excel and have data for almost every day and every month going back to 1926 up to 2012. Awesome.

      I don’t see the actual vs adjusted temps data, only the end result. If these flat files containing state data were updated daily, storing “yesterday’s” data and checking it against “today’s” data would be easy. Unfortunately the files last mod dates are Feb 27, 2013.

      I’m a climate science novice and more of a listener/reader than a contributor (to the science not to AGW ;-). I encourage anyone interested enough to form an opinion on AGW to pull the source data used to “prove” it in the USA and do some basic analysis on your own.

      • I noticed that all TMAX and TMIN recs for Alabama are rounded to the nearest whole numbers. Why is this? Since we’re talking about climate change anomalies in matters of fractions of degrees, I expect decimal places. What am I not thinking about correctly?

    • Steven Mosher

      go to our site.
      download the data
      or get the links to all 14 sources.

      and for gods sake dont put it in SQL.

      the metadata, ya, we have that in mysql.

      But if you want to put time series with incomplete data into a normalized table be my guest. that’s gunna be ugly.

      • David Springer

        The author imported into Excel. That’s not a database. Not sequel. Get a clue.

      • A flattened warehouse structure that can be sliced and diced by a BI system may work better.

        Thank you for the links!

      • WebHubTelescope


        David Springer | July 10, 2014 at 10:22 am |
        The author imported into Excel. That’s not a database. Not sequel. Get a clue.”

        The guy said he wanted “SQL db”. Springer is wrong on cue.

    • If you follow the URL in my name, I have sql, reports (in csv), and where to get the raw data I used.

  124. Statistics is probably not one of my strong points:-)
    I would suggest the following for the US, UK and other countries with established temperature records:
    1. Provide anomaly graphs of the unadjusted readings, min and max.
    2. Ditto the above for readings taken at the same time of day, for stations which have not been moved and where UHI has little influence.
    3. Ditto the above where UHI has a lot of influence.
    Then take a look?
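    For anyone wanting to try item 1, here is a minimal sketch of turning raw readings into anomalies relative to a fixed base period; the station series is invented for illustration and uses one value per year for simplicity:

        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(1900, 2014)

        # Hypothetical unadjusted station Tmin series, one value per year (deg C)
        tmin = 5.0 + 0.01 * (years - 1900) + rng.normal(0, 0.8, years.size)

        # An anomaly is simply the departure from the mean over a chosen base period
        base = (years >= 1961) & (years <= 1990)
        anomalies = tmin - tmin[base].mean()

        for yr, a in list(zip(years, anomalies))[:5]:
            print(yr, round(a, 2))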

  125. A good post, fair comments by Matthew R Marler, too many repetitive comments from certain parties, eventually I just skipped to the end.

    One post by F Leanme asked about stations in, e.g., Russia. I don’t offer this as scientific evidence, but there was a strange but intense and gripping Russian film in 2010 called “How I ended this summer,” which was set in a very remote two-man weather station. The one “ending” (doing an out-of uni project to finalise his degree) got fed up with battling out in the weather to read instrumental values, and made them up or repeated them. The long-term observer got pretty wild when he found out. Worth a look if you’re interested in variety in weather stations.

    • In my youth I worked casually for a government utility. We were supposed to keep records of public attendance, though nobody ever wanted to define what constituted attendance, nor was it easy to keep count. So we made it up, often days late. Even the conscientious people were making it up, since they had no guidelines.

      We made it up…for years! Yet the records became official figures somewhere and were used in budgets, policies, department and government politics, council split-ups, media releases etc.

      Hey, maybe you were one of the policy makers, Faustino. If so, sorry about that.

    • nottawa rafter

      I am not asserting anything about individual actions, but when any system relies on thousands of individuals to perform a task, the possibility of errors, for whatever reason, has to be taken into consideration. Regardless of how simple the instructions and the task, some humans will screw it up.

  126. The Daily Mail (second highest selling newspaper in the UK) today says:
    “American ‘climate change experts’ have been exposed for fiddling temperature records to make it appear the past was colder than it actually was.”

    • Matthew R Marler

      Paul Matthews: “American ‘climate change experts’ have been exposed for fiddling temperature records to make it appear the past was colder than it actually was.”

      Perhaps now you can understand why the Daily Mail was wrong to publish that.

  127. son of mulder

    “Adjustments have a big effect on temperature trends in the U.S., and a modest effect on global land trends.”

    Why is this? Have not practices changed in other parts of the world, e.g. replacing LiGs with MMTS and changing times of observations? Either way, shouldn’t the rest of the world’s historic data also be reduced by amounts similar to the US, as LiGs and reading practices would have been similar in the past? At least to make US and rest-of-the-world measurements like for like.

    Then, to keep history consistent, wouldn’t the 0.4 deg C reduction be expected to have to ripple back through time to the Middle Ages and earlier, because the calibration of proxies would have been against temperatures under the unreduced regime? Little Ice Age 0.4 deg cooler, MWP 0.4 deg cooler.

    What about adjustments required to the Central England record of a similar type?

    That much cooler history then would have to be squared against the qualitative descriptions of the times. Has that work been done?

    • “Why is this, have not practices changed in other parts of the world eg replacing LIGs with MMTS and changing times of observations?”
      MMTS is relatively small. TOBS is an issue in US because COOPs were mostly volunteers, and if they wanted to change they could (by agreement). Elsewhere observers were mostly employees and observed as directed.

      • son of mulder

        ” Elsewhere observers were mostly employees and observed as directed.”

        Has this been quantified formally, so we know it is less of an issue, as opposed to an assumption? Is it fair to assume, as it looks from the global 5-year smooth graph above, that the global past has been reduced by about 0.25 deg C? And as the US is 5% of the globe, only 0.4/20 = 0.02 deg C of the global reduction is due to the US, and the rest, i.e. 0.23 deg C, is down to the global LiG-to-MMTS change, or are there other reasons why global is down?

      • “And as US is 5% of global only 0.4/20=0.02 deg C of the global reduction is due to US and the rest ie 0.23 deg C is down to global LIG to MMTS or is there other reasons why global is down?.”

        I have numbers on that here. It depends on how you average. On simple average by stations reporting, US can be 20% or more.

        But the key graph is Zeke’s Fig 1. Using unadjusted or adjusted for global gives essentially the same result for the last 50 years or so.

      • son of mulder

        “On simple average by stations reporting, US can be 20% or more.”

        Surely it must be area-weighted in the global picture. So you haven’t answered why the global adjustment down, over 60 years ago, was around 0.25 deg C. What was the cause? Anyone?

  128. Zeke, regardless of how you explain it, the thing that outsiders find hard to accept is that the supposed errors don’t balance out as would be expected from normal data collection and especially of TOBS. Why do all adjustments increase the trend? It makes sense only if the adjusters systematically ignore any warming biases. In fact, due to the well-observed increase in populations around the measuring sites (UHI) we really would expect adjustments to go the other way.

    Hence we don’t get the overall impression that the adjusters are making things better, just warmer: A zero trend for the US has effectively been converted into a warming trend purely by adjustments. Everyone should be naturally skeptical of that! Sure that doesn’t affect the global temperature much but it does influence policy in the US. And then other purely warming adjustments are further added to the global trend. It just smells really bad! Nobody would care if we weren’t jeopardizing our future by ditching old energy sources before we have decent replacements based on these numbers and even more iffy models.

    Like others here I’ll assume good faith again when I hear more critical voices from within the climate community about those in their ranks that just make stuff up, call them irrefutable facts and then denigrate anyone who legitimately disagrees from a pretence of highly dubious moral superiority. When we are truly worried about the cure being far worse than the putative disease, it really grates to be called childish names. When scientists can debate like adults then they might regain my respect. For now though they seem to be the enemies of industrial progress and hence the enemies of prosperity.

    • Steven Mosher

      “Zeke, regardless of how you explain it, the thing that outsiders find hard to accept is that the supposed errors don’t balance out as would be expected from normal data collection and especially of TOBS. ”

      Then you have not listened to the explanation or read the papers.

      The errors WOULD balance out if the Time of observation change were random.

      But the change to TOBS is not random. See figure 3.

      For example, there are 24 hours in the day.
      If the stations had observation times that were uniformly distributed over these 24 hours, and then you changed the TOB randomly, THEN you would expect the biases to sum to zero.

      BUT that is not what you have. See Figure 3.
      If you had, for example, all the stations reporting at NOON, and then they ALL switched to morning, then you DON’T expect the change to sum to zero.

      So the premise you guys have is wrong from the start. The stations’ change in observation time is NOT random. It is highly skewed. As a result the bias will be in one direction, or rather you should not be surprised to see that it tends to be in one direction more than another.
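      A rough way to see this for yourself is to simulate it. The sketch below generates synthetic hourly temperatures (a diurnal cycle plus persistent day-to-day weather) and computes the mean of (Tmax+Tmin)/2 for 24-hour windows ending at different observation hours. All numbers are made up and this is not the NCDC method, but it illustrates why late-afternoon resets tend to run warm and morning resets cool relative to midnight:

          import numpy as np

          rng = np.random.default_rng(0)
          n_days = 3000
          hours = np.arange(n_days * 24)

          # Diurnal cycle warmest mid-afternoon, plus day-to-day "weather" that persists all day
          diurnal = 5.0 * np.sin(2 * np.pi * ((hours % 24) - 9) / 24)
          weather = np.repeat(rng.normal(0, 4, n_days), 24)
          temps = 15.0 + diurnal + weather + rng.normal(0, 0.5, hours.size)

          def mean_minmax(obs_hour):
              """Mean of (Tmax+Tmin)/2 when the observer resets the instrument at obs_hour."""
              usable = (temps.size - obs_hour) // 24 * 24
              windows = temps[obs_hour:obs_hour + usable].reshape(-1, 24)
              return ((windows.max(axis=1) + windows.min(axis=1)) / 2).mean()

          midnight = mean_minmax(0)
          for obs in (7, 9, 17):
              print(f"obs at {obs:02d}:00  bias vs midnight: {mean_minmax(obs) - midnight:+.2f} C")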

  129. Like others here I’ll assume good faith again when I hear more critical voices from within the climate community about those in their ranks that just make stuff up, call them irrefutable facts and then denigrate anyone who legitimately disagrees from a pretence of highly dubious moral superiority.

    Should they lie to agree better with your prejudices?

    It’s funny to see how impossible it is for many to accept the obvious truth about the analysis of instrumental temperature data.

    • I don’t have prejudices, I observe! I was merely explaining why there is continued skepticism despite all the explanations. As for lying, I have observed that many scientists seem to have no difficulty with lying when they assert, without a shred of evidence, supportive modeling or any data, or often even any theory, such things as: extreme weather is getting worse or is linked to CO2; wet areas will get wetter and dry areas will get drier; the ocean swallowed the ‘missing heat’; using a proxy upside down doesn’t matter; the models are still adequate for policy even after such a huge divergence from reality; coral die-back is due to manmade warming rather than fishing; all warming must be bad rather than only bad beyond a certain threshold; etc, etc, etc.

      As for obvious truth. When I have seen every single hockey stick graph turn out to be phoney and every adjustment making the trend warmer then there is no such thing as obvious truth. If you accept there is, despite the reservations (above) of even the people who compile these graphs then you are the prejudiced one here! As I said, none of this trivia would even matter if policy – and generally bad policy at that – was not being based upon it.

      My only particular bugbear here is with the TOBS adjustment, because it makes zero sense and, contrary to some statements made above, even Karl’s paper that this adjustment is derived from admits it is largely guesswork. Rather than guess, I’d have left it alone – especially since it makes little difference to the global temperature anyway! But adding just that adjustment shifts the greater warmth from the 1930s to the present day: quite important then!

      • verytallguy

        My only particular bugbear here is with the TOBS adjustment because it makes zero sense

        Have a read of the following, report back afterwards?

        there is a bias, and it’s a scientific duty to estimate and allow for its effect. The objectors want to say it is zero. That’s an estimate, baseless and bad. We can do much better.

        http://moyhu.blogspot.co.uk/2014/06/tobs-nailed.html

      • Nick did nail that one. I do wish people would attempt to understand the issues, before making baseless accusations of lying. It isn’t a particularly difficult thing to grasp, so this either speaks to intelligence (which I doubt) or a lack of honesty on the part of the accuser (which I do suspect).

      • Steven Mosher

        “I don’t have prejudices, I observe!

        Your first prejudice is your belief that you have none.
        Witness your inability to understand figure 3.

        you DIDNT observe. you are prejudiced into thinking that you do. but you dont actually observe. neither did you understand what was written.
        And I bet you didnt observe the actual data ( made available) or the code.

        you chose to stop observing, before you finished the job. why?
        because you have a prejudice.

  130. Weighting

    I understand that of course if you’re to try and give a global average then you need some degree of weighting if you wish to avoid the bias effects of spatial clustering.

    Some of the gridding methods employed by some groups only work at the grid resolution (such as box averaging) as each cell (box) is victim to an underlying clustering within. Change the box sizes or reference position and you’ll get a different result.

    Other methods employed such as Kriging deal with clustering implicitly and are therefore better at this sort of thing. However, you must first model the underlying structure of the data. Kriging is then applied to the resulting residuals (residual = observation – spatial model prediction) before adding the structural model back into the gridded series. If your spatial model is based on the raw station positions then any benefit from the kriging only applies to the random component! The underlying structural model is still more sensitive toward stations in poorly sampled regions.
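    A toy illustration of the clustering point: with stations piled into one latitude band, a simple mean is pulled toward that band, and a box-averaged answer shifts as the box size changes. All numbers below are synthetic and the method is plain latitude-band averaging, not any group’s actual procedure:

        import numpy as np

        rng = np.random.default_rng(1)

        # Heavily clustered mid-latitude "stations" plus a few spread over the rest of the globe
        lats = np.concatenate([rng.uniform(30, 50, 400), rng.uniform(-90, 90, 40)])
        temps = 30.0 - 0.4 * np.abs(lats) + rng.normal(0, 1, lats.size)  # warm tropics, cold poles

        def box_average(box_deg):
            """Average stations within latitude bands, then area-weight the band means."""
            edges = np.arange(-90, 90 + box_deg, box_deg)
            means, weights = [], []
            for lo, hi in zip(edges[:-1], edges[1:]):
                in_box = (lats >= lo) & (lats < hi)
                if in_box.any():
                    means.append(temps[in_box].mean())
                    # Band area on a sphere is proportional to the difference of sines
                    weights.append(np.sin(np.radians(hi)) - np.sin(np.radians(lo)))
            return np.average(means, weights=weights)

        print("simple station mean:", round(temps.mean(), 2))
        for box in (5, 10, 30, 90):
            print(f"{box}-degree bands:", round(box_average(box), 2))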

  131. peter azlac

    Zeke Hausfather | July 7, 2014 at 11:18 am | says:

    “Ooh, can I try? :-p

    If you average absolutes and the composition of the network is changing over time you will be absolutely wrong because the change in underlying climatology will swamp any signal you are looking for.”

    That is the real problem with BEST and the other series, the composition of the network changes over time but it does not need to. I have seen it stated (Mosher?) that only 26 stations are required to reproduce the BEST and other global temperature series and have certainly seen claims that the CET record five year smooth is a good proxy for global temperature anomalies:
    http://www.metoffice.gov.uk/hadobs/hadcet/ParkerHorton_CET_IJOC_2005.pdf
    CET is made up of only four stations at a time, though over the period 1878 to date there have been seven involved for differing periods – it is difficult to understand why, as Oxford (Radcliffe), Stonyhurst, Ringway, Rothamsted and Ross-on-Wye have continuous records over the complete period, and whilst Ringway may have been removed because of urbanization and development of the airport, Rothamsted, which remains, has also been subject to the same effects. It should be noted that whilst BEST claim no discernible effect of UHI on their record, the UK Met Office acknowledges corrections of up to 1.5 C for this, largely to the minimum temperatures where most global warming is found.
    If CET compiled in this way can be used as a global proxy, then why can’t the sixty-plus equally long-term stations with data from pre-1880 to 2013 from Europe, Russia, China, Japan, Australia, New Zealand, S America and Canada be used to compile a global series without the use of all the statistical manipulations of BEST? Instead we see a number of these long-term series ‘corrected’ with data from adjacent stations with data from only the 1960s onwards. For example, Berlin Dahlem, with continuous data from 1769, and Berlin Templehof, with data from 1701, are ‘corrected’ using data from the airports at Tegel and Schonefeld, dating from 1953/63, where there was heavy military and civilian air traffic, and Alexanderplatz from 1991, all of which introduce a large UHI effect that shows in the ‘corrected’ data as +0.12 C – a serious underestimate of the UHI effect in my opinion, as Templehof (an airport) already shows an increase over Dahlem (semi-rural) of 0.15 C. There are other examples of this type of ‘correction’ with very local data, and the other 10,000 BEST ‘stations/scalpel bits’ only make it worse no matter what statistical tricks are used.
    In the days before ‘post-normal science’, when hypotheses were falsified or not with real empirical data, it was expected that if one wanted to determine a change in some factor – for example the response of corn yields to different rates or types of fertiliser – the test was done on the same soil type in the same years. The same should be true for climate change: we should evaluate the changes in temperature (not anomalies) over time at the same stations and present the data as a spaghetti graph showing any differing trends, and not assume that regional climates or climates in gridded areas are the same – which they are not, as is obvious from the climate zones that exist or the microclimates due to changes in precipitation, land use etc. Most, if not all, of these long-term stations are run by scientists, and metadata must exist showing any changes and the result of such changes, so we do not need to guess or use scalpeling, kriging, homogenisation and certainly not gridding to arrive at a completely useless global value that does not allow any meaningful analysis of responses to solar inputs, ocean cycles etc.
    Note that I am not saying that warming has not taken place, just that it is not global – BEST admits that 30% of the stations have cooled, and that is true of several of these long-term stations – but that we should concentrate on finding a useful set of temperature trends in regional and zonal areas that reflect the impacts of climate change, as for example the Sahel, and understand the true reasons without assuming carbon dioxide to be the culprit.

    • Steven Mosher

      “That is the real problem with BEST and the other series, the composition of the network changes over time but it does not need to. I have seen it stated (Mosher?) that only 26 stations are required to reproduce the BEST and other global temperature series and have certainly seen claims that the CET record five year smooth is a good proxy for global temperature anomalies:”

      err no.

      Shen’s paper on this question suggests that 60 OPTIMALLY placed stations will suffice.

      we dont have 60 optimally placed. But playing around with this over the years you do get good answers at 60, better at 100, even better with 300,
      and so forth.

      So, first start with a definition of what is “good enough”.
      Century trend to +/- 0.1 C? Century trend to +/- 0.15 C?

      Start with your definition of what is “good enough” and then, given the data, the answer can be computed.

      Theoretically (see Shen) it wouldn’t be less than 60.
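      One way to make “good enough” concrete is to simulate it: give every synthetic station the same underlying century-scale signal plus independent station-level noise, average n of them, and see how tightly the recovered trend clusters. This is only a back-of-the-envelope sketch under those simplifying assumptions (real stations are spatially correlated and unevenly placed), not Shen’s analysis:

          import numpy as np

          rng = np.random.default_rng(2)
          years = np.arange(1900, 2001)
          # Shared "true" signal: roughly 0.7 C per century plus interannual variability
          true_anomaly = 0.007 * (years - 1950) + rng.normal(0, 0.15, years.size)

          def trend_spread(n_stations, station_noise=2.0, trials=500):
              """Std dev of recovered century trends when averaging n noisy stations."""
              trends = []
              for _ in range(trials):
                  stations = true_anomaly + rng.normal(0, station_noise, (n_stations, years.size))
                  slope = np.polyfit(years, stations.mean(axis=0), 1)[0]
                  trends.append(slope * 100)  # degrees C per century
              return np.std(trends)

          for n in (10, 60, 300):
              print(f"{n:>3} stations: spread of century trend ~ {trend_spread(n):.3f} C")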

  132. Ian Blanchard

    As a bit of an aside, it is probably worth noting that the US has probably one of the most reliable historical (raw) temperature records available – large country with lots of measurements in rural areas, technologically advanced and reasonably to very wealthy throughout its history (so with good equipment maintenance relative to most other areas) and probably most importantly, no conflicts on its own soil since the 1860s, so there should generally be a long archive of records and stations.

    Compare with continental Europe, which was majorly disrupted by two world wars in the 20th century (so destruction of many archival documents) and even worse the ‘developing’ world, where equipment and record keeping are probably the biggest drawbacks to a reliable extended historic record.

    I look forward to Zeke’s post on TOBS – I think I understand the concepts (principally that changing the time of readings influences the risk of double-counting extreme values), but intuitively I suspect the size of the adjustment is too large. How frequently do these double-counting issues actually come about for mid-morning or mid-evening measurements?

  133. What would it look like measured in Fahrenheit? Just for fun.

  134. Joe D’Aleo had a paper which showed that if they didn’t do these dubious TOBS adjustments (that by themselves produce the warming trend) then all the solar reconstructions match perfectly to the US data as well as to the Arctic data; i.e. the only two ‘good’ datasets we have. Food for thought!

    • Steven Mosher

      Joe D’Aleo had a paper?

      No, he wrote a paid-for piece.
      And he was wrong.
      And his co-author uses the TOB adjustment.

      Nice appeal to an uncited, unreviewed, wrong “paper” whose co-author does not practice what that paper preaches.

      excellent.

  135. I like what Zeke and Mosh are doing. I think they are trying very hard at doing very tough, data-intensive work in a rigorous and honest manner. I don’t think their motives should be questioned.

    If they have an unconscious bias, join the crowd, we all do, that can never be helped, and I don’t think any unconscious bias they might have is affecting their analysis, as far as I can tell. I recall that Mosh posted several months ago that in his view, El Nino would cause global temps to set a record this year. Latest evidence is that El Nino is going bust. So maybe Mosh has an unconscious bias about temps. So what? Even if he does, if you think it might affect his analysis, show that their analysis is wrong in some way. Doesn’t look that way to me.

    I do think that Mosh goes a little too lightly on the issue of why the climate change community basically reacted with silence over Climategate. Yes, some scientists probably had their noses deep in their work and were only vaguely aware, at best. But many such scientists were only too aware, and with very few exceptions (thanks, Judy), they either did nothing, or in some cases attacked the (skeptical) messengers.

    My take is that climate science has been mostly politics for the last 15 years. It is warfare, tribal warfare, and it isn’t about the science, it is about the interpretation of science and whether you are on the right team. If you are on the climate change team, you defend your team, you don’t give the other side ammunition, as Mann (among others) famously said. If a university’s research depends on government money, the university’s PR department makes sure there is some dire implication in their press releases about their research findings. If an individual scientist thinks that climategate was a scientific fraud, it will do his career, and funding, no good to say so. If you are in the government, and the government has made it clear what its position is, you don’t rock the boat.

    So if Mosh has some unconscious biases, and if they affect his perception of things (as my unconscious biases no doubt do as well), it doesn’t play out in his rigorous assessment of the temperature record. But it may play out in what seems to me to be a bit of a lack of recognition that the mainstream science community has multiple and converging non-scientific reasons to keep their mouths shut about climategate.

    • Steven Mosher

      “I do think that Mosh goes a little too lightly on the issue of why the climate change community basically reacted with silence over Climategate.”

      I think they remained silent for some of the same reasons skeptics remain silent when Goddard makes mistakes, or when Scafetta refused to release code, or when denizens here say stupid stuff.

      I think they get defensive for the same reason commenters at WUWT get defensive or jonova get defensive.

      They are humans.

      As an experiment (I love doing these), go criticize someone on your own team. Watch what happens. Go criticize a friend’s science. See what happens.
      Willis and I are friends. But to people on the outside we look like enemies.

      Now, people have this idealized vision of the scientist. He’s the objective one, the one who operates with no allegiance, well, his allegiance is to the truth. Sorry, I’m not buying it. He’s a human. He has interests and feelings and biases and quirks and blind spots.

      So what do we do.

      I do some science. I show you, I give you my data. I show you, I give you my method. That allows you to CONTROL for the researcher BIAS.

      Your job is to FIND and DEMONSTRATE the ACTUAL BIAS.

      You dont DO this by arguing.
      You dont do this by questioning
      You dont do this by MERELY doubting.

      You DO this by actually DOING THE WORK of DEMONSTRATING the bias
      with data or with a method.

      Until you can SHOW the BIAS, you have nothing but PHILOSOPHICAL objections.

      Science aint philosophy

      • Mosh, I have mostly criticized commenters, or articles, at WUWT. Don’t confuse me with other people. Below see my latest comments, on the thread that questions whether disposal wells have caused earthquakes in Oklahoma. Perhaps you are thinking of someone else.

        That said, as someone who has been on the receiving end of denigration when I responded with science to a friend’s viewpoint that sea levels would be 3 feet higher by 2060, I stand by my view that a lot of the failure of the climate community to address climategate is because of tribalism: don’t want to give ammunition to the opposition. I reported on that incident here about 10 months ago.

        ——
        Here are my latest comments on WUWT, just so you will know.

        John says:

        July 5, 2014 at 7:54 am

        We need to distinguish between earthquakes caused by fracking, and those caused by high volume disposal of liquid waste products. The largest earthquakes by far are those caused by disposal. There has been an earthquake as high as 5.7 on the Richter scale caused by disposal wells in Oklahoma. That big, and you can have several thousand dollars of damage to your house. The ones caused by actual fracking are usually between 1 and 2, barely noticeable if you are right on top. Big difference.

        If wastewater was recycled more, there would be much less need for disposal wells. And places like Oklahoma and Texas often don’t have all that much water to spare. If the industry wants to avoid a PR disaster the first time someone is killed by an earthquake caused by disposal, they have to recycle water more. It will cost a bit more, but it will be worth it.

        Face it, none of us would want a magnitude 5 earthquake near our house. Fracking is very good for the US. It makes tons of tax money for cash starved states (Pennsylvania in particular), provides many jobs, reduces our imports. The industry can afford to recycle water a lot more to reduce the bigger earthquakes caused by disposal well.

        John says:

        July 5, 2014 at 7:56 am

        Here is the link for the 5.7 earthquake near Prague, Oklahoma caused by disposal wells, not by fracking:

        http://www.reuters.com/article/2014/03/11/energy-earthquake-oklahoma-idUSL2N0M80SP20140311

      • Matthew R Marler

        Steven Mosher: As an experiment ( I love doing these ) go criticize someone on your own team.

        On this topic, you and I are on the same team. We were also on the same team when this topic (or a related topic) was discussed at WUWT.

        Since I criticized you, fairly I think, let me say that in reading this thread I am favorably impressed by your willingness to answer the same questions over and over again.

        Also, you spelled my name correctly, which I appreciate. I think things like that make a favorable impression on those readers who never comment.

      • A C Osborn

        I did demonstrate BIAS in BEST and you agreed that BEST can’t handle Islands and Coastal data.

        It doesn’t matter how much you prove that the “Maths” are good, the Adjusted data does not reflect reality. Instead of changing the past, which should be Set In Stone because that is what human beings experienced at the time, adjust the present to fit instead.

      • A C Osborn

        Let me quote Mosher from a previous Post about BEST.
        If you want to know what the Temperature was Use THE RAW DATA.
        If you want the best Estimate use the “ESTIMATED FIELD”.

      • Mosh, please take another, closer look at my comment.

        I didn’t criticize the science that you and Zeke do, to the contrary I said I liked it. I didn’t criticize your data gathering or the way you handle it, or your results. Period.

        I said we all have UNCONSCIOUS biases, myself included. That shouldn’t be controversial.

        I thought perhaps, from your prediction a couple of months ago that we would have a new temperature record this year because of El Nino, that you might have such a bias in terms of when temperatures would rise again; perhaps you think (consciously or unconsciously) that the pause will soon come to an end, and model forecasts in a few years’ time won’t look as bad as they do now. That was speculation. I didn’t say that such an unconscious bias, should this particular one exist, affected your science.

        I did think then, and do think now, that tribalism is a major reason why the climate change science community has not criticized the climategate emails and perpetrators: we can’t give ammunition to the other side, we can’t suggest to our funders that we aren’t fully committed. You and I may have to agree to disagree on this point. But even if we do disagree on this point, it isn’t a criticism of your science.

        So – please read my email a bit more carefully next time!

      • Skeptics didn’t produce a false record used to influence massive policy decisions that required correction by anyone with any pretense to morality and ethics. Skeptics hadn’t accepted enormous sums in research funding that was exposed as questionable. Skeptics didn’t have a stake in maintaining the integrity of the institutions of science.

        Big difference. Huge.

      • Steven Mosher

        AC you didn’t demonstrate bias

      • @ Matthew Marler

        Upthread you asked the following, which I never directly answered:

        “You are not advocating that the whole temperature record be ignored, are you? ”

        As justification for political action to ‘control climate change/control global warming/control climate weirding/control ACO2’ or for any other ‘climate policy’, that is exactly what I am advocating.

        After reading Zeke’s explanation of the data processing (an excellent job, by the way, along with his follow-ups to other commenters), Mosh’s continuing efforts to educate us on BEST’s work, and a host of other data-related posts and commentary that have appeared here over the years, it is patently apparent that the historical data record is simply not able to support the conclusions that are being so heroically extracted from it. It lacks precision, it lacks geographic coverage, it lacks any semblance of QC, it lacks continuity, ad infinitum. And no amount of heroic adjusting, infilling, kriging, correcting, or whatever, no matter the ‘need’ for precision data, is going to convert historical temperature data into a database from which the monthly temperature of the Earth can be compared on a year-to-year basis with a precision that justifies press releases like the following:

        “The National Oceanic and Atmospheric Administration Monday said May’s average temperature on Earth of 15.54 C beat the old record set four years ago. In April, the globe tied the 2010 record for that month. Records go back to 1880. ”

        especially since the ‘record’ was broken by 0.02 C. Do you, Zeke, Mosh, or anyone else believe that the planetary temperature records going back to 1880, no matter how carefully massaged, can support the above as a statement of scientific fact?

        Scottish Sceptic made the following statement earlier: “From that experience I learnt that it was impossible to reliably measure the temperature of a glass slid about 1cm across to within 0.01C let alone an enclosure a few tens of cm.”

        He is right; you can’t make a meaningful measurement of room temperature with 0.01 C precision, never mind the monthly or yearly surface temperature of the planet. Anyone who has ever tried to measure temperature knows it, and is instantly suspicious when faced with breathless headlines saying otherwise, especially when the headlines are based on century-old, hand-written data, collected from sub-optimally distributed locations using uncalibrated mercury thermometers by untrained observers, heavily massaged by scientists funded by the government, and cited as justification for political action by the politicians who provided the funding.

      • k scott denison

        Bob: +1000

      • Steven Mosher | July 8, 2014 at 5:08 pm |

        AC you didn’t demonstrate bias

        The Swansea Final Best trend is approximately 1.25 degrees, while the Raw Best Data shows approximately 0.75 degrees, so the Best Final BIAS is 0.5 degrees.

        Plus the starting point of the trend in Final is 1.0 degree higher than raw.

        Like you said, if you want the temperatures use RAW; if you want BIASED Climate Scientist modelled fantasy use “Expected” values.

      • Matthew R Marler

        Bob Ludwick: “The National Oceanic and Atmospheric Administration Monday said May’s average temperature on Earth of 15.54 C beat the old record set four years ago. In April, the globe tied the 2010 record for that month. Records go back to 1880. ”

        I agree that some people are claiming more precision and accuracy for some of the estimates than is warranted.

        Sorry I took so long getting back to you, but I am trying to “cut down” on my intrusions.

    • Again Bob Ludwick +1000.

  136. It is always a challenge to sort through a climate paper and try to discover mathematically what is happening. I think I see it:
    Average monthly temp is a double integral over time (month) and space on a manifold (earth) of a somewhat nasty function ( http://www.eol.ucar.edu/cgi-bin/weather.cgi?site=fl&fields=tdry&site=fl&units=metric&period=monthly e.g. for the time varying part). A saving grace is that the integral is then divided by a month’s time and the surface area of the earth.
    Now ideally one would have stations at a nice set of Gauss points in time and space. Instead we have some very sparse set of samples for T, unevenly spaced.
    So the approach is to reconstruct T with scattered spatial interpolation (gridding) and to use training data and the periodicity of T to reconstruct in time (TOBs). Then we integrate and divide to average.
    A problem I have with the TOBs papers is that the statement of the method is not clearly posed; the integration and interpolation are done simultaneously and hence any sort of standard quadrature error estimation is unavailable.
    This is the challenge of interdisciplinary work. Math guys could help with this in a major way, but climate culture is too proud to involve them.

    There is surely a formal development of this problem from a statisticians point of view as well. I would love to hear it and the associated (standard) error estimates.
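
    To make that concrete, here is a minimal formal statement of what I think is being estimated (my own framing, not any group’s documented derivation):

    \[
    \bar{T}_{\text{month}} = \frac{1}{A\,\Delta t}\int_{t_0}^{t_0+\Delta t}\!\int_{S} T(\mathbf{x},t)\,\mathrm{d}A\,\mathrm{d}t \;\approx\; \sum_{i} w_i\,\hat{T}_i, \qquad \sum_i w_i = 1,
    \]

    where S is the Earth’s surface of area A, \hat{T}_i is the monthly mean reconstructed at grid cell i from nearby station records, and w_i is the cell’s area weight (proportional to the cosine of latitude on a regular latitude-longitude grid). Any standard quadrature error bound for that sum then needs an estimate of the interpolation error in each \hat{T}_i, which is exactly the piece I cannot find stated explicitly.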

    • The method is very practical and pragmatic tho, so hats off to the inventors.

  137. I think Zeke did an excellent job explaining how and why adjustments to temperature data were made. To me, it makes sense. The next questions are – given the questionable reliability of much of the raw data (especially the historical data), the gaps in coverage, and the number of adjustments that have been applied to the raw data, what is the confidence level that 1) a significant rise in temperature has been observed 2) that the trend is unprecedented 3) that the trend is accelerating 4) that any rise in temperature is directly attributable only to Co2 increases? Put another way, what is the confidence level that if we stopped burning fossil fuels tomorrow, we would see a decline in temperature and how long would it take for the decline to occur?

    Other questions I can think of that are not directly related to this post are 1) is a decline in temperature desirable 2) are increases in Co2 actually beneficial and 3) is now the time to impose legislation that will cripple our economy and limit our ability to adapt to severe weather events and changes in climate that will occur no matter what we do?

    • @ Barnes

      Thank you; have asked the same questions, and similar ones, often, and got no coherent answers. Maybe you’ll have better luck.

      • plutarchnet

        When the answerer is barraged with assertions of bad faith, personal attacks, vaporings of ‘you’re wrong’ — unsupported by evidence, ‘questions’ that arise from the questioner having not read the article they’re supposedly asking a question about, and so forth, it’s surprising that you get any answers at all.

        If you want answers to non-gutter questions, standing in the gutter isn’t a good place to ask from.

    • Barnes

      You asked an excellent set of questions.

      ‘… The next questions are – given the questionable reliability of much of the raw data (especially the historical data), the gaps in coverage, and the number of adjustments that have been applied to the raw data, what is the confidence level that 1) a significant rise in temperature has been observed 2) that the trend is unprecedented 3) that the trend is accelerating 4) that any rise in temperature is directly attributable only to Co2 increases? Put another way, what is the confidence level that if we stopped burning fossil fuels tomorrow, we would see a decline in temperature and how long would it take for the decline to occur?’

      —– ——

      I have looked at many historic sets of temperature readings and wrote about the difficulties with them in a previous article. The basic raw data (each individual temperature reading) is often more like a rough stone, which cannot be turned into a useful and reliable record, than a gold nugget which, carefully prepared, has some value. You certainly wouldn’t bet your house on their reliability to anything more than plus or minus half a degree C.

      In answer to your questions

      1) A rise in temperatures can be observed which, taken with other records, can be traced back some 350 years. The glaciers first started melting again around 1750.

      2) The trend is unprecedented in the last 50 years. However, our records are very short, and a global average is of dubious value as it disguises the regional nuances. In this context I would say the trend is likely to be similar to the ones going from the Dark Ages cold period into the MWP and from the LIA into the modern warming period, so in human terms it is not unprecedented.

      3) Even the Met Office admits to the pause in land temperatures, so the trend can only be seen to be accelerating if it resumes its upwards curve over the next 50 years. The MWP lasted 400 years; the modern warm period has been at similar levels for around 30 years, with a hiatus, so it may well have a long time to run.

      4) Co2 must have an effect, but whether that effect tails off at 30ppm, 300ppm or much higher needs resolving. Looking at historic temperatures co2 appears to be one of many passengers on the climate coach but is not the driver.

      To answer your other questions we can not dial up a perfect temperature to order. This current warm period is very benign and I would go with that as being desirable over any others that we can be confident of.

      It would take centuries before we saw any temperature decline-assuming co2 to be responsible-even if we cut emissions today.

      tonyb

      • But this is all just your personal opinion, nothing more, isn’t it?

      • Barnes

        Further to my reply to you,

        It must be said this is a good post by Zeke. We must wait for the other two in order to be able to put it into context. In my reply I was not implying that Zeke or Mosh are in any way trying to pull the wool over our eyes. I also do not believe in hoaxes or conspiracy theories.

        However, much of climate science revolves around data that are more rough stones than potential gold nuggets.

        tonyb

      • Tony – thank you for your reply. Frankly, I ask those questions due in part to work of yours that I have read. If I recall correctly, your examination of historical temperature records show that abrupt climate changes are more the norm than the exception, and that we don’t readily know why.

        I think the work that Zeke and Mosher/BEST are doing is valuable, but I question the fidelity of the data WRT making drastic policy decisions that will clearly have an impact on our economy, quality of life (negative impact), and our ability to help those that the left claim to care so much about – the poor. I am clearly on the side of the “deniers” and think we have a lot to learn before we can attribute changes to climate due to anything beyond natural variability with some minor influences by humans – and those influences include things other than just burning of fossil fuels.

      • Tony – just saw your second post and agree. I don’t see anywhere where Zeke is claiming that this post demonstrates anything beyond explaining how and why adjustments were made. I think that may be why Mosher plays the bad cop through much of this thread.

        However, the warmists (like FOMBS) will hyperventilate over the results, claiming proof of CAGW, and further demand immediate and drastic action. Unfortunately, it’s not just the likes of FOMBS, it’s also too many of our political leaders, and virtually all of the MSM.

      • Steven Mosher

        Barnes is wise.
        Zeke is explaining what is done.
        For that mere action people attack his motives.
        Skeptics who demand attention to the data
        Attack the man.

      • > I think that may be why Mosher plays the bad cop through much of this thread.

        Some call it grooming.

    • Tonyb wrote: “However, much of climate science revolves around data that are more rough stones than potential gold nuggets.”

      Scientists like Zeke and others have made heroic efforts to extract the most reliable global warming signal from the inadequate data we have. They have polished your “rough stones”. Unfortunately, we are left with several mysteries: 1) What has happened at the average station (producing breakpoints about once a decade) that has caused them to report, on average, cooler temperatures after such events? Does station “maintenance” produce breakpoints? 2) Some stations must be biased warm by urban heat islands, but their influence on the global trend can’t be detected with any of the techniques available for separating urban and non-urban stations. How do we identify stations biased by UHI so we can prove they haven’t affected the global record? 3) What is the best way to present the uncertainty arising from re-processing historic data? The uncertainty in calculating a mean global temperature anomaly from homogenized data from thousands of stations is probably much smaller than the possibility of systematic errors from homogenization.

      • @ Frank

        ‘Scientists like Zeke and others have made heroic efforts to extract the most reliable global warming signal from the inadequate data we have.’

        You make the point that I and others have been trying to make for a long time, with no obvious success to date:

        The purpose of the thousands of man years and billions of dollars that have been spent torturing the patently inadequate historical climate data we have has nothing to do with understanding how the Earth’s climate works.

        The purpose IS as you said: to ‘extract the most reliable global warming signal’, which is POSTULATED, not theorized, to exist, certify that it is caused by ACO2, and provide a laundry list of undesirable to catastrophic consequences which ARE befalling us (present tense) and which will continue and escalate unless political action is taken to drastically curb our use of fossil fuels.

        By the way, the latest ‘bad thing on the laundry list’ (never any ‘good things’) caused by CAGW is apparently this: “Climate change could lead to the extinction of redheads in Scotland, a DNA expert has claimed.”, which made headlines around the world. Instantly.

    • I don’t think anyone has put serious resources into the “what would happen if we stopped all CO2 emissions tomorrow” because it isn’t a serious question worth devoting time and money to answering. It isn’t going to happen, so what’s the point?

      On the other hand, many models are devoted to many different (and more realistic) emissions scenarios, and they are publicly available if you are really interested.

  138. Without clearly defining the impact of collective political group think, as opposed to the straw-man response contained in the article (“there is no conspiracy”), it’s difficult to communicate at all to AGW advocates, “believers” and followers. I appreciate Zeke’s post but his minimization of the agenda-driven culture regarding climate research reduces his credibility.

    Political bias requires no “conspiracy”. IRS, EPA, Academia, NY Times…..NOAA…..NCDC….do you seriously think that the millions who generally “hope” any question of “evidence” doesn’t fit their narratives don’t impact the result?

    Forget “conspiracy” and look at the total culture of climate “research” before such an arbitrary claim is made: “the books aren’t cooked”. It’s pretty clear a good section of the rank and file climate research community are at least sympathetic to the warming narrative. We should explore all the people involved and their underlying political views if they were or are involved in sensitive and abstract data “adjusting”. Disclosure builds confidence.

  139. What these adjustments boil down to is being able to produce a graphic that shows Warming. Without it, there is no Sciencey-Looking Climate Change Marketing to the masses. This is why the desperation Warmer defense.

    Andrew

  140. What it means is that we don’t have a temperature record.

    • It’s always been a simple minded affair, surface temp records and relating it to “climate”. Most of the ocean isn’t measured, the standards of even 20 years ago were very primitive let alone the claims of ice cores and tree rings.

      One unfortunate outcome for these discussions is the false validation of surface temperature as the exclusive climate driver. Most credible climate “scientists” would denounce this concept to my mind but almost all go along for the ride. From all this noise people try to dictate “policy” and demand control over vast private and national interests.

    • True, in the sense that even though the average temperature reconstruction ‘makes sense’, there is zero formal error estimation of either the interpolation error of the global surface temperature reconstruction or of the quadrature thereof, and hence the uncertainty in the temperature record is completely unknown (save maybe sound extreme bounds that one could probably work out on a napkin).

    • @ rhhardin

      “What it means is that we don’t have a temperature record.”

      Of course we do, rh. And a very fine record it is, too.

      Otherwise, how would our temperature experts be able to justify press releases like this one?

      “Driven by exceptionally warm ocean waters, Earth smashed a record for heat in May and is likely to keep on breaking high temperature marks, experts say. The National Oceanic and Atmospheric Administration Monday said May’s average temperature on Earth of 15.54 C beat the old record set four years ago.”

      It is worth noting that the record that was ‘smashed’ was 15.52 C, proving that the temperature records are able to resolve year to year variations in the ‘monthly temperature of the Earth’ with hundredths of a degree precision.

      If that doesn’t prove the quality of our temperature records, what would it take? After all, if NOAA didn’t have a pretty high level of confidence that their records were accurate to at least 10 millidegree precision, would they be reporting that a four year old record was ‘smashed’ by 20 millidegrees?

      • Bob

        On the last thread John Kennedy of the met office said that due to uncertainties the May 2014 figure was certainly in the top 10 but they could not be more certain than that.

        Perhaps NOAA or more likely their press department are more certain than the met office that the record has been ‘smashed’ by the huge amount cited.

        This certainty over fractions of a degree does no one any favours does it?

        Tonyb

      • @ Tony

        “This certainty over fractions of a degree does no one any favours does it?”

        Well, it certainly does a favor to the CAGW cause, in that the headlines that reach ‘Average Joe’ did say that the record was smashed.

        It is also worth noting that neither the headlines nor the reporting in general mentioned the previous record or by HOW MUCH it was smashed.

        It took a bit of digging for me to find the smashed record and confirm that the margin of smashing was actually 0.02 C.

      • Bob

        ‘smashing’ certainly suggests a much much bigger record margin than has possibly occurred. The met office are right to be circumspect. NOAA really ought to issue a clarification or be accused of hubris.

        Good digging btw
        Tonyb

      • +0.36 degree Fahrenheit, or close to it.

      • Let me try: .036 degree Fahrenheit, it was only off by one zero.
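
        (For the record, assuming the 0.02 C margin quoted upthread: 0.02 C × 9/5 = 0.036 F, so the corrected figure is the right one.)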

    • Think of all the space they saved though.

  141. … under the banner of so-called “climate justice,” the U.N. is doing exactly the opposite. It is doing its best to hobble, hinder and obstruct development of the cheapest and most reliable sources of energy in the third world. ~Francis Menton

  142. “The large contribution of adjustments to century-scale U.S. temperature trends lends itself to an unfortunate narrative that “government bureaucrats are cooking the books”.
    A point of view, surely, since when it is portrayed as the real temperature of the past it is exactly cooking the books.
    “no grand conspiracy to artificially warm the earth”
    “I really have no clue why people keep harping on this ‘exact number of active real stations’ question when it’s trivial to answer”… No answer given, evah.

    So to be clear
    there were 1218 real stations (USHCN) in the late 1980s
    There are now approximately 609 original real stations left
    There are 870 total real stations
    There are 161 new real stations, all at airports or in cities
    There are 348 made up stations and 161 selected new stations.

    You are using 348 made up stations, infilling others which are not reporting.
    You are using an algorithm which puts past temperatures down and passing it off as real historical data.
    Plus you say you do not see why you have to label the crockery as being an estimate and not real data for people using the graph.
    Well, it is damn important when you present it as historical fact and let it be used to promote the idea of global warming due to CO2.
    It becomes a conspiracy when you refuse to acknowledge that the real past temperatures were ever, at the most, 0.2 degrees C higher, and that at some sites only.
    When you cannot see the basic flaw you are perpetrating on everyone, not just yourself, you are not a conspirator, just badly leading yourself up a garden path.

  143. Zeke:

    This article has been very helpful to me. I am sure the next two will be helpful also.

    On the issue of pair-wise comparison (which I am jumping ahead of your post on) – would we still do that in a perfect future world?

    Say we have identical weather stations every square kilometer. They take readings every 5 minutes. They are constantly calibrated. We do this for 100 years.

    Is there still a reason to compare each station to its nearest 10 stations and adjust if the trend of one is different than the trend of the nearest 10?

    Is that not really just averaging the 10 nearest stations and spreading that average over their area?

    In a perfect future world – with these identical stations every kilometer, after 100 years of data gathering – it seems like we would want to retain the micro-climate data and just use it all as is, rather than do the pair-wise homogenization step.

    What are your thoughts on this issue?

    Thanks in advance.
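
    To make my question concrete, here is a toy sketch in Python (my own code, not NCDC’s actual pairwise homogenization algorithm) of why the comparison is not simply averaging the neighbors: the difference between a station and the median of its neighbors cancels the shared regional signal, including any genuine micro-climate trend, and only station-specific jumps stand out.

    import numpy as np

    rng = np.random.default_rng(0)

    def pairwise_breakpoint_check(target, neighbors, window=60, threshold=0.3):
        # target:    monthly anomalies for the candidate station (1-D array)
        # neighbors: monthly anomalies for nearby stations (n_neighbors x n_months)
        # The shared regional signal cancels in the difference series,
        # so only station-specific shifts survive.
        diff = target - np.median(neighbors, axis=0)
        flags = []
        for i in range(window, len(diff) - window):
            step = diff[i:i + window].mean() - diff[i - window:i].mean()
            if abs(step) > threshold:
                flags.append(i)
        return flags

    # Toy check: 50 years of monthly data with a -0.4 C instrument change at month 300
    months = 600
    regional = np.cumsum(rng.normal(0, 0.05, months))          # shared "climate" signal
    neighbors = regional + rng.normal(0, 0.2, (10, months))    # ten nearby stations
    target = regional + rng.normal(0, 0.2, months)
    target[300:] -= 0.4                                        # station-specific break
    print(pairwise_breakpoint_check(target, neighbors))        # flags months near 300

    In a perfect network with no moves or instrument changes the difference series would show no steps, nothing would be flagged, and the micro-climate data would be left alone, which is really what I am asking: would the step still be applied at all?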

  144. Changing the Past? by Zeke
    ” The alternative to this would be to assume that the original data is accurate,
    and adjusted any new data relative to the old data (e.g. adjust everything
    in front of breakpoints rather than behind them). From the perspective of
    calculating trends over time, these two approaches are identical, and its
    not clear that there is necessarily a preferred option.”

    Go for it, Zeke: the morally right approach.
    The correct scientific approach, and the past is left unchanged.
    Gee, I would even give you a 0.2 degree TOBS adjustment to 1934, once off, if you did this and we could all go home.
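
    To see the equivalence for yourself, a toy example (mine, in Python, with a made-up station) shows that shifting the segment before a breakpoint and shifting the segment after it give identical trends and differ only by a constant offset:

    import numpy as np

    years = np.arange(1900, 2001)
    true_series = 0.01 * (years - 1900)        # an underlying 0.1 C/decade trend
    raw = true_series.copy()
    raw[years < 1950] += 0.4                   # a +0.4 C inhomogeneity before a 1950 breakpoint

    adjust_past = raw.copy()
    adjust_past[years < 1950] -= 0.4           # "change the past": shift the older segment down

    adjust_present = raw.copy()
    adjust_present[years >= 1950] += 0.4       # "change the present": shift the newer segment up

    for name, series in (("raw", raw), ("adjust past", adjust_past), ("adjust present", adjust_present)):
        print(name, round(np.polyfit(years, series, 1)[0] * 10, 3), "C/decade")

    The raw series, with its warm-biased early segment, shows a reduced trend (about 0.04 C/decade here), while the two adjusted series print an identical 0.1 C/decade and differ only in their absolute level.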

    • Angech:

      I suggested above that we could even do both approaches. Zeke said that would be possible – but might be confusing to some people. However, not to the readers of this blog (probably).

      I find the changing of the past (or at least the changing of the estimated past) very unsettling and would much prefer to see the present change relative to the past – or at least have the option to see that.

      I would even like to see some of the classic graphs – but showing with and without each of the four adjustments Zeke is talking about (and also both ways: changing the past relative to the present and changing the present relative to the past) – just to see the classic graph with the raw data, the classic graph with the QA, the classic graph with the TOBS correction, and the classic graph with the pair-wise homogenization. Since all those files exist, the scientists could easily show all four (or five) each time – just for fun! (and for people like me who just want to gauge the difference all of these processing steps make to the raw data).

      That would be the best of all worlds – the processed data is there for the scientists who like to work with the tweaked data (cause it is the most accurate – probably). However, we could see the difference between the processed data and the data at each stage of the processing, all the way back to raw.

      Then with worldwide distribution of really good automated weather stations, after 100 years we would have really really good data and we may not need so much processing.

      • Steven Mosher

        “I suggested above that we could even do both approaches. Zeke said that would be possible – but might be confusing to some people. However, not to the readers of this blog (probably).”

        You have to be kidding

        If we changed the present, then people would say

        HEY! I was in Dallas, no way was it 14.2C. They are changing the PRESENT.

        And then goddard would do charts showing the adjusted present to the ‘real’ present and argue that its colder now.

        There isn’t a single one of you who would call these people to task if Zeke made the change you suggest.

        Do you think angech or you would go around on blogs and dispel that nonsense?
        Not on your life.

        Do you think you’d go around and say.. “wait guys, I asked Zeke to do that?” Not on your life.

        You like to give busy work and then walk away.
        Seen it before. And I seriously doubt that either you or angech would clean up the mess such a change would cause.

      • Steven Mosher

        Hell, angech can’t even be bothered to count the dang stations for himself.

      • Mosher:

        Look – I merely suggested that if both approaches are equivalent and if a lot of people don’t like the past changing daily – then it might be a good idea to add a file where the present is changed relative to the past.

        If you don’t like that then ignore my suggestion.

        I think it is a good idea.

        I am not assigning busy work to anybody – merely dropping a suggestion in the suggestion box.

        Sure – people will complain no matter what is done.

        So what.

        The question is would it be better to show it both ways – I say YES.

      • Mosher, Zeke said he could do it, not me.
        read his introduction 887 comments back.
        Under changing the past.
        He said it was valid, kosher, doable, real.
        Do your own reading.
        I only said it was a very good idea.

  145. Meanwhile, the advocates of climate justice look to as their leaders the likes of Al Gore, who preach abstinence for others while living in multiple massive high-carbon-footprint mansions (http://www.snopes.com/politics/business/gorehome.asp ) [and] (http://www.huffingtonpost.com/2010/05/17/photos-al-goree-new-8875_n_579286.html ) and flying around the world on private jets. It is time for the advocates of climate justice to recognize the immorality of their campaign to keep the poor poor. ~Francis Menton

     

  146. My suggestion for your conference at NCAR would be to focus on a more formal and simple statement of the problem, and let the complicated methodologies spawn from that simpler framework. Either pick a standard deterministic reconstruction/interpolation/quadrature formality or a statistical one.
    The problem with trying to sort through the literature on the uncertainty in this problem is that, without any simple formal statement of the problem and a well-known statistical or mathematical approach, it feels like death by “is this paper even relevant, it’s so frigging weirdly complicated”.

    Simplicity. Good luck!

  147. “Adjustments have a big effect on temperature trends in the U.S., and a modest effect on global land trends”

    Golly. So by changing the figures in a spreadsheet one can ACTUALLY CHANGE GLOBAL TEMPERATURES?

    Is there a Nobel prize for Applyde Magick? Because this man rates one.

  148. Trust but verify (RR). I will continue to use Best et al as a parameter since the only thing I have to compare it with is UAH. Now in that comparison there is a sideways trend, mostly below the baseline, from 1979 to 1997 from UAH, whereas there is an upward trend in the others that was supposed to represent the spike of global warming. However, there is an upward trend from UAH for 1979 to date and the temperature is higher and mostly above the baseline from 1998 to date. So it appears that there is a general warming trend no matter where you look.

  149. nobodyknows

    When it comes to changes of measuring devices, an interesting thing appears in the graphs in Zeke’s Blackboard article: there were minimal differences before 1985. Another issue is that it did not have so large an effect on mean temperature.

    “For example, MMTS sensors tend to read maximum daily temperatures about 0.5 C colder than LiG thermometers at the same location. There is a very obvious cooling bias in the record associated with the conversion of most co-op stations from LiG to MMTS in the 1980s.”

    “Quayle et al (1991) examined various sites around the country and found that MMTS sensor introduction led to a cooling of the maximum temperatures by around 0.4 C and a warming of minimum temps of around 0.3 C. Similar results on a smaller scale were found by Blackburn in 1993, Wendland 1993, and Doesken et al., 1995. A more recent paper by Hubbard and Lin 2006 found a 0.52 C cooling bias for maximum temperature and a 0.37 C warming bias for minimum temperature.”
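
    If the daily mean is just (Tmax + Tmin)/2, as I understand it is for these stations, the two biases largely cancel in the mean, which may be why the effect on mean temperature looks small: (-0.4 + 0.3)/2 = -0.05 C with the Quayle et al numbers, and (-0.52 + 0.37)/2 ≈ -0.08 C with Hubbard and Lin, versus shifts of several tenths of a degree in the maximum and minimum series taken separately.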

  150. Peter Azlac

    Steven Mosher says:

    “we dont have 60 optimally placed. But playing around with this over the years you do get good answers at 60, better at 100, even better with 300..
    and so forth.”

    Whilst that is true, since 70% of the Earth is covered in ocean, at least the 60-odd stations I referred to cover all the continents, give good coverage of Europe and, most important, have a continuous record from the early 1800s to the present. If an analysis of these records does not agree with the BEST, CRU, GISS evaluations then you have a problem of credibility – Clive Best has already demonstrated differing long term trends for desert and humid areas, and the same applies wherever there are differences in precipitation, windspeed and surface heat capacity – as the Chinese have demonstrated in Tibet, the Indians with regard to the monsoon, and more widely by the data from Class A Pan Evaporation studies.

    Frank Lansner has a better evaluation of the station data – sorting it by similar climate characteristics and distinguishing between areas most subject to ocean cycles, rain shadow and altitude effects. A major problem with the BEST approach of using local regional stations to ‘correct’ those with an apparent discontinuity is that, as they found for Armagh, such local stations give variable results based on such factors as soil moisture (heat capacity), aspect, surface roughness and wind speed and direction. Such factors are unique to each station and cannot easily be unravelled to provide a temperature signal, especially in relation to minimum temperatures, which form the greater part of the warming described by the BEST and other series.

    • Steven Mosher

      “If an analysis of these records does not agree with the BEST, CRU, GISS evaluations then you have a problem of credibility”

      quite the opposite.

      For people who do data mining to cherry pick stations, they need to provide a field test showing that the criteria they used are actually true and effective.

      Frank, Clive etc.. none of them have done this.

      That is, NONE has validated the robustness of their selection criteria.

      They are, like Mann, picking and choosing without any attempt to validate their selection criteria.

      The uncertainty in their selection criteria is never evaluated.

      The biggest assumption is the one that long records are better. This is the same unexamined assumption that CRU and GISS make.

      • Steven Mosher commented

        That is, NONE has validated the robustness of their selection criteria.

        Unless you only reject clearly bad values (any temp greater than or less than 199/-199 degrees).

      • Steven Mosher

        Mi Cro

        we are not talking about rejecting bad data.
        we are talking about SELECTING stations.

        Like Mann who selects proxies he likes, they do the same.
        with no validated criteria.

      • we are talking about SELECTING stations

        Fine, include all of those as well.

      • Steven Mosher

        Micro, you need to establish a criterion.
        You haven’t.

      • I have one filter and one criterion. Since my original purpose was the difference between today’s rising temp and tonight’s falling temp, for a record to be loaded, today’s station must have a record tomorrow.
        And then I added criteria that allow you to specify the minimum number of days, for some number of years, for a station to be included. Reports are then based on these station records for a defined area.
        No cherry picking.
        Originally I was planning on picking just clear-sky days out of these 122 million records, but decided that there was no way to do this without being accused of picking the results I wanted.
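
        For concreteness, here is a rough sketch of that kind of completeness filter, in Python/pandas rather than my actual PL/SQL, with assumed column names:

        import pandas as pd

        def station_passes(df, min_days_per_year=240, min_years=10):
            # df: one station's daily records with columns ['date', 'tmax', 'tmin']
            df = df.sort_values("date")
            # keep only days that are followed by a report on the next calendar day,
            # so a day-over-day difference can be formed
            has_tomorrow = df["date"].diff(-1).eq(pd.Timedelta(days=-1))
            usable = df[has_tomorrow]
            # require a minimum number of usable days in a minimum number of years
            days_per_year = usable.groupby(usable["date"].dt.year).size()
            return (days_per_year >= min_days_per_year).sum() >= min_years

        The thresholds are parameters, not magic numbers; the point is that the rule is mechanical and applied to every station the same way.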

  151. Zeke, got here late, don’t know if you’ve addressed this or not. Figure 2: I have a copy of NCDC GSoD here http://content.science20.com/files/images/SampleSize.jpg
    and there is a large drop in 1973 that your chart doesn’t show; have you seen this before?

  152. Steven Mosher

    Here is an Idea Denizens.

    Let’s see if you can get yourselves up to ClimateAudit levels of performance.

    Back in 2007 we criticized a paper by Parker on UHI.

    The approach was simple.

    We collected our best questions, objections into one numbered list.

    Then Parker responded

    Start a thread. Collect ALL the questions you think need answering

    http://climateaudit.org/2007/06/14/parker-2006-an-urban-myth/

    http://climateaudit.org/2007/07/10/responses-from-parker/

    So amongst yourselves get organized.

    Create a list of questions, maybe you could categorize them.

    I’ll help by listing the categories in the next few threads.

    Then you go through all 600 comments and collect the best questions

  153. Steven Mosher

    The denizens best questions about TOBS

    List below as replies the best questions, challenges to the specific TOBS adjustment discussed in this post

    • nobodyknows

      What is the effect of TOBS on the mean temperature of one year? And one decade? Will it matter if you read the instruments at 8 o’clock, or if you do it at 4 pm? And I don’t know what the “denizens” best question is. I think it is great to follow your comments here. (even if I am agnostic when it comes to climate)

    • bit chilly

      this is part of a reply to zeke in relation to the williams et al paper he kindly posted a link to in a response to a previous question i had asked.

      initially my query was this: initial points would be, in terms of tobs and instrument changes, surely in these cases what we are looking at is an absolute change only to the data at the point-in-time change.
      so for each station the raw numbers would change by the difference resulting from the tobs change and instrument change, but not the trend?

      which was partially addressed by a link to an old discussion topic on the blackboard from zeke, stating that the incremental changeover creates an incremental trend, fair enough.

      surely once all the tobs adjustments were completed on all stations the incremental trend change would be removed, and we would be back to the black and white temperature difference inferred by changes in tobs?

  154. Zeke wrote “There are also significant positive minimum temperature biases from urban heat islands that add a trend bias up to 0.2 C nationwide to raw readings.

    Because the biases are large and systemic, ignoring them is not a viable option. If some corrections to the data are necessary, there is a need for systems to make these corrections in a way that does not introduce more bias than they remove.”

    Zeke, I agree with 90% of this article but two areas completely invalidate the information that is correct. It may be the best guess by climate scientists, but that is just evidence against reliance on the temperature records.

    First, you claim that UHI only corrupts the data by 0.2 C, but every one of us sees this is not even close. We can turn on our local news or look at our car’s thermometer during our commute every day. When we do that we see three to five degrees Celsius of UHI. While it does not matter whether the UHI is subtracted out of today’s temps or added into past ones to correct it, as long as the temperature records are not subtracting out 3-5 degrees of UHI the trend comparisons are not valid. Especially when the homogenization process ‘corrects’ rural and well sited stations because they do not show the trend in UHI pollution.

    Second, the step process algorithms used for station moves can’t be trusted even if they are the best climate scientists can do with their current understanding. Things like repainting Stevenson screens correct biases of several tenths of a degree caused by worn screens absorbing sunlight. When repainting corrects the station bias, the process instead treats that as a breakpoint and adjusts the entire record for it. So rather than fixing the bias, it is incorporated into the record over and over again. So if 4 repaintings each correct 0.3 C in bias, the adjustment process adjusts the past downward by 1.2 C to account for nonexistent station changes. Maybe the difference between the LiG and electronic thermometers is due to extra heat absorbed by poorly maintained Stevenson screens, in which case your adjustment for that change is only adding the measurement error into the trend.

    As long as the best guesses of climate scientists fail to correct for provable warming biases in the record, the results are untrustworthy. Every station should have a reference station added at the nearest pristine location, and the UHI bias measured and subtracted from the trend. And all future station moves or instrument changes should require at least a year of overlap between the old station and the new to confirm the difference, not allow an untrustworthy algorithm to make it up ex post facto.
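
    To make the concern concrete, here is a toy Python illustration of the failure mode I am describing. It assumes, purely for illustration, a naive correction that only removes abrupt steps; the operational pairwise algorithm works on neighbor difference series and also looks for trend divergence, so whether it actually behaves this way is exactly what needs testing:

    import numpy as np

    years = np.arange(1900, 2001)

    # Station drifts warm as its screen weathers, then repainting resets the bias
    drift = np.zeros(len(years))
    bias = 0.0
    for i, y in enumerate(years):
        if y % 25 == 0:
            bias = 0.0                    # repainting removes the accumulated warm bias
        drift[i] = bias
        bias += 0.3 / 25                  # ~0.3 C accumulated between repaintings
    raw = drift                           # assume no real climate trend at all

    # A naive fix that only removes the abrupt drops at each repainting
    adjusted = raw.copy()
    for i in range(1, len(years)):
        step = raw[i] - raw[i - 1]
        if step < -0.2:                   # "breakpoint" detected at the repaint
            adjusted[i:] -= step          # align the later segment with the earlier one

    print("raw trend     :", round(np.polyfit(years, raw, 1)[0] * 100, 2), "C/century")
    print("adjusted trend:", round(np.polyfit(years, adjusted, 1)[0] * 100, 2), "C/century")

    In this toy setup the raw sawtooth has only a few hundredths of a degree per century of trend, while the step-only ‘fix’ turns the four repaintings into about 1.2 C per century of spurious warming.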

    • Steven Mosher

      “First, you claim that UHI only corrupts the data by 0.2 C but everyone of us sees this is not even close. We can turn on our local news or look at our cars thermometer during our commute every day. When we do that we see three to five degrees Celsius of UHI”

      Anecdotal.

      the PEAK, the MAX UHI you see in large cities may approach this under the right synoptic conditions.

      A) the average for large cities is much less than this.
      B) there are not many large cities in the data.
      C) removing ALL urban stations, shows that you are wrong.

      for A see

      http://www.ncbi.nlm.nih.gov/pubmed/22142232

      This is for SUHI and you find similar ranges for UHI

      Urban heat island is among the most evident aspects of human impacts on the earth system. Here we assess the diurnal and seasonal variation of surface urban heat island intensity (SUHII) defined as the surface temperature difference between urban area and suburban area measured from the MODIS. Differences in SUHII are analyzed across 419 global big cities, and we assess several potential biophysical and socio-economic driving factors. Across the big cities, we show that the average annual daytime SUHII (1.5 ± 1.2 °C) is higher than the annual nighttime SUHII (1.1 ± 0.5 °C) (P < 0.001). But no correlation is found between daytime and nighttime SUHII across big cities (P = 0.84), suggesting different driving mechanisms between day and night. The distribution of nighttime SUHII correlates positively with the difference in albedo and nighttime light between urban area and suburban area, while the distribution of daytime SUHII correlates negatively across cities with the difference of vegetation cover and activity between urban and suburban areas. Our results emphasize the key role of vegetation feedbacks in attenuating SUHII of big cities during the day, in particular during the growing season, further highlighting that increasing urban vegetation cover could be one effective way to mitigate the urban heat island effect.

      • “Anecdotal.

        the PEAK , the MAX UHI you see in large cities may approach this under the right synoptic conditions.

        A) the average for large cities is much less than this.
        B) there are not many large cities in the data.
        C) removing ALL urban stations, shows that you are wrong.”

        A. The temperature indexes don’t measure the average temperature. They post a measure of the low and high for the day, and UHI is most influential in keeping the low from dropping as an area sheds heat at night and in driving the high higher during the sun of the afternoon.

        B. Fallacy: UHI affects not only large cities but also small ones, and rural sites with parking lots, buildings and airports.

        C. That only proves that the supposedly rural stations are polluted to a similar level by UHI. UHI is logarithmic to population, meaning small towns and cities have UHI growing at a faster rate than large ones. This is why GISS adjusts 85% of urban areas to account for urban cooling because the cities aren’t warming as fast as the runways.

        Steve,
        You say on your own blog that UHI is minimal, accounting for only 0.1C per decade of the trend. Do the math Steve, 0.1C per decade since 1850 is 1.64 degrees. Twice the trend in global warming.

        I agree with your ideas about vegetation reducing the effects of UHI, but that doesn’t mean the UHI isn’t smeared all over the temperature datasets.

    • Windchasers

      First, you claim that UHI only corrupts the data by 0.2 C, but every one of us sees this is not even close. We can turn on our local news or look at our car’s thermometer during our commute every day. When we do that we see three to five degrees Celsius of UHI.

      1) Well, if the station is in the same place and environment, with the same consistent (hot) bias over time, then it won’t affect the trend. There’s no problem in that case. Fixed biases aren’t a problem; changing biases are. But I recognize that temperature stations do sometimes suffer from changing environments, so…

      2) We have created a perfectly sited, top-of-the-line, gold-standard set of stations: the CRN. These are placed out in the middle of nowhere, carefully maintained, use the most accurate equipment available, etc. And they show the same temperature trends as the adjusted USHCN record. So in addition to all the statistical validations and checks on the UHI adjustments, this provides a completely independent validation.

      When independent ways of approaching a scientific problem give the same result, you’re usually on the right track.

      • The CRN record is too short to compare at this time, especially since the change in UHI is a long-term trend issue. Since we don’t have CRN records from 60-120 years ago to compare against, comparing a five-year trend is irrelevant, and most of the corrections are applied prior to the existence of the CRN network.

  155. Steven Mosher

    The denizens best questions about Infilling

    List below as replies the best questions, challenges to the specific Infilling procedure discussed in this post

    • Since you ask: documentation that anomalies have normal (Gaussian) joint distributions over space. Of course, any interpolation technique can be applied formally, but the ones used either tacitly (averaging) or overtly rely on multivariate normality for their statistical credentials. On no evidence to speak of, I conjecture that anomalies have heavier-than-normal density tails and are possibly skewed as well. And this can make a big difference to the quality of estimation – both statistical theory and financial institutions (banks) can attest to that.
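
      A crude first step toward that, as a sketch only (my own Python; it checks marginal tails at a single station, which says nothing about the joint distribution that kriging-style infilling actually assumes):

      from scipy import stats

      def tail_check(anomalies):
          # anomalies: 1-D array of monthly temperature anomalies for one station
          excess_kurtosis = stats.kurtosis(anomalies)         # 0 for a normal distribution, > 0 for heavy tails
          skewness = stats.skew(anomalies)
          statistic, p_value = stats.normaltest(anomalies)    # D'Agostino-Pearson omnibus test of normality
          return excess_kurtosis, skewness, p_value

      Heavy tails or skew would not invalidate the interpolation itself, but they would change the error bars one is entitled to attach to it.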

  156. Steven Mosher

    The denizens best questions about PHA
    List below as replies the best questions, challenges to the specific PHA adjustment discussed in this post

  157. Steven Mosher

    The denizens best questions about BEST

    List below as replies the best questions, challenges to BerkeleyEarth.
    This was not the topic of this post, but I can save this for a future series

  158. Steven Mosher

    The denizens best questions about the “philosophy” of adjusting in general

    Many comments attack the very notion of adjusting data when the conditions of observation change.

    collect the best arguments as replies to this comment

  159. Steven Mosher

    The “Interests” of investigators.

    Many people avoid the technical issues altogether, and question the integrity, motives, interests, of people presenting science.

    Collect your best arguments as replies here.

    • Who do you see as the users of your results and in light of your professional responsibilities, how would you describe your obligations to them? (Using my own template with this one.)

  160. You can sum it all up in one sentence: Politically-Correct Voodoo. Climate change research has been plagued since the days of hysterical fears of imminent cooling in the 1970s, by design problems, misuse of research data (both positive and negative with adjustments to raw data without explanation, and adjustments made to the adjustments — all without any justification whatsoever — and, the substitution of data without any disclosure of the questionable gimmicks being employed, together with the knowing corruption and outright loss of raw data without accountability of any kind), poor statistics, small samples, unverifiable computer models constructed using questionable time-invariant climate parameters and reductionist mathematics, and a sycophantic culture of interrelated, self-reinforcing, self-serving, self-appointed gurus — elevated far above their competence for ideological reasons — who idolize and memorialize superstitious preconceptions, indulge in flawed conclusion and hucksterism, and proselytize their politically-correct voodoo pathological climate science (likened by some outside Western academia to the science of ancient astrology), all while self-righteously opposing with cannonades of denigration the accomplishments and observations of serious scientific skeptics and an ever-growing number of global warming heretics of self-defeating AGW theory and eco-terrorism.

  161. A C Osborn

    What I find absolutely amazing about the people making the adjustments and the people defending the adjustments is their belief that it is “Better”.
    Better for what, certainly not the historic record.
    How can declaring old temperatures “WRONG” by replacing them with “calculated temperatures” be right?
    The people that lived through the 30s in the USA did not experience “calculated” temperatures, they experienced the real thing as reported by the thermometers of the day. They experienced the real effects of the temperatures and the Dust Bowl droughts.
    In Australia in the 1800s they experienced temperatures so high that Birds & Bats fell out of the air dead of Heat Exhaustion; in the early 1900s they had the biggest natural fires in the world, and yet according to the Climate experts, after adjustments, it is hotter now than then.

    It is like historians going back to the Second World War and changing the number of Allied Soldiers who died, making it far less than the real numbers. Try telling that to their families and see how far you would get.

    Based on these CRAP adjustments we hear about the “Hottest” this and “Unprecedented” that, the most powerful storms, Hurricanes & Typhoons, more tornadoes, faster sea level rise, when anyone over 60 knows, based on their own experiences, that they are Lies.
    I remember as a child in Kent in the UK during the 50s & 60s the Tar in the road melting in the summers due to the heat, followed by a major thunderstorm and flooding, with cars washed down the streets and manhole covers thrown up by the water. It is no hotter in the UK now than it was then.

    THE ADJUSTMENTS DO NOT MAKE IT A MORE ACCURATE ACCOUNT OF HISTORY.
    It is not REAL; that is why the work that Steve Goddard does with Historic Data is so important. It SHOULD keep scientists straight, but it doesn’t.

    I have already pointed out to MR Mosher that BEST Summaries are CRAP and he agreed that they can be very wrong.

    • Windchasers

      Did you even read the post?

      How can declaring old temperatures “WRONG” by replacing them with “calculated temperatures” be right?

      No. We know the old temperatures have a bias because of how they were measured. They were wrong. Then we do our best to correct them, by adjusting for the known biases.

      The old numbers were wrong. You seem to think they were right, but you need to actually show that, given the large known biases the old numbers have.

      • “The old numbers were wrong.”

        Your Warmer slip is showing, madam.

        Andrew

      • Don Monfort

        According to the NOAA there is only one record high (not a tie) for a U.S. state, in the 21st century. Look at the 1930s. The 1930s were hot as hell. There is no way to get around it. Google it. Yet when the data are homogenized and anomalized, the first decade of this century is alleged to be the warmest decade on record. July 2012 was alleged to be the warmest month on record. Ooops! Now we are back to July 1936 as the warmest month on record. If they can’t get one month right, why should we believe them about years and decades?

        http://www.ncdc.noaa.gov/extremes/scec/records

        http://wattsupwiththat.com/2014/06/29/noaas-temperature-control-knob-for-the-past-the-present-and-maybe-the-future-july-1936-now-hottest-month-again/

      • You seem to think they were right, but you need to actually show that, given the large known biases the old numbers have.

        IMO, the problem is you have no corrected value to compare to. You can think of all of the possible biases you know the data has, but unless you know each of those biases individually for each station, really for each day, you’re making the data worse.

      • Windchasers

        IMO, the problem is you have no corrected value to compare to. You can think of all of the possible biases you know the data has, but unless you know each of those biases individually for each station, really for each day, you’re making the data worse.

        No, you can definitely show that the TOB exists, just from the high-quality hourly data we have now. But it doesn’t vary greatly across individual stations, nor is there any reason to expect it would.

        It’s not like they never thought to check their assumptions or validate their adjustments. Seriously, you have to get educated about what the biases are, how they work, how the adjustments are calculated, how they are validated.. and then you have some room to find problems with them. Right now you’re just saying “it looks like nobody thought of [thing that they thought of and addressed 30 years ago]”.
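
        For what it’s worth, a minimal sketch of that kind of demonstration with purely synthetic hourly data (my own toy Python; the real test uses hourly records such as USCRN’s): a min/max thermometer reset in the late afternoon lets one hot afternoon spill into two observational days, and an early-morning reset does the same with cold mornings, so the recorded mean depends on the reset hour even though the weather is identical.

        import numpy as np

        rng = np.random.default_rng(0)
        n_days = 365 * 5
        hours = np.arange(n_days * 24)

        # synthetic hourly temperature: annual cycle + diurnal cycle + day-to-day weather
        annual  = 10.0 * np.sin(2 * np.pi * hours / (24 * 365))
        diurnal =  5.0 * np.sin(2 * np.pi * (hours % 24 - 9) / 24)   # warmest mid-afternoon
        weather = np.repeat(rng.normal(0, 2, n_days), 24)            # day-to-day variability
        temp = 15.0 + annual + diurnal + weather

        def minmax_mean(temp, reset_hour):
            # mean of (Tmax + Tmin) / 2 when the min/max thermometer is reset daily at reset_hour
            n = (len(temp) - reset_hour) // 24
            days = temp[reset_hour:reset_hour + n * 24].reshape(n, 24)
            return ((days.max(axis=1) + days.min(axis=1)) / 2).mean()

        for label, h in (("17:00 (PM) reset", 17), ("midnight reset  ", 0), ("07:00 (AM) reset", 7)):
            print(label, round(minmax_mean(temp, h), 3))

        With these made-up settings the afternoon reset comes out a few tenths of a degree warmer than the morning reset; the real magnitude depends on the local diurnal cycle and day-to-day variability, which is why the TOB adjustments are estimated empirically rather than assumed.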

      • Right now you’re just saying “it looks like nobody thought of [thing that they thought of and addressed 30 years ago]“.

        I didn’t say they didn’t think of it, just that you have no way to validate your attempted correction. Sure, you think it’s right; heck, in general they sound reasonable to me. That just doesn’t make them right. And you have no physical way to validate it.

        And with TOB, if you’re looking at the anomaly, and only looking for a trend, as long as they don’t change observation time, it doesn’t matter what time they are measured. The Min and Max temps for the station will be wrong, but the trends in Min and Max will be the same. BTW, since we’re talking about TOB, I don’t like that you use mean or average temp trends; at least min and max are real measurements, not the average of the two values.

      • Rud Istvan

        Windchasers, you need to be more precise. Yes, TOBS changed things. Zeke above says maybe 0.2 degrees; NOAA USHCN v1 published closer to 0.3. Yes, there may have been changes from Stevenson Screen/LiG to MMTS. The latter are, based on my reading of the literature, not well documented, since they do not account for the actual condition of the screens. Those would cool the past a bit for comparability. On the other hand, we know that there are numerous station siting problems that have grown up, all with a warm bias, which could be loosely grouped under UHI. NASA GISS says the proper correction is to ‘warm the past’ and uses Tokyo as the example. On their website last month the GISS correction for 1930 was about 1C; the Japanese data for Tokyo versus Hachikyo (a rural island Tokyo subprefecture) suggests 2C (and less warming than after the GISS UHI correction). Those things offset to some degree, and for at least some stations more than offset. Therefore it passes neither logical nor statistical muster (see Steirou and Koutsoyiannis at EGU 2012 for a rigorous analysis of a sample of 163 GHCN stations globally) that the aggregate answer is always that the past is cooled by a lot more than TOBS. The most damning evidence is the around-year-end switch at the state level from Drd964x to the newer nClimDiv. Both supposedly had TOBS and MMTS and at least USHCN v1 UHI adjustments. Many states went from essentially no warming trend, as displayed by the older 2013 ‘official’ government graphic, to substantial warming in the new 2014 version of supposedly the same data. My next book will use California, Michigan, and Maine as particularly egregious examples. By count, 44 states received an enhanced warming trend and only 8 remained unchanged or slightly cooled. “Something’s rotten in Denmark.”

      • David Springer

        @Rud

        +1

    • How much meaning can an average CRAP possibly have?

    • Steven Mosher

      “Better for what, certainly not the historic record”

      The raw data is best for the historical record.

      However, if you want to estimate the global average, the raw data gives answers that are provably wrong.

      It’s pretty simple.

      • David Springer

        You were asked where the monthly tAvg in Portland-Troutdale for 1950 comes from. Your non-answer listed a score of possibilities. Portland-Troutdale produced monthly reports for NCDC, typed and subsequently scanned. The numbers on them do not match the raw data listed for Portland-Troutdale by BEST.

        Answer the phucking question. How exactly does your spaghetti-code monstrosity change the observer data into what you call “raw”?

      • David Springer

        My guess is you don’t know, and your amateur attempts to build a structured system have become so hopelessly complex and interwoven (spaghetti) that at this point you can’t unwind it to produce a simple answer to a simple question – where does the raw monthly average data for Portland-Troutdale for the year 1950 come from, and how is it processed such that it ends up 0.7F cooler than what the station keeper recorded in his monthly reports?

        Prove me wrong. Answer the question.

      • David

        I would like to see the original-presumably hand written-record for Portland for 1950 and compare it to the processed current digital figures for that year.

        tonyb

      • @David Springer 10:52 am
        It sure looks to me that the answers to your questions are a few MB of data scattered amongst dozens of multi-GB zip files. Not exactly optimized for retrieval of processing history by station.

        Has anyone built a data extractor and compiler by station across the raw, intermediate, regional and final datasets?

      • Has anyone built a data extractor and compiler by station across the raw, intermediate, regional and final datasets?

        I sort of have, but it’s in PL/SQL and on the GSoD dataset exclusively. As input you can define min/max lat/lon points to create a report, so you could put in a box around one or two stations and run it. I have already run it on 10 x 10 boxes, lat zones, 10 degree lat bands, and roughly by continent. Follow my URL for the reports I’ve already run.
        What lat/lon box do you want on the actual stations? But you have about 10 minutes before I leave for the day.
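
        If you would rather roll your own than wait, here is a rough sketch of that kind of bounding-box report (Python/pandas rather than my PL/SQL, with assumed file layouts and column names):

        import pandas as pd

        def box_report(stations_csv, obs_csv, lat_min, lat_max, lon_min, lon_max):
            # stations_csv: station inventory with columns station_id, lat, lon
            # obs_csv:      daily observations with columns station_id, date, tmax, tmin
            stations = pd.read_csv(stations_csv)
            inside = stations[stations["lat"].between(lat_min, lat_max) &
                              stations["lon"].between(lon_min, lon_max)]
            obs = pd.read_csv(obs_csv, parse_dates=["date"])
            obs = obs[obs["station_id"].isin(inside["station_id"])]
            # annual means of tmax/tmin over the selected box
            return obs.groupby(obs["date"].dt.year)[["tmax", "tmin"]].mean()

        # e.g. a small box around one or two stations:
        # print(box_report("stations.csv", "observations.csv", 45.4, 45.7, -122.7, -122.3))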

      • Stephen Rasey:

        Has anyone built a data extractor and compiler by station across the raw, intermediate, regional and final datasets?

        Nope. I’ve written a few custom functions in R which allow me to extract station records from the data.txt files, but I’ve barely touched the flags.txt or sources.txt files. The size of all these files makes them difficult for me to work with.

        Plus, BEST has released gridded data, and I’m more interested in it at the moment. I’m currently working on seeing if I can figure out why the NetCDF packages for R can’t seem to load it directly. I can view the data with other software. I can even unpackage it, repackage it, then load it into R. That’s just a huge pain given how much data there is.

      • @Brandon Shollenberger at 4:43 pm
        Nope
        It is good to know that you don’t know of any.

        BEST has released gridded data, and I’m more interested in it at the moment
        On that subject, is there any uncertainty information, such as mean standard error of the estimates associated with the grid points?

      • I’m sorry, Mi Cro. I missed your reply on my last scan.
        The GSoD dataset is the one from NOAA?
        I’ll study your site.

        Hmmm.. Google BigQuery…..
        https://developers.google.com/bigquery/docs/dataset-gsod
        Promising. Have to find out how much it has adjusted history.

      • Stephen, it’s from NCDC. In the data section at the SourceForge link there’s a link to the NCDC site; they have a doc that describes their QA prior to making it available. But the avg/mean field is just the average of min/max, so I don’t really use it for anything other than relating it to all of the other temp series.

      • Steven Mosher

        Brandon
        ‘Plus, BEST has released gridded data, and I’m more interested in it at the moment. I’m currently working on seeing if I can figure out why the NetCDF packages for R can’t seem to load it directly. I can view the data with other software. I can even unpackage it, repackage it, then load it into R. That’s just a huge pain given how much data there is.”

        use ncdf4

        works like a charm

      • Steven Mosher

        “David

        I would like to see the original-presumably hand written-record for Portland for 1950 and compare it to the processed current digital figures for that year.

        tonyb”

        ################################

        to do that you will have to wait for ITSI to finalize.
        In the end, it may very well be that we would shift to the ITSI data set

        or use it as an alternative.

        They took a completely different approach to the station problem,
        using a probabilistic approach to de-duplication.

      • Steven Mosher

        “My guess is you don’t know and your amateur attempts to build a structured system have become so hopelessly complex and interwoven (spaghetti) that at this point you can’t unwind it to produce a simple answer to a simple question – where does raw monthly average data for Portland-Troutdale for the year 1950 come from and how is it processed such that it ends up 0.7F cooler than the what the station keeper recorded in his monthy reports?

        Prove me wrong. Answer the question.”

        Sadly it’s not a simple question. If you think it is, then you don’t know what you are talking about.

        But knock yourself out. You are looking at one data source, hourly at that.
        You realize that there are multiple ways of turning that hourly data into a daily average. None is better than any other, but you have to do them all consistently.

        One way is to integrate the hours;
        another is to look at tmin/tmax.

      • Stephen Rasey:

        On that subject, is there any uncertainty information, such as mean standard error of the estimates associated with the grid points?

        Nope. Right now the files have no information about any uncertainties.

        Steven Mosher:

        use ncdf4

        works like a charm

        Interesting. I didn’t see the ncdf4 package before because it’s not hosted by any of the CRAN mirrors. Apparently they don’t have binaries built for it. I’m not sure why. I think I see why the ncdf package didn’t work though. According to the ncdf4 documentation, it was written to work with a new version of NetCDF files, one the old ncdf package can’t handle.

        The more up-to-date package does work, but I won’t say it works like a charm. It’s one of those R packages that apparently doesn’t have any safeguards against memory consumption. It’ll happily eat up all your RAM, and if you don’t have any more to spare, R will just crash.

        Still, it does the trick. You just have to know what you’re getting yourself into. Thanks for pointing me to it.
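
        For anyone following along, the basic ncdf4 calls are short. The file and variable names below are placeholders for whichever BEST gridded file and fields are actually downloaded; reading a slice with start/count is what keeps R from trying to hold the whole array in memory:

          library(ncdf4)

          nc <- nc_open("berkeley_gridded.nc")   # placeholder file name
          print(nc)                              # lists dimensions and variable names

          lat  <- ncvar_get(nc, "latitude")      # placeholder variable names
          lon  <- ncvar_get(nc, "longitude")
          temp <- ncvar_get(nc, "temperature",
                            start = c(1, 1, 1),
                            count = c(-1, -1, 12))  # read only the first 12 time steps
          nc_close(nc)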

      • David Springer

        Steven Mosher | July 10, 2014 at 6:36 pm |

        Sadly its not a simple question. If you think it is, then you dont know what you are talking about.

        but knock yourself out. you are looking at one datasource. hourly at that.

        ———————————————————

        Portland-Troutdale was not hourly in 1950. The report has daily min/max entries which are summed and a monthly average computed by the observer in the monthly report.

        So you didn’t even bother looking at the original scanned report; you just made crap up. Some might call that a lie. Bald-faced.

        The question is simple. Finding the answer is hideously complex. And that’s exactly why I asked the question. Any reasonable person will assume you should be able to say quickly and easily why a data input from the February 1950 Portland-Troutdale airport monthly report is cooled by 0.7F where BEST shows “raw” data.

        The nut is that the data is far from raw and you don’t phucking know what happened to it between the observation and the output of your spaghetti monster.

        ROFLMAO

      • Steven Mosher:
        use ncdf4
        works like a charm

        Doesn’t work for me on Windows. Says not available. Looks like only Mac binaries are supplied.
        Neither ncdf nor ncdf.tools recognizes the file.

        Stuck, and not impressed.

      • OK, got a zip version of ncdf4 from the guy’s site, and did a local install, and eventually got it to work. Took hours, though, and not a hint in the documentation.
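
        For the record, a local install of a downloaded Windows binary is normally one line in R; the zip file name below is just a placeholder for whichever build was actually downloaded:

          install.packages("ncdf4_1.10.zip", repos = NULL, type = "win.binary")
          library(ncdf4)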

      • Nick Stokes, I wish I would have known you were having trouble. I could have walked you through what to do in a couple minutes.

    • Zeke and Moshpit
      What are the original 1950 numbers and what are the adjusted ones?

      Tonyb, any changes to make in the “Long Slow Thaw” old numbers?

      I am with you on this request for an anecdotal sample from Portland
      Scott

      • Scott

        The CET ‘old’ numbers were created many years ago and I have seen some of the original transcriptions. I would be more concerned with the ‘new’ numbers, as a change in the stations used has, I think, skewed the data over the last 12 or 15 years.

        It would be VERY interesting to see the original Portland 1950’s data and see how it compares to today.
        tonyb

      • David Springer

        Tony, the link to the original Portland-Troutdale Airport report typed in 1950 was given several times, as well as the link to the BEST data. I spot-checked one month of the year at random, February, and found it cooled by 0.7F between the observer’s typed report in 1950 and BEST’s manipulation at the present time.

        There’s only one Portland-Troutdale airport and one report submitted from it. Mosher simply doesn’t know what specifically happens to the data. That’s what happens when you build a big ugly pile of spaghetti code: it becomes more and more difficult to untangle and make sense of what’s going on as time goes on. It’s a classic phenomenon. See here:

        http://en.wikipedia.org/wiki/Spaghetti_code

        Search for ‘Portland’ in this thread and you’ll find the links.

      • David Springer

        @climatereason

        http://www.ncdc.noaa.gov/IPS/lcd/lcd.html?_page=1&state=OR&stationID=24229&_target2=Next+%3E

        You need to scroll down to the month of interest. I spot checked February 1950 against current BEST raw data. The link generated for the scanned report expires in 24 hours so I can’t give it to you with any confidence it will still work but here it is:

        http://www1.ncdc.noaa.gov/pub/orders/IPS/IPS-F71FEF55-3D34-42A6-AD55-D6646E31237E.pdf

      • David, I uploaded daily and yearly station data for Portland/Troutdale from GSoD here:
        https://sourceforge.net/projects/gsod-rpts/files/Reports/

      • David

        I think I might have mentioned previously that I spent a few hours at the Met Office library a few months ago trying to find original UK data for Frank Lansner’s temperatures project. The Met Office have these monthly and annual records for the US dating back to around 1890.

        I have asked Mosh several times how it is possible to cool the past (or warm it). In other words, the record is as it is, having been recorded at the time by a qualified observer. I would be surprised if there were any issues with this era, which is more than you can say for historic records. That is why I like to use much-derided ‘anecdotal’ information. Tying up crop records together with observations helps to put the instrumental record (where available) into context.

        I hope the remaining two articles in this series will help clear things up because at present I can see no justification for changing the past.

        I went to see ‘Evita’ last night (which is why I didn’t respond to Mosh)

        It was a very slick production. The stage sets, the key actors and the story line based loosely on fact made me immediately think of the climate debate, which has its own key actors, a flimsy stage set of props that don’t bear scrutiny and a story line that, since the hockey stick, only loosely bows to reality.

        tonyb

      • David

        Your various links worked, thank you. I don’t know if there is any known way of printing out the Portland monthly report which is still visible, as I have never been able to print out PDFs.

        tonyb

      • David Springer:

        Portland-Troutdale was not hourly in 1950. The report has daily min/max entries which are summed and a monthly average computed by the observer in the monthly report.

        There are in fact hourly meteorological reports available for Portland-Troutdale for February 1950, so you are obviously looking at the daily summary files (GSOD).

        You can get the entire year from:

        ftp://ftp.ncdc.noaa.gov/pub/data/noaa/1950/726985-99999-1950.gz

        726985 is the station id for Portland-Troutdale.

        and confirm for yourself that this has hourly observations. Offhand, I don’t know how you get the scanned-in “raw” reports. I suspect Anthony Watts could tell you, if you really wanted to know.

        I prefer to use the classic Weather Underground site for individual stations for its convenience, as it merges different data sources and is much easier to navigate around. I also don’t need to download large files to get a particular time window, e.g. February 1, 1950.

        You can see e.g., February 1, 1950 here:

        http://classic.wunderground.com/history/airport/KTTD/1950/2/1/DailyHistory.html?HideSpecis=1&format=1

        KTTD is the designator for Portland-Troutdale of course.

        Change “1950/2/1” to “1950/2/2” etc to see reports for the rest of the days of that month.

        It is easy enough to write a script to pull down any period of interest that way.
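
        A minimal sketch of such a script in R, using the URL pattern above (with format=1 the page returns roughly comma-separated text, though the exact layout may need checking before relying on it):

          # pull the KTTD daily-history pages for February 1950
          base <- "http://classic.wunderground.com/history/airport/KTTD/1950/2/%d/DailyHistory.html?HideSpecis=1&format=1"

          feb1950 <- lapply(1:28, function(day) {
            url <- sprintf(base, day)
            tryCatch(read.csv(url, stringsAsFactors = FALSE),
                     error = function(e) NULL)   # skip any day that fails to parse
          })
          feb1950 <- do.call(rbind, feb1950)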

      • The July 1950 hand-written station meteorological summary for Portland, Oregon Airport includes the station coordinates 45 deg 36′ N, 122 deg 36′ W. Those are the coordinates of Portland International Airport, not those of Portland-Troutdale, which is about 15 km farther east.

    • At the very least, put some GD error bars on the data, and the older data should probably have higher error bars, especially if it was adjusted. I am sick of temperatures reported to the nearest 0.01 degree from 80 years ago, especially when they adjusted it by 0.3 degrees.

      • ding ding ding, give the man a prize. BEST does include an uncertainty range that pretty much includes the unadjusted data. There are some uncertainty issues associated with smearing that may not be properly considered, but that is mainly due to newer polar data.

      • @captdallas2 0.8 +/- 0.2

        BEST does include an uncertainty range

        But if they don’t increase, starting as a minimum during the early 70’s and getting larger and larger going back into the past, they are worthless; and if my memory is correct, they don’t change.

        Why should they change? Because the number of surface measurements decreases substantially starting about 1973.
        http://content.science20.com/files/images/SampleSize_1.jpg

        Now this is for the NCDC GSoD dataset; BEST combines many sources, but there are still reductions in station counts in the past.

      • micro, “But if they don’t increase, starting as a minimum during the early 70’s and getting larger and larger going back into the past, they are worthless, and if my memory is correct, they don’t change.”

        The early seventies is not that big a problem once you include the potential cooling from the 40s. You can pick about any trend you like if you ignore the error bars prior to the 1970s. It is dirty data that is of limited utility, such is life.

      • The early seventies is not that big a problem once you include the potential cooling from the 40s. You can pick about any trend you like if you ignore the error bars prior to the 1970s. It is dirty data that is of limited utility, such is life.

        That’s all fine; the data is “lacking”, we all know that. But as the number of samples decreases, the error should increase. If the indicated error doesn’t increase (in a GAT product), I can’t help but think they are bogus.

      • captdallas2 0.8 +/- 0.2,
        “BEST does include an uncertain range that pretty much includes the unadjusted data.”

        I’m not sure this sentence makes sense.
        What do you mean by unadjusted data in this particular case?

      • phi http://www.woodfortrees.org/plot/best-upper/to:2010/plot/best-lower/to:2010

        This isn’t what I remember seeing. My feeling after looking at these bounds is they still seem too small, but at least they grow over time.

      • captdallas2 0.8 +/- 0.2,
        And ?

      • Phi, “captdallas2 0.8 +/- 0.2,
        And ?”

        GISS, HadCRUT and NOAA all fit inside that uncertainty envelope. If you use the raw unadjusted surface temperature data and make your own temperature product, it would hug the high end of the uncertainty range. Basically, arguing over “global” temperature anomaly is a waste of time. Regional is another story.

      • captdallas2 0.8 +/- 0.2,

        What I do not see is what you mean by unadjusted data in this specific case. For a global or regional curve which does not include any adjustments you have to choose only between two methods:

        1. The Goddard method, which simply averages the available absolute temperatures.
        2. The method using complete raw series only.

        To my knowledge, the known results of these two methods are very far from falling within the BEST margins of error.

        http://berkeleyearth.lbl.gov/auto/Regional/TAVG/Figures/global-land-TAVG-Trend.pdf

      • A note about error margins. If you compare these two graphs:

        http://www.woodfortrees.org/plot/best-upper/to:2010/plot/best-lower/to:2010

        http://berkeleyearth.lbl.gov/auto/Regional/TAVG/Figures/global-land-TAVG-Trend.pdf

        You see it is not easy to interpret! And it is even more difficult to calculate. And anyway it corresponds to nothing, because they do not take into account unknown systematic errors.

      • An important difference between GISS and BEST methods is that BEST eschews the absolute temperature and prefers instead to trust the slope of temperature trends between breakpoints. To obtain multi-decade to multi-century records, they must integrate segment slopes backward from present day to obtain the historical absolute temperatures. So far so good, in theory.

        But each slope has an uncertainty range. As you integrate, that uncertainty must accumulate as well. Furthermore, as BEST makes its segments shorter with each additional breakpoint, the uncertainty of each slope must increase, because there is less data available to constrain the uncertainty in the slope.

        I see no indication in BEST results this segment slope uncertainty is percolating backwards in time in their station results.
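
        The bookkeeping behind that concern is easy to sketch. If a long record is rebuilt by integrating per-segment slopes back from the present, and the slope errors are independent, the uncertainty of the reconstructed level grows with every segment crossed. The numbers below are invented purely to show the arithmetic; this is not BEST’s actual error model:

          seg_len  <- rep(20, 5)       # five segments of 20 years each
          slope_se <- rep(0.005, 5)    # standard error of each segment's trend, C/yr

          # each segment contributes slope error times segment length to the level
          level_se_per_seg <- slope_se * seg_len

          # independent errors: variances add as you integrate further back
          level_se_back <- sqrt(cumsum(level_se_per_seg^2))
          round(level_se_back, 3)
          # 0.100 0.141 0.173 0.200 0.224 (C): the reconstructed level is less
          # certain the further back you go, and shorter segments (larger
          # slope_se per segment) make it grow faster still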

      • To have correct long continuous time series is always better than to have fragments only, but it’s better to have fragments that have the correct average properties than to have erroneous long time series.

        Whether it’s better to have a small number of carefully analyzed and individually corrected long time series than a much larger number of automatically processed and fragmentary time series is not obvious a priori. Trying this approach adds to the knowledge, and various tests made on the results add understanding of its virtues and problems.

        That’s what BEST has done. The tests have shown that the method works; whether it’s optimal may still be questionable.

      • phi, “You see it is not easy to interpret! And it is even more difficult to calculate. And anyway it correspond to nothing because they do not take into account unknown on systematic errors.”

        You can download or cut and paste the BEST uncertainty for any data set they have. Then for any period you can determine the mean error. For CONUS, for example, the mean error from 1910 to 2010 is ~+/- 0.2 C using monthly values. Mosher or Zeke could fill you in on what the error margins consider, but basically it is pretty close to the standard deviation of the series. The more stations you add to the average, the less variation in the monthly averages and the tighter the uncertainty range. That assumes that systematic errors are random and, I believe, normally distributed, but statistics ain’t my thing doncha know.
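
        Assuming one of the BEST text series has been read into a data frame with (say) year and uncertainty columns (the actual column names vary by file), the period-mean error described above is a one-liner:

          period <- best[best$year >= 1910 & best$year <= 2010, ]
          mean(period$uncertainty, na.rm = TRUE)   # mean reported uncertainty, 1910-2010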

      • Stephen Rasey,

        “An important difference between GISS and BEST methods is that BEST eschews the absolute temperature and prefers instead to trust the slope of temperature trends between breakpoints.”

        As GISS now uses GHCN, and given the structure of the GHCN data (rather short series) and the adjustment method used, ultimately GISS is not far from BEST on this point.

        “I see no indication in BEST results this segment slope uncertainty is percolating backwards in time in their station results.”

        Indeed; obviously, it is not addressed by BEST. Maybe we can see that by comparing margins over ten years and margins in WFT (monthly values but margins on an annual basis?). Moreover, I believe they justify their method by the lack of trend bias.

        Pekka Pirilä,
        “That’s what BEST has done. The tests have shown that the method works,…”
        How do we know?

        captdallas2 0.8 +/- 0.2,
        “That assumes that systematic errors are random and I believe normally distributed…”
        Systematic errors, precisely, do not have those characteristics.

      • Along with the issues raised by Stephen Rasey here, there is quite another source of systematic error. It is only urban records that are available in many regions of the world to provide what BEST calls “regional expectations” over century-long time scales. What snippets of non-urban data may exist are then scalpeled to conform to such biased expectations. That is the dirty little secret why removing the non-urban data scarcely makes any difference in the linear trend of their “global average.”

      • phi, “Systematic errors, precisely, have not those characters.”

        Just because every new version seems to find more warming doesn’t mean there are systematic errors :)

        Actually, the instrumentation errors are likely random and close to normally distributed. Interpolation (kriging) errors are a little more difficult to nail down. The south pole station, Amundsen-Scott, is so isolated that it is peerless, or I think Mosher calls it a corner problem. So correcting a corner based on “expectations” from unrelated stations is a bit of a questionable choice. BEST, though, seems to agree that the highly skilled and overly trained surface station operators at Amundsen-Scott must be more focused on long-night partying than attending to business, since about 28 of the recorded months were tossed. Also, when GISS added the Antarctic region the southern hemisphere variance changed, indicating some likely issue that might need to be dealt with. Regionally that is an issue, and the guys quoting “unprecedented” anything in the polar regions are blowing smoke. However, “globally” it only makes about a 0.05C impact, which is inside the “global” uncertainty range. Digital thermometers in extremely cold environments would also tend to have a slight warm bias.

        The geniuses should know all that, so the ones constantly picking the least accurate regions as some proof of the need for urgent action might not be as smart as they think, or may be more devious than I think.

      • captdallas2 0.8 +/- 0.2,

        “Just because every new version seems to find more warming doesn’t mean there is systematic errors :)”

        We know that the result of adjustments is always to increase the warming (mostly for stations and always for the regional). Given the number of values this can only be due to systematic errors. What is the source of these errors and what is the right way to treat them: these are the questions.

      • @phi at 3:29 am |
        Given the number of values this can only be due to systematic errors. What is the source of these errors and what is the right way to treat them: these are the questions.

        I don’t have proof, but I think the pieces fit.
        The Zombie station controversy is real. About 45% of today’s stations in the grid are zombies being infilled by regional trends from other stations.

        Which stations likely die off and which live on? Is there a bias such that the stations that die off are in the low-population and declining-population areas of the country? Is there a gradual urbanization in the mix of stations over the past 3 decades?

        Some people will say, “What does it matter? UHI corrections are minor and can have either sign.” I don’t buy that. Urban stations are gradually contaminated over the decades as roads are paved, parking lots built, as energy use per capita rises, as populations increase in urban centers.

        So, mix the two problems. UHI is a much bigger element of temperature trends for urban stations than accepted. And Zombie station growth comes disproportionately from the rural, non-UHI-contaminated group of stations. And Zombie stations are infilled with neighboring living stations, which, as a population, are becoming more urban on the whole.

        It is a fairly easy hypothesis to test. Let’s look at the zombies and their nearest stations: on balance, are the population densities of the living neighbors higher than those of the zombies? Is it changing over time?
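
        A sketch of that test, given a station table with coordinates, a zombie/living flag, and some population-density figure attached to each station (all hypothetical column names; the hard part is building that table, not the comparison):

          # 'stations' assumed to have columns: lat, lon, pop_density, zombie (TRUE/FALSE)
          zombies <- stations[stations$zombie, ]
          living  <- stations[!stations$zombie, ]

          # crude great-circle distance in km, good enough for nearest-neighbour ranking
          gc_dist <- function(lat1, lon1, lat2, lon2) {
            rad <- pi / 180
            6371 * acos(pmin(1, sin(lat1 * rad) * sin(lat2 * rad) +
                                cos(lat1 * rad) * cos(lat2 * rad) * cos((lon1 - lon2) * rad)))
          }

          # population density of each zombie's nearest living neighbour
          nearest_density <- sapply(seq_len(nrow(zombies)), function(i) {
            d <- gc_dist(zombies$lat[i], zombies$lon[i], living$lat, living$lon)
            living$pop_density[which.min(d)]
          })

          # positive differences would mean the infilling neighbours are more urban
          summary(nearest_density - zombies$pop_density)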

      • UHI is a much bigger element of temperature trends for urban stations than accepted.

        All you need to do to see this is to use an IR thermometer and measure a grassy surface and compare it to asphalt; does anyone think a 20-40F difference doesn’t matter?

  162. The problem Zeke is having is assuming, with a touch of sanctimony, that we should all trust any part of the climate community with a blank check. The actual story of the AGW movement, and its infiltrations into government authority (autocracy), is rather obvious to those who aren’t sympathetic to the actual intentions of the movement itself.

    Getting far closer to the mark each and every day;

    http://thefederalist.com/2014/07/07/the-mean-girls-of-global-warming/

    AGW is a political bully-and-thug construction. Are there any links of Zeke denouncing the routine behavior of the climate “science” community? Does he even acknowledge the existence of this behavior, or the agenda that motivates AGW advocacy?

    If parties can’t get specific about the real social and political motivations of the AGW movement then every claim is suspect.

  163. Zeke, thanks a lot for your hard work analyzing the issue and answering questions here. I don’t agree 100% with all of your conclusions, but I greatly respect your efforts.

    JD

    • Langmuir calls pathological science, “the science of things that aren’t so.”

    • A fan of *MORE* discourse

      cwon14 gets grossly delusional  “We should explore all the people involved [in climate-science] and their underlying political views.”

      Climate Etc readers may wish to reflect on the long list of top-ranking scientific luminaries who have prominently appeared on various far-right “enemy lists” (including but not limited to names that include): Albert Einstein, Leo Szilard, Robert Oppenheimer, Andrei Sakharov, Linus Pauling, and (yes!) Richard Feynman.

      Conclusion  Is it any wonder that the overwhelming majority of STEM professionals are adamantly opposed to any-and-all manifestations of anti-scientific conspiracy-theoretic far-right denialism?

      The world *DOESN’T* wonder, eh Climate Etc readers?

  164. Bias and Bad Faith

    There are a number of posts here stating that realists shouldn’t assume bad faith or lying with respect to what I will politely here call mainstream climate scientists. I will agree wholeheartedly that no one should be accused of untruthfulness without solid evidence of that. (Such as Michael Mann’s dishonesty about an Excel file, which has been documented by McIntyre.) Thus, if my concerns are only suspicions, I keep my mouth shut.

    On the other hand, the mainstream’s track record with respect to openness and honesty has been poor, and anyone is justified in having a strong B.S. detector. For instance, when McIntyre requested CRU metadata, he was told that it would only be released to an academic. When Pielke Jr. requested it, the CRU stated that they didn’t have the data. The most reasonable interpretations of what happened (in my view, that the data was intentionally destroyed; others may have different opinions) are very damaging to the CRU. If I were a mainstream scientist, I would be outraged by this episode because accurate data procedures are extremely important, but the mainstream scientists were not. The fact that the CRU grossly mishandled the data is bad enough, but the silence of mainstream scientists was inexcusable.

    Additionally, the laughable but mean-spirited use of the d-word (with the dishonest innuendo they intend to spread) to attack realists, which is a common habit of mainstreamers, is also disgraceful. Mainstreamers are relying on failing parameterized models, yet instead of admitting some of the obvious inadequacies of their work, they double down and use name-calling to distract attention from their inadequate science.

    I agree with the Scottish Sceptic, who has stated that the practices of the mainstreamers are not nearly solid enough and that the mainstreamers are way too thin-skinned. For instance, I am a lawyer and all sorts of people criticize me all of the time. I don’t take it personally and get on with my work. If the work of the mainstreamers is as important as they claim it to be, they need to develop a thick skin and a culture of openness. For example, several years ago, I requested the salaries of new prosecutors in the county in which I live (to compare those salaries to those of beginning police officers to analyze a ballot issue). The person with the information didn’t question my motives and simply said that she would give me the salaries of all of the prosecutors, which was easier for her. If salaries of prosecutors are so easily and readily available, climate data and communications should be equally available. Instead, a substantial number of the mainstreamers act as if they have a divine right to keep their information within their own small circle and that they are above FOIA requests notwithstanding their government funding.

    JD

    • A fan of *MORE* discourse

      JD Ohio, it’s entirely reasonable to appreciate that not every climate-scientist is a saint.

      That’s why it’s *DOUBLY* reasonable to appreciate the efforts of Zeke and Steve and BEST and ISTI to (in Ronald Reagan’s phrase) “trust but verify” the consensus finding of climate-science, that the world *IS* warming.

      Conclusion  Good on yah, Zeke and Steve and BEST and ISTI, for working so persistently, effectively, and heroically to slay the climate-science “conspiracy monster”!

      Question  Do Climate Etc’s resident conspiracy theorists appreciate that, in the minds of almost all STEM professionals (young scientists especially), the “conspiracy monster” has been heroically slain?

      Can *ANY* amount of evidence convince a committed conspiracy-theorist?

      The world wonders!

      • Fan

        You ask a good question;

        ‘Can ANY amount of evidence convince a committed conspiracy theorist?’

        As someone who does not believe in grand conspiracies or hoaxes, nor that the overwhelming majority of climate scientists are either stupid or intend to mislead, I think you ask a question that needs answering

        I think scientists often have blind spots and place too much reliance on dubious data, such as global sea surface temperatures to 1850.

        However, that is an argument over the gathering and analysis of data they believe to be gold standard and not that they are trying to deliberately circumvent the system.

        Consequently I would also be interested in hearing the answer to your question from the various people here who seem to think that Zeke and Mosh in particular, and the overwhelming majority of climate scientists in general, are not acting in good faith
        Tonyb

      • ‘Can ANY amount of evidence convince a committed conspiracy theorist?’
        I’m good with the temperature reconstruction, although, as usual in the climate literature, the development seems to lack a simple mathematical framework and error analysis, and instead relies on complicated and obscure statistical arguments, which leaves me unsure about exactly how accurate the end result is.

        The beast’s primary head is in the modelling arena for me.

      • Tony,

        The very fact that the AGW movement is framed around a start date of “1850” and was and is targeted at “CO2” and “industrialization” is pretty much a good indicator that the climate change belief system is “A CONSPIRACY THEORY” from inception.

        You can track the post-70’s movement growth to the 60’s anti-market hippies (Earth Day/Gaia worship/armchair Marxism) and the oil embargo results. You can track larger class-warfare motives back to J.D. Rockefeller and the dogma of those times. There were subplayers like national security/pro-nuclear advocates in the mix, but now it’s a Greenshirt leftist movement almost exclusively.

        It’s Fanboy and his peers that wear the TinFoil hats on this forum.

      • Cwon14

        I live close to the Met Office in Exeter, which enables me to use their excellent library and archives to research my articles. There are somewhere around 2000 employees here, many of them highly qualified scientists.

        I know or have met half a dozen and dealt with many more. Do I believe this vast cohort of intelligent qualified and dedicated people are all in on some vast conspiracy? Of course not. Do you?

        Whether I agree with all of their research and analysis is another matter of course.

        tonyb

      • FOMD

        I shouldn’t waste my time on a true believer who uses the clownish term “denialist” as an ad hom in the context of unproven models. However, I will respond this time to your statement that:

        “Energy-balance climate-science predicts — and observations verify — the rising of sea-level, and the heating of ocean-water, and the melting of ice-mass.” You claim that any change in the above would be evidence that would modify the views of warmist scientists.

        Here is what NOAA states about the inadequacies of ocean heat content measurements: ” Nonetheless, preliminary processing of Argo data indicates that it is not without problems associated with different calibration and manufacturers of the instruments; a problem common for atmospheric measurements. Moreover, the different results from different analyses (Lyman et al. 2010) suggest that best methods have yet to be found. The analysis by Trenberth and Fasullo (2010) of the total energy budget, which reveals missing energy in recent years because the ocean heat content has not kept up with the excess of incoming radiation at the top of atmosphere, reveals shortcomings in the total observing system. It is now considered most likely that the missing energy lies below the top 700 m of the ocean that has been most analysed, highlighting the need for deeper observations and analysis (Trenberth 2010).” See http://www.oco.noaa.gov/roleofOcean.html

        Also, the UCSD Argo center states that: “The global Argo dataset is not yet long enough to observe global change signals. Seasonal and interannual variability dominate the present 10-year globally-averaged time series. Sparse global sampling during 2004-2005 can lead to substantial differences in statistical analyses of ocean temperature and trend (or steric sea level and its trend, e.g. Leuliette and Miller, 2009)” http://www.argo.ucsd.edu/Uses_of_Argo_data.html

        Thus, your claim that observations verify heating of ocean water is incorrect and you prove my point. You have claimed as fact (rising ocean heat content), something that has not yet even been adequately measured. You fit the profile of alarmists who are impervious to facts and data while at the same time imputing their fallacies to others.

        JD

      • A fan of *MORE* discourse

        JD Ohio proclaims [utterly wrongly]  “Thus, your claim that observations verify heating of ocean water is incorrect.”

        Climate Etc readers may wish to verify for themselves that JD Ohio’s links are peculiar in including no citations more recent than 2009.

        JD Ohio, perhaps you — and Climate Etc readers! — might learn more from an up-to-date ARGO bibliography?

        Try for example (from among hundreds):

        • Lyman, J. M., and G. C. Johnson, 2014: Estimating Global Ocean Heat Content Changes in the Upper 1800 m since 1950 and the Influence of Climatology Choice*, J. Clim., 27(5), 1945-1957, http://dx.doi.org/10.1175/JCLI-D-12-00752.1

        • Piecuch, C. G., and R. M. Ponte, 2014: Mechanisms of Global-Mean Steric Sea Level Change, J. Clim., 27(2), 824-834

        • Abraham, J. P., et al., 2013: A review of global ocean temperature observations: Implications for ocean heat content estimates and climate change, Reviews of Geophysics, 51(3), 450-483

        • Gleckler, P. J., et al., 2012: Human-induced global ocean warming on multidecadal timescales, Nature Clim. Change, 2(7), 524-529,

        Conclusion  Science marches on! Denialism, not so much.

        It has been a pleasure to help expand your appreciation of climate-science, JD Ohio!

      • David Springer

        Please don’t feed the tr.oll!

    • JD,
      thanks for your common sense. As a long-time observer of the fight, I appreciate Zeke’s hard work and explanations. Given that, the past data should be published as measured. Modifications for whatever reason can follow, and then the evidence presented that the changes make sense in the view of the empirical data changer. I was surprised at the level of changes and the continuous history of them. The new US Climate Reference Network should go part way to restoring the balance of measurements vs measurements, then past adjustments.

      Zeke, thanks for your work.
      JD thanks for sensible contribution.
      Scott

      • David Springer

        It seems such a reasonable request to ask for a database of station reports in CSV (comma-separated values), which virtually every spreadsheet and database in the world knows how to read, including the gazillion MS Office installations that come standard on so many personal computers, or the free Apache OpenOffice with essentially the same suite and features as MS Office.

        Never expect the government to do things cheaply and efficiently. It just doesn’t work that way. NASA and NOAA are no exceptions. Unpaid volunteers produce the work you’d expect for the price as well. This is why things like the computer flight control system for the space shuttle are contracted out to the private sector who most certainly does not rely on volunteer coders to get the work done.

    • FOMD: “Can *ANY* amount of evidence convince a committed conspiracy-theorist?”

      This goes for both sides. Is there any evidence that will convince mainstreamers that their models are underperforming and need to be systematically re-evaluated? Is there any evidence that will convince mainstreamers (such as Mann) that the Koch brothers and evil capitalists are not behind criticism of mainstream science? Is there any evidence that will convince mainstreamers that Julian Simon’s work is robust (in this context, that humans can adapt to temperature changes) and needs to be addressed in a meaningful way?

      JD

      • Jd

        That is also a good question. Battle lines have been drawn and the respective troops are dug into their metaphorical trenches.

        What will it take for either ‘side’ to be convinced that the others may have some valid points?

        What evidence or argument would it take for FAN to stop his sniping?

        Tonyb

      • A fan of *MORE* discourse

        JD Ohio wonders “Is there any evidence that will convince mainstreamers that their models are underperforming and need to be systematically re-evaluated?”

        Energy-balance climate-science predicts — and observations verify — the rising of sea-level, and the heating of ocean-water, and the melting of ice-mass.

        So long as these three trends continue — all without decadal-duration pause or obvious limit, as affirmed by multiple independent scientific groups, and solidly based upon fundamental thermodynamical understanding — then for precisely that same duration of time, a strong consensus of scientists will continue to accept the energy-balance climate-change worldview.

        Meanwhile, denialists will believe ever-more-strongly that climate-change science is an ever-more-vast conspiracy.

        It is a pleasure to help resolve your climate-change confusion, JD Ohio!

      • Windchasers

        Is there any evidence that will convince mainstreamers that their models are underperforming and need to be systematically re-evaluated?

        “Systematically re-evaluated”: well, they are, about every time a new model version comes out. It’s normal to identify and compare the errors of various aspects of the climate to those of previous versions.

        But should they be re-evaluated in the sense that they should be re-drawn from scratch? No, I don’t see any evidence for that. The theoretical basis for the models seems pretty firm to me. There’s work to be done, but it’s in moving forward, not back: more comprehensive physics, better parameterizations, smaller gridsize, etc.

        Is there any evidence that will convince mainstreamers (such as Mann) that the Koch brothers and evil capitalists are not behind criticism of mainstream science?

        While I’m sure that the Koch brothers have their criticisms, I’ll gladly acknowledge that they’re not behind the general criticism.

        PS. Don’t know much about Julian Simon wrt climate change. I’m confident that humans will adapt to climate change, just not that it will be cheaper than reducing emissions.

      • Where I struggle is that when you look at the measured max temp alone, with no adjustments, no infilling, no kriging, it has no trend. And min temp goes up and down differently around the globe: no global trend, just regional variation.
        Yet every time the same (?) data is processed based on a model of how temperature should translate from one location to another, you get a huge trend. Look at the continental data here:
        http://sourceforge.net/projects/gsod-rpts/files/Reports/

      • “Meanwhile, denialists will believe ever-more-strongly that climate-change science is an ever-more-vast conspiracy”

        More of an Orthodoxy than a conspiracy, honestly.

      • So please refer to us as ‘schismatics’ rather than deniers, which has obviously uncomfortable connotations. Thanks!

      • Or maybe the Apostasy.

    • Steven Mosher

      JD..
      Bad faith
      Guess who was also sending FOIA to Jones?

      • “Bad faith
        Guess who was also sending FOIA to Jones?”

        Please make your point explicitly. Don’t get it.
        JD

    • Seems ter some of us skeptical serfs that what’s been lacking
      in climate science is that culture of openness you refer to, JDO.

      In Oz adjusting the data and lowering the past temperature trend
      but raising the late 20th century warming trend, rings alarm bells
      … seems, you know, kinda curve fitting the narrative.

      jest a serf on the ground.

  165. pottereaton

    “Observation times have shifted from afternoon to morning at most stations since 1960, as part of an effort by the National Weather Service to improve precipitation measurements.”

    It would have been helpful if they could have recorded both afternoon and morning temperatures in order to provide some continuity to the long-term record. I know they could not have foreseen the arguments we are having over adjustments today, but it still would have been the scientifically correct thing to do, imo.

    • Steven Mosher

      yes, but we can also just use hourly stations.

      you get the same answer, after adjustment.

      That is one thing that tells you the adjustment is required

      • That does not tell you the required adjustment is the correct adjustment, just that the adjustment gave the answer the adjuster expected to get, also known as confirmation bias.

    • Again, if they use anomalies, i.e. the temperature difference from one day to the next, then there was no need for any adjustment at all. All they had to do was split the records: end one and start another.

      Bear in mind that these thermometers recorded the max and min through the 24-hour cycle (if they didn’t then the whole exercise was pointless), so the only day they had any difficulty resolving was the changeover day; just skip that day and all is well.

      All these adjustments are an example of how to make a very simple exercise difficult and controversial.

      • JamesG commented

        Again, if they use anomalies, i.e. the temperature difference from one day to the next, then there was no need for any adjustment at all. All they had to do was split the records: end one and start another.
        Bear in mind that these thermometers recorded the max and min through the 24-hour cycle (if they didn’t then the whole exercise was pointless), so the only day they had any difficulty resolving was the changeover day; just skip that day and all is well.
        All these adjustments are an example of how to make a very simple exercise difficult and controversial.

        When I read in the data record, I store yesterday’s min and max temp, then when I parse the next row I calculate a difference (anomaly) for that specific station based on its previous day’s reading. This difference only has a single day’s decay baked into the value, and large changes, like painting the cover, affect a single (in this case) record. I then average this value for each day for the area I’m reporting on (a minimal sketch of this appears at the end of this comment).

        On TOB adjustments: because I generate a diff on both min and max, and because T avg is just the average of min and max, I don’t think time of observation really makes a difference, as long as it doesn’t change. You can have a true min temp; that has to be time-independent.
        Or the meter is checked on a schedule, and the day-over-day movement of daily min temp under clear skies is all orbital/tilt differences, which drives a change in the length of the day.
        Here’s a picture
        http://wattsupwiththat.files.wordpress.com/2013/05/clip_image022_thumb.jpg?w=864&h=621
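
        A minimal sketch of that first-difference approach, assuming a daily data frame with (hypothetical) columns stn, date, tmin and tmax; note this naive version treats a gap in a station’s record as a one-day step, which a real implementation would have to handle:

          # day-over-day change per station, then average the changes across the area
          daily <- daily[order(daily$stn, daily$date), ]

          d_min <- ave(daily$tmin, daily$stn, FUN = function(x) c(NA, diff(x)))
          d_max <- ave(daily$tmax, daily$stn, FUN = function(x) c(NA, diff(x)))

          # average change across all stations reporting on each date
          area_dmin <- tapply(d_min, daily$date, mean, na.rm = TRUE)
          area_dmax <- tapply(d_max, daily$date, mean, na.rm = TRUE)

          # a cumulative sum turns the averaged differences back into a series
          plot(cumsum(ifelse(is.na(area_dmin), 0, area_dmin)), type = "l")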

      • Mayor of Venus

        Indeed, I don’t see how “time of observation” can make any difference in recording daily maximum and minimum temperatures. In 1959-60, my senior year, I made these recordings at the Pomona College observatory, usually in the late afternoon, well after the maximum of the day. The 2 thermometers in the white box record the low and high of the day, but not the times they happen (usually just before dawn and early to mid afternoon, respectively). If I forgot to record the measurements until the following morning, the high thermometer would still have recorded the high of the previous day. But you wouldn’t know if the low thermometer reading applied to that morning or the previous morning, so no measurement should be recorded for either day. I don’t see how the “time of observation” could affect the actual data from these types of thermometers.

      • David Springer

        Time of observation does, on average, change the average temperature.

        If you reset close to the time when the maximum or minimum is reached and the next day isn’t as extreme, then you get the extreme recorded two days in a row. Afternoon resets get more double extreme highs and mornings more double extreme lows.

        So say there’s a more or less concerted shift from afternoons to mornings because 10am is very unlikely to be a high or low daily extreme. Which is what happened. So to normalize afternoon readings with morning readings you would subtract something from the afternoon numbers. To know how much to subtract is a different issue. It’s pretty clear there’s a warm bias taking afternoon readings vs. morning.

        One way to estimate how much to subtract is to have hourly data, select a certain hour each day to take the previous 24-hour min/max from, and see how much they differ (see the sketch at the end of this comment). If you have enough data from enough different places to sift through you can get a good idea of how to adjust.

        The thing of it is that the vast majority of land instrumentation is continental US and Europe which represents only a small fraction of the earth’s surface and happens to be areas with extremely high land use change due to industrialization and agriculture. In order to get a measure of global average temperature requires global coverage and, unfortunately, we only have that since 1979. No one more than me desires a better global temperature record to set things straight but it just doesn’t exist.
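
        A sketch of that estimate, assuming an hourly data frame for one station with (hypothetical) POSIXct time and numeric temp columns and no gaps:

          # 'reset' a virtual min/max thermometer at different hours and compare
          # the resulting means of daily (tmin + tmax) / 2
          tob_mean <- function(hourly, reset_hour) {
            # group observations into 24-hour windows that break at reset_hour
            # (the exact day label does not matter for the comparison)
            obs_day <- as.Date(hourly$time - reset_hour * 3600)
            tmin <- tapply(hourly$temp, obs_day, min)
            tmax <- tapply(hourly$temp, obs_day, max)
            mean((tmin + tmax) / 2)
          }

          # afternoon (5 pm) reset versus morning (7 am) reset; expected to be
          # positive, since afternoon resets double-count warm afternoons
          tob_mean(hourly, 17) - tob_mean(hourly, 7)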

  166. Schrodinger's Cat

    I was struck by the opening remarks that critical analysis should start out from a position of assuming good faith and with an understanding of what exactly has been done.

    That is really the problem, since after Climategate it is very difficult to trust any of this. It is asking too much to assume good faith. It is also counter to scientific intuition to keep changing raw data, including raw data that is decades old.

    I would much rather see the raw data with all of its problems. This could be annotated to give details of changes or other problems or breaks in the data. Then we could see the best “improved” data together with justifications and explanations.

    Seeing both trends together would be helpful. It would preserve the original data. It would show why the official trend is different from the original and give the explanation. What could be more transparent?

    It may also deter the problem of confirmation bias, aka the heavy thumb on the scales.

    If the alarmist community finds this suggestion unacceptable, why would that be the case?

    • Sure, sure, let’s just apply a few parameters to the data files. What could go wrong? Let’s look at, e.g., the HARRY_READ_ME file and play a little game called “how many times can you spot the word ‘parameter’?”

      “… ..which is good news! Not brilliant because the data should be identical.. but good because the correlations are so high! This could be a result of my mis-setting of the parameters on Tim’s programs (although I have followed his recommendations wherever possible), or it could be a result of Tim using the Beowulf 1 cluster for the f90 work. Beowulf 1 is now integrated in to the latest Beowulf cluster so it may not be practical to test that theory…”

      “Introduced suitable conditionals to ensure that 61-90 anomalies and gridded binaries are automatically produced if the relevant secondary parameters are requested…”

      “Then, a big problem. Lots of stars (‘*********’) in the PET gridded absolutes. Wrote sidebyside.m to display the five input parameters; VAP looks like being the culprit, with unfeasibly large values…”

      “On a parallel track (this would really have been better as a blog), Tim O has found that the binary grids of primary vars (used in synthetic production of secondary parameters) should be produced with ‘binfac’ set to 10 for TMP and DTR. This may explain the poor performance and coverage of VAP in particular…”

      “So, I went through all the IDL routines. I added an integer-to-float conversion on all binary reads, and generally spruced things up. Also went through the parameters one by one and fixed (hopefully)their scaling factors at each stage. What a minefield!”

      “…Finally I’m able to get a run of all ten parameters. The results, compared to 2.10 with sidebyside3col.m,are pretty good on the whole. Not really happy with FRS (range OK but mysterious banding in Southern Hemisphere), or PET…”

      “PET precursor parameters: ranges…

      “Gridding primary parameters…

      “The 2.5-degree PRE/WET path is now at x10 all the way to the final gridding. The 0.5-degree PRE/WET path is at x10 until the production of the synthetic WET, at which point it has to be x1 to line up with the pre-1990 output from the gridder (the gridder outputs .glo files as x1 only, we haven’t used the ‘actfac’ parameter yet and we’re not going to start!!)…”

      “Still not there. One issue is that for some reason I didn’t give the merg runfiles individual names for each parameter! So I might mod the update program to do that. Then re-run all updates…”

      “So.. I guess I will use tmp.0903081416.dtb, pre.0903051740.dtb, and the earliest available from the other parameters. In other words…”

      “I am seriously close to giving up, again. The history of this is so complex that I can’t get far enough into it before by head hurts and I have to stop. Each parameter has a tortuous history of manual and semi-automated interventions that I simply cannot just go back to early versions and run the update prog. I could be throwing away all kinds of corrections – to lat/lons, to WMOs (yes!), and more… ”

      “Well, the merged database is written principally from dbm*, with dbu* chipping in ‘new’ stations. I guess that new stations should be added to the wmo reference file? They are pan-parameter (well the MCDW ones are) but I have an eerie feeling that I won’t experience joy when headers are compared between parameters…”

      “Wrote metacmp.for. It accepts a list of parameter databases (by default, latest.versions.dat) and compares headers when WMO codes match. If all WMO matches amongst the databases share common metadata (lat, lon, alt, name, country) then the successful header is written to a file. If, however, any one of the WMO matches fails on any metadata – even slightly! – the gaggle of disjointed headers is written to a second file. I know that leeway should be given, particularly with lats & lons, but as a first stab I just need to know how bad things are. Well, I got that…”

      “METACMP – compare parameter database metadata…”

      [READ ME for Harry’s work on the CRU TS2.1/3.0 datasets, 2006-2009!]

      • Steven Mosher

        Harry readme is about a data set that nobody uses for climate studies

      • “This in itself has become a major scandal, not least Dr Jones’s refusal to release the basic data from which the CRU derives its hugely influential temperature record, which culminated last summer in his startling claim that much of the data from all over the world had simply got “lost”. Most incriminating of all are the emails in which scientists are advised to delete large chunks of data…” ~The Telegraph

    • @Cat,
      Here you go, no changes, no infilling.
      10 degree Lat Bands.
      http://sourceforge.net/projects/gsod-rpts/files/Reports/10DegreeLatBands.zip/download
      Continents
      http://sourceforge.net/projects/gsod-rpts/files/Reports/ContinentsReports.zip/download
      10 x 10 Box (as long as there was a single station)
      http://sourceforge.net/projects/gsod-rpts/files/Reports/10x10LatLonBox.zip/download

      And look at both the area report file and the ST_LST report for the same area.

    • Steven Mosher

      look at the top chart.

  167. “How did the good politics of social justice become chained to the bad science of global warming?” – Freeman Dyson

    • The narrative was always more important than the science; that’s why it’s a high-pressure sales pitch to grab the authority “to do something” when there is less than nothing in actual empirical or reproducible predictive models to support the actions.

      I’ve seen very little evidence that “social justice” leads to any better political results either. Usually it’s the same dish rag rationalization to seize liberty in the name of failed central planning.

    • A fan of *MORE* discourse

      cwon14 suffers amnesia  “I’ve seen very little evidence that ‘social justice’ leads to any better political results”

      Please keep in mind, wagathon and cwon14, that the past isn’t over. It isn’t even past.

      Not everyone — scientist or voter — is afflicted by a willfully ignorant memory, eh Climate Etc readers?

      • I have not heard of any instances of someone managing to escape America for a better life elsewhere. And yet, the Left continues to indulge the fiction that Americanism is the problem in the world.

      • A fan of *MORE* discourse

        FOMD’s colleagues in IBM’s North American Research Divisions are migrating en masse to European and Chinese laboratories … from less-socialist to more-socialist economies.

        Why is this mass-migration of STEM talent underway, wagathon?

      • A fan of *MORE* discourse commented

        FOMD’s colleagues in IBM’s North American Research Divisions are migrating en masse to European and Chinese laboratories … from less-socialist to more-socialist economies.

        Well, it could be, at least in part, that they will collect about the same pay, with no US taxes, and can live in an exotic foreign country, probably with a lower cost of living; IBM pays lower taxes and has to deal with fewer regulations.
        This is the climate the US government has enacted for US business: drive them overseas. It’s a win for IBM, their employees, the US Government, and the far left, with only the rest of the US getting the shaft.

      • Alexej Buergin

        Wagathon: How about Sidney Bechet? Donna Leon? (Or do you mean “escape” literally?)

      • Rather than a relocation I meant by escape the idea of a dangerous journey, perhaps on foot or a perilous crossing of angry waters under cover of darkness, risking capture and death or life in the gulag, all for a better life.

      • A fan of *MORE* discourse

        FOMD commented  “FOMD’s colleagues in IBM’s North American Research Divisions are migrating en masse to European and Chinese laboratories … from less-socialist to more-socialist economies. Why is this?”

        Mi Cro postulated [wrongly]  “They will collect about the same pay, with no US taxes, and can live in an exotic foreign country probably with a lower cost of living.”

        You know little of regulated-economy nations like Switzerland or Germany, eh Mi Cro?

        `Cuz they are among the least “exotic” nations in the world!


      • David Springer

        Crank alert!

  168. We have noted this divergence before.

    http://www.woodfortrees.org/plot/crutem4vgl/from:1979/plot/uah

    The question you need to ask is why the difference. The answer is that it depends on where latent heat shows up in the atmosphere.

    Energy at 2m is a mix of radiant and latent heat and the mix varies with water availability. The entire surface record seems misleading and pointless – as a climate metric – without this last adjustment.

  169. peter azlac

    Steven Mosher | July 8, 2014 at 1:14 pm | Says

    “For people who do data mining to cherry pick stations, they need to provide a field test showing that the criteria they used are actually true and effective.”

    You ignore the point I made that if you want to show temperature trend from 1850 or 1880 to the present you should use the data that is available from that period and not adjust it with data that only became available during the reference period when it has been assumed that CO2 is responsible for the increase in temperature, at least up to 1998. That is not cherry picking; it is real science that you use all the available real data with the minimum of adjustment based on measured errors at the sites involved and not from adjacent sites or other means. The papers mentioned by Ron C from Pielke Sr are among those that you cannot reasonably assume a uniform climatology across grid areas or even within a few kilometers as the Armagh study showed.

    • Steven Mosher

      “You ignore the point I made that if you want to show temperature trend from 1850 or 1880 to the present you should use the data that is available from that period and not adjust it with data that only became available during the reference period when it has been assumed that CO2 is responsible for the increase in temperature, at least up to 1998.”

      if you told me the moon was made of green cheese I would ignore you as well

      consider yourself ignored again

    • Steven Mosher

      oh wait

      “The papers mentioned by Ron C from Pielke Sr are among those that you cannot reasonably assume a uniform climatology across grid areas or even within a few kilometers as the Armagh study showed.”

      sadly wrong.
      1) yes you can

      • David Springer

        “sadly wrong.”

        Self-reflection? Don't be so hard on yourself. You're more like laughably irrelevant, hopelessly inadequate, and a raging dikhed. Trust me, I'm a game programmer and you're my customer. The only thing that's missing is acne and no girlfriend. ROFL

  170. Easily observable data prove that AGW model-makers failed nature's reality-test. So what's next: indulge the notion that Western government scientists – from the comfort and security of their ivory towers – are competent to redirect dollars flowing from evil American enterprise to a far more noble cause of saving the world from being Fukushima'd by capitalism?

  171. This local story – of children fleeing life-threatening conditions at our own borders who desperately need help – shows the result of false government claims that CO2 is a dangerous pollutant, rather than part of the natural cycle of life that connects plants and animals:

    http://evolvingelder.wordpress.com/2014/07/08/united-in-hope-and-charity/

  172. To the keepers of the temperature data:
    Please:
    1) keep a copy of, and supply on request to any who ask, the original data as recorded
    2) meticulously record, and supply on request to any who ask, the reason for, and the method used to, adjust the data
    3) prominently mark the adjusted data as “adjusted”
    4) include revised uncertainty information that is the original uncertainty plus the amount of all adjustments

    Thanks

    • Steven Mosher

      Keepers of the data already do this.

      1. the data is there; do an HTTP GET request (see the sketch below)
      2. the adjustments are algorithmic. the code documents what and why
      3. adjusted data is named “adjusted”
      4. typically provided
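
      As an illustration of step 1, a minimal sketch of fetching one raw data file, assuming the NCDC archive layout (the FTP path is the one given later in this thread; the file name is a guess and should be checked against the directory listing first):

        import urllib.request

        # Base path quoted elsewhere in this thread; the file name below is hypothetical.
        base = "ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2.5/"
        fname = "ushcn.tavg.latest.raw.tar.gz"

        urllib.request.urlretrieve(base + fname, fname)  # plain GET of the raw archive
        print("downloaded", fname)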

      • @Steven Mosher at 7:04 pm
        2. the adjustments are algorithmic. the code documents what and why

        You are missing the point. The code no doubt documents many types of adjustments and criteria. What is sought is a flag in the data that says not "adjusted", but an indicator of the specific rule(s) that triggered the adjustment at that point. Degree of confidence would help, too.

      • I don’t believe that gets done, but I do believe you can turn off various adjustments to see their effect.

        It’s not entirely a linear process (generally when you have threshold triggers, the resulting filter is nonlinear), but in any case I don’t believe you can really break up the result into separate, non-interacting adjustments.

      • @Carrick at 9:18 pm |
        It’s not entirely a linear process (generally when you have threshold triggers, the resulting filter is nonlinear), but in any case I don’t believe you can really break up the result into separate, non-interacting adjustments.

        That is an important point. It is non-linear. Order of application probably makes a difference, too.

        The iterative process is a source of concern. When Berkeley gives a station a breakpoint, it is implied they have found a defect in the station record and the adjustment improves it. Logically, that adjusted station, which is improved, must be used again to recalculate the regional trends, which will/may discover other breakpoints in stations. These get adjusted and the regional trends get recalculated again. So in addition to a data flag that identifies the rules used to determine the breakpoint, you need a timestamp and/or sequence number, with a related table of which station versions were in the regional grid that the rules acted upon.

        Maybe they don’t cycle around. Maybe the regional trend only uses the data before breakpoints are applied. But that doesn’t seem consistent with the idea of “improving” the data quality at each station and removing contamination like UHI, TOBS, and sawtooth drift-correction patterns.
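
        A sketch of the kind of provenance record being asked for here (a hypothetical schema for illustration only, not anything the data providers currently publish in this form):

          from dataclasses import dataclass, field
          from typing import List

          # Hypothetical log entry for one breakpoint decision, so an iterative
          # homogenization run could be reconstructed after the fact.
          @dataclass
          class AdjustmentEvent:
              station_id: str
              break_year: int
              rule: str                   # e.g. "documented move", "empirical break"
              offset_c: float             # size of the step applied, in degrees C
              pass_number: int            # which iteration of the algorithm produced it
              neighbours_used: List[str] = field(default_factory=list)  # station versions compared against

          # The neighbour ID and offset below are made up for illustration.
          log = [AdjustmentEvent("USH00331592", 1985, "empirical break", -0.4, 1, ["USH00331783_v1"])]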

      • Stephen Rasey—on order of the operations mattering. That’s another good point, thanks.

      • Steven Mosher

        The iterative process is a source of concern. When Berkeley gives a station a breakpoint, it is implied they have found a defect in the station record and the adjustment improves it.

        no.
        Here is how breakpoints are done.

        1. Let's say they move a station XRS123 from the roof to the ground. They FORGET or DECIDE to keep the ID the same, when in fact the station is different. It's like a manufacturer changing a part and not changing the part number. So we split those, NOT because the station temperature data is "defective" but because they made a mistake in the metadata: they changed the station but kept the station ID the same.

        2. Let's say a station changed its instrument. Same thing. If you put a different engine in a car you change the ID for that product. But they didn't do this. They changed instruments and kept the IDs the same. So we don't judge that the temperature data is "defective"; it's not. What we do is correctly change the station ID to a new ID.

        3. Empirical breakpoints. These are the slices where one dataset differs from its neighbors. We don't judge it to be DEFECTIVE. We change its ID and say "the station looks like it changed." For example this can happen because of an undocumented station move, or an undocumented TOBS change, or an undocumented instrument change, or building a parking lot next to the site.
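
        A minimal sketch of that ID-splitting idea (hypothetical data structures for illustration, not the Berkeley Earth code):

          from dataclasses import dataclass
          from typing import List, Tuple

          @dataclass
          class Segment:
              station_id: str
              years: List[int]
              temps: List[float]          # observed values are left untouched

          def split_at_breakpoint(seg: Segment, break_index: int) -> Tuple[Segment, Segment]:
              """Split one record into two segments with distinct IDs at the break."""
              before = Segment(seg.station_id + "_a", seg.years[:break_index], seg.temps[:break_index])
              after = Segment(seg.station_id + "_b", seg.years[break_index:], seg.temps[break_index:])
              return before, after

          # Example: a documented move in 1985 turns one record into two station IDs.
          rec = Segment("XRS123", list(range(1950, 2000)), [15.0] * 50)
          pre_move, post_move = split_at_breakpoint(rec, rec.years.index(1985))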

      • David Springer

        The most interesting question in this thread was ignored by both Mosher and Ezekiel. BEST 1950 "raw" data is cooled down from the actual raw data the observer wrote down on his report sheet. A single station was checked at random for faithfulness to the temp record and here we find the data tampered with from the original.

        DocMartyn | July 8, 2014 at 6:47 pm |
        Zeke, can you do me a favor?
        I went to Best and looked up Portland, Oregon
        Berkeley ID#: 174154
        % Primary Name: PORTLAND PORTLAND-TROUTDALE A
        % Record Type: TAVG
        % Country: United States
        % State: OR
        % Latitude: 45.55412 +/- 0.02088
        % Longitude: -122.39996 +/- 0.01671

        http://berkeleyearth.lbl.gov/auto/Stations/TAVG/Text/174154-TAVG-Data.txt

        Then looked at the same station’s written records, for 1950.

        http://www.ncdc.noaa.gov/IPS/lcd/lcd.html?_page=1&state=OR&stationID=24229&_target2=Next+%3E

        The numbers for monthly average in the official record (in °F) do not match the Berkeley Earth database after converting with (F - 32) * 5/9.

        Am I doing something very stupid here?

        I did a random check on the February 1950 tAvg comparing BEST raw and NCDC. I found BEST mysteriously cooler than NCDC by 0.7F.

        What’s up with that BEST dudes? Cat got your tongues?
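
        (For anyone repeating that check, a minimal sketch of the conversion step being described; the example value is made up, not taken from either record.)

          def f_to_c(temp_f: float) -> float:
              """Standard Fahrenheit-to-Celsius conversion: (F - 32) * 5/9."""
              return (temp_f - 32.0) * 5.0 / 9.0

          ncdc_monthly_mean_f = 45.0                    # hypothetical value read from the scanned sheet
          print(round(f_to_c(ncdc_monthly_mean_f), 2))  # compare against the BEST data file (in C)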

      • @Steven Mosher at 1:52 pm
        Looks to me that you are finding the Station ID "defective" and are changing it. Regardless, you are talking around the issue once again.

        In the case of empirical breakpoints, "These are the slices where one dataset differs from its neighbors." So, when you institute a breakpoint and split (rename, invent, the word doesn't matter) its station ID, then what was one station record segment is now two. These go back into the pool.

        I presume these segments are now used in the iterative process to find more empirical breakpoints. THAT’s the issue. Recursion in the breakpoint routine.

  173. Zeke Hausfather | July 7, 2014 at 12:42 pm | said

    Hi Bob,

    I dug into the MMTS issue in much more detail a few years back here: http://rankexploits.com/musings/2010/a-cooling-bias-due-to-mmts/

    In the referenced figure (http://rankexploits.com/musings/wp-content/uploads/2010/04/Picture-233.png) there is a clear break happening about 1985. Before 1985, CRS and MMTS raw temps agree well; after that they diverge sharply. Why should that happen?

    • Steven Mosher

      you misunderstand the chart.
      the changeover happens and they diverge

      They agree before because they are all CRS..

      • Wrong, the prior chart shows MMTS back to 1965. And that’s not all.

      • Philip, read Zeke’s article again:

        http://rankexploits.com/musings/2010/a-cooling-bias-due-to-mmts/

        See in particular Doesken 2005 “The National Weather Service MMTS (Maximum-Minimum Temperature System) — 20 years after”, which Zeke does quote:

        During the mid 1980s, the National Weather Service began deploying electronic temperature measurement devices as a part of their Cooperative Network. The introduction of this new measurement system known as the MMTS (Maximum-Minimum Temperature System) represented the single largest change in how temperatures were measured and reported since the Cooperative Network was established in the 1800s. Early comparisons of MMTS readings with temperature measurements from the traditional liquid-in-glass thermometers mounted in Cotton Region shelters showed small but significant differences. During the first decade, several studies were conducted and published results showed that maximum temperatures from the MMTS were typically cooler and minimum temperatures warmer compared to traditional readings. This was a very important finding affecting climate data continuity and the monitoring of local, regional and national temperature trends.

        The stations are the same name but the switchover occurs during the 1980s. This produces the shift observed between the stations that remained CRS and the newly converted ones.

        Also see Quayle 1991 for a discussion of the adjustments made to correct for the bias.

      • Thanks Carrick for your added ref and comment. I'm afraid you fail to grasp the flaw shown by the chart http://rankexploits.com/musings/wp-content/uploads/2010/04/Picture-233.png . First, MMTS sensors were around prior to 1985, shown in that chart as having good agreement with LIG sensors. Beginning in 1985 the chart shows a temporal drift between the sensors. So, either the sensors changed over time or Zeke's analysis has an unexplained growth factor. Which is it?

        Now, having asked the questions several times and seen the responses given, I conclude that I’m not really going to get an explanation, but thanks for trying.

      • Philip Lee, again it would help if you actually read Zeke’s blog post instead of trying to interpret everything from the figure.

        The figure is misleading.

        He is showing stations that were converted into MMTS in the 1980s from CRS against stations that were not converted over the period displayed.

        So when you say

        So, either the sensors changed over time or Zeke’s analysis has an unexplained growth factor. Which is it?

        The answer is, the sensors of the stations in the series marked as MMTS changed over time—in the 1980s when the stations were upgraded to MMTS from CRS.

        This of course is exactly the explanation that Steven Mosher gave above.

      • Also, the drift seen in temperature is the cooling bias that Zeke was discussing in the blog post that you obviously haven’t read.

      • Steven Mosher

        hey Phillip?
        get it yet?

      • I read the article in the first place, where I noted "the trend for max temps have an average cooling bias of 0.25 C +/- 0.54 C per decade" and others. I suppose now you might suggest that not only have I misunderstood a chart but also the plain language of his text, that the bias of sensor differences has a temporal drift.

        As for "They agree before because they are all CRS", that is contradicted in several places, and you've passed beyond the point of helpfully providing information into spreading an advocacy position. The charts at issue are introduced by "If we calculate trends for all 81 gridcells that have at least one MMTS and one CRS station available, and weight each gridcell by its relative size, we get the following raw mean temperature trends:" and show MMTS stations back in 1965. I suggest you might actually try to understand a topic before you claim I don't understand.

  174. Kip Hansen

    Can someone correct me if I’m wrong to think that the combined magnitude of the adjustments detailed above approximately equal, in sign and magnitude, the entire stated effect?

    In the medical world, a result that can only be found by such a method would be considered null or very very suspect.

    When I personally investigated the national met station in Santo Domingo, Dominican Republic, it was explained to me that in the days of thermometer reading, that the greatest variable was the “height of the observer”. Short men read the thermometers too high and tall men too low, giving what the Director of the met station called a known error range of “plus or minus 1 degree”. There was a concrete block supplied for the short men, so they would be viewing the thermometers at the correct eye-level, but “el orgullo” (pride) prevented the short men from using it.

    We may be able to somehow time travel and adjust recorded temperatures for "known biases" [why we might use a device with a known 0.5ºC error is beyond me] — but I doubt that we will be able to do it for the < 1ºC range that is found in the records themselves to any degree of sound scientific accuracy. My personal opinion is that we are "fooling ourselves" with our ability to crunch numbers — thinking that ability to do it validates the results.

    I would rate "Average Earth Land-Sea Surface Temperature" (even on any given day, no less a century long time series) a serious "We don't know to any degree of usable accuracy."

    • “why we might use a device with a know 0.5ºC error is beyond me”

      Interesting question. Given that the temp of many (most?) non-tropical places on earth ranges over about 100 degrees F over the course of a year, and 20 to 40 degrees F over the course of a day, I assume (I am not a climate scientist) was hard to anticipate that we’d ever be concerned about 1 deg F error in the daily readings and the consequent uncertainties. (And I understand that average temps need to change by more than about 4 deg F before it matters much?)

      • “I assume IT was hard to anticipate…”

      • That’s ok there aren’t that many recordings outside the US anyway. Notwithstanding that 70% of the global temp is made up of sea temperatures which, prior to 2003, were largely based on throwing a bucket over the side of the boat.

        I once said on Pielke Snr’s blog that the only useful place to measure temperature is in the Arctic, since that is where most manmade warming is expected, where there is reasonably good coverage/history and where there is no clutter from UHI. He thought it was an excellent idea. And that was that!

    • Anyone competent at reading temps from a thermometer, if they were tall, would stoop slightly, and if short, adjust the temp up the 1/4 to 1/2 degree that they would know based on at least once getting on the concrete block. It's not rocket science; it's just what any decent technician would do.

      • Should have said adjust the temp down. I used to read Stevenson screen temps when I worked in the arctic and the height was not an issue for anyone over 5 feet tall IMO, based on recollection.

      • Kip Hansen

        Reply to Scott Mc ==> The point of relating my story about the Met center in the DR was that in many places, plus of minus 1ºC was considered perfectly acceptable by the actual staff recording the temps. (The Director would have preferred more accuracy.) Who would have ever thought that someone would be “averaging” all those temperatures (for the whole world) and squabbling about 10ths or even 100ths of a degree? You see, the techs in the DR just didn’t think it was even important enough to step up on the block (short guys) or bend all the way over (tall guys). They just glanced in and wrote down the temps — 85 or 86? “Hey, mas o menos… no importa….” There was no practical difference between 85 or 86 — no practical difference between taking the temp at 1200 hrs or 1400 hrs — it’s mid-day, etc. My gut feeling is that, except at scientific research stations, this was how temps were really done.

        I’m not knocking dedicated co-op guys but I doubt they lost sleep over whether their thermometers or recording devices had been re-calibrated recently, whether they were accurate to even a single degree….they just wrote what they saw.

      • Kip Hansen

        Correction: “…that in many places, plus or minus 1ºC was considered perfectly acceptable…”

    • @Kip Hansen at 5:53 pm
      Can someone correct me if I’m wrong to think that the combined magnitude of the adjustments detailed above approximately equal, in sign and magnitude, the entire stated effect?

      That’s the rumor. With an average of 10 breakpoints per station, one has to ask the question.

      Is there a file that shows the adjustments for each station by time and magnitude?

      Is there a file that shows the breakpoint mean offsets and year by station and segment ID?

      I would love to see a plot of number of station segments > (10, 15, 20, 25, 30, 35, 40) years in length vs year. Better if done by regions.

  175. In his article Zeke Hausfather says: “MMTS sensors tend to read maximum daily temperatures about 0.5 C colder than LiG thermometers at the same location.”

    Several obvious questions arise: 1) What would be the correct maximum temperature at a site having both temperature devices? 2) What does "tend" mean here? Measurements over time for the same two devices, or measurements over different pairs of devices? 3) Does the word "tend" mean that device differences are statistical with mean 0.5 C? If so, what is the standard deviation? 4) If statistical, have the variances been added to the error budget in temperature estimates?

  176. The Twitteratti link is killing it–e.g., “…we can solve our problems by being more open-minded and creative–and scrutinizing all our assumptions.” ~John Horgan

  177. Does anyone have a good program that can digitize numbers in columns from pdf’s of scanned documents?
    I wish to check the ‘raw data’ that BEST have used against the official records;

    http://www.ncdc.noaa.gov/IPS/lcd/lcd.html;jsessionid=5076668B70515AFE76DDE1BC3C67EF0A

    I wouldn’t like to do a whole station record from 1950-2013 by eye and hand if there is a quick program that will do it.
    I like the idea of doing Portland Oregon as my initial as it is very long (as the actress said to the Bishop).

    • You can copy from the PDF,
      Paste into Excel.
      Data > Text to Columns > Delimited > comma as the delimiter.

    • Convert PDFs to Excel:

      Fortunately, Acrobat 9.1 offers a couple of different ways to export to Excel.

      1. Select table and open in Excel
      This allows you to select a portion of a page and open it in Excel.

      2. Export as Tables in Excel
      This method uses some artificial intelligence to convert multiple page PDF documents to multiple worksheets in an XML-based spreadsheet file. It works best on files which were converted directly from Excel to PDF.

    • David Springer

      Good luck with that. No one who answered realized that a pdf of a scanned document needs OCR to begin the task. Google it and see what others have come up with.
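
      For what it's worth, a minimal OCR sketch along those lines, assuming the Tesseract engine plus the pdf2image and pytesseract Python packages are installed (the file name is hypothetical, and the output will still need manual checking against the scans):

        from pdf2image import convert_from_path
        import pytesseract

        # Render each scanned page to an image, then OCR it to plain text.
        pages = convert_from_path("portland_1950.pdf", dpi=300)   # hypothetical file name
        for i, page in enumerate(pages, start=1):
            text = pytesseract.image_to_string(page)
            with open("page_%03d.txt" % i, "w") as out:
                out.write(text)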

      • David Springer

        You'd think NCDC kept the original numbers after transcription. They had to have something to feed the correct-and-adjust pipeline.

      • The spreadsheet comes out okay but it doesn’t post correctly on a blog –e.g., for the first block, instead of being a single column, on the spreadsheet it is a single row. The second block is a second row, &etc.

      • Interestingly, e.g., looking at the "AVERAGE TEMPERATURE (°F) 2013 YOUNGSTOWN/WARREN (KYNG)," showing monthly temperatures from 1984 through 2013, the average annual temperatures for 1984 and 2013 were 49.1°F and 49.0°F respectively, and the average annual temperature for all 30 years was the same as for 1984: 49.1°F (rounded up from 49.09167). The highest annual temperature was for 1998.

  178. From Zeke’s “Pairwise homogenization…” section in the head post.
    With any automated homogenization approach, it is critically important that the algorithm be tested with synthetic data with various types of biases introduced (step changes, trend inhomogenities, sawtooth patterns, etc.), to ensure that the algorithm will identically deal with biases in both directions and not create any new systemic biases when correcting inhomogenities in the record

    So far (at 823 comments) this is the only mention of "sawtooth". There have been several posts on WUWT challenging whether the sawtooth is worth a breakpoint. The sawtooth might be a boundary case which identifies instrument drift with a quick recalibration event that is essential to keep.

    • In any kind of sawtooth-shaped wave of a temperature record subject to periodic or episodic maintenance or change, e.g. painting a Stephenson screen, the most accurate measurements are those immediately following the change. Following that, there is a gradual drift in the temperature until the following maintenance.

    • Since the Berkeley Earth “scalpel” method would slice these into separate records at the time of the discontinuities caused by the maintenance, it throws away the trend correction information obtained at the time when the episodic maintenance removes the instrumental drift from the record.

    • As a result, the scalpel method “bakes in” the gradual drift that occurs in between the corrections

    from WUWT, “Problems With the Scalpel Method”, Willis Eschenbach, June 28, 2014.

    The issue is: how does Berkeley determine when a sharp change in temperature is a recalibration that should be kept to counteract drift, and when it is an instrument change that suggests a new segment?

    Zeke made seven replies to this thread, one with cases he says are sawtooth cases “correctly adjusted”.

    The Savannah example shows some sawtooth patterns in the neighbor difference series, but they are homogenized in such a way that both the gradual trend and the sharp correction are removed.

    Really? It is not obvious. Indeed, the rapid transitions seem to be creating breakpoints rather than being retained as necessary recalibrations. (See 1900, 1930, 1977, 2003.)
    http://berkeleyearth.lbl.gov/stations/169993
    SAVANNAH/MUNICIPAL, GA.
    2 moves, 8 other breaks, 3 longest segments since 1960: (18, 17, 10) year,

    Zeke, I suggest the sawtooth case (drift, recalibration event) be a significant section in your Part III.
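
    A toy simulation makes the concern concrete (this is only an illustration of the argument, not the Berkeley Earth algorithm): a station with zero true trend that drifts warm and is recalibrated every ten years has a near-zero whole-record trend, but if the record is sliced at each recalibration and the segment trends are combined, the drift survives.

      import numpy as np

      years = np.arange(1900, 2000)
      drift = 0.05 * ((years - years[0]) % 10)   # sawtooth: +0.05 C/yr drift, reset every 10 years
      observed = 10.0 + drift                    # true climate is flat at 10 C

      whole_trend = np.polyfit(years, observed, 1)[0]   # recalibrations mostly cancel the drift

      segment_trends = []                        # "scalpel"-style: slice at each recalibration
      for start in range(0, len(years), 10):
          y, t = years[start:start + 10], observed[start:start + 10]
          segment_trends.append(np.polyfit(y, t, 1)[0])

      print("whole-record trend : %+.3f C/yr" % whole_trend)
      print("mean segment trend : %+.3f C/yr  (drift baked in)" % np.mean(segment_trends))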

  179. To be sure, station records are beset by a plethora of problems that render the great majority of them unsuitable for scientific work. Yet the naive notion persists that a plethora of adjustments can make all station records serviceable. What those who have never taken any scientific instrument into the field fail to consider is that even greater errors can be introduced by adjustments made without requisite proof of validity.

    An egregious example of unproven adjustment is provided by the so-called "TOBS adjustment," which seeks to disambiguate the diurnal MAX and MIN temperatures from the readings of MAX/MIN thermometers at times other than midnight. It patently fails to exploit the fact that temperature at time of instrument reset (TRESET) is invariably recorded along with MAX/MIN readings for the previous 24 hours. Thus a simple change in clerical procedure, wherein only those readings which are not identical to yesterday's TRESET would be used to compute monthly MAX/MIN averages, is sufficient to wring sporadic biases out of the climatic series.
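
    A sketch of that clerical rule (purely illustrative, with hypothetical field names; this is the procedure proposed above, not anything NOAA applies):

      def usable_max_readings(daily_obs):
          """Keep MAX readings that do not simply repeat yesterday's reset temperature.

          daily_obs: list of dicts with 'max' and 'treset' keys (hypothetical layout)."""
          kept = []
          prev_reset = None
          for obs in daily_obs:
              if prev_reset is None or obs["max"] != prev_reset:
                  kept.append(obs["max"])        # genuine new maximum
              prev_reset = obs["treset"]         # remember today's reset temperature
          return kept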

    Instead of curing the problem on a case-by-case basis, NOAA resorts to a one-size-fits-all adjustment applied indiscriminately to all data. To compound the absurdity, their empirically based adjustment is predicated upon estimating the TOBS effect at various reading-times from hourly, rather than continuous, records. Such discretization mathematically ensures that the minimum adjustment will always be larger than need be. Only analysts incapable of basic time-series analysis would defend such bias-inducing nonsense as "necessary."

    • Don Monfort

      Zeke and Mosher will jump all over this one. Just watch.

      • If TOBS was so trivial and wrong, why did Watts’s failure to account for it torpedo his surface stations paper (“A game-changer” says a breathless Pielke!) for two years and counting? He should have just published.

      • Steven Mosher

        The problem is he is wrong on the basics.

        and he is unaware of our results which validate a discrete TOBS adjustment, and PHA results which validate a discrete adjustment

        There are basically two approaches.

        1. A discrete adjustment, which is subject to all the doubts people raise:
        a. can the metadata be trusted?
        b. is the uncertainty handled properly?
        2. A purely statistical approach (PHA or Berkeley Earth).

        PHA, as Zeke notes, can and may replace TOBS. That is, one just looks at the data and looks for breaks REGARDLESS of the cause. Then one calculates an adjustment.

        Next, one calculates a discrete adjustment.

        Then you compare the two. What you find is that despite the legit doubts raised about 1, a purely statistical approach produces the same results

        Put another way: you hand me a pile of data. You secretly induce a TOBS change. I don't know which series has this change or when you did it.
        We can however reliably detect this change even without the metadata.

        So, trust the metadata and you get answer X.
        Don't trust it and you also get answer X.

        which means that distrust in the metadata is reasonable, but unfounded.

      • Don:

        All I’m seeing so far is technically uncomprehending non-sequiturs from a sales rep for purveyors of fictional data.

  180. Why did much of the Australian continent experience similar conditions to the US in the 1930s (when China flooded catastrophically)? Why did our big eastern Oz drenching coincide with the long, horrific Texas drought in the 1950s? How dry can the west of the US get compared to recent times (hint: very)? How dry can even New York get (same hint)? How dry can China and Northern Africa get in a period of general cooling (hint: scary, don't ask)?

    Guessing numbers is fun, especially when temps can only go two ways and they’re always doing one or t’other. I imagine climate science, when it comes along, will be somewhat more strenuous.

  181. "Steven Mosher | July 8, 2014 at 2:22 pm |
    hell angech cant even be bothered to count the dang stations for himself."
    Neither will Zeke or Nick, and you said you were too lazy to do it for me and too out of touch with USHCN to be able to commentate, so why are you here?
    Zeke will not touch this with a bargepole because if he gives out the true numbers everyone will jump on him for manipulating the data.
    After all, 650 out of the original 1128 is a 50% FAIL.

  182. Numbers,dang.

    Zeke said “July 5th, 2014 at 5:32 pm angech,You are confused. The number of “real” stations reporting was in the first graph of the first post I wrote about this whole subject: http://rankexploits.com/musing…..-Count.png
    There were only ever 1218 USHCN stations. None were added to replace ones that stop reporting.
    July 5th, 2014 at 7:26 pm
    Sorry, I should have added the caveat that the station composition of USHCN v2 hasn’t changed since it was created. There were changes when the network was updated from USHCN v1 to v2.”

    [in response to my pointing out the USHCN “By the mid-1990s, station closures and relocations had already forced a reevaluation of the composition of the U.S. HCN as well as the creation of additional composite stations.
    The reevaluation led to 52 station deletions and 54 additions, for a total of 1,221 stations (156 of which were composites). Since the 1996 release (Easterling et al. 1996), numerous station closures and relocations have again necessitated a revision of the network.
    As a result, HCN version 2 contains 1,218 stations, 208 of which are composites; relative to the 1996 release, there have been 62 station deletions and 59 additions.]

    Zeke's response on being caught out: "I really have no clue why people keep harping on this 'exact number of active real stations' question when it's trivial to answer…"

  183. Mosher cannot do trivia, heh,heh,heh.
    Zeke posts an out-of-date, weird-baseline graph for the number with no number visible, gives a list of unnamed stations, and says count them yourself.
    http://rankexploits.com/musing…..-Count.png
    You can download all the raw station data yourself here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2.5/
    I really have no clue why people keep harping on this "exact number of active real stations" question when it's trivial to answer…

    Nick posts out of date graphs at Moyhu
    Wednesday, May 14, 2014
    USHCN, adjustments, averages, getting it right.

    The new team, sticking together like glue.
    Cannot let that number out can we.

    • Steven Mosher

      no I just refuse to do your homework

      1. USHCN doesn't matter to me or my work.
      2. the station count doesn't matter to me or my work.
      3. the data is available to you to do a count.
      4. whatever the count is, it is immaterial to Zeke's explanation of HOW ADJUSTMENTS ARE DONE.

      so, you are off topic. next you are demanding that I do your homework.

      Now wait, where have I seen behavior like this before? ah up thread

      A: Can I see your books.
      S: Yes here they are.
      A: (ignoring the books). Here is a chart I found on the internet showing
      your bogus adjustments to income.
      S: please look at our books.
      A: no first explain this random stuff I found on the internet.
      S: here are the books, can you just audit us?
      A: you should be audited
      S: I thought thats what you were doing, here are the books. please look.
      A: What are your interests in this company?
      S: I own it. I make money
      A: AHHHH, so how can I trust these books
      S: can you just look at the books.
      A: first I want to talk about this youtube video. See this chart, the red is really red.
      S: I didn't make that video, can you just look at the books.
      A: do you have an internal audit.
      S: ya, here are some things we published, you can read them.
      A: Ahhh, who reviewed this.
      S: It was anonymous, just read the paper.
      A: How do I know your friends didn't review that, I don't trust those papers.
      S: well, read them and ask me questions.
      A: I’m giving the orders here tell me what is in the papers.
      A: and where are your books?
      S: I gave you the books.
      A: who is your accountant?
      S: my wife, she does all the books
      A…. Ahhh the plot thickens… you need to be audited.
      S: err, here are the books.
      A: oh trying to make it my job huh.. I'm here in good faith
      S: ah ya, to audit, here are the books.
      A: not so fast, you're trying to shift the burden of proof

  184. I’m sure there is good faith. But, just as we know that MacGyver is going to get out of his fix and Maid Marian will elude the Sheriff of Nottingham’s grasp, we know which way the adjustments of temperature will go. As sure as Kojak will crack his case, we know.

    We just know, don’t we?

  185. I apologise for the length of this, but I have an important point to make about PHA and I want to take the time to make it properly.

    Consider a poor quality urban station. When initially established it was of good quality and semi rural. Over a long period of time its environment has deteriorated so that we now expect that it is measuring a temperature considerably higher than the temperature which would have existed if the environment at that location had remained unchanged.

    Reasonably it is replaced by a better sited rural instrument which immediately reads cooler – closer one presumes to the temperature the original instrument should have been reading if its environment had not deteriorated. The PHA detects the shift and adjusts the ENTIRE temperature record of the first instrument downwards to match up the breakpoint.

    The downwards shift is reasonable for the recent record of the old instrument. However it is highly unreasonable to shift the entire record downwards. To do so is to assume that the old instrument WAS ALWAYS poorly sited and all of its readings, right back to the day it was first established, were affected by the identical bias. We actually KNOW that this assumption is false because we KNOW that the environment of the instrument has deteriorated over time. We can document this deterioration with pictures and records of urban growth. And indeed it is precisely this deterioration over time that has created the need for a shift of location and adjustment. It does not make any sense that the adjustment procedure should assume that the old instrument delivered equally biased results from the day it was first established.

    The PHA adjustment procedure you have described is based on an assumption that we KNOW TO BE FALSE, namely that the bias of an instrument is constant right back to its date of establishment (or the previous PHA breakpoint). The results of this adjustment procedure are therefore also KNOWN TO BE FALSE. Why are we using an adjustment procedure based on false assumptions which generates false results?

    In my opinion what is needed is a PHA procedure where adjustments depreciate over time, leaving results in the distant past unadjusted. One would need to experiment with an appropriate time scale for the depreciation. It ought to reflect the time scale over which instrument environments deteriorate.

    Without such a depreciating adjustment PHA simply stacks up the deteriorations of every site and amalgamates them. It isn’t a procedure for correcting bias at all. It is a procedure for collecting biases and creating an even more biased record.
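
    A rough sketch of what such a depreciating adjustment could look like (purely illustrative; the linear ramp and the 30-year scale are arbitrary choices, not anything used by NOAA or Berkeley Earth):

      def depreciating_adjustment(years, temps, break_year, offset, ramp_years=30):
          """Apply the full offset just before the breakpoint, fading to zero further back."""
          adjusted = []
          for yr, t in zip(years, temps):
              if yr >= break_year:
                  adjusted.append(t)                            # after the break: unchanged
              else:
                  weight = max(0.0, 1.0 - (break_year - yr) / ramp_years)
                  adjusted.append(t + offset * weight)          # partial offset, shrinking with age
          return adjusted

      # Example: a -0.6 C step detected at 1990, ramped over the preceding 30 years.
      yrs = list(range(1950, 2011))
      adj = depreciating_adjustment(yrs, [14.0] * len(yrs), 1990, -0.6)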

    • Not to just change the subject from your important point, but the paper Zeke references makes clear that there is a strong seasonal fluctuation of about 0.4 deg F in the differences between LIG and MMTS sensors, such that the idea of using one adjustment for all seasons would be wrong. So, what do they do?

      • I would not assume that the MMTS sensor was necessarily the most accurate just because it is the most recent. LIG is based on the very simple physics of thermal expansion. This is well understood and known to be linear. Simple is good. There isn't much to go wrong with a LIG thermometer. MMTS sensors are much more complicated.

      • Well if they use anomalies, ie temperature differences, rather than absolute temperatures then they can stop the old record with the old sensor at the changepoint and start a new record for the new sensor. No correction necessary at all.

    • correct, compound interest temperature

    • son of mulder

      I think this is a very important point. How many stations are now urban, after such a passage of time from originally being rural, feeding in inflated readings if used for spotting an anthropogenic CO2 fingerprint?

      Is there a baseline of continuous rural stations? And remember we're looking for a warming trend, so adjusting for when TOBS changed should mean looking at the trend before and after the TOBS change. I don't see how a TOBS change should affect the temperature trend within each block of data either side of the change point.

    • Ian H commented

      The PHA adjustment procedure you have described is based on an assumption that we KNOW TO BE FALSE, namely that the bias of an instrument is constant right back to its date of establishment (or the previous PHA breakpoint) .

      Any assumption you make about the rate of change, any calculation, is all subject to being wrong, and since there's no physically traceable calibration, it's all guesswork.

    • k scott denison

      Ian H: this is a great point and you stated it very well. I have tried to ask a very similar question of Mosher et al many times in the form of “What metric do you use to determine the urbanization of a site?”

      I have yet to receive an answer. Perhaps now that you have posed the problem and question so much more thoroughly we will get one.

      Thank you.

      • Do you really? You are either hopelessly optimistic or simply new around here. Personally I expect to be completely ignored. I have found that most climate scientists have selective vision. They only notice things that might make warming seem worse.

      • k scott denison,

        You might have gone to the BEST web pages and found their paper on UHI

        http://www.scitechnol.com/2327-4581/2327-4581-1-104.pdf

        The answer to your question is there.

      • k scott denison

        Thank you, but this does not answer the question Pekka. Where are the plots of trend for urbanizing stations versus non-urbanizing?

  186. Thank you Mr. Hausfather for the insightful explanation. You make a good case that the temp records are not being maliciously manipulated. However, it is also apparent that, given the complex nature and large uncertainties of the data, a nuanced interpretation is in order.

    It seems therefore wrong to use it for the purpose of generating sensational headlines with the intent of manipulating public opinion. The data simply don't support many of the assertions that are made in the MSM.

    How about a compromise:

    One side stops ballyhooing the hottest (or second, third, fourth hottest … or maybe not?) day, month – whatever – in all recorded history…

    And the other side stops shouting that the data is biased, cooked, useless, garbage…

    Then, perhaps, it might be a bit easier to conduct research a little more objectively?

    I am looking forward to your follow-up articles.

    Greetings from Germany
    (7:1 hee, hee, hee – sorry, off topic, please forgive me)

    • One element of a compromise that I would suggest is that, when referring to past temperatures and future predictions/projections, the term "estimated" be used before the number of the temperature being discussed. This will put the number in the right perspective.

      JD

      • Steven Mosher

        Precisely.

        I've tried to make this point in a couple posts, and it's hard to get across to people.

        The raw data is a record. Everyone uses that raw data to make estimates or predictions of what the past was. This leads to several things.
        Prediction of the past is a bit of a mind bender; maybe I'll explain a bit if folks are interested.

        1. As methods and data change today, we can expect and will see ESTIMATES of the past change.
        2. Since our understanding of the past is ONLY an estimate, we can expect and will see places where the estimate of the past DIVERGES from the raw record.
        3. Once people see the record of the past as an estimate, or more precisely as a PREDICTION of what should have been recorded, then the next step is clear: create your estimates (predictions) with a reduced dataset, and then test your prediction against held-out data.

        Step 3 is really kind of fun. For example Robert Way has used our prediction to identify interesting local cases to examine: cases where our prediction is very different from raw and very different from NCDC adjusted. These anomalies in our predictive model end up being one of two things: a bad adjustment by NCDC or a local area where our model doesn't fully capture the climate with the current regressors.
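
        A stripped-down version of the hold-out idea in step 3 (a toy only: it predicts each withheld station's anomaly as the mean of the others, far cruder than the actual Berkeley Earth regression):

          import random

          random.seed(0)
          stations = {"st%02d" % i: random.gauss(0.0, 0.3) for i in range(20)}   # fake anomalies, deg C

          errors = []
          for sid, true_anom in stations.items():
              others = [v for k, v in stations.items() if k != sid]   # hold this station out
              prediction = sum(others) / len(others)                  # toy "prediction" from the rest
              errors.append(abs(prediction - true_anom))

          print("mean absolute error of held-out predictions: %.2f C" % (sum(errors) / len(errors)))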

      • Mosh's reply should be understood in the context of other experiments in physics. Looking backward in time in celestial mechanics with epicycles as your data-fitting model can be compared to observations, but claiming that a recorded eclipse of the sun which happened on June 15, 763 BC actually happened on June 16, 763 BC (estimated) should raise questions about your estimation process. In climate "science", the question would be raised about the diligence of the scribe recording the observation or of the conversion between calendars. Perish the thought that our estimation process might be flawed.

    • Steven Mosher

      “One side stops ballyhooing the hottest (or second, third, fourth hottest … or maybe not?) day, month – whatever – in all recorded history…”

      yep. folks should just report their numbers monthly or quarterly or whatever without any spin, positioning, or commentary.

      ideally.

  187. ‘but that critical analysis should start out from a position of assuming good faith’

    If only the behaviour of 'the Team' were not so poor, the silence of those working in this area over that behaviour not so deafening, and the 'mistakes' in such adjustments, when admitted to, did not always, by 'lucky chance', turn out to work in favour of the promotion of AGW.

    By their own words shall you know them; once you consider 'the Team's' own words you can easily see why there is a lack of good faith.

    The area has frankly done much to earn its poor reputation; that its professionals often cannot or will not work at the academic level expected of an undergraduate handing in an essay is a hallmark of its quality issues. But let's be honest: it has gone from the poor, uncared-for and little-known cousin of the physical sciences to the major league, with all the cash and jobs that brings, on the back of AGW. Why should they then seek to break that back?

  188. David Springer

    In the final analysis we are arguing about a temperature record made by volunteers with little oversight over many generations, on continents with a truly massive amount of land use change due to industrialization and agriculture, and where said land mass is but a small fraction of the globe's surface. One might wonder why we bother with it. It's really not very relevant to global warming. It's hopelessly compromised and representative of a small percent of the globe with the greatest degree of land use change. Give me a frickin' break. It's all academic as they say. Arguing about how many angels can dance atop a Stephenson Screen. ROFL

    • True, true, true.

      This post has generated over 940 comments so far. Is this a record Judith?

      Strikes me as not being able to see the forest for the trees.

      • Half of them are from one desperate person. ;)

        Andrew

      • The record is 2000+ comments on a single thread, these were the sky dragon threads and also one of Vaughan Pratt’s went over 2000 comments. The key to a successful thread is active and productive engagement by the author – 10 stars to Zeke, with also stars to Mosher in a supporting role.

      • Current top posters (976 comments at the point I did this):

        153 Steven Mosher
        53 sunshinehours1
        47 Zeke Hausfather
        39 Matthew R Marler
        29 Don Monfort
        28 Wagathon
        23 nickels
        20 Windchasers
        19 Jan P Perlwitz
        19 A fan of *MORE* discourse
        15 David Springer

        Of course this is total comments, not unique comments.

        30% of the comments were by Zeke Hausfather and Steven Mosher.

      • Mosh rather puts me to shame in that list.

        Unfortunately I have a really hard time keeping up with threaded comments when threads get this big, as continuing separate conversations requires rereading a ton of stuff.

      • Matthew R Marler

        curryja: The key to a successful thread is active and productive engagement by the author – 10 stars to Zeke, with also stars to Mosher in a supporting role.

        I am kind of embarrassed I wrote so many, but I would like to end with another thank you to Steven Mosher and Zeke Hausfather. I look forward to Zeke’s next two posts, and I promise to interfere less.

      • “I am kind of embarrassed I wrote so many”

        Matthew,

        Don’t worry. We can come back later and adjust your totals so they don’t look so intrusive. ;)

        WINK

        Andrew

      • Zeke Hausfather, I wouldn’t feel shamed by your spot on that list. You’ve written far fewer comments than Steven Mosher, but I think you’ve done a far better job of conveying information than him. The fact someone makes tons of combative, unproductive comments doesn’t mean they contribute more than you.

      • Steven Mosher

        haha, you guys are lucky that i wasnt stuck in traffic.

      • Steven Mosher

        Brandon you are a little slow today.
        As others have figured out, zeke handled the On topic substantive questions.
        I played wack a mole with the off topic, personal attacks, and random BS.

        Carrick beat me to doing the actual count. it shows, or is indicative of, the relative percentage of people who are actually interested in understanding.

        crude method, but it seemed an easy way to get a general idea. An actual count ( using raters ) would be better. perhaps you and Tol can team up and do a better job.
        experiment over.

      • Steven Mosher

        matthew
        i was a little hard on you. sorry. I noted that you did take up the challenge and spend some time engaging with some of the nonsense that people spew. Thanks for that. Your help was appreciated, sorry for the grief I gave you.

      • Steven Mosher:

        Brandon you are a little slow today.
        As others have figured out, zeke handled the On topic substantive questions.
        I played wack a mole with the off topic, personal attacks, and random BS.

        I’m not “a little slow” because I recognize your attempts at playing “wack a mole” increased hostility and encouraged people to be unproductive. A person would have to be “a little slow” to not realize your posting style encourages the very behavior you claim to be trying to address.

        In other words, you exacerbate a problem then use the extent of the problem to criticize people you don’t like.

      • @Brandon Shollenberger
        I don't appreciate the policeman's tone either. There are other, more positive things to do in life than read that kind of crap; we'll see how long I last here.

      • I repeated Carrick’s count after my last comment because I saw the thread was at 1000 comments. It seemed like too round a number to pass up the opportunity. The top 10 were:

        157 Steven Mosher
        53 sunshinehours1
        48 Zeke Hausfather
        40 Matthew R Marler
        32 Wagathon
        29 Don Monfort
        27 nickels
        21 A fan of *MORE* discourse
        20 Windchasers
        19 Jan P Perlwitz
        18 Carrick

        I think it’d be amusing to create a concordance for each of them.

      • Steven Mosher

        yes brandon I encouraged them to be counter productive.
        I made them do it.

        oh wait, no i was afraid they would really find something so I was just distracting them..

        But check out sunshinehours.. what a boneheaded mistake.

        now he could have shared his code as I badgered him to do; instead Nick Stokes spent time to find his mistake.

        Was that a good use of Nick's time? do you think sunshine will admit his mistake? nope.

        will you blog about his mistake? nope.

      • > I think it’d be amusing to create a concordance for each of them.

        A concordance should go beyond the concepts used by the current top posters.

        A concordance of the claims, requests, and lines of arguments would be even more rewarding.

      • nickels:

        @Brandon Shollenberger
        I dont appreciate policemans tone either. There are other more positive things to do in life than read that kind of crap, we’ll see how long i last here.

        I don’t know about anyone else, but that’s the reason I haven’t been commenting on this page. I have no motivation to point out a person’s mistakes when doing so requires trying to overcome Steven Mosher’s petty behavior. He might as well go around accusing everybody of fraud. He’s acting on the same level as the people who do that.

        Steven Mosher:

        yes brandon I encouraged them to be counter productive.
        I made them do it.

        oh wait, no i was afraid they would really find something so I was just distracting them..

        Or you're just an egotistical brat who gets off on confrontation and uses it to avoid discussions which would show your lack of knowledge/competence. Goodness knows I can find plenty of examples to support that interpretation.

        But check out sunshinehours.. what a boneheaded mistake.

        will you blog about his mistake? nope.

        Of course I’m not going to blog about his mistake. I’ve already written about it when Steven Goddard made it. I’m not going to write a new post every time somebody makes the same stupid mistake. The fact you’d try to make an issue of me not doing so shows you’re more interested here in things like point scoring than anything else.

      • > I have no motivation to point out a person’s mistakes when doing so requires trying to overcome Steven Mosher’s petty behavior.

        Providing this incentive to the Moshpit might be suboptimal.

      • Matthew R Marler

        Steven Mosher: Thanks for that. Your help was appreciated, sorry for the grief I gave you.

        At the risk of raising my comment count, I appreciate your apology. You are doing yeoman work on this thread, and that was just a transient exasperation.

      • When stars are passed out not for scientific insight, but on the basis of how many comments someone wrote, we get a glimpse of what is fundamentally wrong with New Age education.

      • I would like a link to the Sky Dragon Thread.
        Is there an index of all the Threads?

    • ROFL

      Have to laugh at Springer. Those of us that know how to use that data have at our fingertips a treasure trove of information, whereas, alas, a bloviator like Springer is left to spew in rage, unable to lift a pencil and add any value.

      Thanks, NASA, NOAA, BEST, etc.

      • David Springer

        There's no value in the land instrument record. It isn't worth messing around with. Spatial coverage is pitifully inadequate, misses the oceans which are the real climate drivers, no discipline in siting, calibration, no repeatability, amateur volunteers doing all the heavy lifting… A fool's errand trying to make a silk purse from a sow's ear. An errand perfectly suited to people like you and Mosher. I just sit back and kibitz while waiting for mother nature to prove me right in her good time. Seven years so far and she's done a bang-up job for me. In 2007 when I started blogging on this the pause wasn't killing the cause. Now it is. I'm already vindicated and basking in the glow of being correct. It's a wonderful feeling. I sincerely hope you get to experience it someday but you probably won't.

      • That pencil is mighty heavy, ain’t it Springer?

      • David Springer

        No it’s just a few grams. Thanks for asking!

    • So we’ll never see Springer saying there is a ‘pause’ in temps.

    • “So we’ll never see Springer saying there is a ‘pause’ in temps.”

      How can he, if he thinks the data is compromised?

      Trickbox city.

      • thisisnotgoodtogo

        Whutty follows Michael into impoverished thinking:

        ” ‘So we’ll never see Springer saying there is a ‘pause’ in temps.’

        How can he, if he thinks the data is compromised?”

        Even when the dice are loaded and the card pack fixed, a win is a win and should be collected on.
        Think better, guys!

      • David Springer

        Of course I can say there’s pause. The satellite data is solid and it hasn’t shown any significant warming in going on 20 years. What are you boys smoking?

      • The UAH record shows a greater slope than surface records do.

    • I think Richard Lindzen said in one of his talks something like:

      We are talking about a half a degree C over a hundred years. I don’t understand what the fuss is all about.

      Throw in that the error bounds on the measurements are greater than the magnitude of the changes and wtf do we have?

    • This has been interesting and a lot of fun.

      Thanks Zeke and friends.

      We would like to get the original temperature measurements back after all the changes in a separate graph. The PHA comment by Ian above identified a potential error in temp adjustments. A rural station becomes impacted by UHI and then corrected all the way back to when it was originally rural?

      What do you think Zeke?
      Scott

    • Steven Mosher, I think Brandon (and others) have a bit of a point here: a somewhat less confrontational, and simultaneously more informative, style would benefit the discussion on this thread.

      In other news, sunshinehours1 (Bruce) has 53 posts and is wrong on the internet again.

      • Steven Mosher

        Cool carrick, I expect you on the next post to come on over and show us all how you deal with people who call you a liar, refuse to read the text, post bogus charts, and change the topic, all the while claiming to be dedicated to science and understanding.

        I’ll watch

      • The thing I don’t get about that post is it seemed obvious that’s what he was doing. Did we really need another post saying Steven Goddard’s stupid methodology is stupid?

      • Don Monfort

        My guess is that Carrick would deal with it as Zeke has. Would you jump in and serve as Carrick’s snarling guard dog? Don’t get mad, but you left an opening there:)

  189. I’ll just leave this here for whoever sees it to draw their own conclusions. This is Circleville, Ohio in 1934. I’ve added the dates to the beginning of each line for clarity’s sake, but everything else is a direct cut and paste from the daily “tavg” release for the USHCN.

    06_26_2014 USH00331592 1934 114 -533 277 1030 1851 2441 2627 2259 2062e 1311 830 18 0

    07_01_2014 USH00331592 1934 99 -547 262 1016 1837 2427 2613 2245 2048e 1297 816 4 0

    07_07_2014 USH00331592 1934 118 -528 281 1035 1856 2446 2632 2270 2073e 1322 841 29 0

    For those unfamiliar with the data format: the date is my addition, USH00331592 is the station number, 1934 is the year, and the following numbers are each month’s “temperature” in Celsius. So: on June 26th this year Circleville, Ohio had a January 1934 “average temperature” of 11.4C, on July 1, 2014 it had a January 1934 “average temperature” of 9.9C, and finally on July 7th, 2014 the January 1934 temperature in Circleville, Ohio was 11.8C.

    • Way overstated. The units are hundredths of a C, not tenths. It was not 262.7 C in July. There is a fluctuation of temperature of about 0.15°C. Probably from the requirement that adjusted and raw match currently, propagating small changes all the way back.
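      For concreteness, here is a minimal parsing sketch of a line like the ones quoted above (with the commenter’s added date prefix dropped), taking the correction that the values are hundredths of a degree C. The handling of the trailing flag letters (e, E, a), the -9999 missing-value code, and the ignored final field reflect my reading of the USHCN v2.5 file layout and should be checked against the readme; none of it is asserted by the comment itself.

```python
import re

def parse_ushcn_tavg(line):
    """Parse one USHCN monthly 'tavg' line: station id, year, 12 monthly values.

    Values are assumed to be hundredths of a degree C, with optional trailing
    flag letters and -9999 as the missing-value code (assumptions to verify).
    """
    parts = line.split()
    station, year = parts[0], int(parts[1])
    temps = []
    for field in parts[2:14]:                      # the 12 monthly values
        digits = re.sub(r"[A-Za-z]+$", "", field)  # strip trailing flag letters
        value = int(digits)
        temps.append(None if value == -9999 else value / 100.0)
    return station, year, temps                    # any trailing field is ignored

line = "USH00331592 1934 114 -533 277 1030 1851 2441 2627 2259 2062e 1311 830 18 0"
station, year, temps = parse_ushcn_tavg(line)
print(station, year, temps[0])  # USH00331592 1934 1.14 (January 1934, in deg C)
```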

  190. A fan of *MORE* discourse

    JD Ohio proclaims [utterly wrongly]  “Thus, your claim that observations verify heating of ocean water is incorrect.”

    Climate Etc readers may wish to verify for themselves that JD Ohio’s links are peculiar in including no citations more recent than 2009.

    Hmmmm … what has been learned, as ARGO data continues to pour in?

    JD Ohio, perhaps you — and Climate Etc readers! — might learn more from an up-to-date ARGO bibliography?

    Try for example (from among hundreds):

    • Lyman, J. M., and G. C. Johnson, 2014: Estimating Global Ocean Heat Content Changes in the Upper 1800 m since 1950 and the Influence of Climatology Choice*, J. Clim., 27(5), 1945-1957

    • Piecuch, C. G., and R. M. Ponte, 2014: Mechanisms of Global-Mean Steric Sea Level Change, J. Clim., 27(2), 824-834

    • Abraham, J. P., et al., 2013: A review of global ocean temperature observations: Implications for ocean heat content estimates and climate change, Reviews of Geophysics, 51(3), 450-483

    • Gleckler, P. J., et al., 2012: Human-induced global ocean warming on multidecadal timescales, Nature Clim. Change, 2(7), 524-529,

    Conclusion  Science marches on! Ideology-driven ignorance, not so much.

    It has been a pleasure to help expand (patiently!) your appreciation of climate-science, JD Ohio!


    • FOMD “Climate Etc readers may wish to verify for themselves that JD Ohio’s links are peculiar in including no citations more recent than 2009.”

      If you had reading comprehension beyond that of a 10-year-old, you would have noticed that included within my first paragraph were two 2010 studies that NOAA cited.

      My second paragraph quoted the UCSD web page. The UCSD bibliography is up-to-date and includes about 50 articles from 2014. See http://www.argo.ucsd.edu/Bibliography.html You simply assumed that one point made concerning one 2009 study was all that UCSD did, which was obviously wrong. Nothing on the page I cited (http://www.argo.ucsd.edu/Uses_of_Argo_data.html) indicated that it was meant to be a comprehensive survey of the literature. Rather, the author of the web page just pointed to one 2009 study that explained part of his analysis of why the ARGO data was too immature to support conclusions regarding climate change signals.

      The fact that you lack basic literacy and yet accuse others of denialism is classic alarmist hypocrisy. Respond however you wish, but I won’t waste my time with you anymore.

      JD

      • thisisnotgoodtogo

        Give Fanny a break!
        It must be a chore to keep up with the macramé, ribbons, dolls, and lace work.

      • A fan of *MORE* discourse

        JD Ohio [angrily quibbles, waffles, denies, and finally concludes] “I won’t waste my time with you anymore.”

        It has been a pleasure to enlighten you regarding recent research, JD Ohio! Please accept my best wishes that your freed-up time be spent learning *MORE* about climate-change science.

        For sustained educational enlightenment, Climate Etc readers may wish to consult for themselves the *FANTASTIC* web page of NOAA climatologist Gregory C. Johnson, which includes non-paywalled preprints of articles that include:

        •  Purkey, S. G., G. C. Johnson, and D. P. Chambers. 2014. Relative contributions of ocean mass and deep steric changes to sea level rise between 1993 and 2013

        “The subsurface ocean steric [water-heating] expansion is found to contribute rates of 0.78, 0.40, 0.36, 0.07, 0.06, and 0.05 mm/yr from 1996–2006 between 300–700 m, 700–1000 m, 1000–2000 m, 2000–3000 m, 3000–4000 m, and 4000–6000 m, respectively.”

        •  Lyman, J. M. and G. C. Johnson. 2014. Estimating global ocean heat content changes in the upper 1800 m since 1950 and the influence of climatology choice.

        “Ocean warming is observed between 1950 and 2011 in all layers […] Changing the mapping scheme, including using different time or length scales or different mapping formalism, will alter the results quantitatively but not alter the conclusions qualitatively.”

        JD Ohio, it has been a wonderful pleasure to assist you — and hopefully assist many Climate Etc readers — to a broader, rational, civil, science-respecting appreciation of the *MANY* independent channels of robust evidence which affirm that climate-change is real and serious!


      • thisisnotgoodtogo

        I checked for myself and you were wrong, Fanny.
        What you are now doing seems less than honest.

      • A fan of *MORE* discourse

        Climate Etc readers are invited to verify for themselves that ARGO‘s web page Global Change Analysis — which asserts “The global Argo dataset is not yet long enough to observe global change signals” (as JD Ohio correctly quoted) — regrettably cites no scientific articles more recent than 2009.

        Climate Etc readers are *further* invited to explore the articles referenced in the up-to-date ARGO Bibliography web page and also Gregory C. Johnson’s preprint database, which remediate *BOTH* deficiencies, by providing (1) up-to-date literature that (2) amply documents heating oceans.

        It is my pleasure to help further allay your skeptical concerns, thisisnotgoodtogo and JD Ohio.

        Please enjoy your learning adventure!


      • ClimateGuy

        Right off the bat, anyone can see for themselves in his linked page citations like these:

        ” Moreover, the different results from different analyses (Lyman et al. 2010) suggest that best methods have yet to be found. The analysis by Trenberth and Fasullo (2010)”

      • A fan of *MORE* discourse

        ClimateGuy/JD Ohio cherry-picks “Moreover, the different results from different analyses (Lyman et al. 2010) [submitted in 2009] suggest that best methods have yet to be found. The analysis by Trenberth and Fasullo (2010) [is a two-page ‘Perspectives’ commentary having no references to post-2009 articles]

        ClimateGuy and JD Ohio, it is a great pleasure to remind you again — and remind Climate Etc readers too! — that climate-science has progressed *GREATLY* since 2009!

        Denialism not so much, eh folks?

        Good on `yah, ARGO and (now!) DEEP ARGO, for showing Climate Etc folks so undeniably that global warming is real, cumulative, and serious!

        ClimateGuy and JD Ohio, best wishes are extended for your continued enjoyment in learning from the climate-science literature!


      • FOMD,

        You HAD said “… JD Ohio’s links are peculiar in including no citations more recent than 2009.”

        Which is shown to be a false statement.

        And you fret and make false statements about links shown here – while praising Michael Mann, the professional, celebrated, self-Nobel-laureated Michael Mann, serving up 2005 data in 2013.

        Where’s your ethical compass, FOMD?

      • A fan of *MORE* discourse

        Question  Why do Climate Etc denialists in general — and comments from folks like JD Ohio and thisisnotgoodtogo and ClimateGuy in particular — adamantly refuse to survey the recent climate-change literature?

        The world wonders!

        The Slashdot Answer

        The oil companies/Heartland Institute don’t have to create spin anymore, because they’ve had the most important success possible: making denialism an important part of the identity of a lot of people.

        In some ways, it [climate-change denialism] is very cult-like in the way that it forms identity.

        Denialism gives you victim/threatened status (those evildoers are attacking our beliefs, we need to be warriors), enough victories to think of oneself as a winner but maintain the communal aspects of thinking oneself under threat, charismatic leaders, the companionship of shared beliefs, a sense of superiority to those who disbelieve, and, in the most cult-like aspect, the assurance of being above mere facts, of living in a world where your personal beliefs trump mere objective facts.

        That’s the way the STEM professionals of Slashdot regard climate-change denialism, anyway!

        And it is reasonable to expect that Judith Curry’s students feel the same, eh?


    • ClimateGuy

      Check out Michael Mann’s AGU trick

      From Climate Audit:

      “Mike’s AGU Trick

      There were two components to Mann’s AGU trick. First, as in Mann and Kump, Mann compared model projections for land-and-ocean to observations for land-only. In addition, like Santer et al 2008, Mann failed to incorporate up-to-date data for his comparison. The staleness of Mann’s temperature data in his AGU presentation was really quite remarkable: the temperature data in Mann’s presentation (December 2012) ended in 2005! Obviously, in the past (notably MBH98 and MBH99), Mann used the most recent (even monthly data) when it was to his advantage. So the failure to use up-to-date data in his AGU presentation is really quite conspicuous.

      Had Mann shown a comparison of Hansen’s Scenario B to up-to-date Land-and-Ocean observational data, the discrepancy would have been evident to the AGU audience, as shown in the loop below.”

      Oops! He cut off his data at 2005 in a 2013 presentation.
      He is a wonder.

  191. A nice post at Moyhu about some of the mistakes here: http://moyhu.blogspot.com/2014/07/someone-is-wrong-on-internet.html

  192. Just curious, is ‘average monthly temperature’ mathematically defined anywhere?
    Is it 1 foot off the ground, 2 feet? Does it go around vegetation or include it? Does it go up the face of the Eiger or smoothly pass over? Or is it the average of the first 30 meters?
    You get the idea…
    I ask because any type of error analysis has to start with a precise definition of what you are approximating. And the climos have a bad habit of dodging such definitions and defining their model output to be the thing they’re estimating, hence ‘look ma, no error’.
    There must surely be a precise definition? Reference?

    • @ Nickels

      “Just curious, is ‘average monthly temperature’ mathematically defined anywhere?”

      So, after reading about climate change for awhile and how the ‘Temperature of the Earth’ has moved up and down in increments of hundredths of a degree, over the past millennium or two, you are looking for a precise, scientific definition of the thing that has varied? Presumably so that it could be independently measured and verified?

      Take a number.

      • Well, hopefully you’re not right and there is a formal definition, but….

        “Presumably so that it could be independently measured and verified? ”

        The other reason I would like to have a definition is so that we actually think about using mathematical interpolation theory and actually think about what error estimates might look like.

        I’m also curious how sensitive such a definition would be, i.e. how much would the answer change if we fiddled with it slightly. I don’t know a lot about the gradient of temperature near the ground….

    • Steven Mosher

      “Just curious, is ‘average monthly temperature’ mathematically defined anywhere?
      Is it 1 foot off the ground, 2 feet? Does it go around vegetation or include it? Does it go up the face of the Eiger or smoothly pass over? Or is it the average of the first 30 meters?”

      1. Technically speaking, it’s not an average temperature.
      2. It’s a prediction of what would have been recorded 2 meters from the surface.
      3. The spatial detail (up the face of a mountain) is as fine as you want; since T is a function of elevation Z, you can use any DEM you want. We choose to average Z over 1 degree, although we have produced the product at 30 minutes. (A toy sketch of the idea follows.)
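      To make the “T is a function of Z” point concrete, here is a toy lapse-rate downscaling sketch. It is not BEST’s actual field model; the constant lapse rate and all numbers below are illustrative assumptions only.

```python
import numpy as np

# Toy sketch (not BEST's code): once temperature is modeled as a function of
# elevation Z, the spatial detail is limited only by the DEM you feed it.
# The constant free-air lapse rate below is an assumption, not a fitted value.
LAPSE_RATE = -6.5e-3  # deg C per metre

def predict_t2m(t_cell, z_cell_mean, z_dem):
    """Predict 2 m temperature at DEM elevations within one coarse grid cell.

    t_cell      : coarse-cell temperature (deg C) at mean elevation z_cell_mean (m)
    z_dem       : DEM elevations (m) of fine-grid points inside that cell
    """
    return t_cell + LAPSE_RATE * (z_dem - z_cell_mean)

# a 1-degree cell averaging 10 C at 500 m mean elevation, evaluated at three
# DEM points: a valley floor, the cell mean, and a mountain face
print(predict_t2m(10.0, 500.0, np.array([200.0, 500.0, 2500.0])))
# -> approximately [11.95, 10.0, -3.0]
```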

  193. @ Brandon Shollenberger | July 7, 2014 at 11:27 pm |

    jim2, this amused me:

    If you aren’t careful, Jan, you’ll get moshered for unscientific speculation about the author.
    *****
    For his interaction with me, see …

    http://judithcurry.com/2014/07/01/ncdc-responds-to-concerns-about-surface-temperature-data-set/#comment-603726

    http://judithcurry.com/2014/07/01/ncdc-responds-to-concerns-about-surface-temperature-data-set/#comment-603727

    http://judithcurry.com/2014/07/01/ncdc-responds-to-concerns-about-surface-temperature-data-set/#comment-603781

    Mosher isn’t the best representative of his group, he is just too mercurial.

  194. Nickels | July 9, 2014 at 8:56 am |
    “Just curious, is ‘average monthly temperature’ mathematically defined anywhere?
    Is it 1 foot off the ground, 2 feet? Does it go around vegetation or include it? Does it go up the face of the Eiger or smoothly pass over? Or is it the average of the first 30 meters?
    You get the idea…”
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    Yeah, that’s the thing: in a way Earth is infinitely complicated, yet, yet, it does operate following basic processes that are knowable and that have been studied and learned about for a long time.

    Nickels seems to believe that setting up impossible expectations is OK.
    But science is a learning process of accumulating credible information.
    We try to understand our planet the best we can with what we have. Playing on the fringes of contrarianism about every scientific finding is a great way to go crazy, but it won’t help you progress much.

    I myself know that it’s the warming fears of the ’70s and ’80s that we are watching come to fruition with a vengeance, and at a speed none of us expected. Yet we still have folks claiming that Hansen’s testimony was somehow false because some Senators tried to play some politics.

    We have people who want to spend all day arguing about fractions of a degree in processing data, while ignoring that Earth’s greatest temperature proxies, our North and South Pole ice sheets and pretty near all the cryosphere in between, are warming and melting away at alarming rates.

    • I’m really not trying to be contrarian. I’m trying to run an estimate in my head of what the potential errors in the TOBs and spatial reconstruction might be, but I need to understand what region we are averaging over to do so. Also, many simplifications are likely fine to make about ground height, etc., but to make a simplification and know that it makes little difference, one must have a reference….

  195. sunshinehours1 | July 7, 2014 at 11:49 am | writes
    Mosher: “Stop criticizing us …. ”
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Actually, I believe Mosher’s message was: Stop fibbing about Zeke !

    Legitimate criticism and rational discussion of disagreements are fine.
    Playing word games, misrepresenting history or evidence, turning a blind eye to accepted credible data, conjuring grand (evidence-less) conspiracies intent on distracting attention from the real issues – is an entirely different story – {even if it’s great for the endless resolution-less dog-chasing-tail debates contrarians are so fond of}.

    But many are so caught up in their political ideology that their glaring double standards have become standard operating procedure and thus unrecognized by themselves.

    • I think if you read all of Mosher’s posts my comment is correct.

    • Steven Mosher

      Actually, I believe Mosher’s message was: Stop fibbing about Zeke !

      Yes,

      Let me tell you a couple things. A few year back Anthony Watts and D’aleo
      accused NOAA of fraud. The guys who work at NOAA of course couldnt do any science until the matter was cleared up. Now you have Goddard doing the same thing. With regards to Zeke’s time on this and my time on this, contrary to what people think we are not be encouraged by our organization to engage in this. Quite the opposite. Many regard it as a low probability of success effort. Talking to people who have their minds made up or those who refuse to even read the basic materials, is, some think, a waste of our time. However the good people Zeke and I have both worked with deserve better than goddard’s smears. It is one thing to question the work of a scientist, it is one thing to demand his data and code, it is one thing to dig hard, dig with you own two hands, and quite another thing to assert or assume evil intentions without doing any work or shoddy work. I am encouraged to engage with actual users of the data, and to engage with those who want to understand. I do that by mail. More and more engagement with the general public, large swaths of which have already decided that we are frauds, is no longer interesting or even fun.

    • ClimateGuy

      Citizenschallenge, it’s one thing to intimate untruthfulness; it’s quite another to engage in, or condone, the vigilante Treehut efforts to find “metaphorical” punishers, hackers and assassins to go after the work Watts, McIntyre and Mosher have done, isn’t it?

  196. “warming and melting away at alarming rates”

    Citizen,

    You do know that ice melting is a natural phenomenon, right?

    Andrew

    • Let me know when there is a change in the equator-to-pole temperature difference, not a change in the global average of temperature (especially during the summer) — such as for example occurs due to changes in Arctic insolation — and then I will be interested… not that there will be a damned thing anyone can do about it.

      • Let me know when there is a change in the equator-to-pole temperature difference, not a change in the global average of temperature (especially during the summer) — such as for example occurs due to changes in Arctic insolation — and then I will be interested…

        Let me know when we actually have some weather stations North of 85 Lat. We had one, for 2 partial years.

  197. >Global climate alarmism has been costly to society, and it has the potential to be vastly more costly. It has also been damaging to science, as scientists adjust both data and even theory to accommodate politically correct positions. ~Richard Lindzen

  198. From the Twitterati above:

    The mission of the statistician is to work with the scientists to ensure that the data will be collected using the optimal method (free from bias and confounding). Then the statistician extracts meaning from the data, so that the scientists can understand the results of their experiments and the CEOs and public servants can make well-informed decisions.

    So, Edward Wegman was right! Michael Mann was on a secret mission.

    • “CEOs and public servants can make well-informed decisions.”
      What happens when there isn’t a lot of meaning in the data? I guess, if you’re quiet enough about that, this sounds like a great paradigm for taking the will of the voters out of the loop too!

        • There’s not such a big problem when using data that was collected for altogether different and relatively mundane purposes — e.g., comparing the percentage of black cars sold in cities based on the number of air conditioners and garages per capita — but, outside of that, the integrity, honor and ethics of researchers cannot be assumed.

  199. Another question, in regard to the Vose paper.

    Figure 3 shows that the residual errors after TOB adjustment are as great as -0.2 degrees for midday observation times. Figure 4 shows that the number of stations observing at those times is smaller than the number observing at times where the residual is smaller. So the obvious question is: what is the relative spatial weighting given to those ‘cool biased’ measurements? And how do we trace this sort of possible amplification of the cooling effect (if the measurements taken midday are remotely located) through to the final product? (Sorry, but someone had to ask)

    • Actually, the ‘spatial average’ residual errors are as great as -0.2, which potentially means that any particular residual error could be much worse. If such a point were amplified in the spatial interpolation…..
      you see the point…. (I expect this is addressed, but I just want to make sure)

  200. Zeke, et al. I think you guys are doing a good job with some pretty horrible records. I’m sure that as time goes on, and you can better incorporate topography, more temperature records, and more climatological factors, the temperature reconstruction will become more accurate. I now appreciate why TOB and instrument-type changes cause the positive temperature trend to increase, which was my biggest concern.

    Thanks for the effort and I look forward to the next posts.

  201. The question I have is, are we now even more certain about the validity of the scientific claims for catastrophic warming?

  202. A fan of *MORE* discourse

    jim2 [speaks for FOMD and many] “Zeke, et al. I think you guys are doing a good job with some pretty horrible records […] Thanks for the effort and I look forward to the next posts.”

    Please let me endorse your fine comment jim2!

    Similarly deserving of our appreciation, respect, and praise are *ALL* of the global-scale energy-balance observation efforts (IPSI, GRACE, JASON, ARGO, etc.); that condition *ALL* of the general circulation models (too many to list!); that in aggregate strongly affirm the solid thermodynamic foundations of the energy-balance climate-change worldview.

    Consensus Conclusion  Observation, computation, and physical science unanimously agree: climate-change is real and serious.

    Alternative Conclusion  Tens of thousands of scientists have subordinated their entire careers to a secret, multi-generation, multi-nation, multi-discipline, globe-spanning, young-and-old “warmunist” cabal.

    Which is it? The world wonders!


    • A fan of *MORE* discourse

      FURTHER BREAKING NEWS
      Michael Mann wins *ANOTHER* round!

      Climate Change Skeptic Group Must Pay Damages
      to University of Virginia and Michael Mann

      Thus ends “Climategate.” Hopefully.

      Is this the ending of an era?

      The ending of an era of personal smears and willful ignorance?

      A return to public discourse that is rational, science-respecting, and civil?

      The world wonders! And the world *HOPES* so!

      Meanwhile … good on `yah, Michael Mann!


      • FOMD,

        Another totally uninformed post showing that you are impervious to facts and logic. The order just entered by the Virginia Supreme Court was simply a formal statement for the record that, for the reasons stated in the April opinion rendered by the Supreme Court, the trial court’s judgment was affirmed. http://www.southernstudies.org/sites/default/files/va_sup_ct_ati_damages_mann_uva.jpg This was not “another round.”

        JD

      • A fan of *MORE* discourse

        The Slashdot comments are merciless:

        •  “As long as that pesky cabal of climatologists is out to get those poor little fellas in the coal and petrol industries, Climategate will continue rising from the grave.”

        •  “Plot idea: 97% of the world’s scientists contrive an environmental crisis, but are exposed by a plucky band of billionaires & oil companies.”

        •  “Why this isn’t climate change at all! It’s *removes mask from monster* Michael Mann and 97% of the world’s scientists! We would have gotten away with it too if it weren’t for you meddling billionaires! (Oops. Should have added a spoiler alert.)”

        •  “‘Tis but a scratch! It’s just a flesh wound!”

        •  “The oil companies/Heartland Institute don’t have to create spin anymore, because they’ve had the most important success possible: making denialism an important part of the identity of a lot of people.”

        •  In some ways, it’s [denialism is] very cult-like …

        JD Ohio, thank you for helping Climate Etc readers to appreciate that the world’s STEM community staunchly opposes climate-change denialism!


      • David Springer

        Crank alert!

    • Consensus Conclusion
      Alternative Conclusion

      As if those were the only possible conclusions.

  203. Zeke:

    I know the length of the thread is long – so you might have missed this question. If you could look at my question here and give me your thoughts I would appreciate it:

    http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-60650

  204. Sorry – my link doesn’t work.

    It is my comment “RickA | July 8, 2014 at 10:36 am”.

  205. Or perhaps McCook, Nebraska? Here is 1934 USHCN “data” for McCook Nebraska:
    06_26_2014 USH00255310 1934 91E 112a 468E 1191E 1991E 2431E 2864E 2645E 1617a 1423a 628E -184a 0

    07_01_2014 USH00255310 1934 95E 116a 472E 1185E 2003E 2427E 2868E 2649E 1624a 1427a 631E -180a 0

    07_07_2014 USH00255310 1934 110E 111a 472E 1185E 2004E 2427E 2869E 2647E 1624a 1423a 578E -179a 0

    The weather in 1934 keeps changing!

  206. “… but that critical analysis should start out from a position of assuming good faith…”

    Um, no. I’m willing to go as far as starting out with an open mind, but, at this point, not a presumption of good faith.

  207. I’ve only read through about half the comments so far and want to thank Zeke for his explanation and responses to comments. It seems to me this thread provides examples of the skeptical and the lukewarmer mindsets. The skeptics distrust Zeke’s analysis and his statement (and similar ones of Mosher) that “there is no grand conspiracy to artificially warm the earth” and think such statements reflect the confirmation bias of someone with a vested interest in seeing his own work and POV accepted. The lukewarmers, OTOH, seem less ready to suspect the intent or motivation of those who do the statistical analyses.
    I’ve followed Mosher and Curry long enough to know that they are not ideologues and are willing to follow the data.
    I think Zeke’s explanation is credible, although there’s still room enough to wonder if warmist confirmation bias hasn’t created that small upward warming adjustment of about 0.1 C or less globally (more in the USA).

    But is this the battle we should be fighting? Whether 0.8C or 0.9C warming since the beginning of the industrial revolution, we’re talking about a small amount of warming, not catastrophic warming. There’s been little or no acceleration in that warming, with the periods 1920-1942 and 1978-1998 accounting for almost all of it. We’re over halfway to a doubling of CO2 (400 ppm CO2 is 43% more than the preindustrial 280), and 40% more CO2 (40% of 400, or 160 ppm) will give us a doubling: 560 ppm. I suspect the second half of the CO2 doubling will be like the first half (or would you prefer those climate models with their warming bias!) and that temperatures will rise another 0.8 or 0.9C when we reach that 560 ppm doubling; again, nothing catastrophic. (A quick check of this arithmetic is appended at the end of this comment.) In addition, there’s little reason to think that the whole 0.8 or 0.9C temperature increase we’ve had since pre-industrial times is 100% caused by GHG. In fact, since it was warming before anthropogenic increases in GHG and, furthermore, one of the largest temperature increases occurred during the Great Depression when GHG hardly increased, there is sufficient reason to think that GHG are only part of the forcing behind that 0.8 or 0.9 C temperature increase, meaning climate sensitivity probably is not the 1.8 or 1.6 that I think the doubling of CO2 will show, but something lower.
    If all of that is reasonable, then a skeptic’s distrust of the temperature record is unimportant and probably counterproductive in promoting some sort of climate realism that AGW exists, but not CAGW. For the warmists here, I agree that a remarkable coincidence might possibly have occurred: a reversal of the natural warming trend at the same time that anthropogenic CO2 levels accelerated, thereby masking a high climate sensitivity. Yes, a possible though highly unlikely coincidence. Therefore, warmists, I agree that CAGW is theoretically possible and cannot be ruled out, just as every other trendline (like the coming ice age) cannot be ruled out, due to the paucity of data and understanding we presently have.

    My take-away is that CAGW looks very unlikely, more unlikely every year the temperature continues to flat-line, and that skeptics, by calling attention to their distrust of motivation, are hurting the more important goal of educating the public, politicians, and especially journalists that the longer-term warming trend we’re in is NOT catastrophic.
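    For what it’s worth, the arithmetic in the comment above checks out under the commenter’s own simplifying assumption that warming scales with the logarithm of CO2 concentration; the sketch below just reproduces that back-of-envelope calculation and is not a sensitivity estimate.

```python
import math

# Back-of-envelope check of the comment's arithmetic, assuming (as the
# commenter does) that warming scales with log(CO2). Illustrative only.
pre, now, doubled = 280.0, 400.0, 560.0
warming_so_far = 0.85                                        # deg C, mid-range of the 0.8-0.9 cited

fraction_of_doubling = math.log(now / pre) / math.log(2)     # ~0.51 of a doubling so far
implied_per_doubling = warming_so_far / fraction_of_doubling # ~1.65 deg C per doubling
remaining_to_560 = implied_per_doubling - warming_so_far     # ~0.8 deg C still to come

print(round(fraction_of_doubling, 2),
      round(implied_per_doubling, 2),
      round(remaining_to_560, 2))
```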

    • You are right.

      Whether or not AGW is real, it certainly is not catastrophic.

    • Steven Mosher

      “But is this the battle we should be fighting? Whether 0.8C or 0.9C warming since the beginning of the industrial revolution, we’re talking about a small amount of warming, not catastrophic warming.”

      If we ditched the historical record ALTOGETHER, if we only had the physics of today with no historical data whatsoever, we would still have a good argument to limit C02.

      Fighting over the historical record is a diversionary tactic

      • Talk about confirmation bias.
        “The theory is right, the data must be wrong.”

      • Mosh

        Consensus among policy makers depends on two elements: the physics, and the Mannian view of a constant climate which seems to confirm the physics.

        If you discarded the current temperature record, the physics would be less compelling.

        If we reverted to the real historic record of considerable climate variability the physics would have a hard time surviving in isolation.

        That is the reality of the world being run by politicians and not by scientists.

        Tonyb

    • A fan of *MORE* discourse

      Steven Mosher notes [correctly] “Fighting over the historical record is a diversionary tactic.”

      Your point is entirely correct Steven Mosher!

      Question  Denialists seek to distract voters from … what?

      The Slashdot Answer  The STEM community’s radically progressive economic/technological program Blueprints For Taming the Climate Crisis

      “Here’s what your future will look like if we are to have a shot at preventing devastating climate change. Within about 15 years every new car sold in the United States will be electric … up to 60 percent of power might come from nuclear sources … and coal’s footprint will shrink drastically, perhaps even disappear from the power supply.”

      There’s an alternative carbon-energy future that will line some folks’ pockets … most folks’ *NOT*.

      Your many fine comments are appreciated (by me and many) Steven Mosher!


    • Doug Allen | July 9, 2014 at 3:53 pm | Reply
      “But is this the battle we should be fighting?”

      Thank you! The issue is not whether it is getting warmer or colder.
      The issue is whether CO2 is driving temperature.
      All the rest is an academic exercise with everybody having an opinion.

        • Good point to all who made it. “It’s not getting warmer.” “It is too.” Accountants telling other accountants they can’t count correctly, and each accountant thinking they are the best estimator of the current as well as past situations. While accounting has its uses, most of it is looking back; it’s recording. There’s more to running a business than that.

  208. @jim2 at 11:52 am
    Zeke, et al. I think you guys are doing a good job with some pretty horrible records. … I now appreciate why TOB and instrument type changes cause the positive temperature trend to increase, which was my biggest concern.

    Sure, the case can be made that documented TOB changes imply an adjustment to records to remove a bias, plus added uncertainty because the adjustment is an estimate. Likewise, documented changes to instruments can and should receive an adjustment to remove the bias — and earn an additional error contribution from the uncertainty in the magnitude of the adjustment.

    None of this, however, is justification for the slicing of long-term records into shorter segments. That has been my objection since early 2011. The scalpel is a low-cut filter, a filtering process that removes the lowest frequencies from the signal. The climate signal is low frequency – decades long.

    What appears to me to be minimally discussed is the wholesale decimation and counterfeiting of low-frequency information happening within the BEST process. If you look at what is going on in the BEST process from the Fourier domain, there seem to me to be major losses of critical information content.

    My summary argument remains unchanged after 20 months: [now 39 months]
    1. The natural climate and Global Warming (GW) signals are extremely low frequency, less than a cycle per decade.
    2. A fundamental theorem of Fourier analysis is that the frequency resolution is Δf = Δω/2π = 1/(N·dt), where dt is the sample interval and N·dt is the total length of the digitized signal. (A small numerical sketch of this point is appended at the end of this comment.)
    3. The GW climate signal, therefore, is found in the very lowest frequencies, low multiples of Δf, which can only come from the longest time series.
    4. Any scalpel technique destroys the lowest frequencies in the original data.
    5. Suture techniques recreate long-term digital signals from the short splices.
    6. Sutured signals have in them very low frequency content, low frequencies which could NOT exist in the splices. Therefore the low frequencies, the most important stuff for the climate analysis, must be derived totally from the suture and the surgeon wielding it. From where comes the low-frequency original data to control the results of the analysis?

    …Power vs phase & frequency is the dual formulation of amplitude vs time. There is a one-to-one correspondence. If you apply a filter to eliminate low frequencies in the Fourier domain, and a scalpel does that, where does it ever come back? If there is a process in the Berkeley glue that preserves low frequency from the original data, what is it? And where is the peer-review discussion of its validity?

    From: Stephen Rasey, Dec. 13. 2012, “Circular Logic Not Worth a Millikelvin”

    I believe this is the first time in 1000+ comments that the Fourier domain has been brought up. I think it is important because long records are the preservation of low-frequency information content. Cutting 100-year-long records into 5-18 year segments is applying a low-cut filter to the data.

    High frequencies cannot be used to predict low frequencies, something obvious when looking at signals in the Fourier Domain. Regional homogenization cannot fix the problem because in the regional trends the high frequency information that remains after the scalpel cannot predict the low frequency information cut away.

    If there is a process in the Berkeley regional analysis that preserves low frequency from the original data, what is it?
    And where is the peer-review discussion of its validity?
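    A small numerical sketch of point 2 above, using monthly data as an example; the record lengths are arbitrary and the only point is that shortening a record raises the floor on the frequencies an FFT can resolve.

```python
# Lowest non-zero frequency resolvable by an FFT is 1/(N*dt): shortening the
# record (smaller N) raises that floor. Monthly sampling assumed (dt = 1/12 yr).
dt = 1.0 / 12.0  # years per sample

def lowest_resolvable_freq(n_samples):
    return 1.0 / (n_samples * dt)  # cycles per year

full_record = 115 * 12     # a 115-year monthly record
segment = 15 * 12          # a 15-year post-scalpel segment (illustrative length)

print(lowest_resolvable_freq(full_record))  # ~0.0087 cyc/yr: periods out to ~115 yr
print(lowest_resolvable_freq(segment))      # ~0.067 cyc/yr: nothing slower than ~15 yr
```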

    • Steven Mosher

      “Climate signal is Low Frequency – decades long.”

      what makes you think that?

      careful what data you use to answer that question.

    • Hi Stephen,
      That’s an interesting point, and I think I understand it.

      Are you saying the loss of low frequencies invalidates the current BEST temperature construction?

      If so, how would the recovery of the low frequency component of the temperature record affect the final temperature chart?

      Since the raw and corrected data is available, the suturing and low frequency analysis can still be carried out. Is that correct?

      Has it been carried out?

      • @jim2
        Are you saying the loss of low frequencies invalidates the current BEST temperature construction?

        What I am saying is that from an information theory point of view, and looking at the problem in the Fourier Domain, there seems to be a permanent loss of most important information.

        “Perhaps it can be argued, demonstrated, and proved that somehow the low frequencies were extracted, saved, and returned to the signal intact.” But until I see a peer-reviewed discussion of how real low-frequency content is returned to the final analysis, I think we should treat the low-frequency content as counterfeit — it might look real, but the broken chain of custody harms their case.

        I have a fuller explanation of the Fourier Domain and an equivalent process geophysicists uses to create a seismic inversion profile,
        http://stephenrasey.com/2012/08/cut-away-the-signal-analyze-the-noise/
        About half-way down, and continue to the end.

        I’m a geophysicist. Geophysical seismic processing is heavily dependent upon Fourier Analysis. What I see BEST doing is eliminating low frequencies with the scalpel, performing some magic semi-regional homogenization of the high frequency segments behind the scenes, then returning with a result with “better” low frequency in it. I would sooner believe that the 2nd Law of Thermodynamics could be violated. How did they get something for nothing? How did they throw away low-frequency only to get back “better low frequencies” in the result?

        I explain this because what BEST is attempting to do is very similar to what seismic processors do when they Invert the seismic to obtain a full impedance profile. What must be understood is to get a full inversion you need two things.
        (1) the band pass seismic data for high-frequency detail, and
        (2) the velocity-density profile which provides the low frequency information.
        When we invert, we integrate the seismic data, but that means we integrate the noise, too, so error grows with time. For (2) we get the velocity information from the study of velocities that maximizes the signal-to-noise in the stacked data. Density can be estimated from anticipated rock, depth, and fluid content. It is very model dependent, but it is controlled by the stacking and move-out process and is an independent control on the cumulative inversion error from band pass data in (1).

        What BEST does with the short segments is to map out short term trends that they must integrate over time to create 150 year temperature profiles. But as we know from seismic data, we need an independent source of the low frequency information that is not in the band-pass signal. Seismic processors get this from the Stacking Velocities and anticipated density profile (model). Show me the BEST process that preserves real low frequency climate data from the original temperature records. I don’t see it.

    • @ Stephen Rasey

      “I believe this is the first time in 1000+ comments that the Fourier Domain has been brought up…”

      Jerry Pournelle posted this note from me in Dec 2010. It makes somewhat the same point as you: the sample length being processed by ‘Climate Science’ is totally inappropriate for the periods of the waveforms being sampled:

      “Hello Jerry,

      It strikes me that the climate modelers are doing the equivalent of collecting about 50 samples of an extremely noisy 1 Hz sine wave using an A/D with a 1 MHz sample rate, performing an FFT on the sampled data, and using the results to describe the waveform that they have sampled. If it were done in an EE lab to demonstrate to budding signal analysts the pitfalls of ignoring the warnings of Messrs. Nyquist and Shannon when performing such analysis, it would be mildly amusing. And educational. When the results of the climatic equivalent are being used to justify bringing western civilization to a halt and establishing a world government with essentially infinite power to ensure that it stays halted, it is neither amusing NOR educational.

      Bob Ludwick”

      Also back in 2010 I wrote something very similar here: “………..The second question is, postulating that the temperature record from satellites is absolutely accurate and unfudged, and in light of the fact that climate changes historically occur naturally with periods of hundreds to thousands of years, do you think that the 31 annual data points available from the satellite record are adequate to establish long term climate trends and that the trends are a consequence of human activity?

      It strikes me, maybe without justification, that the climate science community, discounting the political agenda aspect, is doing the equivalent of a budding signal analyst collecting 31 samples of an extremely noisy 1 Hz waveform, using a 1 MHz sample clock, and then using an FFT to characterize the waveform. He would of course get results, but their correlation to the actual waveform would be nebulous at best. In the case of the would-be signal analyst, Messrs. Nyquist and Shannon would be amused. In the case of the climate analysts, whose results are being used to justify planet wide upheaval and the shuffling of trillions of dollars, amusement is not the first emotion that comes to mind.”

      and again in April 2014:

      “@ rhhardin

      “The strange science of climate change seems to reverse everything.”

      I suspect that Messrs Nyquist and Shannon, after spending a few hours contemplating the endless plotting of ‘trends’ by Climate Scientists and their pontificating on the dire consequences thereof–with 97% certainty, no less, would consider the whole field to be comedy comparable to Abbot and Costello’s ‘Who’s on First’, were it not for the fact that this ‘comedy’ is being cited as justification for governments taxing and regulating every human activity that either produces or consumes energy.

      Sampling theorem? WHAT sampling theorem?”

      I was obviously way off topic, however, as neither comment about the sampling theorem stirred a response. (A tiny numerical illustration of the record-length point follows below.)
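      As a hedged toy example of the record-length concern voiced above (numbers invented, no claim about the real data): fit a straight line to 31 annual samples of a pure, slow oscillation and you get a nonzero “trend” that is really just a fragment of the cycle.

```python
import numpy as np

# 31 annual samples of a pure 200-year cycle: a linear fit still reports a
# decadal "trend", even though the long-run trend of the signal is zero.
years = np.arange(1979, 2010)                             # 31 annual samples
cycle = 0.3 * np.sin(2 * np.pi * (years - 1900) / 200.0)  # 200-yr period, 0.3 amplitude
trend_per_decade = np.polyfit(years, cycle, 1)[0] * 10
print(round(trend_per_decade, 3))   # nonzero, despite there being no trend at all
```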

      • @Bob Ludwick at 11:07 pm

        @Stephen Rasey
        Jerry Pournelle posted this note from me in Dec 2010. It makes somewhat the same point as you: the sample length being processed by ‘Climate Science’ is totally inappropriate for the periods of the waveforms being sampled:

        THE Jerry Pournelle? The Sci Fi writer? Cool.
        Thanks for that.

        From Dec. 2010 was it. That was before BEST broke out the scalpel and made the situation much worse from a long wavelength aspect by making short temperature segments fashionable.

        I will endeavor to search for the words “sampling theorem” in the future. Most people who are aware of Nyquist are aware of the dictum of at least two samples per waveform component before aliasing sets in. Fewer remember that the formula also cautions that the length of the time series restricts the maximum wavelength that can be studied.

  209. [mods: Obviously the link in the 4:14 pm should have been closed after “Millikelvin”]

    Think of a Climate Signal “Chain of Custody.”

    Climate signal is the low-frequency content of all temperature records. We remove daily ranges, we remove weather fronts by averaging over a month, and we remove seasonal signals via anomalies. We attempt to remove human-induced biases in the instrumentation, such as TOBS and changes in thermometers. All of it is to preserve and accentuate the climate signal.

    Then BEST cuts a 100 year long record in tiny pieces that can no longer contain the information we seek. It has taken a useful Low-pass signal and turned it into a Band-Pass signal that can no longer reliably tell us what we need to know.

    It is as if we take a suspected murder weapon. We disassemble it. Throw the parts into an evidence bin. Then comes trial, we assemble the weapon from the parts in the bin, replacing a couple of broken springs while we are at it. Then present it at trial as evidence.

    • Steven Mosher

      wrong.

      • Is that the best BEST has to offer?

      • Steven Mosher

        the records you think are intact are not in fact intact.

        They are different stations, given the same name.

        NCDC has done things like combine stations within 20km into one station
        record.

        You think thats an intact record? its not.

        We basically restore the integrity of the data.

        Any “information” you find in records that never should have been combined is a phantom.

    • EDIT:
      It is as if we take a suspected murder weapon. We disassemble it. Throw the parts into an evidence bin with weapons from other crimes. Then comes trial, we assemble the weapon from parts in the bin, replacing a couple of broken springs while we are at it. Then present it at trial as evidence.

    • Sometimes a metaphor is just a metaphor. So it is with your gun.

      As I understand it, BEST splits records specifically to avoid TOBS-type issues. They recognize that a nirvana state of a zillion temperature stations with pristine set-ups, perfect administration and no changes to them from Creation until the present day is a fairy tale. We have the stations we have, with their complicated and tangled history of location changes, equipment changes and time changes. The BEST technique is as good as any, IF your mission is to get to the best, most accurate and reliable answer you can with the raw material available.

      • You could apply a TOBS and instrument change correction to an unbroken record. As long as you include the mean standard error of the adjustments, I have no problem with that.

        It is the slicing and dicing of the records that is causing damage, because it is leaving low frequency information content on the cutting room floor.

        From a practical standpoint, if we have a 1900-2014 temperature record, do we really care what adjustments we make in 1940-1970? The end sections of a time series contribute more to the overall slope and its uncertainty. The slope is less sensitive to what goes on in the middle.

        But if we slice records, we create many more end points. The degrees of freedom have expanded enormously.

        Another practical matter is what short segments do to uncertainty.
        With one long 115-year record, the mean standard error of the slope is well constrained, even with 0.5 deg uncertainty in any given month. But if I create nine 14-year segments out of that record, the estimate of the slope of each segment has huge uncertainty.
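        A quick Monte Carlo sketch of that last point, with purely illustrative numbers (white monthly noise of 0.5 deg C around the same trend; real station noise is of course not white): the spread of the fitted slope for a 14-year segment comes out roughly 20-25 times larger than for a 115-year record.

```python
import numpy as np

# Spread of OLS slope estimates for a long record vs a short segment, each with
# 0.5 deg C of independent monthly noise around a 1 C/century trend (toy setup).
rng = np.random.default_rng(42)

def slope_sd(years, n_trials=2000, noise=0.5, trend=0.01):
    n = years * 12
    t = np.arange(n) / 12.0                      # time in years
    slopes = [np.polyfit(t, trend * t + rng.normal(0, noise, n), 1)[0]
              for _ in range(n_trials)]
    return np.std(slopes)                        # deg C per year

print(slope_sd(115))   # ~0.0004: the century-scale slope is tightly constrained
print(slope_sd(14))    # ~0.01: a 14-year segment's slope is far more uncertain
```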

      • Steven Mosher

        we slice records because THERE ARE DIFFERENT STATIONS

        The mistake was made upstream. When they changed the TOBS, it’s a different station. We are not slicing a record that’s intact; we are quite simply giving the station a different name, as they should have done when they changed it.

        Any “information” that was in a bogus station is not important.

        Imagine I spliced two stations 10 km apart into one record and then told you there was important information in this “record”.

        Suppose now I treated it as two different records, which it is, and this “information” was lost.

        Well DUH, it’s information that should be lost because it is only there because of a naming mistake.

      • Imagine i spliced two stations 10km apart into one record and then told you there was important information in this “record”

        First, are you saying that the bulk of BEST breakpoints are similar to 10 km station moves? Not from what I’ve seen.

        When it comes to a TOBS change, or an instrument change, or a Stevenson screen being painted, or a vacation or hospital gap in a record, BEST has DECIDED that these are different stations.
        That doesn’t mean they ARE different stations, nor should they necessarily be treated as different stations.

        Even a station move within the confines of most airports ought not be a different station for the purposes of climatology. Suppose the station move is just to move it away from airport expansion, to restore an established station back to Class 1 siting. Should it be a separate station? According to BEST, of course. But keeping the same station preserves the longer term record and long length records are valuable.

        Consider the sawtooth case of gradual instrument drift and site contamination, followed by a discontinuous maintenance or recalibration event that restores the quality of the station. If BEST makes a scalpel cut at the maintenance event, then it bakes in the drift as climate signal and discards the just-as-important recalibration information.

        BEST is breakpoint happy. You think no harm can come from adding breakpoints. But you are wrong on this point.

        Breakpoints do harm. The scalpel cuts away low frequency climate signal and keeps the weather, UHI, microsite, and drift noise. It shortens record segments and therefore increases uncertainty greatly.

    • Stephen, have you performed the suturing procedure and run the FT on data prepared that way?

      If so, what did you find?

    • NCDC has done things like combine stations within 20km into one station
      record.

      This is the inappropriate argument from the particular to the general, the fallacy a dicto secundum quid ad dictum simpliciter.

      We basically restore the integrity of the data.
      Is THAT what a scalpel does???
      Is THAT why you homogenize Class 3, 4, 5 stations to determine breakpoints at Class 1 stations???
      Is THAT why the science sees nothing wrong in creating zombie stations??

      Any “information” you find in records that never should have been combined is a phantom.
      But how does anyone know it never should have been combined? Take your word for it? I don’t think so.

      Frankly, your statement is an indictment of all regional homogenization. Stations 20 miles apart “never should have been combined” in a regional grid because the kriged surface “is a phantom”.

      Confirmation bias throughout.

  210. No, I have not done it on BEST data. I wouldn’t know where to start.

    I have done seismic inversions. I know what is needed. I know some Fourier analysis. There are certainly people with far better expertise than me in processing in Fourier space. But I know enough to see the red flag in the process.

    In some ways, your question strikes me like, “have you built the suggested perpetual motion machine and seen if it runs?” Maybe you cannot disprove a perpetual motion machine without building it, but to point out problems in regard to the 1st and 2nd Laws of Thermodynamics is, I think, fair.

    P. Solar Nov. 4, 2011, Climate Audit, “Best Menne Slices”, did a quick FFT on BEST and Hadcrut3.

    OK, Stephen may be right about the longer frequencies. Here is a comparison of the FFT of Hadcrut3 and Berkeley-est.
    http://tinypic.com/r/24qu049/5
    Accepting that Hadcrut is land and sea, it still seems like the longer frequencies have been decimated as Stephen suggested.

    Mind you, even if you could come up with similar Fourier power spectra for two series, it would not prove anything. You would need to compare the phase as well as the frequency content. And compare it to what… another temperature record that has been manipulated?

    I am raising a mathematical issue that I see loss of important information in the Fourier Domain and I do not see where it is authentically preserved and restored in the end. I’m willing to be shown.

    • You would probably have to start with the TOB- and instrument-change-adjusted data, before the BEST methods are applied – so you wouldn’t be dealing with anything to do with BEST. All the data is available for download, and the R statistical package might make the task easier.

      • Are you buying? ;-)

        But your suggestion seeds a plan.
        Identify from BEST the points and magnitudes (mean, RMSE) of TOBS and instrument-change adjustments by station. BEST should have that on file.

        Take and apply those adjustments, with added error, to a sample of the RAW, uncut station data. Do Fourier analysis (not necessarily FFT) on each of the adjusted stations. Compare them with the same stations after the scalpel and regional homogenization.
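        A minimal sketch of the comparison step of that plan, using plain FFT power in the lowest bins; the series names are placeholders, both series are assumed to be the same length, and this is only a rough check, not the full Fourier analysis the plan calls for.

```python
import numpy as np

def low_freq_power(series, n_bins=5):
    """Sum of |FFT|^2 power in the lowest few non-zero frequency bins."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                       # drop the mean (zero-frequency bin)
    power = np.abs(np.fft.rfft(x)) ** 2
    return power[1:1 + n_bins].sum()

# Placeholders (not real data):
# uncut   = monthly anomalies from the adjusted but unsplit station record
# sutured = the same station rebuilt from scalpel segments
# print(low_freq_power(uncut), low_freq_power(sutured))
```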

    • Steven Mosher

      the assumption here is that HADCRUT is a standard that captures a reality.

      in short, you are assuming its a good representation of reality when we know that its not.

      • And compare it to what… another temperature record that has been manipulated?
        The assumption you are making is that one ad hoc comparison is any more than that.

        But in this one-off case, P.Solar agreed that BEST had less power in the low frequencies than HADCRUT3, which is an observation consistent with my concern about unwelcome changes to the Fourier spectrum. No more than that.

        So stop nitpicking and address the central issue.
        The BEST scalpel eliminates low frequency content that is found in the original records. This low frequency data is an important component of any Climate Signal. Where and how does this low frequency data come back through a chain of custody that can be followed?

      • Steven Mosher

        “original records” are NOT original.

        HADCRUT does not work with ORIGINAL RECORDS

        That’s the point you don’t get.

        So ask yourself what is HADCRUT doing to inject this spurious signal into the original records.

      • @Steven Mosher 1:23 pm
        HADCRUT does not work with ORIGINAL RECORDS

        Nowhere did I say it did. In fact I said:
        And compare it to what… another temperature record that has been manipulated?

        Not only are you nitpicking, you are inventing your own nits.

        Address the real issues:
        Breakpoints take longer station records with uncertainty and convert them into shorter station records with a LOSS of LOW FREQUENCY information content and more uncertainty. In addition, the shorter the segment, the greater the uncertainty in the slope of the segment.

        While some breakpoints may be justified and on balance increase signal to noise, that is neither proof nor justification that every breakpoint improves signal to noise.

        The specific may not be used to prove the general.
        The specific can be used to disprove the general.

  211. David Springer

    Important question (still unanswered by Mosher or Ezekiel):

    DocMartyn | July 8, 2014 at 6:47 pm |
    Zeke, can you do me a favor?
    I went to Best and looked up Portland, Oregon
    Berkeley ID#: 174154
    % Primary Name: PORTLAND PORTLAND-TROUTDALE A
    % Record Type: TAVG
    % Country: United States
    % State: OR
    % Latitude: 45.55412 +/- 0.02088
    % Longitude: -122.39996 +/- 0.01671

    http://berkeleyearth.lbl.gov/auto/Stations/TAVG/Text/174154-TAVG-Data.txt

    Then looked at the same station’s written records, for 1950.

    http://www.ncdc.noaa.gov/IPS/lcd/lcd.html?_page=1&state=OR&stationID=24229&_target2=Next+%3E

    The numbers for the monthly average in the official record (in F) do not match the Berkeley Earth database after converting with (F - 32)*(5/9).

    Am I doing something very stupid here?
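
    For anyone following along, the arithmetic being checked is just the standard Fahrenheit-to-Celsius conversion; the numbers below are placeholders, not the actual Portland values:

    # Placeholder check: the NCDC written record reports deg F, the BEST data file
    # stores deg C, so the comparison is C = (F - 32) * 5/9. Values are hypothetical.
    ncdc_monthly_mean_f = 45.0   # hypothetical monthly mean from the 1950 written record
    berkeley_value_c = 7.9       # hypothetical value from the BEST data file
    converted_c = (ncdc_monthly_mean_f - 32.0) * 5.0 / 9.0
    print(converted_c, berkeley_value_c, converted_c - berkeley_value_c)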

    • Only asking Mosher [“Do your own research”, Mosher] or Ezekiel and leaving yourself open to some cold air from a fan in another room.

      • Windchasers

        “Do your own research” Mosher

        I know, right? It’s absolutely *crazy* to expect people to take initiative or to get educated without someone else there, spoonfeeding them. Mosher should quit being so lazy, and do the work for us.

        …sarcasm aside, I wonder how many of the people arguing with Zeke here have bothered to read the references he posted? Based on the arguments I've seen, not many, since many of their objections are already addressed in those references.

      • Steven Mosher

        Windchasers.

        Many of these people don't have a good memory.

        Long ago, when we asked Hansen (well, BADGERED Hansen) for code, the only reasonable objection that people raised was this.

        “We are afraid that you guys only want the code so that you can pester us with stupid questions. ”

        I thought this was a valid concern. My way of handling it was to make a promise to Gavin that I would never ask a question about the code. I would slog through it and figure it out. To me that seemed fair. I just wanted the tools.

        Now of course other people thought that Hansen and others should be there to answer every last question (some asked the same question over and over). I disagreed with them then and I disagree now.

    • Probably because Berkeley is combining past records a bit differently from NCDC. Stations like this have moved a number of times, and sometimes the move triggers a different station ID and sometimes it doesn’t. In the Berkeley approach, the optimal combination of station records isn’t really that important, as they are all treated as different segments for the purposes of creating a regional estimate (as the scalpel will cut stations at any documented station move).

      • For most members of the public, I fear that your explanation for different global temperatures – reported by different research groups – is as “clear as mud”, Zeke.

        The public assumed research groups reported measured temperatures.

        Each ex post facto admission of data adjustment only further weakens public confidence.

      • Steven Mosher

        see below, Zeke.
        it's got multiple ghcnd records, and gsod.

    • Steven Mosher

      key phrase

      “Then looked at the same station’s written records, for 1950.”

      wrong

      The thing Doc doesn't understand is that the site has MULTIPLE SOURCES.
      there is no "official" source, no single source.

      here are ALL the sources


      % Alternate Names: Missing Station ID – 999999-24242
      % PORTLAND TROUTDALE
      % PORTLAND TROUTDALE AP
      % TROUTDALE
      % TROUTDALE 2
      % TROUTDALE AIRPORT
      % TROUTDALE AVN
      % TROUTDALE SUBSTATION
      % TROUTDALE SUBSTN
      % TROUTDALE WB AP
      %
      % IDs: coop – 358631
      % coop – 358632
      % coop – 358634
      % faa – TTD
      % ghcnd – USC00358631
      % ghcnd – USC00358632
      % ghcnd – USC00358634
      % ghcnd – USW00024242
      % gsod – 726985-24242
      % gsod – 726985-99999
      % gsod – 999999-24242
      % icao – KTTD
      % ncdc – 10009634
      % ncdc – 20016386
      % ncdc – 20016387
      % ncdc – 20016537
      % ncdc – 30015766
      % nws – TRTO3
      % usaf – 726985
      % wban – 24242
      %
      % Sources: US First Order Summary of the Day
      % US Cooperative Summary of the Day
      % Global Historical Climatology Network – Daily
      % Global Summary of the Day
      % US Cooperative Summary of the Month
      % Multi-network Metadata System

      So, the first thing that happens is we assemble ALL the sources.

      Then we have to de-dup the sources.

      Then we have to prioritize the sources on a monthly basis.

      That priority logic goes from raw daily to raw monthly.

      So when you look at the berkeley data you would have to look at the file called sources.txt that will tell you on a monthly basis which source was used for the data.

      So the mistake that Doc made was looking at the post-processed file (after combining sources) and comparing it to only ONE of the input sources.
      He should start by looking at the multiple-sources dataset.

      Now it gets even more complicated, because you see all the multiple ncdc sources as well as usaf sources. There are cases where you have two instruments at the same location and records are aliased.

      Another example of a site that is hard to resolve algorithmically is Central Park.
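
      In sketch form, the per-month prioritization amounts to something like the following (a toy illustration with a made-up priority order, not the actual BEST Matlab code; sources.txt records the outcome of the real logic):

      # Toy sketch of merging duplicate reports for one station-month by a
      # hypothetical priority order (raw daily first, then raw monthly, then derived).
      PRIORITY = ["raw_daily", "raw_monthly", "derived_monthly"]  # made-up ordering

      def merge_month(values_by_source):
          """Return the value from the highest-priority source that reported this month."""
          for source in PRIORITY:
              if source in values_by_source:
                  return source, values_by_source[source]
          return None, float("nan")

      # Hypothetical station-month with duplicate reports from two archives:
      print(merge_month({"raw_monthly": 7.8, "derived_monthly": 7.9}))  # -> ('raw_monthly', 7.8)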

      • “Now it gets even more complicated because you see all the multiple ncdc sources as well as usaf sources. there are cases where you have two instruments at the same location and records are aliased.”

        WOW, Steven, it really is complicated! Did you make sure the public and the policy makers knew that?

      • Is there, perhaps, an intermediate file or records that identify for each station and each month the prioritized source used?

        So when you look at the berkeley data you would have to look at the file called sources.txt that will tell you on a monthly basis which source was used for the data.

        Ah well, that’s all I need…. /sarc.

        http://berkeleyearth.org/about-data-set
        Find: sources.txt No matches found.

        http://berkeleyearth.org/analysis-code
        Find: sources.txt No matches found.
        Ah: Source Code and data: download (added Jan, 2013)
        (hmmm. 1.89 GB download, without any indication that sources.txt is in it. And it is 18 months old.)

        Ok, I give up. Where can I download that sources.txt file? I wouldn’t want to make a mistake.

      • Steven Mosher

        “Ok, I give up. Where can I download that sources.txt file? I wouldn’t want to make a mistake.”

        from the data page. duh.

        http://berkeleyearth.org/data

        You'll have to decide what you want to look at:
        the multiple-source data,
        single-valued, or
        QC'd.

        Each zip file contains all the intermediate values and a sources.txt

        I used to maintain a package to help folks, but the key library (bigdata) upstream got deprecated on Windows. So, software-wise, you are on your own.

      • Steven Mosher

        “http://berkeleyearth.org/about-data-set
        Find: sources.txt No matches found.”

        It's in the zip file.

      • Steven Mosher

        So for example you would download this

        http://berkeleyearth.lbl.gov/downloads/TAVG/LATEST%20-%20Quality%20Controlled.zip

        That's a ZIP file. So, when you searched for sources.txt, that was a mistake, because the file is in the ZIP.

        Here is the readme:

        File Generated: 15-Nov-2013 16:28:19
        Dataset Collection: Berkeley Earth Merged Dataset – version 2
        Type: TAVG – Monthly
        Version: LATEST – Quality Controlled
        Number of Records: 40747
        Number of Locations: 40747
        Number of Data Points: 15717007
        Dataset Hash: 67a656aa1b14deb35b5bcc9edebe9fca

        ————————————

        Berkeley Earth Merged Dataset – Version 2

        The Berkeley Earth Merged Dataset is the main dataset used for the analysis
        conducted by the Berkeley Earth project. “Version 2” is the first version
        intended for general external use.

        This dataset has been constructed by merging data from:

        1) Global Historical Climatology Network – Monthly
        2) Global Historical Climatology Network – Daily
        3) US Historical Climatology Network – Monthly
        4) World Monthly Surface Station Climatology
        5) Hadley Centre / Climate Research Unit Data Collection
        6) US Cooperative Summary of the Month
        7) US Cooperative Summary of the Day
        8) US First Order Summary of the Day
        9) Scientific Committee on Antarctic Research
        10) GSN Monthly Summaries from NOAA
        11) Monthly Climatic Data of the World
        12) GCOS Monthly Summaries from DWD
        13) World Weather Records (only those published since 1961)
        14) Colonial Era Weather Archives

        And it is further supplemented with additional metadata from the:

        15) Multi-network Metadata System (from NOAA)
        16) World Meteorological Organization Station Metadata

        Further documentation on each of these source datasets will be provided from
        the Berkeley Earth website (http://www.berkeleyearth.org/) as well as copies of
        each data set placed into the Berkeley Earth format.

        The Berkeley Earth Merged Dataset is provided in several different formats:

        1) “Multi-valued”: This includes all time series from the originating
        datasets. Due to duplication with the same data being reported by multiple
        agencies, on average there will be 3-4 time series reported with each site.
        Only limited quality control flagging has been performed at this stage.

        2) “Single-valued”: Data have been collapsed so that there is only one time
        series per location. Quality control procedures have been completed and their
        output is reported via a series of quality “flags”. Users of this data set
        will have to consider these flags and remove any data they don’t want to use.
        Seasonality is preserved in this data set.

        3) “Quality Controlled”: Same as “Single-valued” except that all values
        flagged as bad via the quality control processes have been removed. This
        dataset is recommended for users that require relatively clean data, want
        seasonality to be preserved, but are willing to tolerate the possibility of
        long-term inhomogeneities in their data.

        4) “Breakpoint Corrected”: Same as “Quality Controlled” except a
        post-processing homogenization step has been applied to correct for apparent
        biasing events affecting the long-term mean or local seasonality. During the
        Berkeley Earth averaging process we compare each station to other stations in
        its local neighborhood which allows us to identify discontinuities and other
        inhomogeneities in the time series for individual weather stations. The
        averaging process is then designed to automatically compensate for various
        biases that may appear to be present. After the average field is constructed,
        it is possible to create a set of estimated bias corrections that suggest what
        the weather station might have reported had apparent biasing events not
        occurred. This data set is recommended for users who want fully quality
        controlled and homogenized station temperature data. This data set is created
        as an output of our averaging process, and is not used as an input.

        5*) “Non-seasonal”: Same as “Single-valued” except that each series has been
        adjusted by removing seasonal fluctuations. This is done by fitting the data
        to an annual cycle, subtracting the result, and then re-adding the annual mean.
        This preserves the spatial variations in annual mean.

        6*) “Non-seasonal / Quality Controlled”: This dataset is the same as
        “Non-seasonal” but further removes any values flagged as bad by the quality
        control processes. This is the dataset used by the Berkeley Earth Averaging
        Methodology and is recommended for users who do not require that seasonality be
        preserved.

        *: The “Non-seasonal” data products are likely to be discontinued in the
        future. That data type was created as a necessary processing step in earlier
        versions of the Berkeley Earth analysis, but they are no longer part of our
        workflow.

        Please refer to the header of this file to note the version of the current
        data. In addition, the header describes the type of data represented in the
        current file set, e.g. TMAX / TMIN / TAVG and Monthly / Daily.

        Note regarding homogeneity: Unlike some data sets produced by others, this data
        product does not include any adjustments to “correct” for apparent
        inhomogeneities or other discontinuities and biases in the data. Whenever
        possible, the data used here is based on the original raw data reports. We
        have included quality control procedures that identify the most highly spurious
        values which can then be removed. However, most users will have to design
        their procedures to make allowances for data that may include anomalous values
        and various spurious effects. The Berkeley averaging procedure includes
        procedures for responding to such issues, but will only be suitable for certain
        types of problems.

        ————————————

        Data Formats

        The Berkeley project collects temperature data and metadata from a variety of
        sources and attempts to represent it in a common format. This information is
        primarily managed via a series of customized Matlab classes, and the data and
        source code to use this system has been made available by Berkeley Earth
        (though the proprietary restrictions on Matlab may limit its availability for
        some users).

        To accommodate other users and formats, we have designed a system to represent
        all of our information via a series of text files. The data is distributed
        across a variety of files. A minimal user wishing to inspect temperature and
        location data will need to examine only two of these files. However, to
        accommodate advanced users we provide additional files including a variety of
        quality control, sourcing, and additional metadata.

        The most important files for new and casual users are highlighted with the
        indicator “***”.

        General dataset files:

        *** README.txt: An overview file describing the nature of the dataset and the
        way the data is distributed across other files. The current file you are
        reading is the README file.

        *** site_detail.txt: An accounting of the metadata associated with each site.
        This file is suitable for most users but omits certain metadata only available
        in the complete file.

        site_complete_detail.txt: Comprehensive collection of site metadata. This
        file provides all available metadata including historical and conflicting
        values. In order to represent all of the possible information, the format of
        this data file is the most complicated, and consequently this file is probably
        not of interest to casual data users.

        site_summary.txt: A brief summary file providing geolocation metadata in a
        simple tab delimited format. This is entirely redundant with the more
        comprehensive site metadata files; however, it is provided as a simple easy to
        use alternative that may be convenient for some users.

        site_flags.txt: A variable length set of integer flags attached to each site
        specifying additional characteristics of the site and the data
        attached to it.

        site_flag_descriptions.txt: Plain text descriptions of the site flags.

        data_flag_descriptions.txt: Plain text descriptions of the per datum flags.

        source_flag_descriptions.txt: Plain text descriptions of the source codes.

        station_changes.txt: A file highlighting the times at which changes in
        station metadata have occurred.

        data_characterization.txt: A file providing summary statistics for each
        station’s record.

        Per data type files:

        *** data.txt: File containing the temperature time series associated with this
        dataset. This file also contains a limited amount of per datum metadata.

        flags.txt: File containing a variable length array of per datum quality
        control, diagnostic, and data history flags.

        sources.txt: File containing a variable length array of per datum source
        archive indicator flags.
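
        Concretely, pulling sources.txt out of one of those zips might look roughly like this (a sketch using the quality-controlled TAVG URL quoted above; the download is large, and internal file names may differ between dataset versions):

        # Sketch: download one Berkeley Earth zip and list where sources.txt sits inside it.
        # The archive is a few hundred MB; the URL is the one quoted earlier in this thread.
        import io
        import urllib.request
        import zipfile

        URL = ("http://berkeleyearth.lbl.gov/downloads/TAVG/"
               "LATEST%20-%20Quality%20Controlled.zip")

        with urllib.request.urlopen(URL) as resp:
            archive = zipfile.ZipFile(io.BytesIO(resp.read()))

        print([name for name in archive.namelist() if name.endswith("sources.txt")])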

      • @Steven Mosher 12:49 pm
        12:09 am: So when you look at the berkeley data you would have to look at the file called sources.txt that will tell you on a monthly basis which source was used for the data.

        Oh, so there are several sources.txt files buried in .zip files of unknown size to download. Thanks for the clarification, Steven.

        BTW, I am downloading the single-valued TAVG zip file. Estimated time is over 10 minutes.
        (Download done – size 221 MB, which expands to 2,700 MB, of which sources.txt is about 631 MB.)
        I note that the name of the zip file is LATEST – Single-valued.zip.
        I suggest that TAVG be prefixed to the name. Wouldn't do to get them confused.

        My ultimate goal is to see how consistent the sources are between TAVG, TMIN and TMAX at each station, and between stations within regions. From your instructions, I assume there is no such file that compares sources for TAVG, TMIN, TMAX.

    • David Springer

      ROFL

      The obfuscation. It burns.

      And we still don’t know the source of the raw data.

      It seemed like such a simple question.

      You're the expert, Mosher. Unwind the spaghetti code mess you created and answer the question. Where did the monthly average come from, specifically? Show the source(s), the numbers, and the calculations.

      • Steven Mosher

        err. looks like you haven't looked at the Matlab.

        I have a request queue.
        Stuff gets prioritized. People who are writing papers get bumped to the top. Graduate students get bumped to the top. Other open source developers get bumped to the top. People who write me email get bumped to the top.

        If I have time left over after that, then I get to work on my own stuff.
        If I have time after that, I might respond to a blog request from a slacker.

        In short, when I asked for code and data from Hansen I made a promise to Gavin: "I will never bug you with a question, just give me the tools." I apply those rules to everyone.

        Folks who have a question have the tools. Beyond that I have a queue. I get paid to service that queue. I get paid to prioritize that list. You ain't on it.
        If you have a question, there's the data, there's the code, knock yourself out. If you want help, submit a request. I will prioritize it and add it to the stack. If you don't like your position in the stack, give some money.

      • But then Hansen never created a thread at Climate Etc. called “Understanding adjustments to temperature data”.

      • David Springer

        It’s more fun and costs less watching you squirm. I can ask and the question looks reasonable. You refuse to answer and it looks evasive.

        What did you boys do to the raw data recorded by an observer in 1950 at Portland-Troutdale such that it's been cooled by 0.7F?

        Do you even know?

  212. Understanding adjustments to temperature data says it all, Zeke.
    And now we understand. You do temperature adjustments. You fabricate estimates.
    Fabrication does not mean lying; it means assembling together.
    Note the English, Mosher ["Man are you dense. Zeke has written a paper describing all this. Personally, I dont look at USHCN and I don't use it for years."], but I am quite happy to fabricate 160 comments on a subject that I haven't used for years.
    When you get unhappy with the term fabrication it seems as if you do have something to hide.
    Perhaps it's the 1.5 C you put the 1930s up by and then do not label your USHCN graph as fabricated or estimated.
    Perhaps Mosher's word, estimated. Mosher says it is all guesses; none of it is correct [true is another meaning of correct for the English Major].
    And now we understand.
    Judith is right, Zeke is to be commended for commentating.
    Without his attempting to correct Lord Voldemort we would never have had the chance to find out about the depth, scope, cruelty and hypocrisy in these changes. The hypocrisy is not in the fabrication. With or without Noble Cause intent, Zeke is right on the maths involved and it does have a lot of uses. The hypocrisy is in not labeling it clearly as an estimate on every USHCN graph.
    It is not the history. It is not fact; it is an estimate. A concatenation of compound-interest aggregations leading to a spurious, fictional, fabricated, unreal 1.5 adjustment to all past early-1900s records that is promoted as truth when factually it is an estimate, a really, really bad estimate.

    • Steven Mosher

      ” You fabricate estimates.”

      yes, Ford fabricates cars.

      Then we test our fabrication.
      It's better than others.

      Pretty simple.

      • Agreed, you have a way with words. That’s why I like reading your comments.
        Even better you tell it as it is and everyone thinks you are disagreeing with them.

  213. Zeke July 5th, 2014 at 5:32 pm

    angech, All CONUS temperature reconstructions are estimates. Adding an “estimated” tag would be meaningless.

    No, it would be defining the product.
    Like a made-in-America tag.
    Not fabricated in the Philippines,
    with a USA flag stamped on it,
    and sold as made in America.

    • Steven Mosher

      all the ingredients are listed,
      and the recipe as well.

      That’s way better than a mere label.

      • @Steven Mosher: 1:29 pm
        all the ingredients are listed,
        and the recipe as well.

        Where? Please supply a direct URL and lookup point.

        Is it your usual "It's in the code"?
        You know that is not what is being asked for. We are looking for a quality estimate attached to the data, which comes from the quality of the ingredients and the page of the cookbook identifying the recipe used for that data. It changes across the data set depending upon what was used in each case.

    • angech, an "estimated" tag would be meaningless.
      A tag that gave more information as to HOW it was estimated would have great meaning.
      Another tag roughly classifying the uncertainty of the estimate is not only useful, but essential for good science.

  214. About all these mostly imbecilic daily and monthly (or whatever) adjustments, changes, infilling, kriging, estimating and zombying applied to decades-long-past history [shades of Orwell's "1984"]: all they will probably do is give some activist scientists of the next couple of generations a very nice living re-altering, re-estimating, re-kriging, re-infilling and re-zombying all those carefully adjusted (etc., etc.) historical temperature records of today all over again.

    Deeply isolated in their own tiny academic bubble, only talking to like-minded individuals also inside that bubble where the real world rarely intrudes, I doubt that very many of these scientists realise just how stupid, even imbecilic, and disposable they are starting to appear to the ordinary citizen on the street, particularly when they try to sell a bill of goods like those adjusted (etc., etc.) temperatures from a half dozen or more decades past as the real temperatures of the times, and then change those same temperatures or remove them the next day or week or whatever, and then change them yet again and again.
    When will it stop?
    Nobody has given any answer to that question, and the longer the alterations, changes, adjustments and so on go on, the further from the real temperatures of the times the "adjusted / estimated" temperatures will become, and the further down into the basement of public opinion the public's estimation of science and scientists will sink.

    • So in your case nothing has soaked in. Everyone has a bubble, even ROM.

    • > Nobody has given any answer to that question [when will the “adjustments” stop?] and the longer the alterations, changes , adjustments and etc and etc go on the further from the real temperatures of the times the “adjusted / estimated” temperatures will become and the further down into the basement of public opinion the public’s estimations of science and scientists will sink.

      In case someone would really be amused by a concordance, I believe this request to answer this question should be in it. That ROM expresses that request is secondary. So that would be:

      When will we stop adjusting data? (ROM, July 9, 2014 at 8:52 pm)

      The "When will we stop adjusting data?" is just a suggested presentation. The maker of the concordance is free to change it.

      If this question appears more than once in a resource, then the citation could be added in the concordance.

      ***

      This kind of concordance might be more informative than counting the number of comments a commenter contributed to a thread.

    • A fan of *MORE* discourse

      ROM deplores folks who are "Deeply isolated in their own tiny bubble only talking to like minded individuals also inside of that bubble".

      In the words of The King … Veeeeevvvvvaaaaaaaaahhhhh … Las Vegas!

      Serious young climate-science students can perceive “the bubble” all-too-plainly, eh Climate Etc readers?


  215. bit chilly

    A small point to add to angech's last comment: I do not recall seeing the term reconstruction on any of the images in the mainstream media used to inform the general public of temperature trends. I understand this will be second nature to those working in the field, but perhaps it should be emphasized to the mainstream media, various government departments and the general public.

    • Analysis, Reanalysis, Construction, Reconstruction, Estimate, Prediction, Projection, Expectation, Anticipation … Anticipaa a tiiiooon …

    • Steven Mosher

      well, that would be nice, but I don't think it will happen. The land-ocean INDEX is commonly referred to as an average temperature, and explaining to people that this is actually a prediction or estimation is even more difficult. For example, consider the CPI.

      • @Steven Mosher 12:42 pm
        For example, consider CPI.
        You mean the Consumer Price Index? If not, what?
        Not only can you find the definition, but you can find much discussion on how it has been turned into a misleading statistic.

        Just label it as such, and I will agree with you and Zeke that your reconstructions are good as reconstructions, just not as real past temperatures.

  216. From Rasey at 7:17 pm
    Identify from BEST the points and magnitude (mean, RMSE) of TOBS and instrument change adjustments by station. BEST should have that on file.
    Does BEST have such a file?


  218. From Dougmanxx above:

    “The weather in 1934 keeps changing!”

    now I have a question:

    Using current methods, are the temperatures from 1934 approaching some sort of equilibrium? Or are they just resonating in some fashion?

    As the dataset gets longer, one might expect a trend. Something asymptotic, perhaps, which might make at least the past predictable?

    Or am I just naive?

  219. KenW | July 10, 2014 at 3:18 am From Dougmanxx above:
    Using current methods, are the temperatures from 1934 approaching some sort of equilibrium?

    How could we possibly tell? Some just keep cooling; others go down and then back up and then down again.
    It is a mess with no credibility at all, despite the very good presentation by Zeke.
    Matthew R Marler says that the temperatures as recorded in the past were wrong. TOB suggests that the odd record here or there could have been wrong depending on certain weather conditions, but as they have no idea of the prevailing weather conditions at each station they adjust every record of every station "just in case".
    So beware next time you go to the Dentist, if he finds a cavity in a tooth, he may decide to drill and fill them all “Just in case” as it is good scientific practice.

    • Well, Dougmanxx shows that the temperature in Circleville, Ohio in 1934 is apparently still oscillating. (In '34 my father was a little kid.)

      My question is, have the oscillations gotten bigger or smaller lately?
      And, might they be converging on some value?
      In which case we might predict where they will land someday?

      Of course, if they aren’t converging, then our current measurements must be getting worse, or the applied corrections are insane.

      On the other hand, if they hit a resonant frequency, then we might all have frozen and burned up long ago anyway.

  220. Zeke Hausfather wrote on independent confirmation of global warming:

    There was a recent paper on this subject by Anderson et al: http://www.agu.org/pubs/crossref/pip/2012GL054271.shtml

    There is another paper which provides an independent confirmation of global land warming without using any measurements from meteorological stations. The authors derive the temperatures over land from other physical variables. Thus, any artefacts in the measured temperature data, or any allegedly introduced by the adjustments and supposedly causing artificial warming that wasn't real, as AGW-"Skeptics" claim, cannot objectively have influenced the results of the analysis. Nevertheless, the agreement with the results from the various surface temperature analyses that do use the data from meteorological stations is very good:

    Compo et al., GRL (2013), http://dx.doi.org/10.1002/grl.50425
    http://onlinelibrary.wiley.com/doi/10.1002/grl.50425/pdf

    • Jan, you’re fighting the wrong battle. The question is not whether there was warming, but how much and when? Eschewing the direct measurements of the subject, and using proxies can only give a general confirmation of sign. Magnitude is still questionable.

      They also used observational data: "…inferred them from observations of barometric pressure, sea surface temperature, and sea-ice concentration using a physically based data assimilation system called the 20th Century Reanalysis." If the original observed measurements are suspect, and we already know SST measurements are less accurate than those on land, how much confidence is there in this study?

      • “The question is not whether there was warming, but how much and when?”

        Why is the question not whether there was warming?

        How do you know there was warming?

      • @lolwot 2:42 pm
        Given that the answer to "How much and when?" can be a time series with positive and negative numbers, it is a far superior question to "Whether there was warming?", which is at best indeterminately probabilistic or at worst binary.

        "Whether there was warming" is a leading question. It doesn't ask "how much?" It doesn't ask "how frequently?"

        It doesn't ask "whether there was cooling?" over periods within the same time interval.

      • I don’t know Stephen, I think you and others are still stuck on the question whether the world has warmed since 1900.

        Maybe you should figure that one out between yourselves first before moving on to more specific questions about whens and how muchs.

      • @lolwot at 3:04 pm |
        I don’t know Stephen
        You are right. You don’t know.
        You don’t know me.
        You don’t know what I think.
        You don’t know what I know and have experienced.

      • So you claim the world has warmed since 1900. What’s your evidence of that? Surely not the unreliable station records *gasp*

      • @lolwot at 6:53 pm |
        So you claim the world has warmed since 1900.
        I have never made that claim either.
        You are trying to put words in the mouths of others. Don’t do that.

        Possibilities:
        1. I claim “A”
        2. I claim “not A”
        3. I claim neither “A” nor “not A”, because “A” is an ill-formed proposition.
        4. I claim the uncertainties of “A” and “not A” and “Other” are too large to make any other claim. The Null Hypothesis is not rejected.
        5. I make no claim at all through lack of reliable information.
        I make no claim that 1-5 are the only possibilities.

        Now, how am I to bet?
        If "A" is "the world has warmed by at least 0.01 deg C since 1900",
        I’d lay my probabilities as (15%, 0%, 5%, 60%, 20%) for possibilities (1,2,3,4,5)

    • “We have ignored all air temperature observations and instead inferred them”

      lol

      Warmer Science just gets better and better. Observations of the phenomenon in question are ignored.

      Andrew

    • Interesting paper. I'm influenced, but I find it hard to stomach the model being completely independent, since the group itself is not independent.
      I'm not making accusations; I've handed my own working code off to someone with great confidence and seen it blow up in their independent hands…

      • Sorry, but welcome to what happens when scientists enter the political arena…

    • Steven Mosher

      Ya, I think I saw that in poster form ( Wahl was on it )

      “The thermometer-based global surface temperature time series (GST) commands a prominent role in the evidence for global warming, yet this record has considerable uncertainty. An independent record, longer and with better geographic coverage, would be valuable in understanding recent change in the context of natural variability. We compiled the paleo index from 170 temperature-sensitive proxy time series (corals, ice cores, speleothems, lake and ocean sediments, historical documents). Each series was normalized to produce index values of change relative to a 1901-2000 base period; the index values were then averaged. From 1880 to 1995, the index trends significantly upward, similar to the GST. Smaller-scale aspects of the GST including two warming trends and a warm interval during the 1940s are also observed in the paleo index. The paleo index continuously extends back to 1730 with 66 records. The upward trend appears to begin in the early 19th century but the year-to-year variability is large and the 1730-1929 trend is not significant at the p<0.05 level. In addition to its value in vetting the thermometer-based record, our approach shows the potential of the un-calibrated paleo archive in understanding environmental change; this approach can be applied to aspects of environmental change where the instrumental record is even shorter (ocean pH, sea ice, hydrologic extremes).

      • “yet this record has considerable uncertainty”

        Ya think?

        Noooo…

        Andrew

      • Steven Mosher,

        The abstract you are quoting probably belongs to the work by Anderson et al., referenced by Zeke Hausfather. Wahl is one of the paper's authors. The study by Compo et al., which I referenced here, doesn't use any proxy data for its confirmation of global land warming, which is independent of any measured air temperature data and their alleged artefacts.

      • David Springer

        “The upward trend appears to begin in the early 19th century”

        The Little Ice Age appears to end in the mid 19th century.

        Maybe the two are related?

      • David Springer

        Cue the Little Ice Age deniers.

      • springer, “The Little Ice Age appears to end in the mid 19th century.

        Maybe the two are related?”

        nah, can't be. That would reduce the evil human impact by two or more and indicate that the oceans have an energy storage mechanism unrelated to atmospheric CO2. That would take most of the alarm out of greenhouse gas warming.

      • David Springer

        Are we having fun yet, Dallas?

        To be quite honest I could go for some El Nino about now. La Nada isn’t cutting it for refilling our lakes & aquifers in US Southwest. Rainfall is enough to keep up with demand. Our excess comes from El Nino and some ass-kicking hurricane rain bands neither of which have been visitors for going on a decade.

    • Jan P Perlwitz,
      You are much too optimistic.
      For many years I have launched appeals for raw proxy data with annual resolution that would confirm the alleged evolution of temperatures in the twentieth century. No result so far.
      I reiterate.

    • A nice example of what can be found in reality:
      http://imageshack.us/a/img21/1076/polar2.png

      • This link does not contain any information on the source of the graphic, nor any information on the sources of the data used to produce it, what data were used specifically, or the methodology applied for the smoothing. The graphic is missing confidence intervals around the data, which are needed to evaluate whether the differences are just random features or a systematic deviation. And how does it make sense to compare UAH data with surface temperature proxies? The satellite data are not measurements of the surface temperature.

        In short, the graphic doesn't allow any substantial conclusion to be drawn from it.

      • Jan P Perlwitz,
        From what I read, you are not very familiar with the current state of the proxies debate. You should follow the post about this topic on ClimateAudit.

        What I am looking for : “raw data of proxies with annual resolution that would confirm the alleged evolution of temperatures in the twentieth century”.

        I would be really happy that you give me a specific reference to such a case.

        With regard to the graph of raw data from Briffa et al. 2013, I agree that the image does not give complete information, although the essentials are there. If you are interested in this case, you can consult this link:
        http://noconsensus.wordpress.com/2014/02/26/confirmation-of-phis-reconstruction
        Also read the comments. This post by Jeff Id follows others on the same subject.

    • Jan
      Use bristlecones.
      Please, please, use bristlecones.
      You know, they showed the 1930’s were heating up real fast.
      Like you said.
      Then they showed it heated up even faster from the 1950’s on.
      They is real independent confirmation of global warming.
      And they don’t show no pause, no siree.
      Go for the bristlecones, Jan, you're on a real winner here.
      I’m with you all the way.
      Who needs piddly glass thermometers anyway?

      • angech,

        The study to which I referenced doesn’t use any proxy data.

        And you AGW-"Skeptics" are the ones who want the surface temperature analyses that use measurements from meteorological stations to be all wrong, because of artefacts in the raw temperature data or because of artificial warming trends allegedly introduced by quality control and/or homogenization procedures. No? I am not the one here making these kinds of claims.

        You say,

        Please,please, use bristle ones.
        You know, they showed the 1930′s were heating up real fast.
        Like you said.

        I am supposed to have said this? Where and when? Please provide a quote and proof of source.

  221. Climate Science 1, Climate Deniers 0

    • What the hell is that supposed to mean? This is not a game. Whether the general public get it or not, climate science is driving massive changes the world over. All the money being spent, from scientific research to installing wind turbines etc., is public money.

      This is a resource in short supply, whether you are a member of the public or the government. Some of it is mine. I want to know it is being spent correctly, based on sound recommendations based on sound science.

      I understand Steven Mosher's frustration; I believe he is an honorable man attempting to do the very best he can. What you need to remember is that might just not be good enough. All the math and statistics in the world can only get you so far: if the raw data is not fit for purpose, it should not be used for any purpose.

  222. This is sort of off-topic, but since there’s been a lot of discussion of BEST already, I want to comment on something. All these comments about BEST’s code and data being available are starting to annoy me. Last year, the same message was being touted, yet this is what Steven Mosher told me when I was trying to reconcile contradictory descriptions of the BEST methodology:

    The approach has changed substantially from then. Until the code is published its going to be very hard for you to follow any discussions or descriptions.

    The reality is whether or not BEST’s code and data for any particular set of results are available is a crap shoot. There’s basically no way to associate any particular code and data with any particular results they’ve published as they’ve overwritten old files in many cases, and in other cases, simply not bothered to state what files were used. Additionally, BEST has freely promoted contradictory information about its data and results, makes changes to both which are (nearly?) impossible to check and fails to inform people of changes in its methodology while referring people back to the outdated descriptions of that methodology.

    But then, most of this discussion of BEST is due to Steven Mosher. This is the same Mosher who, when I pointed out apparent contradictions in how the BEST methodology was described and asked for clarification, acted like I was an idiot and kept insisting I was wrong. It was only after a great deal of grief that he acknowledged my "confusion" was due to a change in the BEST methodology that the BEST website, figures and data files had not been updated to reflect. Even then, he still insisted I was wrong and stupid.

    This was along with a variety of other oddities, like Mosher having written a post which said:

    Finally, we’ve improved the estimation and removal of seasonality in the kriging process.

    But later giving me grief on the basis:

    As I said brandon seasonality is “removed” or rather estimated before krigging.

    BEST’s project may be high quality as far as the statistics go, but it seems they don’t understand how to handle public relations. Leaving aside my belief Mosher is one of the worst spokesmen possible, BEST hasn’t managed to do simple things like: 1) Update the descriptions of its data and methodology on its website when it makes changes; 2) Specify which data and code was used to produce which results. The latter is especially troubling as anyone reading the BEST papers cannot possibly hope to reproduce the results of those papers because there’s no way to know what data or code was used for those papers.

    • @Brandon Shollenberger July 10 at 2:39 pm
      cannot possibly hope to reproduce the results of those papers because there’s no way to know what data or code was used for those papers.

      I don’t know about “no way”, but it would be a monumental job given the way the files are organized.

      It might be possible to recreate their results with their data and their code, but that would only be recreating any mistakes in the code and theory.

      That's why I'm harping on uncertainty analysis and Fourier-domain issues. These are theoretical aspects that can be used as independent tests on processes and conclusions.

      It is a good post, Brandon. It deserves a “Thank You.”

      • Stephen Rasey, thanks. The problem I have is I’ve seen changes in the BEST results on at least three different occasions which clearly indicated a change in their code. In all three cases, there was no public disclosure of that change. Even if you could get all the different versions of the code and data they’ve created, how would you tell which went with what? And even if you could, not all of their results were saved. They’ve overwritten results. I guess if you had each iteration of the code and data, you could run each and see what the results were. Short of that though, I don’t see how it’d be possible.

        By the way, you might be interested in a post I uploaded a short while ago. I used a much simpler test of the BEST results to show why I don’t trust them.

      • WebHubTelescope

        I will let Brandon squirm for a good long while as he tries to figure out what he has done wrong:
        http://imageshack.com/a/img856/7997/uh.gif

        I tell you what, you need more than a CoCo or JuCo degree to compete at this level.

        Mosh will get a huge laugh out of this one. He has just got to see this.

      • WebHubTelescope, I have no idea why you think I would “squirm for a good long while” after you post a single graph without any explanation of how it was made. If you can’t be bothered to even say what error I supposedly made, why would anyone assume I made one?

        Heck, you clearly didn’t even use the data I used. What kind of error is that supposed to indicate?

      • WebHubTelescope

        It looks like you have yourself a problem there, don’t you?

        Why should we tell you what you did wrong?

        You have exactly zero influence on science either way, and you are simply an impediment that can be easily ignored by those people that actually know what they are doing.

      • WebHubTelescope, I posted a comparison, explaining how I generated it with links to the source data I used. You posted a comparison between data that is obviously not the same as I used, without explaining what data it is, and you apparently expect people to just assume I screwed up. Given nobody could possibly verify your claim, and anybody could verify mine, I have no idea what makes you think anyone would listen to you.

      • Don Monfort

        Brandon, Frank Lansner’s analysis is interesting. It seems to indicate BEST is generously smearing the warmth around. Mosher and Zeke don’t want to talk about it (unless I missed something in this huge thread). What do you think of Frank’s work?

      • WebHubTelescope


        Given nobody could possibly verify your claim, and anybody could verify mine, I have no idea what makes you think anyone would listen to you.

        Is there something wrong with you?
        You did something obviously incorrectly and now you want somebody to help patch up the mess you made?

        You made your bed and now ….

      • Don Monfort, that post seemed interesting, but after a quick skim, it seemed to be mostly about what data BEST uses. I didn’t see anything which really addressed how the BEST methodology functions. I’ll read it in detail later today and see if I missed things.

        If not, he’s examining an important issue, but it’s a different issue than the ones I’ve looked at. That means I’d need to spend some time going over background material before I could pass any judgment. It does look interesting though. Thanks for pointing me to it.

        WebHubTelescope, if there’s something wrong with me, nobody could possibly know because you’re refusing to say what I supposedly did wrong. Jumping up and down shouting people are wrong while hurling invective won’t convince anyone of anything, except that your comments are a waste of time and space.

        If I did something wrong, say what was wrong about it.

      • Reality 1

        Webby -1000

        Andrew

      • WebHubTelescope


        if there’s something wrong with me, nobody could possibly know because you’re refusing to say what I supposedly did wrong. Jumping up and down shouting people are wrong while hurling invective won’t convince anyone of anything, except that your comments are a waste of time and space.

        If I did something wrong, say what was wrong about it.

        Since you are not coming clean, it is looking more and more like you are intent on turning your analysis goof into a fabrication of results and using that to smear the BEST team.

        If you keep it up you will be invited to the next klown show by the Heartland Institute. They appreciate people like you.

        Willard will also be interested in what Chewbacca has been up to.

      • Don Monfort, I had a bit of time to look at that link, and I definitely think it is interesting. I think the implications of nefariousness on BEST’s part are unfortunate though. It says a number of things which imply BEST avoids using particular data to achieve desired results, like:

        Best claim that UHI plays no role. But remember results for all 11 countries analysed; First BEST first avoids the cold trended stations (by deselecting or warm-adjusting OAS stations) and THEN they compare the remaining warm trended OAA stations with city stations. It is on this basis that BEST concludes that UHI is not an issue in climate data.

        But from what I’ve seen, there’s no basis for claiming this. It seems BEST simply doesn’t have access to certain data due to how and to who it is made available. They don’t deserve this sort of treatment for something like that. It’s certainly good to examine how the data not used by BEST would affect their results, but that doesn’t require implying nefarious intent.

        Additionally, I don’t think his conclusions about the data are justified. Even if we accept the idea coastal stations in the BEST network are biased to include ones with warming trends instead of non-warming trends, it doesn’t follow from that that the global record must be significantly affected. It certainly doesn’t support an idea like an option he suggested: “[L]and areas with little noise from ocean air trends show no heating after around 1930.”

        In other words, it looks like Frank Lansner may have found a legitimate problem (data availability issues causing BEST to have biased samples in coastal areas), but I don’t think the conclusions he draws from it are justified. I’d like to see the issue examined in more detail by someone who can avoid over-interpreting their results.

      • WebHubTelescope, I don’t know what you are smoking:

        Since you are not coming clean, it is looking more and more like you are intent in turning your analysis goof into a fabrication of results and use that to smear the BEST team.

        But there’s nothing for me to come clean about. I described what I did. If anyone had asked, I would have provided code which replicates my results. As far as I can see, there’s no legitimate question about what I did.

        But you’ve consistently refused to provide any detail or information about what you did. As far as I can tell, the only person “not coming clean” is you.

      • WebHubTelescope

        Hey Brandon, I submitted your blog post to the Internet Archive Wayback Machine so that the evidence for your data crimes will live on.
        https://web.archive.org/web/20140711202245/http://hiizuru.wordpress.com/2014/07/10/is-best-really-the-best/

        No need to go back and correct your graph now. Also it makes a good entry for the Field Guide to Climate Clowns.

        And to Bad Andrew: I am evidently like a Bad Cold or a Bad Dream to you, something that you can’t seem to shake. Recommend seeing a doctor or a shrink.

      • WebHubTelescope, um, okay? I’ll get right on not doing what I never would have done. Supposing I did make a mistake, and god knows why we’d suppose that, I’d acknowledge it, post a correction and leave the original material online. I’ve never used editing to cover up a mistake larger than a typo.

        But by all means, keep saying and doing useless things rather than actually participating in any sort of discussion. I’m sure your snide and petty behavior will do something for you, even if it won’t do anything for anybody else.

      • WebHubTelescope

        Listen carefully Brandon. This is not that hard. Just take the data and make a graph correctly. That was your obvious mistake. You did not do it correctly.

        I guess the jury is still out whether you did this intentionally or not. So please will you just fix it? Do this before Mosh or Zeke come around and they will be easier on you.

      • Don Monfort

        Brandon, thanks for having a look and for your comments. I don't get the impression that Frank is accusing BEST of malfeasance. My take is that Frank is mostly criticizing BEST for using the same data as the other purveyors of temperature reconstructions and getting the same results, and for not making the effort that he has made to liberate a ton of original data that makes a difference. It looks to me like he is right about the countries he has analyzed, so far. It will be interesting to see where he goes from here.

      • Don Monfort

        webby, webby, webby

        Please tell us wtf it is you are going on about. It seems really important, to you.

      • WebHubTelescope

        Donny, One thing I have learned is that it takes more work to actually figure out how some F-student did something wrong than to just point out that he obviously made a mistake somewhere. My point is that whatever Brandon did is wrong and we shouldn’t reverse engineer the mess that he made. You actually can’t reverse engineer if he did something deliberately.

        If done correctly, the BEST and GISS data sets for Springfield Illinois do lie on top of one another, with nearly identical trends. If Brandon keeps on abnegating this fact, he just digs himself a deeper and deeper hole.

      • WebHubTelescope, I directed people to the data. Anyone can check it and see my graphs are accurate representations of that data. Anyone who looks at your graphs will see they are not accurate representations of that data. It is not my fault you are too lazy to check my work before repeatedly saying I’m wrong and incompetent.

        Don Monfort, I might have read into his post more than there was, but I don't think so. He clearly had a derogatory tone toward BEST (e.g. his photoshopped image). The only question is what form the derogation took. Regardless, my understanding is that BEST requires the data it uses to be freely available online with a minimum monthly resolution. I don't think he found much (if any) data which meets BEST's requirements.

        As for WebHubTelescope, I’d just ignore him until he shows his work. If he does, it’ll be obvious why his comments here have been ridiculous. If he doesn’t… well, they’ll still be ridiculous, just for a different reason!


      • Anyone who looks at your graphs will see they are not accurate representations of that data.

        How can my chart not be an accurate representation of the data? One of the underlying charts is the graph as is from the source.

        First thing you should do is to admit which chart of yours is wrong. I can give you that much. Is it GISS or BEST?

        You are the one accusing BEST of not “getting a right answer”. And then you accuse me of being lazy. You better own up to what you did wrong.

      • WHT wrote

        “How can my chart not be an accurate representation of the data? One of the underlying charts is the graph as is from the source.”

        Please link to that “as is” one.
        And the rest?

      • Matthew R Marler

        WebHubTelescope: Why should we tell you what you did wrong?

        You are not just telling him, you are telling the rest of us. And you are giving him the opportunity to clarify to the rest of us what he did that looks like an error, or to admit it, or to rebut you.

        As it is, you have merely written an unsubstantiated slur. Any denizen can tell that.

      • Matthew R Marler, I wouldn’t expect WebHubTelescope to be clear about what he did. It’d be impossible to describe what he did, read what I did and not see why his criticisms have been completely stupid.

        I’m hoping I’m wrong though. I’m hoping WebHubTelescope will provide us all a link to the graph he used and a description to go with it. Then anyone who bothers to read what’s being discussed can laugh at how ridiculous his comments have been.

        In the meantime, it’s fun to watch him flail around blindly, showing he has no idea what he’s doing.

      • “Confess your crimes!”

        …sounds like a Scientologist.

      • Don Monfort

        This is what webby is doing:

        https://www.youtube.com/watch?v=bkjsN-J27aU

        We are waiting for the spotlight to go on.

      • Matthew R Marler

        Brandon Shollenberger: Matthew R Marler, I wouldn’t expect WebHubTelescope to be clear about what he did. It’d be impossible to describe what he did, read what I did and not see why his criticisms have been completely stupid.

        Unless there is something in his posts that he can explain, you have accurately presented another example of why lots of people do not trust the results of the adjustments.

      • Hey guys. I found WebHubTelescope’s behavior here so funny I wrote a post about it. It explains what stupid mistake he made. For those of you who don’t want to read it, here’s a short version.

        I compared GISS and BEST gridded data for the area I live in. GISS uses 2º x 2º grids while BEST uses 1º x 1º grids. My original post pointed out that means they’re not completely comparable (the GISS gridcell I used covers a larger area than the BEST gridcell I used). WebHubTelescope somehow ignored this, and instead of looking at gridded data, he looked at data for a single city (Springfield, Illinois).

        The GISS gridcell I used covers something like twenty cities and who knows how many towns. It covers something like a quarter of the state. Of course WebHubTelescope found he doesn’t get the same results when he uses a single city’s measurements!

        I’ll put it more simply. WebHubTelescope repeatedly insulted me, suggesting I’m incompetent (and maybe dishonest) because if you use a single city’s data instead of data for 1/4th of the state of Illinois, you get different results.
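        A minimal R sketch of this kind of gridcell extraction, for anyone who wants to try the comparison themselves. The file name and variable names below (gridded_anomalies.nc, longitude, latitude, temperature) are placeholders rather than the actual names in the BEST or GISS NetCDF files; inspect the real names with print(nc) after opening a file.

        library(ncdf4)

        # Open a gridded anomaly file (placeholder file name).
        nc <- nc_open("gridded_anomalies.nc")
        print(nc)                            # check the actual variable names first

        lon  <- ncvar_get(nc, "longitude")   # assumed variable names; adjust to the file
        lat  <- ncvar_get(nc, "latitude")
        anom <- ncvar_get(nc, "temperature") # assumed layout: lon x lat x time

        # Pick the gridcell containing a target point (Springfield, IL here).
        target_lon <- -89.65
        target_lat <- 39.80
        i <- which.min(abs(lon - target_lon))
        j <- which.min(abs(lat - target_lat))

        series <- anom[i, j, ]               # monthly anomalies for that one cell
        plot(series, type = "l", xlab = "month index", ylab = "anomaly (C)")

        nc_close(nc)

        Doing this once on a 1º x 1º product and once on a 2º x 2º product gives two series like the ones compared above; the 2º cell simply covers roughly four times the area of the 1º cell.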

      • Matthew R Marler, I addressed WebHubTelescope’s comments in response to my post just above. I think just about everyone can agree he was way off the rails. As for your suggestion I:

        have accurately presented another example of why lots of people do not trust the results of the adjustments

        I should point out this particular problem is rather humorous. BEST smears information around on the spatial dimensions quite a bit. Part of how it manages to do that is its empirical breakpoint algorithm.

        By cutting up records when they disagree too much with regional trends, BEST forces its data to be more homogeneous over larger areas. It’s trivially easy to see the “empirical breakpoints” often have no justification external to the scalpeling itself. That means they’re forcing their data to be more homogeneous over larger areas by introducing artificial breakpoints. In other words, they’re massively overfitting their data.

        Unless I’m missing something, the oft-touted scalpel method BEST uses has been implemented in a way that makes their results worse. I think that’s hilarious.

      • Webby has left the building. I hope he comes back. We will pretend that nothing has happened, webby. We promise.

      • Springfield for GISS matches Springfield for BEST.
        I have no idea how Brandon Shollenberger messed it up, but he did.

        Anybody can do the comparison, except for him.

      • ClimateGuy | July 11, 2014 at 9:30 pm | said

        “Confess your crimes!”

        …sounds like a Scientologist.

        Note how he places in quotes something that no one said in this thread, but is commonly heard during the London Dungeons tour. That’s real journalistic integrity right there.

      • This is priceless. I specifically explained what WebHubTelescope did wrong, comparing city level records when I compared gridded data, and yet, he promptly follows up with:

        Springfield for GISS matches Springfield for BEST.
        I have no idea how Brandon Shollenberger messed it up, but he did.

        Anybody can do the comparison, except for him.

      • So did Brandon Shollenberger detrend the GISS data for Springfield on purpose, or did he accidentally insert a line with a negative slope in to the data?

        That’s why I stated that it is often extremely difficult to figure out how a student can screw up an analysis. There is typically one way to solve a problem concisely and correctly, while there are millions of ways to creatively get something wrong. When I had the chore of grading assignments and exams in the past, I easily spent an order of magnitude more time trying to point out to the poor student where his derivation went south, as opposed to the student that did it right. Correcting papers by A-students is a breeze, while for the laggards I would often just give up.

        So here we have the case of a Brandon Shollenberger who claims that the GISS data for Springfield shows very little upward trend, while the BEST version of Springfield shows much more warming. He then casts aspersions on the BEST team when he says “At what scale does BEST stop giving imaginary results and start getting a right answer? It doesn’t at the city level”

        Yet the actual GISS results for Springfield do match the BEST results for Springfield, in stark contrast to what Brandon is implying based on his incorrect GISS curve. So the questions remain: is his data doctored intentionally? or how can one go about accidentally adding a negative trend offset into the dataset?

        It may perhaps be that he was intending to add a constant offset shift to the data (which is fair), but instead he accidentally accumulated the offset. I suppose it can happen, but like I said, I am not in the business of fixing why someone can’t do the job. If Brandon needs help perhaps he should apply to a degreed science program at a major university.

      • WHT, as Brandon stated, he’s looking at the gridded product, which integrates over individual cities. What’s your major malfunction here, that you think comparing Springfield IL from both series, tests the same thing?

      • WHT, by the way, if this is how you explain errors to your students, you have no business being an instructor.

      • What you are doing is really pathetic, webby. Man up and move on.


      • Carrick | July 12, 2014 at 8:55 am |

        WHT, as Brandon stated, he’s looking at the gridded product, which integrates over individual cities. What’s your major malfunction here, that you think comparing Springfield IL from both series, tests the same thing?

        Carrick, Do you have a reading comprehension problem? This fellow Brandon Shollenberger said “At what scale does BEST stop giving imaginary results and start getting a right answer? It doesn’t at the city level. “

        He is casting aspersions that BEST does not give correct answers at the city level, based on what we can only assume in the city that he lives. And he mentions Springfield specifically. And he mentions a ” five year smooth”.

        Retroactively he calls me “lazy”, so what else do I do but search for BEST’s Springfield record with a 5-year average applied by BEST? This lies on top of Brandon Shollenberger’s curve, give or take a slight divergence prior to 1920. And the GISS record for Springfield has the same linear trend, which one can search for as well.

        I am not the one that is implying that BEST is not giving the “right answer”. And I am not the one that messed up the GISS trend for Springfield. But I am the one that showed some intellectual curiosity when I saw Brandon Shollenberger’s curve that showed the huge trend difference between the BEST and GISS results.

        Brandon Shollenberger is the one that messed up the GISS Springfield record and tried to use that to say BEST is not getting the “right answer”.

        How these armchair warriors think they can get away with such casually indirect yet calculated smears is frankly astounding. Note that Brandon Shollenberger is quick to point out that he is not accusing anyone of fraud or of not trying to smear anyone. Yet he keeps on saying that BEST is not “getting a right answer”. Isn’t that special?

        And here I am getting the smear treatment myself … ClimateBall at its finest and the Never Ending Audit on display.


      • Bad webby. You know you have made a fool of yourself. You know that Brandon graphed gridded data and not the data that only related to Springfield, IL. He was obviously discussing a larger area. You quoted Brandon out of context:

        “To be clear, I don’t think this means BEST is fraudulently adjusting the data. I’m not Steven Goddard. I suspect what’s actually happening is BEST is smearing warming from other areas into mine. That is, warming in other states is causing Illinois to seem like it’s warming far more than it actually is. That’s not fraud. That’s just low resolution in the estimates.

        But here’s the thing. BEST is supposed to be the best temperature record. It has a website encouraging people to look at data on as fine a scale as individual cities. WHY?! If BEST can’t come close to getting things right for the state of Illinois, why should anyone care at what it says about the city of Springfield, Illinois?

        At what scale does BEST stop giving imaginary results and start getting a right answer? It doesn’t at the city level. It doesn’t at the state level. What about at the regional level? Could it get temperatures right for something like, say, Southeast United States? Nope.”

        Brandon did not compare BEST and GISS data for the city of Springfield. You know that. You are very dishonest. Everybody can see it.

      • Donny, You have never linked to a chart that you have made with your own hands, and I doubt you ever will.

        The chart of temperature for Springfield that one can get from the BEST web site matches very closely to the chart that Brandon Shollenberger created for what he calls the Springfield area. It matches down to individual noise glitches.

        If the two match then that means that Springfield is a very good representation of the Springfield area. Whatever he is spouting about differences in gridded areas is pointless gibberish.

        Bottom line is that the GISS trends for Springfield match the BEST trends for Springfield. Brandon Shollenberger messed up and he refuses to admit it.

        I will give you a chance to create your own sets of charts so you can check this for yourself. But since you do not have the skills, I will place the comparison on the host in a few hours to demonstrate your shortcomings in analysis.

      • Hm… comments are still showing up as truncated. So posting in pieces (slight editing to improve clarity).

        WHT, actually no, I suffer no particular reading comprehension issues.

        * Brandon clearly stated he’s using gridded data.

        *  He’s discussing the spatial smearing associated with the gridded data, not just homogenized data.

        * You are not, and you are getting a different answer. This is not a surprising result.

        *  Technically, GISTEMP doesn’t even do homogenization these days (they used to, but stopped a number of years ago). That gets done in the GHCN product that they use.

        I’m not sure we’ve learned very much by comparing two stations both coming from probably the same data set, other than BEST homogenization gives virtually the same result as GHCN.

      • Part two.

        As I commented on Brandon’s blog:

        Note we’re interested in gridded data, not individual sites, because that is the quantity that is supposed to most closely compare to $T_s(\vec{r}, t)$.

        The results that Brandon are seeing are not atypical. As I posted on Brandon’s blog:

        You can get gridded averages from the Climate Explorer:

        http://climexp.knmi.nl/selectfield_obs2.cgi?id=someone@somewhere

        Here are the trends for 1900-2010, for the region 82.5-100W, 30-35N:

        berkeley 0.045
        giss (1200km) 0.004
        giss (250km) -0.013
        hadcrut4 -0.016
        ncdc -0.007

        Berkeley looks to be a real outlier.

      • Part 3:

        I believe that part of the problem [with BEST] is method of interpolation used (kriging) and in particular the assumption that the correlation field relating spatially separated stations is azimuthally invariant.

        Also (as I also commented on Brandon’s blog):

        NCDC does not appear to suffer from spatial smearing. They also use EOF, so that does appear to be a better methodology than kriging, at least as [kriging is] implemented by BEST.

      • I don’t have the words to describe WebHubTelescope behavior. I mean, this is bad enough:

        So did Brandon Shollenberger detrend the GISS data for Springfield on purpose, or did he accidentally insert a line with a negative slope in to the data?

        That’s why I stated that it is often extremely difficult to figure out how a student can screw up an analysis.

        As all I did was plot the data. But then he goes on to say:

        The chart of temperature for Springfield that one can get from the BEST web site matches very closely to the chart that Brandon Shollenberger created for what he calls the Springfield area. It matches down to individual noise glitches.

        If the two match then that means that Springfield is a very good representation of the Springfield area. Whatever he is spouting about differences in gridded areas is pointless gibberish.

        Yet I specifically discussed only the difference between the GISS Springfield data and the GISS gridded data for the Springfield area. The reason for this is I know BEST doesn’t display city data. What it does is allow people to see estimates of temperatures centered on cities. That means it is effectively the same as gridded data. That’s why the BEST data he plotted matches the BEST data I plotted.

        I discussed the difference between GISS Springfield data and GISS data for the Springfield area. WebHubTelescope determinedly looked anywhere else.

      • Carrick, to clarify something, you say:

        I’m not sure we’ve learned very much by comparing two stations both coming from probably the same data set, other than BEST homogenization gives virtually the same result as GHCN.

        But the BEST data WebHubTelescope plotted is not station data. It’s the BEST estimate for the area of Springfield. The reason it’s so similar to what I plotted is it is basically the same thing as I plotted. That’s because BEST creates a temperature field for the area, and everything in that area is assigned the temperature of that field.

      • Don Monfort

        I don’t do charts, webby. But that’s just diversionary BS. We are talking about Brandon’s charts. You are just making crap up and everybody knows it. Your credibility is in tatters, webby.

      • Thanks for the clarification, Brandon. I had no idea which product WHT was using, since he’s not provided the information necessary to duplicate his result. I assume this is because he is a tittie baby, and doesn’t want anybody to find problems with what he’s doing. That’s the usual reason for refusing to explain how you obtained a particular result.

        I thought, though, there was a way to look at station data (homogenized etc) on BEST’s web interface. Am I wrong?

      • Keep your eye on the ball. This is what Brandon Shollenberger has said:


        The only meaningful differences between the GISS and BEST estimates for my area is BEST adds a huge warming trend.

        This is pretty simple to understand. He is comparing GISS and BEST for what he considers “my area”, which is Springfield as he later reveals.

        Yet there is actually no meaningful difference in trend between the GISS and BEST results for Springfield, contrary to what he asserts above. There is no “huge warming trend” that BEST adds that GISS doesn’t have already. GISS for Springfield has the exact same warming trend as BEST for Springfield, and I figured out that Brandon Shollenberger likely messed up his GISS chart and somehow eliminated that warming trend. That is actually pretty hard to do, and it beats me how that happened …

        But now it appears that Brandon Shollenberger is saying this:


        I discussed the difference between GISS Springfield data and GISS data for the Springfield area.

        And so now Brandon is suggesting that GISS has the problem? And it is not BEST with the problem after all?

        What does that mean to Brandon? That GISS data for the Springfield “area” will now add that “huge warming trend” back in?

        This is actually a great example of someone that keeps digging a deeper and deeper hole since he refused to come clean initially.

      • Carrick,
        You have to put yourself in the position of a faker like Brandon Shollenberger. How would he get the data? He would go to the web interface for BEST and query and then download the data for Springfield. And then he would do the same for the GISS web interface with regard to Springfield. This is a very simple procedure for anyone to follow, and I just followed the links he provided.

        If you do these two procedures and then plot the two sets of data on the same graph, you will see that they show the same trend. This is in contrast to what Brandon Shollenberger says:

        “The only meaningful differences between the GISS and BEST estimates for my area is BEST adds a huge warming trend.”

        What exactly did he do to mess this up that badly?

      • Thank you Brandon.
        I think it best to gloat privately wrt WHT’s comedic goof-ups. :)
        It’s the cult mania that drives them cuckoo.

      • Carrick:

        I thought, though, there was a way to look at station data (homogenized etc) on BEST’s web interface. Am I wrong?

        You can. You just have to select the station. If you select anything else, it’ll give you areal results. You can find links to some stations for the area you select on the right side of the screen.

      • This is too funny. WebHubTelescope has now said:

        You have to put yourself in the position of a faker like Brandon Shollenberger. How would he get the data? He would go to the web interface for BEST and query and then download the data for Springfield. And then he would do the same for the GISS web interface with regard to Springfield. This is a very simple procedure for anyone to follow, and I just followed the links he provided.

        Yet there is no question that is not what I did. My post specifically said where you can find the data I used:

        For a bit of an introduction, I recently looked at the gridded data BEST published showing its estimates of average, monthly temperatures across the globe (available on this page). After some playing around, I decided to extract the values given for the area I live. I then did the same thing with another temperature record, NASA’s GISS (available on this page).

        You’ll note, I specifically said “gridded data” and “estimates of average, monthly temperatures across the globe.” The links I provided go to pages with gridded data in NetCDF files. WebHubTelescope’s claim that he “just followed the links [I] provided” is complete BS. Nobody following the links I provided would do what he did. They’d find pages with links to NetCDF files with gridded data, what I said I used.

      • @Brandon Shollenberger July 11 at 1:45 am

        By the way, you might be interested in a post I uploaded a short while ago. I used a much simpler test of the BEST results to show why I don’t trust them.

        Excellent, Brandon!

        It’s amazing. From month to month and year to year, GISS and BEST look nearly identical. The high frequency components of their graphs are indistinguishable. The only meaningful differences between the GISS and BEST estimates for my area is BEST adds a huge warming trend.

        This is exactly what I expect if the BEST scalpel is acting like a low-cut filter. I have had theoretical reasons to think, and now you supply test data analysis showing, that there is wholesale decimation and counterfeiting of low frequency information in the BEST process.

        I have much more to say about your results, but not here. Let’s start a level 1 comment about it. This one is too long.

      • Don Monfort

        Are you still hoping that Mosher will come by and bail you out, webby?

      • WHT:

        You have to put yourself in the position of a faker like Brandon Shollenberger. How would he get the data? He would go to the web interface for BEST and query and then download the data for Springfield.

        LOL, but that’s not how he got the data. You’re friggin’ hilarious.

        He got the data using the published netcdf files and interpreting it using R. He was discussing this upstream on this thread. Mosher pointed him to ncdf4, which allowed him to read the files and post the results you errantly criticized him for.

        He could also have gotten it using climate explorer, as I pointed out on this thread too.

      • I am not sure what you are laughing at.

        Brandon Shollenberger said:


        WebHubTelescope’s claim that he “just followed the links [I] provided” is complete BS.

        Yes, you are right, I followed the BS links provided by BS, and I found the city site information for Springfield, Illinois on both the BEST and GISS sites.

        This is the data plotted from BEST and GISS if the Springfield sites are loaded from their respective web interfaces:
        http://imageshack.com/a/img850/2545/2ke.gif

        The “huge warming trend” in the BEST dataset that Brandon Shollenberger is claiming is confirmed by the GISS dataset for Springfield. Yet when Brandon plotted his GISS dataset that warming trend disappears.

        Why did this warming trend disappear, and why did Brandon Shollenberger accuse BEST of not getting “a right answer” ?

      • Don Monfort

        webby, webby, webby

        You foolishly or deliberately persist in misrepresenting what Brandon did and said. He did not compare BEST data for Springfield, IL with GISS data for Springfield, IL. He compared gridded data for larger areas of Illinois that include Springfield. Brandon did compare the BEST trend with the GISS trend in the city of Springfield.

        “I suspect what’s actually happening is BEST is smearing warming from other areas into mine. That is, warming in other states is causing Illinois to seem like it’s warming far more than it actually is. That’s not fraud. That’s just low resolution in the estimates.”

        Brandon specified that he used gridded data that covers a large part of Illinois. If you can find where Brandon claimed that he had analyzed data specific to Springfield and found a discrepancy, quote it. But you know that he did not do that. He analyzed and commented on the gridded data for a larger area. Anybody can see that plainly, including you. Case closed.

      • Don Monfort

        Correction to comment that landed in moderation:
        “Brandon did NOT compare the BEST trend with the GISS trend in the city of Springfield.”

      • The BEST data for the Springfield site lines up very closely to the chart that Brandon Shollenberger created for the Springfield area dataset that he must have dug up from elsewhere:
        http://imageshack.com/a/img829/7861/otz.gif

        So Brandon Shollenberger twirls around 100 times and lands facing in the same direction he started in.
        Is that supposed to impress someone?
        Don’t work stooopid, work smart.

      • Don Monfort

        Poor little webby says: “The BEST data for the Springfield site lines up very closely to the chart that Brandon Shollenberger created for the Springfield area dataset that he must have dug up from elsewhere:”

        Please point us to where Brandon said that he created a chart for the Springfield site, or the Springfield area dataset. I recall that Brandon said he created a chart from gridded datasets for his area. And he never claimed that the GISS and BEST trends from gridded data for the Springfield area, or the trends for the Springfield site (which he did not analyze), were different. He said:

        “I suspect what’s actually happening is BEST is smearing warming from other areas into mine. That is, warming in other states is causing Illinois to seem like it’s warming far more than it actually is.”

        You keep insisting on substituting Springfield for Illinois. Why are you doing this, webby?

      • GISTEMP gives you a way of looking at the gridded data graphically, and it looks like Illinois has significant warming over the century from that too (0.5-1 C in this example).
        http://data.giss.nasa.gov/cgi-bin/gistemp/nmaps.cgi?sat=4&sst=3&type=anoms&mean_gen=0112&year1=1981&year2=2010&base1=1881&base2=1910&radius=250&pol=rob

      • The bottom line is that Brandon Shollenberger has made multiple claims questioning BEST.

        1. “At what scale does BEST stop giving imaginary results and start getting a right answer? It doesn’t at the city level”
        2. “The only meaningful differences between the GISS and BEST estimates for my area is BEST adds a huge warming trend.”
        3. ” If BEST can’t come close to getting things right for the state of Illinois, why should anyone care at what it says about the city of Springfield, Illinois?”

        He compares BEST results for Springfield against GISS results for Springfield in this graph:
        http://hiizuru.files.wordpress.com/2014/07/7-10-home-trend.png
        and then wonders why the BEST results have a “huge warming trend”, not realizing that the GISS graph has a warming trend that somehow disappeared from what can easily be seen at the GISS web site:
        http://data.giss.nasa.gov/cgi-bin/gistemp/show_station.cgi?id=425724390050&dt=1&ds=12

        One can eyeball that chart and see the warming is much greater than the almost flat trend that he generates for his faked chart.

      • Thanks JD, I owe you one. I submitted my comment with the URL for the GISS data of Springfield when I saw you did the same thing.

        As JimD said, the warming is clearly greater than 0.5C for the century, which is in contrast to the almost flat trend that Brandon Shollenberger has generated for Springfield from what he claims is GISS-based data.

        How’d he do dat?

      • Jim D, there are two problems with your comment. First, you are misreading that graph. It doesn’t show the amount of warming you say for Illinois. It shows that amount only for part of Illinois (the northern part). The southern part has a much lower (and in parts, possibly 0) value.

        Second, that chart does not show warming trends. It shows the difference between two periods, 1880-1910 and 1980-2010. As anyone knows, it is perfectly possible to have a difference between two sections of a graph despite the overall trend being neutral. That’s the case here. If you look at the graph I made, you can visually see the figure with a neutral trend line has a warmer 1980-2010 period than 1880-1910 period. In fact, that difference is perfectly in line with what a correct reading of your linked images shows (.2-.5 degrees of warming).

        In other words, your image tells us nothing new. It just gives us a bad way to look at one small part of the data I’ve graphed.
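        To make that point concrete, here is a small synthetic example (the numbers are invented purely for illustration): a series whose 1980-2010 mean is clearly warmer than its 1880-1910 mean, yet whose full-period linear trend is essentially zero, because a warm mid-century bump is followed by a cool spell.

        # Synthetic annual series: flat early, warm bump, cool spell, warm again.
        set.seed(7)
        years <- 1880:2010
        base  <- c(rep(0.0, 31),    # 1880-1910
                   rep(0.4, 35),    # 1911-1945
                   rep(-0.4, 34),   # 1946-1979
                   rep(0.3, 31))    # 1980-2010
        series <- base + rnorm(length(years), sd = 0.1)

        # The difference between the two end periods is clearly positive ...
        mean(series[years >= 1980]) - mean(series[years <= 1910])   # about +0.3

        # ... while the straight-line trend over the whole period is near zero.
        coef(lm(series ~ years))[["years"]]                          # close to 0 per year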

      • Brandon, I am very skeptical of your “GISS” plot here
        http://hiizuru.wordpress.com/2014/07/10/is-best-really-the-best/
        It looks too similar in every wiggle, except detrended, to be a different area from BEST. Are you sure about it? Local datasets should not be so similar in such detail. Different stations go into these averages.

      • Jim D, I’m not sure what you’re asking. I’ve never suggested the GISS graph I plotted is for a different area than the BEST graph I plotted, save in that their grid sizes are different. That is the only difference between them.

        The trick is with WebHubTelescope’s comparison. He uses the GISS data for the city of Springfield. This is obviously different from the graph I plotted, which is the GISS data for the gridcell in which Springfield is located. The reason for the difference is the gridcell Springfield is located in has a number of other temperature stations (and also has a certain amount of smoothing applied). We should expect these results to be different.

        The BEST data he uses is different. BEST does not display temperature for Springfield, the city. What it displays is temperature for an area centered on Springfield, taken from the temperature field BEST calculates for the entire globe. Selecting a small area like that is effectively equivalent to making a grid of the temperature field and selecting a single gridcell.

        In other words, WebHubTelescope is comparing GISS data for Springfield to BEST data for an area in which Springfield is located. I am comparing GISS data for an area in which Springfield is located to BEST data for an area in which Springfield is located. My comparison is apples to apples. His comparison is apples to oranges. That’s why we get different answers.

      • I’ve been less precise than I should have been. I’ve been saying WebHubTelescope uses data for the city of Springfield. In actuality, what he uses is data from a single temperature station located in the city of Springfield. That is, he uses only one temperature station. I use results which combine data from many stations within a given area.

        That’s why our results are different.

      • Brandon, I am saying that a 1 degree BEST grid point should not look so similar in every small detail to a 2 degree GISS grid point. Even two neighboring 1 degree points should be different in these details.


      • Jim D | July 12, 2014 at 10:28 pm |

        Brandon, I am saying that a 1 degree BEST grid point should not look so similar in every small detail to a 2 degree GISS grid point. Even two neighboring 1 degree points should be different in these details.

        That’s the same thing that raised my suspicions initially, JD. It’s almost as if the two curves are from the same source data but somebody took the chart labeled GISS and detrended it by a linear function.

        Then if one goes further and actually compares what one gets from just dialing in the Springfield region via the BEST web query interface, we get this match:
        https://imageshack.com/i/n1otzg

        Note how closely many of the recent years’ noise fluctuations match, spike for spike, indicating that this is likely very near the Springfield area, if not right on top of it, subject to whatever other errors Brandon Shollenberger may have made.

        Moreover, if we compare the actual Springfield BEST and GISS datasets, we see that they align very closely.
        http://imagizer.imageshack.us/a/img841/9661/8ds9.gif

        The GISS overlay in blue follows the same noisy envelope as the BEST data over the common years, indicating that BEST is not doing much significantly different than GISS in interpreting the source data ( that they most likely share ).

        Keep on digging that hole of yours Brandon, as it keeps getting deeper.

      • Ah, I see Jim D. In that case, I should point out my post specifically raises the issue which answers your concern:

        Brandon, I am saying that a 1 degree BEST grid point should not look so similar in every small detail to a 2 degree GISS grid point. Even two neighboring 1 degree points should be different in these details.

        We’d only expect the four BEST gridcells located within the one GISS gridcell to be meaningfully different from one another if there was a high degree of spatial resolution in the BEST dataset. A large amount of spatial smearing would mean we’d expect nearby gridcells to be highly similar.

        The very problem I’m complaining about, the lack of spatial resolution in the BEST dataset, explains the issue you raise.

      • Brandon, but Web’s point is that GISS is the outlier. When you compare station rises at Springfield to these series, it is BEST, not GISS, that agrees with the station, so the flat GISS is the one to question here, not BEST. How does GISS get this so flat and different from the station? Maybe Carrick has an answer.

      • Jim D, to address your concern in a more direct manner, I went ahead and plotted the same portion of the BEST dataset for the gridcell I used and the eight gridcells around it. You can see the nine together here. You’ll notice there are some tiny differences, but every gridcell in this three by three grid is nearly identical.

        I did the same with the GISS gridcells. You can see them here. You’ll note they have a lot of similarities too, though they also have more differences. That’s due to GISS using 2×2 gridcells (so the total area covered was 6×6) and there being different amounts of spatial smearing.

        I may go ahead and do a 6×6 plot for the BEST data so it’s directly comparable, but I’m not sure. That’s a lot of graphs. Also, these graphs make it obvious what datasets I’m using (as if that wasn’t obvious already).

      • Jim D, do you really need an explanation as to why a single temperature station doesn’t match the trend of the area it’s in? The entire reason people look at trends of areas is single stations don’t give reliable estimates of what happens in an area.

        That’s especially true when talking about urban stations. With how many people have complained about the effects of UHI, why would anyone think it means anything that a single, urban station shows a warming trend not present in the trend of its area?

      • Brandon, but your assertion that BEST is off is not supported by local data. If anything, it is the less smoothed of the datasets. Several GISS points do show more of a trend, but it is possible your 2-degree GISS box goes far enough south to where the trend was flatter making that less representative of your area than BEST. From my GISTEMP map, the trend is larger as you go north in this area. Being at 39.8, Springfield may be at the northern edge of its GISS 2 degree box, which might explain things.


      • Jim D | July 13, 2014 at 12:05 am |

        Brandon, but Web’s point is that GISS is the outlier. When you compare station rises at Springfield to these series, it is BEST, not GISS, that agrees with the station, so the flat GISS is the one to question here, not BEST. How does GISS get this so flat and different from the station? Maybe Carrick has an answer.

        JD, in a separate subthread below, Carrick agrees with us concerning the Springfield GISS series. He found a much higher slope for GISS than Brandon had portrayed.

        This is very bizarre the way they are struggling with scientific honesty while trying to retain their dignity.

      • Jim D, how can you possibly claim my “assertion that BEST is off is not supported by local data” when you’ve only looked at one station? And how in the world do you make this absurd claim: “If anything, [BEST] is the less smoothed of the datasets.” Anyone who looks at the data would know that’s complete BS. In fact, that’s such obvious BS I have to demonstrate.

        Look at this image. The top graph is the GISS graph for the gridcell I used. The bottom four are the four BEST gridcells within the same area. It’s the same thing as we saw before.

        Now look at this image. The top graph is the GISS graph for one gridcell to the east. The bottom four are the four gridcells within the same area.

        Now look at this image. Same thing as above, but one gridcell north instead of east.

        Now look at this image. Same thing as above, but one gridcell north and east.

        There is no way anyone can look at that and say the GISS data is more smoothed. The sixteen BEST graphs in those images could each pass for one another. All four GISS graphs are far more different.

        On the upside, this was a good excuse for me to test the layout command in R. Now that I know how to use it, it will be much easier to make comparisons.
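        For anyone curious about the mechanics: base R’s layout() (or par(mfrow = ...)) is what splits the plotting device into a grid of panels like the ones described above. A minimal sketch with synthetic series standing in for nine gridcells; the data and names here are placeholders, not output from either dataset.

        # Nine synthetic monthly series standing in for a 3 x 3 block of gridcells.
        set.seed(42)
        months <- 1:(12 * 30)
        cell_series <- replicate(9, 0.0005 * months + rnorm(length(months), sd = 0.5),
                                 simplify = FALSE)

        # One panel per gridcell, arranged 3 x 3.
        layout(matrix(1:9, nrow = 3, byrow = TRUE))
        par(mar = c(2, 2, 2, 1))
        for (k in 1:9) {
          plot(months, cell_series[[k]], type = "l", main = paste("cell", k),
               xlab = "", ylab = "")
        }
        layout(1)   # reset to a single panel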

      • Brandon, thanks for doing the plots. It is clear to me that GISS is representing an area that is south of Springfield where the trend is weaker than at Springfield itself. The confusion is because of the north-south gradient in trend here. If you go further south to Union City just north of 36 N, you find stations have flatter trends, more like the SE US area. These stations are in the same GISS box as Springfield.

      • oops, I guess Sparta (38 N) would be a choice in the same GISS grid cell, which is also quite flat.

      • Jim D, the fact that the GISS gridcell is representing areas south of Springfield was always obvious. If you take a look at this map of Illinois, you’ll see Springfield is just south of the 40N line. That means the GISS gridcell it is in goes from 38N to 40N. That means the gridcell covers area nearly two full degrees south. The only reason there has been any confusion about it going that far south is WebHubTelescope made things up about what data set I used.

        (Incidentally, I don’t live in Springfield. I live one degree south of it. I just picked Springfield for the name in my text because it is the most known city within that particular area.)

      • Brandon, your GISS cell to the north looks more like BEST, so you don’t need to be suspicious of BEST anymore.

      • Jim D, you just looked at three GISS gridcells and 12 BEST gridcells for the same area. Two GISS grid cells had very different trends than the eight BEST grid cells in their area. One GISS grid cell had a trend more like the four BEST grid cells in its area. From this, you concluded there’s no problem.

        That makes no sense at all. The fact GISS and BEST agree for some areas does not magically make the huge disagreements in other areas vanish. I can plot data which shows this sort of thing happens for entire states!

      • Don Monfort

        The D stands for disingenuous. Webbee’s little helper gets schooled. Nice work, Brandon. I am drinking a toast to your skill in the graphology.

      • Thanks Don Monfort. I’m actually terrible at making pretty graphs. It’s only when they’re simple like this that I can make them look alright.

        By the way, you might be interested in a post I just uploaded. I’m offering to do the same comparisons I did for my area for anywhere. People can pick their home towns or random cities out of a hat. Anywhere, anyone picks, I’ll post the comparison (if there’s data, of course).

        I selected Atlanta, Georgia for a demonstration of how bad things can get. There, GISS has a mild cooling trend while BEST has a significant warming trend. Obviously, they can’t both be right. Given I can’t find any location where BEST shows cooling, I’d guess it’s the one that’s messed up.

        Interestingly, these graphs are making me more sure of a different criticism I’ve made about BEST. If I’m not mistaken, they may allow us to prove an entirely different problem with BEST as well.

      • Good work on the nearby grid cells, Brandon.
        It seems there are two conclusions:
        1. You have not done something correctly and are not pulling data from the grid cells you think you are.
        2. Your plots are correct. Which means BEST uses a huge radius of station influence in creating its regional gridding algorithm. Thereby, stations many degrees away have influence, and whether a station is 0.5 degrees, 1.0 degrees, or 2.0 degrees away seems to make little difference. That is a dubious assumption. But it would be consistent with GISS and how it uses Greenland and Irish stations to adjust Iceland. [WUWT: 10/12/12 GHCN’s Dodgy Adjustments In Iceland.]

        In support of #2, when you pull up the plots for cities, rather than stations, you get a plot of the number of stations by year by distance away (50 km, 200 km, 500 km, 1000 km, 1500 km, 2000 km).
        http://berkeleyearth.lbl.gov/locations/39.38N-104.05W
        I am quite interested in how many stations are within 50 km (about 1/2 of a degree). I’m interested in 100 km… they don’t show that. 200 km is mildly interesting, especially when it is under 100 stations.

        But why do they show there are 2000 stations within 1000 km? They couldn’t possibly be incorporating all of those in the regional picture for Denver, could they? From your series of 1×1 grid cell results, it looks like they might.

        What are the parameters for their gridding algorithm?

      • climatereason, funny you ask. I just uploaded a post in which I invite people to select areas they want to compare the data sets for.

        I can’t select custom area shapes though. I can only select 2º x 2º grid cells (that’s what GISS uses). I included a map in that post with a numbered grid to help people select the spot they want. People can use that, or if they’d prefer, they can give latitude/longitude and I’ll convert it. Or I guess they could even just say where they want and I’ll look up the latitude/longitude. Any of that works really.

        Anyway, for your request, I selected the grid cell for 52ºN-54ºN, 2ºW-0ºW as that seemed to be the grid cell with the most land (I’m not using BEST’s land+ocean data set as it’s still preliminary). You can see the comparison here. It was nice to see in this case BEST and GISS agree quite well.

        If you’d like another area compared, just let me know. It might be better to say so in a comment on that post. I’m sure to see any requests posted there. I might not see requests buried in the middle of this thread.
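        For completeness, the latitude/longitude-to-gridcell conversion mentioned above is just simple index arithmetic. A minimal sketch, assuming a 2º grid that starts at 90S and 180W and advances in 2-degree steps; the indexing convention of any particular file should be checked before relying on this.

        # Map a point to 2-degree gridcell indices, assuming the grid starts at
        # 90S / 180W and advances northward / eastward in 2-degree steps.
        grid_index_2deg <- function(lat, lon) {
          c(lat_index = floor((lat + 90) / 2) + 1,    # 1..90
            lon_index = floor((lon + 180) / 2) + 1)   # 1..180
        }

        grid_index_2deg(39.8, -89.65)   # e.g. the cell containing Springfield, IL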

      • Stephen Rasey, I have no doubt I’m pulling data from the grid cells I think I am. I could maybe believe I’m off by a single row/column due to getting the alignments wrong, but there’s no way I’m off by enough to explain these results. As confirmation of that, I just did one for an area in the UK, and the results matched well for both data sets.

        As such, I’d say your 2 is confirmed. And this isn’t just for the gridded data I use. The gridded data is just a translation of the global temperature field they create (i.e. it breaks that field into gridded points). That means the effect I show holds for the BEST methodology in general.

        Mind you, I don’t know if this affects their global temperature record. The biases introduced by this could cancel out. Right now, all I know is it undermines all of their results for regional and smaller scales.

        Well, I also know this makes it incredibly silly for them to have a web interface that lets you see temperature estimates for every city (and gridded data with a super fine scale). There’s no point in any of that if they can’t resolve the data at the scale of entire countries.

      • Brandon Shollenberger said

        “That makes no sense at all.”

        It was just a matter of time until the Chewbacca defense made an appearance. You really have no idea how ridiculous you appear most of the time, and then you go ahead and jump the shark.

      • I don’t find it surprising that two gridded datasets disagree in regions of large gradients. This is the GISTEMP century trend.
        http://data.giss.nasa.gov/cgi-bin/gistemp/nmaps.cgi?sat=4&sst=3&type=anoms&mean_gen=0112&year1=1981&year2=2010&base1=1881&base2=1910&radius=250&pol=rob
        It would be great if BEST had a tool that could map its trend data like this, and if it did, I am sure that the differences would be subtle at best. This is a storm in a teacup, as they say.

      • Jim D, if you’ve been following the discussion, you know BEST and GISS disagree for all of Southeast United States. If that doesn’t surprise you, I don’t know what could possibly make you question BEST’s results. The fact I initially picked only one example of a problem doesn’t mean the traits of that example are true of the problem in general.

        As for them getting the same answer, that’s like saying two climate models prove each other right because they get the same climate sensitivity, ignoring the fact tons of the parameters they calculate disagree. If one climate model says clouds have a strong positive feedback and another says clouds have a weaker negative feedback, it doesn’t matter much if both get the same climate sensitivity.

        Also, that graph does not show trends. I told you this the last time you posted it. I wish you’d stop misrepresenting it.

      • Brandon, if you can show that BEST shows a warming trend for Georgia, where my map shows that GISS clearly doesn’t, go ahead. It is better to look at the centers of large non-warming areas like this if you are looking for differences. I had missed anything you said on the SE US if you have already done this.

      • Brandon

        You still seem to be hanging round this impossibly long thread so I have reposted this.

        http://judithcurry.com/2014/07/12/open-thread-17/#comment-607698

        So we in the UK seem to be back to where we were in 1998 (and the 1730s) following a sharp rise then a sharp fall.

        Did you ever see the article I wrote that identified cooling trends? It came out just before BEST, and Muller confirmed to me that around one third of the world’s stations are cooling, although Mosh then put lots of caveats on that.

        If you are interested I will link to it but multiple links seem to go straight to moderation.

        Might be easier to do that on the Open Thread.

        tonyb

      • I had a look at the two curves of this link

        http://hiizuru.files.wordpress.com/2014/07/7-10-home-trend.png

        The curves seem to be identical except that there are a few points with a break (a vertical shift). The most significant shifts are approximately at 1927, 1950, and 1970. In addition a few years at both ends deviate. Thus the question seems not to be about detrending or different locations but about adjustments of the type BEST is doing in most time series.

        I don’t know, what’s right and what’s wrong, only that the nature of the difference seems very clearly to be as I state above.

      • Jim D wrote:

        “if you can show that BEST shows a warming trend for Georgia, where my map shows that GISS clearly doesn’t”

        From current BEST ‘Data’ pages ( http://berkeleyearth.org/data )

        Warming since 1960 (degC per century):

        Georgia (state), United States: 2.28 ± 0.17

        Atlanta, United States: 2.51 ± 0.16
        Chattanooga, United States: 2.47 ± 0.16 (on GA-TN border)
        Columbus, United States: 2.36 ± 0.13
        Savannah, United States: 2.13 ± 0.19
        Tallahassee, United States: 2.04 ± 0.26 (FL, near border ~15 mi, coast ~20 mi)

        HTH

      • Jim D, why keep resorting to pixels on an image map? This seems like a silly and rather subjective way to resolve a dispute.

        See my link above.

        Objectively BEST is finding a larger trend than the other series for this area. Second “worst” is GISTEMP 1200-km with GISTEMP 250-km showing a net cooling.

        I looked at the US SE because I’ve been aware for a while that it doesn’t follow the general global trends: much of it has seen a net cooling over the last century.

        So this is an ideal location to look for spatial smearing of trends from regions with larger trends into regions with smaller ones. Which is what I think we are seeing here.

        In other words, this isn’t a generic statement that BEST always has a higher trend than the other series. It’s a statement about the spatial resolution of BEST being poorer than that of the other series.

        Note that spatial resolution here measures how closely you can resolve a given difference in trend, which is different from spatial sampling frequency, which tells you just how many points you’ve provided data for. A 1°x1° grid isn’t useful if your resolution is only 5°x5°.

      • mwgrant, indeed. Looking at the same sort of information you just posted is what made me initially take this path. You can see a bit of that in this post I wrote. Also, you can see Carrick highlighting the issue Jim D apparently missed here.

        climatereason, I’ll admit I haven’t even looked at the Open Thread. I tend not to read many comments on this site anymore unless they’re on a post I am interested in. I have seen the article you mention though. I believe you brought it up and we discussed it a little. (So you know, you can safely post up to three links. It’s when you include four that you land in moderation.)

        Pekka Pirilä, I’ve compared results for over 30 GISS grid cells now. The same thing is seen in a lot of them: same high frequency content, different low frequency content. In each case, the differences could be resolved with simple shifts on the Y-axis like those you could get by splitting the series and realigning it. Also, in each case I’ve seen where the low frequency content was different, BEST had a larger warming trend. I think that’s pretty indicative of what the issue is.

        Jim D, mwgrant already beat me to that, as has Carrick. In fact, Carrick showed BEST has a significant warming trend for the entire south eastern portion of the United States (link in my first paragraph).
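        A purely illustrative sketch of the kind of behaviour being described: step offsets at a few breakpoints leave the year-to-year wiggles of a series essentially unchanged while altering its long-term trend. The numbers are invented and stand for nothing in the actual datasets.

        # Step offsets change the century-scale trend but not the wiggles.
        set.seed(3)
        years    <- 1880:2010
        wiggles  <- rnorm(length(years), sd = 0.3)       # shared "weather" noise
        original <- wiggles                              # flat series
        shifted  <- wiggles +
          0.15 * (years >= 1927) +
          0.15 * (years >= 1950) +
          0.15 * (years >= 1970)                         # three upward steps

        # Same year-to-year changes, very different long-term trends:
        cor(diff(original), diff(shifted))               # close to 1
        10 * coef(lm(original ~ years))[["years"]]       # near 0 per decade
        10 * coef(lm(shifted ~ years))[["years"]]        # clearly positive per decade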

      • Actually Georgia is quite interesting.
        http://berkeleyearth.lbl.gov/regions/georgia-(state)
        There was a large temperature drop just after 1950 that accounts for a lot of its lack of net warming. This was likely caused by an aerosol increase from the increased refining of oil in Texas and more local emissions from cars, whose effect on dimming is enhanced by the humid environment in the SE, and perhaps land-use change(?). How does BEST treat such a large and sudden change with its objective methods? Does it attempt to remove it from stations by breaking their records into pre- and post-aerosol periods, which might hide this drop? If BEST does show a warming in this region, is it by removing a real effect from station records such as a pretty sharp aerosol increase? I am willing to be persuaded either way on this.

      • JD, the so-called skeptics do tend to drift don’t they?

        After schooling them on Illinois, it looks like they have to switch to the south-eastern USA to try to win the argument.

      • Don

        You ask about smearing the data over long distances. This tends to be worse the further back in time you look, as the number of stations drops off dramatically, particularly in the Southern Hemisphere.

        The following is from a private email; I have deleted the names for that reason and edited it. It explains what has gone on in regard to the older records.

        ——- ———
        Tony,

        I think the Afghanistan claim of 1898 is based on Khorog. Records seem to have started here in 1898, when it might just possibly have been considered part of Af.

        For Albania, I wouldn’t necessarily distrust the post-1951 readings – it’s probably from a military installation. I think the extension back is from Corfu (1851-), which is pretty close.

        I get this info from the raw data via a KMZ file, described here. There’s a Google maps GHCN version here, which doesn’t require downloading. They are arranged so you can see the time sequence of data, and in the Maps case, even a movie. You might find it interesting from a historical point of view.
        —– ——- —–
        The above was sent in reply to my email below, from August 2012

        Hi xxxxxxxx

        You were kind enough to provide a link to old stations for me when I asked the simple question as to what stations had been used in the BEST reconstruction to 1750, as I wanted to try and see if the data used was original or had been ‘adjusted.’

        Having looked at the BEST country by country data I thought I would just run my initial understanding of the methodology employed by BEST past you. This is a highly simplistic summary as I’m more interested at present in the generality of the database construction rather than the minute details and twiddly bits.

        Here is the country list which I am working through in order to find which of the old stations were used (to 1750) and establish their characteristics.

        http://berkeleyearth.lbl.gov/country-list/

        Just selecting the first few countries in ‘A’ seem to yield some surprising results, for example Afghanistan;

        http://berkeleyearth.lbl.gov/regions/afghanistan

        The earliest observation is said to be 1898. The idea that we have reliable records from that country from that date to the present is bizarre enough in itself, but because there is another station (in another country) within 500 km (of equally dubious provenance) it seems it can be stretched back to 1875, and then, because there is a qualifying station within 2000 km, that data can be stretched back to 1850.

        Albania

        http://berkeleyearth.lbl.gov/regions/albania

        With an earliest observation of only 1951 it seems the data can be stretched to 1850 because there is a station within 500km and then back to 1750 because there is a station up to 2000km away.

        Now I wouldn’t trust Albania’s figures as far as I can throw them, yet they seem to have become one of the long-lived stations, presumably because an Italian station is within 2000 km (probably Bologna) whose original data was dramatically changed by Camuffo for the EU ‘Improv’ project. As previously discussed, it was one of those stations which inconveniently displayed a ‘warm bias’ in its early years that contradicted the computer model, so the data was adjusted to suit the model (it’s all in the book ‘Improved understanding of Past climatic variability from Early Daily European Instrumental sources’, edited by Camuffo and Phil Jones).

        That well known doyen of accurate temperature readings, Algeria;

        http://berkeleyearth.lbl.gov/regions/algeria

        has earliest records said to be 1852 (accurate records??) but also manages to become a 1750 station by dint of it being within 2000km of a European Mediterranean station in Italy or France.

        It appears to me that a substantial number of the countries cited either have data you wouldn’t trust, or use data borrowed from other stations over a great distance (many of equally doubtful merit), a la Hansen and Lebedeff 1987. I have real problems with believing that the data from all stations is equal, having examined many of them in detail. The dubious provenance of much of the data becomes more problematic the further back in time one goes, which is all quite apart from any problems with station siting, station moves, UHI, consistency of methodology (time of reading the instrument/use of a suitable screen), etc.

        All in all I am bemused as to why BEST thought they had constructed a meaningful global temperature to 1750 when it appears to be based on data from 10/15 European stations from that date, which contain some of the most potentially highly adjusted data of any in the record. In addition the error bars are so huge the end result can’t really be termed scientific.

        It seems to me that this dataset exhibits the same attributes as the SST data set that I wrote an article about, whereby the provenance of the original data can be as dubious or unlikely as is possible, but researchers seem prepared to disregard its accuracy in order to analyse and parse it and then make profound pronouncements.

        Whilst simplistic, in your view is the above a reasonable summary of how the BEST database was constructed?

        —— ——-

        tonyb

      • For the record, we had discussed the SE US long before you added your contribution to the topic.

        You are a dishonest hack.

      • Jim D, regarding Georgia, you need to also remember its proximal location to the Gulf of Mexico. A warmer climate means more precipitation from the Gulf which means more cloud cover, which means cooler temperatures.

        That’s what I think makes this region an interesting one to study climate change in. It’s been paradoxically cooling and two of the biggest negative feedbacks are present.

      • Remember what was said by Brandon Shollenberger, that BEST added a “huge warming trend” to the area he was from. But he wouldn’t say where that was exactly, so I looked at Springfield in particular, because he mentioned that location. And, lo and behold, that certainly does match closely to the graph he had created from the BEST data source he used:
        http://imageshack.com/a/img829/7861/otz.gif
        Anybody can do this by just going to the BEST web site and querying for Springfield.

        Later on in this thread, he said that he was actually from 1 degree latitude south of Springfield. That puts him just east of St. Louis. So I looked at St. Louis in particular and the GISS and BEST results match very closely:
        http://imageshack.com/a/img842/6848/ielk.gif

        This is the way that normal people use the BEST and GISS products. They go to the web interface and query for the location they are interested in.

      • Webby

        Surely Brandon is being ironic when he says he hacked Skeptical Science? If you read the article and follow the links you will see he was accused of doing it and is pointing out what actually happened.

        http://wattsupwiththat.com/2013/08/13/google-hacked-the-skeptical-science-website/

        I don’t have a dog in this particular fight but the general consensus is that your pursuit of Brandon is becoming rather creepy. It’s not worthy of you, please give it a rest

        Tonyb

      • No Tony, maybe you want to give this feigned indignation a rest. Brandon Shollenberger’s recent antics include placing what apparently were the private contents of a Skeptical Science forum on his own web server.
        http://rankexploits.com/musings/2014/sks-tcp-front/

        Makes me look like a boy scout. I am simply debunking an assertion on an open academic forum.
        You don’t like the answer so you do the usual stunt.

      • WHT, no, you demonstrated nothing of value.

        You either still don’t understand what Brandon did, or you are so childish that you can’t admit you made an error.

        You then proceeded to try to distract from your repetition of the same error by mounting creepy personal attacks on Brandon.

        This is obvious. Everybody knows what you are doing and why.

        What are you good at? I wonder because what you’ve shown in this blog is not much competency at anything.

      • Alas poor Carrick,
        When is Brandon Shollenberger going to redo his Springfield temperature plot from GISS so it turns out much, much closer to what you have calculated?

        That’s what set everyone off: Brandon Shollenberger did something wrong. His chart was almost flat, while the actual trend was warming, closer to what BEST was showing and far from a zero slope. Shollenberger’s “huge warming trend” exists in both Springfield products.

    • Don Monfort

      Bottom line: you are faking it, webby. Making crap up. Mosher and willie didn’t come to your rescue, but you got little jimmy dee. Very funny. Carry on. Nobody cares, except for the entertainment value.

      • Donny is at least good for something — creating another subthread to make it easier to navigate.

        So we have these two local datasets that Brandon Shollenberger manipulated

        GISS for Springfield:
        http://data.giss.nasa.gov/cgi-bin/gistemp/show_station.cgi?id=425724390050&dt=1&ds=12

        BEST for Springfield
        http://berkeleyearth.lbl.gov/locations/39.38N-89.48W

        The noisy trends are essentially the same as you can see by this crude but effective overlay of the two:
        http://imageshack.com/a/img841/9661/8ds9.gif

        Donny don’t do charts so you get it fo-free.

      • WebHubTelescope, I was perfectly clear about what data sets I used. They weren’t even close to the ones you claim I used. The ones I said I use take a little bit of work to extract specific areal data from. You can’t do it with something like a simple browser and Excel graph.

        That’s why I call you lazy. If you actually took the time to use the data I specifically said I used, you’d get the same results I got. It’s only by being lazy and willfully ignoring what I said I did, that you can claim I did anything wrong.

      • I know how to read, webby. And I know who is credible and who isn’t. You keep insisting that Brandon used local Springfield datasets, when he has never said he used local Springfield datasets. Long before you ingloriously started this BS, Brandon said he used gridded datasets from his area and he made a statement about his findings in regards to the state of Illinois, not the city of Springfield. Here it is again:

        “I suspect what’s actually happening is BEST is smearing warming from other areas into mine. That is, warming in other states is causing Illinois to seem like it’s warming far more than it actually is.”

        If I need some charts, I hire someone to do them. Same as I hire a lawyer or a dentist, when I require those services. I would hire Brandon to do charts, not you. And certainly not little jimmy dee.


      • That’s why I call you lazy. If you actually took the time to use the data I specifically said I used, you’d get the same results I got. It’s only by being lazy and willfully ignoring what I said I did, that you can claim I did anything wrong.

        I did not ignore what you said. You said that “The only meaningful differences between the GISS and BEST estimates for my area is BEST adds a huge warming trend.”.

        How can BEST have added a huge warming trend, while GISS had the same huge warming trend?

        The only plausible way that this can happen is if you did something wrong. That is way more likely than both BEST and GISS doing something wrong simultaneously.

      • Donny said:


        Brandon said he used gridded datasets from his area and he made a statement about his findings in regards to the state of Illinois, not the city of Springfield.

        Yet there is relatively little difference between Brandon Shollenberger’s graph of whatever area in Illinois he is referring to and that specifically of Springfield, which anyone can select from the BEST web interface.

        These are the two charts, one Brandon’s and the red overlay selected from a Springfield BEST web query. Note that they align very closely:
        http://imageshack.com/a/img829/7861/otz.gif

        So if one set of data aligns with the other set of data, then they are likely indistinguishable; something about walking like a duck. And I very much doubt that I could have come up with a better match than if I followed Brandon Shollenberger’s nonexistent desk instructions to a T.

      • I used to think you were bright, webby. Your friend Brandon has explained it to you:

        “BEST does not display temperature for Springfield, the city. What it displays is temperature for an area centered on Springfield, taken from the temperature field BEST calculates for the entire globe. Selecting a small area like that is effectively equivalent to making a grid of the temperature field and selecting a single gridcell.”

        Can you dispute that?

        If you want to complain about something do something along the lines of little jimmy dee’s latest comment. Or complain that the BEST data Brandon used is only one square, while the GISS data is four squares. And how does he make a statement about the whole of Illinois based on that limited data. Or just think of something that looks like it might be a plausibly valid issue, instead of your just plain petulant BS. I am trying to help you save yourself, webby.

      • Here are the slopes computed for the area around Springfield, using Climate Explorer.

        Brandon is correct that BEST comes in substantially above the other series.

        series slope (°C/decade)
        berkeley 0.071
        giss_r1200 0.049
        giss_r250 0.040
        ncdc 0.035
        crutemp 0.002

        You can download the files I generated using Climate Explorer here.

        Start from this link. Use the “land only” series.

        There is a screen grab in the zip file that shows how I set the parameters.

      • A couple more details for people that haven’t used Climate Explorer before:

        After clicking on the link above to Climate Explorer, which should open a window entitled “Climate Explorer: Select a monthly field”:

        • Choose the series, e.g., GISS 250km.
        • Click on [Make time series]

        This will generate a new window.

        • Scroll down to the line “Anomalies with respect to the above annual cycle (eps, pdf, raw data, netcdf)”

        eps, pdf, raw data, netcdf will all be clickable links.

        Click on “raw data”. If you selected “Giss 250km” you should get a file named:

        http://climexp.knmi.nl/data/igiss_temp_land_250_-90–89E_39-40N_na.txt
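
        For anyone who would rather script that last step than eyeball the plots, here is a minimal sketch (in Python) of computing a trend from one of those Climate Explorer “raw data” files. It assumes comment lines start with ‘#’ and data lines hold a year followed by twelve monthly anomalies, with very large values (e.g. 3e33) marking missing months; check the header of the file you actually download, since that layout is an assumption here, and the function name is just illustrative.

        import numpy as np

        def trend_per_decade(path, missing_threshold=1e30):
            """Least-squares trend, in degrees C per decade, of a monthly anomaly file."""
            times, values = [], []
            with open(path) as fh:
                for line in fh:
                    if not line.strip() or line.startswith('#'):
                        continue                             # skip blanks and header comments
                    fields = line.split()
                    year = float(fields[0])
                    for month, val in enumerate(fields[1:13]):
                        v = float(val)
                        if abs(v) < missing_threshold:       # keep only real values
                            times.append(year + (month + 0.5) / 12.0)
                            values.append(v)
            slope, _intercept = np.polyfit(np.array(times), np.array(values), 1)
            return 10.0 * slope                              # degC/year -> degC/decade

        # Hypothetical usage with the file named above:
        # print(trend_per_decade("igiss_temp_land_250_-90--89E_39-40N_na.txt"))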

      • Donny quoted and then asked:


        “BEST does not display temperature for Springfield, the city. What it displays is temperature for an area centered on Springfield, taken from the temperature field BEST calculates for the entire globe. Selecting a small area like that is effectively equivalent to making a grid of the temperature field and selecting a single gridcell.”

        Can you dispute that?

        I originally stated that Brandon Shollenberger did something wrong in his analysis, which was very obvious to anyone that can lift a finger and compare for themselves the BEST and GISS datasets.

        And to top that off, Brandon Shollenberger had the temerity to state “The only meaningful differences between the GISS and BEST estimates for my area is BEST adds a huge warming trend.”

        But it appears that Brandon Shollenberger is the one that has subtracted a significant warming trend from the GISS data … for some unfathomable reason … perhaps to make the BEST data look bad.

        I don’t really know what goes on in the heads of these AGW abnegators.

      • WHT, I was also able to reproduce Brandon’s graph using Climate Explorer by the way.

        Please feel free to download the zip file and then to apologize for wasting people’s time.

      • Sorry webby, you are just blubbering. You still don’t know what datasets Brandon used. He keeps telling you, but you refuse to get it. You have made up your own story.

        OMG! Carrick has independently duplicated Brandon’s findings and webby is thus thoroughly duped. Sorry, webby. Mosher is not going to save you. He knows when to keep quiet. Shhhhhhhhhh!!!

      • Carrick detailed:


        series slope (°C/decade)
        berkeley 0.071
        giss_r1200 0.049

        Alas poor Carrick, the following is the difference in the slopes that Brandon Shollenberger showed in his blog post:
        http://hiizuru.files.wordpress.com/2014/07/7-10-home-trend.png

        Brandon Shollenberger made the slope of GISS go almost to zero, around a 0.01C per decade warming trend. That is roughly a factor-of-five reduction from the 0.049C per decade figure, way beyond the difference you found.

        Yet you couldn’t get the GISS data that low. That is because you are a real skeptic, not a faker like Brandon Shollenberger.

        So thank you Carrick for staying sane, and I forgive you for calling me a tt baby.

        You are shamelessly modifying your complaint, webby. But you fail to realize that you implicitly endorse Carrick’s conclusion:

        “Brandon is correct that BEST comes in substantially above the other series.”

        You lose, webby. Mosher can’t help you.

      • How can I lose Donny, when I already linked to this chart:
        http://imageshack.com/a/img850/2545/2ke.gif

        Note that I have the same trend for GISS as Carrick does, of 0.049C/decade or 0.0049C/year. And I got that by just using the GISS web query interface for Springfield. My BEST slope was higher as well.

        Yet somehow Brandon Shollenberger managed to get that GISS slope down to around 0.01C/decade, about 1/5 the amount that Carrick and I found.

        I don’t understand the Svengali-like control that Brandon Shollenberger has on you skeptics. He is able to claim things like “The only meaningful differences between the GISS and BEST estimates for my area is BEST adds a huge warming trend.” and he has you believing it. Even if you disprove it, you still believe him. Get a grip, please.

      • You have been told about a thousand times, webbbby, that Brandon used the gridded data from GISS, not the local Springfield data. Why don’t you get the GISS data for the 2×2 grid that centers on Springfield (or Nashville, where Brandon lives) and see what you get. If you get something significantly different than what Brandon presented, then you can shoot your mouth off. Or just continue to falsely maintain that Brandon used local Springfield data. That continues to be slightly amusing.

      • Donny, Carrick did my work for me. Brandon Shollenberger called me lazy upthread, so I thought I would do what I always do — let the AGW abnegators score some “own goals”.

        So GISS for around Springfield has a much higher slope than Brandon Shollenberger claimed.

        Get over it, Donny. Your messiah made a boo-boo.

      • You are ridiculous, webby. I guess you haven’t seen me hit Brandon with a blunt instrument more than once. I have not said that Brandon is correct. However, I believe that he is highly competent and you are not. You also continue to falsely characterize what datasets Brandon has used, when by now you know that you are wrong. Do what I suggested in my previous post and get back to us. If you don’t use the same datasets that Brandon used to do your little charts, you got nothing to whine about. Period.

      • Carrick did it for us, Donny. No need for me to re-invent the wheel. That would be stooopid.

      • Don Monfort

        OK, I’ll leave it there, webbee. You agree with Carrick:

        “Brandon is correct that BEST comes in substantially above the other series.”

        I don’t know that Carrick used the same gridded data that Brandon used. And, as I said, Brandon’s comparison of unequal grid areas can be questioned. I don’t know if it makes a diff. Whatever, I have some very nice single malt to attend to.

        You may carry on with your inglorious display of dishonesty and continue to hope that Mosher will come along to bail you out.

        Don Monfort, you should take a look at this comment of mine. It has three different images in it, each comparing the temperatures of a NASA gridcell to those of the four BEST gridcells located within it. There’s no question the use of uneven grid size has no effect on my results.

        And as you may be able to guess from those images, I can now compare any NASA gridcell to the BEST gridcells in the same location. If anyone wants me to compare other areas, I can have it made and posted within a couple of minutes.

        (Well, any area both data sets have data for.)

      • Don Monfort

        I didn’t notice that comment, Brandon. I guess webbeee also missed it : ) Not that it would have had any effect on the clown.

      • Brandon

        Are you able to compare NASA and BEST for the UK?

        I am specifically interested in comparing them to CET. That is to say central England roughly bounded by Bristol Manchester and London.

        Thanks

        Tonyb

      • You can’t attack Carrick because he is one of your own AGW abnegators.
        Typical.

      • brandon, “And as you may be able to guess from those images, I can now compare any NASA gridcell to the BEST gridcells in the same location. If anyone wants me to compare other areas, I can have it made it posted within a couple minutes.”

        Ft. Collins, Colorado. The university still maintains a Stevenson screen LiG station.

        Brandon, just for fun you could develop a “scientific” surface station network. Use just university and research station data, give the “scientists” the benefit of the doubt, and assume they can accurately record air temperature. Then you should have a valid unadjusted surface temperature data set. :)

      • climatereason, I responded to your comment not long after you made it, but I placed it in the wrong fork. You can find it here. Sorry about that!

      • captdallas2 0.8 +/- 0.2, you picked one where GISS and BEST happen to match up fairly well. If you look closely though, BEST does increase the warming trend a bit. See here.

        And WebHubTelescope, I’ve “attacked” Carrick a number of times. I’ve even said some things about him which are about as bad as the things I’ve said about you. I just don’t see whether or not someone confirms my results with a different data set being a reason to attack anyone. There’s no way to doubt I got the results I got save to simply ignore the data I used.

        Which is what you’ve done. Because you’re too lazy to open NetCDF files and extract gridded data. Also, because it’d prove you’re a buffoon.
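
        For readers wondering what “open NetCDF files and extract gridded data” looks like in practice, here is a minimal sketch using xarray. The variable and coordinate names (‘temperature’, ‘lat’, ‘lon’) and the file name are placeholders, not the actual GISS or BEST conventions, so adjust them (and the longitude convention) to whichever gridded product you download.

        import numpy as np
        import xarray as xr

        def gridcell_trend(path, var, lat, lon):
            """Trend (degC/decade) of the monthly series in the gridcell nearest (lat, lon)."""
            ds = xr.open_dataset(path)
            cell = ds[var].sel(lat=lat, lon=lon, method="nearest")  # pick nearest gridcell
            series = cell.dropna(dim="time")                        # drop missing months
            t = np.arange(series.sizes["time"]) / 12.0              # time in years
            slope, _ = np.polyfit(t, series.values, 1)
            return 10.0 * slope

        # Hypothetical usage for a cell near Springfield, IL (about 39.8N, 89.6W):
        # print(gridcell_trend("gridded_anomalies.nc", "temperature", 39.8, -89.6))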

        WHT, to put it another way, Brandon and I disagree quite often. It isn’t tribalistic. Sometimes he gets it right and sometimes IMO he gets it wrong. Of course I realize that I make errors sometimes too.

        All I’ve done is take the gridded temperature series for the region near Springfield, IL using Climate Explorer and compute their trends. I also included the data files that were provided by Climate Explorer and instructions on how to reproduce them.

        What we find is that Brandon is correct: the trend for BEST is much larger (nearly a factor of two) than it is for GISTEMP (250km smoothing).

        This is an objective fact and it can be confirmed by anybody who wants to repeat the not very difficult but slightly tedious process of recollecting the data.

        I didn’t start out assuming he was correct; I fact-checked him.

        You started out assuming he was wrong, and you turned out to be wrong, but you apparently lack the grace to admit it when you are wrong.

        Doesn’t matter.. you are still wrong.

        Alas, poor Carrick puts down numbers that directly contradict the graph that Brandon Shollenberger has presented, but hasn’t the scientific integrity to call him on it.

        Recall that Carrick found these numbers:


        series slope (°C/decade)
        berkeley 0.071
        giss_r1200 0.049

        yet Brandon Shollenberger presented a GISS chart whose trend barely reached 0.01C per decade.
        http://hiizuru.files.wordpress.com/2014/07/7-10-home-trend.png

        And then he claims that “The only meaningful differences between the GISS and BEST estimates for my area is BEST adds a huge warming trend.”

        Amazing how important it is to never admit to someone that follows the real science that you have made a mistake.

      • Brandon Shollenberger above all else is really a newbie when it comes to analysis. Everyone that has studied the statistics of the instrumental temperature records realizes that about 1/3 of the stations show cooling, and that if you set up the boundaries correctly one can tilt the local or regional temperatures one way or another.

        Richard Muller of BEST has written:
        “We discovered that about one-third of the world’s temperature stations have recorded cooling temperatures, and about two-thirds have recorded warming. The two-to-one ratio reflects global warming.”

        So what people like Brandon Shollenberger do is cherry-pick station data in creative ways to create more FUD than is necessary. In this case, he over-reaches and uses innuendo-style smears to claim that BEST is wrong, while not admitting to the fact that he did not do the GISS analysis correctly.

      • WHT, what Brandon said was qualitative and, while you might bicker over the adjective chosen, he was quantitatively right.

        Since I’m not comparing exactly the same region he compared, it shouldn’t be surprising that I get slightly different quantitative results than what one infers from his graphs.

        The big thing to realize is that the US SE is a region that was cooling over that period, so a slight shift north or south can have a relatively large effect for this region (especially in overall magnitude, since the trends are tiny throughout the region). As you go further south, GISTEMP (250km) even goes negative, so there’s not much to be surprised about here regarding the difference in quantitative results.

        So, I wouldn’t be a bit surprised if, were I to use the code Brandon posted as a basis for choosing the regions to compare, I would get virtually identical quantitative as well as qualitative results.

        It’s curious though that you choose to cut my results off before the giss_r250 number, the one that directly compares to what Brandon is showing. That just seems dishonest to me. Also, there does seem to be a drift in your own argument over time. Not acknowledging that you are now criticizing him for things that you initially criticized him for, but preening like this is the same criticism… that’s a disingenuous way to argue.

        It’s natural to expect giss_r1200 to show more warming in its reconstructed temperature field than giss_r250, because of the proximal location of Canada and its well-known larger measured temperature trends.

      • Make that first sentence:

        WHT, what Brandon said was qualitative and, while you might bicker over the adjective chosen, he was qualitatively right.

        I suppose I can go out, reproduce Brandon’s results when I get the geographical regions exactly the same, show that Brandon didn’t subtract a trend as WHT has claimed

        But it appears that Brandon Shollenberger is the one that has subtracted a significant warming trend from the GISS data … for some unfathomable reason … perhaps to make the BEST data look bad.

        And then we can expect…. an admission from WHT that he was wrong?

        Heh.

      • Matthew R Marler

        WebHubTelescope: Brandon Shollenberger above all else is really a newbie when it comes to analysis. Everyone that has studied the statistics of the instrumental temperature records realizes that about 1/3 of the stations show cooling, and that if you set up the boundaries correctly one can tilt the local or regional temperatures one way or another.

        Richard Muller of BEST has written:
        “We discovered that about one-third of the world’s temperature stations have recorded cooling temperatures, and about two-thirds have recorded warming. The two-to-one ratio reflects global warming.”

        I think that other people have already written that, but it is worth repeating, and that post of yours is one of your substantive contributions.

        However, you also miss the point that Shollenberger made: it isn’t just that the majority of stations show warming and that some select groups show no warming; the problem that undermines confidence in the “adjustments” (and where the little-understood Bayesian Hierarchical Modeling plays a direct role, as well as the switchover to newer instruments) is the existence of records where the raw data show cooling or little to no warming, but where the adjusted data (Bayesian estimates) show less cooling, warming, or more warming.

        I have supported the role of the Bayesian methodology in providing estimates that have smaller mean square error than the original raw data; this has been proved mathematically to happen in some circumstances, and it has been demonstrated in some real live cases. But a large number of people remain skeptical, in part because the result (like all results in conditional probability) is counter-intuitive.

        Your assertion that Brandon Shollenberger made a misleading computational error is not supported by anything that you have presented.

        Carrick, that Climate Explorer is brilliant. The comparison of datasets that you posted, was that for raw data or final data?
        If it is not raw, can you do raw, as I noticed raw data in the list of options?


      • Also there does seem to be a drift in your own argument over time. Not acknowledging that you are now criticizing him for things that you initially criticized him for

        Pure projection on your part. You are the masters of drift. I am solid. I go with the scientists who know what they are doing.

        But to back up to where I started (unless you are too lazy, in Brandon’s favorite words), I let Brandon Shollenberger guess as to what he did wrong. I kept on giving him hints but he couldn’t catch on or wouldn’t admit to it.

        The main point was that his mistake was not that BEST included a “huge warming trend” but that he completely low-balled his GISS chart, which I verified and then you verified.

        You guys acting so clueless about the whole episode is grand entertainment.

      • Like I said before, one can get the basic data for Springfield from these two web queries:

        GISS for Springfield:
        http://data.giss.nasa.gov/cgi-bin/gistemp/show_station.cgi?id=425724390050&dt=1&ds=12

        BEST for Springfield
        http://berkeleyearth.lbl.gov/locations/39.38N-89.48W

        The latter is halfway to where Shollenberger lives which is “one degree south” of Springfield, a suburb of St. Louis, where he works at a car dealership.

        These two results are not that different, which puts Shollenberger in a very uncomfortable position of having said that “The only meaningful differences between the GISS and BEST estimates for my area is BEST adds a huge warming trend.”

      • Donny, Brandon Shollenberger talked about looking at temperature in his “my area”.

        So this is how you go about looking at what is in his “my area”. He did tell us exactly where he lives, south of Springfield by 1 degree.

        GISS for Springfield:
        http://data.giss.nasa.gov/cgi-bin/gistemp/show_station.cgi?id=425724390050&dt=1&ds=12

        BEST for Springfield
        http://berkeleyearth.lbl.gov/locations/39.38N-89.48W

        I can also do this same thing for St. Louis, which is even closer to where he lives.

        Eh? Do you want me to do the same for St. Louis, Donny? I know you don’t do charts.

        Don Monfort, I think we’re at the “WHT is fooling nobody” point of this story.

        And sadly the more he talks, the less credible he is sounding.

        What a complete poser.

      • Poseur, eh?

        So now we know that Brandon Shollenberger lives closer to St. Louis than Springfield. No biggie. Let’s look at what BEST and GISS have for those areas as we dial them in.

        GISS St. Louis
        http://data.giss.nasa.gov/cgi-bin/gistemp/show_station.cgi?id=425724340000&dt=1&ds=12

        BEST St. Louis
        http://berkeleyearth.lbl.gov/locations/39.38N-89.48W

        So what do the two trends for St. Louis look like?
        http://imageshack.com/a/img820/4797/hs2.gif

        They look to be about the same ! Who knew?

        Remember what Brandon Shollenberger said:
        “The only meaningful differences between the GISS and BEST estimates for my area is BEST adds a huge warming trend.”

        Who needs Mosh when the competition is so weak? Eh, Donny?

      • A C Osborn, I’m looking at gridded products. None of them give the option (AFAIK) to grid raw data.

        This is something that could be done, but it is more work than I have time to allot to it.

        WHT, I think I’m just waiting for more substantive arguments from you. If you think you can reproduce what Brandon did and explain why it’s wrong, you should show that. The fact you haven’t is what makes me think you have no credibility.

      • To summarize how this started. Brandon posted gridded products for an area that showed large differences in trend between GISS and BEST, and questioned BEST. Web showed that the local stations agree more with BEST, at which point all heck broke loose, especially complaining that Web was looking at the stations and shouldn’t have.

      • Donny, people that are going to look at historical temperature records for Springfield or St. Louis (or wherever they “imply” they live) are going to query for that location from comprehensive web services such as BEST or GISS. Then they will see that there is not much difference between the two.

        People are not going to ask Brandon Shollenberger to Texas sharp-shoot a location to see what kind of heinous results he can get for them.

      • We are done with you, webby.

      • @WHT July 13 at 5:38 pm St. Louis GISS vs BEST

        I guess it does not bother you that GISS is 1.0 to 1.5 deg. C warmer than BEST for the St. Louis links you provided. Not only the mean, but point by point the extremes are 1.0-1.5 deg lower with BEST than GISS.

        You see more differences if you look closely.
        Check out the peaks around 1935 compared to the present. BEST has a higher gradient.

        Don’t bother replying. I’m not checking back.

      • What’s this “we” thing, Donny?
        Are you the leader of the abnegators and the one-and-only decider?

        I have a feeling that Brandon Shollenberger doesn’t live near St. Louis, but more toward the center of Illinois, as that state has been his big peccadillo. So since he said he is about 1 degree latitude south of Springfield, that may put him in a town like Salem perhaps.

        Nevertheless, this location is the same latitude as St. Louis and one can see from the charts for St. Louis that BEST and GISS show the same trend

        GISS St. Louis http://data.giss.nasa.gov/cgi-bin/gistemp/show_station.cgi?id=425724340000&dt=1&ds=12

        BEST St. Louis http://berkeleyearth.lbl.gov/locations/39.38N-89.48W

        And the two trends side-by-side – http://imageshack.com/a/img820/4797/hs2.gif

        And remember what Brandon Shollenberger asserted:
        “The only meaningful differences between the GISS and BEST estimates for my area is BEST adds a huge warming trend.”

        Not quite.

      • Rasey is not coming back cuz he can’t handle the truth.

        I updated the comparison between GISS and BEST for St. Louis by using the 1-year average for BEST instead of the 5-year.
        http://imageshack.com/a/img842/6848/ielk.gif

        In this case, it is much easier to see the excellent agreement between the BEST and GISS data. The absolute value of the anomaly doesn’t matter because that is dependent on a base reference point — it is the differential that matters.

      • Jim D:

        Web showed that the local stations agree more with BEST, at which point all heck broke loose, especially complaining that Web was looking at the stations and shouldn’t have.

        To be a bit more detailed, Web didn’t understand Brandon was looking at the differences in the gridded products, even though Brandon quite clearly stated that he was.

        Web also claimed that Brandon made an obvious error, but Web never identified what it was. I have to assume that Web was lying.

        And Web accused Brandon of dishonesty (subtracting a trend), when in fact Web should have been accusing himself of incompetency.

        And all Web’s done since then is prevaricate and stall and attack other people’s characters, stooping as low as to mock other people for their current professions.

        And at this point, Web’s reputation is in the toilet, because he put it there himself.

        The fact remains that BEST seems to have an issue with smearing regions with larger trends into regions with smaller trends. This is probably because BEST’s adjustment scheme allows stations as far away as 1000 km to be used in making adjustments. This is, I think, the origin of the spatial smearing in their adjustments.

        I think there are technical issues with how BEST is computing the correlation that are exacerbating the spatial smearing problem in their kriging function:

        I think they should detrend before computing the correlation, otherwise the fact that the series has trends dominates the computation of the correlation, and this results in biased-high estimates of the trends. They are also assuming axial symmetry which I believe is a mistake.

        The argument over whether to detrend or not is an ongoing one in climate science, but I’m pretty certain signal processing theory would say “you should detrend”.
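
        A toy illustration of why the detrending matters (synthetic data, not BEST’s actual code): two stations that share nothing but a common linear trend, each with its own independent noise, still come out noticeably correlated unless the trend is removed first.

        import numpy as np
        from scipy.signal import detrend

        rng = np.random.default_rng(0)
        t = np.arange(1200) / 12.0                        # 100 years of monthly data
        trend = 0.01 * t                                  # 0.1 degC/decade, shared by both
        a = trend + rng.normal(0, 0.15, t.size)           # station A: trend + its own noise
        b = trend + rng.normal(0, 0.15, t.size)           # station B: trend + independent noise

        print(np.corrcoef(a, b)[0, 1])                    # fairly high, driven by the trend alone
        print(np.corrcoef(detrend(a), detrend(b))[0, 1])  # near zero once detrended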

      • Carrick, “I think they should detrend before computing the correlation, otherwise the fact that the series has trends dominates the computation of the correlation, and this results in biased-high estimates of the trends.”

        It is going to take more than that to salvage “surface” temperature. In a complex open system temperature can be a poor proxy for energy. You can have identical variations in energy and huge differences in the variation of temperature.

      • Alas poor Carrick, the data is on my side.

        The St. Louis data has sealed Brandon Shollenberger’s fate. This is the same latitude as where Brandon Shollenberger lives, and there is virtually no difference between what BEST estimates and what GISS provides at that location.
        http://imageshack.com/a/img842/6848/ielk.gif
        How can that be so, if BEST is adding a “huge warming trend” in his area, as Brandon Shollenberger has stated?

        Brandon Shollenberger was going to try to play stump the chump by saying they couldn’t get something right based on where he lives, thus playing up a fallacious argument of arbitrary selection, a la the usual phrase “if it can’t get this right, how can it get that right?”.

        What one ought to do, and what your laziness (to quote Brandon Shollenberger’s favorite accusation) precludes, is a full spatial map of trends across the USA using colors to represent what BEST says, and then compare that against what GISS generates.

        So in the end, what you are left with is chasing phantoms, traveling hither and yonder to regions far from where Brandon Shollenberger lives to try to find fault with something as simple as a spatial interpolation function.

        This stuff is really not that hard. If I wanted to find out the historical climate for where I live, I would pick a nearby station that operates with sufficient historical resolution, and then perhaps choose a few others that are near neighbors and do spatial interpolation if necessary.

        My PhD was in spatial autocorrelation of surface structures and you aren’t going to be able to play stump the chump with me anytime soon, poor Carrick.

      • Carrick and Doc Martyn

        “I think they should detrend before computing the correlation, otherwise the fact that the series has trends dominates the computation of the correlation, and this results in biased-high estimates of the trends. They are also assuming axial symmetry which I believe is a mistake.”

        BEST does detrend in some manner with respect to latitude and elevation, though I must admit I find the available discussion extraordinarily obtuse. See

        http://www.scitechnol.com/2327-4581/2327-4581-1-103.pdf [pp.2-4]
        http://www.scitechnol.com/2327-4581/2327-4581-1-103a.pdf [p.2, p.4, p.8]

        Frankly, IMO these papers are a mess and the code virtually undocumented when compared to the norm seen with workhorse codes from the NRC, DOE, and USGS. I’m tired of being nice about it; it is what it is. When BEST first came out a couple of years ago I considered it idiosyncratic but a step in the right direction and having potential. I respect the people working hard on it, but it is a shame that there isn’t really much to it beyond the unfinished data compilation.

        I’ve not even riffed on the constant in time function here (OT).

      • “stooping as low as to mock other people for their current profession”

        I never mocked his profession. As it turns out, I tried his home page to see if I could figure out where he lives and the clue is that he works in Salem, Illinois at last count.
        http://www.facebook.com/search/more/?q=Brandon+Shollenberger&init=public

        This puts him 1 degree latitude south of Springfield, as he admitted to upthread, and helps explain why he chose Springfield to pencil whip into submission. He obviously wanted to show that an arbitrary choice of where one lives was enough to show that BEST was wrong. And a massive fail resulted from that decision of his.

      • Correction on last sentence…

        I’ve not even riffed on the constant in time correlation function here (OT).

        Oops, I finally made a mistake. It says right there on his page that Brandon Shollenberger lives in Collinsville, Illinois, which in fact is an eastern suburb of St. Louis.

        So I was right in looking at data for St. Louis. Why do you think Brandon Shollenberger went on and on about Springfield? It’s close but not closer to his “area” than St. Louis.

      • Carrick and Doc Martyn should be Carrick and captdallas above [ http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-608087 ] …brain cramps

        It’s interesting that Mosher and Zeke have not weighed in on this, but have left defense of BEST to a dishonest, ditzy, anonymous blog character, who doesn’t know the difference between a state and a city. It’s unlike Mosher to pass up an opportunity to set Brandon straight.

      • Don Monfort wrote

        “It’s interesting that Mosher and Zeke have not weighed in on this, ….”

        I do not think anything here really needs a response, both in terms of content and forum, so why wade into it? Staying out shows good judgement and priorities. As a general rule, not every comment unclothes an emperor. ;O)

      • Just so people know, the reason I’m not commenting now is things have gotten creepy. The central point of the dispute is WebHubTelescope is still maintaining his stupid position that I:

        query for that location from comprehensive web services such as BEST or GISS.

        When it’s been made beyond clear that’s not how I got the data I got. However, the current path he’s going down is a different story. He’s trying to track down my workplace, my hometown and god knows what else, all for no reason. I told people the 1º x 1º gridcell I live in. Knowing my location with any more precision couldn’t possibly help when examining anything I’ve said. It could only serve to make someone a creep.

        Fortunately, basically every piece of information WebHubTelescope has posted about me has been wrong. It’s good to know WebHubTelescope is bad at stalking people.

      • mw,

        Last I saw an account of comments on this thread, Mosher had about 130. Carrick observes that BEST may be smearing data from 1000km distance. I think that deserves a comment/explanation. I bet I could find plenty of Mosher’s comments that are responses to lesser issues.

      • Brandon, someone has trashed his already dubious reputation and he has become unhinged. Pathetic.

        mwgrant, thanks for the comments. They are doing something right if they are properly detrending the data before computing correlations. So that would leave as the biggest suspect here the use of stations a long distance away from a site in adjusting its temperatures.

        I think your complaint is one shared by many of us. BEST is a maturing product, not a mature product. On such projects, one can expect the documentation to be often incomplete, misleading and frequently out of date. That just happens because of the rapid rate at which it’s getting developed.

        I should note that I’m concerned about the spatial resolution of the regional-scale pattern of global warming, not whether BEST is giving an accurate picture of, e.g., temperature integrated over the land mass.

        I don’t think BEST is far off the other methods in that respect. To test this, I would like to compute the trend integrated over latitude bands (“zonally averaged”) for regions that are land-only for the various temperature field reconstruction methods.

        So this isn’t about “denying global warming” as the mindless mind-reading WHT seems to think. I don’t think it is that for Brandon either, but he can state his own motivations if he wants to.

      • WebHubTelescope

        “I told people the 1º x 1º gridcell I live in.”

        I am trying to figure out which location Brandon Shollenberger used. But I found that he creepily wrote a blog post where this exchange took place:


        Nick Stokes July 11, 2014 at 11:29 pm
        Brandon.
        I read both your posts, but I still can’t find it stated what area you are actually plotting. Which grid cell?

        Brandon Shollenberger July 12, 2014 at 12:14 am
        Nick Stokes, I didn’t post the exact grid cells because I instinctively shied away from saying where I live. Sorry about that. I should have made sure to specify which gridcell the data was from.

        So one is left guessing as to what location Brandon Shollenberger actually used.

        Brandon Shollenberger is a renowned hacker of web sites and one thing hackers like him can’t stand is when others start hacking them. They get awfully defensive. The fact that I simply used his public information, which he can choose to make private, has got him upset.

      • I think you’re being a creeper WHT, which is arguably even a more low-life behavior than the lying that you’ve been engaged in on this thread.

      • @Brandon Shollenberger
        “Just so people know, the reason I’m not commenting now is things have gotten creepy.”

        Absolutely no doubt. Looks like Judith may have acted as at least one comment appears to be gone.

        @Don Monfort

        “Last I saw an account of comments on this thread, Mosher had about 130. Carrick observes that BEST may be smearing data from 1000km distance. I think that deserves a comment/explanation.”

        Just some thoughts…

        Regarding content: Kriging is a known smoother, but I do not think that is the play here. Using data from 1000 km or more speaks for itself, and that isn’t good. When looking at semi-variograms (related to the correlation function) of annual station data, for each individual year and for physiographically distinct regions, the correlation distance is much less, say 400-600 km (a rough sketch of such a distance-binning check is given after this comment). Zeke made a passing comment here or in a recent CE posting that correlation distances are smaller when the time period is decreased. [I may have mangled the wording but that is the idea.] Similarly, recent comment exchanges with Steven Mosher suggest to me that they are looking at issues such as this. So clearly there is some awareness in those quarters. What, if anything, will come of it is anybody’s guess, but it is their prerogative.

        Regarding forum…

        First, the thread is just too poisoned by one commenter. If I was working on BEST or CW and had things to do, I’d avoid the thread like the plague. Also the posting topic is not on the kriging or gridding. Sure there are overlaps and connections, but the kriging product grids is downstream.

        Nothing requires those people to share at this blog what they are doing or have done. I appreciate what they say on their dime even if I may disagree.

        But all that is IMO.
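
        A rough sketch of the correlation-versus-distance check described above (binning pairwise correlations of detrended annual anomalies by station separation). The station coordinates and anomaly series are placeholders to be filled in from whatever station set one is examining; the bin edges are arbitrary.

        import numpy as np

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance between two points, in km."""
            p1, p2 = np.radians(lat1), np.radians(lat2)
            dlat, dlon = p2 - p1, np.radians(lon2 - lon1)
            a = np.sin(dlat / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2) ** 2
            return 2 * 6371.0 * np.arcsin(np.sqrt(a))

        def correlation_vs_distance(lats, lons, series, bins_km=(0, 250, 500, 1000, 2000)):
            """series: 2-D array, one row of detrended annual anomalies per station."""
            dists, corrs = [], []
            for i in range(len(lats)):
                for j in range(i + 1, len(lats)):
                    dists.append(haversine_km(lats[i], lons[i], lats[j], lons[j]))
                    corrs.append(np.corrcoef(series[i], series[j])[0, 1])
            dists, corrs = np.array(dists), np.array(corrs)
            for lo, hi in zip(bins_km[:-1], bins_km[1:]):
                mask = (dists >= lo) & (dists < hi)
                if mask.any():
                    print(f"{lo:5.0f}-{hi:5.0f} km: mean r = {corrs[mask].mean():.2f}")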

      • Carrick,

        Carrick, I think your comments are on the mark and I enjoy your consistent ability to focus. I agree that an approach at a regional level is likely to be much more productive and useful. A natural (categorical) variable to be incorporated would be something akin to physiography. As I noted above, “When looking at semi-variograms (related to the correlation function) of annual station data, for each individual year and for physiographically distinct regions, the correlation distance is much less, say 400-600 km.” For my quick looks at the US, mixing mountains and plains plays hob with the semi-variograms. Oh well, I looked enough…

        Just to be clear (and I think you understood) I have no knowledge whether “they are properly detrending the data before computing correlations.” :o) I try to keep my suffering to a minimum these days.

        and regards

      • It’s amusing WebHubTelescope says:

        one is left guessing as to what location Brandon Shollenberger actually used.

        Given the only way to be “left guessing” is to refuse to look at what I said. However, the really amusing part is:

        Brandon Shollenberger is a renowned hacker of web sites

        I’m a renowned hacker now?? When did that happen? Leaving aside the bogus accusation of criminal offenses, how am I renowned?

        Oh well, at least as mwgrant points out:

        Absolutely no doubt. Looks like Judith may have acted as at least one comment appears to be gone.

        Yup. You can tell this by what’s left as WebHubTelescope refers to the nature of (part of) that comment. And while we’re on the matter:

        Just to be clear (and I think you understood) I have no knowledge whether “they are properly detrending the data before computing correlations.” :o) I try to keep my suffering to a minimum these days.

        Saying they detrend the data is misleading. They did not detrend it. They estimated climatological parameters for latitude, altitude and season. They then removed those. That’s not detrending in the sense most people would interpret it, as it has no time component.

        It’s really just a way of anomalizing the data. It can remove absolute differences in temperatures, but it cannot, by definition, remove differences in trends of temperatures.
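
        A toy check of that point: subtracting a per-calendar-month climatology from a synthetic series removes its seasonal cycle and shifts its level, but leaves the linear trend essentially untouched. The numbers are arbitrary, chosen only for illustration.

        import numpy as np

        rng = np.random.default_rng(1)
        months = np.arange(1200)                             # 100 years of monthly data
        t = months / 12.0
        seasonal = 10.0 * np.sin(2 * np.pi * months / 12)    # annual cycle
        series = 12.0 + 0.01 * t + seasonal + rng.normal(0, 0.2, months.size)

        # climatology: one mean value per calendar month, then subtract it
        climatology = np.array([series[months % 12 == m].mean() for m in range(12)])
        anomalies = series - climatology[months % 12]

        print(np.polyfit(t, series, 1)[0])       # trend of the raw series (degC/yr)
        print(np.polyfit(t, anomalies, 1)[0])    # essentially the same trend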

      • mwgrant, thank you for the comments.

        I don’t know what role kriging is playing here, if any, in the apparent spatial smearing seen with BEST for this region of the US, but then I really don’t understand their integrated methodology either (the fact that the methodology seems to change more rapidly than they can write updates to it doesn’t help).

        I do know that the assumptions of spatial, azimuthal and temporal invariance in the kriging function are likely not correct here. Probably that plays a bigger role in regions that have little data.

        I have some ideas for how one would measure these effects with Monte Carlo without having to dismantle their program. Basically the idea is you measure sensitivity to changes (resolution) rather than trying to separate out the physics.

        So imagine you start with their data set.

        You then make a perturbative change (could be adding a step, could be modifying the trend of a station, etc).

        You then measure the difference in whichever metrics you are interested in (gridded trends by cell for example) between the original and the perturbed data set.

        You could test to see how varying the trend of a station with a long temporal record with few drop outs affects the regional trend estimate for example.

        Or you could do an analysis of variance (ANOVA) using whatever categorizing variables you want. Station density, percent drop outs, latitude, elevation, proximity to the ocean, etc.

        Of course this is a lot more work than I have time to devote to it, even though it isn’t a technically difficult thing to implement.
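
        A skeleton of that perturbation experiment. Here run_analysis stands in for whatever pipeline (BEST, GISTEMP, or anything else) turns a set of station records into a gridded trend field; it and the station-record structure are placeholders, not an existing API.

        import copy
        import numpy as np

        def perturb_station_trend(stations, station_id, extra_trend_per_decade):
            """Copy the station set, adding an artificial linear trend to one record."""
            perturbed = copy.deepcopy(stations)
            rec = perturbed[station_id]                    # assumed dict with 'years', 'anomalies'
            years = np.asarray(rec["years"], dtype=float)
            rec["anomalies"] = np.asarray(rec["anomalies"]) + \
                (extra_trend_per_decade / 10.0) * (years - years[0])
            return perturbed

        def trend_sensitivity(run_analysis, stations, station_id, delta=0.1):
            """Change in the gridded trend field per unit of trend injected at one station."""
            base = run_analysis(stations)                  # e.g. 2-D array of cell trends
            perturbed = run_analysis(perturb_station_trend(stations, station_id, delta))
            return (perturbed - base) / delta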

      • Brandon Shollenberger said:


        I’m a renowned hacker now??

        Because you say so?


        Skeptical Science author Bob Lacatena wrote a 2,500 word post discussing the hacking of their web site two years ago. About the only kind of hacking he described is his own as his comments prove he is a hack. He has no idea what he is talking about. Why should you believe me, you ask? That’s simple: I hacked Skeptical Science.

        http://hiizuru.wordpress.com/2014/02/24/skeptical-science-hacked-or-just-a-hack/

        That’s from Brandon Shollenberger’s own blog, and he can possibly deny this by saying someone hacked his blog to insert those words … but I kind of doubt it.

        I am sure Brandon Shollenberger has a significant readership because many of his readers are hoping that Brandon Shollenberger will uncover the next “climategate” through his exploits.

      • Webby

        I didn’t notice Don’s sub-thread, so I posted this reply to you in the wrong place.

        As far as I can see Brandon did not admit to being a hacker.

        http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-608258

        Tonyb

      • Wow. WebHubTelescope is really silly. He quotes this intro to a post:

        Skeptical Science author Bob Lacatena wrote a 2,500 word post discussing the hacking of their web site two years ago. About the only kind of hacking he described is his own as his comments prove he is a hack. He has no idea what he is talking about. Why should you believe me, you ask? That’s simple: I hacked Skeptical Science.

        But somehow ignores this concluding statement to it:

        And if you need a reason to trust my knowledge of hacking, just remember, I hacked Skeptical Science. They said so.

        Which of course directs the reader to a post titled, “Google Hacked the ‘Skeptical Science’ Website.” Nobody could read that and think I was seriously claiming to have hacked Skeptical Science. That’s especially true as that article links to a previous post which begins:

        I’ve got mad haxor skillz. I’m a l33t hacker paid by evil organizations and shadowy conglomerates. That’s how I found Skeptical Science’s secret stash of Nazi fantasies. Or so some would have you believe.

        It’s obvious I was being facetious. I’ve been making the same sort of joke for going on a year now. The entire point of me saying things like that is to mock the people who accuse me of hacking into things.

        I routinely mock people for accusing me of certain crimes, and WebHubTelescope cherry-picks a quote from that to pretend I’ve actually confessed to those crimes!

      • Steven Mosher

        mw

        ‘First, the thread is just too poisoned by one commenter. If I was working on BEST or CW and had things to do, I’d avoid the thread like the plague. Also the posting topic is not on the kriging or gridding. Sure there are overlaps and connections, but the kriging product grids is downstream.

        duh.

      • So “Webby” thinks a tongue-in-cheek post by Brandon is an admission of being a hacker?

        LOL. “Webby” seems intent on performing an act of self-evisceration, for all to watch.

        The schadenfreude has become palpable.

      • Steven Mosher

        not likely mw.
        Way and I were working last night on some satellite data.
        He mentioned somebody suggesting that GISS was suppressing the trends with their hokey method. I laughed recalling the work of JeffId and RomanM. I scanned the list of names for a sane one, and yours popped up.

        Long term goal will be to add something similar to Tomas’ work and push the res down to 1km

        nice notes about the south pole and high alt few will get

        http://onlinelibrary.wiley.com/doi/10.1002/2013JD020803/abstract;jsessionid=325CF71333833ED37095F605E38CC7D5.f02t04

        back to work now,

      • Don Monfort

        Well, Steven has shown up. Nice work.

        Webby was so counting on you bailing him out. The little fella will be devastated.

        We will look forward to your more comprehensive rebuttal of Carrick’s and Brandon’s expressed doubts about BEST’s methods, on a less poisoned thread. Maybe Mr. mwgrant can get a more saccharin thread started.

      • I have no idea what Steven Mosher was telling mwgrant is “not likely,” but I’m more curious about this comment of his:

        Way and I were working last night on some satellite data.
        He mentioned somebody suggesting that GISS was suppressing the trends with their hokey method. I laughed recalling the work of JeffId and RomanM.

        Who supposedly suggested GISS was suppressing trends with a hokey method? As far as I know, nobody has criticized GISS. How has Mosher gotten this issue so completely wrong? I could understand if things got muddled in translation, but if he even skimmed any of these exchanges, he’d know that’s not what people were claiming.

      • @Steven Mosher

        Thanks for the reference, it looks interesting. Spatio-temporal, I’ve been mulling that a little lately; I noticed package support for that in R a little while back (gstat is one of the packages I’ve used, and it has some spatio-temporal capabilities). I’ve never looked at that side of it; my work experience was not having enough data at a single time! Different universe. Glad to see that pop up.

      • Yup, I applied a downstream product of the BEST kriging algorithm to debunk what Brandon Shollenberger asserted.

        You guys do understand what a downstream product is?

      • Don Monfort

        Sweeeet!!!!!! I thought it would be great fun to muck up ‘your’ thread with civility.

      • Downstream product? Heh.

        For some reason, this reminds me of something Don Monfort said:

        Bottom line: you are faking it, webby.

      • When Carrick quotes Don Monfort, some strange cosmic tipping point must have been reached.

      • CaptDallas:

        When Carrick quotes Don Monfort, some strange cosmic tipping point must have been reached.

        Yes, yes it has. It occurred at the point where Webby tried to hide behind lingo, though sadly he used it wrong.

        Don nailed it. Webby is faking it.

        Is he perhaps a high school student? That would fit with the behavior.

        If so, I need to play nicer.

      • The definitions are from the materials science industry, which I have been involved in since my graduate school days. The raw data is an upstream product, as are the basic algorithms necessary for processing (this would be equivalent to raw material and a refining process). The downstream products are the final output, which in the case of BEST would be their slick web interface where you can dial in a city and then get a fully interpolated and processed historical time series as an output.

        That’s what I used. So I dial in St. Louis and compare to GISS.
        http://imageshack.com/a/img842/6848/ielk.gif

        Works great and the two sets seem to agree.

        Technically, “upstream” and “downstream” refer to process flow rather than to specific products.

        See the wiki definition: “Downstream in manufacturing refers to processes that occur later on in a production sequence or production line.”

        Or see this:

        http://smallbusiness.chron.com/definitions-upstream-downstream-production-process-30971.html

“Downstream product monitoring” refers to “downstream monitoring of a product”, not “monitoring of a downstream product”.

        The point is you are manufacturing/developing/creating a product, and “downstream monitoring” is monitoring the product while it’s being created. You’re not monitoring a product after it’s been created. It doesn’t even make sense to make that distinction.

        It’s just a “product”.

People do sometimes say “downstream product” when they mean “derived product”, but that’s the misuse of lingo I was referring to. It makes them sound clever (see above), but it shows a lack of understanding of the lingo they are bandying about.

Carrick, you may have worked in an industrial environment, but you don’t realize that these terms are fluid. Processes work on materials to create interim products. If the interim products are upstream, then they would be upstream products. I know the semiconductor industry: the raw materials are the elemental materials, the interim upstream products are wafers made from these materials, and downstream are the litho, etc., processes leading to the finished products.

All that does not really matter, because what matters is what Mosher meant when he referenced the term downstream. He is in the heart of Silicon Valley, so I assume he knows what he meant.

      • Steven Mosher

Upstream is the source you got your stuff from. Standard open source term.
GISS uses USHCN adjusted data, and then merges short stations into long ones using the RSM algorithm criticized by RomanM, McIntyre, and Tamino. On top of that they apply a UHI fix which is a horrid hack.
GISS is downstream of everything.

Steven Mosher, I’m not sure how anything you just said changes what I said earlier. At least I know where Webby got the term from now.

        People abuse language, especially programmers, that’s normal. I wouldn’t have said anything if he hadn’t acted so happy about learning a new word.

      • One phrase that Carrick has learned is “own goal”.

        His analysis of the Springfield area essentially proved the point that Brandon Shollenberger short-sheeted the GISS warming trend by several hundred percent. And he never did the sanity check of simply looking up the Springfield station data from GISS, which would have shown something was seriously wrong.

        That’s the take-home message. This stuff isn’t as hard as you make it out to be. Use the downstream products first and you will understand how it can save you lots of time.

      • Don Monfort

        It has been fun, but this thread looks like it is trending towards getting too civil for my taste. And I fear that we are distracting Mr. mw from his spatio-temporal mulling. I bet he is the capt of spatio-temporal mulling. Which is almost as impressive as being the capt of dallas. I salute you low ranking anonymous blog characters and wish you a good night.

      • Don Monfort

        Carrick, hasn’t Mosher changed since he was taken under Muller’s wing and they got that paper published in that pay-for-play journal-of-last-resort? Dude used to have some humility. Now he won’t give his old pals a straight answer.

    • Steven Mosher

      Carrick

      “I looked at the US SE because I’ve been aware for a while that it doesn’t follow the general global trends: much of it has seen a net cooling over the last century.”

On what empirical basis are you “aware”?

      because of GISS? CRU? NCDC? something else?

      GISS? assuming it is true
      CRU? assuming it is true
      NCDC? assuming it is true.

      At best you can find a difference between various reconstructions.

Then you simply have to ask yourself: which interpolation method has been shown to reduce error the most?

Since you don’t have ground truth, you only have various estimates to compare. And the comparison can only tell you that they differ.

Diagnosing the cause isn’t a simple matter of declaring one of them to be ground truth.

Who knows why GISS fails to capture the trend correctly:

1. their method, when tested against synthetic data, is not better
2. their method of combining stations (RSM) has known correctable flaws
3. They use adjusted data
4. They further adjust adjusted data

      What people need to do is forget the notion that there is some historical truth captured by records or methods. There is only an estimate of the past.
      And we can always expect various methods to come up with different estimates. The question is which method is consistently better. You can get a handle on that by doing tests on synthetic data where ground truth is known.

      So start with a methods test.
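
As an illustration of what such a methods test can look like, here is a minimal sketch (Python, entirely synthetic data and made-up numbers; not any group’s actual algorithm): generate a shared climate signal, add a known station break, and check whether a simple pairwise-difference test recovers it.

```python
# Minimal sketch of a methods test on synthetic data (not any group's actual code).
# A known +0.5 C break is added to a target station; a crude pairwise-difference
# test is then asked to recover the break year.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2000)
climate = 0.007 * (years - years[0]) + rng.normal(0, 0.15, years.size)  # shared signal

neighbor = climate + rng.normal(0, 0.10, years.size)   # homogeneous neighbor
target = climate + rng.normal(0, 0.10, years.size)
break_year = 1955
target[years >= break_year] += 0.5                      # known inhomogeneity (e.g. a move)

diff = target - neighbor                                # shared climate cancels, break remains

def detect_break(d, margin=5):
    """Index of the largest mean shift in the difference series."""
    scores = [abs(d[:i].mean() - d[i:].mean()) for i in range(margin, d.size - margin)]
    return margin + int(np.argmax(scores))

estimated = years[detect_break(diff)]
print(f"true break year: {break_year}, estimated: {estimated}")
```

A fuller benchmark would also include cases with no break and with cooling breaks, so you can check that the algorithm removes warm and cool biases symmetrically rather than preferentially in one direction.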

      • southeast 32n-37n 80w-85w

        http://climexp.knmi.nl/data/ighcn_cams_05_-85–80E_32-37N_na.png

        http://climexp.knmi.nl/data/igiss_temp_land_250_-85–80E_32-37N_na.png

At 250 km you really don’t want to smear, doncha know, ’cause this happens:

        http://climexp.knmi.nl/data/iberkeley_tavg_-85–80E_32-37N_na.png

        Trivia: What part of the US planted the most trees in the last half of the 20th century?

      • WebHubTelescope

        Cappy,

        Try Hotlanta:
        http://data.giss.nasa.gov/cgi-bin/gistemp/show_station.cgi?id=425722190000&dt=1&ds=12

        And try any other station as you see fit. Look at the ways the data is being processed. Listen to what Mosh is telling you about what constitutes ground truth.

        Mosher has years of experience in the modeling&simulation of engagements, where you learn the concept of “ground truth” the first day.

Webster, I am sure that your medication will be refilled soon; then perhaps you can catch up with the conversation. If you selected just the rural stations for the 5 degree grid I used and compared them to just the urban stations in the same grid, they would likely have different trends. Your picking Hotlanta just shows your lack of common sense, since the conversation has been about gridded areas from the start. Kriging or any other long range interpolation is fine for large area estimates but sucks on smaller regional scales, because it tends to smear temperatures from outside onto the grid in question. You would need to krige inside the grid only.
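
As a toy illustration of that smearing concern (not BEST’s actual kriging, just a Gaussian distance-weighted average in Python with invented station trends), you can compare a short and a long interpolation length scale for the same grid box:

```python
# Toy illustration of interpolation "smearing" (not BEST's actual method):
# a grid box with locally cooling stations surrounded by a warming region,
# estimated with a short vs. a long distance-weighting length scale.
import numpy as np

rng = np.random.default_rng(1)

inside_xy = rng.uniform(0, 5, size=(10, 2))            # stations inside a 5x5 deg box
outside_xy = rng.uniform(-20, 25, size=(200, 2))       # stations in the wider region
outside_xy = outside_xy[np.any((outside_xy < 0) | (outside_xy > 5), axis=1)]

inside_trend = rng.normal(-0.05, 0.02, inside_xy.shape[0])    # local cooling (C/decade)
outside_trend = rng.normal(+0.15, 0.05, outside_xy.shape[0])  # regional warming

xy = np.vstack([inside_xy, outside_xy])
trend = np.concatenate([inside_trend, outside_trend])

def weighted_estimate(target, xy, values, length_scale):
    """Gaussian distance-weighted estimate at a point (a crude kriging stand-in)."""
    d2 = np.sum((xy - target) ** 2, axis=1)
    w = np.exp(-d2 / (2 * length_scale ** 2))
    return np.sum(w * values) / np.sum(w)

box_center = np.array([2.5, 2.5])
for length_scale in (1.0, 10.0):    # degrees
    est = weighted_estimate(box_center, xy, trend, length_scale)
    print(f"length scale {length_scale:4.1f} deg -> box trend estimate {est:+.3f} C/decade")
```

With the short length scale the box estimate is dominated by the local stations; with the long one the surrounding warming pulls it positive. Whether that is a flaw or a feature depends on how far real temperature anomalies actually correlate, which is the empirical question behind the choice of correlation length.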

      • What people need to do is forget the notion that there is some historical truth captured by records or methods. There is only an estimate of the past.
        And we can always expect various methods to come up with different estimates. The question is which method is consistently better. You can get a handle on that by doing tests on synthetic data where ground truth is known.

Webby, in this case the “Ground Truth” is the surface measurements, as flawed as they are. Methods might make them better, but more likely it just increases the uncertainty.
If some really think methods can fabricate a better answer than the actual measurements, why don’t we just put a few hundred stations in a square mile or two and use that to calculate a global average? We have a model of how those temps relate to temps thousands of km away, so what’s not to like?

      • WebHubTelescope

        The question has been whether BEST is introducing a “huge warming trend”.

        So the way you look at that is to take the station data from the GISS site and try to reconstruct it yourself by applying plain common sense.

        The assertion by Brndn Shllnbrgr was that BEST was “wrong”. You don’t prove that by looking at another set of processed data that also could be “wrong”. That raises the question of which one is closer to ground truth?

        You should really try to read what Mosher wrote.

      • So the way you look at that is to take the station data from the GISS site and try to reconstruct it yourself by applying plain common sense.

Maybe the better plan, since we’re talking about a small area, is to get the station records from the NCDC, construct a trend, and then compare that to both!

  223. A fan of *MORE* discourse

    lolwot calls the score “Climate Science 1, Climate Deniers 0”

    Yes. Moreover, Slashdot’s enormous cohort of STEM professionals agrees, and their critique of denialism is scathing.

    Your many cogent observations are appreciated, lolwot!


    • I can see a nose on your back, I am a stem cell researcher.

      http://www.dailymail.co.uk/health/article-2685842/Stem-cell-patient-grows-NOSE-eight-years-treatment-cure-paralysis-failed.html

      Only eight more years and you will see the paws. The doctor is doing well.

    • Matthew R Marler

      A fan of *MORE* discourse: their critique of denialism is scathing.

      It would be helpful if you could quote some “denialism” and the specific critiques. There is no “denialism” or “climate change denialism” that I know of; there are critics and critiques of specific claims about CO2 influences and the adequacy of the instrumental and proxy temperature records. With some quotes you could clarify what the heck you are writing about. I have found that your quotes of Hansen and his work generally show him not to have made any accurate predictions, and I suspect that your claims of “denialism” are baseless.

      It is amusing that among the recent attendees at the meeting in Las Vegas, as reported by Lord Monckton, 100% of attendees agreed that some amount of climate change was attributable to some human activities.

      That’s close to the 97% in Cook’s “survey” that found agreement to an equally vague statement of human effects on climate change.

      • Matthew R Marler wrote:

        It is amusing that among the recent attendees at the meeting in Las Vegas, as reported by Lord Monckton, 100% of attendees agreed that some amount of climate change was attributable to some human activities.

        That is strange. There was not a single one at all who dissented?

I wonder whether this Lord Monckton who reported that knows the person with the name Christopher Monckton who writes at Anthony Watts’ junk science blog. This Christopher Monckton just recently denied not only any anthropogenic contribution to global warming, but global warming itself altogether:

        “The last U.S. winter colder than this one was in 1911/12, before the First World War.
        Thank you, America! Most of Britain has had an unusually mild and wet winter, for you have had more than your fair share of the Northern Hemisphere’s cold weather this season.
        Global warming? What global warming?”

        (Christopher Monckton, March 26, 2014, http://wattsupwiththat.com/2014/03/26/coldest-u-s-winter-in-a-century/)

Zeke, can you help? Upthread we asked to see the 1950 temperature in Portland as observed and measured, and the adjusted and corrected one. Pick a month.
    Scott

Looking back, it was July 8, 1950.

      What is the measured temperature and what was it adjusted to read?
      Scott

I always wondered how much of the climate records were tampered with for the global warming alarmist money/agenda. Initially, I liked the idea of our societies becoming more environmentally cognizant, but then I became wary of the alarmist view after I studied the “hockey schtick”. It was outright incompetence or fraud.

I grew up on Lake Michigan, and am now a retired geologist (one who worked only 2 months for industry in my entire career, so I am not aligned with oil giants or industry; I just like the scientific method a heckuva lot). My area of expertise is the most recent geologic history (Quaternary geology).

So, this past winter, as the extreme and extended cold spell continued in Michigan and the mid-continent, I was noting the Great Lakes ice growth in detail on both the NOAA and the Canadian Ice Services websites. It became obvious that, as record ice growth became inevitable or very likely, the records were changed/obfuscated/etc. The extent they would go to simply amazed and frustrated me.

    • Just remember that ‘they’ are by extension those WE continue to put in office and pay for out of our taxes. WE live in a fundamentally dishonest society — WE the People are a part of that (we are paying school teachers to lie not just to us but in classrooms across the country) — and, WE need to recognize that before we can expect anything to ever change.

      • Hey, Wagathon, we probably agree on a lot. However, I don’t need instruction in Political Science 101 at this stage in my life on a science thread.

Welcome to what Tim Ball calls the World of (Political) Science. The pretense to knowledge is all that keeps the government-sponsored Ponzi scheme afloat. Voters are not as smart as they used to be. They believe those who work for the government when they say, “we have modeled your future,” and then the people don’t understand when they learn that “Global warming computer models are confounded as Antarctic…” (It’s unprecedented: across the globe, there is about one million square kilometers more sea ice than 35 years ago, which is when satellite measurements began).

      • @Wagathon

        Okay, that feels better … more like Pol. Sci 401. Nice link, too, which I have followed (the strong Antarctic sea ice trend) with great interest. Several lines of data are suggesting to this old geologist that the globe is much more likely in a cooling trend.

jrlagoni,
    Do you have evidence of this?
    ——————-
I’ve now read all the comments above and, as in my earlier post, appreciate Zeke’s (and also Mosher’s) description of the types of adjustments and the reasons for them. I now better understand many of the difficulties and nuances in compiling temperature records, and I think the effort to create an alternative, better temperature record is praiseworthy. However, I agree with omanuel and others that, despite the clarity, it’s not something politicians, pundits, or the public will understand. They want a sound bite, a 100-words-or-less answer, not an essay with charts and over 2000 comments. While we all want the best temperature records possible, I think it would be better to refer to the original instrumental records with footnotes that suggest why the original instrumental record is biased hot or cold as changes to the instruments and sites occurred. The original instrumental temperature record is something everyone would understand, and most could understand and even accept footnote explanations if they were interested enough to look that deeply. It’s not as though we’re talking about two very different temperature reconstructions or trend lines. There’s a puny ~0.1 C global temperature adjustment difference that we’re talking about. Why create so much room for misunderstanding and distrust (and confirmation bias) for a puny ~0.1 degree difference?
My previous post noted that this worthy effort at a more accurate temperature reconstruction has an unintended consequence: diverting attention from how little warming we’ve actually had (either the original instrumental or adjusted), a mere 0.8 or 0.9 degrees C since the beginning of the industrial revolution and more than halfway to a CO2 doubling.

    • @Doug,
      “There’s a puny ~0.1C global temperature adjustment difference that we’re talking about. ”

It’s more than 0.1°; basically max temps are nearly flat globally since the ’50s.

Yes, I have evidence of this, as I saved the data and maps reported every day as the ice was growing on the Great Lakes. It is not incidental that that large city at the southern tip of Lake Michigan had its coldest meteorological winter season (Dec-Mar average temperature) ever recorded in 140 years of record keeping. Even NOAA reported this fact. Chicago had its coldest winter ever recorded – over the past 140 years!

  227. The adjustments are about creating the appearance of improving the accuracy of the temperature records. It makes it seem as if they are correcting the biases out of the data. But it also allows two things:
    1. to exaggerate the biases that would show warming
    2. to underestimate the biases that would show cooling

So it is sciency without being actual science. Science would conduct experiments and measure the biases (like UHI and station siting) and subtract the measured biases out of the data. Sciency explanations make up algorithms that do not work but are so complex and baffling that those using them can’t figure out that their answers are wrong, provided that when averaged together they provide the ‘right’ answer.

    • Agreed.

      The way I look at it, even with people doing what Jones and Wigley were doing…making up how much to lower the 1940 SST blip (as much as they could without drawing too much attention, knowing that if they did that it would lower global average more than a land adjustment), still, in the end, these people cannot control it forever in that manner.
If you raise today’s temp, then in ten years, if there is a rise, it will make for a lesser trend than if they hadn’t raised today’s temp. If there is a pause, it would show a drop.

So as to warming/cooling facts, they can’t easily be corrupted “very much” that way.
      Short-term the activists can cause enough grief to get political action, though.
      Or save reputations for a decade or two.

      • Agreed with Doug Allen on this:

        ” I think it would be better to refer to the original instrumental records with footnotes that suggest why the original instrumental record is biased hot or cold as changes to the instruments and sites occurred. The original instrumental temperature record is something everyone would understand, and most could understand and even accept footnote explanations if they were interested enough to look that deeply. It’s not as though we’re talking about two very different temperature reconstructions or trend lines. There’s a puny ~0.1C global temperature adjustment difference that we’re talking about. Why create so much room for misunderstanding and distrust (and confirmation bias) for a puny ~0.1 degree difference.
        My previous post, that this worthy effort at a more accurate temperature reconstruction has an unintended consequence- diverting attention from how little warming we’ve actually had (either the original instrumental or adjusted), a mere 0.8 or 0.9 degrees C since the beginning of the industrial revolution and more than half way to a CO2 double.”

      • unless of course you have an algorithm that changes the daily temperature of the entire time series on a daily basis.

      • bit chilly,

        There’s that.
        It seems a bit like squaring an irregularly-shaped piece of plywood. You go ’round with the carpenter’s square and it gets better and better and you soon stop.
        In this case, the carpenter’s square changes every time.

        The optimist says “Things are looking up”.

Interesting point … eventually they will have to be blatantly dishonest; for now they can just obfuscate, manipulate, and wash their hands. If you weren’t trying to corrupt the record, you would logically keep the original data open for scientists and the public to see, as well as the adjusted data set.

  228. Zeke July 5th, 2014 at 5:32 pm
    angech, All CONUS temperature reconstructions are estimates. Adding an “estimated” tag would be meaningless.
    Steven Mosher | July 10, 2014 at 1:29 pm
all the ingredients are listed, and the recipe as well.
    That’s way better than a mere label.
    No
    No
    No
    put an explanation on the graph that it is not real temperature but an estimation of what temperatures might have been if they had ongoing adjustments.
    Truth in Description.
    That is what is lacking.
    That is what makes it look bad.
Have you bought an American flag recently, Steven? It has a label “made in China”.
I know it is not a real American flag.
Labeled “unpackaged in America by Zeke”, I know he is selling a Chinese flag but pretending it is American.
Like he and you are pretending USHCN is real temperature (no), real historical (no),
and then saying, well, we knew it was.
    Put the label on tell the truth
    like 1218 stations only, 650 original left, rest made up. Never denied by Nick/Zeke/Mosher
    Adding an “estimated” tag would be meaningless.

The only remaining question is whether BEST is a construction or a reconstruction. That’s the most important question, actually.

  229. Very late this evening, actually quite early tomorrow, I finally fit together these final pieces of the 2009 Climategate puzzle:

    https://dl.dropboxusercontent.com/u/10640850/Humanity_Lost_WWII.pdf

  230. David Springer

What… no thanks for explaining the TOBS adjustment in such a clear way? I understand this with crystal clarity. I’m not totally convinced it’s implemented appropriately with just a single paper and 200 stations to represent the entire world, but the theory behind it is not rocket science.

    David Springer | July 10, 2014 at 4:56 pm |
    Time of observation does, on average, change the average temperature.

If you reset close to the time when the maximum or minimum is reached and the next day isn’t as extreme, then you get the extreme recorded two days in a row. Afternoon resets get more double extreme highs and morning resets more double extreme lows.

    So say there’s a more or less concerted shift from afternoons to mornings because 10am is very unlikely to be a high or low daily extreme. Which is what happened. So to normalize afternoon readings with morning readings you would subtract something from the afternoon numbers. To know how much to subtract is a different issue. It’s pretty clear there’s a warm bias taking afternoon readings vs. morning.

One way to estimate how much to subtract is to have hourly data, select a certain hour each day to take the previous 24-hour min/max from, and see how much they differ. If you have enough data from enough different places to sift through, you can get a good idea of how to adjust.
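
A minimal sketch of that estimation idea, using an entirely synthetic hourly series in Python (the numbers below are invented, not real station data):

```python
# Sketch of the estimation idea above: build a synthetic hourly temperature series,
# then compute the mean of (Tmax+Tmin)/2 from 24-hour windows ending at different
# observation hours and compare them.
import numpy as np

rng = np.random.default_rng(2)
n_days = 365
hours = np.arange(n_days * 24)

seasonal = 10 * np.sin(2 * np.pi * hours / (24 * 365) - np.pi / 2)   # annual cycle
diurnal = -6 * np.cos(2 * np.pi * ((hours % 24) - 5) / 24)           # min ~5 am, max ~5 pm
weather = np.repeat(rng.normal(0, 3, n_days), 24)                    # day-to-day noise
temps = 15 + seasonal + diurnal + weather

def mean_of_daily_means(temps, obs_hour):
    """Average of (Tmax+Tmin)/2 over 24-h windows ending at obs_hour each day."""
    daily = []
    for day in range(1, n_days):            # skip day 0: no full prior window
        end = day * 24 + obs_hour
        window = temps[end - 24:end]
        daily.append(0.5 * (window.max() + window.min()))
    return np.mean(daily)

for obs_hour in (7, 17):                     # morning vs late-afternoon observer
    print(f"observation hour {obs_hour:02d}: mean temp {mean_of_daily_means(temps, obs_hour):.2f} C")
```

With this synthetic series the late-afternoon observer comes out warmer than the morning observer, which is the direction of the bias described above; the magnitude here is an artifact of the invented noise levels, and the size of the real adjustment has to come from real hourly data.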

The thing of it is that the vast majority of land instrumentation is in the continental US and Europe, which represent only a small fraction of the earth’s surface and happen to be areas with extremely high land-use change due to industrialization and agriculture. Getting a measure of global average temperature requires global coverage and, unfortunately, we only have that since 1979. No one desires a better global temperature record to set things straight more than I do, but it just doesn’t exist.

  231. With all the adjustments how can they report the results to a 10th of a degree?

    • David,

      Marketing With Impunity.

      Andrew

    • @ David

      “With all the adjustments how can they report the results to a 10th of a degree?”

      Silly you, they DON’T report the results to 10ths of a degree.

      They report them to 100ths of a degree.

Remember the recent story about May 2014 SMASHING the record as the warmest May on record, with records going back to 1880?

      “Driven by exceptionally warm ocean waters, Earth smashed a record for heat in May and is likely to keep on breaking high temperature marks, experts say. The National Oceanic and Atmospheric Administration Monday said May’s average temperature on Earth of 15.54 C beat the old record set four years ago.”

      The old, smashed record was 15.52 C, so you just know that their adjustments are VERY precise.

  232. Kenneth Fritsch

As I see it from my analyses, the questions remaining to be asked about the instrumental temperature record involve how well we capture and understand the uncertainty involved in adjusting temperatures, and, further, knowing the limitations of the methods currently being used. I agree, as do most rational and informed people, that there has been AGW warming in the last half of the 20th century and that the mean increase that comes out of the major data sets is our best effort to date. I have little doubt that those efforts have been honest and sincere. There is a potential for bias in these matters: those looking for and making adjustments might look harder for those in the direction that favors their own views on AGW. I do not see any evidence of that, and the adjustment processes have been mostly transparent and available for study.

Having said that does not mean that I agree that we have captured all the uncertainty or potential uncertainty in the adjustment processes. We have 3 main sources of uncertainty as I see it: (1) statistical, (2) spatial, and (3) method. The first 2 sources can be and have been fairly easily estimated, if by no better method than brute-force simulations – which I am currently doing for my own edification. The third source requires, in my mind, a proper benchmarking test whereby a known temperature climate is generated, non-climate effects on temperature are added, and finally an adjustment algorithm is applied. Benchmarking has been used and reported previously. A recent proposal has been made here: http://www.geosci-instrum-method-data-syst-discuss.net/4/235/2014/gid-4-235-2014.pdf . I am particularly interested in how the various available adjustment algorithms handle non-climate effects that slowly change station temperatures over time, and in seeing the benchmarking used not only with well-known non-climate effects but also to test the limitations of the adjustments against effects from sources either not well understood or unknown.
A graphical view of the 3 uncertainty sources is in Figure 8 in the link below:

    http://www.scitechnol.com/2327-4581/2327-4581-1-103.pdf
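
For the gradual-drift case in particular, here is a minimal sketch (Python, synthetic numbers, not any published benchmark) of why a slow non-climatic creep is harder for a breakpoint-style test than an abrupt move: a two-segment mean-shift statistic turns the creep into a modest apparent step at an ambiguous location, while a linear fit to the difference series recovers the full drift.

```python
# Hypothetical benchmark case: a slow non-climatic drift (e.g. gradual UHI or
# shelter degradation) rather than an abrupt break.
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1900, 2000)
climate = 0.006 * (years - years[0]) + rng.normal(0, 0.15, years.size)

neighbor = climate + rng.normal(0, 0.10, years.size)
target = climate + rng.normal(0, 0.10, years.size)
target += 0.003 * (years - years[0])        # slow non-climatic creep: 0.3 C per century

diff = target - neighbor                    # shared climate cancels; the creep remains

# Abrupt-break detector: largest two-segment mean shift in the difference series.
step = max(abs(diff[:i].mean() - diff[i:].mean()) for i in range(10, diff.size - 10))
# What is actually there: a linear drift.
drift = np.polyfit(years, diff, 1)[0] * 100  # deg C per century

print(f"largest apparent step: {step:.2f} C;  fitted drift: {drift:.2f} C/century")
```

A fuller benchmark along the lines of the proposal linked above would run many such cases (abrupt breaks, drifts, undocumented changes, combinations) and score each adjustment algorithm against the known truth.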

I have been hoping that these blog discussions would concentrate on the items I have noted above so we can get off these personality clashes. I find that the deterioration of the temperature discussions is fueled not only by those who do not understand the processes and instead harp on motivation, but also by those who should know better than to keep the discussion at the level of the ignorant. It would help the discussion if the interested and knowledgeable parties would present their views on the processes and potential improvements to them, and ignore those who evidently are not here to learn.

  233. Judith Curry,
    This has been one of the most educational threads you have hosted with a lot of thoughtful and interesting comment. I am left with the following impressions:
NOAA, et al., are operating in good faith, if not always transparently.
    Zeke Hausfather and Steven Mosher (along with most others) have been open and honest.
    It appears that many who defend treatment of the historical temperature data consider it to be an academic issue while many who question the record view it as a “fit for use” issue.
The raw database is a mess and any mathematical treatment to resolve problems (e.g. discontinuities, station moves, missing stations, TOBS, etc.) introduces other problems (e.g. estimating, which correction to use, etc.). You are damned if you do and damned if you don’t. Occam’s Razor is certainly not being followed, as witnessed by Zeke’s comment “I’m actually not a big fan of NCDC’s choice to do infilling, not because it makes a difference in the results, but rather because it confuses things more than it helps.”
Other fields (e.g. medical and flight test) would consider much (not all) of the data compromised and throw it out rather than try to correct it. The FDA (Food and Drug Administration) would be universally excoriated if it approved a drug based on this “corrected” data. While I appreciate Steven’s comment “fields develop best practice over time based on the interaction with customers. different fields, different customers, different practice”, considering that the end result is being used to determine the fate of perhaps hundreds of millions if not billions of lives throughout the world, IMHO this is more important than approval of a drug and the data should be treated accordingly. As presented and advertised (Global, or USA, Temperature Anomaly), this corrected data is NOT “fit for use.” All the academic justification in the world will not change that. Sometimes “best practices” are not good enough.
    Judith, although rarely mentioned in this thread, your “Uncertainty Monster” casts a large cloud (no pun intended) over this whole subject.
And do consider that there are other methodologies available that may be more suited. It seems like a lot of resources are spent defending a flawed approach and not adequate resources pursuing methodologies similar to the NCDC “114 pristine stations called CRN”, which I believe require no or minimal adjustments.
    And finally and most important, although perhaps for different reasons, I do agree with Steven’s comment “Fighting over the historical record is a diversionary tactic.” The real issue is not whether it is getting warmer or colder.
 The real issue is whether CO2 is driving temperature. To that extent the accuracy of the historical temperature record IS an academic exercise. The WRONG ISSUE is being discussed.
    Many thanks to our Hostess and all who contributed.

    • PMhinSC

      I agree with almost everything you write.

The Uncertainty Monsters are large and rampant in virtually every field of climate science.

I disagree about the historical record. Based on the Mannian interpretation of climate (a static climate until modern times), which has been linked to growing CO2, we have the desire you cite for drastic and expensive action.

IF the interpretation was wrong, and despite ‘record’ levels of CO2 the temperature was not ‘unusual’ in the context of the Holocene, would this call for action be heeded?

      In other words the co2 and historic record go hand in hand and if one was separated from the other the politicians and scientists would be forced to review their viewpoint.

      Record levels of co2 producing only ‘ordinary’ temperatures are not a banner that can be used to change the world.
      tonyb

      • tonyb,

        Record levels of co2 producing only ‘ordinary’ temperatures are not a banner that can be used to change the world.

        And that’s the problem, even with all of the good intentions and diligent attempts at science, the whole point was to change the world into what the various Greens fantasize about. The fact they have to break all of the eggs to get there doesn’t seem to have sunk in yet.

      • Mi Cro

        As Winston Churchill might have said about climate science;

‘Never in the field of human endeavour has so much been spent by so many for so little result…’

        tonyb

      • With so much at stake.

    • MiCro
It is not the greens’ eggs they intend to break. That doesn’t bother them if they bring the rest of advanced civilization to a pause. The ultimate objective is to rewild the open areas and corral the rest of the people into urban ghettos, under the thumbs of the elite who travel on jets and maintain massive multiple mansions as they force the serfs into homage to their enlightened ideals.
      Scott

    • @ PMHinSC

      ” As presented and advertised (Global, or USA, Temperature Anomaly), this corrected data is NOT “fit for use.”

A classic example of ‘belaboring the obvious’, but the important thing to remember is: it makes absolutely no difference whether you (and I) are right about the ‘fitness for use’ of the global temperature data base. A cursory examination of the actions of various Western governments over the past couple of decades, and of recent US policies implemented over the past few years, up through the Social Cost of Carbon regulations now coming into effect, will show that we are having a strictly academic discussion.

      In the real world, The Science is Settled! Officially, we are currently being overwhelmed by the negative consequences of ACO2 and governments, the US government in particular, are taking action NOW to ameliorate the existing damage and curb further detrimental changes to the climate.

      It makes NO difference if the Temperature of the Earth has gone up, down, or sideways over the previous 15-20 years. As in NONE. Actions have been taken and will continue to be taken to ‘Fight Climate Change’ and those who object will be excoriated or, if increasing cries of the Experts are heeded, prosecuted. It also makes NO difference whether the policies being implemented will have any measurable effect on the climate, the temperature of the earth, or any other measurable physical environmental parameter. The policies will be declared to have positive but limited success, just as predicted, proving that they MUST be expanded and strengthened to halt the catastrophe, rather than just slow it. And the expanding and strengthening will happen. Again, regardless of the fitness of the data for purpose.

      Furthermore, the actions taken cannot be untaken. Regardless of who controls Congress and/or the Presidency, the ‘sue and settle’ self-licking ice cream cone is firmly in place. If attempts are made to roll back any of the insanity through legislative action, the progressive organizations, most of which are funded in part by the government, will sue to overturn the laws and/or reinstate the regulations and the courts, firmly in control of the progressives, will simply rule that the legislative actions are ‘illegal’ or ‘unconstitutional’ and overturn them. The policies desired by the EPA but which have no chance of being passed by the legislatures will be sued into place using the same ‘sue and settle’ procedure that resulted in CO2 being declared a dangerous pollutant subject to MANDATORY regulation by the EPA, ‘forcing’ the EPA to implement the policy that it wanted but couldn’t get through legislative action.

      So, we can enjoy our mutual hissy fits on this and other ‘Climate Sites’, and all have a wonderful time discussing data collection, data adjustment, and data analysis and no matter what we conclude, it will all have exactly zero impact on either Climate Change or the policies implemented to fight it.

  234. David Springer wrote:

    Of course I can say there’s pause. The satellite data is solid and it hasn’t shown any significant warming in going on 20 years. What are you boys smoking?

From the RSS data, the trend estimate is 0.038+/-0.154 (2 sigma) deg. C/decade for the recent 20-year time period, which is not statistically significant. For the UAH data, the trend estimate is 0.128+/-0.155 (2 sigma) deg. C/decade for the recent 20-year time period, which is statistically significant with more than 90% probability. The latter trend estimate is in between the trend estimates from the surface temperature data sets, which are all statistically significant with more than 95% probability for the recent 20-year time period (the temperature variability is higher in the free troposphere than at the surface; therefore, the statistical significance is lower for the free troposphere data even with the same trend slope as at the surface). The RSS data set is clearly the outlier compared to all the other data sets. And yet, many AGW-“Skeptics” nowadays prefer to rely only on the RSS data set for their claims about the “pause”. One wonders why that is (not really). Anyone cherry picking data?

    So, David Springer, tell me please, which one of the two is the “solid one”, RSS or UAH? Or, if the satellite data are as solid as you claim, why are the trend estimates different? One shows a statistically significant warming (at more than 90% probability) very similar in magnitude to the surface temperature trends, the other one doesn’t. And then all the corrections that had to be done to the supposedly “solid” satellite retrieved data over the years, resulting in a net upward adjustment of the estimated temperature trend. And these weren’t just calibration issues or orbital drift. Anyone remember, when Spencer’s UAH data showed supposedly no warming of the lower and mid troposphere, which was used by AGW-“Skeptics” back then to claim that global warming claims based on the surface temperature data were wrong, but turned out to be actually a problem with Spencer’s own retrieval algorithm (Fu et al., Nature 2004, http://dx.doi.org/10.1038/nature02524)?

    This aside, assuming the absence of statistical significance was true for the recent 20-year time period. You claim that there was a “pause” of global troposphere warming (= no global troposphere warming for the last 20 years) as conclusion from the lack of statistical significance in the satellite data, with the Zero-trend as Null-hypothesis. However, if this was correct reasoning one could claim with equal validity, using the same data set and time period, that there has been global warming over the recent 20-year period, since the trend is also not statistically significantly distinguishable from the trend estimate over the time period since 1979 (“global warming” is the Null-hypothesis in this case), which itself is highly statistically significantly different from a Zero-trend (RSS: 0.124+/-0.067 deg. C/decade, statistical significance greater than 99.9%).

    Obviously, there is a logical contradiction here, if both conclusions, “global warming” and “pause” over the recent 20 years were equally valid, based on the same logic of reasoning and using the same data, only switching the Null-hypothesis with the alternative hypothesis, since the two conclusions are mutually exclusive. A statement and its logical negation can’t be true at the same time. Why is there a logical contradiction? Because your statistical reasoning is fallacious. You wrongly assume that lack of statistical significance was sufficient to falsify the alternative hypothesis. The successful rejection of the Null-hypothesis is a statistical falsification of the Null-hypothesis (e.g., Zero temperature trend) and a confirmation of the alternative hypothesis (e.g., there has been global troposphere warming), but a failure to reject the Null-hypothesis does not statistically falsify the alternative hypothesis. In this case, the correct conclusion from the statistical test would be that the result was inconclusive regarding the two hypotheses. This is, of course, equally valid the other way around with global warming as Null-hypothesis and Zero-trend as alternative hypothesis. A failure to reject the Null-hypothesis “global warming” with a given data set does not falsify the alternative hypothesis “pause”.

    In summary, your argument pointing to the lacking statistical significance of the temperature trend estimate for a time period is not sufficient empirical/statistical evidence or scientific justification for the claim that there was a “pause” of global surface/troposphere warming. Even if the claim wasn’t based on cherry picking the most convenient data set and ignoring all other data sets to satisfy confirmation bias.
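
For readers who want to see what sits behind numbers like “0.038+/-0.154 (2 sigma) deg. C/decade”, here is a minimal sketch in Python of an OLS trend with a simple AR(1)-adjusted uncertainty, run on synthetic monthly anomalies (the published estimates use real data and more careful error models):

```python
# Minimal sketch of a trend +/- 2-sigma calculation like the ones quoted above,
# on synthetic monthly anomalies, with a simple AR(1) correction for autocorrelation.
import numpy as np

rng = np.random.default_rng(4)
n_months = 240                                  # 20 years
t = np.arange(n_months) / 120.0                 # time in decades

# Synthetic anomalies: 0.05 C/decade trend + AR(1) noise.
noise = np.zeros(n_months)
for i in range(1, n_months):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 0.1)
y = 0.05 * t + noise

# OLS trend and naive standard error of the slope.
slope, intercept = np.polyfit(t, y, 1)
resid = y - (slope * t + intercept)
se_naive = np.sqrt(np.sum(resid**2) / (n_months - 2) / np.sum((t - t.mean())**2))

# Effective sample size correction for lag-1 autocorrelation.
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
n_eff = n_months * (1 - r1) / (1 + r1)
se_adj = se_naive * np.sqrt(n_months / n_eff)

print(f"trend {slope:+.3f} +/- {2*se_adj:.3f} (2 sigma) C/decade")
print("statistically significant at ~95%" if abs(slope) > 2*se_adj else "not significant at ~95%")
```

The lag-1 autocorrelation correction matters: monthly anomalies are strongly autocorrelated, so ignoring it makes the confidence interval look much narrower than it really is.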

    • Jan Perlwitz

      ” However, if this was correct reasoning one could claim with equal validity, using the same data set and time period, that there has been global warming over the recent 20-year period, since the trend is also not statistically significantly distinguishable from the trend estimate over the time period since 1979 (“global warming” is the Null-hypothesis in this case), which itself is highly statistically significantly different from a Zero-trend (RSS: 0.124+/-0.067 deg. C/decade, statistical significance greater than 99.9%).

      Obviously, there is a logical contradiction here, if both conclusions, “global warming” and “pause” over the recent 20 years were equally valid, based on the same logic of reasoning…’

      Only it’s not the same. You had to reverse the null.

      • The trend of the daily anomaly for just the actual measured surface station maximum temps from 1940-2012 is -4E-05x – 0.0035

      • ClimateGuy wrote:

        Only it’s not the same. You had to reverse the null.

        The logic of the reasoning is the same with respect to the Null-hypothesis. You must switch the Null-hypothesis, if you want to statistically test the alleged “pause”. You can’t test the “pause” by assuming the “pause” as Null-hypothesis. The result from testing the “pause” is that the “pause” is not statistically significant. If lack of statistical significance of the global warming trend within data from a given time period, with the Zero-trend as Null-hypothesis, were to be sufficient to confirm the “pause” during the time period, then lack of statistical significance of the “pause” for the same time period, with the global warming trend estimate value since 1979 as Null-hypothesis, would equally be sufficient to confirm the global warming trend during the time period, for which the “pause” was supposedly confirmed as well. But a statement and its logical negation can’t be both true at the same time.

        Or just read in some statistical textbook about type I and type II errors when statistical testing is done, before you dispute the statement that a failure of rejecting the Null-hypothesis doesn’t confirm the Null-hypothesis. Lack of statistical significance of the temperature trend is not sufficient as confirmation of the “pause”, based on statistical reasoning.

Jan, “the pause” does not have to be tested. It’s a result of the significance test for warming.

      • ClimateGuy,

Jan, “the pause” does not have to be tested. It’s a result of the significance test for warming.

        No, it is not. The result of the statistical test is that the Null-hypothesis “Zero-trend” can’t be rejected for the chosen time period. There are only two possible outcomes of the statistical test. Outcome 1: Rejection of the Null-hypothesis, here “Zero temperature trend”=”pause”, and confirmation of the alternative hypothesis, here statistically significant global warming (for a chosen error probability), or Outcome 2: Failure of rejecting the Null-hypothesis.
But in statistics, a failure to reject the Null-hypothesis is not a verification of the Null-hypothesis. The “pause” can’t be “proven” by testing the statistical significance of global warming. I have already provided my arguments on all of this before. And all you do is repeat the assertion by David Springer to which I already answered. You have just made the thread recursive. Go back to
        http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-607090

      • Jan wrote wrote

        “No, it is not. The result of the statistical test is that the Null-hypothesis “Zero-trend” can’t be rejected for the chosen time period. There are only two possible outcomes of the statistical test. Outcome 1: Rejection of the Null-hypothesis, here “Zero temperature trend”=”pause”, and confirmation of the alternative hypothesis, here statistically significant global warming (for a chosen error probability), or Outcome 2: Failure of rejecting the Null-hypothesis.”
        But in statistics, a failure of rejecting the Null-hypothesis is not a verification of the Null-hypothesis. The “pause” can’t be “proven” by testing the statistical significance of global warming.”
        It doesn’t have to be proven, Jan. It is the result of finding no statistically significant warming.
        “The pause” is not necessarily defined as zero, either.

      • David Springer

        Oh how cute. Perlwitz is a hiatus denier!

UAH has insignificant warming (a ‘hiatus’) from 1979 to 1997. Detrending by 0.06 C shows a flat trend line for 18 consecutive years, so the decadal trend is way below significant (significant begins at 0.1 C/decade).

        http://woodfortrees.org/plot/uah/from:1979/to:1997/plot/uah/from:1979/to:1997/trend/detrend:0.06

Then a miracle happens. The mother of all El Ninos causes a step change in global average temperature in the year 1998. Thence from 1998 through present (almost 17 years) a warming of 0.08 C, which is again far below the decadal significance level of 0.1 C/decade.

        http://woodfortrees.org/plot/uah/from:1998/to:2015/plot/uah/from:1998/to:2015/trend/detrend:0.08

So right now, according to satellites, so-called global warming happened in one big chunk when the Pacific Warm Pool let go a monumental fart of hot air in 1998.

Warmists are praying for another mother of all El Ninos. Good luck; 1998 looks like a 100-year or longer record setter. Another 5 years or so will put worrisome global warming to bed for good. You should get used to the idea that it might play out that way, Perlwitz.

      • David Springer wrote
        “Oh how cute. Perlwitz is a hiatus denier!”

I wonder how Jan explains all the scientists going to so much trouble trying to account for it?

      • David Springer

RSS has insignificant warming (a ‘hiatus’) from 1979 to 1997 as well. Detrending by 0.13 C shows a flat trend line for 18 consecutive years, so the decadal trend is well below significant (significant begins at 0.1 C/decade).

        http://woodfortrees.org/plot/rss/from:1979/to:1997/plot/rss/from:1979/to:1997/trend/detrend:0.13

Then a miracle happens. The mother of all El Ninos causes a step change in global average temperature in the year 1998. Thence from 1998 through present (almost 17 years) an insignificant cooling of -0.07 C, which is now of the wrong polarity, even farther from significant than UAH over the same period.

        http://woodfortrees.org/plot/rss/from:1998/to:2015/plot/rss/from:1998/to:2015/trend/detrend:-0.07

        So right now, according to both UAH and RSS, so called global warming happened in one big chunk when the Pacific Warm Pool let go a monumental fart of hot air in 1998.

        The differences between RSS and UAH are not significant. Both show a step change in global average temperature following record-setting 1998 El Nino.

Any reasonable, objective person must take pause at this and wonder if global temperature will continue to rise at all or whether La Nina has now got the upper hand. My bet’s on La Nina predominating during the cool side of the Atlantic Multi-Decadal Oscillation (~60 yrs), which just began its 30-year falling side.

      • David Springer wrote
“You should get used to the idea that it might play out that way, Perlwitz.”

        What was the advice given Phil Jones in case people came after them?
        Get the figures on sulphates ready as a backup?

    • Jan, here’s the zero/near zero WFT graphs for all the datasets available there: http://www.woodfortrees.org/plot/hadcrut3gl/from:1997.33/trend/plot/gistemp/from:2001.33/trend/plot/rss/from:1997.0/trend/plot/wti/from:2000.9/trend/plot/hadsst2gl/from:1997.1/trend/plot/hadcrut4gl/from:2000.9/trend/plot/uah/from:2004.75/trend/plot/hadcrut3gl/from:1997.33/plot/gistemp/from:2001.33/plot/rss/from:1997/plot/wti/from:2000.9/plot/hadsst2gl/from:1997.1/plot/hadcrut4gl/from:2000.9/plot/uah/from:2004.75

Be careful with the cherry-picking claims. These graphs show how long the current trend has existed in each data set. That requires starting from the present and working iteratively backwards.
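
A minimal sketch of that backwards search (Python, synthetic monthly anomalies, hypothetical helper name). Note that, as the reply below points out, the search as written checks only the sign of the OLS slope and ignores its uncertainty:

```python
# Sketch of the "start from the present and work backwards" search described above:
# find the longest window ending at the present whose OLS trend is <= 0.
# Synthetic monthly anomalies stand in for an actual data set.
import numpy as np

rng = np.random.default_rng(5)
n = 420                                   # 35 years of monthly data
t = np.arange(n) / 120.0                  # decades
y = 0.15 * t + rng.normal(0, 0.12, n)     # warming plus noise
y[300:] -= 0.15 * (t[300:] - t[300])      # flatten the last 10 years

def longest_flat_window(t, y, min_len=60):
    """Longest window ending at the last point whose OLS trend is <= 0 (no CI check)."""
    n = len(y)
    for start in range(0, n - min_len):    # earliest (longest) window first
        slope = np.polyfit(t[start:], y[start:], 1)[0]
        if slope <= 0:
            return n - start, slope
    return 0, None

months, slope = longest_flat_window(t, y)
print(f"flat/negative trend over the last {months} months ({slope:+.3f} C/decade)"
      if months else "no flat window found")
```

With the confidence interval included, the question turns into the null-hypothesis issue argued back and forth above: a window whose trend is indistinguishable from zero is usually also indistinguishable from the longer-term warming rate.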

      The RSS graph that tells us even more is this one: http://woodfortrees.org/plot/rss/plot/rss/to:1996.75/trend/plot/rss/from:1996.75/trend
For half the period RSS shows warming, and for the other half cooling.

      BTW, Spencer tells us the newer updated version will reduce the divergence with RSS. The actual outlier is GISS, which appears to be diverging from all others.

      • corev wrote:

        Jan, here’s the zero/near zero WFT graphs for all the datasets available there…
        Careful of the cherry picking claims. These graphs show how long the current trend has existed in each data set. That requires starting from the current and working iteratively backwards.

        And how would you know what the “current trend” was, before you search backward? You start with an a priori assumption that the “current trend” must be Zero, and then you search backward until you have found a time period for which the trend estimate was Zero (and not bothering at all about confidence intervals, statistical significance and the things about which I talked in my previous comment to which you reply here). You do this to “prove” that the trend was Zero, i.e. that there was a “pause”. You confirm what you assumed. This isn’t just cherry picking of data, it’s also circular reasoning.

        The RSS graph that tells us even more is this one: http://woodfortrees.org/plot/rss/plot/rss/to:1996.75/trend/plot/rss/from:1996.75/trend
        For 1/2 the period RSS shows warming and the other 1/2 cooling.

        And this again was an exercise in cherry picking.
        http://climateconomysociety.blogspot.com/2013/01/how-to-create-false-global-warming.html

        BTW, Spencer tells us the newer updated version will reduce the divergence with RSS. The actual outlier is GISS, which appears to be diverging from all others.

Berkeley, HadCRUT4 krig v2, HadCRUT4 hybrid v2, and UAH all have generally larger trends than the GISS analysis over the recent 15 to 20 years, although the differences aren’t statistically significant. I hypothesize that it is actually the NOAA and HadCRUT4 data that are the outliers, with some cool bias due to polar amplification of the surface warming in the Arctic and the smaller coverage of the Arctic regions by the latter data sets.

      • @ Jan P. Perlwitz, corev, Mi Cro, ClimateGuy, et al

        Given: a. The satellite temperature data since 1979.
        b. The monotonic increase of ACO2 from around 330 ppm to 400 ppm since 1979.

        What I have observed: a veritable herd of highly qualified, intelligent people whose curricula vitae range from interested amateurs through world renowned PhD’s specializing in Climate Science who cannot take the observed satellite data, convert it to temperature, and agree on the SIGN of the slope of the satellite temperature trend line.

        What I conclude: Being barbecued, drowned, or discomfited in any way as a side effect of burning fossil fuels and injecting CO2 into the atmosphere does not appear anywhere on the list of things that are likely to threaten my health, happiness, and all around well being. Having the government SAVE me from the side effects of burning fossil fuels and injecting CO2 into the atmosphere is near the top, however.

Jan, nice dodge, but in the end just a dodge. How would you know what the trend was starting at either end? It’s just amazing seeing comments that deny the data. Moreover, what part of vision and mental faculties do you ignore when comparing them against the data they observed? That was just desperation.

I noticed in your reference that the cherry-picked graph was broken into three sections instead of the more logical two. That too was just desperation.

Your final comparison was another example of desperately trying to obscure the obvious by cherry picking 20 years.

      • Jan wrote

        “And how would you know what the “current trend” was, before you search backward? ”

        Jan, if the question is “How long has there been no statistically significant warming for?”, then searching back is exactly what you do to find out.

        Same as if someone asked you “How long have you not eaten for?”

        What don’t you understand about that?

      • Jan Perlwitz wrote

        “You do this to “prove” that the trend was Zero, i.e. that there was a “pause”. You confirm what you assumed. This isn’t just cherry picking of data, it’s also circular reasoning.”

        Jan, you prove your critical thinking is flawed. Circular reasoning is such:
        “”A is true because B is true; B is true because A is true.”

You prove to yourself that no circular reasoning was used when you say that corev searched back and found “a time period for which the trend estimate was Zero”.
The trend of zero is the evidence used to say A is true. Therefore it is not circular reasoning, since he found evidence to say that A is true. If he could not find that evidence, then proposition A would be false.

        Circular reasoning would be this:
        “That there is a pause is true, because the trend is flat is true.
        The trend is flat is true because that there is a pause is true.”.

  235. Don Monfort

    All pause deniers are null and void. The pause is killing the cause. Period.

  236. michael hart

Apropos of whatever, during my lifetime I have seen liquid-filled thermometers in the UK make the switch from graduation in Fahrenheit to Celsius.

    It seems reasonable to think this, and other changes, might affect those observations which are contingent upon a human making a “crisp” decision. If it were measurable, how might the signature be detected in the historical record?

    I would start by looking at observations focused around the melting/freezing point of water.

    • michael hart

      Yes, I know that may not play too well with some who are actively invested in temperature anomalies.

  237. Judith,
Jan has pointed out above that we have confirmation of the USHCN through other, non-temperature means, so why are we all arguing when Jan has proof that it is correct and also that there is no pause?
    It’s Bristlecones all the way down.
    Now we can all go home. Thanks, Jan.

    • angech wrote:

      “Jan has proof … that there is no pause.”

      This is not what I said. I didn’t say there was proof that there is no pause. Instead, I am saying that there isn’t sufficient empirical/statistical evidence that there was a pause. These are two different statements.

The so called “pause” is not statistically significant. The data and the statistical analysis do not provide the evidence that the so called “pause”, a time period with a lower trend estimate than the longer-term trend estimate, was more than just a short-term fluctuation around the median warming trend, mostly due to short-term unforced internal variability in the Earth system (and some contribution from decreasing solar activity and increased reflecting aerosols in the atmosphere, counteracting the increased greenhouse gas forcing to some degree), just as the “acceleration” over the 16-year period from 1992 to 2007 (e.g., UAH trend: 0.296 +/- 0.213 (2 sigma) deg. C/decade) was also just such a fluctuation, one to the warm side in that case.

      • Jan

        As you know the Met Office calls it a ‘pause’ and had a meeting last year of many of the great and the good to discuss it.

        When I was at the Met office at the end of last year (no I wasn’t one of the great and the good) I was speaking to a couple of their senior scientists who also spoke of the ‘pause.’

        tonyb

      • tonyb,

but when the overwhelming majority of climate scientists agrees that anthropogenic global warming is real (“the consensus”), then this doesn’t prove anything about the reality of AGW. Right?

      • David Springer

        “I am saying that there isn’t sufficient empirical/statistical evidence that there was a pause”

        The real deniers reveal themselves.

        You go Jan!

      • Jan

        You said;

‘but when the overwhelming majority of climate scientists agrees that anthropogenic global warming is real (“the consensus”), then this doesn’t prove anything about the reality of AGW. Right?’

        The difference surely is that you believe the scientists so if they believe in AGW and the ‘pause’ then so must you?

        tonyb

      • Jan Perlwitz wrote

        “The so called “pause” is not statistically significant. The data and the statistical analysis does not provide the evidence that the so called “pause”, a time period with a lower trend estimate than the longer-term trend estimate, was more than just a short-term fluctuation around the median warming trend, mostly due to short-term unforced internal variability in the Earth system (and some contribution from decreasing solar activity and increased reflecting aerosols in the atmosphere, counteracting the increased greenhose gas forcing to some degree), like the “acceleration” over the 16-year period from 1992 to 2007 (e.g., UAH trend: 0.296 +/- 0.213(2 sigma) deg. C/decade) was also just such a fluctuation, one to the warm side in that case.”

        Jan,
        Proposed explanation of cause does not impact significance.
        For significance, if you deny the short term legitimacy, then you should write to Phil Jones, who, in The Mail, stated lack of statistically significant warming (just barely) one year and the next declared it significant warming.

        You two should sort this out between yourselves.

      • climatereason wrote:

        “The difference surely is that you believe the scientists so if they believe in AGW and the ‘pause’ then so must you?”

        Wrong. I do not believe something depending on who else believes it or who said it. My views on something depend on the scientific evidence presented for it: has evidence been presented, and if it has, is it convincing or not?

      • ClimateGuy wrote:

        Jan,
        Proposed explanation of cause does not impact significance.

        I do not have any problems at all with efforts to explain short-term variability, even if the deviations from the median warming trend are not statistically significant. On the contrary. I am endorsing such efforts. We also want to understand how the energy flux through the system changes on short time scales, what the causes are for it, and how it influences the variability in the observed variables.

        For significance, if you deny the short term legitimacy, then you should write to Phil Jones, who, in The Mail, stated lack of statistically significant warming (just barely) one year and the next declared it significant warming.

        I do not have any knowledge of an interview of Phil Jones in The Mail. I am not going to comment on any assertion by you about what Phil Jones allegedly said there without being provided a quote and proof of source, or a link to the interview, so I can examine it myself. And from what you claim here about what Phil Jones allegedly said, I do not see any contradiction to what I said. Thus, I don’t understand why you think he and I would have to sort something out, and what exactly.

      • Jan Perlwitz wrote

        ” ‘ClimateGuy wrote
        Jan,
        Proposed explanation of cause does not impact significance.’
        Jan Perliwtz wrote
        I do not have any problems at all with efforts to explain short-term variability, even if the deviations from the median warming trend are not statistically significant. On the contrary. I am endorsing such efforts. We also want to understand how the energy flux through the system changes on short time scales, what the causes are for it, and how it influences the variability in the observed variables.”

        Jan, the point is that the warming trend is either statistically significant or it is not, and remains so with or without attempted explanations.

        Jan Perlwitz wrote
        ” ‘ClimateGuy wrote
        For significance, if you deny the short term legitimacy, then you should write to Phil Jones, who, in The Mail, stated lack of statistically significant warming (just barely) one year and the next declared it significant warming.’
        Jan Perlwitz wrote
        I do not have any knowledge of an interview of Phil Jones in The Mail. I am not going to comment on any assertion by you about what Phil Jones allegedly said there without being provided a quote and proof of source, or a link to the interview, so I can examine it myself.”

        Jan,
        The original 2010 interview by the BBC
        http://news.bbc.co.uk/2/hi/8511670.stm
        “BBC: Do you agree that from 1995 to the present there has been no statistically-significant global warming

        Phil Jones: Yes, but only just. I also calculated the trend for the period 1995 to 2009. This trend (0.12C per decade) is positive, but not significant at the 95% significance level. The positive trend is quite close to the significance level. Achieving statistical significance in scientific terms is much more likely for longer periods, and much less likely for shorter periods.”

        Followed one year later by
        http://www.bbc.co.uk/news/science-environment-13719510
        “Phil Jones
        ‘Basically what’s changed is one more year [of data]. That period 1995-2009 was just 15 years – and because of the uncertainty in estimating trends over short periods, an extra year has made that trend significant at the 95% level which is the traditional threshold that statisticians have used for many years.'”

      • Jan, what you did seems inappropriate.
        By reversing the null, you split the difference. If you repeated that procedure enough times you could make zero trend not statistically different from a steep trend.

        The pause is the time period over which there is no statistically significant warming.

      • Don Monfort

        It’s OK to keep denying the pause, perlie. We understand your predicament. (We are also amused.)

      • ClimateGuy wrote on July 13, 2014 at 2:56 pm:

        “Jan, the point is that the warming trend is either statistically significant or it is not, and remains so with or without attempted explanations.”

        OK. It seems I misunderstood what you wanted to say at this point.

        There is no strict either … or. It is possible that a trend is statistically significant, for instance, at the 90, 91, 92, 93.5, 94, or 94.9% probability level, but not at the 95% probability level.

        But nothing of this has really any direct connection to what I said about the lack of statistical significance of the alleged “pause” in my reply to the comment by angech.

        “The original 2010 interview by the BBC
        http://news.bbc.co.uk/2/hi/8511670.stm
        “BBC: Do you agree that from 1995 to the present there has been no statistically-significant global warming”

        I knew that one. I just still don’t understand what the contradiction is supposed to be between what Phil Jones said in this interview and what I said about the lack of statistical significance of the alleged “pause”.

        ClimateGuy wrote on July 13, 2014 at 3:41 pm:

        “Jan, what you did seems inappropriate.
        By reversing the null, you split the difference. If you repeated that procedure enough times you could make zero trend not statistically different from a steep trend.”

        What are you talking about? It seems you haven’t really understood what is being done. The global warming trend is always tested by analyzing whether this trend deviates non-randomly from a Zero value for a given background variability. This isn’t influenced by my testing of the “pause” at all. The Zero trend is the Zero trend. On the other hand, the “pause” is being tested by analyzing whether the shorter-term temperature trend is a non-random deviation from the longer-term global warming trend, which itself, as we know, is statistically significant. In the latter case, the trend estimate for the longer-term warming trend is the Null-hypothesis. In statistics, the Null-hypothesis for a statistical test is chosen according to the question that is supposed to be answered. We want to find out whether value B deviates from value A non-randomly. In one case A is the Zero trend, in the other case A is the longer-term, statistically significant warming trend.

        “The pause is the time period over which there is no statistically significant warming.”

        If you define the “pause” like this, then there is always a “pause” for any point in time of the data series, because one always finds a time period up to that point in time, over which the trend is not statistically significant, only by choosing a short enough time period when doing the statistical significance test, i.e., by making the data sample so small that any statistical significance vanishes. Then the term “pause” becomes totally meaningless. A pause of what then?
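        A minimal sketch of the two tests being contrasted here, with hypothetical numbers standing in for real trend estimates: the same short-period trend can be tested against a zero-trend null, or against the longer-term trend as the null. The values below are placeholders, not results from any data set.

```python
# Hypothetical trend numbers (deg C/decade), standing in for real estimates.
short_trend, short_2sigma = -0.04, 0.19   # e.g. a ~17-year trend and its 2-sigma
long_trend,  long_2sigma  =  0.12, 0.07   # e.g. the full-record trend and its 2-sigma

def outside(value, center, half_width):
    """True if `value` falls outside the 2-sigma bracket around `center`."""
    return abs(value - center) > half_width

# Null hypothesis 1: the short-period trend is zero.
print("short trend distinguishable from zero:",
      outside(0.0, short_trend, short_2sigma))

# Null hypothesis 2: the short-period trend equals the longer-term trend.
print("short trend distinguishable from long-term trend:",
      outside(long_trend, short_trend, short_2sigma))
```

        With placeholder numbers like these, the short period fails both tests, which is the point: a short record often cannot distinguish a “pause” from a continuation of the longer-term trend.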

      • Jan Perlwitz wrote

        “ClimateGuy wrote on July 13, 2014 at 2:56 pm:

        “Jan, the point is that the warming trend is either statistically significant or it is not, and remains so with or without attempted explanations.”

        OK. It seems I misunderstood what you wanted to say at this point.

        There is no strict either … or. It is possible that a trend is statistically significant, for instance, at the 90, 91, 92, 93.5, 94, or 94.9% probability level, but not at the 95% probability level.”

        Jan, we talk about the 95% level, but even if talking about other levels, either it is or it is not, according to that level.

        Jan Perlwitz wrote
        “I knew that one. I just still don’t understand what the contradiction is supposed to be between what Phil Jones said in this interview and what I said about the lack of statistical significance of the alleged “pause”.”

        Jan, lack of statistically significant warming IS a pause in warming.

        Jan Perlwitz wrote

        ” ‘ClimateGuy wrote on July 13, 2014 at 3:41 pm:
        Jan, what you did seems inappropriate.
        By reversing the null, you split the difference. If you repeated that procedure enough times you could make zero trend not statistically different from a steep trend.’

        What are you talking about? It seems you haven’t really understood what is being done. The global warming trend is always tested by analyzing whether this trend deviates non-randomly from a Zero value for a given background variability. This isn’t influenced by my testing of the “pause” at all.”

        Jan, of course the warming trend isn’t INFLUENCED by your testing. What happens when you REVERSE THE NULL is that the error bars are used to say that the warming could be more OR LESS.

        For LESS then you claim the pause trend is not statistically distinguishable from warming.

        Test the pause trend against zero trend and you find it’s indistinguishable. Test the warming and you find it distinguishable.

        Jan Perlwitz wrote
        “ ‘ClimateGuy wrote The pause is the time period over which there is no statistically significant warming.’

        If you define the “pause” like this, then there is always a “pause” for any point in time of the data series, because one always finds a time period up to that point in time, over which the trend is not statistically significant, only by choosing a short enough time period when doing the statistical significance test, i.e., by making the data sample so small that any statistical significance vanishes. Then the term “pause” becomes totally meaningless. A pause of what then?”

        Jan, the same goes for warming or cooling. But you are correct of course that a shrinking time frame lends less and less meaningfulness for warming, cooling, or “pausing”.
        For a pause, a cooling can extend the start date backward, too.
        Sauce for the goose and all that, Jan!
        If that’s meaningless so is all the hubbub from alarmist climate scientists.

  238. Jan, stop digging. The hole is already deep enough. How does one answer these questions? “What is today’s trend? How long is it? What is its sign?

    Today’s trend in RSS is negative. It makes up half, ~17.5 years, of the total record in RSS. It therefore is negative. That is as valid an approach as starting at the opposite end, except that when starting from a cool point the overall trend will remain positive until the trend falls below the starting point for a significant period. But then that’s not climate; it’s an artifact of the math.

    But, you probably knew that!

    • CoRev wrote:

      Jan, stop digging. The hole is already deep enough.

      Oh, I am not the one here who is the clueless one.

      How does one answer these questions? “What is today’s trend? How long is it? What is its sign?

      These questions can’t be answered. These questions are simply meaningless, since there is no such thing as “today’s trend”. Trends are estimated over time periods, and depending on what the chosen length of the time period is, the trend estimates for the surface/troposphere temperature and their statistical significance will vary. Why will the answer vary? Because unforced short-term variability in the Earth system causes the trends to have a random distribution around a median trend for a given set of climate forcings. The shorter the time period (equal to smaller data sample), the wider the distribution. And for every point in time, one always can find a time period where the trend isn’t statistically significant anymore, if one chooses the time period up to the point in time only short enough. Also, the climate forcings aren’t staying exactly the same all the time either.

      Today’s trend in RSS is negative. It makes up half, ~17.5 years, of the total records in RSS. It therefore is negative.

      The trend in the RSS data set from 1997.5 to today is -0.038+/-0.194 (2 sigma) deg. C/decade.
      (http://www.ysbl.york.ac.uk/~cowtan/applets/trend/trend.html)

      The trend estimate value for this time period is negative, but the 2-sigma interval trend bracket is [-0.232,0.156]. The trend estimate value from 1979 to today, which itself is statistically significant, amounts to 0.124+/-0.067 deg. C/decade. It lies within the 2-sigma bracket of the shorter time period. The claim that the deviation of the shorter period temperature trend from the longer one was not just a random fluctuation has a very weak basis.

      And why do you believe the RSS data chosen by you represent the best approximation of the true trend? Why none of the other temperature data sets, the ones ignored by you? They give somewhat different answers. It very much looks like you have chosen the one data set, which appears to give the answer that is most convenient for you, based on your preconceived opinion. Cherry picking.
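      For anyone who wants to reproduce numbers of this general kind, here is a hedged sketch of an ordinary least-squares trend with an AR(1) inflation of the uncertainty, similar in spirit to (though not identical to) what the linked trend calculator does. The series below is synthetic noise used only as a placeholder for real monthly anomalies.

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(17 * 12)                      # ~17 years of monthly data
t = months / 120.0                               # time in decades
temps = rng.normal(0, 0.15, t.size)              # placeholder anomalies (deg C)

# OLS slope and naive standard error
X = np.column_stack([np.ones_like(t), t])
beta, *_ = np.linalg.lstsq(X, temps, rcond=None)
resid = temps - X @ beta
dof = t.size - 2
s2 = resid @ resid / dof
cov = s2 * np.linalg.inv(X.T @ X)
se_naive = np.sqrt(cov[1, 1])

# AR(1) correction: inflate the standard error using the lag-1 autocorrelation
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
se_ar1 = se_naive * np.sqrt((1 + r1) / (1 - r1))

print(f"trend = {beta[1]:+.3f} +/- {2 * se_ar1:.3f} (2 sigma) deg C/decade")
```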

      • Jan, I am only going to respond to the finality of your response: “It very much looks like you have chosen the one data set, which appears to give the answer that is most convenient for you, based on your preconceived opinion. Cherry picking.” Conveniently, you ignored my earlier WFT graphs where all the available datasets were displayed. Inconveniently for you, you keep claiming cherry picking while insisting that “… there is no such thing as ‘today’s trend’.” Really? Then what was it that was shown?

        Jan, your desperation is obvious in a thread discussing the problems with temp data calculations.

        To prove yourself, why not calculate how long it has been cooling using the RSS dataset.

      • CoRev wrote:

        “Conveniently, you ignored my earlier WFT graphs where all the available dataset were displayed. Inconveniently for you continuing to claim cherry picking, and insisting that”… since there is no such thing as “today’s trend”.” Really? Then what was it shown?”

        What was shown? Trends over cherry-picked time periods to “prove” your a priori assumption that the “current trend” was Zero. Even though now you claim the current trend was “cooling”, using the RSS data set. But I don’t really expect logical consistency from AGW-“skeptics”.

        “Jan, your desperation is obvious in a thread discussing the problems with temp data calculations.”

        You are projecting.

        To prove yourself, why not calculate how long it has been cooling using the RSS dataset.

        There is no scientific basis for the assertion that it has been globally cooling, since the empirical/statistical evidence for such an assertion is lacking. You really don’t understand the concept of statistical significance and why scientists apply statistics when they analyze trends in time series of data, do you?

      • Jan, oh, the desperation, it burns. Why RSS? Because it has been the most consistent in process of all the datasets. For UAH, Dr. Spencer has already announced that UAH will be less divergent from RSS in the next VERSION. More importantly, both satellite datasets are more expansive, with far, far better coverage than any of the others. That does greatly matter.

        This whole posting has been about the adjustments used in the land-based datasets. Why use them without showing the need and VALUE of those adjustments? That also does greatly matter.

        You claim some superior knowledge of stats, but when you say: “The claim that the deviation of the shorter period temperature trend from the longer one was not just a random fluctuation has a very weak basis.” WEAK???? When half the data shows an opposite trend, that is not WEAK.

        Stop digging

        BTW, thanks for the reference to the York tool. That appears to be superior.

  239. Zeke might want to comment on this
    https://stevengoddard.wordpress.com/2014/07/12/epa-document-exposes-extreme-climate-fraud-at-noaa/
    seems pretty clear cut criminal activity

  240. We have seen in this thread discussion about the temperatures of Portland and Springfield. The large data sets seem to contain very many questionable details, but I don’t think that they distort the overall picture much. Anyway, I add one more example (the error is not due to BEST, but to the older international data sources). That’s the longest time series from Finland.

    Systematic recording of temperatures in Finland started in 1829 in Helsinki. The weather station has moved a couple of times since the beginning, but is still close to the original location. That time series is Helsinki/Kaisaniemi (http://berkeleyearth.lbl.gov/stations/13544), continued from 2001 with the automatic weather station at the same location (http://berkeleyearth.lbl.gov/stations/13541). In this graph we see both the uncorrected temperatures as an 11-y moving average (solid line) for Helsinki and the values with UHI and other adjustments (dotted line).

    A new airport was built for Helsinki in 1952. A new weather station was taken into use soon thereafter; no weather station had been located there before. Now we find, however, a time series from 1829 for Helsinki-Vantaa airport. The raw time series is a very misleading combination of adjusted data from Kaisaniemi until the end of 1958 with data from the airport from the beginning of 1959. The airport is almost 20 km further from the sea, and therefore has a significantly different climate (the annual average at the airport is about 0.8C lower). What’s worse is that the adjustments made to the Kaisaniemi data are absurd. They make the time series at the airport warmer than Kaisaniemi over the years 1941-58 (about 1.0C too warm), and too warm also before 1941.

    Using the wrong data from the airport makes the warming much weaker than the real data tells – even after the UHI adjustments are made. This is an example where an obvious error works in this direction.

    • I add to the above that the BEST automatic adjustments succeed reasonably well in correcting the errors introduced by the false handling of the original data.

  241. Pekka, the fact that the adjustments for an individual station can have either sign is why I like using gridded averages better.

    Even better would be comparing the distributions of the adjustments relative to their gridded mean, but comparing gridded means between different temperature reconstruction products is a good start I think (and a lot easier to do).

    • Carrick,

      Comparison of adjustments tells us only whether they vary, not which one is correct, or whether the adjustments introduce a bias rather than remove one.

      Both Brandon and I had a look at our local data. There are also other indications that people have quite often found errors when they check just the local data. That’s not a proof, but a clear indicator that all kinds of errors are common. That’s not really surprising when we remember that the data has not been collected with climate-relevant long-term analyses in mind.

      What we would expect is a mixture of essentially random errors that are equally likely in both directions and adjustments of systematic errors, which are typically in the same direction for large subsets of data. Overall statistical indicators cannot tell about the correctness of the adjustments. Randomized sampling and careful systematic analysis of the adjustment history might give better results, but is a sizable effort.

      The number of at least partially independent analyses and the multiple tests that have been done in search of systematic errors is, however, enough for me to tell that further analysis is very unlikely to change the global averages significantly. The determination of ocean surface temperatures is less mature. Thus results on that may change more.

      • Pekka, I agree you can’t tell just by looking at difference in adjustments which is closer to truth. And while my expectation is like yours “that further analysis is very unlikely to change the global averages significantly”, the purpose of these sorts of studies is to nail down the right answer and remove doubt. Furthermore, there is increased interest in regional scale climate studies.

        It’s not clear how good any of these reconstructions are doing without spending some time sorting through them.

        I mentioned upstream that my interest is in the spatial resolution of the different products. It is true that CRUTEM, NCDC and GISTEMP all show a cooling SE US, whereas BEST shows a warming. It is also interesting, and I think informative, that GISTEMP 1200 km shows a picture closer to (but not as apparently smeared out as) BEST, and GISTEMP 250 km shows a picture closer to NCDC and CRUTEM.

        If I wanted to sort this out, I’d create Monte Carlo runs that include realistic geographic variability in trend, and I’d feed this synthetic data through each of the software packages. What I would be interested in characterizing with that is bias in trend and resolution with the different methods

        Since no set of algorithms is perfect, I’d expect to see differences. Because there are now many different reconstructions, I think it is a worthwhile endeavor to determine how the tradeoffs work.
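        A toy version of the kind of Monte Carlo test described above, under assumptions chosen purely for illustration: stations scattered on a plane, a true trend that varies smoothly from cooling in the west to warming in the east, and a crude Gaussian distance-weighted interpolation standing in for a real reconstruction method. Widening the correlation length (a stand-in for coarser effective resolution) smears warming from the east into the cooling region.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic station network: 200 stations on a 0-40 degree square
lon = rng.uniform(0, 40, 200)
lat = rng.uniform(0, 40, 200)

# True trend field (deg C/decade): cooling in the west, warming in the east
true_trend = -0.1 + 0.01 * lon

# "Observed" station trends = true trend + noise from short, imperfect records
obs_trend = true_trend + rng.normal(0, 0.05, lon.size)

def interp_trend(x, y, length_scale):
    """Gaussian distance-weighted average of station trends at point (x, y)."""
    d2 = (lon - x) ** 2 + (lat - y) ** 2
    w = np.exp(-d2 / (2 * length_scale ** 2))
    return np.sum(w * obs_trend) / np.sum(w)

# Evaluate the reconstructed trend in the cooling region (lon 0-10)
target = [(x, y) for x in np.linspace(1, 9, 5) for y in np.linspace(5, 35, 5)]
for L in (3.0, 15.0):   # short vs long correlation length
    recon = np.mean([interp_trend(x, y, L) for x, y in target])
    truth = np.mean([-0.1 + 0.01 * x for x, _ in target])
    print(f"length scale {L:>4}: recon {recon:+.3f}, truth {truth:+.3f} deg C/decade")
```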

      • Steven Mosher

        it is interesting how many skeptics think they prove something by comparing two methods operating on different data.

        maybe carrick will correct them on this mistake.

        Synthetic tests. yes. people forget those tests.

        First level test is how well the interpolation approach works.

        http://static.berkeleyearth.org/memos/robert-rohde-memo.pdf

        Now the topic will change to.. “I would test this, I would test that”

        None of the GISS fans will address these results directly.

      • Mosher mentions synthetic tests. Shouldn’t even bring this up because it will make their brains explode. The role of synthetic tests is critical in algorithm development. One can generate just about any kind of stochastic or deterministic data set desired and by hiding some part of the synthetic data set, one can see how well the algorithm works in predicting the hidden parts. The simplest of synthetic tests serve as sanity checks.

        That is why many of these algorithms are fully debugged before they even see the real data. And that’s the way those of us who devise scientific algorithms do the development. BEST is probably no exception to the approach. BTW, the synthetic tests stay around for regression testing should the algorithms gain further capabilities.

      • ” BEST is probably no exception to the approach. ”

        So you prattle on but really have no idea, one way or the other?

      • Mosher acknowledged synthetic tests so I have to assume that BEST includes those.

      • Stephen Mosher:

        it is interesting how many skeptics think they prove something by comparing two methods operating on different data.

        If you assume the differences in the data sets are perturbative (they almost have to be if you get nearly the same answer for the global means), then subtracting the adjusted versus raw data in a self-consistent manner should yield information about how well the adjustment process is working.

        In any case, this is why I like to focus on gridded data as an estimate of the temperature field.

        You’ve probably noticed upstream that there is a discussion of the spatial resolution of the various methods. Spatial resolution refers to how finely you can resolve a feature, not to e.g. the grid spacing (1° versus 5° for example).

        It’s not clear to me that BEST is “getting this better” in terms of actual spatial resolution.

        As I mentioned, most series estimate that the US SE has been cooling over the last century. BEST shows it warming.

        This suggests to me that you’re getting spatial smearing from other regions with larger warming trends. As I’ve pointed out in various places if we look at the gridded product for this region:

        You can get gridded averages from the Climate Explorer:

        http://climexp.knmi.nl/selectfield_obs2.cgi?id=someone@somewhere

        Here are the trends for 1900-2010, for the region 82.5-100W, 30-35N:

        berkeley 0.045
        giss (1200km) 0.004
        giss (250km) -0.013
        hadcrut4 -0.016
        ncdc -0.007

        Berkeley looks to be a real outlier. Note that giss (1200km) is closer to berkeley and giss (250km) is closer to hadcrut4.

        That suggests this is a spatial resolution effect.
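        For anyone wanting to reproduce regional numbers like these from a gridded product, the usual recipe is a cosine-latitude weighted average over the box followed by a trend fit. A minimal sketch on a placeholder gridded field (random numbers, not any of the products above), for an approximate 82.5-100W, 30-35N box:

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1900, 2011)
lats = np.arange(30.5, 35.5, 1.0)          # 30-35N box, 1-degree grid
lons = np.arange(-99.5, -82.5, 1.0)        # approx. 100W-82.5W box

# Placeholder anomaly field with shape (year, lat, lon)
field = rng.normal(0, 0.3, (years.size, lats.size, lons.size))

# Cosine-latitude weights, broadcast across longitude
w = np.cos(np.deg2rad(lats))[:, None] * np.ones(lons.size)
regional = (field * w).sum(axis=(1, 2)) / w.sum()

slope = np.polyfit(years, regional, 1)[0]
print(f"regional trend: {slope * 10:+.3f} deg C/decade")
```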

  242. @Matthew R Marler July 13 at 2:47 pm |

    ….it isn’t just that the majority of stations show warming and that some select groups show no warming; the problem that undermines confidence in the “adjustments”(and where the little understood Bayesian Hierarchical Modeling plays a direct role, as well as the switchover to newer instruments) is the existence of records where the raw data show cooling, no or little warming, but where the adjusted data (Bayesian estimates) show less cooling, warming, or more warming.

    I have supported the role of the Bayesian methodology in providing estimates that have smaller mean square error than the original raw data; this has been proved mathematically to happen in some circumstances, and it has been demonstrated in some real live cases. But a large number of people remain skeptical, in part because the result (like all [??] results in conditional probability) is counter-intuitive.

    I’m skeptical, but I’m willing to learn. What papers, if any, describe how the “little understood” Bayesian Hierarchical Modeling was employed within the BEST process? It is not in Rohde (2013), at least not by that name.

    My hunch is that the Bayesian analysis is not reducing uncertainty, but misplacing it. I view uncertainty like entropy. Everything you do to a system to extract work must increase entropy. Yes, you can freeze water and lower entropy in a freezer compartment, but the entropy is increased more outside the freezer compartment.

    The breakpoints take longer segments and make them shorter. You lose low-frequency information content. You might reduce the mean square error of a segment, but by shortening the segment you increase the uncertainty in the slope of the segment, the key metric that BEST seeks. I think the uncertainty is getting misplaced and lost, along with the low-frequency content, in the creation of empirical breakpoints.
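    The segment-length point can be illustrated directly: for white noise of fixed amplitude, the standard error of an OLS slope grows quickly as a segment is cut shorter. This is a purely illustrative sketch of that relationship, not the BEST algorithm.

```python
import numpy as np

sigma = 0.5          # assumed monthly noise level, deg C
for n_years in (60, 30, 15, 7):
    n = n_years * 12
    t = np.arange(n) / 120.0                     # time in decades
    # Analytic OLS slope standard error for white noise:
    # SE(slope) = sigma / sqrt(sum((t - mean(t))^2))
    se_slope = sigma / np.sqrt(np.sum((t - t.mean()) ** 2))
    print(f"{n_years:>2}-year segment: slope 1-sigma ~ {se_slope:.3f} deg C/decade")
```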

    • Matthew R Marler

      Stephen Rasey: I’m skeptical, but I’m willing to learn. What papers, if any, describe how the “little understood” Bayesian Hierarchical Modeling was employed within the BEST process? It is not in Rohde (2013), at least not by that name.

      My hunch is that the Bayesian analysis is not reducing uncertainty, but misplacing it. I view uncertainty like entropy. Everything you do to a system to extract work must increase entropy. Yes, you can freeze water and lower entropy in a freezer compartment, but the entropy is increased more outside the freezer compartment.

      Where do you want to start? An introduction to Bayesian methods, including the proof about optimality, and some simulations, is in F. J. Samaniego, An introduction to Frequentist and Bayesian methods of estimation; it also has caveats. From there, Bayesian approaches to “small area estimation”; detection of small concentrations; Bayesian time series and hierarchical models, and so on. It is a vast literature, with mathematics, simulations, and applications in diverse fields.

      The methods used by the BEST team were tersely described in technical terms.

      Your comment about entropy is something I agree with. For a data set, there is a degree of imprecision beyond which no improvement can be shown to be achievable. That is why I have written that the Bayesian methods achieve “the smallest achievable mean square error of estimation”, and such.

      • @Matthew R Marler at 8:07 pm
        I thought I was pretty clear where I wanted to start.

        What papers, if any, describe how the “little understood” Bayesian Hierarchical Modeling was employed within the BEST process?

        I am not in the mood to start from basic theory and guess how BEST used the concepts with their data. I think my objection to this approach is reasonable. I would much rather confront how BEST applied Bayesian Hierarchical Modeling on their data and work backward toward foundational theory where necessary.

        The methods used by the BEST team were tersely described in technical terms.

        This does not increase MY confidence that uncertainty was properly accounted for in the BEST process. Terse or not, there is no place but here to start.

      • Matthew R Marler

        Stephen Rasey: I would much rather confront how BEST applied Bayesian Hierarchical Modeling on their data and work backward toward foundational theory where necessary.

        Go ahead.

        I think it will all make more sense to you if you work in the opposite order.

        I thought I was pretty clear where I wanted to start.

        I thought that you wanted to start with what you could master first, and I was unsure what you could master.

      • Kenneth Fritsch

        Matthew, I commented below, but I would like you to please give some detail on how BEST used Bayesian inference in their methods. I assume again that you are referring to the use of hierarchical priors. I believe I have sufficient familiarity with Bayesian data analysis to understand a reply without you going into the basics of Bayesian methods and concepts. I have a current interest in finding instances where Bayesian methods are better applied than frequentist ones.

      • Kenneth Fritsch

        If one has lots of data, the differences in results between Bayesian and frequentist methods are going to be minimal and little affected by the choice of a prior. On the other hand, if the data are sparse the Bayesian result may well depend strongly on the prior(s) used. If there is some theoretical foundation for a prior that even a skeptical crowd might accept, then an informed prior based on that foundation can provide a better result with sparse data than a frequentist method. That may be applicable to temperature station data that is sparse, and is the reason I would like to see examples.

        If BEST applied Bayesian methods, Bayes rule would have to be applied somewhere in the process. Making assumptions about distributions without applying Bayes rule is not part of a Bayesian approach.

        The uncertainty in method bias for any of these adjustment algorithms has to be estimated differently; that is possible, I think, with proper benchmark testing, as I noted previously, where at least we can determine the limitations of these approaches.
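        A small sketch of the sparse-data point using a conjugate normal model with a known observation variance (all numbers hypothetical): with a handful of observations the posterior mean is pulled noticeably toward the prior, while with many observations the prior barely matters.

```python
import numpy as np

rng = np.random.default_rng(2)
true_mean, obs_sigma = 1.0, 1.0
prior_mean, prior_sigma = 0.0, 0.5           # an informative (possibly wrong) prior

for n in (3, 30, 300):
    x = rng.normal(true_mean, obs_sigma, n)
    # Conjugate normal-normal posterior mean (known observation variance):
    # precision-weighted average of the prior mean and the sample mean
    w_prior = 1 / prior_sigma ** 2
    w_data = n / obs_sigma ** 2
    post_mean = (w_prior * prior_mean + w_data * x.mean()) / (w_prior + w_data)
    print(f"n={n:>3}: sample mean {x.mean():+.3f}, posterior mean {post_mean:+.3f}")
```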

      • @Kenneth Fritsch at 3:09 pm
        I agree, Kenneth.
        A Bayesian approach could be superior to frequentist. Could be. Details matter. Prior distributions matter. There has been much mischief with priors, particularly with uniform priors.

        Uniform priors might be noninformative, but it does not follow that the choice of uniform, and particularly the choice of the a,b range of that uniform distribution, is free of bias. If it is not free of bias, it is not necessarily objective. There seems to be an implied claim in many Bayesian papers that “Uniform = noninformative = nonbiased = objective.” No. There is only the appearance of objectivity. (Rasey: 4/21/13 11:36 am

        In reply to Matthew Marler,

        Bayesian analysis has as its Achilles Heel the issue of “Prior” estimates of distributions. IPCC accepted a 0-18 uniform prior for the climate sensitivity in an egregious case of “Thumb on the Scales” in a desperate and transparent effort to keep a climate CO2 sensitivity 4.5 deg C per doubling as a viable high estimate. (Rasey 7/2 10:52am
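        The “thumb on the scales” effect can be made concrete numerically. Assume, purely for illustration, a likelihood for sensitivity that peaks near 3 C with a heavy upper tail; a uniform prior on [0, 18] leaves all of that tail in play, while truncating the prior range at 6 removes it, and the posterior means differ accordingly. This is a sketch with assumed numbers, not a reconstruction of any IPCC calculation.

```python
import numpy as np

s = np.linspace(0.01, 18, 2000)
# Hypothetical likelihood: peaks at 3 C, symmetric in log(s), so skewed right in s
like = np.exp(-0.5 * ((np.log(s) - np.log(3.0)) / 0.9) ** 2)

def posterior_mean(upper):
    """Posterior mean of sensitivity under a uniform prior on [0, upper]."""
    mask = s <= upper
    w = like[mask]
    return np.sum(s[mask] * w) / np.sum(w)

for upper in (6.0, 18.0):
    print(f"uniform prior on [0, {upper:>4}]: posterior mean {posterior_mean(upper):.2f} C")
```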

      • Matthew R Marler

        Kenneth Fritsch: Matthew, I commented below, but I would like you to please give some detail on how BEST used Bayesian inference in their methods. I assume again that you are referring to the use of hierarchical priors. I believe I have sufficient familiarity with Bayesian data analysis to understand a reply without you going into the basics of Bayesian methods and concepts. I have a current interest in finding instances where Bayesian methods are better applied than frequentist ones.

        Your request dovetails with that of Stephen Rasey, which I misunderstood on first reading. My main theme was that there is no good justification for preferring the unadjusted data to the adjusted data. The details of the implementation of the hierarchical modeling in the BEST data I read about some time ago. I am going away for two weeks. When I get back, I’ll track down the details.

      • Matthew R Marler

        Stephen Rasey: In reply to Matthew Marler,

        Bayesian analysis has as its Achilles Heel the issue of “Prior” estimates of distributions. IPCC accepted a 0-18 uniform prior for the climate sensitivity in an egregious case of “Thumb on the Scales” in a desperate and transparent effort to keep a climate CO2 sensitivity 4.5 deg C per doubling as a viable high estimate. (Rasey 7/2 10:52am

        I second your “thumb on the scales” characterization of the use of a 0 – 18 C uniform prior on the sensitivity. That puts 2/3 of the probability on a range (6 – 18) for which there is no prior justification at all. I do not remember that post of yours (which doesn’t imply that I didn’t read it), but the issue of broad uniform priors arose, among other times, in the discussion of Nic Lewis’s discussion of the estimates for the sensitivity, where he commented on the dubious broad uniform priors.

        Samaniego makes the point, based on simulations, that the prior has to be at least accurate enough in order for the Bayesian estimate to be better than the frequentist estimate. He calls that the “threshold”. For empirical Bayes methods, where the prior (across items, or sites and regions in this case) is estimated from the data, a widely cited paper by Rob Kass and Duane Steffey addresses the limits of accuracy in the posterior mean introduced by estimating the prior instead of knowing it a priori. (For “purely subjective” Bayesians, “empirical Bayes” methods are just another “frequentist” approach, a point made by someone in comments on some widely cited papers by Carl Morris.) More later, when I get back.

        Sorry, at this late date, to be vague. More later.

    • Matthew, you had two opportunities to provide links or other info on how BEST uses Bayesian Hierarchical Modeling in their process. At least two people in this thread are interested in such links and/or summary info, terse or not.

      Do you know of any documents?
      Can you provide their links if you do know?

  243. Pingback: Weekly Climate and Energy News Roundup | Watts Up With That?

  244. Stephen Pruett

    Love the comments. However, this statement in the initial post was particularly interesting to me, “The algorithm (whose code can be downloaded here) is conceptually simple: it assumes that climate change forced by external factors tends to happen regionally rather than locally.”

    Since this may be one of the more influential assumptions in the temperature-adjusting business, I wonder if anyone has bothered to test it. That seems pretty obvious and pretty major to me. In addition, it makes me wonder what other assumptions have been made but not mentioned in this post (and certainly not tested).

  245. Kenneth Fritsch

    Matthew R Marler , I have some familiarity with Bayesian data analysis and I would be interested in your spelling out in more detail how Bayesian inference was applied to the BEST methods. I assume you are talking about using hierarchical priors.

  246. I’d like to point out this article by Cowtan and Way (2014) which addresses some of the issues discussed here.

  247. Kenneth Fritsch

    “If I wanted to sort this out, I’d create Monte Carlo runs that include realistic geographic variability in trend, and I’d feed this synthetic data through each of the software packages. What I would be interested in characterizing with that is bias in trend and resolution with the different methods.”

    Carrick, I think this comparison is part of what benchmark testing attempts to accomplish, although in most of the benchmarking of temperature adjustment algorithms that I am familiar with, the main test is how well the non-climate effects on temperature that are added to the mix are found and adjusted for. Are you attempting to compare adjustment algorithms or something else?

  248. Kenneth, mainly I’m trying to understand the spatial resolution of the different algorithms. Knowing this has some operational value for studying regional scale climate change.

    Of course that ties into the adjustment algorithm. Stephen Rasey is saying that for Denver, BEST uses stations 1000 km away to “improve” the adjustments. This can’t help but reduce the spatial resolution too.

    • @Carrick
      Stephen Rasey is saying that for Denver, that BEST uses stations 1000-km away
      Where I said it: http://hiizuru.wordpress.com/2014/07/13/pick-a-spot/#comment-2878
      Summary:
      For the DENVER record, BEST has on the sidebar the following stations for “Long Term Weather Stations:”
      GLASGOW MUNICIPAL ARPT – NE Montana
      VILLA AHUMADA – N.Cen Mexico
      OZONA 1 SSW – SW Texas
      FAIRFIELD RANGER STN – SE Idaho
      BLOOMFIELD 1 WNW – Iowa.
      ELDON – E. Missouri.
      Every one is separated from Denver by at least 8.5 deg of latitude or 10 deg of longitude. And it isn’t as if these are pristine, unbroken, uncut, continuous stations.

  249. Kenneth Fritsch

    From the Cowtan Way paper linked by Carrick above we have:

    “However recent Arctic warming presents two problems not present in the US case: firstly the station network in the high Arctic is sparse, and secondly the Arctic has been warming rapidly at the same time that the boreal mid latitudes have shown a cooling trend, especially over eastern Russia (Cohen et al., 2012), illustrated in Figure (U5). The close proximity of regions of warming and cooling on both the Eurasian and Alaskan Arctic coasts mean that it is possible for neighbouring stations to show a very different temperature trends. Automated homogenization could potentially introduce unnecessary adjustments to reconcile these trends.”

    Close proximity of stations with significantly different temperature trends, as calculated from difference series – while still maintaining reasonably good correlations – is something that is not unique to the Arctic region, although the difference might be greater in that region. The weaknesses that Cowtan and Way point to in this algorithm are going to exist to some extent in other regions and localities. We need a benchmarking test that will demonstrate potential weaknesses in all temperature adjustment algorithms.

    I note that Cowtan and Way have not included any uncertainties in reporting their results. This is something I suggested to Robert Way that they do several weeks ago. It is difficult to discuss differences in a comparison without using CIs.

    I have used a hybrid version of the Cowtan Way global temperature data set and it definitely shows a significant difference in warming in the Arctic region over the past quarter century when compared to other data sets and most climate models. In the version of Cowtan and Way that I used the Arctic region continues to warm while the lower latitudes have paused in warming. If we can consider the Cowtan Way data set the more correct then I would judge that it will require some major rethinking about the Arctic polar amplification and the operating mechanism(s).

    I found an excellent linear relationship between ice extent and temperature existed in the Arctic polar region using the above version of Cowtan and Way. My analysis showed no signs of an albedo feedback on temperatures. .

    • I found an excellent linear relationship between ice extent and temperature existed in the Arctic polar region using the above version of Cowtan and Way.

      Right, most stations are near the coast, which will display the effects of more 32 degree open water near the stations. Which has nothing to do with an increase in Co2.

      My analysis showed no signs of an albedo feedback on temperatures

      And I don’t think it will, I think open water radiates more energy to space than the increased incoming energy from the reduction in albedo. IE, a melting arctic is a cooling system for the planet. I think this too is controlled by the ratio of clouds to clear skies.

      I do remember reading that our hostess spent a lot of time in the arctic looking into this, I’d like to see how she weighs in on this.

    • Ken,
      I have commented on this unscientific comment of Cowtan/Way at other blogs and ask for your consideration.
      Steven Mosher himself has commented that it is a basic, inviolable law of physics that if A is closer to B than C, then A will be more likely to be like B than C.
      Correct?
      Now it so happens that there are exceptions occasionally.
      The concept of an exception is that it is not the norm or the rule.
      Cowtan/Way are denying the normal rules of physics and science.
      Sure, if you have a mountain next door to your valley there will be a change in temp from top to bottom, but even here, if you take contiguous bits in steps, the temperature of the valley next to the base of the mountain will be more alike.
      You seem to be distorting science by accepting this piece of deceptive waffle without thinking about it and promoting it.
      I feel you do not want to be on the same page with me at times but I ask you to think it through logically and do me the justice of confirming my assertion or explaining why you can possibly think otherwise

  250. I should point out that buried above, mwgrant points out that BEST apparently does some form of detrending.

    link

    That alleviates one of my worries about their method.

    • @Carrick 1:00pm
      From your link to mwgrant:
      BEST does detrend in some manner with respect to latitude and elevation though I must admit find available discussion extraordinarily obtuse.

      It is better to magnify your worries.

    • “It is better to magnify your worries.”

      :O)
      No, lighten up. It is better to keep things in perspective. As Carrick noted elsewhere, it is work in progress. Yes, I find the material available frustrating, but 1.) there is material there, 2.) no doubt my perspective on documentation is different than BEST’s, but it is probably different (more strict) than most others here, and 3.) I personally know it is not easy by any stretch of the imagination to write well about technical work. It is work.

      4.) There are other things that are more fun and interesting.

    • Carrick, I responded to this in two other spots, so I’ll be brief. BEST does not detrend its data in the normal sense of the word. BEST only “detrends” data in the sense it seeks to account for changes in things like altitude and latitude when calculating anomalies. That removes a spatial trend in absolute temperatures. It does nothing to any trends over time. This means if areas warm at different rates (which they do), that difference in trend will be unaffected.

      Or to put it simply, what mwgrant referred to is merely a form of anomalizing the data.

      • ‘BEST only “detrends” data in the sense it seeks to account for changes in things like altitude and latitude when calculating anomalies. ‘

        Two yellow cards.

        1.)Imprecise language where precision is very much needed:

        ‘…in the sense it seeks to account for changes in things like…’

        2.) Conflation of residuals with anomalies.

      • mwgrant, I’m not sure it’s fair to throw a card on me for using imprecise language when the only reason I wrote my comment was your imprecise language in calling this “detrending” :P

        As for your claim I’ve conflated residuals with anomalies, anomalies are residuals. Residuals are what’s left over when something is removed. Anomalies are the deviations from an “expected” value. If you remove an “expected” value from a series, your residuals are the anomalies.

        People may be used to talking about anomalies in reference to the ones calculated via simple processes we see in temperature records, but more complicated methods can be used to calculate anomalies.

      • Again, thanks for the clarification Brandon.

        I agree that residual and anomaly mean basically the same thing.

        If you use “OLS residual” then you mean a specific type of residual that could mean something different than an anomaly.

        An anomaly in my terminology (others can improve on this) is “the difference between an individual value and an estimate of the seasonal value for that time period.”

        A residual in my parlance is the difference between a measured value and a model. It makes no prescription (without a qualifier) on how the model was obtained.

        I’m sure this changes between communities.

      • Carrick, since we can calculate anomalies from things other than surface station records, it wouldn’t make sense to have the general definition of anomaly mention “the seasonal value.” That’s why I said expected value. The expected value for a temperature station can be the seasonal value for that area, but that is only one type of anomaly.

        To demonstrate the subjectiveness of anomalies, suppose you had two groups making temperature indexes. One group baselines their temperature stations so that the mean of the 1960-1990 period is 0. Another group uses the 1980-2000 period instead. Their anomalies would be different. Would one be “right”?

        Of course not. One might be more accurate than the other, one might be more useful than the other, but neither is objectively correct. Both are just ways of saying, “We expect values to be X.” You can have whatever expectations you want. You just have to examine how reasonable they are and check for any effects they might have.

        For fun, you could even have anomalies be calculated in a totally different manner. If we had a set of perfect temperature stations, we could use them to create our expectations. We could then define our anomalies as how non-perfect stations diverge from the perfect ones. That wouldn’t be useful for creating a global temperature index, but it’d be great for examining things like microsite influences.
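        The baseline point above is easy to check numerically: shifting the baseline period changes every anomaly by a constant offset, so the trend is untouched. A minimal sketch with a synthetic annual series and two hypothetical baseline periods:

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1950, 2014)
temps = 14.0 + 0.02 * (years - 1950) + rng.normal(0, 0.2, years.size)  # deg C

for lo, hi in ((1960, 1990), (1980, 2000)):
    base = temps[(years >= lo) & (years < hi)].mean()
    anom = temps - base
    slope = np.polyfit(years, anom, 1)[0]
    print(f"baseline {lo}-{hi}: mean anomaly {anom.mean():+.2f}, "
          f"trend {slope * 10:+.3f} C/decade")
```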

      • Brandon Shollenberger

        First card stands, the second is dropped.

        1.) ‘…in the sense it seeks to account for changes in things like…’ is the basis of the call. That is the kind of waffle language I reserve for myself.

        2.) Temporary insanity. Yours or mine, I am not sure which.

        HTH

  251. It is a bit of a mystery why the temperature field construction guys get in a tizzy if a station moves 100 feet, only to correct it with a station 1,000 km away.

    • +1

      A hazard of number crunching…

      • I appreciate your and Brandon’s, Nick’s, Stephen’s, Carrick’s, and others’ input to this thread. I’m learning a lot just looking up stuff you guys bring up. Also learned a lot from Mosher’s input, but had to wade through the 87% entropy content of his posts to extract the signal.

    • @jim2
      +2 Brevity is the soul of wit. Brevity also shows clarity of thinking.
      Well done.

      One suggestion, though. I dislike mixed units within a sentence.
      in a tizzy if a station moves 30 meters only to correct it with a station 1,000,000 meters away.

      • Stephen, when I wrote the comment I was focused more on the content and not so much on the form. But your criticism is spot on.

  252. Fritsch
    “If we can consider the Cowtan Way data set the more correct then I would judge that it will require some major rethinking about the Arctic polar amplification and the operating mechanism(s).”
    Cowtan and Way
    “firstly the station network in the high Arctic is sparse, and secondly the Arctic has been warming rapidly at the same time that the boreal mid latitudes have shown a cooling trend.
    it is possible for neighbouring stations to show a very different temperature trends”.

    Logic 101.
    If the network is sparse, how can one have any confidence in the readings from 3 stations?
    If the stations are sparse, why wouldn’t the data at the 3 stations “show very different temperature trends”?
    Steven Mosher himself said it is a basic, inviolable law that is used in all temperature algorithms that neighboring stations are more likely to be similar in temperature trends than distant stations.
    Only someone trying to Krig data [Sorry, I meant rig] could say that A is more similar to C than B with a straight face.
    When A is closer to B than C, and sparse means 3 stations, etc.

  253. It looks like Brandon Shollenberger won’t come clean as to why he messed up so badly on lowballing the GISS warming trend for the area where he lives. And then he has the gall to blame it on BEST for introducing a “huge warming trend”.

  254. A tweet from John Kennedy:

    John Kennedy ‏@micefearboggis 2m
    A great set of presentations on homogenisation, observational uncertainty and interpolation:
    http://surfacetemperatures.blogspot.co.uk/2014/07/talks-from-samsi-image-workshop-on.html

      • Interesting.
        He is subtracting the mean temps from two classes of records:
        A: USHCN stations that were morning TOBS on July 15, 1936, and
        B: USHCN stations that were afternoon TOBS on July 15, 1936.
        I posted some questions about the details, particularly whether the temp records were Raw temps or TOBS adjusted USHCN temps. Or are they the Raw Anomaly temps, or something else. What was the dataset used?

        In the year 2000, all temps, even those in class B should now be morning TOBS. So any TOBS adjustment ought to be moot in 2000. Yet there is still a 0.85 deg difference between the two classes, whereas the 1936 difference is only 0.54 +/- 0.02.

        As always…. What are the error bars? What is the standard error of the difference of the means?
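        On that error-bar question: for a split of stations into morning and afternoon classes, the 2-sigma uncertainty on the difference of the class means follows from the two class variances and counts. A minimal sketch with placeholder numbers (not the data in the linked post):

```python
import numpy as np

rng = np.random.default_rng(4)
# Placeholder station mean temperatures for the two classes (deg C)
morning = rng.normal(21.0, 2.0, 400)
afternoon = rng.normal(21.6, 2.0, 350)

diff = afternoon.mean() - morning.mean()
# Standard error of a difference of two independent means
se = np.sqrt(afternoon.var(ddof=1) / afternoon.size
             + morning.var(ddof=1) / morning.size)
print(f"difference of means: {diff:+.2f} +/- {2 * se:.2f} (2 sigma) deg C")
```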

      • Goddard’s next post is better. He shows the morning and afternoon records of summer maximums separately. TOBS adjustments are unwarranted according to this data.

    • I really would suggest people watch those regardless of ability to understand the technical information. Ifs, buts, maybes, “we are unsure”, “possibly”, etc. mean the same regardless of expertise. I hope John did not intend to win over any doubters by posting that. If so, it highlights the level of self-delusion many of the practitioners of data manipulation are suffering.

  255. If anyone is still paying attention to this topic, I have an interesting proposal for testing inhomogeneity-correcting algorithms with real data. With the advent of hourly data, we have multiple data streams that arise from a single station. What would happen if we tried using the 8am data or midnight data to correct possible inhomogeneities in the 5pm data? There will certainly be a TOB adjustment between these three data streams, just like there is an average offset between the temperature records at neighboring stations. However, any inhomogeneity that is detected by the algorithms can’t easily be blamed on the usual suspects (station moves, changes in the station environment, etc.), and there may be useful metadata.

    Suppose we analyze 8:00am, 4:00pm and midnight data from one station in the context of data from three neighboring stations. Does introducing data from the same station at a different hour increase the likelihood that a breakpoint will be detected? Will new corrections average zero, or will they increase the warming trend?
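    One way to prototype this, stated as an assumption-laden sketch rather than a recipe: treat the 8am and 4pm readings from a single station as if they were two neighboring stations, form their difference series, and scan it for a mean shift with a simple cumulative-sum statistic. Any “breakpoint” found this way cannot be a station move, so the exercise says something about the false-alarm behavior of the detection step. Real homogenization algorithms (e.g. the pairwise approach) are considerably more elaborate than this.

```python
import numpy as np

rng = np.random.default_rng(5)
n_days = 3000
season = 10 * np.sin(2 * np.pi * np.arange(n_days) / 365.25)

# Synthetic series derived from one station's hourly data (deg C);
# the 4pm reading runs warmer than the 8am reading, as expected
t_08 = 12 + season + rng.normal(0, 2, n_days)
t_16 = 18 + season + rng.normal(0, 2, n_days)

diff = t_16 - t_08              # difference series between the two "stations"

def cusum_break(x):
    """Return the index and size of the largest mean-shift CUSUM statistic."""
    x = x - x.mean()
    c = np.cumsum(x)
    k = int(np.argmax(np.abs(c)))
    return k, np.abs(c[k]) / (x.std(ddof=1) * np.sqrt(len(x)))

k, stat = cusum_break(diff)
print(f"largest CUSUM statistic {stat:.2f} at day {k} "
      "(compare against a threshold from surrogate/Monte Carlo runs)")
```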

    • Frank,
      Some keep checking back on this because it is important. Shocking adjustments of data that don’t trace back to initial observations. Use the hourly data at any one time and then derive whatever adjustments one proposes. The cooling of the past and heating of the present shatters many understandings of data treatment. Trust but verify.
      Scott

    • Did anyone ever do this, Frank? I saw the question asked of Nick Stokes, but it appears it is a question no one wants to answer.

      • I’ll try and ask again next time Zeke posts somewhere and/or Mosh is paying attention. Climate Etc often moves too fast for me (1600 comments now). If you have a good opportunity to ask yourself, feel free to do so.

  256. Frank and Scott, I’m still checking back.

    This thread deserves a summary of:
    issues raised,
    Questions answered,
    Questions that remain open,
    suggestions for data plots and future analysis.

    This thread is high grade ore, but it will take a couple of screening passes to extract the nuggets and gems. I’ll be working on it over the next 24 hours.

    In the meantime, before I try and build it myself,
    Can anyone provide the quantiles or deciles for the Uncertainty field from the BEST TAVE from the single valued file, by decade?

    • David Springer

      What? Steven Mosher answered all questions and addressed all issues. It was mostly by calling the questioner stupid or telling them to do the work themselves though. 8-(

      • Steven Mosher’s contributions are mostly of the mineral: Leaverite.
        It is best to “Leave ‘er right there.”
        (joke I first heard at the Mollie Kathleen Gold Mine, Cripple Creek, CO, mid-70s)
        .

  257. Brandon Shollenberger ( July 14, 2014 at 5:08 pm ) wrote:

    “Carrick, I responded to this in two other spots, so I’ll be brief. BEST does not detrend its data in the normal sense of the word. BEST only “detrends” data in the sense it seeks to account for changes in things like altitude and latitude when calculating anomalies. That removes a spatial trend in absolute temperatures. It does nothing to any trends over time. This means if areas warm at different rates (which they do), that difference in trend will be unaffected.

    Or to put it simply, what mwgrant referred to is merely a form of anomalizing the data.”

    Here is my take on the comment.

    “BEST does not detrend its data in the normal sense of the word.”

    Sure it does. Detrending does not necessarily have to involve time. Spatial detrending not involving time is common in problems in geology, hydrology, soil science, agronomy, etc. There is nothing out of the norm about either my use of the term or what BEST is doing.

    In BEST the detrending, or its functional equivalent, is integrated into the kriging equations [Eqns 25, 26 in the BEST averaging supplement] – a formulation similar to, if not the same as, universal kriging. If one first detrends the data and then applies ordinary kriging to the resulting residuals, it is called regression kriging – one of several similar but alternative approaches.

    Or to put it simply, what mwgrant referred to is merely a form of anomalizing the data.”

    The reason for spatial detrending is that the ordinary kriging equations are valid only for stationary (non-trending) data. Use of trending data produces biased results. That is an entirely different matter than merely calculating anomalies.

  258. Brandon Shollenberger (July 14, 2014 at 1:59 pm) wrote

    “Saying they detrend the data is misleading. They did not detrend it. They estimated climatological parameters for latitude, altitude and season. They then removed those. That’s not detrending in the sense most people would interpret it as it has no time component.

    “It’s really just a way of anomalizing the data. It can remove absolute differences in temperatures, but it cannot, by definition, remove differences in trends of temperatures.”

    It is not misleading. First, time does not have to be involved for a process to be considered detrending. Second, in geology detrending (only) in the spatial domain is quite common. Third, my terminology is consistent with practice. Fourth, in BEST detrending is carried out in the spatial domain–latitude and elevation, but not longitude. In another twist the detrending in the spatial domain was carried out within the kriging step. It is that simple.

    In more detail:

    From the discussion on p. 8 of the BEST averaging methods supplement in the formulation of the kriging equation (eqns 25 and 26 on p.8) we have that a cubic function of latitude (and location) and a quadratic function of elevation (and location) are fit within the kriging as in universal kriging–an augmented set of kriging equations handles both ‘detrending’ and kriging weights. An alternative approach to the same problem would be regression kriging where the trends are removed externally resulting in a trend surface (a polynomial function of latitude and elevation), the residuals are kriged using ordinary kriging, and the predictions from the trend surface and kriging of the residuals recombined at locations of interest. (There has been a lot of back-and-forth in the geostatistics community on parsing these variations in methodology and terminology.)
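
    For anyone who wants the two-step recipe in concrete form, here is a minimal Python sketch of the regression-kriging alternative described above. To be clear, this is not BEST’s code: the synthetic station values, the exponential covariance model, and the flat planar distance are all assumptions made purely for illustration.

```python
import numpy as np

# Minimal regression-kriging sketch with synthetic stations. This is NOT
# BEST's code: the station values, the exponential covariance, and the
# flat planar distance are assumptions made only to illustrate the two-step
# recipe described above (fit a spatial trend, then krige the residuals).
rng = np.random.default_rng(0)
n = 60
lat = rng.uniform(30.0, 50.0, n)              # degrees north
lon = rng.uniform(-110.0, -80.0, n)           # degrees east
elev = rng.uniform(0.0, 2000.0, n)            # metres
temp = 38.0 - 0.6 * lat - 0.0065 * elev + rng.normal(0.0, 0.4, n)

# Step 1: fit and remove the deterministic spatial trend (cubic in latitude,
# quadratic in elevation, echoing the functional form discussed above).
X = np.column_stack([np.ones(n), lat, lat**2, lat**3, elev, elev**2])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
resid = temp - X @ beta                       # (approximately) stationary

# Step 2: ordinary kriging of the residuals under an assumed covariance.
def covariance(h_km, sill=0.2, range_km=600.0):
    return sill * np.exp(-h_km / range_km)

def pairwise_km(a, b):
    d = a[:, None, :] - b[None, :, :]         # crude planar distance in km
    return np.sqrt((d ** 2).sum(axis=-1)) * 111.0

pts = np.column_stack([lat, lon])
C = covariance(pairwise_km(pts, pts)) + 1e-8 * np.eye(n)

def krige_residual(target):
    c0 = covariance(pairwise_km(pts, target[None, :])).ravel()
    # ordinary-kriging system with a Lagrange multiplier for unbiasedness
    A = np.block([[C, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
    w = np.linalg.solve(A, np.append(c0, 1.0))[:n]
    return w @ resid

# Step 3: recombine trend and kriged residual at a prediction location.
t_lat, t_lon, t_elev = 40.0, -95.0, 300.0
trend = np.array([1.0, t_lat, t_lat**2, t_lat**3, t_elev, t_elev**2]) @ beta
print(trend + krige_residual(np.array([t_lat, t_lon])))
```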

    “It’s really just a way of anomalizing the data. It can remove absolute differences in temperatures, but it cannot, by definition, remove differences in trends of temperatures.”

    Weird. No, it is not ‘really just a way of anomalizing the data’. We are talking about another animal, spatial detrending, and there is a lot of literature on that topic. My terminology is consistent with that literature. For example, in the linked paper below, “Comparison of kriging with external drift and regression kriging”, the term ‘residual’ appears more than 45 times. The term ‘anomaly’ appears 0 (zero) times.

    link: http://www.itc.nl/library/Papers_2003/misca/hengl_comparison.pdf

    Idiosyncratic interpretations can confuse matters.

    “It can remove absolute differences in temperatures, but it cannot, by definition, remove differences in trends of temperatures.”

    Huh? Pretty vague terms used here. Oh well, I’ll try… The (spatial) detrending here has nothing to do with a need or a goal to “remove absolute differences in temperature” or any other variable. The usual purpose of spatial detrending is to remove underlying trends from the data. In the case of (ordinary) kriging, trending data produces biased results–that is well understood. Thus stationarity is a requirement for kriging, and we have methodologies like universal kriging and regression kriging (see above) that involve detrending in some form in order to make the data play nice. Spatial detrending is done because it has to be done. Regarding “it cannot, by definition, remove differences in trends of temperatures”: it also cannot make potato knishes, but then it is not supposed to.

  259. Why is this important?
    I mean, why does BEST even bother?

    They say time and time again that they are only interested in the slope of the temperature anomalies, not their absolute values. And they say “it cannot, by definition, remove differences in trends of temperatures.” So why do it at all?
    There is more to this than meets the eye.

  260. mwgrant, that something is correct does not prevent it from being misleading. Carrick’s concern has primarily been one of temporal trends. In response to his concerns regarding temporal trends, you said BEST does detrend their data. That misled Carrick, a point demonstrated when I clarified the issue for him. That the word “detrend” can apply to temporal or spatial trends does not excuse conflating temporal detrending and spatial detrending.

    Moreover, you are wrong about what BEST does, such as when you say:

    The (spatial) detrending here has nothing to do with a need or a goal to “remove absolute differences in temperature” or any other variable.

    That is exactly what BEST does. We’ve had BEST representatives here specifically saying their climatologic regression is done to remove the deterministic portion of the temperature field. The purpose of that is to create a set of baseline values which can be extracted, leaving anomalies which are comparable to one another. Which leads us to the next point:

    Weird. No, it is not ‘really just a way of anomalizing the data’. We are talking another animal, spatial detrending and there is a lot of literature on that topic. My terminology is consistent with that literature.

    I gave the definition of what anomalies are. I don’t see how you think appealing to the popularity of another word could dispute it. Masticate isn’t a popular word, but that people say “chew” instead of it in no way affects what it means.

    Moreover, BEST’s residuals are largely equivalent to the sort of anomalies calculated in any other temperature index. They’re just what’s leftover when you remove baseline values for their area. They’re even centered on zero!
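
    A quick numerical check of the narrow point here, with made-up numbers: subtracting a fixed station baseline (anomalizing) shifts the level of a series but leaves its temporal trend exactly as it was.

```python
import numpy as np

# Removing a fixed baseline (anomalizing) cannot change a temporal trend.
# The series and baseline period here are invented purely for illustration.
years = np.arange(1950, 2011)
station = 14.0 + 0.02 * (years - 1950)       # hypothetical series, 0.02 C/yr
anomaly = station - station[:30].mean()      # subtract a 1950-1979 baseline

def trend(y):
    return np.polyfit(years, y, 1)[0]

print(trend(station), trend(anomaly))        # identical slopes, ~0.02 C/yr
```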

    Finally, it’s cheeky to say:

    Spatial detrending is done because it has to be done. Regarding, “it cannot, by definition, remove differences in trends of temperatures.” It also can not make potato knishes but then it is not supposed to.

    Carrick was talking about BEST removing differences in trends in temperature. It’s silly to act as though it is unreasonable for me to discuss BEST’s failure to remove differences in trends in temperatures when that was the topic you responded to. I should not be mocked for staying on topic simply because you decided to discuss a different topic instead.

    Oh. I should also point out your reference to the BEST documentation is misguided as that documentation is out of date. We’ve had changes in the approach, directly related to the climatological regression BEST performs, discussed on this very site.

    • Brandon Shollenberger, why don’t you address your assertion that “BEST adds a huge warming trend”?

      Spatial interpolation is just not that hard. They can check the algorithms very easily. Whatever gibberish that you are spouting out is a misdirect from your inability to defend your assertions.

      • WebHubTelescope, I have addressed it. Numerous times. With over half a dozen examples.

        It’s not my fault you insist on telling obvious falsehoods about what data I used, or that you are too lazy to use that same data to verify what I’ve said.

      • Uh… sure, mwgrant. You dredged up an issue days after it had passed, made several comments about me as a person while discussing it, and were completely wrong on the only point which mattered, but we can drop the entire thing without resolving anything.

        Even though you were obviously wrong about what Carrick was talking about.

      • Sheldon Brandenberg, You still haven’t addressed your assertion that “BEST adds a huge warming trend”.

        Talking about laziness — it is not that I am lazy; it is you who are lazy, trying to circumvent the traditional academic training route and acting like someone that you are not. It appears that you have never worked with any of your scientific peers to work out a research area in detail. This is not surprising, because all you are doing is latching on to folk as equally misguided as yourself.

        And you can use your high school debating skills on me as much as you want, because I don’t really care.

    • Brandon

      1111———————————-
      “mwgrant, that something is correct does not prevent it from being misleading. Carrick’s concern has primarily been one of temporal trends. In response to his concerns regarding temporal trends, you said BEST does detrend their data.”

      First a few items…
      —————
      Carrick July 12, 2014 at 2:18 pm
      Part 3:
      I believe that part of the problem [with BEST] is method of interpolation used (kriging) and in particular the assumption that the correlation field relating spatially separated stations is azimuthally invariant.

      —————
      Carrick | July 14, 2014 at 4:45 am |
      I think they should detrend before computing the correlation, otherwise the fact that the series has trends dominates the computation of the correlation, and this results in biased-high estimates of the trends. They are also assuming axial symmetry which I believe is a mistake.

      —————
      mwgrant | July 14, 2014 at 8:25 am |
      Carrick and Doc Martyn
      “I think they should detrend before computing the correlation, otherwise the fact that the series has trends dominates the computation of the correlation, and this results in biased-high estimates of the trends. They are also assuming axial symmetry which I believe is a mistake.”

      BEST does detrend in some manner with respect to latitude and elevation, though I must admit I find the available discussion extraordinarily obtuse.

      I’ve not even riffed on the constant in time function here (OT).

      So how good are you at time ordering? Spatial is there.

      “That misled Carrick, a point demonstrated when I clarified the issue for him. That the word “detrend” can apply to temporal or spatial trends does not excuse conflating temporal detrending and spatial detrending”.

      So why have you continually done that? And in your responses/comments on my comments you have continued to use ‘trend’ ambiguously–noticeably so, to the point where I have made an effort to qualify ‘spatial’ and ‘temporal’ in my comments.

      Finally the simple statement, ‘BEST does detrend in some manner with respect to latitude and elevation’ seems to have been easy enough for Carrick to parse and assimilate without apparent difficulty as indicated in his comments following mine. Several times in subsequent comments he mentioned spatial assumptions and did not appear wounded.

      —————————————-
      2222———————————

      “Moreover, you are wrong about what BEST does, such as when you say:

      The (spatial) detrending here has nothing to do with a need or a goal to “remove absolute differences in temperature” or any other variable.”

      But you have omitted my observation on stationarity, which compels the detrending:

      “The (spatial) detrending here has nothing to do with a need or a goal to “remove absolute differences in temperature” or any other variable. The usual purpose of spatial detrending is to remove underlying trends from the data. In the case of (ordinary) kriging, trending data produces biased results–that is well understood. Thus stationarity is a requirement for kriging, and we have methodologies like universal kriging and regression kriging (see above) that involve detrending in some form in order to make the data play nice. Spatial detrending is done because it has to be done.”

      The bottom line is that even if they didn’t want the nonstationary component or baseline, they would still have to do the spatial detrending.

      —————————————-
      3333————————————

      The purpose of that is to create a set of baseline values which can be extracted, leaving anomalies which are comparable to one another.

      That may be accomplished, but the fact is that stationarity is a fundamental requirement. Extraction of the nonstationary part, i.e., the baseline values, is in the general case an option. That is good for BEST because they are interested in the baseline, i.e., the optional part is their prize.

      —————————————-
      4444————————————

      “I gave the definition of what anomalies are. I don’t see how you think appealing to the popularity of another word could dispute it.”

      To me, accepted terminology in an established discipline is always preferable to ad hoc terms, particularly when third parties are involved.

      —————————————-
      5555———————————

      “Moreover, BEST’s residuals are largely equivalent to the sort of anomalies calculated in any other temperature index. They’re just what’s leftover when you remove baseline values for their area. They’re even centered on zero!”

      a.) ‘largely equivalent to the sort of anomalies calculated in any other temperature index.’ You’d never get past technical editing with fuzzy language like that. At times even on blogs informal does not work–one of those times is when you are trying to communicate technical concepts.

      b.) ‘They’re even centered on zero!’ That is a constraint on the residuals that could conflict with detrending (or, if you prefer, generating a trend surface). Kriging requires no trend but not a zero mean. I do not know whether it would conflict; just the possibility is there…

      c.) In general I really don’t give a rat’s ptui what you call things, but if you comment on a technical statement I have made, then use that language. If a term is not clear, ask. Nothing personal; it’s about communication. I particularly resent your casual ad hoc definitions when I am trying to be correct. [It is not easy.] So much time is wasted on this blog as a result of imprecise language at key moments.

      —————————————-
      6666———————————

      Checking back I see:

      “I think they should detrend before computing the correlation, otherwise the fact that the series has trends dominates the computation of the correlation, and this results in biased-high estimates of the trends. They are also assuming axial symmetry which I believe is a mistake. –(Carrick | July 14, 2014 at 4:45 am | )

      Given that, I basically called attention to the fact that BEST does detrending on latitude and elevation. In that same comment I also noted, ‘I’ve not even riffed on the constant in time function here (OT)’, which I corrected to ‘the constant in time correlation function’ within three (3) minutes. I do not think that small amount of information would be difficult to quickly prioritize according to one’s particular interests. You just look for trouble.

      —————————————-
      7777———————————

      It’s silly to act as though it is unreasonable for me to discuss BEST’s failure to remove differences in trends in temperatures when that was the topic you responded to.

      It would be if I were responding to that topic, but I was not responding to that topic. I was merely exasperated and bemused to see the comment and spent way too much time trying to figure out ‘where the hell did that come from?’. Cheeky? I thought it was funny to ask what this was and what it had to do with spatial detrending. Lighten up; you were not mocked.

      —————————————-
      8888———————————

      “I should also point out your reference to the BEST documentation is misguided as that documentation is out of date.”

      Absolutely, you should. Thanks. Did you notice, by the way, that the page number was correct, so that indeed I was referring to the current material even though I called it a supplement rather than an appendix?

      Slinging this out as is…it is late and nothing here changes the world…I think I’m cured now and am not going to fall off the wagon.

      • mwgrant, BEST calculates the correlation between various temporal series based upon their spatial location. Temporal trends will exist in those temporal series, meaning their spatial correlation calculations are impacted by temporal trends. That is the issue which was being discussed. We can see this by examining the quotes you selected and their context. Your first quote begins by saying it is “Part 3.” I’ll quote Part 2:

        Here are the trends for 1900-2010, for the region 82.5-100W, 30-35N:

        berkeley 0.045
        giss (1200km) 0.004
        giss (250km) -0.013
        hadcrut4 -0.016
        ncdc -0.007

        Berkeley looks to be a real outlier.

        This is a clear discussion of temporal trends. Your second quote is followed shortly after by Carrick saying the “argument over whether to detrend or not is an ongoing one in climate science,” a reference to the disagreements about detrending temporal series in paleoclimate reconstructions.

        As for your claim I continually conflate spatial and temporal trends, that is false. I didn’t repeatedly specify which kind of trends I was talking about because, in context, it was clear I only referred to temporal trends. Given I hadn’t discussed spatial trends within the series, I had no reason to repeatedly distinguish which I was referring to. As for your claim:

        The (spatial) detrending here has nothing to do with a need or a goal to “remove absolute differences in temperature” or any other variable.

        The bottomline is if they didn’t want the nonstationary component or baseline they would still have to do the spatial detrending.

        This is little more than hand-waving, hand-waving that seems contradictory to me. You clearly suggest there were alternatives for BEST to handle the spatial detrending. That is exactly in line with the point I made. Spatial detrending was necessary. Nobody disputes that. I, however, say they did more than just spatial detrending. This apparent contradiction is made more obvious by your later remark:

        b.) ‘They’re even centered on zero!’ That is a constraint on the residuals that could conflict with detrending (or, if you prefer, generating a trend surface). Kriging requires no trend but not a zero mean. I do not know whether it would conflict; just the possibility is there…

        If removing the entire deterministic portion of the data was not necessary, as they could perform kriging without it, then removing it was not simply detrending the data. It was, as I say, done with a specific purpose in mind. But really, this all just seems stupid to me as you make remarks like:

        You just look for trouble.

        The discussion we had had on this subject ended days ago. You didn’t make any attempt to continue it then. You didn’t despite the fact we had other exchanges afterwards. Instead, you’ve just randomly started it up again, days later. I hadn’t even thought about this topic. It was pure luck I even saw your comments in my RSS feed.

        Me, looking for trouble? The only one even interested at this point is you. And you’re apparently interested in making things about us as people, something I haven’t done at all. And I haven’t been the one trying to start arguments about semantics. The only topic I’ve cared about this entire time is establishing what was being discussed. You are the only one who has wanted to discuss anything else. If anyone has been looking for trouble, it’s you.

        On the subject of what was being discussed, you were wrong. That you were wrong is obvious given you support your claim by saying:

        Finally the simple statement, ‘BEST does detrend in some manner with respect to latitude and elevation’ seems to have been easy enough for Carrick to parse and assimilate without apparent difficulty as indicated in his comments following mine. Several times in subsequent comments he mentioned spatial assumptions and did not appear wounded.

        But one can see Carrick was misled by your comment by looking at his response to my clarification:

        Thanks Brandon. I was never able to find a discussion of detrending and was just guessing that I wasn’t reading it carefully enough.

        So this wouldn’t fix the problem with the low-frequency portion (“the trend”) dominating the estimate of the correlation coefficient, when what you really want is just the high-frequency portion unadorned by the trend from another region.

        That he “did not appear wounded” by a minor miscommunication is hardly surprising. It wasn’t a serious issue. I posted a clarification for Carrick because he and I had been discussing the issue of temporal trends for a while now, and based upon his response to you, I could tell he misunderstood what you said. As far as I’m concerned, my quick clarification and his recognition of your intended meaning should have been the end.

        As a final note:

        Absolutely, you should. Thanks. Did you notice, by the way, that the page number was correct, so that indeed I was referring to the current material even though I called it a supplement rather than an appendix?

        No, I did not notice you were “referring to the current material” because there is no current material. BEST has not released documentation for the changes I referred to. Your question is built upon a false premise, and because of that, I don’t know what you think I was saying.

      • As a side note, I should point out it is amusing to be told Carrick was not talking about temporal trends when he and I have been discussing temporal trends off and on for a few weeks now. It’d be really strange if he suddenly changed subjects without comment, especially given I was participating in the exchange.

        But uh… sure. Maybe I don’t know what subject I helped bring up. Maybe I’m just saying all sorts of things because I’m looking to start a fight. I’m apparently pretty good at it. Who would have thought making a quick clarification Carrick acknowledged was helpful would turn personal?

      • Fuggedaboutit.

      • My response wound up in the wrong fork. It’s funny really. When I participated in two lengthy forks about the same topic upthread, I didn’t misplace a single comment.

        Then, I got one simple fork here, and I apparently forgot how to find the right spot to reply.

    • Steven Mosher at the Blackboard replied to a comment I made on the vagaries of kriging. My comment:
      “What you said is true or the flaw is true? Sorry, but your comment was a little ambiguous. Taking it that you are now defending Cowtan/Way against yourself,
      i.e. saying that areas further apart can be more similar than areas close together when you only have sparse records in the first place, is a bit Yogi Berra-ish.”
      SM —
      You don’t get it. You simulate having sparse stations to test the method. Further, in some places where we THOUGHT we had sparse stations and applied the method, we have now found MORE data. This data is used to show that you are wrong.
      ###############
      Or am I wrong and you are actually agreeing with yourself and me?
      If you only have 3 or 4 data points and the two furthest apart are the same, it is not right to fill in hundreds of points between using the rationale that two distant points are the same.
      SM —
      huh,
      Heck you just said it yourself. Either leave the data alone or infill as you do for everything else (the old trusted and true method)
      the old trusted method? yes kriging.
      it is simple to test. and you haven’t.
      go away and do some work. talk to mwgrant, he will help you.

      You are talking to mini SM, Brandon. BEST of luck.

      • angech

        I saw the BB comment yesterday but did not have to look at things upthread. I am curious and will take a look if you do not mind. My last ‘project’ here is now complete–well cooked, BEST of times but too much BS.

        It will take a while and I have some must do’s in the AM and decrypting thread talk takes time, but will get back somewhere on virgin turf (new thread here or an Open thread) if that works.

  261. angech

    Steven Mosher (Comment #131000) July 15th, 2014 at 4:11 pm
    “in any case, you always need to make assumptions” all the way down the line…all the way. The ‘you’ is generic.

    Now that that is out of the way:
    One difference between mini and full-sized models: for the latter, it’s about the sandbox.

  262. hopefully anonymous

    As a trained mathematician, and a practicing engineer, I am simply awestruck when I read all the details of this stuff that results in a line (a one-dimensional object) that wiggles up and down less than about 1C over a 100+ year period. And somehow meaning is supposed to be assigned to that “result”. And that is the “evidence” of “dangerous” “warming.”

    Oh, but, hey, several groups of people doing basically the same thing, starting with the same data, relying on common adjustments, get very similar results. Yaaaaaay!

    If we all ate dinner at the same (or, heck, different) places, and crapped in our pants the next day, we would also all get very similar results.

    • If you really were a “trained mathematician” you would know that the magnitude of the signal itself doesn’t provide the relevant information about the significance of the change; what matters is the magnitude of the signal relative to the background variability/noise. But it was just another comment from a fake skeptic troll.
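
      For what it is worth, one simple way to make that comparison concrete, with invented numbers rather than any real record: estimate the trend and compare it with the spread of the detrended residuals.

```python
import numpy as np

# Illustrative only: a made-up 0.8 C/century trend in interannual noise of
# ~0.15 C, expressing "signal relative to background variability" directly.
rng = np.random.default_rng(1)
years = np.arange(1900, 2001)
series = 0.008 * (years - 1900) + rng.normal(0.0, 0.15, years.size)

coeffs = np.polyfit(years, series, 1)
trend_per_century = coeffs[0] * 100.0
residual_sd = np.std(series - np.polyval(coeffs, years))
print(f"trend {trend_per_century:.2f} C/century, residual sd {residual_sd:.2f} C")
```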

      • Jan P Perlwitz,

        What is quite remarkable and paradoxical is that proxies capture very well what you refer to as noise or natural variation, and absolutely not what you think is the true signal (the effect of additional CO2).

        At this point, you still have nothing to propose (raw data of proxies with annual resolution that would confirm the alleged evolution of temperatures in the twentieth century) ?

        http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-607280

      • phi,

        What is quite remarkable and paradoxical is that proxies capture very well what you refer to as noise or natural variation, and absolutely not what you think is the true signal (the effect of additional CO2).

        Well, Zeke Hausfather’s article was about measured temperatures, not about proxies, so you are changing the topic here; but what makes you say what you just claimed?

        At this point, you still have nothing to propose (raw data of proxies with annual resolution that would confirm the alleged evolution of temperatures in the twentieth century) ?

        Why would I have to propose anything? I am not aware that I have the burden here to provide anything of the sort. I am not aware of any statement supposedly made by me which implied such a burden on my side.

        Aren’t there any published peer-reviewed scientific studies or ongoing scientific projects in the world on reconstructing past temperature records from proxies, from which you could get information on how past temperatures changed with varying climate conditions?

      • Jan P Perlwitz,

        “Well Zeke Hausfather’s article was about measured temperatures, not about proxies, thus you are changing the topic here…”

        This is not really a change of topic, and it is not I but Zeke and you who brought it up in particular:

        “Zeke Hausfather wrote on independent confirmation of global warming:…”

        (http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-606798)

        But no matter, I think it was a good initiative. Indeed, what better than a possible confirmation of the validity of adjustments by a set of independent measurements?

        Unfortunately, this confirmation does not exist. The raw proxy data tell a very different story.

        Regarding published reconstructions that would do the job, they are heavily doctored and unusable; look for example at ClimateAudit under Briffa, Tingley and Huybers, Marcott, or Gergis for the newest.

        You claim that the significant warming of the twentieth century is detected by means independent of the weather stations. It is your responsibility to prove it with something concrete and sound.

        Again, can someone provide raw data of proxies with annual resolution that would confirm the alleged evolution of temperatures in the twentieth century ?


      • You claim that the significant warming of the twentieth century is detected by means independent of the weather stations. It is your responsibility to prove it with something concrete and sound.

        Again, can someone provide raw data of proxies with annual resolution that would confirm the alleged evolution of temperatures in the twentieth century ?

        Why even bother with these people who have all the answers but can’t see the rather obvious proxy of plants and animals migrating to more northern latitudes, and of lake ice-out dates arriving earlier in the calendar year?

      • WebHubTelescope,
        Provide the data! Vague claims have no value. Yes, for the Northern Hemisphere, the last decade of the twentieth century was warmer than the two or three previous. So what? Everyone knows it.

      • phi, are you lazy like Brandon Shollenberger?

        Of course I have data, stuff that I analyzed meself
        http://theoilconundrum.blogspot.ch/2012/09/lake-ice-out-dates-earlier-and-earlier.html

        or how about this proxy data that I am using to predict ENSO
        http://contextearth.com/2014/06/25/proxy-confirmation-of-soim/

        who’d a thunk it?

      • WebHubTelescope,
        Interesting. But, a priori, I do not see anything that confirms the alleged evolution of temperatures in the twentieth century.

        What is the proxy quality, i.e., its high-frequency correlation with regional temperature?

        Is the value of the temperature trend over the twentieth century confirmed ?

      • Lengthy laundry lists of confirming data vs a footnote or two of in-process quandaries. Don’t act so ignorant about the way that science works.

      • Fun.

      • hopefully anonymous

        LOL. First of all, the furor involved in this debate is over degrees Celsius (as calculated roughly via the methodology described here). There are no interest groups on either side that are debating using relative magnitudes. In fact, I don’t know that I’ve ever read a paper that discusses relative magnitudes.

        Second, by all means, please define, with mathematical rigor, the “background variability/noise”. I’m guessing it’s something like f(t) == 0 in your imagination, in which case (do the math!) the relative magnitude is infinity.

        Crap, meet pants.

  263. phi,

    But no matter, I think it was a good initiative. Indeed, what better than a possible confirmation of the validity of adjustments by a set of independent measurements?

    Unfortunately, this confirmation does not exist. Proxies raw data tell a very different story.

    So, you claim that the results from the study referenced by Zeke Hausfather are bogus. Right? The burden to bring the evidence for your assertion is on you then. Do you have any scientific sources to offer, which specifically falsify the Anderson et al. study and with which you want to back up your assertion?

    Regarding published reconstructions that would make the deal, they are heavily doctored and unusable,

    Well, this is one of the typical types of arguments brought by science-denial cult members when they are confronted with scientific studies whose results they don’t like, like the Anderson et al. study referenced by Zeke. They simply declare the data and results invalid, and they attack the authors by accusing them of having presented false, “doctored” data. These are usually bogus accusations, rooted in the cognitive dissonance of the cult members.

    look for example at ClimateAudit under Briffa, Tingley and Huybers, Marcott or Gergis for the newest.

    Referencing McIntyre’s opinion blog ClimateAudit as a supposedly authoritative source for your claims is just ridiculous from my point of view. ClimateAudit isn’t a reliable scientific source at all. It simply isn’t a scientific source at all. It’s just an opinion blog on the Internet. Nothing more. The irony is that the ones making bold accusations in their opinion blogs against the authors of peer-reviewed published scientific studies don’t seem to think that the same scientific standards which are mandated for scientists when they publish their work should be adhered to by the accusers themselves. They think that making some assertions in an opinion blog is sufficient and that scientific standards don’t apply to them. They like to claim that the published science is all wrong or, worse, deliberately “doctored”, but they aren’t even on the same footing as the peer-reviewed science.

    • Jan P Perlwitz,

      It’s really amazing, I ask for a very simple thing:

      “raw data of proxies with annual resolution that would confirm the alleged evolution of temperatures in the twentieth century”

      Given the catastrophic warming underway, its effect must be at least measurable without any difficulty. But no, all I get is rhetoric.

      Climatology is typically a pseudo-science.

      It is time to abandon pal review and get back to science, observation and honest exchange of ideas.

      Anderson : someone would have to provide the paper so that I can talk about it.

      If you want to get out of rhetoric, you can still try to refute my criticism about Briffa et al. 2013. Good luck.

      • Climatology?
        Mock
        turtles
        all
        the
        way
        down.

      • If you don’t like my opinions, I have others.
        ====================

      • phi wrote:

        It’s really amazing, I ask for a very simple thing:

        “raw data of proxies with annual resolution that would confirm the alleged evolution of temperatures in the twentieth century”

        What exactly do you want from me? Are you offering me a job? Do you want me to do the scientific research for you for the claims you made? I could do that, but I’m not working for free. I would give you a special rate, let’s say $150 an hour. Or we could also agree on a project-based price for my labor.

        [Some science denier rhetoric]

        Anderson : someone would have to provide the paper so that I can talk about it.

        Ah, but you already had dismissed the study with the claim that the data in these studies were “doctored” (referencing some unspecified assertions supposedly made on some non-scientific opinion blog as if this would make your assertion true). Right on!

        The link to the pdf file has already been provided by someone further above. Here once more the reference: Anderson et al., GRL (2013), http://dx.doi.org/10.1029/2012GL054271

        The paper also contains supplementary information where the references to the 173 proxy data sets are listed, which were used for the study. But if you had done your research you already would have known all of this.

        If you want to get out of rhetoric, you can still try to refute my criticism about Briffa et al. 2013. Good luck.

        And what would be the incentive for me to bother with your opinion about Briffa et al., 2013? Your opinion is not scientifically relevant.

  264. That’s a lot of overwrought huffing and puffing, perlie. And you don’t know what irony is.

    Can you give us a coherent and honest response to the meat of phi’s comment:

    “You claim that the significant warming of the twentieth century is detected by means independent of the weather stations. It is your responsibility to prove it with something concrete and sound.

    Again, can someone provide raw data of proxies with annual resolution that would confirm the alleged evolution of temperatures in the twentieth century ?”

      • Steven Mosher,
        You seriously think that this kind of speculation can impress anyone?

      • Don Monfort

        That looks interesting, Steven. But I don’t trust them because they studiously ignored the BEST
        dataset, climate establishment buggers.

        You shouldn’t have helped out that blustering clown, perlie. I am sure that you don’t agree with his petulant diatribe against Steve Mc. I am guessing you are going to set the clown straight, any minute now.

      • Steven Mosher

        It’s better than speculation; it’s physics.

        Essentially you run a weather prediction model which is physics.
        Now today when you run a physics model to predict the temp
        6 hours out you use all the observations you have
        satellite, radiosondes, airplanes, ground stations, pressure, temperature,
        etc. And then in 6 hours you do data assimilation… think of updating a Kalman filter that is predicting a missile’s flight: that filter is built around a physics core that keeps the prediction within what is physically possible.

        But in this experiment they throw out all temperature data. They just use
        recorded pressure. Historical pressure. And then they run the physics
        model and say predicted temperature is X. 6 hours later they assimilate the observed pressure and run the next 6 hours.

        They do this from 1870 on.

        And guess what?

        yup.

        Now, before this experiment, who would think that you could use historical pressure to predict temperature and get the temperature to match observations?
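
        To make the update step concrete, here is a toy scalar sketch of that kind of assimilation cycle. It is not the reanalysis code; the 20th Century Reanalysis uses an ensemble filter over a full global model state, and the numbers below are invented.

```python
# A toy scalar Kalman update cycle, only to illustrate the assimilation idea
# sketched above. The forecast, its error variance, and the observations are
# invented; a real reanalysis updates an entire global model state.
x, P = 288.0, 4.0                      # forecast (K) and its error variance
R = 1.0                                # observation error variance
for obs in (287.2, 288.9, 288.4):      # hypothetical 6-hourly observations
    K = P / (P + R)                    # Kalman gain
    x = x + K * (obs - x)              # analysis: blend forecast with obs
    P = (1.0 - K) * P                  # analysis uncertainty shrinks
    # a real cycle would now integrate the physics model forward 6 hours,
    # which grows P again before the next observation is assimilated
    print(f"analysis {x:.2f} K, variance {P:.3f}")
```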

      • Steven Mosher

        Don,

        its a simple matter to compare.

        maybe even a post. but I wouldn’t do it here. waste of time.

        Zeke’s study of TOBS using CRN will be the test, for me, of whether
        skeptics have the chops to challenge their own beliefs.

      • Don Monfort

        It would be interesting to see a post on that, Steven. I am wondering about the details. What if we started today and forecast the temperature using the surface pressure? How close will we come to actual temperatures? Can we throw away the thermometers?

      • Donny meekly asks:


        What if we started today and forecast the temperature using the surface pressure

        Thermodynamics 1, Deniers 0

        Been there done that. The CSALT model uses atmospheric pressure to estimate natural variability in the global temperature signal.
        Now all we have to do is predict ENSO from a fundamental sloshing dynamics model:
        http://contextearth.com/2014/07/17/correlation-of-time-series/

        The Cause of the Pause is due to Thermodynamic Laws.

      • Having shown the warming signal with no thermometers, just barometers, it won’t be long before the skeptics start questioning the barometers too.

      • Steven Mosher,

        Frankly, climatologists are not able to agree on the evolution of temperature by simply reading thermometers graduated in °F or °C; how can you take seriously a reconstruction over 140 years read in mmHg on barometers?

        More seriously, there is no direct link between temperature and pressure, especially when the claim is to highlight a global change over 140 years (and not 6 hours).

        This is speculation, perhaps interesting speculation but speculation.

      • Steven Mosher

        no phi it is not speculation.
        it is a prediction.
        one that works.

        next

      • JimD wrote:

        Having shown the warming signal with no thermometers, just barometers, it won’t be long before the skeptics start questioning the barometers too.

        I am afraid I have to inform you that you are already too late with your prediction:
        http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-606811

      • @Steven Mosher 7/20 at 4:30 pm |
        Essentially you run a weather prediction model which is physics.

        Essentially you run a spatially and temporally undersampled weather prediction model, based upon an incomplete set of highly chaotic physical laws, over time periods too long to calibrate and test the predictability of the model.

        It is one thing to run a weather prediction model over a continent, test its predictability over the next one to seven days, and do this every day in parallel over 40 years. You have 14,000+ prediction-observation comparisons over those 40 years to learn something about your modeling efforts.

        In global climate models, some proponents say nothing less than 15 years will be definitive. Some say longer. The upshot is you get few prediction-observation comparisons to learn something about your modeling of a much more complex system.
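
        The back-of-the-envelope counts behind that contrast, using only the assumptions stated in the two paragraphs above:

```python
# Rough counts only, under the assumptions stated above: one verified weather
# forecast per day over 40 years, versus ~15-year windows for judging a
# climate projection. Nothing here is measured data.
weather_verifications = 40 * 365          # daily forecast-observation pairs
climate_windows = 40 // 15                # independent 15-year checks in 40 yr
print(weather_verifications, climate_windows)   # 14600 vs 2
```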

    • Don Monfort wrote:

      Can you give us a coherent and honest response to the meat of phi’s comment:

      “You claim that the significant warming of the twentieth century is detected by means independent of the weather stations. It is your responsibility to prove it with something concrete and sound.

      Two peer reviewed scientific studies where this was done have already been referenced in the same thread here above, one by Zeke Hausfather, the other one by me. I am not going to write the same over and over again, just because you play the ignorant one, monfie. You are just applying typical AGW-denier diversion tactics.

      • Jan P Perlwitz,

        And the raw data of proxies with annual resolution that would confirm the alleged evolution of temperatures in the twentieth century, where are they ?

        What a farce!

      • Don Monfort

        perlie, perlie

        I was merely pointing out that you did not respond to phi’s recent inquiry with any substance. You just launched into a diatribe against the auditor who catches a lot of your statistically challenged alleged climate scientists screwing up papers that somehow still manage to pass peer review. Watch out for Steve McC’s friend, Mosher. He’ll spank you for that impertinent BS.

      • phi wrote:

        And the raw data of proxies with annual resolution that would confirm the alleged evolution of temperatures in the twentieth century, where are they ?

        I’ll ask you again. What exactly is it that you want from me? I just gave you once more the reference to the Anderson et al. study, the one that had been provided here already days ago. If you claim the results of the study are all wrong and cannot be derived from the proxy data used in the study, then the burden to provide the evidence for your assertion is on you. Then you will have to publish your own study showing this.

        I have also already pointed you to a whole archive of proxy data, which you could use for your own research if you believe the proxy data don’t support global warming. I don’t have to disprove your assertions.

        However, if you want me to do the research for you, you will have to pay me. I am not doing it for free.

      • Don Monfort wrote:

        I was merely pointing out that you did not respond to phi’s recent inquiry with any substance.

        The inquiry would have to have substance first. Otherwise, there isn’t anything to which I can respond.

        You just launched into a diatribe against the auditor who catches a lot of your statistically challenged alleged climate scientists screwing up papers that somehow still manage to pass peer review.

        Assertions, assertions. It’s like someone referencing a UFO conspiracy website as an allegedly authoritative source where “The Truth” about abductions by aliens, covered up by the evil government, could be found. There is no substantial difference.

        AGW-science deniers and other (quasi-)religious cults have one thing very much in common. They reject almost all of the published science on the object of their beliefs, and they are convinced that “The Truth” about it can be found in some obscure sources instead.

        Watch out for Steve McC’s friend, Mosher. He’ll spank you for that impertinent BS.

        For what? For my statement that McIntyre’s opinion blog wasn’t a scientific source? Well, it just isn’t. Like any other opinion blog.

      • @Jan,
        “For what? For my statement that McIntyre’s opinion blog wasn’t a scientific source? Well, it just isn’t. Like any other opinion blog.”

        And how is what Steve does now any different from his publishing something at PLOS?

      • Steven Mosher

        here phi.

        jan has already explained

        http://climateconomysociety.blogspot.com/2013/09/an-independent-confirmation-of-global.html

        I don’t know why skeptics want to deny that today is warmer than the LIA

        1. You can use any random subset of a few hundred stations to show it
        2. You can select any subset of rural stations and show it.
        3. You can look at proxies and they show it.
        4. You can look at reanalysis and it shows it.

        Now smart skeptics might want to argue about a 1/10 here or there
        or maybe even 3/10ths..

        but none would argue that it hasn’t warmed since 1850.

        and here is the rub… even if it were off by 0.5C we would still have all the information we need to do something about CO2

        hell, if we had no data from the past we would have enough information to do something about CO2

      • @Steven Mosher 7/20 at 11:48 pm |
        I dont know why skeptics want to deny that today is warmer than the LIA

        TWISTED! A twisted strawman. A red herring fallacy of ignoratio elenchi.
        You cannot cite a skeptic who holds that view. It is a logical contradiction, for how can there be a Little Ice Age if it wasn’t cooler in the past?

        The skeptics objected to M. Mann and W. Connolley erasing the LIA.
        It is the warmists who tried to rewrite history to deny there was a MWP and LIA.

        Skeptics say that nature caused the LIA and nature caused at least most of the warming since the LIA.

      • Steven Mosher,

        Stephen Rasey replied on the essentials.

        But I thank you in advance if you could point me to raw data of proxies with annual resolution that would confirm the alleged evolution of temperatures in the twentieth century.

    • Don Monfort

      I was talking to Mosher, weebbee. You have zero credibility. Nobody cares about seasalt. Get back to your curve fitting.

      • Get a mirror DonKnee, All you guys are talking about is curve fitting. Curve fitting makes the world go round.

        Check out Nick Stokes 2D curve fits to the data. It’s a curve, but it’s in 2D. Who’d a thunk it?

  265. Brandon Shollenberger recently said this:

    “My very first post looked at the BEST data for an area to see how what it said compares to what BEST said for that area. The two clearly disagreed.”

    But then lookie here:
    https://imageshack.com/i/neielkg

    For St. Louis, the data for that area (GISS) agrees with what BEST interpolates for that area.
    Ain’t that sumptin?

    • Oh my god. WebHubTelescope is still comparing individual stations to areal trends to claim my comparing an areal trend to an areal trend is wrong. What’s next? Is he going to say:

      “Brandon’s comparison of trends for the southeast United States is obviously dishonest. This one temperature station in Montgomery, Alabama matches the BEST trend!”

      Actually, that wouldn’t even be as bad as what he just did. Upthread, WebHubTelescope compared a gridcell to a single temperature station, and that was stupid. Now, WebHubTelescope uses a quote of mine which references me examining the entire state of Illinois. How did he decide to rebut my examination of the entire state of Illinois? By looking at a temperature station in St. Louis.

      I look at the entire state of Illinois. WebHubTelescope acts like I’m an idiot based upon him looking at a single temperature station in the city of St. Louis.

      A single temperature station compared to an entire state that temperature station wasn’t even in.


      • Nick Stokes July 11, 2014 at 11:29 pm
        Brandon.
        I read both your posts, but I still can’t find it stated what area you are actually plotting. Which grid cell?

        Brandon Shollenberger July 12, 2014 at 12:14 am
        Nick Stokes, I didn’t post the exact grid cells because I instinctively shied away from saying where I live. Sorry about that. I should have made sure to specify which gridcell the data was from.

        Lest we forget who is trying to play who for a fool.

    • Don Monfort

      Brandon wasn’t talking about St. Louis. You know that but you persist in making a clown out of yourself. How much longer are you going to carry on with this silly BS, weebbee?

      • Brandon Shollenberger was talking about the area he lives in, which he has mentioned many times in this thread. He placed it rather precisely at 1 degree south of Springfield, Illinois, which makes it just east of St. Louis.

        He said:

        “(Incidentally, I don’t live in Springfield. I live one degree south of it. I just picked Springfield for the name in my text because it is the most known city within that particular area.)”

      • Don Monfort

        Brandon gave the exact grids that he compared. You know that. Everybody here can see that you are a disingenuous quack. Keep it up. It’s pathetic, but amusing.


      • Don Monfort | July 20, 2014 at 4:49 pm |

        Brandon gave the exact grids that he compared. You know that. Everybody here can see that you are a disingenuous quack. Keep it up. It’s pathetic, but amusing.

        Never learned how to read, eh Donny?

        It looks like Brandon is quickly losing support from his erstwhile allies, c.f. The B.B.
        They must be figuring out that he is going by bluster alone. You are his last BEST hope, Donny. Hope you can pull off a miracle.

      • Don Monfort

        I never said that Brandon’s claim on his comparison with BEST and GISS was significant or correct. I merely continue to point out that you are not telling the truth about what data Brandon used. The gridded data, you @$$#0!&.

        You know that you are not telling the truth. What does that make you, weebbeee?

      • BEST doesn’t use gridded data, DonKnee. They do the kriging based on the points as they lie. There is not always a need to grid the data before doing further analysis, and gridding can often lose precision. Think in terms of adaptive step integration and similar algorithms.

        In time-series land, doing an equipartition ala FFT can lose track of potentially significant findings, see http://contextearth.com/2014/07/17/correlation-of-time-series/

        Stop digging when you get a chance, the math is above you.

      • Don Monfort

        Brandon used results data that is reported in grids. BEST and GISS, same freaking squares. You have seen it. You know that you are not telling the truth. What does that make you webbeee? A persistent, blatant ____! You are not fooling anybody.

      • No, DonKnee. BEST has a web service where you can query a location, a city for example, and it will give you an interpolated set of data for that location. So for St. Louis, the interpolation gives results that are very close to the results of a long-running St. Louis station time-series reading. That sounds pretty reasonable to me and far from the “huge warming trend” that Brandon Shollenberger claims that BEST introduces to the area where he lives.

      • Don Monfort

        Brandon told you where to find the gridded data. That is the last reply you will ever get from me. You are no fun anymore. Too pathetic.

      • Donny takes his marbles and goes home because Brandon said this:


        Nick Stokes July 11, 2014 at 11:29 pm
        Brandon.
        I read both your posts, but I still can’t find it stated what area you are actually plotting. Which grid cell?

        Brandon Shollenberger July 12, 2014 at 12:14 am
        Nick Stokes, I didn’t post the exact grid cells because I instinctively shied away from saying where I live. Sorry about that. I should have made sure to specify which gridcell the data was from.

        Kind of hard to reproduce his results if he doesn’t specify the “exact grid cells”, eh?

        So I went with where he said he lived.

        “(Incidentally, I don’t live in Springfield. I live one degree south of it. I just picked Springfield for the name in my text because it is the most known city within that particular area.)”

    • Steven Mosher

      I hope brandon finds differences and tells hansen where he got it wrong

      • Apparently Steven Mosher thinks only GISS could be wrong. To him, it apparently doesn’t matter that BEST is an outlier, or that its results are practically indistinguishable for hundreds of miles in all directions.

        If you think the temperature record for all parts of Illinois should be virtually indistinguishable from the temperature record of all parts of Ohio or states even further away, then you should assume BEST is right and GISS is wrong. If you think there should be variation from area to area within a state, or even across nearby states, you should assume GISS is right and BEST is wrong. And since HadCRUT shows the same sorts of variation GISS shows, you should believe the same for it as you believe for GISS in regard to this issue.

        Basically, BEST has gotten very different results than the previous temperature constructions at scales other than global. Not only has it failed to explain this discrepancy, BEST has largely failed to acknowledge it. If you assume it is right, you have to assume the other ones were way off-base. And you have to assume that without any published explanation.

      • Brandon, all of the published global temps are worthless.

        Jan,
        And the reason they are all similar is that they all do basically the same wrong things.

      • Steven Mosher

        wrong.

      • Steven Mosher

        1. you are comparing the wrong things
        2. you need to keep up

        http://static.berkeleyearth.org/posters/agu-2013-poster-1.pdf

        3. You must start by comparing the methods using the same input data

        A) GISS combines and averages stations at a distance into one “reference station”.
        B) they use adjusted data
        C) they adjust it further
        D) they have boundary problems at gridlines which cause unphysical
        differences in trends.

        but knock yourself out. write up a paper. either Zeke or Rohde or I would probably be asked to review it.. otherwise #Si

      • Don Monfort

        Looks like a well thought out approach, Steven. And you added the caveat:

        “However, we have not yet been able to rule out the possibility that the approach may be over-smoothing and surpressing real trends when down weighting divergent trends during the krieging process. Additional work is needed to confirm whether the fields produced are overly smooth or not both by using tests with synthetic data and by comparing the results to the spatial characteristics of high-resolution GCM field.”

        I spotted two typos:

        boarders, under the first chart should be borders

        surpressing in the last paragraph should be suppressing

        For more learned analysis, you will have to wait for Brandon, or somebody else (not weebbee).

      • “tells hansen where he got it wrong”

        An appeal to Hansen.

        And I thought Mosher had hit bottom in 2010.

        Andrew

      • Bad Andrew is bad at reading between the lines. Massive fail.

      • Steven Mosher’s comments are, as usual, unhelpful. First, he makes a comment which just says:

        wrong.

        Calling that spam is being generous. His next comment is more substantial, but it is also completely wrong as it says:

        3. You must start by comparing the methods using the same input data

        Nobody who has any idea what they’re talking about would say this. Comparing methodologies by using them on the same data set is a useful way of testing them. It’s one of the clearest. It is not, however, the only way.

        If you want to know whether BEST or GISS smooths its data more, you do not need to use the same data set. If you want to see how much a methodology smooths its data, you can simply compare its input and output. If you want to compare how much two methodologies smooth their data, you can examine their individual inputs and outputs to see how much each one smooths its data, then compare your results.

        There are many other ways to compare methodologies without using the same data set. It’s been done all the time in the hockey stick debate. The temperature reconstructions in the hockey stick debate used different methodologies and different data sets. You never saw Steven Mosher complain when people compared the methodologies. That’s because he knows you don’t need to use the same data when comparing different methodologies.
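
        One crude way to make that input-versus-output comparison, sketched with placeholder numbers rather than real BEST or GISS trends:

```python
import numpy as np

# Crude gauge of how much an interpolation method smooths trends: compare
# the spread of the trends going in with the spread coming out. The arrays
# below are placeholders, not real BEST or GISS values.
station_trends = np.array([0.04, -0.01, 0.10, 0.02, 0.06])   # C/decade, input
gridded_trends = np.array([0.05, 0.04, 0.06, 0.05, 0.05])    # C/decade, output

print("input spread :", station_trends.std())
print("output spread:", gridded_trends.std())
# a much smaller output spread is consistent with heavy spatial smoothing,
# though by itself it does not prove over-smoothing
```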

        Also of note, Mosher says:

        but knock yourself out. write up a paper. either Zeke or Rohde or I would probably be asked to review it.. otherwise #Si

        Note, he doesn’t say do the analysis. He doesn’t say do the work. He doesn’t say write up the results. What he says is write a paper and publish it. That is, if you have anything to say, say it in a peer-reviewed journal. And remember, if you do try to say it in a peer-reviewed journal, his group will get to review it before you can possibly get it published.

        That’s apparently how BEST handles disagreement. You have concerns about their results? They don’t care. I’ve brought up concerns every time BEST representatives have discussed their results in a public forum. Not once have I received a single useful answer. Instead, I’ve been insulted, berated, misrepresented and ignored. When I e-mailed BEST representatives, as they instructed me to do, I received no response.

        The problem? I haven’t published my concerns in a peer-reviewed paper, a paper BEST would get to review before it was published. #ClimateScience

      • By the way, it’s funny Steven Mosher says I should publish my results in a peer-reviewed paper when BEST repeatedly promotes results it hasn’t published in any journal. Apparently he feels I have to resort to peer-reviewed publications to criticize results which haven’t been published in peer-reviewed publications.

        Apparently the idea of open discussion is anathema to BEST. If you want to raise any concerns, you shouldn’t talk about them. You should pay ~$1500 to get them published in some random journal nobody has ever heard of. I mean, that’s what BEST did, and apparently, that makes their claims more meaningful.

      • Don Monfort

        Uh, oh! Brandon has turned up the pay-for-play journal-of-last-resort card. Will Steven fold his hand?

        Whatever happened to the journal Geowhatever and Geowhatever?

      • Brandon Shollenberger wrote

        “By the way, it’s funny Steven Mosher says I should publish my results in a peer-reviewed paper … “

        Yes. It is funny and I enjoy it immensely.

        “Apparently the idea of open discussion is anathema to BEST. If you want to raise any concerns, you shouldn’t talk about them.”

        That’s a crock. Steven and Zeke have had the patience of Job (usually) working through comments here and at BB…time and time again.

      • They must have missed your questions and your emails, Brandon. Write them up and submit with $1500 to the journal Geowhatever and Geowhatever and when your paper gets reviewed by Mosher, Rohde and Zeke, they might be more informative if they bother to write up your rejection. Although, you might have to try another journal. According to their website, their most recent issue was in December, last year. Of course, for 1500 bucks they will very likely crank up the presses again.

      • Steven Mosher

        mw

        Thanks, but Brandon still doesn’t get it.

        help him decipher #Si

      • Steven Mosher

        brandon.

        You have the code.
        Knock yourself out

        Take that code

        Load the GISS data

        run it.

        write a paper.

        submit to PLOS, that’s a good open journal.

        do something.

        but I get paid to answer questions emailed to me from real users.
        guys in grad school, air resources, forestry, industry, epidemiology.

        I’m going back to work.

      • mwgrant, the fact BEST members respond to easy criticisms does nothing to undermine my point. That’s like saying Skeptical Science actually addresses skeptical arguments. Heck, Steven Mosher’s behavior is practically indistinguishable from theirs.

        If you want to dispute what I said, I have a simple challenge. Find a single example where Steven Mosher, or any other BEST representative, has addressed any of my substantial concerns. One example. That’s all it’d take.

        I’m willing to bet you can’t find one. I, on the other hand, can easily find half a dozen or more cases where BEST representatives have failed to address my concerns – and that’s only counting cases where one responded to me.

      • I think mr. mw will count the recent Mosher replies.

      • Steven Mosher:

        but I get paid to answer questions emailed to me from real users.
        guys in grad school, air resources, forestry, industry, epidemiology.

        It’s nice to know what you get paid to do. Apparently you only waste everyone else’s time giving unhelpful, and often stupid, answers then refusing to address their responses as a hobby.

        Tell you what, when you pay me, I’ll publish a paper showing what BEST does wrong. Until then, enjoy just being Michael Mann, Gavin Schmidt or whatever other member of the Real Climate team you choose to be.

      • Unlike Mosher I do not have patience for your never-ending shtick. Nor do I have time for petty chickensh*t word games. Like Mosher and others I am under no obligation to run your perpetual gauntlets. Now go off and act wounded. I am confident the CE and BB archives support my comment. Can you sense where I am coming from?

      • Monfort,

        You just stir the pot and that is useless.

      • Stirring the pot can be very useful, mw. I helped win the Cold War for all you freedom loving people by stirring pots. Along with the liberal application of firepower.

        I would be interested in seeing Steven give Brandon a straight answer. Brandon’s got some personal issues he is working on, but he is knowledgeable, honest and he raises some interesting questions. I have found that badgering Steven often gets him to open up. That dismissive cryptic crap of his is counterproductive.

        I like Steven. He is one of my blog heroes. I have learned a lot from him and would like to learn more. But I fear for his honesty, since he got mixed up with that BEST crowd.

        Are we straight now, mw?

      • You just stir the pot and it is useless. Now we are straight.

      • I have caused you some angst, little dude. That is useful. Now show me some more.

      • Don Monfort

        Second read. It’s been a long day.

        Yeah, I agree. We’re square on all points.

      • Don Monfort

        You are a gentleman and a scholar, mw. It takes all kinds.

      • mwgrant, you’re welcome to storm off with pointed remarks and bolded phrases. It won’t accomplish anything though. If you make a bold claim but are completely unwilling to discuss that claim, nobody will have reason to listen to you.

        Quite frankly, I don’t get your behavior in the slightest. I’ve constantly tried to engage in simple and open discussions. You’ve constantly diverted things to discussions of personalities, refused to address simple points and just walked away.

        So no, I can’t sense where you’re coming from. You may not like me or my behavior, but I can’t imagine how you would possibly justify your own.

      • Brandon S got it wrong when he said that BEST shows a “huge warming trend”. He started with zero credibility and has maintained that level.

      • Steven Mosher

        “If you want to dispute what I said, I have a simple challenge. Find a single example where Steven Mosher, or any other BEST representative, has addressed any of my substantial concerns. One example. That’s all it’d take.”

        1 you have never had a substantial concern
        2 those concerns you had (which others had as well) were addressed

        In fact I gave you an acknowledgment on our web page.
        Doofus.

        http://berkeleyearth.org/acknowledgements

      • Steven Mosher

        Brandon

        “Tell you what, when you pay me, I’ll publish a paper showing what BEST does wrong. Until then, enjoy just being Michael Mann, Gavin Schmidt or whatever other member of the Real Climate team you choose to be.”

        Agreed.

        get the paper published and I’ll pay you.

        1. you won’t write one.
        2. If you did, it would be rejected, because your concerns are
        A) invalid
        B) inconsequential

        get started

        you won’t.

      • Steven Mosher, did you even read what you said you are agreeing to? I said “when you pay me, I’ll publish a paper.” That clearly states the payment would come first. You can’t agree to that while saying, “get the paper published and I’ll pay you.”

        1. you won’t write one.

        If someone paid me to do critical analysis on BEST, I certainly would write a paper. I wouldn’t take a job if I wasn’t willing to do the work.

        2. If you did, it would be rejected, because your concerns are
        A) invalid
        B) inconsequential

        Your opinion of my concerns means little as I’m willing to bet you couldn’t even accurately describe what they are. And I mean that literally. I would bet on your inability to describe my views as you’ve repeatedly demonstrated it. As for your remark:

        1 you have never had a substantial concern
        2 those concerns you had (which others had as well) were addressed

        In fact I gave you an acknowledgment on our web page.
        Doofus.

        It’s easy to make comments like this if you never do anything to support them. Why don’t you tell people what my concerns were and what you gave me that acknowledgement for? Then people could actually check to see which of us is correct.

        In other words, don’t just make a claim. Back it up.

      • Steven Mosher

        And just to be clear let me detail the important claims we tried to address in the paper.

        1. Skeptics claimed that there was a station selection bias. This is
        known as the great thermometer drop-out; for reference see
        D’Aleo and Watts (SPPI).
        2. Skeptics claimed that the warming was a result of adjustments
        3. Skeptics claimed that the methods of CRU and GISS created the warming.

        Our methods and results papers aimed at addressing these concerns:

        1. We showed that by using all the stations the answer didn’t change:
        the warming in GISS and CRU is not the result of the thermometer
        drop-out. Zeke and Stokes and I showed a similar result at WUWT,
        but having it in the published literature is important.
        2. We approached the adjustment question by utilizing a “hands free”
        approach. Skeptics claimed that NOAA had somehow rigged the adjustment code. We showed that a purely data-driven approach gave the same global results. Answer: the warming is not the result of
        some nefarious actions at NOAA.
        3. As opposed to CRU and GISS we employed a different method, kriging, one used every day by geostatisticians who create temperature fields. The result: the global warming is not the result of methods.

        Lastly, you and others simply don’t understand what the climate field represents. Whether you are talking about CRU or GISS or BEST, the grids created represent a PREDICTION of what the temperature was.
        The correct measure to look at in comparing the methods is the error of prediction. Finding a place where method A predicts X, while method B predicts Y, tells you nothing. In other words, a smoother field may or may not have lower errors of prediction. The test is not smoothness, the test is error of prediction. To test the error of prediction you simply have to look at out-of-sample testing.
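
        For what it is worth, a minimal sketch of an out-of-sample test of prediction error, assuming synthetic station data and plain inverse-distance weighting as a stand-in for the actual kriging (illustrative only, not the BEST code):

          import numpy as np

          rng = np.random.default_rng(42)

          # Synthetic "stations": random locations plus a smooth field and noise.
          n = 200
          lon, lat = rng.uniform(0, 50, n), rng.uniform(0, 50, n)
          temp = 10.0 + 0.10 * lat + 0.05 * lon + rng.normal(0, 0.5, n)

          def idw_predict(lon0, lat0, lon, lat, temp, power=2):
              # Inverse-distance-weighted prediction at (lon0, lat0) from neighbours;
              # a stand-in for whatever interpolation method is being evaluated.
              d = np.hypot(lon - lon0, lat - lat0)
              w = 1.0 / np.maximum(d, 1e-6) ** power
              return np.sum(w * temp) / np.sum(w)

          # Leave-one-out cross-validation: predict each station from all the others.
          errors = []
          for i in range(n):
              keep = np.arange(n) != i
              pred = idw_predict(lon[i], lat[i], lon[keep], lat[keep], temp[keep])
              errors.append(pred - temp[i])

          print("out-of-sample RMSE:", round(float(np.sqrt(np.mean(np.square(errors)))), 3))

        Whichever method yields the smaller held-out error is the better predictor of the past, regardless of how smooth or rough its field looks.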

        The Station Quality paper and the UHI paper each address two other related skeptical concerns. That the warming was due to micro site bias or UHI

        To recap.

        The methods paper and the results paper address the conspiratorial rants of you folks. That warming was the result of

        1. station drop-out
        2. adjustments
        3. bad methods

        Nothing you or anyone else has written (let’s see: seasonality concerns,
        low-frequency concerns, smoothing concerns) comes close to addressing the core issue. Further, all of these concerns presuppose a “ground truth” about the past that doesn’t exist.

        All the methods aim at predicting or estimating the past.
        The principle measure is how well does the method predict held out data?
        period.

        The UHI and micro site paper address the other core skeptical concerns

        1. micro site
        2. UHI.

        I am far less certain of our results in these areas. So, in my mind the action is in those areas. The action, the interesting bits, are not in the areas that you and others find interesting.

      • Steven Mosher sure is getting awfully prickly about what is really fairly reasonable criticism of his group’s work.

        What’s not to like? It’s buried in a 1700+comment thread, so few people are even watching, and if there is substance to the criticism, it will lead to a better product.

        Regarding this comment:

        2. If you did, it would be rejected, because your concerns are
        A) invalid
        B) inconsequential

        Talk about mind-reading feats! Brandon hasn’t even written anything yet, and Mosher already “knows this”.

        I think my stock comment “you are not nearly smart enough to know what I was going to say without letting me say it first” applies here.

        I also noticed Steven is using the same wrong arguments about needing the same data sets. On a different thread, Steven would explain why gridding the data before comparing results ameliorates most issues associated with the data sets being different.
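
        A toy sketch of that gridding step, assuming synthetic station anomalies: average stations into 5-degree boxes and take a cos-latitude-weighted mean, so the comparison depends less on which stations each data set happens to contain.

          import numpy as np

          def gridded_mean(lat, lon, anom, box=5.0):
              # Average station anomalies into lat/lon boxes, then take a
              # cos(latitude)-weighted mean of the occupied boxes.
              boxes = {}
              for la, lo, a in zip(lat, lon, anom):
                  key = (int((la + 90) // box), int((lo + 180) // box))
                  boxes.setdefault(key, []).append(a)
              centers = np.array([(k[0] + 0.5) * box - 90.0 for k in boxes])
              means = np.array([np.mean(v) for v in boxes.values()])
              w = np.cos(np.deg2rad(centers))
              return np.sum(w * means) / np.sum(w)

          rng = np.random.default_rng(3)
          lat, lon = rng.uniform(25, 50, 300), rng.uniform(-125, -65, 300)  # CONUS-like spread
          anom = rng.normal(0.5, 1.0, 300)                                  # made-up anomalies
          print(round(gridded_mean(lat, lon, anom), 3))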

      • Steven Mosher

        Carrick mistakes reasonable for relevant.
        And he still doesn’t understand why you can’t do the comparisons he does when the underlying data is different. Well, you can do it, but you haven’t demonstrated anything.

        he too is invited to write a paper.

        he can team up with Brandon.

      • Don Monfort

        In my ever so humble opinion, it would be interesting if Carrick and Brandon wrote a guest post on their issues with BEST, GISS, etc. I would hope that a certain BEST team scientist would take the time to set them straight on their errors. According to that certain scientist, it wouldn’t take too much of his time. Just joshing you, Steven. It really would be interesting. And it might stave off another goofy guest post from Tom McClellan.

      • Don Monfort, apparently Steven Mosher would say it doesn’t count if we did. After all, a blog post is not a peer-reviewed paper. I’m not sure how he’d square that notion with the fact BEST has announced results (and changes to methodology) in blog posts, but I bet he’d find a way.

        It would be interesting though.

      • Steven Mosher, people make criticisms and don’t substantiate them and you get pissy.

        People make claims and substantiate them, you still get pissy, but suddenly we have to write a paper?

        These are simple claims, easy to test. They don’t need peer review to judge them.

        The issue is pretty simple: What is the effective spatial resolution of BEST to long term trends compared to other series?

        The answer seems to be “substantially worse than its competitors”.

        You don’t need me to hold your hand to compute this. You should be able to arrive at an answer yourself.

        As to me writing papers, good advice, but thank you, yes I do.

      • Ah, C, a pearl beyond the understanding of this member of my tribe. What is the greater meaning of this special ‘effective spatial resolution’?
        =================

      • Kim, there are two terms that get used.

        One is “spatial sampling frequency”. Basically it’s one over the average distance between spatial measurements.

        The other is “spatial resolution” which has to do with how close two features can be to each other and still resolve them as separate features.
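
        A small sketch of the first quantity, computed as one over the mean nearest-neighbour distance of a hypothetical station network (in degrees here, purely for illustration; real work would use great-circle distances in km):

          import numpy as np

          def spatial_sampling_frequency(lon, lat):
              # 1 / mean nearest-neighbour distance between stations.
              pts = np.column_stack([lon, lat])
              d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
              np.fill_diagonal(d, np.inf)          # ignore self-distances
              return 1.0 / d.min(axis=1).mean()

          rng = np.random.default_rng(1)
          lon, lat = rng.uniform(-70, -35, 150), rng.uniform(-55, 10, 150)  # South-America-like box
          print(round(spatial_sampling_frequency(lon, lat), 3))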

        You can see the issue clearly with this figure.

        BEST has blurred together all of the features seen by GISTEMP (1200km smoothing radius) in South America into “one big puddle of goo”.

        On another blog, Steven Mosher was trying to claim that BEST is correct in having lower resolution. Possibly he’s right, but it doesn’t seem plausible.

        My experience with natural variability is, when you look at a smaller scale, you get a repetition (to an extent) of what you saw on a bigger scale. This works down to the viscous limit, which is 1mm x 1mm x 1mm.

        Put another way, I have no plausible explanation for why there would be a “cut-off spatial frequency” below which climate becomes ultra smooth.

        It would be cool in a way if this were really correct, because it would simplify the work of climate modelers.

        I really don’t think that is correct: I think Hansen got it right here, and Muller got it wrong.

      • Much gracious, that’s helpful.
        ============

      • By the way that figure was generated using a tool by Nick Stokes:

        http://moyhu.blogspot.com/2014/07/trends-of-gridded-best-and-giss-shown.html

        I don’t mean to use this thread to pick on Steven Mosher, but gosh man… when people find something that might be wrong with your work, you should feel free to get your hackles up. But you do need to take seriously the possibility you might have gotten something wrong.

        After all of the times you’ve participated in climate “circle the wagon and defend against all incomers” discussions, we now find you inside the wagon circle firing at anything that moves.

        Less lecturing please, more insight.

  266. Not too far from the topic here, a new paper on the comparison between CMIP5 climate model simulations and observations has just been published:

    Risbey et al., “Well-estimated global surface warming in climate projections selected for ENSO phase”, Nature Climate Change (2014), http://dx.doi.org/10.1038/nclimate2310

    Abstract:
    “The question of how climate model projections have tracked the actual evolution of global mean surface air temperature is important in establishing the credibility of their projections. Some studies and the IPCC Fifth Assessment Report suggest that the recent 15-year period (1998–2012) provides evidence that models are overestimating current temperature evolution. Such comparisons are not evidence against model trends because they represent only one realization where the decadal natural variability component of the model climate is generally not in phase with observations. We present a more appropriate test of models where only those models with natural variability (represented by El Niño/Southern Oscillation) largely in phase with observations are selected from multi-model ensembles for comparison with observations. These tests show that climate models have provided good estimates of 15-year trends, including for recent periods and for Pacific spatial trend patterns.”

    • Don Monfort

      Yes, they are calling it the Risible Risbey et al. paper. Let’s pretend that our post hoc data snooping analysis shows that 4 of the multitude of models are good, therefore they are all good. Brilliant use of extrapolation. Whatever it takes to keep hope alive and the grants rolling in.

      • Utter rubbish. Nowhere in the paper is this the presented reasoning, and that is not what the paper is about at all. Where did you get this nonsense? Or are you just making things up yourself? It is obvious that you haven’t read the paper and that you don’t know what you are talking about.

      • I don’t know if this paper is retard or petard but it sure blows a big hole in the wall around the models. How can the modelers be happy with this beast loose in Nature?
        ====================

      • @Jan P Perlwitz at 2:24 am |

        .. RE: Don Monfort 7/21 2:11 am:
        our post hoc data snooping analysis shows that 4 of the multitude of models are good, therefore they are all good. Brilliant use of extrapolation.

        Utter rubbish. Nowhere in the paper is this the presented reasoning, and that is not what the paper is about at all. Where did you get this nonsense?

        Maybe it wasn’t in the paper word for word. But it was the purpose of the paper and its central message. A message that got through loud and clear to the popular media.
        A common refrain by climate sceptics that surface temperatures have not warmed over the past 17 years, implying climate models predicting otherwise are unreliable, has been refuted by new research led by James Risbey, a senior CSIRO researcher.
        From the Sydney Morning Herald. http://www.smh.com.au/environment/climate-change/climate-models-on-the-mark-australianled-research-finds-20140720-zuuoe.html#ixzz388SxurAT

        But let’s look not at press reports, not at press releases; let’s look at the abstract itself:

        These tests show that climate models have provided good estimates of 15-year trends, including for recent periods and for Pacific spatial trend patterns.

        Not “some climate models.”
        Not “climate models selected for ENSO phase.”
        “climate models” unqualified, as in ALL climate models.

        Peer Pal Review at its best.

    • heheh Jan!

      Which 4 are good?

      • The paper doesn’t make any such statement that 4 models were “good” (and the others weren’t.)

        Who told you such a thing was stated in the paper?

      • Don Monfort

        You are overwrought again, perlie. Are you sure that you have read the right Risible Risbey et al. silly paper?

      • You missed a big opportunity, Jan, not getting your name on that paper with Oreskes and Lewandowsky.
        =========================

      • OK, Jan. heheheheeeee

        “Best”. Which are the best not worst?

      • The paper also doesn’t make any statement that 4 models were the “best” ones (or others the “worst” ones). There is no such classification of the models in the paper.

        I ask you again. Who told you that there was?

        Why don’t you answer my question where you got that from?

      • That the paper does not tell which few of the 18 models form the best and which few the worst is an obvious fault of the paper. That even the counts are not given is even worse. The claim presented in the blogosphere that both selections contain 4 models is plausible. Whether the number is exact changes little.

        What the paper actually shows is that the models have the same correlation between ENSO and global average temperatures as the observations do. Nothing less and nothing more, as far as I can see.

      • Pekka Pirilä wrote:

        That the paper does not tell which few of the 18 models form the best and which few the worst is an obvious fault of the paper.

        How can this be an “obvious fault” of the paper, even though the paper doesn’t make such a classification of the models, which were the “best” ones and which were the “worst” ones?

        Now, I’m asking you the same I asked ClimateGuy. Where did you get from that the paper did such a thing?

      • It’s an obvious fault, when the number of models is so low. Listing the models is obviously useful information and leaving obviously useful information out is an obvious fault.

      • Why doth youn Riseby smirk, and crackle hyenacally penultimately?
        ==================

      • Pekka Pirilä wrote:

        It’s an obvious fault, when the number of models is so low. Listing the models is obviously useful information and leaving obviously useful information out is an obvious fault.

        Before declaring an “obvious fault” regarding this matter, you would have to show first that your assertion is correct that the paper classifies models according to which some model were “best” and others were “worst”. Because that is the presumption for your statement. “Best” and “worst” with respect to what?

        If your presumptions for the alleged “obvious fault” are already wrong, then your claim is obviously bogus.

      • Jan P Perlwitz commented

        Before declaring an “obvious fault” regarding this matter, you would have to show first that your assertion is correct that the paper classifies models according to which some model were “best” and others were “worst”. Because that is the presumption for your statement. “Best” and “worst” with respect to what?

        Did the paper not highlight that 4 model runs had a “better” match to reality? Then isn’t it obvious that some of the other model runs were the most wrong?

      • Seeing that you’re late to the party, we’ll give you a break, Jan. But given the opportunity, would you put your name on this paper? Would you pass it in review? Did you?
        ================

      • Again, with all caps “The grey dots show the average 15-year trends for only the models with THE WORST correspondence to the observed Niño3.4 trend. “

      • Mi Cro asked:

        Did the paper not highlight that 4 model runs had a “better” match to reality? Then isn’t it obvious that some of the other model runs were the most wrong?

        No, the paper did not, and no, it isn’t.

      • Jan P Perlwitz commented

        No, the paper did not, and no, it isn’t

        ABSTRACT

        The Risbey et al. (2014) :

        The question of how climate model projections have tracked the actual evolution of global mean surface air temperature is important in establishing the credibility of their projections. Some studies and the IPCC Fifth Assessment Report suggest that the recent 15-year period (1998–2012) provides evidence that models are overestimating current temperature evolution. Such comparisons are not evidence against model trends because they represent only one realization where the decadal natural variability component of the model climate is generally not in phase with observations. We present a more appropriate test of models where only those models with natural variability (represented by El Niño/Southern Oscillation) largely in phase with observations are selected from multi-model ensembles for comparison with observations. These tests show that climate models have provided good estimates of 15-year trends, including for recent periods and for Pacific spatial trend patterns.

      • So, Jan Perlwitz,

        Do you still deny that they are supposedly finding best and worst?

        Thanks

      • ClimateGuy wrote:

        Again, with all caps “The grey dots show the average 15-year trends for only the models with THE WORST correspondence to the observed Niño3.4 trend. “

        You obviously don’t get that for each of the sliding 15-year periods from 1950 onward the subset of models, which is or is not in phase with the observed Nino3.4 trend, is newly determined every time. It’s not always the same models. Which models make it in the “best” category and which ones in the “worst” category varies with the sliding 15-year period. At least 2 models are needed to make a dot in the graphic, but it also can be more than 4 models. Thus, there aren’t 4 specific models that were “best” or “worst”. What models make it in the “best” or in the “worst” category, the agreement or non-agreement of the modeled trends with the observed trend in the El Nino3.4 region for any specific 15-year period is just by chance. There is no qualitative classification about the ability of some models to match reality better than others in the paper.

        Thus, requesting to name the 4 “best” or 4 “worst” models is a meaningless request. It’s not a request that follows from the analysis the authors have done.
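
        A hedged sketch of the selection logic as described above, not the authors’ actual code: for each sliding 15-year window, rank runs by how close their Nino3.4 trend is to the observed trend, then composite the global trends of the closest and furthest few. All series below are synthetic placeholders.

          import numpy as np

          def window_trend(series, start, length=15):
              # Least-squares trend over one window of annual values.
              return np.polyfit(np.arange(length), series[start:start + length], 1)[0]

          def composites(model_nino34, model_gmst, obs_nino34, start, k=4, length=15):
              # Rank runs by distance between their Nino3.4 trend and the observed one,
              # then return the mean GMST trend of the k closest and k furthest runs.
              obs_t = window_trend(obs_nino34, start, length)
              dist = {run: abs(window_trend(s, start, length) - obs_t)
                      for run, s in model_nino34.items()}
              ranked = sorted(dist, key=dist.get)
              mean_trend = lambda runs: np.mean([window_trend(model_gmst[r], start, length)
                                                 for r in runs])
              return mean_trend(ranked[:k]), mean_trend(ranked[-k:])

          # Tiny synthetic illustration (not CMIP5 data): 63 "years", 12 "runs".
          rng = np.random.default_rng(0)
          obs = np.cumsum(rng.normal(0, 0.3, 63))
          n34 = {f"run{i}": np.cumsum(rng.normal(0, 0.3, 63)) for i in range(12)}
          gmst = {r: 0.015 * np.arange(63) + 0.2 * s for r, s in n34.items()}
          print(composites(n34, gmst, obs, start=48))   # last 15-year window

        On this reading, which runs land in the “best” subset changes from window to window, which is why the paper’s selection is not a ranking of model quality.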

      • Figure 5., perlie. Read the freaking paper.

      • ClimateGuy wrote:

        Do you still deny that they are supposedly finding best and worst?

        Yes, I still deny that they did what you claimed: Finding 4 specific models that supposedly were “best”. And the abstract of the paper doesn’t support your claim.

      • I particularly like ‘What models make it in the “best” or in the “worst” category, the agreement or non-agreement of the modeled trends with the observed trend in the El Nino 3.4 region for any specific 15-year period is just by chance.’

        ‘just by chance’. Words to sup long-spoonedly with. Keep your fork, there’s pie.
        ================

      • Perlie has only read the abstract. We all know that.

      • Don Monfort:

        Figure 5., perlie. Read the freaking paper.

        Figure 5 shows the composites of the four models, which are, purely by chance for the period 1998-2012, best in phase with the observed El Nino3.4 trend and of the four models which are, again purely by chance, most out of phase with the El Nino3.4 trend for this time period. This doesn’t say anything about the quality of the models that make the composites, whether those models were “good” or “bad” regarding their ability to match reality. The authors could also have shown the composites for some other arbitrary 15-year period. Then, the composites would have been made, again purely by chance, by simulated data from some other subsets of all the models.

      • So basically, folks (if what Jan said is correct, though I’m inclined to believe it), none of the models were able to model reality. What they did was take bits of some number of model runs, and then assemble them into a scenario that did match reality.

        Note, no single model, or any average of 4 models, matched; it required a “Frankenstein” of stitched-together segments.
        What I don’t get is how Jan thinks this in some way confirms that the models are valid. Imagine you took and created billions of random audio signals, chopped them into little bits, reassembled them into Stairway to Heaven, and then claimed you had a program that could imitate Led Zeppelin.

      • Take me back to Tulsa, she’s too young to bake a pie.
        ===============

      • Jan Perlwitz wrote

        “What models make it in the “best” or in the “worst” category, the agreement or non-agreement of the modeled trends with the observed trend in the El Nino3.4 region for any specific 15-year period is just by chance”

        Then please answer.

        Which were the best and worst as per the categorization you define above?

      • According to our little friend perlie, the authors labeling 4 models “best” and 4 models “worst” doesn’t mean anything regarding the quality of the models. This is after perlie vehemently proclaimed:

        “The paper also doesn’t make any statement that 4 models were the “best” ones (or others the “worst” ones). There is no such classification of the models in the paper.”

        Your credibility is in tatters, perlie. You should be quiet now.

      • Jan,
        Give us even one stated resulting “best”, to show that the authors have been transparent and are not hiding information.

      • Kim wrote

        “‘just by chance’ Words to sup long-spoonedly with. Keep your fork, there’s pie.”

        Just by chance. Remarkably good models: just by chance, a few get it right SOMETIMES.
        Everything’s been corroborated. CAGW is on. Deniers deserve a bashing.

      • Don Monfort wrote:

        According to our little friend perlie, the authors labeling 4 models “best” and 4 models “worst” doesn’t mean anything regarding the quality of the models.

        This is not what I said. I said that the claim is false according to which the authors labeled 4 models “best” and 4 models “worst” in their study. There are no 4 “best” and 4 “worst” models in the study, and none of the results imply that there were.

      • Jan Perlwitz wrote

        ” There are no 4 “best” and 4 “worst” models in the study, and none of the results imply that there were”

        Yes, there were, Jan. They were labeled as such but not identified.
        Only which ones were best/worst, due to luck, MAY have changed under differing conditions.

      • @ClimateGuy
        Jan said this up above:

        I don’t know which specific subset of four models made it into the composites for the 1998-2012 period. And it isn’t scientifically relevant for the study. For some other 15-year period, e.g., the one 1951-1965

        To which I replied:

        what they did was take bits of some number of model runs, and then assemble them into a scenario that did match reality.

        So there isn’t a set of 4 best runs; for each period they picked the 4 best models and assembled those just for that period, then for the next period they picked 4 different runs to assemble the next segment, and so on.

      • The basic fault is that the paper hides relevant information. Telling more would help essentially in judging what their analysis tells and what it does not. Based on the information included in the paper it’s difficult to conclude much more than that cherry picking results in what cherry picking typically results in.

        I was somewhat inaccurate in my above comments. What’s done in the analysis behind the paper does not involve choosing best and worst models but best and worst model runs, when the criterion is agreement of the ENSO index with the data.

        It would be interesting and very relevant to know, whether the selected model runs come from many of the models or just a few. It’s also essential to know, whether the models that have produced those model runs deviate systematically in some way from the rest of the models.

        As the paper is written, the only natural reaction is to assume that the analysis done tells essentially nothing new. There’s absolutely nothing new in observing that cherry picking works. It’s well known that the correlation between ENSO and GMST is strong. Therefore selecting on ENSO can be expected to be almost the same as selecting on GMST. (Of course this is an expectation that should be verified, and their paper does something like that.)

        If there’s something more in the paper, it’s not easy to observe.

    • You’re a bit late to the party Jan. And it pretty much says it in the title.

      ‘Well-estimated global surface warming in climate projections selected for ENSO phase.’

      Pick the four that happen to have some variability in the right direction at the right time. The trick of course is to do that beforehand and not do post hoc rationalization of a method that is inherently – but only nominally – capable merely of the discrimination between probabilities.

      Now if they had said beforehand that there was a high probability of no warming – and this seems more likely than not to persist for decades more – we might be impressed. As it is we are amused.

      I haven’t read it and don’t intend to. Life and learning is far too short. But what the hell do they expect Oreskes and Lewandowsky to contribute to any credible research, let alone to technical aspects of climate science? The maniacs seem to be in charge of the asylum at Nature. Where the hell is their common sense, let alone a healthy skepticism?

      • You know, Rob, I about half wonder if Oreskes and Lewandowsky don’t have some sick(healthy) subconscious need to blow up their own misbegotten cause.

        Jan, this paper hardly inspires confidence in the GCMs. There are even some politicians who can figure that out. The question is why didn’t you figure that out? Why didn’t Nature and the reviewers figure it out? God only knows why Lewandowsky and Oreskes didn’t figure it out.
        ================

      • It is obvious that the ENSO dynamics are difficult to characterize. It is also obvious that the ENSO dynamics contribute a significant fraction to the year-to-year average global temperature variability. Therefore it is deductively obvious that the natural variability will be difficult to predict in years hence. So that any simulations that happen to characterize the behavior of the ENSO over a span of years are useful for understanding the trends in temperature. Kosaka and Xie recently made a similar observation:
        [1] Y. Kosaka and S.-P. Xie, “Recent global-warming hiatus tied to equatorial Pacific surface cooling,” Nature, vol. 501, no. 7467, pp. 403–407, 2013.

        And just in case any of you want to help investigate ENSO dynamics, feel free to visit and contribute to the Azimuth Project — http://azimuth.mathforge.org/

      • Rob Ellison wrote:

        You’re a bit late to the party Jan. And it pretty much says it in the title.

        How could I be late to the party? The paper was published yesterday.

        Pick the four that happen to have some variability in the right direction at the right time. The trick of course is to do that beforehand…

        Is this what you expect climate models should be able to do so that you wouldn’t say they were severely flawed? That they should be able to make an accurate weather forecast decades or centuries ahead? Now that really would be some neat trick. But such a thing is objectively impossible. Have you ever heard of the Lorenz attractor? Even a perfect model, i.e., a model that was flawless by definition, could not achieve that.

        It looks to me that you don’t understand the difference between weather forecast and climate simulations/projections.

        Now if they had said beforehand that there was a high probability of no warming – and this seems more likely than not to persist for decades more – we might be impressed. As it is we are amused.

        How would you know today that such a statement about a high probability of “no warming” would have been right, if it had been made beforehand?

        I haven’t read it and don’t intend to…

        So you don’t even bother to read the paper and try to understand the methodology and the arguments, but you already “know” it’s all wrong.

      • Jan P Perlwitz commented

        Is this what you expect climate models should be able to do so that you wouldn’t say they were severely flawed?

        I expect it to get regional things like temp and rain to at least be realistic, and not rely on having to average these things globally to appear accurate.
        http://icp.giss.nasa.gov/research/ppa/2002/mcgraw/
        http://icp.giss.nasa.gov/research/ppa/2001/mconk/

      • “It looks to me that you don’t understand the difference between weather forecast and climate simulations/projections.”

        I have a standing interest in a demonstration, proof or numerical study that shows that there is any difference. References?

      • kim wrote:

        Jan, this paper hardly inspires confidence in the GCMs. There are even some politicians who can figure that out. The question is why didn’t you figure that out? Why didn’t Nature and the reviewers figure it out?

        Yes, it’s strange, isn’t it? How come the ones who actually know the science don’t seem to have the same problems understanding what was done and what the results of the study say as you do? Or as some politicians. What could be the reason for that? Anything that comes to mind? No?

      • Yah, Jan, cherry picked, and sans the stones to show.
        =======

      • ‘Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic. The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.’ http://rsta.royalsocietypublishing.org/content/369/1956/4751.full

        The paper was being discussed yesterday. Your copy-and-paste of the abstract, and a statement that some models got it right, is risible nonsense and a bit late to the party.

        Far from disputing that some model runs accidentally approach reality – I think that is likely just a nonlinear throw of the die.

        What is expected from models – at best – is a pdf of possible outcomes. Opportunistic ensembles – btw – do not come close to providing this in any theoretically rigorous way.

        It all follows from that. The madness of all this is palpable.

    • “Some studies and the IPCC Fifth Assessment Report suggest that the recent 15-year period (1998–2012) provides evidence that models are overestimating current temperature evolution. Such comparisons are not evidence against model trends because they represent only one realization where the decadal natural variability component of the model climate is generally not in phase with observations.”

      These IPCC science deniers say that actual measured evidence is not evidence against trends, because a few models get close sometimes!

      What next from The Insane Clown Posse?

      • “Some studies and the IPCC Fifth Assessment Report suggest that the recent 15-year period (1998–2012) provides evidence that models are overestimating current temperature evolution.”
        The paper suggests the IPCC Fifth Assessment was wrong.

      • The paper shows that the average of the model ensembles over-estimated the warming, which is in agreement with the AR5.

      • Jim D:
        “We present a more appropriate test of models where only those models with natural variability (represented by El Niño/Southern Oscillation) largely in phase with observations are selected from multi-model ensembles for comparison with observations. These tests show that climate models have provided good estimates of 15-year trends, including for recent periods and for Pacific spatial trend patterns.”

        We have a better test. We are testing models.

        “In sum, we now have four converging lines of evidence that highlight the predictive power of climate models.”

        “The figure clarifies that internal climate variability over a short decadal or 15-year time scale is at least as important as the forced climate changes arising from greenhouse gas emissions.” – Lewandowsky

        A new fourth way highlighting their predictive power. One of the other ways was the Kosaka and Xie paper.

        The good estimates of 15-year trends unfortunately aren’t going to help us predict the next 15 years. Let’s discuss why Lewandowsky and Oreskes ended up as co-authors.

      • It’s a demonstration that 15 years is too short a sample unless you are going to expect natural variability to be predicted. Over longer time scales we see that natural variability becomes a small factor since its amplitude is about 0.1 C which gives smaller trends over multiple decades, and may cancel out entirely over 30-60 years. It’s just a signal to noise issue. More samples reduce the noise.
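
        A back-of-envelope illustration of that signal-to-noise point, with made-up numbers: a 0.1 C-amplitude, 60-year oscillation on top of a 0.2 C/decade forced trend shifts a 15-year trend by several hundredths of a degree per decade while leaving the 60-year trend essentially unchanged.

          import numpy as np

          years = np.arange(1955, 2015)                             # 60 years
          forced = 0.02 * (years - years[0])                        # 0.2 C/decade forced trend
          osc = 0.1 * np.sin(2 * np.pi * (years - 1970) / 60.0)     # 0.1 C natural oscillation
          temp = forced + osc

          def trend_per_decade(t, y):
              return 10.0 * np.polyfit(t, y, 1)[0]

          print("last 15 years:", round(trend_per_decade(years[-15:], temp[-15:]), 3))
          print("full 60 years:", round(trend_per_decade(years, temp), 3))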

      • Jim D,
        Where do you get 0.1C from? It’s far larger than that.

      • AMO, PDO, stadium wave, etc. These are 0.1 C in global averages. Do you know bigger ones? Looking at annual averages, even ENSOs don’t get much above that, and they come down below 0.1 C when you average them over a decade. With solar and volcanic effects, you can get to 0.2 C (Lovejoy’s natural variability magnitude). Not much against the 2-4 C warming expected.

      • AMO, PDO, stadium wave, etc. These are 0.1 C in global averages. Do you know bigger ones? Looking at annual averages, even ENSOs don’t get much above that, and they come down below 0.1 C when you average them over a decade. With solar and volcanic effects, you can get to 0.2 C (Lovejoy’s natural variability magnitude). Not much against the 2-4 C warming expected.

        So you’re projecting a regional effect, which has a much higher amplitude, across the globe to get a 0.1C.
        The only complaint I have is that it is a regional effect, and smearing it out over the globe throws away valuable information. How did these “best” runs do regionally? One of the big issues with models is their lack of regional realism.
        http://icp.giss.nasa.gov/research/ppa/2002/mcgraw/
        http://icp.giss.nasa.gov/research/ppa/2001/mconk/

      • JD is correct, the effect is about 0.1C. And better still, this is a zero-sum effect, so that it has virtually no impact on the longer-term trend.

        I figured this out the easy way. What I did was create a multiple regression model with all of the factors and found the best fit for natural variability.

        It really is not that hard, you just have to do it.
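
        A minimal sketch of that kind of multiple regression, with synthetic stand-ins for the temperature series and the ENSO, volcanic and solar regressors (nothing here is real data or the commenter’s actual model):

          import numpy as np

          # Synthetic placeholders for a temperature series and the regressors.
          rng = np.random.default_rng(7)
          n = 60
          t = np.arange(n)
          enso = rng.normal(0, 1, n)                  # stand-in Nino3.4 index
          volc = -np.abs(rng.normal(0, 0.3, n))       # stand-in volcanic forcing (cooling spikes)
          solar = 0.1 * np.sin(2 * np.pi * t / 11.0)  # stand-in solar cycle
          temp = 0.018 * t + 0.08 * enso + 0.5 * volc + 0.3 * solar + rng.normal(0, 0.05, n)

          # Ordinary least squares on [constant, trend, ENSO, volcanic, solar].
          X = np.column_stack([np.ones(n), t, enso, volc, solar])
          coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
          natural = X[:, 2:] @ coef[2:]               # fitted ENSO + volcanic + solar part
          residual_trend = np.polyfit(t, temp - natural, 1)[0]

          print("coefficients:", np.round(coef, 3))
          print("trend with natural terms removed (C/yr):", round(residual_trend, 4))

        Because the fitted natural terms average out to roughly zero over long periods, removing them changes short trends much more than multi-decadal ones.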

    • I also participated in a discussion on this on the Open Thread, and especially Bob Tisdale’s WUWT response to it, which got wrong what many skeptics here are getting wrong. Because the models were run for a century or more to get to 2000, no one, except perhaps these skeptics, expects the runs to have the correct ENSO phase. They selected the ensemble members that had closest to the correct ENSO phase. This doesn’t make them best except by chance. It is irrelevant if ensemble members 14, 36, 48, and 51 of a model happened to get the phase right in a particular time window. These are not better members. They might even come from the same or different models. They run ensembles to encompass the chances of the trajectories. The other important point in the paper is that in the 15 prior years to 1998, the models were not warming fast enough on average, but the skeptics pay little attention to that. Fifteen-year trends just vary that much about the mean in nature. Models only get that variation by chance.

      • Jim D wrote

        ” no one, except perhaps these skeptics, expects the runs to have the correct ENSO phase. ”

        Are you another IPCC science denier? They said the models predicted too high a warming.

      • The nutcases say

        “Some studies and the IPCC Fifth Assessment Report suggest that the recent 15-year period (1998–2012) provides evidence that models are overestimating current temperature evolution. Such comparisons are not evidence against model trends”

        Measured evidence obviously IS evidence against modeled trends that are different. How silly can these physical measurement deniers possibly get?

      • Here’s one of their diagrams by way of Appell:
        http://davidappell.blogspot.com/2014/07/models-that-predict-pause.html
        The successful models drive in the middle of the road until they veer to the low side around 1995. The successful models also whiff with the ’77 and ’01 break years.
        The diagram also shows the successful models almost 0.2 C lower than the average model.

      • ClimateGuy, yes, and, as this paper shows, the models had too little warming from 1984-1998. Funny how that works out to be just right over the last 30 years.

      • The nutcases claim that the physical measurements are not evidence against model trends. Ho ho.
        Then Jim D claims the nutcases have shown evidence for them.

        It doesn’t get more bizarre

      • Maybe the nutcases don’t understand the word “evidence”.
        That wouldn’t explain all the nuttiness, but it would help.

      • ClimateGuy, what you said was just incoherent and had no relevance to the points made.

      • It’s like this, Jim D.

        A school class gets a class average of 20% on the exam. The teacher gets some flak because she was not a very good teacher.
        She declares it’s not evidence against the learning that took place.
        The teacher then takes answers from a subset of the class and finds that for a number of questions one or the other student had a fairly decent answer.
        Teacher then parades about declaring that this is a better comparison.

      • Little jimmy dee’s strawman:

        “” no one, except perhaps these skeptics, expects the runs to have the correct ENSO phase. ”

        Name some of these skeptics who expect the models to get ENSO right. Skeptics don’t have that expectation of models, jimmy. We expect them to fail. Can you guess why?

        “A new study shows that when synchronized with El Niño/La Niña cycles, climate models accurately predict global surface warming.”

        No it doesn’t. Look at figure 5. Compare a and c. And even if it does, why should we be impressed if a few of the multitude of expensive models accidentally and unpredictably get ENSO right, once in a while? And having gotten lucky, why should we be impressed that the model got somewhere in the ballpark (deep left field) on the temperature, given that ENSO is a big player on the natural variability team? Please forgive us if we are not impressed by the latest Lewandowsky scam.

        Little jimmy never met a Lewandowsky scam that he didn’t like.

      • @Don,
        Did you notice that Jan said the SSTs were an input, which has the ENSO built in? The models didn’t even have to do that correctly! And most of them still got temp wrong, what a joke!

      • Scratch this part:

        “And even if it does, why should we be impressed if a few of the multitude of expensive models accidentally and unpredictably get ENSO right, once in a while? And having gotten lucky, why should we be impressed that the model got somewhere in the ballpark (deep left field) on the temperature, given that ENSO is a big player on the natural variability team?”

        That doesn’t compute.

      • Or does it? What do you say, jimmy?

      • It’s not even as if some of the models got it kinda right throughout.
        They hide which model was used for which period, so people don’t get to see how lame their exercise is.

      • They aren’t claiming that some models get ENSO right consistently. So I don’t see any reason to worry about identifying particulars of some small number of models that happened to accidentally get in phase, presumably once. Their main claim is that it is a good sign for the gaggle of models that when some find themselves accidentally in phase with ENSO, they get the temperature nearly right. Of course, they didn’t really get it right. It would be interesting to see more data on more models to discover if accidentally being in phase might consistently correspond with the models showing enhanced skill in modeling temperature. I am willing to be persuaded there is something to it.

      • I think I see where you people are going wrong. Let’s say you have a weather model and run it a hundred times out to a year. Now, a year later, let’s look at which ones got it kind of right. We know we can’t do one-year predictions, but with enough ensemble members some might be close for the US for the days chosen. Would you call those the “good” ensemble members and demand to see the model and initial conditions that produced them, and would you not expect them to be bad if you chose to verify them a month later or in Australia instead? What is special about these members that you want to call them “good”? Lucky, maybe; good, no. Predictability limits are like that. Lorenz butterfly effect and all that. Know what I mean?

      • Don M, regarding what you think is a straw man, you haven’t read what Bob Tisdale wrote at WUWT. He more than implies that models are no good unless they get the phase right. Quote “If models had any skill, the outputs of the models would be in-phase with observations.” Watts didn’t disagree either. You can go over there and put them straight before they spread it out to millions of skeptics (oops, too late). Anyway I didn’t look at later comments, but it seemed everyone over there was agreeing with them.

      • I think Nuccitelli is making the point that for any 15 year period, there may be different models being pulled in to handle that 15 year period. At The Guardian he also makes the interesting observation:
        “…Foster & Rahmstorf statistically filtered out the noise from ENSO and other short-term temperature influences and showed that the remaining human-caused trend was in line with model projections. This study sort of takes the opposite approach in leaving those short-term factors in, but filtering out the model runs that didn’t accurately simulate ENSO.”
        The ability to pick your models, if true, reminds me of this:

      • That would be a waste of time, jimmy. Try another story.

        I am going to read this, when I get done with my extended cocktail hour:

        http://onlinelibrary.wiley.com/doi/10.1029/2006GL028937/abstract

      • Mi Cro wrote:

        @Don,
        Did you notice that Jan said the SSTs were an input, which has the ENSO built in? The models didn’t even have to do that correctly! And most of them still got temp wrong, what a joke!

        You misunderstood what I wrote. The SSTs were all calculated by the 38 CMIP5 models. All these models are coupled ocean-atmosphere models with fully dynamic ocean. Of the calculated SSTs, the SSTs simulated with 18 models were available from the CMIP5 archive. The SSTs from the simulations with these 18 models were used to calculate the Nino3.4 trends over sliding 15-year periods for each of the simulations, from which the “best” and “worst” composites were derived for each of the sliding 15-year time periods. With “input” I mean they were used as input in the composite analysis.

      • I don’t think Tisdale believes that the models should get ENSO in phase. Why don’t you ask him, jimmy?

      • Jim D:
        “Let’s say you have a weather model and run it a hundred times out to a year.”
        Some of the weather models would get it right by chance. Now if we say area B has a great effect on our forecast area A, and we see which runs line up with that using a time-proportional scale like a few weeks and pick those ones, then haven’t we done about the same thing?

      • Don M, I interpret his measure of having “any skill” as having those ENSOs in phase. The quote looks obvious enough to me. You should be going after Tisdale on that quote. I think you see where he is wrong.

      • Ragnaar, ensembles can be used to extend forecasts to a couple of weeks, but by then it is all probabilities, and you only know in retrospect which ones were better, or stayed right for longer. Beyond a few weeks, the individual members may oscillate between more right and more wrong, by which time it really is just luck what phase of rightness they are in when you verify them.

      • Don Monfort wrote:

        I don’t think Tisdale believes that the models should get ENSO in phase. Why don’t you ask him, jimmy?

        I wonder whether monfie is talking about the same Bob Tisdale about whom Jim D and I have been talking, considering what statement by Bob Tisdale I quoted in judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-610453

        I also am curious whether Bob Tisdale has a stock broker.

      • Don Monfort

        You are changing your story, jimmy. This is your strawman:

        ” no one, except perhaps these skeptics, expects the runs to have the correct ENSO phase. ”

        I challenged you to name names of skeptics who expect the model runs to have the correct ENSO phase. You claim that Tisdale more than implies it, not that he actually said it. But this is the quote you provide:

        “If models had any skill, the outputs of the models would be in-phase with observations.”

        He was talking about the models being in phase with observations over a 15 year period. He didn’t say in phase with ENSO. You are substituting ENSO for observations. That doesn’t fool us, jimmy.

        Tisdale may be wrong. The models’ skill may yet show up over a 30 year period, or in a hundred years. We should put the models on the back burner, until at least 2030.

  267. Jan P Perlwitz,

    “What exactly do you want from me?”
    Just that you provide proof of what you claim.

    Ah, but you already had dismissed the study with the claim that the data in these studies were “doctored”
    Certainly not. I did not read it. Thank you for the link but the paper is not freely available.

    • Don Monfort

      phi,
      If the paper is behind a paywall, you are allowed to pretend that you read the whole paper if you have at least read the abstract. That’s what perlie does.

      • You are jumping to conclusions. Just because you can’t access papers behind paywalls doesn’t mean I can’t either. Have you ever heard of a library and/or subscriptions, monfie?

      • I have heard of libraries and subscriptions, perlie. We have noticed that you have not claimed that you read the paper. And your pointless yammering reveals that you don’t know what the paper contains. You still ain’t read the paper, perlie.

        Figure 5., perlie. That is where the 4 “best” comes from. It’s central to their story, perlie:

        http://bobtisdale.files.wordpress.com/2014/07/figure-5-risbey-et-al-2014.png

        If you are not going to read the paper, perlie, you should go to wuwt and read Bob Tisdale’s deconstruction. Then you would be able to pretend you read the paper more believably.

      • Check out Bob Tisdale’s animation of cells a and c from figure 5., perlie. Even the 4 “best” don’t get it. The GCMs don’t do ENSO, period. Risbey et al. is risible:

        http://wattsupwiththat.com/2014/07/20/lewandowsky-and-oreskes-are-co-authors-of-a-paper-about-enso-climate-models-and-sea-surface-temperature-trends-go-figure/#more-113224

      • Heh, that won’t be modeled until the mystery driver is introduced to those coy computers.
        =================

      • I don’t think perlie is ever going to read the paper, kim.

      • I can’t decide whether it would be funnier if he hadn’t read the paper or if he has and passed it in review. Nevermind, I know which is funnier.
        =============

      • Don Monfort wrote:

        Check out Bob Tisdale’s animation of cells a and c from figure 5., perlie. Even the 4 “best” don’t get it. The GCMs don’t do ENSO, period. Risbey et al. is risible:

        http://wattsupwiththat.com/2014/07/20/lewandowsky-and-oreskes-are-co-authors-of-a-paper-about-enso-climate-models-and-sea-surface-temperature-trends-go-figure/#more-113224

The only thing you demonstrate here is that you are as clueless as Bob “El Nino causes global warming” Tisdale. Of course, even the 4 “best” in the 1998-2012 composite don’t agree perfectly. Why should they? Even a perfect model, which by definition would be flawless, wouldn’t. And any agreement between a perfect model and the observations would be purely by chance. One would have to run many simulations with such a model to have a high probability of finding a subset whose composite would be in approximate agreement with the observations.

Nature only provides one single realization of a chaotic system from an infinite (but bounded) number of possible realizations. Each individual model simulation is like another single realization of the chaotic system. Any approximate match between the single realization of Nature and an individual realization simulated with a model is only by chance. Even a perfect model wouldn’t be able to reproduce the chronological succession of events in Nature beyond a predictability time limit due to the chaotic nature of the system.
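To make the chaotic-divergence point concrete, here is a minimal sketch using the logistic map as a stand-in for a chaotic trajectory; the map, the parameter r = 3.9, and the 1e-10 perturbation are purely illustrative assumptions, not anything from the models or the paper under discussion.

```python
# Two runs of a chaotic system (the logistic map, r = 3.9) started from
# initial conditions that differ by only 1e-10. After a few dozen steps the
# trajectories are completely decorrelated, so matching any one "observed"
# trajectory beyond that horizon is a matter of chance.
def logistic_run(x0, r=3.9, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_run(0.4)
b = logistic_run(0.4 + 1e-10)

for step in (0, 10, 20, 30, 40, 50):
    print(f"step {step:2d}: |difference| = {abs(a[step] - b[step]):.3e}")
```

By around step 40 the difference is of the same order as the values themselves, which is the "predictability time limit" idea in miniature.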

      • Jan Perlwitz wrote

        “Of course, even the 4 “best” in the 1998-2012 composite don’t agree perfectly.”

        Then just tell which is one of the best, Jan Perlwitz!

      • You are dishonest, perlie. You did not read the paper. Your argument is shifting. Bob Tisdale didn’t make Risible Risbey try to show that GCMs could by chance simulate ENSO, badly. They did it on their own. The paper is a crock. Or maybe you can explain why it isn’t. This is just another example of trash passing peer review, because it supports the cause. Carry on with your clowning, perlie. At least now you know where 4 “best” came from.

      • Jan Perlwitz wrote

        “..as clueless as Bob “El Nino causes global warming” Tisdale.”

        Jan, are you up for being ” ‘El Nino does not cause surface temp warming’ Perlwitz”?

        :)

      • ClimateGuy:

        “Of course, even the 4 “best” in the 1998-2012 composite don’t agree perfectly.”

        Then just tell which is one of the best, Jan Perlwitz!

        I don’t know which specific subset of four models made it into the composites for the 1998-2012 period. And it isn’t scientifically relevant for the study. For some other 15-year period, e.g., the one 1951-1965, it very likely is another subset of models that made it in the “best” composite. And it is equally scientifically irrelevant for the conclusions of the study.

        If it really interests you, why don’t you reproduce the study? All the necessary information needed for being able to do that is provided in the paper.
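For readers trying to picture the selection step being argued about, the following rough sketch picks "best 4" and "worst 4" composites from sliding 15-year trends; the synthetic "observed" and simulated series, the noise level, and the forced trend are all assumptions made up for illustration, not the actual Risbey et al. code or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: one "observed" series and 82 simulated series,
# each a common forced trend plus independent internal variability.
years = np.arange(1950, 2013)
forced = 0.012 * (years - years[0])                      # deg C per year, assumed
obs    = forced + 0.1 * rng.standard_normal(years.size)
sims   = forced + 0.1 * rng.standard_normal((82, years.size))

def trend(series, y):
    # least-squares slope of series against the years y
    return np.polyfit(y, series, 1)[0]

# For each sliding 15-year window, rank simulations by how close their
# window trend is to the observed window trend, then composite the 4 best
# and 4 worst. The membership of those subsets changes from window to window.
for start in range(0, years.size - 15 + 1, 5):
    sl = slice(start, start + 15)
    obs_tr  = trend(obs[sl], years[sl])
    sim_trs = np.array([trend(s[sl], years[sl]) for s in sims])
    order   = np.argsort(np.abs(sim_trs - obs_tr))
    best4, worst4 = order[:4], order[-4:]
    print(f"{years[start]}-{years[start]+14}: best-4 runs {best4.tolist()}, "
          f"worst-4 runs {worst4.tolist()}")
```

Because every run has its own internal variability, which runs land in the "best" subset is different for each window, which is the point both sides keep circling here.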

      • Jan Perlwitz wrote

        “CLimateGuywrote
        ‘Then just tell which is one of the best, Jan Perlwitz!’

        I don’t know which specific subset of four models made it into the composites for the 1998-2012 period. And it isn’t scientifically relevant for the study. For some other 15-year period, e.g., the one 1951-1965, it very likely is another subset of models that made it in the “best” composite. And it is equally scientifically irrelevant for the conclusions of the study.”

        So the study is not transparent.
        Thank you.

      • Yeah perlie, we deniers should reproduce our version of the study using the same logic as the Risible Risbey crew. From WUWT:

        “Richard M says:
        July 21, 2014 at 5:22 am

        One could use exactly the same logic and pick the 4 worst models for each 15 year period and claim that climate models are never right. Anyone think that would have gotten published?”

        Case closed.


      • The only thing you demonstrate here is that you are as clueless as Bob “El Nino causes global warming” Tisdale.

        Good nickname. Tisdale thinks that El Nino acts as a ratchet, and that every time an El Nino makes an appearance, he believes that the global temperature ratchets permanently upward. Nevermind that El Nino events have been around for eternity, and by that reasoning the earth would be on an ever-upward trend for who knows how long..

        That is the kind of brain power that Tisdale brings to the table.

      • WebHubTelescope (@WHUT) commented

        That is the kind of brain power that Tisdale brings to the table.

        And the level you bring can’t imagine that an El Nino just temporarily redistributes existing heat to different areas, and in doing so causes the downwind land areas to register an increase in temperature, as well as altering the jet stream.

      • Jan Perlwitz wrote

        “And it isn’t scientifically relevant for the study. ”

        A study on cancer treatments finds the best and worst during certain conditions but refuses to indicate which, because it’s not relevant.

        heheheheeeeee, Jan, you’re funny

      • Don Monfort

The Risible Risbey method is similar to the way they do quality control in Chinese factories. Take 100 widgets off the production line and select the 4 best and the 4 worst. The 4 best are by definition better than the 4 worst, so the whole lot is good enough to ship to the U.S.

      • Steven Mosher

        “Good nickname. Tisdale thinks that El Nino acts as a ratchet, and that every time an El Nino makes an appearance, he believes that the global temperature ratchets permanently upward. Nevermind that El Nino events have been around for eternity, and by that reasoning the earth would be on an ever-upward trend for who knows how long..”

        they dont get that either.

      • Don Monfort

        Steven, I don’t recall seeing Tisdale say this: “Tisdale thinks that El Nino acts as a ratchet, and that every time an El Nino makes an appearance, he believes that the global temperature ratchets permanently upward.”

        Do you got a quote for us? Does he similarly believe that every time a La Nina makes an appearance the global temp permanently ratchets down?

      • It looks like a ratchet. Fast warming, slow cooling:
        http://www.nc-climate.ncsu.edu/images/climate/climate_change/global_co2_temp.png
A full-on El Nino appears to transfer heat from the oceans to the atmosphere, and it doesn’t appear to escape from there quickly.

        “…in terms of the global mean temperature, instead of having a gradual trend going up, maybe the way to think of it is we have a series of steps, like a staircase. And, and, it’s possible, that we’re approaching one of those steps.” – Kevin Trenberth

Is it useful to think of it this way? If we plug in the assumption without caring whether it’s true, do we get useful results? Or in other words, do the models then work?

The ENSO region to me appears to release ocean heat, and might involve the opposite of storing heat in the oceans. A ratchet effect would be consistent with a warming world, a recovery from the LIA.


      • Ragnaar | July 21, 2014 at 3:12 pm |

        It looks like a ratchet. Fast warming, slow cooling:

        And that’s the constant fear — you try to explain why something is wrong, and some lamebrain takes it as confirming evidence that the junk theory is correct.

        Tisdale graphs it as a ratchet but calculates it as an integration. Yet anybody with a background in calculus can see that his choice of a baseline temperature anomaly will cause the integrated response curve to either trend upward or downward. So Tisdale manipulates the integrand to give an upward trend.

        Voila, he can explain a warming trend.

Voila, Tisdale is an idiot. And now he floods the internet with his junk graphs, so that anybody doing a Google search on ENSO runs into his stuff and thinks he has something worthwhile to say, believing volume = quality.
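The baseline point above is easy to demonstrate numerically: cumulatively summing a zero-mean index after subtracting an offset produces a drift whose sign is set entirely by that offset. The sketch below uses synthetic noise, not any of Tisdale's series.

```python
import numpy as np

rng = np.random.default_rng(1)

# A zero-mean "ENSO-like" index (purely synthetic, for illustration only).
index = rng.standard_normal(600)
index -= index.mean()

# Cumulatively summing the index after subtracting different baselines:
# the integrated curve drifts up or down depending entirely on that choice.
for baseline in (-0.05, 0.0, +0.05):
    integrated = np.cumsum(index - baseline)
    drift = np.polyfit(np.arange(index.size), integrated, 1)[0]
    print(f"baseline {baseline:+.2f}: drift of integrated curve = {drift:+.3f} per step")
```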

      • Don Montfort wrote:

        You did not read the paper. Your argument is shifting. Bob Tisdale didn’t make Risible Risbey try to show that GCMs could by chance simulate ENSO, badly.

        No, monfie, I haven’t shifted my argument. It’s still the same as before. You just aren’t able to follow. As for reading the paper, the opposite is true. I am the one who read the paper. You are the one who didn’t. You just have read and believe the nonsense Bob Tisdale is telling on his own and Anthony Watts’ junk science blog. You are just as clueless as Bob Tisdale about climate modeling, and about the chaotic character of the atmosphere-ocean dynamics and about the implications for climate modeling from the chaotic character of the atmosphere-ocean dynamics.

Like Bob Tisdale, you (and apparently many others here) don’t understand that it is not a flaw of the climate models when the simulations with these models, as a matter of principle, don’t reproduce the chronological succession of events, e.g., the succession of El Nino and La Nina events, as they are observed in the real atmosphere-ocean system. You do not understand that this results from the intrinsic nature of a chaotic system to not be predictable beyond a time limit. You and Bob Tisdale are faulting climate models to not be able to achieve something that is objectively impossible to achieve, i.e., you are faulting the models for not being able to predict something that isn’t predictable. Something that even a perfect model, which by definition would be flawless, couldn’t achieve.

      • ClimateGuy wrote:

        So the study is not transparent.

According to my understanding of “transparency”, it is satisfied when a study provides all the information needed to reproduce its results. I don’t see why this wasn’t fulfilled in the case of this study. From reading the study I know what I would have to do, what data to use, and what methodology to apply, if I wanted to reproduce it. Satisfying every curiosity I may have beyond that necessary information is not a requirement for transparency. If the study is transparent to me, but not to you, then the study may not be the reason for that.

      • Don Monfort

        perlie, perlie

        “You and Bob Tisdale are faulting climate models to not be able to achieve something that is objectively impossible to achieve, i.e., you are faulting the models for not being able to predict something that isn’t predictable.”

        Not at all, perlie. We are faulting Risible Risbey et al. and the CAGW alarmist media hacks for making bogus claims that the silly paper doesn’t support. Have you found figure 5. yet, perlie? End of story. I don’t have any more time for you.

      • SkS on steps:
        http://www.skepticalscience.com/its-a-climate-shift-step-function-caused-by-natural-cycles.htm

For the step shifts we have Rapp, Tisdale, Douglass and Jens Jensen.

        SkS mentions Tsonis and climate regime shifts and arrives at mixed conclusions. I don’t think we will ever see SkS directly disagree with Tsonis. They have to work the edges. While Tsonis and Swanson have their major possible shifts years, I think they serve as an example of shorter timescale steps. The glacial/interglacial temperature graphs also show these steps up followed by slow cooling.

        Then we have Swanson’s first diagram here:
        http://www.realclimate.org/index.php/archives/2009/07/warminginterrupted-much-ado-about-natural-variability/

        Notice the horizontal line intersecting the trendline.

      • It’s simple, Jan.

        Suppose a cancer treatment paper examined treatments and found best and worst under different scenarios but refused to indicate which was which.

        It’s not transparent if they refuse to identify which worked best, when, and which were worst treatments when. They have that information, so why hide it?

        You’re saying “Oh, you can do the study again yourself.” as an excuse. They did the study and refuse to identify which were best. They hid information that they have.

      • Jan Perlwitz, “Just because you can’t access papers behind paywalls doesn’t mean I can’t either.” Would that be with my tax dollars, Perlie?

      • Bob wrote:

        Jan Perlwitz, “Just because you can’t access papers behind paywalls doesn’t mean I can’t either.” Would that be with my tax dollars, Perlie?

        I very much hope so.

      • This hasn’t even the dignity of Marie A’s ‘Let them eat cake’; she thought she was offering an alternative to the shortage of bread.
        ====================

      • ClimateGuy wrote:

        Suppose a cancer treatment paper examined treatments and found best and worst under different scenarios but refused to indicate which was which.

False analogy. Obviously in this case, the purpose of the study would be to identify those treatments which perform better. So the information, which treatments these are, is essential for the conclusions in the study. Without this information you couldn’t make any statement about which of the treatments performed better.

In contrast, in the Risbey et al. study, the information as to which specific models provided the simulations that ended up in the composites for any of the sliding 15-year periods by chance (and this information would vary with the 15-year periods) is not essential for the conclusions in the study.

      • Jan Perlwitz wrote

“False analogy. Obviously in this case, the purpose of the study would be to identify those treatments which perform better. So the information, which treatments these are, is essential for the conclusions in the study. Without this information you couldn’t make any statement about which of the treatments performed better.

In contrast, in the Risbey et al. study, the information as to which specific models provided the simulations that ended up in the composites for any of the sliding 15-year periods by chance (and this information would vary with the 15-year periods) is not essential for the conclusions in the study.”

        It would not be necessarily so that a cancer treatment study had the sole purpose of identifying which treatment ‘is better”.
        As I said, under different scenarios different ones could perform better.

        Yet It would be an outrage if they did not identify which, when.

        With these nutcases, it’s just a frivolous comedy.

      • Jan Perlwitz wrote

        “I don’t know which specific subset of four models made it into the composites for the 1998-2012 period. And it isn’t scientifically relevant for the study”

        Oh, but they did say which set of models were selected for showing sea surface temps. They named every one. An irrelevancy ? Then for the next selection, for the subset – they kept that secret.

      • ClimateGuy wrote:

        “I don’t know which specific subset of four models made it into the composites for the 1998-2012 period. And it isn’t scientifically relevant for the study”

        I just see that I wasn’t precise in this statement that is quoted here. It should say, “I don’t know which specific subset of model simulations made it into the composites …”

There isn’t just one simulation done with each model. Usually, there is an ensemble of simulations in the CMIP5 archive. The study analyzed 82 simulations done with 38 models. Each simulation of the ensemble was started from a different initial condition. So each simulation, even with the same model, follows a different individual path. It is even possible in principle that, for a given 15-year time period, the temperature trend of one simulation with a model ended up in the “best” composite for this time period, and the temperature trend of another simulation with the same model ended up in the “worst” composite of the same time period. Exactly because it’s just a matter of chance in which composite the trend from an individual simulation ends up.

        Oh, but they did say which set of models were selected for showing sea surface temps. They named every one. An irrelevancy ?

The sea surface temperature trends from the named models are an input used for the analysis. Thus, the information on the SSTs of which ones of the 18 models out of the 38 models were used for the calculations is not irrelevant here for being able to reproduce the analysis.

      • Jan Perlwitz wrote

“The sea surface temperature trends from the named models are an input used for the analysis. Thus, the information on the SSTs of which ones of the 18 models out of the 38 models were used for the calculations is not irrelevant here for being able to reproduce the analysis.”

Oh, but wasn’t it your excuse before to say “that’s information one could work out for oneself if interested”?

      • I did not say that with respect to that information. Don’t make things up.

      • Jan Perlwitz wrote

        “I did not say that with respect to that information. Don’t make things up.”

        I didn’t say you said it with regard to THAT information.
        It was your reasoning for not divulging which models did best and worst.

      • Jan,
You said the study was transparent because one could work out for oneself which models did best.

        Yet for this the same could apply.

        They only withheld the resulting best/worst results identifications.
People might laugh even more if they saw, eh, Jan?

      • ClimateGuy wrote:

        It would not be necessarily so that a cancer treatment study had the sole purpose of identifying which treatment ‘is better”.
        As I said, under different scenarios different ones could perform better.

        Yet It would be an outrage if they did not identify which, when.

        If the information wasn’t essential for the conclusions of the study, why would it be “an outrage”? And from whose point of view?

So, is your argument now, Risbey et al should have listed those at least 48 and maybe even around 150 names of the models whose simulations ended up in the composites, even if this information wasn’t relevant for the conclusions, because otherwise it would be “an outrage”? How is this a rational argument? It looks to me like we have entered the realm of irrationality now.

      • Jan Perlwitz,
        The cancer treatments study could be analogous to the climate model study.

        It would be an outrage because they withheld vital information on results..

      • Great Climate Model sat on a wall,
        Into many pieces Mod had a fall.
        All the King’s statsers and psychesters then
        Humptied that model together again.
        ================

      • “So, is your argument now, Risbey et al should have listed those at least 48”

They listed what, 38(?) models that they originally chose the 18(?) from? Listing isn’t that hard to do, Jan.

      • ClimateGuy wrote:

        The cancer treatments study could be analogous to the climate model study.

        It would be an outrage because they withheld vital information on results..

        You are contradicting yourself. Why would it be “vital”, if it isn’t necessarily relevant for the conclusions, as you just had postulated before? And how is a feeling of “outrage” a criterion for the validity of a scientific argument?

Well, again. It just means the analogy is false, because the naming of the models isn’t vital. And “outrage” isn’t a rational argument. Why would the naming of the models be vital? For what would it be vital? It’s not vital just because you postulate it to be.

Jan Perlwitz wrote

        ” However, one still can make a prediction about the statistical distribution of the numbers after 1000 throws, or what the average value of the numbers will approximately be. You already know what the average value would be, right? But you still couldn’t predict the outcome of the next throw”

        Yes, but this study claims that by selecting those die which cast a series which corresponds by chance to the realization of sea surface temps and then through selection of those which correspond to hiatus surface temps, over a bunch of initial conditions, it’s a better comparison than between models output and reality.
        They say they produce evidence, and IPCC does not.

        it’s a comedy.
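The dice analogy quoted above is straightforward to check numerically; a small sketch (the seed and the 1000-throw count are arbitrary choices):

```python
import random

random.seed(42)

# The quoted point: after many throws the distribution and the mean of a die
# are predictable (the mean tends to 3.5), while any individual throw is not.
throws = [random.randint(1, 6) for _ in range(1000)]

mean = sum(throws) / len(throws)
counts = {face: throws.count(face) for face in range(1, 7)}

print("mean of 1000 throws:", round(mean, 3))   # close to 3.5
print("face counts:", counts)                   # each close to ~167
print("but the next throw could be any of 1-6 with equal probability")
```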

      • Jan Perlwitz wrote

        “You are contradicting yourself. Why would it be “vital”, if it isn’t necessarily relevant for the conclusions, as you just had postulated before?”

        Jan, you switched between purpose and conclusions

      • Jan Perlwitz wrote
        “Why would the naming of the models be vital? For what would it be vital?”

        It would presumably be vital to those producing policy based on prediction and to the people supporting the trillions of $ to be spent.

      • ClimateGuy wrote on July 22, 2014 at 10:34 am:

        Yes, but this study claims that by selecting those die which cast a series which corresponds by chance to the realization of sea surface temps and then through selection of those which correspond to hiatus surface temps, over a bunch of initial conditions, it’s a better comparison than between models output and reality.

        This is not what they did. A “hiatus”-like global surface temperature trend wasn’t a selection criterion for the comparison between models and observations in the study. You are making things up.

What they found is that the globally averaged surface temperature trend over 15-year periods is closely related to the trend of the sea surface temperature in a small region of the planet, the Nino3.4 region, statistically. It is a confirmation of the results also found by other studies before, with varying methodology, that the recent alleged “pause” is very likely, to a large degree, nothing more than just a temporary downward deviation from the median trend by chance, mostly due to the chaotic ENSO variability imprinting itself on the global temperature trends, like the “acceleration” between 1992 and 2007 (with a trend of about 0.25-0.3 deg. C/decade) was a temporary upward deviation from the median trend (although there was probably some contribution from the recovery from the Pinatubo eruption to it). We also can conclude from this that the global surface temperature trends will likely be higher in the coming years than the ones observed since 1998 (assuming no major volcanic eruption, impact by a killer asteroid, or nuclear war). That is how it is with variability around a mean/median. The further the state of the system has moved into one direction, the higher the probability that it will move into the other direction again.
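As a toy illustration of the preceding paragraph, the sketch below superimposes an ENSO-like red-noise component on a steady forced trend and shows how much sliding 15-year trends scatter around that forced trend; the amplitudes and the AR(1) model are assumptions chosen for illustration, not fits to any observed record.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic monthly "global temperature": a steady forced trend plus an
# ENSO-like red-noise component. All amplitudes here are assumptions chosen
# only to illustrate how short-window trends scatter around the forced trend.
months = np.arange(12 * 60)                      # 60 years of monthly data
forced = 0.015 / 12.0 * months                   # 0.15 deg C per decade
enso = np.zeros(months.size)
for t in range(1, months.size):
    enso[t] = 0.95 * enso[t - 1] + 0.05 * rng.standard_normal()
temp = forced + enso

# Trends over sliding 15-year windows, in deg C per decade.
window = 12 * 15
trends = []
for start in range(0, months.size - window, 12):
    sl = slice(start, start + window)
    slope = np.polyfit(months[sl] / 120.0, temp[sl], 1)[0]
    trends.append(slope)

print("forced trend: 0.150 deg C/decade")
print(f"15-year trends range from {min(trends):.3f} to {max(trends):.3f} deg C/decade")
```

Some windows come out flat or even negative while the underlying forced trend never changes, which is the sense in which a short "pause" can be a chance deviation.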

Well, I guess my statement goes against the general expectations of the AGW-“skeptic” crowd who believe that, because the recent trends over 15 years have been below the median, the trends in the coming years will stay low or will go even lower, or even turn to cooling.

        We will see who is right. I and all the others on the side of AGW-science or all those “skeptics” who believe there will be only a small or even a cooling trend in the next years/decades.

        They say they produce evidence, and IPCC does not.

        No, they didn’t say that. You are making things up. Although the statement itself is half right, since it isn’t the IPCC who produce evidence. The evidence is produced by the scientific studies on which the IPCC reports rely, instead.

        it’s a comedy.

        Yes, it is, indeed. Just not in the way you imagine it is.

        ClimateGuy wrote on July 22, 2014 at 10:41 am:

        “You are contradicting yourself. Why would it be “vital”, if it isn’t necessarily relevant for the conclusions, as you just had postulated before?”

        Jan, you switched between purpose and conclusions

        No, I didn’t. I wrote:

        “False analogy. Obviously in this case, the purpose of the study would be to identify those treatments which perform better. So the information, which treatments these are, is essential for the conclusions in the study.”
        (http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-610501)

Both the purpose (or objectives) of a study and the conclusions from it are closely related.

        And you haven’t answered my question why naming the models was “vital”? “Vital” for what?

      • It is a confirmation of the results also found by other studies before, with varying methodology,

It’s easily seen in the surface record, as long as you don’t mash it all together. But you haven’t mentioned that the same thing happens in the Atlantic (caused, I presume, by the AMO); you can see it in the Min temp values for both Europe and Africa, if anyone bothers to look.

that the recent alleged “pause” is very likely, to a large degree, nothing more than just a temporary downward deviation from the median trend by chance, mostly due to the chaotic ENSO variability imprinting itself on the global temperature trends, like the “acceleration” between 1992 and 2007 (with a trend of about 0.25-0.3 deg. C/decade) was a temporary upward deviation from the median trend (although there was probably some contribution from the recovery from the Pinatubo eruption to it). We also can conclude from this that the global surface temperature trends will likely be higher in the coming years than the ones observed since 1998 (assuming no major volcanic eruption, impact by a killer asteroid, or nuclear war). That is how it is with variability around a mean/median.

        Who’s to say (other than those whose hate for fossil fuels lead them to any conclusion that allows them to vilify it) that the warming isn’t just the positive swing to the median trend and the pause is just the return to median? The AMO is still in it’s positive mode, the PDO just switched from positive to negative, this does make more sense, this fits all of the evidence, not just what you want to cherry pick.

      • ClimateGuy wrote:

        Jan Perlwitz wrote
        “Why would the naming of the models be vital? For what would it be vital?”

        It would presumably be vital to those producing policy based on prediction and to the people supporting the trillions of $ to be spent.

Now you are just resorting to silly political rhetoric, instead of providing an argument based on reason. You aren’t seriously claiming here that the outcome of some real policy decision we are facing vitally depended on listing the names of a bunch of models or not in exactly this specific study, which wouldn’t even be consequential for the conclusions of the study, are you?

      • Jan Perlwitz wrote

        ” ‘ClimateGuy wrote
        …. through selection of those which correspond to hiatus surface temps, over a bunch of initial conditions, it’s a better comparison than between models output and reality.’

        This is not what they did. A “hiatus”-like global surface temperature trend wasn’t a selection criterion for the comparison between models and observations in the study. You are making things up.”

        No, Jan, not selection criterion FOR the comparing to hiatus, but for SAYING it’s a better comparison.

      • Mi Cro wrote:

        The AMO is still in it’s positive mode, the PDO just switched from positive to negative, this does make more sense, this fits all of the evidence, not just what you want to cherry pick.

        Please could you name me some of the scientific references, which provide the scientific evidence that the global warming in the second half of the 20th century up to present was mostly caused by AMO or PDO or a combination of the two? Your mere assertions that it was so don’t count.

      • Jan P Perlwitz commented

        Please could you name me some of the scientific references, which provide the scientific evidence that the global warming in the second half of the 20th century up to present was mostly caused by AMO or PDO or a combination of the two? Your mere assertions that it was so don’t count.

        Of course you know there aren’t any (though Dr Curry’s Stadium Wave theory is in the right direction), because doing so doesn’t fit the cause.

But you’re making the same claim that this is the cause of the pause, when it’s more likely that a positive AMO and PDO both simultaneously warmed the Northern Hemisphere at the end of the 20th century; the history of the late 30’s had the same high temps (and a melted Arctic), which were followed by a cold PDO phase. Surface records show these as regional swings in Min temp.
        http://appinsys.com/globalwarming/AMO_files/image002.gif
        http://cses.washington.edu/cig/figures/pdoindex_big.gif

        Notice the overlap in warm phases (the 40’s and 90’s, both hot periods), and then the cold PDO in the 50-80’s corresponded to a cold US.

      • Jan Perlwitz wrote

“Now you are just resorting to silly political rhetoric, instead of providing an argument based on reason. You aren’t seriously claiming here that the outcome of some real policy decision we are facing vitally depended on listing the names of a bunch of models or not in exactly this specific study, which wouldn’t even be consequential for the conclusions of the study, are you?”

        I didn’t say they ARE, Jan. I’m saying that if it’s true that certain models do better at this task, then which ones they are should be important information for planning, no?

        What other practical purpose is more important?

      • Jan Perlwitz wrote

“Now you are just resorting to silly political rhetoric, instead of providing an argument based on reason.”

        No, Jan. You asked me who it would be vital for.
        I replied to your question, and I did not approve or disapprove of doing or not doing, spending or not spending, lean to one side or the other, so I think your attack is bullshit

      • Micro, If it is merely the result of a natural cycle, why is the next peak higher than the last peak (e.g. the avg temp in 1998 is significantly greater than the peak temp in 1940)?

      • Joseph commented

        Micro, If it is merely the result of a natural cycle, why is the next peak higher than the last peak (e.g. the avg temp in 1998 is significantly greater than the peak temp in 1940)?

        Sampling and Processing. The 30’s are poorly sampled (it’s not very good prior to 1972, and it’s plain lousy prior to the 50’s), and then the post processing is utter garbage.

Now, I’d accept that a small fraction is from an increase in CO2, but I don’t think it’s much.

But let me add additional evidence that I’m right. Get an IR thermometer and measure the sky directly overhead (zenith) on a clear day, and then measure the bottom of some clouds. I routinely see temps 70F (or more) colder than the surface, while clouds are 10 or 20F colder; clouds control the radiative cooling rate of the surface, not CO2. A week or so ago on a 50F night I measured -35F.

      • Jan Perlwitz wrote

“Now you are just resorting to silly political rhetoric, instead of providing an argument based on reason.”

        Jan, you asked me who it would be vital information for. I replied without approval or disapproval for planning or spending, for or against any political party, system, or thought. Your comment is unwarranted.

    • phi wrote:

      “What exactly do you want from me?”
      Just that you bring proof of what you claim.

      What is it what I allegedly claimed for what you ask me to bring proof?

      Please quote what I allegedly claimed with proof of source. Because I don’t know what you mean.

      On the other hand, you were the one who made following claim:

      “What is quite remarkable and paradoxical is that proxies capture very well what you refer as noise or natural variation and absolutely not what you think is the true signal (the effect of aditional CO2).”
      (Source: judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-609843)

      Since you are making the assertion, you also have the burden of proof for it. It isn’t my job to do the research so your assertions can be tested. Well, unless you pay me for doing it. I told you my special rate for you, if you want me to work for you.

      Ah, but you already had dismissed the study with the claim that the data in these studies were “doctored”
      Certainly not. I did not read it.

      Now you claim you didn’t dismiss it? This was your reply regarding the Anderson et al study:
      “Unfortunately, this confirmation does not exist. Proxies raw data tell a very different story.

      Regarding published reconstructions that would make the deal, they are heavily doctored and unusable,…”
(Source: judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-609871)

Apparently, you dismissed it right out of hand w/o even reading it, using some bogus claim about “doctored” data.

      Thank you for the link but the paper is not freely available.

      But the supplemental material is freely accessible where you can find the list with the references of the 173 proxy data sets used for the study. And a link to an accessible pdf-file had already been posted in the same thread here in this comment:
      http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-605977

      • Jan P Perlwitz,

        For example, you wrote:

        “The more important fact here is that the trend of the surface temperature over the last 100 years is not just positive (ca. 0.073-0.085 K/decade), it is also statistically significant with more than 13 standard deviations.”
        http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-606375

The problem is that only data from weather stations after adjustments give such values, and proxies invalidate these constructions. It’s annoying. And it is not serious at all.

        You write further:

        “There is another paper which provides an independent confirmation of global land warming w/o using any measurements from meteorological stations.”
        http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-606798

        According to you there would be at least one paper (mentioned by Zeke) confirming independently temperature curves. Bad luck, it is not accessible by your link nor by that of Carrick.
        I repeat, to date, no paper has verified the temperature evolution of the twentieth century with proxies. Those who pretended to do so have all been shown deficient.

        Since I don’t base any policy on these rotten curves, the burden of proof is certainly not for me.

        And I still do not see any raw data of proxies with annual resolution that would confirm the alleged evolution of temperatures in the twentieth century.

      • phi wrote:

The problem is that only data from weather stations after adjustments give such values, and proxies invalidate these constructions. It’s annoying. And it is not serious at all.

This is just an assertion. And it is your assertion. You have the burden of proof for it. I don’t have the burden to disprove your assertions, although, from my experience, this is how AGW-science deniers usually would like to have it. Not only should the scientists have the burden of providing the evidence for their own hypotheses and theories (which is fine); no, the deniers just pile up assertion after assertion, with which they try to cast doubt on the results from scientific research, and then the scientists are supposed to have the burden of disproving all those assertions as well. This is not how it works. I am not going to play this game.

        According to you there would be at least one paper (mentioned by Zeke) confirming independently temperature curves.

        No, at least two papers. Also the one referenced by me further above in a comment to which you had replied:
        http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-606798

The Compo et al. study I referenced doesn’t use proxy data at all, though. So it’s also independent of any proxy reconstructions. Thus there are three different approaches with data that are fully independent of each other, which come to very similar results about the reality of global warming over the last century.

        Bad luck, it is not accessible by your link nor by that of Carrick.

I just checked. I don’t have any problem accessing it. And I am doing it from my home. Here is the link once more:
        http://www.environ.sc.edu/sites/default/files/files/anderson2013.pdf

        I repeat, to date, no paper has verified the temperature evolution of the twentieth century with proxies.

You claim no paper has done it, because you haven’t been able to access the Anderson et al. study for some unknown technical reasons? Now, that is just some silly reasoning.

        Those who pretended to do so have all been shown deficient.

Translation: You don’t like the results, because they contradict your preconceived views, so you just postulate that they all were wrong.

      • Deny the Incline!
        =============

      • Jan P Perlwitz,

Your link to Anderson still does not work for me; unfortunately, I still cannot say anything.

Few papers claim to independently confirm the evolution of temperatures in the twentieth century; to my knowledge, these could have been:
        – Gergis 2012 (the paper is withdrawn due to a significant error discovered on ClimateAudit),
        – Marcott 2013 (very poor resolution and various tricks),
        – Tingley and Huybers 2013 (temperature of stations dominate the reconstruction in the twentieth century),
        – Briffa 2013 (selection bias and lack of standardization).

        So Anderson, I expect to see.

        Two other examples of proxies:
        http://oi60.tinypic.com/2jdq590.jpg

        “No, at least two papers.”
        Please remain serious. Your story with barometers can not be used as confirmation of a secular trend.

      • phi wrote:

        Please remain serious. Your story with barometers can not be used as confirmation of a secular trend.

OK. That’s it. I draw the line now. Rejection of thermodynamics as an “argument” to dismiss a scientific study. This is what science is up against in the public when it comes to AGW-deniers.

      • Jan P Perlwitz commented

OK. That’s it. I draw the line now. Rejection of thermodynamics as an “argument” to dismiss a scientific study. This is what science is up against in the public when it comes to AGW-deniers.

        Your outrage is a joke.

I suggest you look at the actual surface measurements, and not the reconstructed time series published as such.
If you don’t want to do it yourself, and I’m surely not paying you, follow the URL in my name; I’ve done the work for anyone interested. No homogenization, no infilling, just the measured anomalies averaged by various regions.

Yeah, yeah, yeah Steve, I know it’s wrong, but it also doesn’t show a rising temperature trend like the published series, which all follow the same “let’s make up data where necessary to get the trend we’re looking for” process. If I wanted to get this published, and you were a reviewer, you’d reject it, wouldn’t you?

      • Jan P Perlwitz,
        Are you kidding?
        A temperature long-term trend measured on a barometer?

The trend comes from there; an excerpt from the article, p. 3170:

        “…a physically based state-of-the-art data assimilation system, to infer TL2m given only CO2, solar, and volcanic radiative forcing agents…”

        Ridiculous!

      • phi wrote:

        A temperature long-term trend measured on a barometer?

No one said that the temperature trend was measured with a barometer.

        Have you ever heard of the ideal gas law, which connects temperature, density, and pressure of a gas in a very fundamental way? One of the basic physical relationships in thermodynamics. Didn’t they teach you that at school?
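For reference, the thermodynamic relationship being invoked is the ideal gas law for (dry) air, in the form with the specific gas constant; this is standard textbook physics, not anything specific to Compo et al.:

```latex
% Ideal gas law for (dry) air: pressure, density, and temperature are linked,
% so a field of pressures and densities constrains the temperature field.
\[
  p = \rho \, R_d \, T
  \qquad \Longrightarrow \qquad
  T = \frac{p}{\rho \, R_d},
  \quad R_d \approx 287\ \mathrm{J\,kg^{-1}\,K^{-1}}.
\]
```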

      • Jan P Perlwitz,

        You are totally incoherent.
        You wrote:
“No one said that the temperature trend was measured with a barometer.”
        And then:
        “Have you ever heard of the ideal gas law, which connects temperature, density, and pressure of a gas in a very fundamental way?”

        What do you want to express?

        You still do not understand how ridiculous it is to use Compo et al. for independent confirmation of temperature curves?

The trend results from the CO2 modeling.
Thus:
Models are confirmed by thermometers, which are confirmed by models, which are confirmed by thermometers, which are ….

      • PhiPhoPhum, I suggest you ingest this and become even more enraged:
        http://contextearth.com/2013/10/26/csalt-model/

        Anyone with any knowledge of thermodynamics can model the global temperature trends. You seem to be upset that the professionals can do it as well.

      • WebHubTelescope,
        This is only a joke, the trend comes from CO2. Professionals prove that they can also be ridiculous.


      • phi | July 21, 2014 at 9:00 am |
        the burden of proof is certainly not for me.

Of course this guy phi cannot be burdened by proof. As he says, “the burden of proof is certainly not for me.” So it is not for him, which contravenes the pillars of science.

He would much rather just make assertions than construct any kind of proof or argument built upon observational evidence.

      • WebHubTelescope,
I do not have the burden of proof. This has not prevented me from showing that Compo et al. could in no way be used to confirm the temperature curves. Nor has it prevented me from providing three high-quality proxies showing the failure of these curves.

        http://imageshack.us/a/img21/1076/polar2.png
        http://oi60.tinypic.com/2jdq590.jpg

Jan Perlwitz, you are pretty cocky for a taxpayer-funded “climate scientist” who has been the senior author on only 2 publications since 1995.

      • phi said:


        phi | July 21, 2014 at 9:00 am |
        the burden of proof is certainly not for me.

Read what phi says here: “burden of proof is not for me”.

        That is like saying an experimental study is not for me, or a mathematical formulation is not for me. It’s like phi wants to turn back the clock to the dark ages of science, and rely on assertions and voodoo.

      • WebHubTelescope,
        Don’t play the fool.
It is not for me to prove the merits of adjustments. If climatologists claim that they have a reliable representation of the evolution of temperatures in the twentieth century, it is for them to be convincing. I remind you that so far there is no independent confirmation of these curves. Moreover, the known high-quality proxies demonstrate their failure.

      • phi, if you made a grammatical mistake, admit to it and don’t be such a cry-baby.

        What you said was “burden of proof is not for me”, which means that you do not personally care for scientific proof.

      • WebHubTelescope,
        I guess I make a bunch of grammar mistakes.
        The complete sentence was:

        “Since I don’t base any policy on these rotten curves, the burden of proof is certainly not for me.”

        Sorry if there is a grammar mistake.

    • WHT wrote

      “he believes that the global temperature ratchets permanently upward. Nevermind that El Nino events have been around for eternity, and by that reasoning the earth would be on an ever-upward trend for who knows how long”

      WHT, do you truly believe what you said, or are you crazy or just lazy?

That’s what Tisdale believes. No joke. He has no other explanation for warming, so he uses the one originally offered up by McLean, de Freitas, and Carter.

        This was squelched at the time:
        Foster, G., et al. “Comment on “Influence of the Southern Oscillation on tropospheric temperature” by JD McLean, CR de Freitas, and RM Carter.” Journal of Geophysical Research: Atmospheres (1984–2012) 115.D9 (2010).
        http://onlinelibrary.wiley.com/doi/10.1029/2009JD012960/pdf

        But the idea won’t die, much like all the other climate zombie theories.

Wow! A “climate zombie theory” that required the efforts of at least half a dozen climate heavyweights to refute.

        Just sayin…

      • Meant to add: Folks, that’s your tax dollars at work.

      • Tell G.Foster that his small business is subsidized. What is with you freaks?

      • And while you’re about it, tell that to Annan, Jones, Mann, Schmidt and Trenberth, at least.

        You only see what you want to, don’t you?

      • I mean, c’mon webbie, a whole bunch of guys from the top rung of the climate ‘A’ list ganging up against three nobodies?

        Overkill much?

        Makes David and Goliath look like an evenly-matched pair.

      • WHT, you did not address the part of the assertion that says he thinks temps ratchet permanently upwards forever.
Obviously temperatures have gone up and down, and you have not shown that he believes they always go only up.

      • You phatboy phreaks obviously never write research articles. Grant Foster (aka Tamino) obviously wrote the research rebuttal article and he then passed it on to others who placed their stamp of approval on it.

        The glaring assertion by Blob Carter and his minions was that well over half of the longer-term warming trend was due to ENSO. Foster revealed the chicanery of da down-under boyz by showing how they were deviously playing calculus tricks on the data. In truth, the ENSO adds a zero-sum factor on the warming trend, which is something that the AGW abnegators refuse to believe.

        Tamino can do these kinds of rebuttals in his sleep.

      • Why did Foster need their approval attached?
        When someone brings out the big guns, or even threatens to, they must see the target as being important.

      • “Grant Foster (aka Tamino) obviously wrote the research rebuttal article and he then passed it on to others who placed their stamp of approval on it.”

        Where I come from, you can only name yourself as a co-author if you’ve made some substantial contribution to the content.


      • Where I come from, you can only name yourself as a co-author if you’ve made some substantial contribution to the content.

They could have provided data, caught glaring errors, etc. Tamino is obviously a big enough man to share his work and insights. And you may have forgotten that if Tamino is wrong, they will also share the blame.

      • …of tangled webs, and all that

    • Don Monfort wrote:

      “You and Bob Tisdale are faulting climate models to not be able to achieve something that is objectively impossible to achieve, i.e., you are faulting the models for not being able to predict something that isn’t predictable.”

      Not at all, perlie.

      Original quote by Bob Tisdale:

      “Curiously, in their abstract, Risbey et al. (2014) note a major flaw with the climate models used by the IPCC for their 5th Assessment Report—that they are “generally not in phase with observations”—but they don’t accept that as a flaw. If your stock broker’s models were out of phase with observations, would you continue to invest with that broker based on their out-of-phase models or would you look for another broker whose models were in-phase with observations? Of course, you’d look elsewhere.”
      (http://wattsupwiththat.com/2014/07/20/lewandowsky-and-oreskes-are-co-authors-of-a-paper-about-enso-climate-models-and-sea-surface-temperature-trends-go-figure/)

      As I said. Tisdale is faulting the models for not being able to do something that is objectively impossible, i.e., predict the unpredictable. He believes this was a “major flaw” of the models. Tisdale also reveals here that he is equally clueless about the stock market. He actually believes that there are stock brokers out there who have available stock market models, with which the stock market could be predicted “in phase with observations”. (I wish such models existed and I had one.)

      And this is the guy from whom you copy your nonsense, monfie.

      Have you found figure 5. yet, perlie?

      I already had commented on this figure.

      • I’m amused at the irony of those who invested(not just money) on the basis of the climate models.
        ================

      • “As I said. Tisdale is faulting the models for not being able to do something that is objectively impossible, i.e., predict the unpredictable.”

        Why is it unpredictable, Jan Perlwitz? Are you now saying that it follows no laws of physics? :)

      • Jan Perlwitz now says prediction is impossible.
        Then what exactly did this study show?

        That by chance some wiggles wiggled up sometimes at the same time as temperature did.

        Wooohooo!

        And measured evidence of reality is not evidence that the models do not correspond.

        Can this get any funnier?

      • We agree, stock market models are useless. The people promoting them have low value to their investors, their customers.

        Some have opined that the average broker reduces a long term average market return of 8% to 6% with the 2% going to the broker. Compound that difference over many years using the rule of 72.
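The rule-of-72 arithmetic gestured at here works out as follows, using the 8% and 6% figures quoted above (the rule itself is only an approximation):

```latex
% Rule of 72: doubling time in years is roughly 72 divided by the annual
% percentage return. Over 36 years the 2-point difference compounds heavily.
\[
  t_{8\%} \approx \frac{72}{8} = 9 \text{ years}, \qquad
  t_{6\%} \approx \frac{72}{6} = 12 \text{ years};
\]
\[
  \text{after 36 years:}\quad
  (1.08)^{36} \approx 2^{4} = 16\times, \qquad
  (1.06)^{36} \approx 2^{3} = 8\times.
\]
```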

      • kim wrote:

        I’m amused at the irony of those who invested(not just money) on the basis of the climate models.

        Who has invested anything on the basis of individual climate model simulations?

      • Don Monfort

This study shows that Lewandowsky is the go-to guy when they desperately need to cook up some bogus crap to resuscitate the dying cause.

      • Jan, a gold mine is a hole in the ground with a liar at the top.
        ==============

      • desperately need to cook up some bogus crap to resuscitate the dying cause

        It’s somewhat ironic that your conspiracy theory involves Lewandowsky.

      • The ironies recurse furiously.
        =========

      • Even magazine reviews for cheap computer parts tell which performed better and which performed worse under differing scenarios.

        More evidence that Climate Science has a serious if not lethal illness.

      • ClimateGuy wrote on July 21, 2014 at 6:24 pm:

        Why is it unpredictable, Jan Perlwitz? Are you now saying that it follows no laws of physics? :)

Because it’s a physical system that is governed by chaotic dynamics, i.e., by dynamics where an arbitrarily small difference between two initial states will lead to an exponential divergence of the subsequent trajectories after some time has passed. And no, such behavior of a physical system is not against the laws of physics.
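Stated compactly, the property being described is exponential sensitivity to initial conditions, conventionally written with the leading Lyapunov exponent; this is a textbook formulation, not something specific to these models:

```latex
% Two trajectories starting a distance delta(0) apart separate roughly
% exponentially, at a rate set by the largest Lyapunov exponent lambda > 0,
% until the separation saturates at the size of the attractor.
\[
  \lvert \delta(t) \rvert \;\approx\; \lvert \delta(0) \rvert \, e^{\lambda t},
  \qquad \lambda > 0 ,
\]
\[
  t_{\text{predictability}} \;\sim\; \frac{1}{\lambda}\,
  \ln\!\frac{\text{attractor size}}{\lvert \delta(0) \rvert}.
\]
```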

        ClimateGuy wrote on July 21, 2014 at 6:30 pm:

        Jan Perlwitz now says prediction is impossible.

        “Now”? I am not saying anything different compared to before.

        Then what exactly did this study show?

Firstly, that the statistical distribution of the observed 15-year global temperature trends since 1880 isn’t distinguishable from the distribution of 15-year global temperature trends derived from an ensemble of model simulations. And, secondly, that whether simulated global temperature trends over 15 years since 1950 lie in the same tail of the statistical distribution as the observed 15-year temperature trends, or whether the simulated and observed trends lie in opposite tails of the distribution, largely depends on whether the simulated and observed ENSO variability over the 15-year periods are in phase or out of phase by chance.
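A schematic of the first of those two comparisons, under made-up assumptions: generate an "observed" record and an ensemble of runs from the same synthetic process, collect their sliding 15-year trends, and compare the two trend distributions with a two-sample KS test (overlapping windows are autocorrelated, so the p-value here is only indicative).

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)

def fifteen_year_trends(series):
    """Sliding 15-point trends of an annual series (deg per year)."""
    out = []
    for start in range(series.size - 15 + 1):
        window = series[start:start + 15]
        out.append(np.polyfit(np.arange(15), window, 1)[0])
    return np.array(out)

# Synthetic stand-ins: an "observed" annual record and 30 model runs that all
# share the same forced trend but have independent year-to-year variability.
years = 130
forced = 0.01 * np.arange(years)
obs = forced + 0.1 * rng.standard_normal(years)
sims = forced + 0.1 * rng.standard_normal((30, years))

obs_trends = fifteen_year_trends(obs)
sim_trends = np.concatenate([fifteen_year_trends(s) for s in sims])

# If the two sets of 15-year trends come from similar distributions,
# the two-sample KS test will not flag a significant difference.
stat, pval = ks_2samp(obs_trends, sim_trends)
print(f"KS statistic = {stat:.3f}, p-value = {pval:.3f}")
```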

      • Don Monfort

        It’s little “nuke em” joey, again. Another clown who calls just about anything ironic. Look up the word, joey.

      • I think he’s hypnotized himself with the handwaving.
        ===============

      • Jan Perlwitz wrote

        “Because it’s a physical system that is governed by chaotic dynamics, i.e., by dynamics where an arbitrarily small difference between to initial states will lead to an eponential divergence of the following trajectories after some time has past. ”

        Can you prove that your assertion is correct?

Edward N. Lorenz, “Deterministic Nonperiodic Flow”, J. Atmos. Sci. (1963), doi: 10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2, http://journals.ametsoc.org/doi/pdf/10.1175/1520-0469%281963%29020%3C0130%3ADNF%3E2.0.CO%3B2

Not quite true. The models have sensitive dependence because there is a feasible range of valid starting points and boundary conditions.

        The solution space looks something like this – http://rsta.royalsocietypublishing.org/content/369/1956/4751/F2.expansion.html

        ‘What defines a climate change as abrupt? Technically, an abrupt climate change occurs when the climate system is forced to cross some threshold, triggering a transition to a new state at a rate determined by the climate system itself and faster than the cause. Chaotic processes in the climate system may allow the cause of such an abrupt climate change to be undetectably small.’ NAS 2002

        Climate has control variables that push the system past thresholds every 3 or 4 decades.

Perlwitz is a drone with not much of a clue and a penchant for smug and misguided disparagement. We are not in the bully thread still; I wish he would drop the condescension act, especially in defense of what appears to be an inadequate idea, this time intended to prove that models sometimes accidentally get it right? Sounds suspiciously like a trivial result.

        I will do a Springer. Go away Perlwitz – we are bored with your nonsense.

      • Ellison shows zero skill in analyzing climate change based on his thousands of comments here with not one single example of analysis by his own hand. What a phony phreak.

      • “Rob Ellison” wrote:

Not quite true. The models have sensitive dependence because there is a feasible range of valid starting points and boundary conditions.

        Is this your private opinion or who says this?

Regardless of whether the physically possible range of initial conditions is small or large, it is always the “feasible range”. So the phrase “feasible range of valid starting points” doesn’t say anything meaningful. And “sensitive dependence” means that a small perturbation will lead to a large change in the solutions.

        The exponential divergence of the solutions for an arbitrarily small perturbation of the initial conditions is the essential feature of a system with deterministic chaotic dynamics. This is the difference to a non-chaotic deterministic system where the solutions stay in the vicinity of each other when the initial conditions are in the vicinity of each other.

        The solution space looks something like this – http://rsta.royalsocietypublishing.org/content/369/1956/4751/F2.expansion.html

        The graphic is consistent with what I said about chaotic dynamics. A small uncertainty in the initial conditions leads to a very large range of possible solutions.

        ‘What defines a climate change as abrupt? Technically, an abrupt climate change occurs when the climate system is forced to cross some threshold, triggering a transition to a new state at a rate determined by the climate system itself and faster than the cause. Chaotic processes in the climate system may allow the cause of such an abrupt climate change to be undetectably small.’ NAS 2002

        This text fragment talks about abrupt climate change, but not about what chaotic dynamics is.

        Climate has control variables that push the system past thresholds every 3 or 4 decades.

        Is this a fact? Who says this? And which ones are the control variables of climate, which push the system over those thresholds? How do these variables do that? Why every 3 or 4 decades?

        [Some smear of my person]

        How long does it usually take, before you have burned your newest alias?

        I will do a Springer. Go away Perlwitz – we are bored with your nonsense.

        This kind of request just decreases the likelihood that I am going to do that. And who is “we”? Pluralis majestatis? Multiple personalities? An anonymous crowd that has elected you as its speaker?

    • Mi Cro wrote:

      Who’s to say (other than those whose hate for fossil fuels leads them to any conclusion that allows them to vilify it) that the warming isn’t just the positive swing to the median trend and the pause is just the return to median?

      Peer reviewed scientific attribution studies say so. There is a whole chapter about this in IPCC Report 2013, like in previous IPCC Reports. The choice is to study the arguments and the evidence presented in these studies, or just to dismiss them, based on conspiracy ideation to keep the worldview whole.

      • Jan P Perlwitz commented

        Peer reviewed scientific attribution studies say so. There is a whole chapter about this in IPCC Report 2013, like in previous IPCC Reports. The choice is to study the arguments and the evidence presented in these studies, or just to dismiss them, based on conspiracy ideation to keep the worldview whole.

        Appeals to Authority, biased ones at that, nice!

        But it isn’t based on any conspiracy, it’s based on NCDC surface data evidence.

        I’d share it with you, but I make $400/hour, certified bank checks only please.

      • Pointing to the scientific literature and recommending to read it is not “appeal to authority”. Falsely claiming a fallacy where there is none is itself a fallacy.

        You will never be able to do science, if you refuse to study the science that has already been done.

      • Jan P Perlwitz commented

        Pointing to the scientific literature and recommending to read it is not “appeal to authority”. Falsely claiming a fallacy where there is none is itself a fallacy.
        You will never be able to do science, if you refuse to study the science that has already been done.

        I’ve read the science, including Hansen’s papers on GCM’s (since I have a decade and a half professionally supporting models and simulators), and found it chock full of bias, so I decided to get the surface records and look at them myself (over 122 million records), so your dismissal is without merit.

        I’ll have to do some serious work for the cabal now. This all has already been too distracting. And most of the arguments have become recursive anyway. Good Bye.

        I did see that you’re at GISS, you might want to go talk to the Model E guys, actually you should suggest they find someone who knows how to code simulations that are actually validated, that thing’s a piece of trash :) It’s a good thing you guys are government funded, you’d be out of business if you had to make a living by selling it.

      • It’s a MATTER OF CHANCE whether the door slams him from behind on the way out.
        =============

      • He was a tough guy, had all the time in the world until presented some evidence that didn’t agree with the cause, then ran off like a little girl (no offense Kim).

    • ClimateGuy wrote:

      I didn’t say they ARE, Jan. I’m saying that if it’s true that certain models do better at this task, then which ones they are should be important information for planning, no?

      But such a conclusion from the knowledge of what models ended up in what composites in the study wouldn’t be a valid one. You seem to think that if a simulation with a model ended up in the “best” composite, and another one with another model in the “worst” composite, it would mean that the first model was “better at this task” than the second model. But this conclusion is false. It’s a non-sequitur. You still haven’t understood this thing with the chaotic variability. Even simulations with a perfect model, which by definition would be the best for the task, could have ended up in the “worst” composite for some of the 15-year time periods. Actually they extremely likely would have, since the perfect model also would perfectly reproduce the observed statistical distribution of the SST trends in the Nino3.4 region. And for some other 15-year periods they extremely likely would have ended up in the “best” composites. And in some others in none.

      There still hasn’t been any convincing scientific reason presented why the authors should have named those models whose simulations ended up in the composites. Let the “skeptics” keep whining that it hasn’t been done. Even worse would have been if the authors had listed only the ones that provided the four “best” and the four “worst” for the 1998-2012 period, as demanded by AGW-“skeptics”, because there isn’t any scientific justification to single out those four compared to the members of the composites of any of the other 47 15-year time periods. It’s not consequential for the conclusions, and people like you would just have drawn silly and totally wrong conclusions from it, as one can see above.

      I’ll have to do some serious work for the cabal now. This all has already been too distracting. And most of the arguments have become recursive anyway. Good Bye.

      • Jan Perlwitz wrote

        “But such a conclusion from the knowledge what models ended up in what composites in the study wouldn’t be a valid one. You seem to think that if a simulation with a model ended up in the “best” composite, and another one one with another model in the “worst” composite, it would mean that the first model was “better at this task” than the second model. But this conclusion is false. It’s a non-sequitur. ”

        No, it’s not necessarily so.
        I’m saying if one model performed best over and over it would indicate that it’s preferable for a certain task. We can’t know because they hid the information.

      • ClimateGuy commented

        I’m saying if one model performed best over and over it would indicate that it’s preferable for a certain task. We can’t know because they hid the information.

        It’s self evident it didn’t, or they would be shouting at the top of their lungs about it.

      • Or at least it would be preferable under similar initial conditions.

      • Mi Cro wrote

        “It’s self evident it didn’t, or they would be shouting at the top of their lungs about it.”

        Even if it was a model that didn’t help the CAGW cause so much? :)

      • ClimateGuy commented

        Even if it was a model that didn’t help the CAGW cause so much? :)

        If it was somehow able to generate a plausible source for the pause, it’d be on the front pages of the NYT.

      • Mi Cro,
        Since the information is hidden, it would have required the authors announcing that.
        If it were to be a model or models that project less warming, that would be inimical to the cause of these skeptic haters.

      • ClimateGuy commented

        Since the information is hidden, it would have required the authors announcing that.
        If it were to be a model or models that project less warming, that would be inimical to the cause of these skeptic haters.

        This is my point, if they could have used it for this, they would have told us, but since they didn’t it has to be that this model has other flaws that cause it to not have this value. Remember they just cut chunks out of different runs to assemble their “best” results, so since their results are not from 4 complete runs, there’s something else horribly wrong with them.

    • Micro, if you are claiming the temp record is unreliable, then it would be impossible to find a natural cycle in the data

      • Joseph commented

        Micro, if you are claiming the temp record is unreliable, then it would be impossible to find a natural cycle in the data

        LOL, Either:
        A)The temp record is good enough to show warming, and therefore I can use it to show that warming isn’t what it’s proclaimed to be.
        or
        B)It isn’t suitable for either.

        I tend to believe B is the correct answer, but A is what’s believed by the Warmists. So since I have the skills to sort through 122 million records, I work with what we have. I improve my results by not going back past 1940, and more often not before 1950. And when you look at the data in less than global chunks, you get to see the data that’s conveniently thrown away.

      • micro sez —


        So since I have the skills …

        Where exactly are these skills?

      • “Where exactly are these skills?”
        Ask your Mom, she knows.

  268. Arcs_n_Sparks

    Dr. Curry and Mr. Hausfather,

    This has been an outstanding post, and I really appreciate Mr. Hausfather’s explanation and patience in responding to the various comments. I have learned a great deal.

  269. Jan provides an encouraging, for him, paper abstract titled “Well-estimated global surface warming in climate projections selected for ENSO phase”. He is encouraged because some of the CMIP5 models, those that included ENSO, were accurate regarding the pause. He is apparently encouraged because, to him, it is a sign of the veracity of the CMIP5 models.
    The title of the paper, and Jan’s excitement, imply that the selection of models can change for different phases. Is Jan OK with this? Also, just thinking, would ENSO compatible models also be compatible with the Stadium Wave Theory?

  270. They say a fool can ask more questions than a wise man can answer, but it looks like the wise Zeke did a yeoman’s job at keeping up. This was a helpful post (for this skeptic), and a somewhat enlightening thread. Thanks.

  271. ClimateGuy, and every one else who insists that the models which provided the simulations that were in the “best” and “worst” composites should have been named in the Risbey et al. study.

    At least, you should be aware then that it is not 4 models for the “best” composite and 4 models for the “worst” composite that would have to be named. Instead, the number of model names for the “best” composites would have been at least 48, and another 48 for the “worst” composites, assuming that composites were formed for all of the sliding 15-year time periods. And you should not be surprised if the number of model names had been around 144 or even more. Perhaps you are able to figure out yourself how I got to these numbers. I doubt it, though.

    • Models that accidentally get it right over very small periods?

    • Jan Perlwitz,

      If prediction is impossible, then why is this study claiming evidence that models did a decent job of prediction?

      • He still doesn’t understand just how damning this all is.
        =================

      • The plot’s been lost. It is hard to believe that serious modelers think this study is helpful to their cause. That’s why you have propagandists authoring this charade.
        ======================

      • Commenter ‘basicstats’ says it well over at Watts Up @ 2:33 AM near the end of the Tisdale/Risbey thread:

        ‘On limited reading there does not seem anything particularly wrong with this paper, just of limited significance. Everyone, it seems, agrees that GCMs can not replicate actual climate over 15 year timescales. It is demonstrated that choosing 4 models which do a half way decent job of reproducing ‘natural variability’ in terms of ENSO also reproduce observed temperatures rather better. This is hardly novel, let alone a vindication of GCMs, especially when the authors themselves suggest GCM agreement with observed natural variability is a matter of chance.

        Matter of chance.

        Better: MATTER OF CHANCE.
        ====================

      • Oops forgot to close quote at the end of the quoted paragraph. Also, the commenter added just a bit more to that paragraph.

        Trenchant, and damning.
        =============

      • ClimateGuy wrote:

        If prediction is impossible, then why is this study claiming evidence that models did a decent job of prediction?

        I am talking about predicting the chronological succession of events in the ocean-atmosphere system. Nature only provides a single realization of this. The path of this realization is not predictable with any model after some time has passed, due to the chaotic dynamics of the system. This would be true even if all the models were perfect, i.e., without any flaws at all. In contrast, the authors talk about a statistical estimate, which is being derived from a whole bunch of realizations. Each model simulation is like a single realization of Nature. The individual path of a realization in a chaotic system can’t be predicted, but one still can apply means of statistics to such a system.

        Imagine you have a die and throw it a thousand times (not chaotic, but the principle is the same). What the next throw will bring, or what exact sequence of numbers will occur is not predictable. However, one still can make a prediction about the statistical distribution of the numbers after 1000 throws, or what the average value of the numbers will approximately be. You already know what the average value would be, right? But you still couldn’t predict the outcome of the next throw.

        And this is also the difference between numerical weather forecast and climate projection with climate models. Weather forecast is the attempt to make a best possible prediction of the exact state of the system at all locations at a specific point in time in the future. This only works up to a predictability time limit, due to the chaotic nature of the system. In contrast, climate projections done with climate models are what-if statements about the statistical properties of the system for a given configuration (which can be changing in time) of external climate drivers. Weather forecast is analogous to the exact sequence of numbers the die produces, although the predictability time range of the former is longer. Climate projections are analogous to the statistical properties of the whole set of the throws with the die. And this is why the authors of the study can make their statements, even though the pathway of each individual realization (Nature or models) is not predictable.
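
        The die analogy can be made concrete with a short sketch (an illustration only; the Lorenz 1963 system cited earlier in the thread stands in for the chaotic dynamics, and all parameter choices and run lengths are arbitrary): two trajectories started a hair apart become completely different, while the time-averaged statistics of a small ensemble of perturbed runs remain comparatively stable.

```python
# Illustration only: Lorenz (1963) system as a stand-in for chaotic dynamics.
# Individual paths diverge from a tiny perturbation; ensemble statistics do not.
import random

def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, beta=8.0 / 3.0):
    # one crude forward-Euler step of the Lorenz-63 equations
    dx = s * (y - x)
    dy = x * (r - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def x_trajectory(x0, y0, z0, nsteps=5000):
    x, y, z = x0, y0, z0
    xs = []
    for _ in range(nsteps):
        x, y, z = lorenz_step(x, y, z)
        xs.append(x)
    return xs

traj_a = x_trajectory(1.0, 1.0, 1.0)
traj_b = x_trajectory(1.0 + 1e-8, 1.0, 1.0)        # initial states differ by 1e-8
print("final-state difference:", abs(traj_a[-1] - traj_b[-1]))   # roughly attractor-sized

random.seed(0)
time_means = []
for _ in range(20):                                 # small "ensemble" of perturbed runs
    xs = x_trajectory(1.0 + random.uniform(-0.01, 0.01), 1.0, 1.0)
    time_means.append(sum(xs) / len(xs))
print("spread of ensemble time-means:", max(time_means) - min(time_means))  # typically much smaller
```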

      • Jan,

        Weather forecast is analogous to the exact sequence of numbers the die produces, although the predictability time range of the former is longer. Climate projections are analogous to the statistical properties of the whole set of the throws with the die.

        You explain here exactly why GCM’s and this study are flawed. ENSOs and weather systems are chaotic and over a short period have minimal effect on the “climate”. AMO/PDO on the other hand are system states that last 20-40 years, and there are very good reasons to think that they are the cause of the entire modern warming. These should be modeled by GCM’s, but they don’t do this either, and they have a far bigger effect on “climate”, while the smaller scale chaotic artifacts have no effect on “climate”. You can see the impact of these states in regional surface temperature records, but since all of that information is thrown away, no one realizes it.

  272. I have been noting smear after smear, Perlwitz. You can seriously object to being called a clueless drone? Tough.

    ‘Is this your private opinion or who says this?’

    James McWilliams? Tim Palmer? I have given references – indeed I included a sketch from a Royal Society Philosophical Transactions article. Your snide dismissals merely make you look ridiculous.

    ‘Regardless whether the physically possible range of initial conditions is small or large, it is always the “feasible range”. So the phrase “feasible range of valid starting points” doesn’t say anything meaningful. And “sensitive dependence” means that a small perturbation will lead to a large change in the solutions.’

    The feasible range of starting points and the spread in boundary conditions provide physically feasible limits that ultimately determine the topology of the solution space.

    ‘The exponential divergence of the solutions for an arbitrarily small perturbation of the initial conditions is the essential feature of a system with deterministic chaotic dynamics. This is the difference to a non-chaotic deterministic system where the solutions stay in the vicinity of each other when the initial conditions are in the vicinity of each other.’

    Did I not quote the NAS? Yes I did – go back and look at the definition.

    ‘The graphic is consistent with what I said about chaotic dynamics. A small uncertainty in the initial conditions leads to a very large range of possible solutions.’

    Seriously? You think so? That’s why I linked the sketch. That one or more models is accidentally correct over short time frames is something that should be anticipated. The trick is to determine the probability of an outcome in a perturbed physics ensemble.

    ‘This text fragment talks about abrupt climate change, but not about what chaotic dynamics is.’

    The definition of abrupt climate change mentions chaotic elements in the climate system. But then there is a whole book from the NAS linked to the definition.

    ‘Climate has control variables that push the system past thresholds every 3 or 4 decades.’

    ‘Is this a fact? Who says this? And which ones are the control variables of climate, which push the system over those thresholds? How do these variables do that? Why every 3 or 4 decades?’

    Control variables are a defining feature of chaotic systems. I don’t know – they just do.

    e.g. http://www.geomar.de/en/news/article/klimavorhersagen-ueber-mehrere-jahre-moeglich/ and http://heartland.org/sites/all/modules/custom/heartland_migration/files/pdfs/21743.pdf

    ‘How long does it usually take, before you have burned your newest alias?’

    This is my name – and has always been known here from my first post a couple of years ago. Chief Hydrologist was the moniker referencing the Simpsons. There was a Generalissimo Skippy who was a climate warrior on the blue horse called Shibboleth. Google Chrome signs me in as Rob Ellison – and quite frankly I went with it because I am bored with humourless twats who think they can muddy the waters further.

    ‘This kind of requests just decreases the likelihood that I am going to do that. And who is “we”? Pluralis majestatis? Multiple personalities? An anonymous crowd that has elected you as its speaker?’

    I had little hope – but surely you have seen the consensus response to your banal, trivial and misguided arguments? Your endorsement by webby is unsurprising coming from his position of noxious partisan insult, freaky physics and incompetent math. By all means educate him on chaos – but I doubt that is at all possible. As it is – you have far from distinguished yourself. Instead you proceeded from bad faith – lecturing and hectoring replete with straw men and poor attempts at ridicule – to ridiculous posturing. A sad spectacle you have made of yourself.

    My essential point remains – the likelihood more than not of no warming for decades at least emerging from decadal climate shifts. Predicting that seems the essential quest – and some did. It emerges from ideas and not from models. Certainly not from webby’s absurdly simplistic back of the envelope math and woefully inadequate physics. Have a look at his bathtub ENSO model. You are obviously well behind the curve – only just in front of webby by a nose. How embarrassing for you. If you were nicer we would let it pass.

    • Skippy Rob Ellison said:

      “My essential point remains – the likelihood more than not of no warming for decades at least emerging from decadal climate shifts. ”
      ____
      But this general statement says nothing interesting, and in fact, obscures a more interesting question as to whether the system as a whole continues to gain energy during a period of a so-called tropospheric “hiatus” in temperature gains. Continuing to just mouth “no warming” for decades, and focusing on tropospheric sensible heat (which is highly dependent on ENSO) misses larger and more interesting dynamics of changes to the oceans and cryosphere. Increasing GH gases necessarily means the system will accumulate more energy, and the changes in flux of energy from ocean to atmosphere caused by cool phase PDO or ENSO changes don’t change the fundamental external forcing caused by increases in GH gases. The energy can appear to hide to those who want to focus myopically on sensible tropospheric heat, but a broad perspective reveals the system very likely continues to accumulate energy quite strongly.

      • R. Gates commented

        The energy can appear to hide to those who want to focus myopically on sensible tropospheric heat, but a broad perspective reveals the system very likely continues to accumulate energy quite strongly.

        Unless it’s not:
        http://ocean.mit.edu/~cwunsch/#C.%20Wunsch%20and%20P.%20Heimbach,%202014,%20Bidecadal%20thermal%20change%20in%20the%20abyssal%20ocean,%20in%20press,%20J.%20Phys.%20Oc.,%20(pdf)

      • Randy the video guy repeats a narrative for the umpteenth time that has been discussed endlessly and was pretty tedious the first time.

      • R Gates wrote

        “But this general statement says nothing interesting, and in fact, obscures a more interesting question as to whether the system as a whole continues to gain energy”

        Which obscures a more interesting question of how much warming to expect, and since the figures given previously are dependent on feedbacks and were calculated upon the very same “obscuring” R Gates mentions, it is R Gates who obfuscates.

      • “…a more interesting question as to whether the system as a whole continues to gain energy during a period of a so-called tropospheric ‘hiatus’ in temperature gains.”

        An interesting question indeed. Now you scurry on over to Obama and tell him to divert some of the billions being poured down the toilet of innumerable statistical revisions of the limited data we actually have, and start getting real data on that ocean heat content about which we actually know very little.

        If we could land a man on the moon, why can’t we measure the damn ocean heat content, before turning over control of the global energy economy to R. Gates, fan and Jim D?

        I’ll answer myself –

        Because then where would all the current crop of non-statistican climate scientists whose “science” consists of statistical manipulation of data, get their funding?

        And what if the real data was, like the surface temp data of the last 17 years, not “helpful?”

        Why measure something that might well undermine your political movement, when you can use statistics to create any new supporting “data” you need?

      • Rob Starkey

        Gates
        But isn’t it true that if CO2 does not result in fairly rapid warming then almost all of the hyped CO2 mitigation activities do NOT make sense? Imo, that is a critical issue

      • ‘These shifts were accompanied by breaks in the global mean temperature trend with respect to time, presumably associated with either discontinuities in the global radiative budget due to the global reorganization of clouds and water vapor or dramatic changes in the uptake of heat by the deep ocean. Similar behavior has been found in coupled ocean/atmosphere models, indicating such behavior may be a hallmark of terrestrial-like climate systems.’ https://pantherfile.uwm.edu/kswanson/www/publications/2008GL037022_all.pdf

        But just to reprise – because these twits constantly go over the same ground.

        It is probably both. In the first case the residual trend in 1976/1998 warming is dramatically reduced. In the second case satellites show that most of the warming was cloud related.

        ‘One important development since the TAR is the apparent unexpectedly large changes in tropical mean radiation flux reported by ERBS (Wielicki et al., 2002a,b). It appears to be related in part to changes in the nature of tropical clouds (Wielicki et al., 2002a), based on the smaller changes in the clear-sky component of the radiative fluxes (Wong et al., 2000; Allan and Slingo, 2002), and appears to be statistically distinct from the spatial signals associated with ENSO (Allan and Slingo, 2002; Chen et al., 2002). A recent reanalysis of the ERBS active-cavity broadband data corrects for a 20 km change in satellite altitude between 1985 and 1999 and changes in the SW filter dome (Wong et al., 2006). Based upon the revised (Edition 3_Rev1) ERBS record (Figure 3.23), outgoing LW radiation over the tropics appears to have increased by about 0.7 W m–2 while the reflected SW radiation decreased by roughly 2.1 W m–2 from the 1980s to 1990s (Table 3.5).’ AR4 WG1 s3.4.4.1

        The data is confirmed by ISCCP-FD and consistent with changes in ocean heat (IPCC, 2007). It shows most of the 1976/1998 warming caused by cloud changes.

        What do we know about cloud changes?

        http://s1114.photobucket.com/user/Chief_Hydrologist/media/cloud_palleandlaken2013_zps3c92a9fc.png.html?sort=3&o=132

        In the absence of compelling alternative data – we are entitled to go with the evidence – and not Randy the video guy’s simplistic narrative repeated ad nauseam.

  273. Jan: “Is this a fact? Who says this?”

    Why don’t you look at why instead of pestering the messenger? The oceans have 1000 times the thermal capacity of the troposphere, and due to land mass configuration and the Coriolis effect they are divided into three compartments with choke points restricting flow: the North Atlantic basin, the North Pacific basin and the southern hemisphere oceans. The volumes are roughly 1x for the NA, 2x for the NP and 4x for the southern hemisphere. Set up a simple three compartment model and see what you get.

    Then search for Drake Passage, Meridional and Zonal sea surface temperature gradients and J.R. Toggweiler GFDL.
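
    A toy version of that suggestion, with every number made up purely for illustration (relative volumes 1x, 2x, 4x; assumed exchange and surface-relaxation rates), only shows the qualitative point that the smallest basin responds fastest to a common forcing while the largest lags:

```python
# Toy three-box sketch: nothing here is calibrated, all rates are assumed.
import numpy as np

vols = np.array([1.0, 2.0, 4.0])   # relative volumes: N. Atlantic, N. Pacific, southern oceans
k_surface = 0.1                    # per year, relaxation toward the surface forcing (assumed)
k_exchange = 0.02                  # per year, exchange through the choke points (assumed)
forcing = 1.0                      # step forcing switched on at t = 0, arbitrary units

T = np.zeros(3)                    # box temperature anomalies
for year in range(50):
    mean_T = (vols * T).sum() / vols.sum()
    dT = k_surface * (forcing - T) + k_exchange * (mean_T - T)
    T = T + dT / vols              # larger volume, slower response
print("anomalies after 50 yr (NA, NP, SH):", np.round(T, 2))
```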

  274. Has any thread here had 2,000 comments before? This should be the 1,955th.

    • Lost in the nearly 3MB of comments is a comment from Judith Curry addressing that.

      Zeke should post more often. ;-)

      In the mean time, here’s the top 20 contributors to the thread.

      Since there isn’t a way to currently embed tables (there is, but it’s “yuck”), here’s tabular data using the “|” to separate columns.

      RANK|AUTHOR|NUMBER OF COMMENTS|PERCENT OF COMMENTS
      1|Steven Mosher|216|11.1
      2|Don Monfort|103|5.3
      3|Jan P Perlwitz|92|4.7
      4|WebHubTelescope (@WHUT)|81|4.2
      5|ClimateGuy|75|3.9
      6|Stephen Rasey|70|3.6
      7|Brandon Shollenberger|70|3.6
      8|Carrick|59|3.0
      9|sunshinehours1|56|2.9
      10|Mi Cro|55|2.8
      11|Matthew R Marler|50|2.6
      12|David Springer|50|2.6
      13|Zeke Hausfather|47|2.4
      14|Wagathon|36|1.8
      15|mwgrant|33|1.7
      16|phi|31|1.6
      17|nickels|31|1.6
      18|A fan of *MORE* discourse|29|1.5
      19|kim|25|1.3
      20|Jim D|25|1.3

      • Steven Mosher

        The next installment on TOBS will be a defining moment for skeptics.

      • Steven Mosher commented

        The next installment on TOBS will be a defining moment for skeptics.

        2 part answer.
        First part, we need to increase the number of posts in this thread.

        Second part, If you quit using the numerical daily average temp, and used the measured Min and Max temps (which you might even be able to average after the fact), and then subtract day over day values by station (like I do), TOBS adjustments become unnecessary. Now, before you reply saying I’m wrong, really think about the effect of subtracting today’s min (or max) temp from yesterday’s min (or max) temp on a station by station basis and what a time of observation error (which would not change day to day) means to that difference.
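
        A sketch of the differencing being described, on made-up station series rather than the actual co-op data: each station sees the same seasonal cycle plus noise plus its own fixed observation-time offset, and the fixed offsets drop out of the day-over-day differences. (Whether a time-of-observation effect really is constant from day to day is questioned just below.)

```python
# Made-up data; only meant to show that a constant per-station offset
# cancels when you difference today's Tmin against yesterday's Tmin.
import numpy as np

rng = np.random.default_rng(42)
days = 365
seasonal_tmin = 10 + 8 * np.sin(2 * np.pi * np.arange(days) / 365)   # invented seasonal cycle

stations = {}
for name, fixed_offset in [("A", 0.0), ("B", 0.7), ("C", -0.4)]:
    # same underlying climate + noise + a fixed observation-time bias per station
    stations[name] = seasonal_tmin + fixed_offset + rng.normal(0.0, 1.0, days)

# difference day over day per station, then average the differences across stations
diffs = np.mean([np.diff(series) for series in stations.values()], axis=0)
print("mean day-over-day Tmin change:", round(float(diffs.mean()), 4))   # the offsets never appear
```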

      • @Mi Cro 7/22 at 5:07 pm |
        Second part, If you quit using the numerical daily average temp, and used the measured Min and Max temps (which you might even be able to average after the fact), and then subtract day over day values by station (like I do), TOBS adjustments become unnecessary.

        I don’t know that TOBS adjustments become unnecessary, i.e. equal to zero. The fatal assumption is that the times between day over day maxes are of equal length, when because of weather fronts they are not necessarily equal from one max to the next. The same goes for the day over day mins.

        BUT!! the unequal and unknown periods in time between the mins and the maxes (the only measurements made!) are further why the uncertainty in the average temperature, the mean standard error of the TAVE for a month, is large, on the order of 0.5 to 0.8 deg C. The 95% confidence of the TAVE is plus or minus 1.0 to 2.0 deg C, which makes TOBS adjustments comparatively insignificant. This TAVErmse uncertainty is being ignored early in the analysis and its mishandling has contaminated all homogenization, breakpoint, uncertainty, and confidence calculations downstream of the TAVE calculation.

      • Stephen Rasey commented

        I don’t know that TOBS adjustments become unnecessary, i.e. equal to zero. The fatal assumption is that the times between day over day maxes are of equal length, when because of weather fronts they are not necessarily equal from one max to the next. The same goes for the day over day mins.
        BUT!! the unequal and unknown periods in time between the mins and the maxes (the only measurements made!) are further why the uncertainty in the average temperature, the mean standard error of the TAVE for a month, is large, on the order of 0.5 to 0.8 deg C. The 95% confidence of the TAVE is plus or minus 1.0 to 2.0 deg C, which makes TOBS adjustments comparatively insignificant. This TAVErmse uncertainty is being ignored early in the analysis and its mishandling has contaminated all homogenization, breakpoint, uncertainty, and confidence calculations downstream of the TAVE calculation.

        Let’s start first by ignoring weather fronts.
        Second, that Min/Max recording thermometers and modern thermometers all do what they’re supposed to.
        That leaves manually read thermometers. Daily Min temp happens just before sunrise. Measure early or late and you miss the min, but if you measure it the next day at the same time, you’re going to get the same error, and the difference between the two methods will be very small. And if you take the difference between yesterday’s and today’s min on a station by station basis where the measurement is done the same way, I think you get the best possible value; plus any error in the measurement can’t get any larger because errors don’t accumulate, it can only get as large as yesterday’s and today’s errors added together.

        Weather can alter the actual min (i.e. it doesn’t happen at sunrise), but it’s random, and will likely happen at all stations. It doesn’t seem to affect the day over day changes in Min, which do evolve based on the seasons. Here’s a chart of day over day change for >N23 Lat, 1950-2010
        http://wattsupwiththat.files.wordpress.com/2013/05/clip_image022_thumb.jpg?w=864&h=621

      • @Mi Cro 7/23 at 9:55 am
        What you say is largely true. But do not miss the forest for the trees.

        The uncertainty in the TAVE anomaly is underestimated because the processes do not preserve the uncertainty derived from calculating the TAVE from the TMin and TMax measurements. The mean standard error of the TAVE monthly anomaly doesn’t come from 31 daily TAVE measurements, but from 31 daily TMin and 31 daily TMax measurements.
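
        The mechanics of that propagation can be sketched as follows; the per-reading sigmas are assumed numbers, and the final figure depends entirely on them and on the (questionable) assumption that the 31 daily errors are independent, which is where it differs from the larger 0.5 to 0.8 deg C figures quoted above.

```python
# Sketch of the propagation only; sigma values are assumed, not measured.
import math

sigma_tmin = 1.0        # assumed 1-sigma uncertainty of a daily TMin reading, deg C
sigma_tmax = 1.0        # assumed 1-sigma uncertainty of a daily TMax reading, deg C
n_days = 31

# TAVE = (TMin + TMax) / 2, so its daily uncertainty comes from both readings
sigma_tave_daily = 0.5 * math.sqrt(sigma_tmin**2 + sigma_tmax**2)

# monthly standard error, valid only if the 31 daily errors are independent
sigma_tave_monthly = sigma_tave_daily / math.sqrt(n_days)

print("daily TAVE sigma:", round(sigma_tave_daily, 3))      # ~0.71 under these assumptions
print("monthly TAVE SE:", round(sigma_tave_monthly, 3))     # ~0.13 if errors are independent
```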

      • Stephen Rasey commented

        The uncertainty in the TAVE anomaly is underestimated because the processes do not preserve the uncertainty derived from calculating the TAVE from the TMin and TMax measurements. The mean standard error of the TAVE monthly anomaly doesn’t come from 31 daily TAVE measurements, but from 31 daily TMin and 31 daily TMax measurements.

        Oh, I absolutely agree, somewhere up thread I suggested to Steven that they should stop using Tave altogether as it’s not an actual measurement, and they’re just compounding unknowns, and then coming up with some silly correction adding even more errors.

        As I learned more about what was being done to the data, well as a data professional I find it… unacceptable.
        So, I got the data and worked with the data alone, no corrections by someone 50 years after it was taken by someone who thinks they know better than the people who actually took the measurement, idiotic.

        Anyone interested in this data (except Jan, he has to pay for it ;) ) can find it if you follow the url in my name. And due to the conversation about a 1×1 grid, I’m building a data set for that sized grid. What’s interesting about that is almost all of the grids have a single station, almost none of which have consistent coverage over the period of 1950-2013, which means most of the series that are published are made up, cause they aren’t based on readings lol! It’s also interesting that since making this data available, someone with a Spanish domain is about the only one whose downloaded all of it. Good for them!

      • ‘who has’, but wonderful work, Master of Ceredatamonies; I’d download it if I could use it, but I don’t have to. You make better sense of it than I would.
        =========================

      • @Mi Cro at 7/23 11:37 am
        I thought we were in agreement when you brought up the Min and Maxes as part of TOBS. I just took the opportunity to reflect that the bigger problem in the mean standard error of the TAVE, TAVErmse, also derives from the Mins and Maxes. TAVErmse, and the corresponding TAnomaly(rmse), are much too big to ignore, particularly when BEST is making 2-15 year segments with a mean standard error of the slope that must swamp any climate signal.

        almost all of the grids have a single station, almost none of which have consistent over the period of 1950-2013, which means most of the series that are published are made up, cause they aren’t based on readings lol!

        Could you please restate with more clarity?

        “Almost all the grids have a single station” Do you mean all the cells within a grid? If so, I find that rather surprising. I would have expected more clustering.

        What percentage of these single station cells, if that is what they are, are zombies?

        “almost none of which have consistent over the period of 1950-2013, “ Well, I’m not surprised there has been a change of equipment or location at most stations over a 63 year span. But how bad is the consistency? What may be systematic about the changes?

        which means most of the series that are published are made up,
        What series? And who published them?
        Can you provide an example?

      • Stephen Rasey commented
        Hopefully I can answer most of your questions; otherwise it will be tomorrow afternoon probably. First, I’ve written a lot at the url in my name in this post; earlier posts point to the actual generated data.
        For a set of min/max latitude/longitude points I find all of the stations within those coordinates and average the data from those selected stations. Any station in the NCDC Global Summary of the Day becomes part of that set of averages. I have various data sets: continents, latitudinal bands, 10 x 10 degree boxes. I’m running 1×1 boxes now. It started at the South Pole and is working its way north; it was near 23N earlier. Spot checking the reports, almost every one had a single station in it, and did not have any records for some number of years during the period of 1950-2013. So that station did not record any data during those particular years, other years it had a partial to full year of data. I don’t infill, so there’s either data or there isn’t. I feel that this is what’s measured, and all of the other temp series (GISS, BEST, CRU) all infill, they all add data to areas that were not measured. I wanted to see what our measurements alone had to say.
        For more details, at least until tomorrow, follow the link in my name; there are 5 or so blogs there. And if you want data, follow an earlier post to SourceForge.
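
        The binning step being described reduces to something like the sketch below (invented station records standing in for the GSOD files): assign each station to its 1×1 degree cell by coordinates, average within the cell, and leave empty cells empty rather than infilling.

```python
# Invented records (lat, lon, value); only the binning logic is the point.
from collections import defaultdict

records = [
    (40.3, -83.1, 0.12),
    (40.7, -83.9, -0.05),
    (41.2, -83.5, 0.02),
    (-33.9, 151.2, 0.08),
]

cells = defaultdict(list)
for lat, lon, val in records:
    cell = (int(lat // 1), int(lon // 1))      # 1x1 degree cell index
    cells[cell].append(val)

for cell, vals in sorted(cells.items()):
    print(cell, "stations:", len(vals), "mean:", round(sum(vals) / len(vals), 3))
# cells with no stations simply do not appear: no infilling
```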

      • “Second part, If you quit using the numerical daily average temp, and used the measured Min and Max temps (which you might even be able to average after the fact), and then subtract day over day values by station (like I do), TOBS adjustments become unnecessary. Now, before you reply saying I’m wrong, really think about the effect of subtracting today’s min (or max) temp from yesterday’s min (or max) temp on a station by station basis and what a time of observation error (which would not change day to day) means to that difference.”

        Better than Mosher’s drekwork

      • Yup, everyone in the world is wrong but you. The delusional mind at work.

  275. @Mi Cro at 7/23 11:11 pm |
    So that station did not record any data during those particular years, other years it had a partial to full year of data. I don’t infill, so there’s either data or there isn’t.
    I would like to suggest that once your 1×1 grid assignment is complete, an early product should be a multiple line chart of
    number of grid cells (y)
    vs Year(x)
    by Number of stations less than or equal to N (Line).
    It will give a contour plot in time of the grid coverage.

    If the zombie station observation by Goddard is important, then we should see a fall off in the number of grids with one station or two stations in them. If we don’t see the drop off, then the zombies have been removed from the cells with many stations and might be a lesser problem.

    A little trickier would be another pass for the same chart but only include stations with 20 years of nearly unbroken record, say less than 5% missing.

    Dumb question, but are you keeping track of grid cells with zero stations?
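
    The kind of coverage chart being suggested could be sketched like this, with made-up per-cell station counts standing in for the real station table and matplotlib assumed for the plotting:

```python
# Made-up per-cell station counts; the real input would be the station table.
import random
import matplotlib.pyplot as plt

random.seed(1)
years = list(range(1950, 2014))
n_cells = 500
station_counts = {y: [random.randint(0, 5) for _ in range(n_cells)] for y in years}

for n in (1, 2, 5):
    # number of cells with at least one but no more than n reporting stations
    line = [sum(1 for c in station_counts[y] if 0 < c <= n) for y in years]
    plt.plot(years, line, label="cells with 1 to %d stations" % n)
plt.xlabel("Year")
plt.ylabel("Number of grid cells")
plt.legend()
plt.show()
```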

    • Stephen Rasey commented

      I would like to suggest that once your 1×1 grid assignment is complete, an early product should be a multiple line chart of
      number of grid cells (y)
      vs Year(x)
      by Number of stations less than or equal to N (Line).
      It will give a contour plot in time of the grid coverage.

      As part of the build I also build a station table, that lists each station that is included in an area, how many samples each station has (and TAvg, and Variant) by year, you could use the one generated for the global station area to do this.

      If the zombie station observation by Goddard is important, then we should see a fall off in the number of grids with one station or two stations in them. If we don’t see the drop off, then the zombies have been removed from the cells with many stations and might be a lesser problem.

      In the same station table I include station lat/lon (I also generate a google maps KML file, but it’s just got a pin so far), so you can see stations that stop generating data for some years then start back up, as well as stations that have different station numbers with the same coordinates.

      A little trickier would be another pass for the same chart but only include stations with 20 years of nearly unbroken record, say less than 5% missing.

      I include this in the parameters that select what stations to include already (as well as how many samples per year it has to have for x number of years), it doesn’t do exactly what I had intended, as it includes years that do not meet these requirements as long as the same station does meet them. But if you get real picky station counts drop rapidly. Conversely, if you’re not picky you get lots of stations with very sparse data.

      Dumb question, but are you keeping track of grid cells with zero stations?

      No, not directly, but I generate a csv file for each area with data, and I don’t generate it if it’s empty, and they can just be counted, and I mark the name with the location.
      I have been running out of table space during this run (which is nearing its second week of running), and since I write the table out once it’s generated I’ve been deleting them. I have 200GBs of table space, which holds 30-40% of the earth. So I don’t have any of the southern Hemisphere’s tables any more, though I do of course have all of the csv files. You should follow the url to sourceforge (this post’s url) and grab some of the data to get a better look.

      While I’ve had a good response to what I’m doing, because it’s contrarian and I’m not at a point where I could write a paper and get it through a review process past people who think homogenization, kriging, infilling is required, I just don’t have the time, so I’ve in general stopped making updates to the code. If something were to change, I might develop additional analysis, I also have access to a 3d data tool, but there’s a lot to the tool that will take time, and if it’s all just a novelty, it’s not worth the effort.
      But what the actual data shows is there is no warming trend in max temps at all, and min temps flutter around regionally not globally. And that’s sort of the problem, people don’t know what to make of it, so they don’t.

      • Thanks again. Well, I make low climate sensitivity to CO2 out of it.
        ===========

      • kim commented on

        Thanks again. Well, I make low climate sensitivity to CO2 out of it.

        Between surface data, nightly cooling, and the IR temp of sky and clouds it is apparent to me, that even if DWIR adds some additional energy it is near meaningless, and not in control of anything.
        Max temps aren’t going up; mins might be in the more tropical areas; summers might have a little higher average temp, things like that, but mostly inconsequential. Other than to anti-fossil fuel greens looking for political influence, that is: useful to them, and all those lining their pockets, not so much for the rest of us.

        For instance Michael Moore has 8-9 homes, he’s real big on all of this stuff, pretty much anti- everything that helps people do better for themselves, I wonder how much Co2 gets generated keeping all of those homes room temp? Same with Al Gore.
        Actually it’s sad.

      • micro sez:


        For instance Michael Moore has 8-9 homes …

        And that makes your own analysis correct how?

      • WebHubTelescope (@WHUT) commented

        And that makes your own analysis correct how?

        They are two separate facts that stand on their own.

      • And your “analysis” is facts how?

      • WebHubTelescope (@WHUT) commented

        And your “analysis” is facts how?

        Because averaging 4 and 6 = 5 is a fact.

      • So you have one research article that you have penned. That’s good.
        But now you think that you can somehow show that the global temperature trend is not what scientists with years of experience are telling us?
        That it is much flatter than what seems to be obvious, and what the BEST team have substantiated with their independent analysis.

        We will all wait for how micro convinces everyone that only his approach is correct.

      • WebHubTelescope (@WHUT) commented

        So you have one research article that you have penned.

        One related to Climate Change.

        That’s good.
        But now you think that you can somehow show that the global temperature trend is not what scientists with years of experience are telling us?
        That it is much flatter than what seems to be obvious, and what the BEST team have substantiated with their independent analysis.
        We will all wait for how micro convinces everyone that only his approach is correct.

        “You can lead a horse to water, but you can’t make him drink”
        I’m not planning on convincing anyone, either you get basic math, and believe in its transparency, or you don’t.


      • One related to Climate Change.

        No you don’t. If you did, I would see a citation to a peer-reviewed work you did related to climate change.

      • WebHubTelescope (@WHUT) commented

        So you have one research article that you have penned. That’s good.

        I have this climate related research, I presumed that is what you were referring to.

        No you don’t. If you did, I would see a citation to a peer-reviewed work you did related to climate change.

        But maybe what you were referring to was the non-climate related paper that I published.

      • WHT goes from foolish strawman fallacy:
        “And that makes your own analysis correct how?”
        As if Mi Cro argued that, to fallacy of appeal to authority…funny, WHT constantly tries to puff up his “not worth salt” shenanigans.

      • You make grand claims that everyone else is doing the global temperature time series incorrectly and that only you are doing it the correct way.

        The top-level post is arguing about adjustments to the temperature, not a complete redo according to MiCro.

        You truly are tilting at windmills.

      • WebHubTelescope (@WHUT) commented

        The top-level post is arguing about adjustments to the temperature, not a complete redo according to MiCro.

        I’m completely on topic, I’m arguing that you shouldn’t be doing any adjustments. And when you look at the data with adjustment vs without, they are completely different!
        Now, I get that the starting data is lacking, but there are areas that have decent sampling, and guess what, after throwing out about 8% of the worst of it, it’s consistent with the US data which is almost 70% of the data set.

      • BTW, I can’t help that the climate guys, instead of trying to analyze the measured data, decided that what we really needed was a Rube Goldberg Machine to run the data through.
        But I’ve learned that when you’re working with data, you do a sanity check to make sure what you get out is a better version of what went in, not completely different.

      • Mi Cro
        You are so correct. Just use the actual data. Later if one feels justified one may publish projected corrections but to correct and obscure the adjustments is unethical in most scientific fields.

        I too feel the practitioners are sincere but the original data is paramount.
        Scott

      • Scott commented on

        Later if one feels justified one may publish projected corrections but to correct and obscure the adjustments is unethical in most scientific fields.

        You should be able to overlay them, and you should be able to see that the fixed data is the same, but better.

        I too feel the practitioners are sincere but the original data is paramount.

        Most days I do too, and in general the adjustments seem reasonable, until you compare them.

      • Naive question here: Where is UHI in this 122,000,000 data point record, or can it be isolated? Surely you’d think that effect would nudge the record up a little more than you show.
        =============

      • “Naive question here: Where is UHI in this 122,000,000 data point record, or can it be isolated? Surely you’d think that effect would nudge the record up a little more than you show.”
        Well, I can’t say. But remember, temperature isn’t what’s important, it’s the daily difference in temp. And how would you describe the daily difference of temp change of a city? Its max temp would slowly change over time, but max temp tomorrow is going to be about the same as today, right? Absolute max temp has gone up in some places, but the key is the difference, basically the difference between how much the temp went up today and then drops tonight, and vice versa. Those are the keys, and besides doing a yearly average of difference, the plot of daily difference in the extratropics shows the evolution due to the change in the length of day. This slope has changed, maybe that’s your UHI?

      • So you don’t want adjustments?
        Then the warming trend over the past 100 years is worse yet
        http://www.ncdc.noaa.gov/monitoring-references/faq/temperature-monitoring.php

        The SST comprise 70% of the signal and without the necessary adjustments, the calibration issues of historical raw temperature readings will not get corrected.
        http://www1.ncdc.noaa.gov/pub/data/cmb/temperature-monitoring/image001.jpg

        Are you really that anti-science?

      • “The SST comprise 70% of the signal and without the necessary adjustments, the calibration issues of historical raw temperature readings will not get corrected.”

        122 million measurements, and you can’t use them to make a time series because some how they have to be adjusted first?

        LMAO, you rode the short bus, right?

      • WebHubTelescope (@WHUT) commented

        Are you really that anti-science?

        Are you really this stupid?

        So you don’t want adjustments?
        Then the warming trend over the past 100 years is worse yet
        http://www.ncdc.noaa.gov/monitoring-references/faq/temperature-monitoring.php

        You obviously do not understand what data I’m using, this link only partially applies to about 10% of my stations, and I don’t care that a station dropped out, it would be unfortunate if GSoD infills dead stations, but based on looking at my data they don’t.

        The SST comprise 70% of the signal and without the necessary adjustments, the calibration issues of historical raw temperature readings will not get corrected.
        http://www1.ncdc.noaa.gov/pub/data/cmb/temperature-monitoring/image001.jpg

        Almost all of my data is surface data, and I rarely go past 1950, so I don’t care that SST’s are off prior to 1940.

        Instead of just making stuff up, Download this :
        http://sourceforge.net/projects/gsod-rpts/files/Reports/ContinentsReports.zip/download
        Start with the 2 YRLY_GB csv’s that have all stations (well, ~90%), and then look at YRLY_US as that has the best coverage. Temps aren’t important, MNDIFF and MXDIFF are. I include the number of samples in the last column, and year 9999 is the average of all years.

      • micro thinks there is no warming in the average global surface temperature.

      • WebHubTelescope (@WHUT) commented

        micro thinks there is no warming in the average global surface temperature.

        See, you are not paying attention, because I have no such belief.
        What the annual evolution of the day to day max temp shows is that this difference is very small and does not have a trend, that min temps have rather large regional drops then a recovery, and that the daily trend >N23 Lat of the change per day by year has evolved with a slight trend in slope, but no trend in offset.
        My opinion on global average temp as published is they have no relation to the actual measurements. Though I’m sure I (well I don’t think I could do it, but I know the data is diverse enough to make it possible) could take the data from the 20-30,000 surface stations, pick through them and make any trend you’d like. Especially since Tavg is just the average of Tmin and Tmax.

      • Then make a decent graph with the axes marked with units and spell out what you are trying to say. Playing games with second-order effects is not obvious when the first-order effects themselves are often subtle.

        Are you one of these?
        http://tinyurl.com/lqoathd

      • WebHubTelescope (@WHUT) commented

        Then make a decent graph with the axes marked with units and spell out what you are trying to say. Playing games with second-order effects is not obvious when the first-order effects themselves are often subtle.

        Would adding “Average degrees per day change in F” to the vertical axis, and Year for the horizontal axis make you understand it any better? I thought describing the chart, and making the data and code available would be enough for most people, it’s more than most of the published series do.

        Basically If you don’t like my graph, make your own.

        As for an explanation, as the days get longer the surface warms up, and then as the days get shorter the surface cools; it cools as much at night as it always has.

      • Change in average temperature is a first-order effect. To see changes in second-order effects on top of that will take much more skill than you are demonstrating, if it is possible at all.

        Yea, I am sure that you have unlocked some secret that only you have been able to decipher. I really don’t think so.

      • WebHubTelescope (@WHUT) commented

        Change in average temperature is a first-order effect. To see changes in second-order effects on top of that will take much more skill than you are demonstrating, if it is possible at all.

        So wait, you’re saying you can’t understand averaging the difference in temperature 2 days in a row and plotting that out by year?

        That this is some mystery that climatologists can’t understand?

        Yea, I am sure that you have unlocked some secret that only you have been able to decipher. I really don’t think so.

        I don’t know, did I? I don’t see anyone other than me talking about it, all I see is GAT’s that are full of made up data.

      • Mi Cro,

        I understand that you calculate the daily differences of the minimum (or maximum) temperature from the temperature measurements at stations, area average those for the domain you want to study, and then you plot these averaged daily differences over time. And then you get something like this (here for the domain north of 23 N, from 1950 to 2010):

        http://wattsupwiththat.files.wordpress.com/2013/05/clip_image022_thumb.jpg?w=864&h=621
        (Source: http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-610870)

        From looking at the plot, you conclude that there wasn’t any warming trend in the data. Is this correct?

        If my interpretation is correct, could you please tell me what you believe you would see in your plot of the yearly spaghetti with a warming trend in the data instead? To make it simple, let’s assume the warming trend was 0.6 deg. over the whole time period 1950-2010, and that it was a linear trend. How would it appear in your plot? How would it look different from what is seen in the plot now?

        Thanks in advance for answering.
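
        For scale, the arithmetic implicit in the question (not an endorsement of either side of it): a linear 0.6 deg C rise over 1950-2010 corresponds to a mean day-over-day change of a few hundred-thousandths of a degree, tiny next to typical day-to-day variability, so in a daily-difference plot it would show up only as a minuscule shift of the mean rather than as anything visible by eye. The "typical variability" figure below is an assumed order of magnitude, not a measured value.

```python
# A 0.6 deg C linear trend over 1950-2010 expressed as a mean daily difference.
trend_total_degC = 0.6
n_days = (2010 - 1950) * 365.25

mean_daily_step = trend_total_degC / n_days
print("implied mean day-over-day change:", mean_daily_step)          # ~2.7e-5 deg C per day

typical_day_to_day_sigma = 2.0   # assumed order of magnitude for mid-latitude daily Tmin changes
print("ratio to daily variability:", mean_daily_step / typical_day_to_day_sigma)
```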

      • Jan, nice of you to visit.
        “From looking at the plot, you conclude that there wasn’t any warming trend in the data. Is this correct?”

        I don’t use the daily data to find a warming trend, I use the yearly data.
        For the daily data I calculated both slope and offset for each peak to peak pair. Offset jumped all over; slope, however, did have a slight trend, a great fit to a straight line with lots of 0.99’s, but the slope does change in a constant direction. It has what could be an inflection point at the end, but it could also be nothing. I need more data.

      • Loss of resolution on a second-order effect and micro then complains about needing more data. It comes down to the realization that he has no feel for the data and lacks the intuition necessary to add any value whatsoever.

      • “Loss of resolution on a second-order effect and micro then complains about needing more data. It comes down to the realization that he has no feel for the data and lacks the intuition necessary to add any value whatsoever.”

        Lol, what are you talking about.
        There is no loss of resolution, and doing it the way I am I have a smaller error. And I have about 112 million samples, which has nothing to do with poor reporting by some stations or lack of coverage. You’re just making stuff up, my feel for data is just fine.

      • BEST has added data that corresponds to daily readings and have incorporated that into their estimates of global average temperature. You have something that looks like noise plots and then you leave a honeypot of unprocessed data that no sane person will touch.

      • “BEST has added data that corresponds to daily readings and have incorporated that into their estimates of global average temperature. You have something that looks like noise plots and then you leave a honeypot of unprocessed data that no sane person will touch.”
        And yet they still Can’t reproduce A Temp Series That Matches Actual measurements.

      • Sure they can. It is you that is experiencing difficulty.

        I find it hard to believe that a supposedly smart guy like you can’t grasp subtracting today’s temp from yesterday’s temp and then averaging them together. Hey, I know, I can add another value where, instead of averaging them, I add them; that would take away the scary complex average concept that the slower readers like you can’t understand.
        Actually I think the difference between how much the temperature goes up today and how much it falls tonight is important, and if you struggle with that, you should find something less difficult to do your trolling on.

      • MiCro, at some point you will come to the realization that you are the 3% troll.

        Corrected average temperatures are a fine metric, which you can’t seem to comprehend.

      • “Corrected average temperatures are a fine metric,”
        Oh, it’s a fine metric, just look at all they do to turn some minor regional warming into a crisis.

      • Oh, so your agenda is that because you don’t like the political implications of the results, you have to act the contrarian? I have seen that attitude time after time from skeptics.

      • WebHubTelescope (@WHUT) commented

        Oh, so your agenda is that because you don’t like the political implications of the results, you have to act the contrarian? I have seen that attitude time after time from skeptics.

        You’re the guy who sits behind either the other team’s batter’s box or bullpen and yells obscenities at the other team’s players.

        Ask a substantive question, include specific data from one of the dozens of data files I share with anyone who would like to see just the measurements, or shut up. If you want to see the way I put the data together, get the code; I published that too. At least if you’re going to complain about what I do, look at it first.

      • Put together a decent graph of what you are trying to show, or shut up. Your graphs look like inkblots.

      • Do it yourself; I don’t work for you. Quit making excuses (“oh, I don’t like the color of your pictures”); the data is all there to make them whatever color you want.

      • You don’t do quality work, that’s your problem not mine. If some student hands in a shoddy lazy homework assignment, the instructor won’t bend over backwards to figure it out.

      • “You don’t do quality work, that’s your problem not mine. ”
        Your complaint is that Excel didn’t use a color you like, so download the same data I used (which has far more useful information than just Tmin and Tmax) and make the graphs however you want them to look.

      • You don’t seem to get it. There is something hideously wrong with what you are doing; we can see it from your charts, but no one wants to invest the time to figure out exactly what you did wrong. Spend some more time at it and maybe someone will care.

      • WebHubTelescope (@WHUT) commented

        You don’t seem to get it. There is something hideously wrong with what you are doing; we can see it from your charts,

        You actually almost get it; what you got wrong is this: what I’m doing is simple, and I’m not doing it wrong.

        GSod Readme ftp://ftp.ncdc.noaa.gov/pub/data/gsod/readme.txt

        MAX  103-108  Real  Maximum temperature reported during the day in
                            Fahrenheit to tenths -- time of max temp report
                            varies by country and region, so this will
                            sometimes not be the max for the calendar day.
                            Missing = 9999.9
        MIN  111-116  Real  Minimum temperature reported during the day in
                            Fahrenheit to tenths -- time of min temp report
                            varies by country and region, so this will
                            sometimes not be the min for the calendar day.
                            Missing = 9999.9

        My file-read code, where I create the Mn and Mx diffs as today’s temp (mn or mx) minus yesterday’s (I think I’ve been saying it backwards in posts):

        ymxtemp - ymntemp,
        ymxtemp - to_number(trim(SubStr(file_line,111,6))),
        ydate,
        to_number(trim(SubStr(file_line,103,6))) - ymxtemp,
        ymxtemp,
        trim(SubStr(file_line,111,6)) - ymntemp,
        (ymxtemp - ymntemp) - (ymxtemp - to_number(trim(SubStr(file_line,111,6)))),
        ymntemp

        These are:
        Yesterday’s Rising temp,
        Last night’s falling temp,
        Date,
        Today’s Mx - Yesterday’s Mx is MxDiff,
        Yesterday’s Mx,
        Today’s Mn - Yesterday’s Mn is MnDiff,
        (Yesterday’s Mx - Yesterday’s Mn) - (Yesterday’s Mx - Today’s Mn) is Diff (identical to MnDiff),
        Yesterday’s Mn.
        The quoted part is the actual SQL code. I also remove records with bad values (8-10 million rows out of 122 million):

        if ymxtemp > -199
        and trim(SubStr(file_line,111,6)) > -199
        and trim(SubStr(file_line,103,6)) > -199
        then

        And this is the code that processes a set of stations defined by a pair of Lat/Lon points.

        avg(max_temp) as MaxTemp,
        avg(min_temp) as MinTemp,
        avg(rising_temp_diff) as Rising,
        avg(falling_temp_diff) as Falling,
        avg(Diff) * 365.25 as YrDiff,
        avg(Diff) as Diff,
        var_pop(Diff) as V_Diff,
        0.141 * count(Diff) * 2 / power(count(Diff) * 2,2) as Diff_Error,
        avg(case when temp > -199 and temp < 199 then temp else null end) as Temp,
        0.316 * count(temp) / power(count(temp),2) as Temp_Error,
        avg(MNDiff) as MNDiff,
        avg(MXDiff) as MXDiff,

        avg(case when dewpoint < 9999 then dewpoint else null end) as DewPoint,
        avg(DewPt_to_RelHumidity(temp,dewpoint)) as RelH,
        avg(case when SEA_LEVEL_PRESSURE < 9999 then SEA_LEVEL_PRESSURE else null end) as SeaLevelPressure,
        avg(case when STATION_PRESSURE < 9999 then STATION_PRESSURE else null end) as StationPressure,
        avg(case when PRECIP < 99 then PRECIP else null end) * 365.25 as Rain,

        And then here is the station selection code.

        v_select := 'create table climate.' || v_tablename || ' as ' ;
        '(select ' || Grouplist || ' , ' || Selectfield || ' from climate.filtered_data f where ';
        ' f.year = ''' || mnYr || ''' and ';
        ' (f.station_number,f.wban) in (select distinct fd1.station_number,fd1.wban from climate.filtered_data fd1 where ';
        '(fd1.station_number,fd1.wban) in (select distinct fd.station_number,fd.wban from climate.filtered_data fd where ';
        ' (fd.station_number,fd.wban) in ';
        ' (select distinct usaf,wban from ish_history ish where ';
        ' to_number(ISH.LAT) >= to_number(''' || mnLat || ''') and to_number(ISH.LAT) = to_number(''' || mnLon || ''') and to_number(ISH.LON) = ''' || Days || ''' ';
        'group by fd.station_number,fd.wban,fd.year) ';
        'having min(fd1.day) > ''' || Years || ''' ' ;
        'group by fd1.station_number,fd1.wban,fd1.day ) ' ;
        'group by ' || Grouplist || ' ) ';
        'order by ' || Grouplist || ' ';

        The selection code is complicated, but it just selects stations by Lat/Lon and then groups the fields by either year or day.

        That’s it: 3rd grade math that you can look at. Does anyone see a fault?

        Just quote a specific segment of code and tell me why it’s wrong.

        There is something hideously wrong, but it isn’t what I’m doing, and it isn’t the data I’m using. When you look at all of the related data I generate, once the number of samples gets above a thousand or so, the noise settles out.
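
        For readers who would rather not parse the SQL, here is a minimal pandas sketch of the same bookkeeping, assuming one row per station-day with tmax and tmin columns (the column names and values are illustrative, not GSOD’s own):

        import pandas as pd

        # One row per station per day, already filtered for bad values (illustrative data).
        df = pd.DataFrame({
            "station": ["A"] * 4,
            "date": pd.to_datetime(["2000-01-01", "2000-01-02", "2000-01-03", "2000-01-04"]),
            "tmax": [40.0, 42.0, 39.0, 41.0],   # deg F
            "tmin": [28.0, 30.0, 27.0, 29.0],
        })
        df = df.sort_values(["station", "date"])

        # Per-row derived quantities, in the spirit of the field list above.
        df["rising"]  = df["tmax"] - df["tmin"]                               # same-day rise: Tmax - Tmin
        df["falling"] = df["tmax"] - df.groupby("station")["tmin"].shift(-1)  # today's Tmax - tomorrow's Tmin
        df["mxdiff"]  = df.groupby("station")["tmax"].diff()                  # today's Tmax - yesterday's Tmax
        df["mndiff"]  = df.groupby("station")["tmin"].diff()                  # today's Tmin - yesterday's Tmin

        # Yearly averages of the derived columns, per station (the values behind the spaghetti plots).
        annual = (df.assign(year=df["date"].dt.year)
                    .groupby(["station", "year"])[["rising", "falling", "mxdiff", "mndiff"]]
                    .mean())
        print(annual)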

  276. Mi Cro
    Keep up the good fight.

    Looking forward to graphs of the actual original data and then a time series of overlays of adjusted or manipulated data and graphs.

    Thanks to BEST, but they should go back and re-establish the original as a reference point.
    Scott

  277. @Mi Cro 7/26 at 7:05 am |
    There is no loss of resolution, and doing it the way I am I have a smaller error.

    Is your error smaller? There is no free lunch here. There is missing data. There is bad data. These contribute error and uncertainty into the result.

    Where do you have a summary of the process? Is there an error analysis in it?

    What is it that you are performing your linear fit over? What is the distribution of lengths in time of the least-squares fits? What is the distribution of points in each fit? What is the uncertainty in the slope of each piece? Do these slope uncertainties build as you move from today back into history? (They should.)

    • @ Stephen
      NCDC says the GSOD data has a +/-0.01 F error. When you use the min/max to create an average, it will be twice as large, +/-0.02. When I subtract today’s temp from yesterday’s, that too has a +/-0.02 error, but when I subtract tomorrow’s temp from today’s, both differences can’t have the worst-case error; the longer the string of days, the smaller the measurement error is.
      I don’t infill, so I add no error there.
      Beyond this, as long as there are enough stations with a full year for the yearly average, part-year data just seems to blend in, as it does when I average all the samples taken on a particular day. This does need more attention, though.
      I have a lot more specifics if you follow the URL in my name for this post. Just look at the blogs there.

      • @MiCro 7/28 5:42 pm
        I found some reference to calculating error margin based on measurement error of 0.1 F,

        GIGO.
        You and I both believe that a daily measurement error of 0.1 deg F is way too low. Why endorse it?

        You now have the Min and Max in the same data query. Why not calculate the error from these measurements? Yes, I know there are 200 million rows. It might take a week, but it is now low hanging fruit.

        Go ahead and include the 0.1 deg F error on the mins and maxes, but it won’t make a speck of difference compared to differences between min and maxes of over 20.0 deg F.

      • I already subtract today’s min from today’s max, and tomorrow’s min from today’s max; I refer to them as rise and fall, and that was the first thing I did. But I don’t see that as an error. If you’d like something else, be specific.

      • MiCro 2:57 am
        The rise and fall aren’t error.
        The error, the Garbage In, is the assumed 0.1 deg F error in each measurement.

        Each measurement has an error of at least 0.3 deg F, perhaps 0.4.
        When you subtract two such measurements, the error of the difference is now 0.45 to 0.55. As you do the zigzag of rises and falls, these errors accumulate only in part: they are not independent; in fact they are negatively correlated and partly compensate.

        I think if we do the math and re-associate the terms, you end up back with the mean standard error from the average of 62 terms (31 maxes and 31 mins separated by something like 18 deg F), each with an uncertainty of 0.3 to 0.4 deg C. You’ll wind up with a mean standard error of 0.6 to 0.7 deg C for the monthly average.
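
        A small numerical sketch of that compensation (assuming, purely for illustration, an independent 0.35 deg F error on each reading): successive rise and fall differences telescope, so their sum depends only on the endpoint readings.

        import numpy as np

        rng = np.random.default_rng(1)

        # One month of alternating Tmax/Tmin readings, each with an assumed independent 0.35 deg F error.
        true_vals = np.tile([59.0, 41.0], 31)            # max, min, max, min, ... (illustrative)
        measured = true_vals + rng.normal(0.0, 0.35, true_vals.size)

        zigzag = np.diff(measured)                       # the successive rise/fall differences

        # Re-associating the terms: the zigzag telescopes to (last - first), so the
        # per-step errors are negatively correlated and largely cancel in the sum.
        print(zigzag.sum(), measured[-1] - measured[0])  # identical by construction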

      • Stephen Rasey commented

        Each measurement has at least a 0.3, perhaps 0.4 deg F.
        when you subtract two such measurements, the difference now is 0.45 to 0.55.

        I could easily see the actual error being +/-0.5. And I understand why it’s important, but I originally ignored it like all of the published series did. But whatever the measurement error is, the way I process Tmin and Tmax, I have half the error they have, because Tmin and Tmax are not correlated (so Tavg has 2 errors), while Tmin on day 1 is correlated to Tmin on day 2.
        Now, I hadn’t thought of how that day-to-day correlation would improve Tavg in the same way, so that error isn’t as big as I thought it was. Thanks for pointing that out.
        But I also know that I need to get at the impact of adding part years into the average, even though I think once you get a large enough sample set, the part years get filled in from other part years, and the Tmax average that gets generated isn’t overly influenced by this. I have two lines of evidence. First, when I find a large fluctuation in the average of Tmax and go look at the number of samples for that year, it’s just a few days, and I just manually remove that year from the output. The second line of evidence is how stable Tmax is: it’s really flat, most years it’s 0.0-something, and the Tmax average of 119 million records for the world is 0.00193, so Tmax is flat. Tmin, however, flutters around; it’s not flat, and the changes are regional. They don’t happen at the same time in different places, as when the ocean SSTs change and the downwind surfaces detect it. Tmin’s global average is -0.34397. I would expect the influence of bad data to show equal instability in both Tmin and Tmax.

      • @MiCro 7/30 9:14 am
        Rasey: Each measurement has at least a 0.3, perhaps 0.4 deg F.
        when you subtract two such measurements, the difference now is 0.45 to 0.55.

        I could easily see the actual being +/-0.5.

        Assuming that the bulk of the temperature records come from min-max thermometers, measured and/or reported to the nearest deg F, then conservatively the measurement is a rounded value: the midpoint of a uniform distribution whose width is the rounding interval, 1 deg F. If so, the standard deviation of the measurement error is 1/sqrt(12) = 0.289. So 0.3 is a reasonable lower limit to the standard error of uncertainty, provided that 34.55 degrees is reliably recorded as 35 and not 34, and 34.45 degrees is reliably recorded as 34 and not 35.
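
        A quick simulation of that rounding argument (illustrative only; true temperatures are drawn uniformly so that just the quantization error is isolated):

        import numpy as np

        rng = np.random.default_rng(42)

        # Reported value is the true temperature rounded to the nearest 1 deg F.
        true_temp = rng.uniform(30.0, 40.0, 1_000_000)
        reported = np.round(true_temp)
        error = reported - true_temp

        print("std of rounding error:", error.std())      # ~0.289
        print("1/sqrt(12)           :", 1 / np.sqrt(12))  # 0.2887...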

        Now, whether the standard error of the reading is 0.3 or 0.4, it will likely be a smaller contribution to the total mean standard error than the differences between Tmins and Tmaxes, and even among Tmaxes, across a time period.

        I did not follow much of your 9:14 am post in detail. It is not clear what quantities (by duration, station, or region) you want at various stages in the analysis. You might want to rewrite it with particular attention to what it is you are calculating and its sources of error.

        This however caught my attention:
        first is when I find large fluctuation in the average of Tmax and I go look at the number of samples for that year, it’s just a few days, and I just manually remove that year from the output.

        If you have criteria for the number of samples you need per year, season, or month, fine. But you mustn’t look at the data average to decide whether to keep it.

    • I can also try to give more details in the morning, once I have a keyboard.

    • @Mi Cro 7/27 10:59 pm
      NCDC say the GSOD data has a +/-0.01 F error.

      I’m talking about the mean standard error of DAILY TAVE at each STATION. Most of the USHCN readings of Tmin and Tmax are only good to the nearest 1.00 deg F with min-max thermometers.

      I’d really like to see the justification for a +/-0.01 F error estimate because I don’t think it passes the laugh test. Do you?

      • Stephen Rasey commented

        NCDC say the GSOD data has a +/-0.01 F error.
        I’m talking about mean standard error of DAILY TAVE at each STATION. Most of the USHCN readings of Tmin and Tmax are only good to the nearest 1.00 deg F with a min-max thermometers.
        I’d really like to see the justification for a +/-0.01 F error estimate because I don’t think it passes the laugh test. Do you?

        I have two answers. The first is that NCDC states this (and I misstated the error; it’s +/-0.1 F, as stated below):

        The daily elements included in the dataset (as available from each station) are:
        Mean temperature (.1 Fahrenheit)

        Since the data are converted to constant units (e.g., knots), slight rounding
        error from the originally reported values may occur (e.g., 9.9 instead of 10.0).

        And even this is more accurate than I would expect to be possible from a person reading a thermometer.

      • @Mi Cro 7/28 1:02 pm |
        And even this [0.1 deg F error] is more accurate than I would expect is possible from a person reading a thermometer.

        Yes, it does profess more accuracy than I expect, too. It still fails the laugh test. I guess the error is related to significant digits in units of conversion and not precision of measurement.

        This confirms what little I saw in the BEST single value TAVE (monthly) file with a bunch of 0.05 deg C uncertainties. Just a spot check so far. It ought to be at least ten times bigger.

        The Uncertainty Monster is birthed right here, in the random and systematic errors and imprecision of the daily Tmin and Tmax readings. The pedigree of this mutt is lost and forgotten.

      • Stephen Rasey commented

        This confirms what little I saw in the BEST single value TAVE (monthly) file with a bunch of 0.05 deg C uncertainties. Just a spot check so far. It ought to be at least ten times bigger.

        I include this naive estimation of measurement error in my reports:
        Population variance of Diff (MnDiff) as V_Diff
        0.141 * count(Diff) * 2 / power(count(Diff) * 2,2) as Diff_Error,
        0.316 * count(temp) / power(count(temp),2) as Temp_Error

        I have a variety of things I can look at, but I don’t know the differences in them.

        The Uncertainty Monster is birthed right here, in the random and systematic errors and imprecision of the daily Tmin and Tmax readings.

        If this were all there was, I could believe these all cancel out. It’s what they do after they get a station value that I was curious about.
        What got me to download all this data was a simple observation. I record time and temp to know what temperature dark frame I need to cancel the thermal noise during long-exposure astrophotography. My logs show really large drops in temp once the Sun goes down when it’s clear out; I wanted to see if that changed over time.

      • Population Variant of diff (mndiff) as V_Diff
        0.141 * count(Diff) * 2 / power(count(Diff) * 2,2) as Diff_Error,
        0.316 * count(temp) / power(count(temp),2) as Temp_Error

        Is this a SQL clause?
        Why is “count” rather than “sum” in the numerators?
        Where are the coefficients (0.141, 0.316) from?

        If this were all there was, I could believe these all cancel out. It’s what they do after they get a station value that I was curious about.
        The key point is that large error bars in the TAVE (monthly) and the subsequent TAnom (monthly) throw into insignificance all of the tests in homogenization.

        Furthermore, when BEST fractures the temperature records into decade-scale fragments [whose justification is entirely suspect with these error bars] looking for 0.0-0.3 deg C/decade warming, segments with error bars of +/-1.0 deg C make the noise swamp the signal. The law of large numbers is not going to ride to the rescue with any fidelity.

      • Stephen Rasey commented

        Is this a SQL clause?

        Yes, It’s convenient.

        wny is “count” rather than “sum” in the numerators?
        where are the coefficients (0.141, 0.316) from?

        I found some reference to calculating the error margin based on a measurement error of 0.1 F. So the 0.141 is for subtracting Tmx (or Tmn) from Tmx (or Tmn), and 0.316 is for averaging Tmn and Tmx; count is the number of samples.
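
        For comparison only, a sketch of the textbook rule for independent errors: it is consistent with 0.141 = 0.1 * sqrt(2) for the difference of two readings, but it has the error of a mean of n values shrinking as 1/sqrt(n) rather than 1/n (the n below is illustrative, and this is not a claim about where the 0.316 comes from).

        import math

        sigma = 0.1      # per-reading measurement error assumed in the thread, deg F
        n = 10_000       # number of station-days being averaged (illustrative)

        diff_err = sigma * math.sqrt(2)   # error of one (today - yesterday) difference: ~0.141
        avg_err = sigma / math.sqrt(2)    # error of one (Tmin + Tmax)/2 average: ~0.071

        print("one difference        :", round(diff_err, 3))
        print("mean of n differences :", diff_err / math.sqrt(n))   # 1/sqrt(n), not 1/n
        print("one daily average     :", round(avg_err, 3))
        print("mean of n averages    :", avg_err / math.sqrt(n))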


      • The Uncertainty Monster is birthed right here, in the random and systematic errors and imprecision of the daily Tmin and Tmax readings. The pedigree of this mutt is lost and forgotten.

        The fact that aggregate temperature readings are precise enough to capture subtle temperature shifts caused by El Nino events and volcanic eruptions substantiates the claim that the global temperature model is plenty good enough.

        Hilarious to watch the desperate trip over this fact.

      • WebHubTelescope (@WHUT) commented

        The fact that aggregate temperature readings are precise enough to capture subtle temperature shifts caused by El Nino events and volcanic eruptions substantiates the claim that the global temperature model is plenty good enough.

        I know when I want a Global Temperature Model, I want one that’s “plenty good enough”!

      • (reposted, original was on the wrong sub thread)
        @MiCro 7/28 5:42 pm
        I found some reference to calculating error margin based on measurement error of 0.1 F,

        GIGO.
        You and I both believe that a daily measurement error of 0.1 deg F is way too low. Why endorse it?

        You now have the Min and Max in the same data query. Why not calculate the error from these measurements? Yes, I know there are 200 million rows. It might take a week, but it is now comparatively low hanging fruit.

        Go ahead and include the 0.1 deg F error on the mins and maxes, but it won’t make a speck of difference compared to differences between min and maxes of over 20.0 deg F.

    • As I’ve written before, daily TAVE is not a measurement — it is a calculation from TMIN and TMAX.

      The monthly TAVE must come from 31 TMin and 31 TMax. If the TAVE for a month is absolutely constant, but the TMin and TMax are separated by 10 deg C = 18 deg F, then the mean standard error of the TAVE should be about 0.6 deg C or about 1.0 deg F.

      This is an Inconvenient Truth of statistics. There is a lot of uncertainty in the records that does not seem to be handled properly.
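
      A minimal numerical check of that figure, treating the month as 31 identical maxima and 31 identical minima 18 deg F apart, and taking the scatter of those 62 readings about the monthly mean as the relevant spread:

      import numpy as np

      # 31 daily maxima and 31 daily minima, constant within the month and 18 deg F apart (illustrative).
      tmax = np.full(31, 59.0)
      tmin = np.full(31, 41.0)

      readings = np.concatenate([tmax, tmin])                 # the 62 numbers behind one monthly TAVE
      sem_f = readings.std(ddof=1) / np.sqrt(readings.size)   # mean standard error of their average

      print("monthly TAVE        :", readings.mean(), "deg F")
      print("mean standard error :", round(sem_f, 2), "deg F",
            "=", round(sem_f * 5 / 9, 2), "deg C")            # ~1.15 deg F, ~0.64 deg C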