by Zeke Hausfather
There has been much discussion of temperature adjustment of late in both climate blogs and in the media, but not much background on what specific adjustments are being made, why they are being made, and what effects they have. Adjustments have a big effect on temperature trends in the U.S., and a modest effect on global land trends. The large contribution of adjustments to century-scale U.S. temperature trends lends itself to an unfortunate narrative that “government bureaucrats are cooking the books”.
Figure 1. Global (left) and CONUS (right) homogenized and raw data from NCDC and Berkeley Earth. Series are aligned relative to 1990-2013 means. NCDC data is from GHCN v3.2 and USHCN v2.5 respectively.
Having worked with many of the scientists in question, I can say with certainty that there is no grand conspiracy to artificially warm the earth; rather, scientists are doing their best to interpret large datasets with numerous biases such as station moves, instrument changes, time of observation changes, urban heat island biases, and other so-called inhomogeneities that have occurred over the last 150 years. Their methods may not be perfect, and are certainly not immune from critical analysis, but that critical analysis should start out from a position of assuming good faith and with an understanding of what exactly has been done.
This will be the first post in a three-part series examining adjustments in temperature data, with a specific focus on U.S. land temperatures. This post will provide an overview of the adjustments done and their relative effect on temperatures. The second post will examine Time of Observation adjustments in more detail, using hourly data from the pristine U.S. Climate Reference Network (USCRN) to empirically demonstrate the potential bias introduced by different observation times. The final post will examine automated pairwise homogenization approaches in more detail, looking at how breakpoints are detected and how algorithms can be tested to ensure that they are equally effective at removing both cooling and warming biases.
Why Adjust Temperatures?
There are a number of folks who question the need for adjustments at all. Why not just use raw temperatures, they ask, since those are pure and unadulterated? The problem is that (with the exception of the newly created Climate Reference Network), there is really no such thing as a pure and unadulterated temperature record. Temperature stations in the U.S. are mainly operated by volunteer observers (the Cooperative Observer Network, or co-op stations for short). Many of these stations were set up in the late 1800s and early 1900s as part of a national network of weather stations, focused on measuring day-to-day changes in the weather rather than decadal-scale changes in the climate.
Figure 2. Documented time of observation changes and instrument changes by year in the co-op and USHCN station networks. Figure courtesy of Claude Williams (NCDC).
Nearly every single station in the network has been moved at least once over the last century, with many having 3 or more distinct moves. Most of the stations have changed from using liquid in glass thermometers (LiG) in Stevenson screens to electronic Minimum Maximum Temperature Systems (MMTS) or Automated Surface Observing Systems (ASOS). Observation times have shifted from afternoon to morning at most stations since 1960, as part of an effort by the National Weather Service to improve precipitation measurements.
All of these changes introduce (non-random) systemic biases into the network. For example, MMTS sensors tend to read maximum daily temperatures about 0.5 C colder than LiG thermometers at the same location. There is a very obvious cooling bias in the record associated with the conversion of most co-op stations from LiG to MMTS in the 1980s, and even folks deeply skeptical of the temperature network like Anthony Watts and his coauthors add an explicit correction for this in their paper.
Figure 3. Time of Observation over time in the USHCN network. Figure from Menne et al 2009.
Time of observation changes from afternoon to morning also can add a cooling bias of up to 0.5 C, affecting maximum and minimum temperatures similarly. The reasons why this occurs, how it is tested, and how we know that documented time of observations are correct (or not) will be discussed in detail in the subsequent post. There are also significant positive minimum temperature biases from urban heat islands that add a trend bias up to 0.2 C nationwide to raw readings.
Because the biases are large and systemic, ignoring them is not a viable option. If some corrections to the data are necessary, there is a need for systems to make these corrections in a way that does not introduce more bias than they remove.
What are the Adjustments?
Two independent groups, the National Climatic Data Center (NCDC) and Berkeley Earth (hereafter Berkeley), start with raw data and use differing methods to create a best estimate of global (and U.S.) temperatures. Other groups like the NASA Goddard Institute for Space Studies (GISS) and the Climatic Research Unit at the University of East Anglia (CRU) take data from NCDC and other sources and perform additional adjustments, like GISS’s nightlight-based urban heat island corrections.
Figure 4. Diagram of processing steps for creating USHCN adjusted temperatures. Note that TAvg temperatures are calculated based on separately adjusted TMin and TMax temperatures.
This post will focus primarily on NCDC’s adjustments, as they are the official government agency tasked with determining U.S. (and global) temperatures. The figure below shows the four major adjustments (including quality control) performed on USHCN data, and their respective effect on the resulting mean temperatures.
Figure 5. Impact of adjustments on U.S. temperatures relative to the 1900-1910 period, following the approach used in creating the old USHCN v1 adjustment plot.
NCDC starts by collecting the raw data from the co-op network stations. These records are submitted electronically for most stations, though some continue to send paper forms that must be manually keyed into the system. A subset of the 7,000 or so co-op stations are part of the U.S. Historical Climatology Network (USHCN), and are used to create the official estimate of U.S. temperatures.
Quality Control
Once the data has been collected, it is subjected to an automated quality control (QC) procedure that looks for anomalies like repeated entries of the same temperature value, minimum temperature values that exceed the reported maximum temperature of that day (or vice-versa), values that far exceed (by five sigma or more) expected values for the station, and similar checks. A full list of QC checks is available here.
Daily minimum or maximum temperatures that fail quality control are flagged, and a raw daily file is maintained that includes original values with their associated QC flags. Monthly minimum, maximum, and mean temperatures are calculated using daily temperature data that passes QC checks. A monthly mean is calculated only when nine or fewer daily values are missing or flagged. A raw USHCN monthly data file is available that includes both monthly values and associated QC flags.
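To make the QC and monthly-mean rules concrete, here is a minimal sketch in Python. This is not NCDC's code, and the full list of QC checks linked above is far more extensive; the thresholds, data layout, and flag handling here are invented purely for illustration of the "nine or fewer missing or flagged days" rule.

```python
import numpy as np

def qc_flags(tmin, tmax, clim_mean, clim_sigma):
    """Very simplified QC: flag days where Tmin exceeds Tmax, or where the
    daily value is more than five sigma from the station's expected value."""
    tmin, tmax = np.asarray(tmin, float), np.asarray(tmax, float)
    bad = tmin > tmax
    bad |= np.abs((tmin + tmax) / 2 - clim_mean) > 5 * clim_sigma
    return bad

def monthly_mean(tmin, tmax, flags, max_missing=9):
    """Monthly mean from daily values, or None if more than `max_missing`
    days are missing or flagged (the "nine or fewer" rule described above)."""
    tavg = (np.asarray(tmin, float) + np.asarray(tmax, float)) / 2
    bad = np.isnan(tavg) | flags
    return None if bad.sum() > max_missing else float(tavg[~bad].mean())

# Toy month: 30 days, one transposed entry (Tmin > Tmax), one missing report.
rng = np.random.default_rng(0)
tmax = 20 + rng.normal(0, 3, 30)
tmin = 10 + rng.normal(0, 3, 30)
tmin[4], tmax[4] = tmax[4], tmin[4]     # keyed in backwards
tmax[22] = np.nan                       # paper form never received
flags = qc_flags(tmin, tmax, clim_mean=15.0, clim_sigma=3.0)
print(monthly_mean(tmin, tmax, flags))
```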
The impact of QC adjustments is relatively minor. Apart from a slight cooling of temperatures prior to 1910, the trend is unchanged by QC adjustments for the remainder of the record (see the red line in Figure 5).
Time of Observation (TOBs) Adjustments
Temperature data is adjusted based on its reported time of observation. Each observer is supposed to report the time at which observations were taken. While some variation in this is expected, as observers won’t reset the instrument at the same time every day, these departures should be mostly random and won’t necessarily introduce systemic bias. The major sources of bias are introduced by system-wide decisions to change observing times, as shown in Figure 3. The gradual network-wide switch from afternoon to morning observation times after 1950 has introduced a CONUS-wide cooling bias of about 0.2 to 0.25 C. The TOBs adjustments are outlined and tested in Karl et al 1986 and Vose et al 2003, and will be explored in more detail in the subsequent post. The impact of TOBs adjustments is shown in Figure 6, below.
Figure 6. Time of observation adjustments to USHCN relative to the 1900-1910 period.
TOBs adjustments affect minimum and maximum temperatures similarly, and are responsible for slightly more than half the magnitude of total adjustments to USHCN data.
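The mechanism behind the TOBs bias is easy to demonstrate with a toy simulation. The sketch below is not the USCRN-based analysis planned for part two; the diurnal cycle and weather noise are invented, and only the direction of the effect matters: a min/max thermometer reset in the late afternoon of a hot day carries that afternoon's heat into the next day's maximum (a warm bias), while a morning reset does the analogous thing with cold mornings (a cool bias).

```python
import numpy as np

rng = np.random.default_rng(42)
n_days = 3650
hours = np.arange(n_days * 24)

# Invented hourly temperatures: a diurnal cycle (coolest ~4am, warmest ~4pm)
# plus day-to-day weather noise that is constant within each calendar day.
diurnal = 5 * np.sin(2 * np.pi * (hours % 24 - 10) / 24)
weather = np.repeat(rng.normal(0, 4, n_days), 24)
temps = 15 + diurnal + weather

def observed_mean(reset_hour):
    """Mean of (Tmax+Tmin)/2 recorded by a min/max thermometer that is read
    and reset once a day at `reset_hour`; each recorded day is the 24 hours
    ending at that reset."""
    w = temps[reset_hour:reset_hour + (n_days - 1) * 24].reshape(-1, 24)
    return np.mean((w.max(axis=1) + w.min(axis=1)) / 2)

truth = np.mean((temps.reshape(-1, 24).max(axis=1) +
                 temps.reshape(-1, 24).min(axis=1)) / 2)
print("midnight-to-midnight truth: %.2f C" % truth)
print("5pm observer reads:         %.2f C" % observed_mean(17))
print("7am observer reads:         %.2f C" % observed_mean(7))
# The afternoon observer double-counts hot afternoons and reads warm; the
# morning observer double-counts cold mornings and reads cool. A network-wide
# switch from afternoon to morning observation therefore looks like cooling
# even though the underlying temperatures never changed.
```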
Pairwise Homogenization Algorithm (PHA) Adjustments
The Pairwise Homogenization Algorithm was designed as an automated method of detecting and correcting localized temperature biases due to station moves, instrument changes, microsite changes, and meso-scale changes like urban heat islands.
The algorithm (whose code can be downloaded here) is conceptually simple: it assumes that climate change forced by external factors tends to happen regionally rather than locally. If one station is warming rapidly over a period of a decade a few kilometers from a number of stations that are cooling over the same period, the warming station is likely responding to localized effects (instrument changes, station moves, microsite changes, etc.) rather than a real climate signal.
To detect localized biases, the PHA iteratively goes through all the stations in the network and compares each of them to their surrounding neighbors. It calculates difference series between each station and its neighbors (separately for min and max) and looks for breakpoints that show up in the record of one station but not in any of the surrounding stations. These breakpoints can take the form of both abrupt step changes and gradual trend inhomogeneities that move a station’s record further away from its neighbors. The figures below show histograms of all the detected breakpoints (and their magnitudes) for both minimum and maximum temperatures.
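Before turning to those histograms, here is a minimal sketch of the difference-series idea on synthetic data. This is not NCDC's PHA (the real code is linked above and uses SNHT-style statistical tests plus station metadata); the station series, the crude changepoint score, and the 0.5 C step are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2015)
regional = 0.007 * (years - years[0]) + rng.normal(0, 0.3, years.size)  # shared climate signal

neighbors = np.array([regional + rng.normal(0, 0.2, years.size) for _ in range(5)])
target = regional + rng.normal(0, 0.2, years.size)
target[years >= 1985] -= 0.5       # an undocumented, LiG-to-MMTS-style step change

# Difference series: the shared regional signal cancels, the local step does not.
diff = target - neighbors.mean(axis=0)

def best_break(series, min_seg=10):
    """Index and size of the single most likely mean shift, scored with a
    crude two-sample t-like statistic (a stand-in for the real SNHT tests)."""
    best = (None, 0.0, 0.0)
    for i in range(min_seg, series.size - min_seg):
        a, b = series[:i], series[i:]
        t = abs(a.mean() - b.mean()) / np.sqrt(a.var() / a.size + b.var() / b.size)
        if t > best[2]:
            best = (i, b.mean() - a.mean(), t)
    return best[0], best[1]

idx, step = best_break(diff)
print("break detected in %d, size %+.2f C" % (years[idx], step))
# A homogenized series would add `step` to everything before the break,
# bringing the earlier segment into line with the present-day segment.
```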
Figure 7. Histogram of all PHA changepoint adjustments for versions 3.1 and 3.2 of the PHA for minimum (left) and maximum (right) temperatures.
While fairly symmetric in aggregate, the PHA adjustments show distinct temporal patterns. The largest of these are the positive adjustments to maximum temperatures that account for transitions from LiG instruments to MMTS and ASOS instruments in the 1980s, 1990s, and 2000s. Other notable PHA-detected adjustments are minimum (and more modest maximum) temperature shifts associated with a widespread move of stations from inner city rooftops to newly-constructed airports or wastewater treatment plants after 1940, as well as gradual corrections of urbanizing sites like Reno, Nevada. The net effect of PHA adjustments is shown in Figure 8, below.
Figure 8. Pairwise Homogenization Algorithm adjustments to USHCN relative to the 1900-1910 period.
The PHA has a large impact on max temperatures post-1980, corresponding to the period of transition to MMTS and ASOS instruments. Max adjustments are fairly modest pre-1980s, and are presumably responding mostly to the effects of station moves. Minimum temperature adjustments are more mixed, with no real century-scale trend impact. These minimum temperature adjustments do seem to remove much of the urban-correlated warming bias in minimum temperatures, even if only rural stations are used in the homogenization process (to avoid inadvertently aliasing in urban warming), as discussed in Hausfather et al. 2013.
The PHA can also effectively detect and deal with breakpoints associated with Time of Observation changes. When NCDC’s PHA is run without doing the explicit TOBs adjustment described previously, the results are largely the same (see the discussion of this in Williams et al 2012). Berkeley uses a somewhat analogous relative difference approach to homogenization that also picks up and removes TOBs biases without the need for an explicit adjustment.
With any automated homogenization approach, it is critically important that the algorithm be tested with synthetic data seeded with various types of biases (step changes, trend inhomogeneities, sawtooth patterns, etc.), to ensure that the algorithm deals identically with biases in both directions and does not create any new systemic biases when correcting inhomogeneities in the record. This was done initially in Williams et al 2012 and Venema et al 2012. There are ongoing efforts to create a standardized set of tests to which various groups around the world can submit their homogenization algorithms for evaluation, as discussed in our recently submitted paper. This process, and automated homogenization more generally, will be discussed in more detail in part three of this series of posts.
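As a rough illustration of what such a benchmarking exercise looks like (this is emphatically not the Williams et al 2012 or Venema et al 2012 benchmarks; the seeded breaks, noise levels, and the simple detector below are all invented), one can seed synthetic difference series with known breaks of both signs and check that the recovered step sizes show no preference for one direction:

```python
import numpy as np

rng = np.random.default_rng(7)

def seeded_series(step, n=115, break_at=60, noise=0.2):
    """A synthetic neighbor-difference series with one known step change."""
    s = rng.normal(0, noise, n)
    s[break_at:] += step
    return s

def estimate_step(series, min_seg=10):
    """Estimate the single largest mean shift with a crude scan statistic."""
    best_t, best_step = 0.0, 0.0
    for i in range(min_seg, series.size - min_seg):
        a, b = series[:i], series[i:]
        t = abs(a.mean() - b.mean()) / np.sqrt(a.var() / a.size + b.var() / b.size)
        if t > best_t:
            best_t, best_step = t, b.mean() - a.mean()
    return best_step

true_steps = rng.choice([-1, 1], 500) * rng.uniform(0.2, 1.0, 500)
errors = np.array([estimate_step(seeded_series(s)) - s for s in true_steps])

# A sign-symmetric algorithm should show the same error statistics for seeded
# cooling and warming breaks; a systematic difference between the two would
# mean the adjustment process itself injects a trend bias.
print("mean error, cooling breaks: %+.3f" % errors[true_steps < 0].mean())
print("mean error, warming breaks: %+.3f" % errors[true_steps > 0].mean())
```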
Infilling
Finally we come to infilling, which has garnered quite a bit of attention of late due to some rather outlandish claims of its impact. Infilling occurs in the USHCN network in two different cases: when the raw data is not available for a station, and when the PHA flags the raw data as too uncertain to homogenize (e.g. in between two station moves when there is not a long enough record to determine with certainty the impact that the initial move had). Infilled data is marked with an “E” flag in the adjusted data file (FLs.52i) provided by NCDC, and it’s relatively straightforward to test the effects it has by calculating U.S. temperatures with and without the infilled data. The results are shown in Figure 9, below:
Figure 9. Infilling-related adjustments to USHCN relative to the 1900-1910 period.
Apart from a slight adjustment prior to 1915, infilling has no effect on CONUS-wide trends. These results are identical to those found in Menne et al 2009. This is expected, because the way NCDC does infilling is to add the long-term climatology of the station that is missing (or not used) to the average spatially weighted anomaly of nearby stations. This is effectively identical to any other form of spatial weighting.
To elaborate, temperature stations measure temperatures at specific locations. If we are trying to estimate the average temperature over a wide area like the U.S. or the globe, it is advisable to use gridding or some more complicated form of spatial interpolation to assure that our results are representative of the underlying temperature field. For example, about a third of the available global temperature stations are in the U.S. If we calculated global temperatures without spatial weighting, we’d be treating the U.S. as 33% of the world’s land area rather than ~5%, and end up with a rather biased estimate of global temperatures. The easiest way to do spatial weighting is gridding, e.g. assigning all stations to grid cells that have the same area (as NASA GISS used to do) or the same lat/lon size (e.g. 5×5 lat/lon, as HadCRUT does). Other methods include kriging (used by Berkeley Earth) or a distance-weighted average of nearby station anomalies (used by GISS and NCDC these days).
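As a concrete (and entirely made-up) illustration of why this matters, the sketch below grids a handful of invented station anomalies into 5×5 lat/lon cells and weights each cell by the cosine of its latitude; real products use anomalies from a common baseline and more sophisticated weighting, but the effect of not letting a densely sampled region dominate is the same.

```python
import numpy as np

# Made-up station anomalies (lat, lon, anomaly in C). A dense cluster of
# stations in one region should not dominate a large-area average.
stations = np.array([
    (41.0, -104.0, 1.2), (42.0, -103.0, 1.1),   # four clustered stations
    (43.0, -102.0, 1.3), (44.0, -101.0, 1.2),   # sharing one 5x5 cell
    (-23.0, 133.0, 0.2), (55.0, 37.0, 0.4),     # three lone stations
    (1.0, 32.0, 0.3),                           # elsewhere in the world
])

def gridded_mean(stations, cell=5.0):
    """Average the stations inside each cell x cell lat/lon box, then combine
    the cell means weighted by cos(latitude) as a stand-in for cell area."""
    cells = {}
    for lat, lon, anom in stations:
        key = (np.floor(lat / cell), np.floor(lon / cell))
        cells.setdefault(key, []).append((lat, anom))
    total = wsum = 0.0
    for vals in cells.values():
        w = np.cos(np.radians(np.mean([v[0] for v in vals])))  # area weight
        total += w * np.mean([v[1] for v in vals])
        wsum += w
    return total / wsum

print("unweighted station mean: %.2f C" % stations[:, 2].mean())
print("grid-weighted mean:      %.2f C" % gridded_mean(stations))
# The unweighted mean is pulled toward the densely sampled cluster; gridding
# stops the over-sampled region from counting as more area than it covers.
```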
As shown above, infilling has no real impact on temperature trends vs. not infilling. The only way you get in trouble is if the composition of the network is changing over time and if you do not remove the underlying climatology/seasonal cycle through the use of anomalies or similar methods. In that case, infilling will give you a correct answer, but not infilling will result in a biased estimate since the underlying climatology of the stations is changing. This has been discussed at length elsewhere, so I won’t dwell on it here.
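A toy example makes the changing-composition point concrete (the stations, elevations, and numbers below are invented): two stations share the same trend but sit at different elevations, and the cooler one stops reporting halfway through the record. Averaging absolute temperatures produces a spurious jump when it drops out; averaging anomalies, or equivalently infilling the missing station from its neighbor's anomaly, does not.

```python
import numpy as np

years = np.arange(1950, 2011)
trend = 0.01 * (years - years[0])        # the same 0.1 C/decade everywhere
valley = 15.0 + trend                     # warm low-elevation station
mountain = 5.0 + trend                    # cool high-elevation station
reports = years < 1980                    # the mountain station closes in 1980
base = (years >= 1950) & (years <= 1979)  # baseline period for climatologies

def avg_absolute():
    """Average whatever absolute temperatures are reported each year."""
    return np.where(reports, (valley + mountain) / 2, valley)

def avg_anomaly():
    """Average anomalies taken from each station's own baseline climatology."""
    v = valley - valley[base].mean()
    m = mountain - mountain[base].mean()
    return np.where(reports, (v + m) / 2, v)

def avg_infilled():
    """Infill the missing station as its own climatology plus its neighbor's
    anomaly (in the spirit of the approach described above), then average."""
    est = mountain[base].mean() + (valley - valley[base].mean())
    filled = np.where(reports, mountain, est)
    return (valley + filled) / 2

for name, series in [("absolutes", avg_absolute()),
                     ("anomalies", avg_anomaly()),
                     ("infilled ", avg_infilled())]:
    print("jump at 1980 using %s: %+.2f C" % (name, series[30] - series[29]))
# Averaging absolutes jumps by ~5 C when the cool station drops out even though
# nothing warmed; anomalies and infilling both show only the real 0.01 C step.
```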
I’m actually not a big fan of NCDC’s choice to do infilling, not because it makes a difference in the results, but rather because it confuses things more than it helps (witness all the sturm und drang of late over “zombie stations”). Their choice to infill was primarily driven by a desire to let people calculate a consistent record of absolute temperatures by ensuring that the station composition remained constant over time. A better (and more accurate) approach would be to create a separate absolute temperature product by adding a long-term average climatology field to an anomaly field, similar to the approach that Berkeley Earth takes.
Changing the Past?
Diligent observers of NCDC’s temperature record have noted that many of the values change by small amounts on a daily basis. This includes not only recent temperatures but those in the distant past as well, and has created some confusion about why, exactly, the recorded temperatures in 1917 should change day-to-day. The explanation is relatively straightforward. NCDC assumes that the current set of instruments recording temperature is accurate, so any time of observation changes or PHA adjustments are done relative to current temperatures. Because breakpoints are detected through pairwise comparisons, new data coming in may slightly change the magnitude of recent adjustments by providing a more comprehensive difference series between neighboring stations.
When breakpoints are removed, the entire record prior to the breakpoint is adjusted up or down depending on the size and direction of the breakpoint. This means that slight modifications of recent breakpoints will impact all past temperatures at the station in question through a constant offset. The alternative to this would be to assume that the original data is accurate, and adjust any new data relative to the old data (e.g. adjust everything in front of breakpoints rather than behind them). From the perspective of calculating trends over time, these two approaches are identical, and it’s not clear that there is necessarily a preferred option.
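A small numerical check of that last point, using a made-up series with a single 0.5 C break: shifting everything before the breakpoint down, or equivalently shifting everything after it up, changes the absolute values but leaves the fitted trend untouched.

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1900, 2015)
raw = 0.007 * (years - years[0]) + rng.normal(0, 0.2, years.size)
raw[years >= 1985] -= 0.5                # a known 0.5 C break in 1985

adjust_past = raw.copy()
adjust_past[years < 1985] -= 0.5         # NCDC-style: move the past to match the present
adjust_future = raw.copy()
adjust_future[years >= 1985] += 0.5      # alternative: move the present to match the past

trend = lambda y: np.polyfit(years, y, 1)[0] * 100   # C per century
print("trend with past adjusted:    %.2f C/century" % trend(adjust_past))
print("trend with present adjusted: %.2f C/century" % trend(adjust_future))
# The two series differ only by a constant 0.5 C offset, so their trends are
# identical; what changes between the conventions is the absolute values that
# get reported for past years.
```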
Hopefully this (and the following two articles) will help folks gain a better understanding of the issues in the surface temperature network and the steps scientists have taken to try to address them. These approaches are likely far from perfect, and it is certainly possible that the underlying algorithms could be improved to provide more accurate results. Hopefully the ongoing International Surface Temperature Initiative, which seeks to have different groups around the world send their adjustment approaches in for evaluation using common metrics, will help improve the general practice in the field going forward. There is also a week-long conference at NCAR next week on these issues which should yield some interesting discussions and initiatives.
Adjustments to data ought always be explained in an open and transparent manner, especially adjustments to data that become the basis for expensive policy decisions.
Good faith was undermined about the time James Hansen sabotaged the air conditioning and opened the windows to scorching outside temperatures in the congressional hearing room in 1988. Good faith collapsed completely with the Climategate emails two decades later.
Good faith my ass.
I realised HADCRUT couldn’t be trusted when I started realising that each and every cold month was delayed (I think it was 1day per 0.05C), whereas each and every hot month was rushed out.
I realised HADCRUT could be trusted, when I went back to check my figures a year later and found that nothing was the same any longer.
I realised HACRUT couldn’t be trusted, when I found out that phil Jones couldn’t use a spreadsheet
I realised HACRUT couldn’t be trusted when I saw the state of their code.
I realised HADCRUT couldn’t be trusted when I realised the same guys were doing it as those scoundrels “hiding the decline”.
And I still know I can’t trust it … when academics like Judith Curry still don’t know the difference between “Quality” as in a system to ensure something is correct and “Quality” as in “we check it”.
This is not a job for academics. They just don’t have the right mind set. Quality is not a matter of figures but an attitude of mind — a focus on getting it right for the customer.
I doubt Judith even knows who the customer is … I guess she just thinks its a vague idea of “academia”.
Of course, none of that contradicts anything Zeke said. Do you have a substantive argument to make?
Quality are all those features and characteristics of a product or service that bear upon the ability to meet stated or implied needs.
The problem with this definition is the word “needs”: if there is a need to confuse or to give rise to false conclusions, then tinkering with the data may well give rise to quality data, i.e. it achieved its purpose.
Quality in terms of data does not imply accuracy or truth.
David may appreciate new Earth-shattering insight into global warming:
http://stevengoddard.wordpress.com/2014/07/07/my-latest-earth-shattering-research/
David Springer wrote:
Such a claim sounds very nuts to me. How is James Hansen supposed to have sabotaged the air conditioning at such an event in such a building? If you don’t want to be called a liar who spreads libelous accusations, what about you provide the evidence for such an assertion?
A noob who didn’t know. Precious. The air conditioning was sabotaged by opening all the windows the night before so the room was filled with hot muggy air when the congressional testimony took place. The testimony was scheduled on the historically hottest day of the year. One of the co-conspirators, Senator Wirth, admitted to all of it in an interview.
http://www.washingtonpost.com/wp-dyn/content/article/2008/06/22/AR2008062201862.html
http://www.pbs.org/wgbh/pages/frontline/hotpolitics/interviews/wirth.html
David Springer: “James Hansen sabotaged the air conditioning and opened the windows to scorching outside temperatures in the congressional hearing room in 1988.”
No, it most assuredly was not James Hansen who switched off the air conditioning. And no doubt if somebody had closed the windows, instead of opening them, you’d be making the same claim it was done purposely to trap heat in the room. Nothing Hansen said that day hinges on whether the windows were open or closed. All very silly.
It wasn’t Hansen himself, it was (then) US Senator Timothy Wirth, who boasted of doing so on the PBS program “Frontline” —
http://www.pbs.org/wgbh/pages/frontline/hotpolitics/interviews/wirth.html
Senator Wirth said WE opened the windows the night before. He wasn’t alone. The “we” was purportedly him and Al Gore. Hansen was the originator of the idea that if the hearing was scheduled during hot weather it would be more effective.
http://www.aip.org/history/climate/public2.htm
Wirth also boasted (same interview) of how they intentionally picked the hottest day/week of the year in DC, how the weather co-operated, and how the original campaign was integral to politics of the Democratic Party and to that year’s (unsuccessful) presidential campaign by Michael Dukakis. So, whatever Hansen thought he was doing, he certainly allowed himself to be the political tool of manipulative and dishonest political partisans of the Democratic Party:
Scottish sceptic says: “I realised HACRUT couldn’t be trusted, when I found out that phil Jones couldn’t use a spreadsheet”
And why would a competent programmer want or need to use a spreadsheet.for data processing?!
Spreadsheets are for accountants. It is pretty amateurish to use one for data processing. However most amateurs that manage to lash up a “chart” in a spreadsheet for some reason think they are then qualified to lay into anyone who is capable of programming and has never needed to rely on point-and-click, cut-and-paste tools to process data.
You’d also look a lot more credible if you could at least get the name of dataset right and realised that it is the work of two separate groups.
There’s plenty to be criticised at CRU, at least try to make credible criticisms.
Skiphil wrote: “Wirth also boasted (same interview) of how they intentionally picked the hottest day/week of the year in DC”
How could Timothy Wirth have known it was going to be the hottest day of the week–let alone the entire summer–weeks in advance of the hearing having been scheduled? Seriously, show a modicum of scepticism. It transpires the air conditioning wasn’t even switched off; it was simply made less effectual because a senator had opened some windows the night before. People believe this diminishes Hansen’s testimony. It does not. Enough distraction. Can we move forward now?
Anon said:
“How could Timothy Wirth have known it was going to be the hottest day of the week–let alone the entire summer–weeks in advance of the hearing having been scheduled?”
They checked the records and found the most-often hottest day of the year in the city.
“People believe this diminishes Hansen’s testimony.”
No, people believe it “embellished” it.
Hansen’s testimony itself was bogus. It needs no diminishing.
He used part of a hot year to make his point about anthro warming.
I agree that they defenestrated “good faith” when East Anglia lost the original climate data they had collected, while at the same time writing in the Climategate emails that they would rather destroy the data than hand it over to skeptics.
So they need to keep all versions of the data. It is not like it wouldn’t fit on $50 worth of hard drive. Except they don’t. They keep it hid and the only way people find out about adjustments is if they take their own snapshots.
The time for “assuming good faith” is long gone, “trust but verify” is more what is needed today.
Anon,
can you READ?? It is Wirth who boasted of seeking the hottest day… of course he couldn’t be sure he would get the very very hottest, but that is what he sought and that is (according to him) what he got. As for distraction, when people like you can explain how Wirth and co. are honest and competent, then we can move on.
Jan,
While sabotaged is not the best term (the air conditioning was turned down or off), it doesn’t change the overall point. Steps were taken to ensure the hearing room was hotter than it normally would have been in order to emphasize the point Wirth and Hansen wanted to get across.
timg56,
the accusation was made by David Springer specifically against James Hansen. Regardless, whether you call it “sabotaging” or “turning off”, I am still waiting for the evidence to back up this accusation. So far, nothing.
Opening up windows the night before on the historically hottest day of the year overwhelmed the air conditioner. Sabotage is exactly the right word. It was Hansen’s suggestion to Wirth to hold the hearing on the hottest day of the year so there’s collusion in black & white. Wirth admitted “we” opened up the windows the night before. The only question is whether “we” included Hansen whose idea it was to stage the hearing in hot weather to be more effective.
Please continue to wait, perlie. Watching you make a fool of yourself over a throwaway comment that you want to blow up into libel is very amusing. Are you going to hold your breath? And stamp your little feet? We can tell that you are not a lawyer, perlie.
David Springer wrote:
Noob? We will see who has the last laugh.
I can do even better, thanks to Anthony Watts with his junk science blog. Here is a video excerpt of the TV broadcast, where the opening of the windows and the AC issue is addressed and Wirth is asked about this. Watts had tried this one on me already some time ago, and linked the video himself, apparently totally delusional about what it would prove.
https://www.youtube.com/watch?v=wXCfxxXRRdY
Not a single word in there that implicates James Hansen in the matter. Neither by Wirth, nor by the narrator. So how does this work with such an accusation in “skeptic” land? By some “skeptic” assigning of guilt by association?
It’s all just about throwing dirt, isn’t it? Facts don’t matter.
As someone else has already correctly pointed out. The windows and AC thing is irrelevant for the content of Hansen’s statement anyway.
Like I pointed out with links, Hansen suggested to Wirth that his November testimony would have been more effective in hot weather. Wirth then says in an interview “we” (maybe his staff, maybe a climatologist) determined that June 23rd was on average the hottest day of the year in Washington and scheduled the hearing on that day. Then “we” (Wirth and unnamed others) opened up all the windows the night before so the hot humid air overwhelmed the air conditioning. I don’t know, but usually the way these things work is Hansen would have flown in the day before and spent some face time with those in the senate on his side. Al Gore was US Senator from Tennessee so almost certainly all three were in town that night and no one is going to question two United States senators prepping a hearing room. It went off like a frat club stunt. Given the heat was Hansen’s idea in the first place and knowing how guys behave, probably all three of them were in on it and not exactly sober either. But hey, that’s just a guess. Wirth knows and didn’t say.
Let’s see, Jan Perlwitz!
“A Climate Hero: The Testimony
Worldwatch Institute is partnering with Grist to bring you this three-part series commemorating the 20-year anniversary of NASA scientist James Hansen’s groundbreaking testimony on global climate change next week. Read part one here.
“The greenhouse effect has been detected, and it is changing our climate now,” James Hansen told the Senate Energy Committee in 1988. An unprecedented heat wave gripped the United States in the summer of 1988. Droughts destroyed crops. Forests were in flames. The Mississippi River was so dry that barges could not pass. Nearly half the nation was declared a disaster area.
The record-high temperatures led growing numbers of people to wonder whether the climate was in some way being unnaturally altered.
Meanwhile, NASA scientist James Hansen was wrapping up a study that found that climate change, caused by the burning of fossil fuels, appeared inevitable even with dramatic reductions in greenhouse gases. After a decade of studying the so-called greenhouse effect on global climate, Hansen was prepared to make a bold statement.
Hansen found his opportunity through Colorado Senator Tim Wirth, who chose to showcase the scientist at a Congressional hearing. Twenty years later, the hearing is regarded as a turning point in climate science history.
To build upon Hansen’s announcement, Wirth used the summer’s record heat to his advantage. “We did agree that we should figure out when it’d be really hot in Washington,” says David Harwood, a legislative aide for Wirth. “People might be thinking of things like what’s the climate like.”
They agreed upon June 28. When the day of the hearing arrived, the temperature in the nation’s capital peaked at 101 degrees Fahrenheit (38 degrees Celsius). The stage was set.
Seated before the Senate Committee on Energy and Natural Resources, 15 television cameras, and a roomful of reporters, Hansen wiped the sweat from his brow and presented his findings. The charts of global climate all pointed upward. “The Earth is warmer in 1988 than at any time in the history of instrumental measurements,” he said. “There is only a 1 percent chance of an accidental warming of this magnitude…. The greenhouse effect has been detected, and it is changing our climate now.”
Oh, a one percent chance of a heat wave.
Great science testimony too, Jan!
Question Why does the Daily Racing Form publish “adjusted” Beyer speed figures for each horse? Why not just the raw times?
Answer Because considering *ALL* the available information yields *FAR* better betting strategies.
Question Why does the strongest climate science synthesize historical records, paleo-records, and thermodynamical constraints??
Answer Because considering *ALL* the available information yields *FAR* better assessments of climate-change risk.
These realities are *OBVIOUS* to *EVERYONE* — horse-betters and climate-science student alike — eh Climate Etc readers?
Why do climate scientists hide the raw data? Why do they use anomalies and 5 years smoothing to hide the data?
You can’t spell anomalies with LIES.
… without … :)
Ooh, can I try? :-p
If you average absolutes and the composition of the network is changing over time you will be absolutely wrong because the change in underlying climatology will swamp any signal you are looking for.
On second thought, it doesn’t have the same pithy ring.
I told you and showed you the graph that with so many stations in the USA gridding makes very small changes and none to the trend.
NOAA/USHCN uses absolutes.
It seems very misleading for you to discuss USHCN/NOAA using anomalies when they don’t use anomalies.
By gridding I mean 1×1 … 5×5 makes a big difference.
Judith,
I dont think it fosters a good discussion to invite someone with Zeke’s experience in this field who is doing this work for free ( this is not on the list of things we are currently doing at Berkeley) and then allow people like bruce to insinuate that Zeke and others are lying.
Unfortunately I am in and out of airports today so I can’t catch everything. ‘liar’ is one of the forbidden words (triggers moderation) looks like I need to add lying also.
Mosh
Lets hope everyone takes a step back and takes the time to read this substantial volume of data without insinuating bad faith
http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-605444
tonyb
Mosher: “Stop criticizing us …. ”
Is it not true that NOAA uses absolutes?
Bruce,
Apart from some very specific cases, NCDC uses anomalies: http://www.ncdc.noaa.gov/temp-and-precip/national-temperature-index/
Zeke, USCRN is a special case.
“Why not display USCRN absolute temperatures?
As was shown under the NTI tab, the normal temperatures for nClimDiv stations are different than those for USCRN stations due to differences in location, station exposure, and instrumentation. For example, June maximum temperatures are warmer in absolute terms at nClimDiv sites, so USCRN absolute national maximum temperatures would tend to be about 1.5°F less even if both are exactly normal for each network that month. To avoid the confusion of an apples-to-oranges comparison, both USCRN and nClimDiv national temperature index values are represented as anomalies from their respective normals. nClimDiv absolute temperatures for the nation or climate regions can be seen on NCDC’s Climate at a Glance.”
http://www.ncdc.noaa.gov/temp-and-precip/national-temperature-index/faq
Judith
I was put in moderation for saying there was NO hoax or conspiracy! However presumably the moderation didn’t pick up the ‘no’ bit. So the system isn’t perfect but it’s not important.
tonyb
Why does the Daily Racing Form publish “adjusted” Beyer speed figures for each horse? Why not just the raw times?
Because that is what the customer wants.
Now answer me this … would you be happy with a bank statement with “adjusted” figures for each and every transaction.
And what would you say if they said: “Because considering *ALL* the available information yields a *FAR* better assessment”?
Scottish Sceptic and A fan of *MORE* discourse: Why does the Daily Racing Form publish “adjusted” Beyer speed figures for each horse? Why not just the raw times?
Because that is what the customer wants.
Now answer me this … would you be happy with a bank statement with “adjusted” figures for each and every transaction.
And what would you say if they said: “Because considering *ALL* the available information yields a *FAR* better assessment”?
The issue relates to how accurately the fundamental data have been recorded in the first place. There are people, including auditors, who do sample financial records and perform Bayesian hierarchical modeling in order to assess the overall effects of errors, and their likely prevalence.
Don’t give the banksters any ideas, Scottish. ;)
The Beyer speed analogy got to me. It succeeds at what it was designed to do. Kudos.
As I am oft wont, lay curiosity (in climate science and horse betting) forced an immediate investigation into Beyer speed.
As a thought and pattern matching exercise, the Beyer speed analogy is quite good. However, within a few minutes, I found an erudite bettor who supplies a different take on the underlying premise that Beyer speed, while working as designed, furnishes reliable data on which to bet one’s wad of cash. He wrote:
“The theory:
Horses that can win races are the ones that can significantly IMPROVE their previous race speed figure. Today’s winner is not the horse with the highest figure from its last race but the horse that is most likely to REACH its highest figure today. Bold-face Beyer figures function essentially as mirages, optical illusions that distort racing reality. Yes, they are more than reasonably accurate most of the time. But they are not worth their face value, for an accurate rendering of the past is not the same thing as an objective prediction of the future. Better stated, the past performances are something that should be seen dynamically, as if they were part of a moving process.”
It seems climate science and horse betting share more than one initially thinks.
I enjoyed the analogy. As we attempt to understand scientific research, numskulls like me could use more of them.
Fan,
What is your favorite conspiracy today?
He is too busy working on his “Climate Youth” project to bother answering a question like that.
The author states: “Their methods may not be perfect, and are certainly not immune from critical analysis, but that critical analysis should start out from a position of assuming good faith and with an understanding of what exactly has been done.”
But surely incentives matter. Peer pressure matters. Government funding matters. Beware of the ‘romantic’ view of science in a politicized area.
When an auditor checks accounts, they do not assume bad faith.
Instead they just assure the figures are right.
So, why then when skeptics try to audit climate figures do they immediately assume we are acting in bad faith?
Because academics don’t have a culture of having their work checked by outsiders
The simple fact is that academics cannot stomach having outsiders look over their figures. And this is usually a symptom of an extremely poor quality regime
Here’s an audit of HADCRUT3
In July 2011, Lubos Motl did an analysis of HADCRUT3 that neatly avoided all the manipulations. He worked with the raw data from 5000+ stations with an average history of 77 years. He calculated for each station the trend for each month of the year over the station lifetime. The results are revealing. The average station had a warming trend of +0.75C/century +/- 2.35C/century. That value is similar to other GMT calculations, but the variability shows how much homogenization there has been. In fact 30% of the 5000+ locations experienced cooling trends.
What significance can you claim for a 0.75C/century claim when the standard deviation is 3 times that?
Conclusions:
“If the rate of the warming in the coming 77 years or so were analogous to the previous 77 years, a given place XY would still have a 30% probability that it will cool down – judging by the linear regression – in those future 77 years! However, it’s also conceivable that the noise is so substantial and the sensitivity is so low that once the weather stations add 100 years to their record, 70% of them will actually show a cooling trend.
Isn’t it remarkable? There is nothing “global” about the warming we have seen in the recent century or so. The warming vs cooling depends on the place (as well as the month, as I mentioned) and the warming places only have a 2-to-1 majority while the cooling places are a sizable minority.
Of course, if you calculate the change of the global mean temperature, you get a positive sign – you had to get one of the signs because the exact zero result is infinitely unlikely. But the actual change of the global mean temperature in the last 77 years (in average) is so tiny that the place-dependent noise still safely beats the “global warming trend”, yielding an ambiguous sign of the temperature trend that depends on the place.”
http://motls.blogspot.ca/2011/07/hadcrut3-30-of-stations-recorded.html
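For what it’s worth, the kind of per-station summary being described is straightforward to sketch. The code below uses purely synthetic station records (it is not HADCRUT3 and is not meant to reproduce Motl’s numbers); it only shows how a mean trend, the station-to-station spread, and the fraction of cooling stations are obtained.

```python
import numpy as np

rng = np.random.default_rng(11)
n_stations, n_years = 1000, 77
years = np.arange(n_years)

# Invented station records: a modest shared warming trend plus sizable
# station-level variability (the magnitudes are arbitrary).
data = 0.0075 * years + rng.normal(0, 2.0, (n_stations, n_years))

trends = np.array([np.polyfit(years, s, 1)[0] * 100 for s in data])  # C/century
print("mean station trend: %+.2f C/century" % trends.mean())
print("station-to-station spread (1 sigma): %.2f C/century" % trends.std())
print("stations with cooling trends: %.0f%%" % (100 * (trends < 0).mean()))
# A positive mean trend is entirely compatible with a sizable minority of
# individual station records showing cooling, simply because the local noise
# is large compared with the trend.
```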
“So, why then when skeptics try to audit climate figures do they immediately assume we are acting in bad faith?”
We dont.
But imagine this.
Imagine an auditor came into your company
A = auditor
S = Scottish
A: Can I see your books.
S: Yes here they are.
A: (ignoring the books). Here is a chart I found on the internet showing your bogus adjustments to income.
S: please look at our books.
A: no first explain this random stuff I found on the internet.
S: here are the books, can you just audit us?
A: you should be audited
S: I thought thats what you were doing, here are the books. please look.
A: What are your interests in this company?
S: I own it. I make money
A: AHHHH, so how can I trust these books
S: can you just look at the books.
A: first I want to talk about this youtube video. See this chart, the red is really red.
S: I didnt make that video, can you just look at the books.
A: do you have an internal audit.
S: ya, here are some things we published, you can read them.
A: Ahhh, who reviewed this.
S: It was anonymous, just read the paper.
A: How do I know your friends didnt review that, I dont trust those papers.
S: well, read them and ask me questions.
A: I’m giving the orders here tell me what is in the papers.
A: and where are your books?
S; I gave you the books.
A: who is your accountant?
S: my wife, she does all the books
A…. Ahhh the plot thickens… you need to be audited.
S: err, here are the books.
A: oh trying to make it my job huh.. Im here in good faith
S: ah ya, to audit, here are the books.
A.not so fast, youre trying to shift the burden of proof
“But surely incentives matter. Peer pressure matters. Government funding matters. Beware of the ‘romantic’ view of science in a politicized area.”
When JeffId and RomanM (skeptics) started to look at temperature series the incentive was to
A) find a better method
B) Show where GISS and CRU went wrong
Their results showed more warming.
When I first started looking at temperatures my incentive was simple. I wanted to find something wrong, specifically with adjustments. 7 years later I can only report that I could find nothing of substance wrong with them.
When Muller and Berkeley started to look at this matter their incentive was to build a better method and correct any mistakes they found. Koch and others found this goal laudable and funded them. With this incentive what did Berkeley find? Well, the better method extended the record, gave you a higher spatial resolution and showed that the NOAA folks basically get the adjustments correct.
Many people, all with the incentive to find some glaring error, some mistake that would overturn the science, all came to the same conclusion. While NOAA isnt perfect, while we can make improvements at the margin, the record is reliable. The minor issues identified dont change the fundamental facts: It has been warming since the LIA. There are no more frost fairs in London. The estimates of warming since that time using some of the data, or all of the data, using multiple methods (CAM, RSM, Kriging, Least Squares, IDW) all fall within narrow bounds. The minor differences are important to specialists or to very narrow questions (see Cowtan and Way), but the big picture remains the same
Yup, that is right. Small changes (a la Cowtan and Way) at the margins do happen. But nothing fundamental has changed. Is there still some uncertainty? Sure, at the margins, but the data are quite clear: there has been average warming in the range of 0.8C to 0.9C since the mid 19th century.
Mosh: One of the things that personally gives me faith in some of the newer temperature records is that skeptics like you, Roman, Jeff and then Muller et al get similar results. Unfortunately, dealing with people like Goddard is now prompting you to say dubious things like: “Well, the better method extended the record, gave you a higher spatial resolution and showed that the NOAA folks basically get the adjustments correct”. Several years ago, you would have recognized that no one knows the “correct adjustments”. You would remember that the half-dozen reconstructions that “reproduced” Mann’s hockey stick did not make Mann “correct”. Pairwise adjustments are hypotheses that make assumptions about the nature of the events that produced undocumented breakpoints, not tested theories. More than half of US warming and about a quarter of global warming can be traced back to breakpoint corrections and the total number of breakpoints identified has risen to about one per decade (if I remember correctly). Only a modest fraction of these breakpoints are due to properly-studied phenomena like TOB and instrumental changes. Any undocumented breakpoint could represent a return to earlier observing conditions (which had gradually deteriorated) or a shift to new conditions. Worst of all, temperature change still appears to be reported as if all the uncertainty arises from scatter in the raw data and none from systematic errors that could arise from processing the data.
This also happens:
A: Can I see your books?
S: No – you just want to find something wrong with them.
Trust is not a part of the game and hasn’t been for some time. About the time cordiality disappeared from the landscape.
@Frank 5:14 pm
Pairwise adjustments are hypotheses that make assumptions about the nature of the events that produced undocumented breakpoints, not tested theories. ….. Worst of all, temperature change still appears to be reported as if all the uncertainty arises from scatter in the raw data and none from systematic errors that could arise from processing the data.
Agree. Every adjustment adds error.
Undocumented breakpoints derived from differences to a krigged fuzzy surface (one with error bar thickness) defined by uncertain control points in an iterative process is a source for huge amounts of error.
But is temperature uncertainty reported as if it derives from the average anomaly and not derived from the measured daily Tmin and Tmax? If a month’s mins and maxes are 10 degrees C apart, the Trmse (mean standard error) of the month’s Tave is a minimum of 0.67 deg C.
Stephen Rasey: Every adjustment adds error.
That is not true. Errors and random variation are in the data, but the best adjustments (like the BEST adjustments) do the best job of reducing the error. This is proved mathematically for some cases, and it has been shown computationally by simulations where the “true” values and “errors” and “random variation” are known by fiat. I put some references in my comments to Rud Istvan.
“Undocumented breakpoints derived from differences to a krigged fuzzy surface (one with error bar thickness) defined by uncertain control points in an iterative process is a source for huge amounts of error.”
Proof by assertion.
Not backed up by any example, any data, or any analysis showing what is claimed.
Typical skeptic.
Matthew R Marler,
“…but the best adjustments (like the BEST adjustments) do the best job of reducing the error.”
If a parasite trend affects the raw data, for example the increase in UHI, BEST uses the worst methods. Indeed, BEST removes very effectively the fixes present in the raw data in the form of discontinuities.
For this reason the average of absolute temperature is a better method than anomalies.
@Matthew R Marler at 11:42 am |
Stephen Rasey: Every adjustment adds error.
That is not true. Errors and random variation are in the data, but the best adjustments (like the BEST adjustments) do the best job of reducing the error.
It is true. Every adjustment, even the subtraction of the mean to create the anomaly is the addition of an estimated parameter. Error is always added.
What may be confusing is that adjustments can improve signal to noise as you add error. Or more precisely, the act of improving signal to noise must add error in the process, but in some circumstances the signal adds faster than the error.
A case in point is the seismic common depth point move-out correction. It is a process by which a recorded signal, offset by a known distance from the source, is variably compressed in the time-domain to estimate an adjusted record equivalent to a zero-offset source-receiver pair. The velocity used in the move-out is estimated, an average of subsurface velocities, but the right estimate increases coherence of events that arrive at different times in the raw data. When you get it right, it greatly increases signal/noise ratio. But high signal to noise doesn’t prove it is right. It is possible to make noise coherent, too.
Homogenization could act in much the same way as seismic stacking. It is possible that “stacking” temperature anomalies will improve the signal to noise ratio as it adds error to the process. The question is, does it? It adds error — of that there is no doubt. Does signal improve faster than error? Or are we just making coherence out of noise and added error?
(reposted, first attempt was at the wrong parent in the thread)
@Steven Mosher at 11:57 am |
Rasey: “Undocumented breakpoints derived from differences to a krigged fuzzy surface (one with error bar thickness) defined by uncertain control points in an iterative process is a source for huge amounts of error.”
Proof by assertion.
Not backed up by any example, any data, or any analysis showing what is claimed.
Please argue any of the following points by methods that exclude ad hominem.
1. Breakpoints are derived from something.
2. Breakpoints are created where documentation of changes to the station does not exist.
3. BEST, and others, use krigging to create a regional field to compare to the station under study.
4. Breakpoints, empirical undocumented breakpoints, can be created from a function of differences between the station and the krigged field.
5. The krigged regional field is defined by control points.
6. These control points are other temperature record stations.
7. Every temperature record contains error and thus contains some uncertainty. (I will expand on this in a following comment)
8. When at least one control point of a krigged surface has uncertainty, i.e. error bars, the krigged surface itself is fuzzy — every point of the surface that the uncertain control point influences gains uncertainty.
9. All stations have uncertainty, so all control points of the krigged surface have uncertainty. Therefore the krigged surface is fuzzy at all points.
10. Zeke himself said it was an iterative process.
11. “a source for huge amounts of error.” Well, now there you have me…. I didn’t define “huge”. Huge in this case means “at least on the order of, or larger than, the signal sought.”
What I find absolutely amazing about the people making the adjustments and the people defending the adjustments is their belief that it is “Better”.
Better for what, certainly not the historic record.
How can declaring old temperatures “WRONG” by replacing them with “calculated temperatures” be right.
The people that lived through the 30s in the USA did not experience “calculated” temperatures, they experienced the real thing as reported by the thermometers of the day. They experienced the real effects of the temperatures and the Dust Bowl droughts.
In Australia in the 1800s they experienced temperatures so high that Birds & Bats fell out of the air dead of Heat Exhaustion, in the early 1900s they had the biggest natural fires in the world and yet according to the Climate experts after adjustments it is hotter now than then.
It is like historians going back to the second world war and changing the number of Allied Soldiers who died and making it far less than the real numbers. Try telling that to their families and see how far you would get.
Based on these CRAP adjustments we hear the “Hottest” this and “Unprecedented” that, the most powerful storms, Hurricanes & Typhoons, more tornadoes, faster sea level rise when anyone even over 60 knows, based on their own experiences that they are Lies.
I remember as a child in Kent in the UK during the 50s & 60s the Tar in the road melting in the summers due to the heat, followed by a major thunderstorm and flooding with cars washed down the streets and man hole covers thrown up by the water. It is no hotter in the UK now than it was then.
THE ADJUSTMENTS DO NOT MAKE IT A MORE ACCURATE ACCOUNT OF HISTORY.
It is not REAL, that is why the work that Steve Goddard does with Historic Data is so important, it SHOULD keep scientists straight but it doesn’t.
Stephen Rasey: What may be confusing is that adjustments can improve signal to noise as you add error. Or more precisely, the act of improving signal to noise must add error in the process, but in some circumstances the signal adds faster than the error.
I think that you are going in circles. The Bayesian hierarchical model procedure produces the estimates that have the smallest aggregate mean square error. They do not add error to the data, or add error to the estimate.
A. C. Osborne: What I find absolutely amazing about the people making the adjustments and the people defending the adjustments is their belief that it is “Better”.
Better for what, certainly not the historic record.
The procedure used by the BEST team produces estimates that have the smallest attainable mean square error. There is a substantial literature on this topic.
phi: If a parasite trend affects the raw data, for example the increase in UHI, BEST uses the worst methods.
How is that known? The BEST team and others have made extensive efforts to estimate and account for UHI effects, and they are not the major source of warming in the instrumental record.
More on Point 7 above:
7. Every temperature record contains error and thus contains some uncertainty.
Let us list the sources of uncertainty in each temperature record:
1. Systematic temperature miscalibration of the instrument.
2. Weathering of the instrument as a function of time
3. Instrumental Drift away from calibration.
4. Precision of daily reading
5. Accuracy of daily reading (including transposition in the record)
6. Instrument min-max reset error resulting from Time of Observation policy.
7. Data gaps from vacation, instrument failure, etc.
There are others, but I want to turn to the big errors that occur in processing.
A great deal of the temperature record used is based upon the station’s Average monthly temperature Anomaly. What are the sources of uncertainty involved with it? What is the Temp Anomaly “Mean Standard Error” (TArmse)?
First we must find the Trmse of the month’s avg temp.
Trmse(Month i) = StDev(30 Daily Ave. Temp) / sqrt (30)
Right?
Wrong. We never measure a Daily Ave. Temp. We measure instead a min and a max. Instead,
Trmse(Month i) = StDev (30 Daily Min + 30 Daily Max) / sqrt (60)
If we assume a flat constant avg temp of 10 deg C for the month, coming from thirty 5 deg C min readings and thirty 15 deg C max readings.
Trmse = 0.645 deg C.
So the Mean for a month is 10.000 deg C, but the 90% confidence range is 8.92 to 11.08 deg C. That is a big error bar when you are looking for 0.1 to 0.3 deg C/decade.
You want to convert Tave(month) to an anomaly TAavg.
Well that’s just a bulk shift of the data. There is no uncertainty.
Wrong.
A bulk shift would apply if and only if each station and each month received the same bulk shift. But we don’t do that. Each station-month is adjusted by an estimate of the mean for that month and that station
Ok. Suppose we have 30 years of the very same month: 30 days of 5 deg low and 15 deg high. The 30 year mean is 10 deg C. What is the Trmse(30 year, month i)? It is Trmse(month i)/sqrt(30). In this case
Trmse(30 year, month i) = 0.645 / sqrt(30) = 0.118 deg C.
So, the 30 year Tavg for a month is known to +/- 0.193 deg C at a 90% confidence.
But, we are going to create the anomaly for the month: that quantity is (Tave(month), Trmse(month)) + (-Tave(30 year, month), Trmse(30 year, month)).
The temp anomaly mean is a nice fat zero.
but the rmse of the anomaly = sqrt(0.645^2 + 0.118^2)
TArmse(month, 30 year base) = 0.656 deg C, or +/- 1.079 deg C at 90% confidence.
The uncertainty in the 30 year mean did not add much to the TArmse of the month, but it never reduces it. Furthermore, in this discussion of breakpoints, if we make segments short, say 5 years, then the uncertainty of the mean, Trmse(5 year, month) = 0.289 deg C. Adjusting by a 5 year mean between breakpoints would yield a
TArmse(month, 5 year base) = sqrt(0.645^2 + 0.289^2) = 0.716 deg C
or +/- 1.179 deg C at a 90% confidence interval.
So more breakpoints and shorter segments increase the uncertainty in the Temperature Anomaly data stream. If you want to tease out climate signals of a fraction of a degree, you need long segments.
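A quick numerical rendering of the arithmetic above, under the same assumptions (a flat month of 5 deg C minima and 15 deg C maxima, with errors treated as independent and combined in quadrature); small rounding differences from the figures quoted are to be expected.

```python
import numpy as np

tmin, tmax, n_days = 5.0, 15.0, 30
daily = np.array([tmin, tmax] * n_days)          # 60 readings for the month

trmse_month = daily.std() / np.sqrt(daily.size)  # standard error of the monthly mean
trmse_30yr = trmse_month / np.sqrt(30)           # error of a 30-year baseline for that month
trmse_5yr = trmse_month / np.sqrt(5)             # error of a 5-year baseline (short segment)

# Anomaly = monthly mean minus baseline mean; independent errors add in quadrature.
tarmse_30 = np.hypot(trmse_month, trmse_30yr)
tarmse_5 = np.hypot(trmse_month, trmse_5yr)

print("Trmse(month)             = %.3f deg C" % trmse_month)   # ~0.645
print("TArmse(30-year baseline) = %.3f deg C" % tarmse_30)      # ~0.656
print("TArmse(5-year baseline)  = %.3f deg C" % tarmse_5)       # ~0.71
# Shorter baseline segments between breakpoints mean a less certain baseline
# and hence a larger anomaly uncertainty, which is the point being made above.
```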
Matthew R Marler,
Excuse me, but you write a lot on this thread while you do not seem to master the subject. I suggest some literature:
http://onlinelibrary.wiley.com/doi/10.1029/2001JD000354/pdf
http://onlinelibrary.wiley.com/doi/10.1002/joc.689/pdf
Good reading.
@Matthew R Marler at 1:59 pm |
I think that you are going in circles
No. I don't deny that you can reduce the mean standard error, or the mean squared error, by increasing the sample size when errors are random. But in the process the variances add at each step; it is only the error of the mean that is reduced by a larger number of samples.
You cannot subtract error, at least not when the error is random. Errors accumulate. Every estimate and adjustment contains error.
Matthew R Marler,
To be clear, Hansen et al. 2001 will show you why the BEST method is inadequate in the case of increasing UHI. Regarding Böhm et al. 2001, you will find an interesting evaluation of the UHI effect on the Alpine network at the end of the nineteenth century (greater than 0.5 °C).
Well, it’s a good thing that the errors aren’t random! =D
Seriously, though. TOB is a systematic error, not random.
phi: http://onlinelibrary.wiley.com/doi/10.1029/2001JD000354/pdf
http://onlinelibrary.wiley.com/doi/10.1002/joc.689/pdf
I have written enough for one thread, but I do thank you for the link to the paper.
phi, I read the paper that you linked to, and here is a quote from the summary: "This paper discusses the methods used to produce an Alpine-wide dataset of homogenized monthly temperature series. Initial results should illustrate the research potential of such regional supra-national climate datasets in Europe. The difficulties associated with the access of data in Europe, i.e. related to the spread of data among a multitude of national and sub-national data-holders, still greatly limits climate variability research. The paper should serve as an example of common activities in a region that is rich in climate data and interesting in terms of climatological research. We wanted to illustrate the potential of a long-term regional homogenized dataset mainly in three areas:
(i) the high spatial density, which allows the study of small scale spatial variability patterns;
(ii) the length of the series in the region which shows clear features concerning trends starting early in the pre-industrial period; and
(iii) the vertical component in climate variability up to the 700-hPa level.
All these illustrate the advantage of using carefully homogenized data in climate variability research."
Not only did they "homogenize", but they worked with deviations rather than restricting themselves to absolute temps, and they estimated breakpoints. They were able to identify a trend "like" UHI, despite your assertion that such methods are the worst when such trends are present. I don't see how it supports your original claim: "If a spurious (parasite) trend affects the raw data, for example increasing UHI, BEST uses the worst methods. Indeed, BEST very effectively removes the fixes present in the raw data in the form of discontinuities."
For this reason the average of absolute temperature is a better method than anomalies.
The main obvious difference is that the BEST team carried out an explicitly Bayesian hierarchical model, whereas this team seems not to have.
@Windchasers at 4:41 pm |
Well, it’s a good thing that the errors aren’t random! =D
Seriously, though. TOB is a systematic error, not random.
I agree. Systematic corrections can be added, as long as they carry the uncertainty in the magnitude of the correction. That flows back to the move-out example I used above. It is a real effect whose magnitude must be estimated, perhaps by looking for the value that maximizes coherence.
TOB is a valid correction under some circumstances (personally, I think it is overrated, but valid). The magnitude of the correction can only be estimated, even if it is a Bayesian estimate. But the mean standard error of the estimated TOBS correction is not zero and could be more than half the size of the correction itself. We must estimate how much to apply at that station, in that month, in that year (when the time of the change is not documented).
To apply a TOBS correction AFTER the recording time policy was really changed is certainly adding error.
Matthew R Marler,
To remove discontinuities is a bad method if those discontinuities are in fact corrections. The results of Böhm and BEST are equally bad, since both remove these fixes and thereby restore the bias in its full amplitude. I proposed Böhm because he explains the bias of discontinuities by a large UHI effect on the network in the nineteenth century. If it was important at that time, it can only have progressed since.
Otherwise, I can only encourage you to read chapter 4 of Hansen et al. 2001. You will read, for example: "…if the discontinuities in the temperature record have a predominance of downward jumps over upward jumps, the adjustments may introduce a false warming, as in Figure 1."
This characteristic is indeed present in the raw temperature data worldwide.
RE: Stephen Rasey at 5:34 pm |
TOB is a valid correction under some circumstances. …. The magnitude of the correction can only be estimated, even if it is a Bayesian estimate. But the mean standard error of the estimated TOBS correction is not zero and could be more than half the size of the correction itself.
I must add that the error associated with the uncertain estimate of the magnitude and timing of the TOBS correction is also a systematic, non-random error. If you over- or underestimate the TOBS correction for one month, you will do so systematically for many other months. So we cannot assume the error will shrink by the sqrt(number of months it is applied).
Likewise, when we create the temperature anomaly, we must add the negative of the mean for the month, along with its mean standard error. The errors applied for May 2013 and June 2013 come from different estimates of the mean, so those errors add as independent. But the errors added to TA(May 2013) and TA(May 2012) come from the same estimate of the mean, so the mean standard error is NOT random between years for the same month, though it is likely random between stations.
To apply a TOBS correction AFTER the recording time policy was really changed is certainly adding error.
No, I don’t think so. The TOB creates an ongoing bias – a hot bias if temperatures are recorded near the hottest part of the day, and a cold bias if temperatures are recorded near the coldest part.
If we switched from recording in the afternoon to recording in the morning, I’d rather see us adjust for both biases, not just one. It seems more logically consistent that way.
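For concreteness, here is a small, self-contained simulation of that mechanism. The diurnal cycle, its amplitude, and the day-to-day variability are all invented for illustration, not taken from any station: a min/max thermometer reset near the afternoon maximum double-counts unusually hot afternoons (warm bias), while one reset near the morning minimum double-counts unusually cold mornings (cool bias).

import math
import random

random.seed(0)

HOURS = 24
N_DAYS = 3000
PEAK_HOUR = 15      # assumed warmest hour of the day
AMPLITUDE = 6.0     # assumed half-range of the diurnal cycle, deg C

# Hourly series: an idealized diurnal cycle plus independent day-to-day weather.
temps = []
for day in range(N_DAYS):
    daily_anomaly = random.gauss(0.0, 3.0)   # day-to-day variability
    for hour in range(HOURS):
        diurnal = AMPLITUDE * math.cos(2 * math.pi * (hour - PEAK_HOUR) / HOURS)
        temps.append(10.0 + daily_anomaly + diurnal)

def minmax_mean(reset_hour):
    """Long-run average of (Tmax + Tmin)/2 from a thermometer reset at reset_hour."""
    means, start = [], reset_hour
    while start + HOURS <= len(temps):
        window = temps[start:start + HOURS]
        means.append((max(window) + min(window)) / 2.0)
        start += HOURS
    return sum(means) / len(means)

baseline = minmax_mean(0)   # midnight reset = calendar-day max/min
for reset in (7, 12, 17):
    print(f"reset at {reset:02d}:00 -> bias vs midnight reset: {minmax_mean(reset) - baseline:+.2f} C")

The size of the biases depends entirely on the assumed diurnal range and day-to-day variability, which is why the published TOB corrections are derived from real hourly data rather than a toy like this.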
@Windchasers at 6:03 pm |
If we switched from recording in the afternoon to recording in the morning, I’d rather see us adjust for both biases, not just one. It seems more logically consistent that way.
I cannot argue it wouldn’t be more consistent.
If you want to apply a different TOBS(morning), TOBS(afternoon), TOBS(noon), and TOBS(late evening), I have no theoretical objection, provided the mean standard error of the adjustment is carried along and another error term is added to account for the probabilistic uncertainty that the wrong adjustment was used.
You want to apply a 0.05 deg C TOBS(morning) adjustment with a 0.15 deg C mean standard error uncertainty? Knock yourself out.
“The large contribution of adjustments to century-scale U.S. temperature trends lends itself to an unfortunate narrative that “government bureaucrats are cooking the books”.
I used to laugh at accusations of conspiracy among establishment climate scientists. Then I read the climate-gate emails. I’m not laughing anymore.
If there is an "unfortunate narrative," these guys have no one to blame but themselves.
please do not tar the NOAA people with the same brush as the CRU people.
You know, early on in climategate when the focus was on CRU, I used to get mails from right-wing organizations and people telling me that "we" had to find a way to turn this into a NOAA scandal.
needless to say they got an earful from me.
Climategate is not an indictment of the whole profession.
people's attempts to make climategate about the temperature series, or about all climate scientists, are part of the reason why the investigations were botched
Steve: Surely you don’t believe the Climategate investigation was botched ONLY because of a need to protect the validity of CRUTemp? The profession had other temperature records to fall back upon. Has the profession even recognized the mistakes that were made? What actions have been taken to ensure that problems don’t occur again? How about releasing all data and processing programs with publication? (You might wish to re-read your own book.)
Steven Mosher: please do not tar the NOAA people with the same brush as the CRU people.
the difficulty there is that some NOAA people (including writers at RealClimate) defended the bad practices revealed in the CRU emails. So the NOAA people tarred themselves.
I have to make this my last post, so if you reply you'll have the last word. Your tenacity in defense of Zeke's post and the BEST team is admirable, though I disagree with you here and there.
=> “If there is an ‘unfortunate narrative,” these guys have no one to blame but themselves.”
Indeed. They made you do it.
Hold on there, big fella.
They could have published all adjustments, with original data and justifications based on the literature, instead of having skeptics discover it in the worst possible way: suspecting something was up, recording a snapshot, then watching the data change unannounced, always in ways that increased the warming trend. So yeah, they made skeptics do it.
They could have published all adjustments, with original data, and justifications based on the literature, instead of…
The adjustments and justification are right there in the literature, in papers ranging from 10-30 years old. And the data, justifications, adjustments, and explanations are available on the NCDC website:
http://www.ncdc.noaa.gov/monitoring-references/faq/temperature-monitoring.php
How much longer were they supposed to wait for you to do your due diligence?
Don’t blame the scientists for your laziness.
Since the antics of Phil Jones and the CRU data, there is a certain Caesar's-wife expectation of historical climate data, on which depend decisions involving trillions of dollars.
Every time published data is modified, it should be noted as modified where it is published, along with a link to the previous data, and a link to the peer reviewed justification for the change.
I am just suggesting strategies for coping with the appearance of a “thumb on the scale” since the apparent fact that the adjustments strongly trend in a single direction already looks bad enough.
You guys are just trying to make skeptics, I swear. Take the steam out of these criticisms up front. Treat this data as transparently as if it were a bank statement to the owner of the money, because it is far more important than that.
Responding with "trust us", name-calling, or questioning the motivation of anybody who doesn't automatically trust such important data on the say-so of obviously politically motivated climate scientists like Hansen, for example, is simply no longer an option.
Mosher said:
“Climategate is not an indictment of the whole profession.”
Oh, so the profession took care of it in a timely, open, and transparent manner.
Thanks for bringing truth, Steven
That is an interesting question. How much can we hold the profession responsible for the actions of some of its prominent members?
Mosher – how do you rate the profession’s response to CRU emails?
For me, how the profession reacts to their outing is critical. Certainly, my information about the response by the profession was partial and probably biased, but the reaction of the climate/temperature profession to the CRU emails as a whole did not bolster my confidence in it.
Most in the profession were probably either a) doing climate science and not paying attention or b) frightened by the furore and decided to keep their heads down.
Climategate is an indictment–of about half a dozen people who chose one of the worst times possible to act like complete bozos. It is in no way an indictment of climate science or the overwhelming majority of climate scientists.
And the whitewash of the climategate investigations is an indictment – of what?
Tom Fuller,
Keeping your head down and being too fearful or busy is an offense and an indictment of the profession. Who spoke out publicly?
A rotten bunch for sure.
When somebody who is purported to be a responsible scientist, and the custodian and curator of a central repository of historic temperature data, writes "I would rather destroy the data than hand it over to skeptics", and then, amazingly, like the IRS, the very data in question is destroyed, I would say that the 'profession' has taken a severe black eye and has some serious reputation-restoration work to do.
How could you have written this article without once mentioning error analysis?
Data, real original data, has some margin of error associated with it. Every adjustment to that data adds to that margin of error. Without proper error analysis and reporting that margin of error with the adjusted data, it is all useless. What the hell do they teach hard science majors these days?
the error analysis for TOBS for example is fully documented in the underlying papers referenced here.
first rule. read the literature before commenting.
looking at the time of your response I have to wonder what you were taught.
you didnt read all the references
I read about TOBS. They had a set of station data to analyze from the 50s and 60s (no hourly data was stored on mag tape after 64 or 65).
One station moved 20km and one moved 5 km and other moves were “allowed” up 1500m … but they broke the rules for those two stations.
How many stations were in the same place from the beginning to the end of the data?
It could be zero.
Bruce,
Looks like you didn't read the papers. Read the original paper and then the 2006 paper.
And then do your own TOBS study.. oh yeah, don't make the same mistakes you made with the Environment Canada data
Zeke posted a link at WUWT to the papers.
ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/
The stations moved. The height of the thermometers changed.
What a crappy “reference” collection …
Karl 1986
"For these reasons seven years of hourly data (1958–64) were used at 107 first order stations in the United States to develop equations which can be used to predict the TOB (Fig. 4). Of these 107 stations, 79 were used to develop the equations, and 28 were reserved as an independent test sample. The choice of stations was based on their spatial distribution and their station histories.
Spatial station relocations were limited to less than 1500 m except for two stations—Asheville, North Carolina, and Tallahassee, Florida.
These stations had relatively large station moves, 20 km and 5 km respectively, but they were retained because of their strategic location with respect to topography and major water bodies.
At 72 of the 79 stations used to develop the TOB equations, temperature was recorded very close to 2 m above the surface.
At the remaining seven stations, the instruments were repositioned from heights in excess of 5 m to those near 2 m sometime between 1958 and 1964.
Changes in instrument heights from the 28 independent stations were more frequent: at nearly 50% of these stations the height of the instruments was reduced to 2 m above the ground from heights in excess of 5 m sometime in the same period"
“first rule. read the literature before commenting.”
The great thing about this site is that is not nanny moderated like some of the other climate sites…
two papers bruce.
read them both.
post your code
stop calling people dishonest unless you have proof.
Mosher, aren’t you going to thank me for reading the first TOBS paper and point out the serious problems with the data?
Whats the name of the 2nd paper?
Patrick B: Every adjustment to that data adds to that margin of error.
That is not true. The best adjustments reduce the error the most, whereas naive adjustments do not do a good job at all. (An example of a "naive" adjustment: concluding that a data point is "bad" and omitting it from the analysis, which is computationally equivalent to a second-rate method of adjustment.) This is explained in the vast body of mathematics and simulation analysis of diverse types of estimation, including the methods used by the BEST team. The papers of the BEST team explain their analyses in good detail, with supporting references. I put some references in comments on the posts by the estimable Rud Istvan.
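As a toy illustration of that point (and only that; this is not the NCDC pairwise algorithm), here is a Monte Carlo sketch in which a synthetic station has a known trend plus a -0.5 C instrument step, a homogeneous neighbor shares its regional weather, and the step is estimated from the difference series and removed. The breakpoint date is treated as known, which is a big simplification:

import random

random.seed(1)

N_YEARS = 60
TRUE_TREND = 0.02    # deg C per year (illustrative)
STEP_SIZE = -0.5     # instrument change at mid-record (illustrative)
BREAK_YEAR = 30

def ols_slope(y):
    n = len(y)
    xm, ym = (n - 1) / 2.0, sum(y) / n
    num = sum((i - xm) * (v - ym) for i, v in enumerate(y))
    return num / sum((i - xm) ** 2 for i in range(n))

def simulate():
    # Target station: trend + shared regional weather + local noise + a step.
    # Neighbor: same trend and regional weather, no step (the homogeneous reference).
    shared = [random.gauss(0, 0.3) for _ in range(N_YEARS)]
    target = [TRUE_TREND * i + shared[i] + random.gauss(0, 0.1)
              + (STEP_SIZE if i >= BREAK_YEAR else 0.0) for i in range(N_YEARS)]
    neighbor = [TRUE_TREND * i + shared[i] + random.gauss(0, 0.1) for i in range(N_YEARS)]

    # Estimate the step from the target-minus-neighbor difference series and remove it.
    diff = [t - nb for t, nb in zip(target, neighbor)]
    est_step = (sum(diff[BREAK_YEAR:]) / (N_YEARS - BREAK_YEAR)
                - sum(diff[:BREAK_YEAR]) / BREAK_YEAR)
    adjusted = [t - (est_step if i >= BREAK_YEAR else 0.0) for i, t in enumerate(target)]
    return ols_slope(target), ols_slope(adjusted)

trials = [simulate() for _ in range(2000)]
raw_rmse = (sum((r - TRUE_TREND) ** 2 for r, _ in trials) / len(trials)) ** 0.5
adj_rmse = (sum((a - TRUE_TREND) ** 2 for _, a in trials) / len(trials)) ** 0.5
print(f"RMS trend error, raw series     : {raw_rmse:.4f} deg C/yr")
print(f"RMS trend error, adjusted series: {adj_rmse:.4f} deg C/yr")

In this setup the adjusted series recovers the true trend with far smaller error than the raw series, which is the sense in which a good adjustment reduces error rather than adding it.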
Congratulations, you’ve written a long post, managing to avoid mentioning all the main issues of current interest.
“Having worked with many of the scientists in question”
In that case, you are in no position to evaluate their work objectively.
“start out from a position of assuming good faith”
I did that. Two and a half years ago I wrote to the NCDC people about the erroneous adjustments in Iceland (the Iceland Met Office confirmed there was no validity to the adjustments) and the apparently missing data that was in fact available. I was told they would look into it and to “stay tuned for further updates” but heard nothing. The erroneous adjustments (a consistent cooling in the 1960s is deleted) and bogus missing data are still there.
So I’m afraid good faith has been lost and it’s going to be very hard to regain it.
Hi Paul,
Iceland is certainly an interesting case. Berkeley doesn’t get nearly the same scale of 1940s adjustments in their record: http://berkeleyearth.lbl.gov/stations/155459
I wonder if it's an issue similar to what we saw in the Arctic? http://www.skepticalscience.com/how_global_warming_broke_the_thermometer_record.html
GHCN-M v4 (which hopefully will be out next year) and ISTI both contain many more stations than GHCN-M v3, which will help resolve regional artifacts due to homogenization in the presence of sparse station availability.
Zeke, I have no idea who you are, but posting links to sks, a website that still doggedly defends the Hockey Stick in public while trashing it when they thought nobody could see, is just over the top.
How are we supposed to know what they really think when their editorial positions, when exposed, showed that they value propaganda over true “skeptical science”?
Paul
I dont see how good faith is lost.
Like you I’ve reported any number of errors to NCDC
remember NCDC collects data supplied by sources.
In some cases the errors have been corrected. NCDC informs
the source and the change is made upstream.
in some cases NCDC informs the source and changes are not made
in one case the change was made upstream and then in the next
report the mistake was back in the record.
you assume bad faith on one data point.
bad science.
"I dont see how good faith is lost." – Steven Mosher
“Data storage availability in the 1980s meant that we were not able to keep the multiple sources for some sites, only the station series after adjustment for homogeneity issues. We, therefore, do not hold the original raw data but only the value-added (i.e. quality controlled and homogenized) data.” – CRU
They couldn’t have printed it?
” If they ever hear there is a Freedom of Information Act now in the UK, I think I’ll delete the file rather than send it to anyone.”” – Phil Jones.
Nope, nothing to see here with Phil "Rosemary Woods" Jones.
Honestly, pretending that Climategate never happened and so no good faith has been lost is lunacy.
Paul, even if they cocked up Iceland data completely, it’s kind of a postage stamp in terms of global temps, isn’t it? And of course you could legitimately reply that the entire globe is made up of postage stamps, but I would then ask if you have noticed similar problems elsewhere.
If it were a conspiracy to drive temp records in one direction, wouldn’t they choose to fiddle with statistics in a wider region on smaller scales?
What if it is not a conspiracy, but bungling? They design a bad adjustment algorithm, run it, and it gives them data that looks like what they expect to see. So they declare it good and publish it in a journal with scant review and no data to speak of. Then, when you look under the hood, you find that the actual adjustments don't fit reality and that the errors aren't uniform but are most prevalent where data is less dense; yet the data was never tested at the station level, just compared to the expected result, and since it confirmed the expected result the details were never looked at or understood. Then people will defend it, saying it is based on 30-year-old published results, failing to notice that it gives the 'correct' answer by getting it all wrong.
Paul Matthews: Congratulations, you’ve written a long post, managing to avoid mentioning all the main issues of current interest.
That is unfair.
Could you mention specifically one of the main issues of current interest that he managed to avoid mentioning? Clearly he couldn't address every issue of current interest in a post of finite length, but perhaps you have a specific issue he might take up next time, relevant to adjustments to the temperature data.
An issue of current interest:
http://wattsupwiththat.com/2014/06/29/noaas-temperature-control-knob-for-the-past-the-present-and-maybe-the-future-july-1936-now-hottest-month-again/
Anthony Watts:
“This isn’t just some issue with gridding, or anomalies, or method, it is about NOAA not being able to present historical climate information of the United States accurately. In one report they give one number, and in another they give a different one with no explanation to the public as to why.
This is not acceptable. It is not being honest with the public. It is not scientific. It violates the Data Quality Act.”
Why are you still using anomalies? There are only 50 US stations with relatively complete monthly data from 1961 to 1990 in USHCN. The "anomaly" baseline is corrupted.
Secondly, why not use Tmin and Tmax temperatures separately? Tmin is corrupted by UHI and therefore so is Tavg.
Thirdly … a 5-year smooth? Quit tampering, Zeke.
https://sunshinehours.wordpress.com/2014/07/03/ushcn-tmax-hottest-july-histogram-raw-vs-adjusted/
Smoothing is not tampering.
I suggest you go to JoNova and tell David Evans that smoothing TSI is tampering.
dare you.
Smoothing is misleading in this case since we are trying to determine relatively small changes in trends.
Smoothing removes data pertinent to this discussion.
bruce,
go to jonova. accuse them of being dishonest.
prove you have principles.
post your code.
If Zeke posts his R code for his infill graph, I’ll fix it and add trend lines and do one graph per month. And I’ll post his code.
I have a bunch of USHCN data already downloaded.
sunshine hours: Smoothing removes data pertinent to this discussion.
That is not true. Smoothing does not "remove" data. Do you perhaps have evidence that Zeke Hausfather has "removed" data? You are not disputing that they preserve their original raw data, and write out the adjustments and many other supporting statistics in separate files, are you?
Hi Bruce,
Anomalies only use infilled data in the fourth case examined (QC + TOBs + PHA + infilling). In all other cases missing months during the baseline period are simply ignored. They are rare enough that the effect will be negligible.
The reason I used a 5-year smooth on the first graph is that using monthly or annual data makes the difference between adjusted and raw data too difficult to see due to monthly and annual variability in temperatures. Smoothing serves to accentuate the difference if anything. The rest of the graphs show annual differences (though I could have been clearer in stating this in the text).
The mean # of Estimated values for tmax December 1961-1990 is 3.14.
A little over 10%. Not rare.
I haven’t checked for distribution by Elevation or Lat/Long.
when you do bruce, post your code.
we want to ISO9000 audit you.
given your mistake with Env canada..
Mosher, you really are a bitter man. Just ask Zeke to redo his infilling graph to bolster his claim infilling doesn’t change the trends.
You read way too many climategate emails. You just want to be as bloody-minded as them.
Estimated data is about 30m in elevation higher than non-estimated for the 1961-1990 period.
The reason I used a 5-year smooth
Hopefully the frequency response of your smoothing method doesn’t have large side lobes.
The question of how to smooth was discussed here at length some months ago in a post by Greg Goodman. For smoothing as a low-pass filter, a Gaussian filter can be taken as a good starting point. The many comments at Greg’s post by a number of contributors considered variants of the Gaussian filter with different criteria for how to minimize side lobes. No one spoke up in defense of moving-average smoothing.
More sophisticated methods get into band-pass filters, for which even-order derivatives of the basic Gaussian filter are good, starting with the so-called Mexican hat or Ricker filter.
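A quick way to see the side-lobe issue is to compare the frequency response of a 60-month running mean with a Gaussian kernel of roughly comparable cutoff; the kernel widths and frequencies below are illustrative choices, not anyone's published settings:

import math

def moving_average_kernel(width):
    return [1.0 / width] * width

def gaussian_kernel(sigma, half_width):
    w = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-half_width, half_width + 1)]
    total = sum(w)
    return [v / total for v in w]

def gain(kernel, freq):
    """Magnitude response of a symmetric FIR kernel at freq (cycles per sample)."""
    n = len(kernel)
    centre = (n - 1) / 2.0
    re = sum(k * math.cos(2 * math.pi * freq * (i - centre)) for i, k in enumerate(kernel))
    im = sum(k * math.sin(2 * math.pi * freq * (i - centre)) for i, k in enumerate(kernel))
    return math.hypot(re, im)

box = moving_average_kernel(60)                    # 5-year running mean of monthly data
gauss = gaussian_kernel(sigma=18, half_width=54)   # roughly comparable low-pass cutoff

print("freq (cyc/month)   60-mo box   Gaussian")
for freq in (0.005, 0.0167, 0.025, 0.042, 0.058):
    print(f"   {freq:7.4f}        {gain(box, freq):7.3f}    {gain(gauss, freq):7.3f}")

The running mean's response drops to zero at its first null and then bounces back up (the side lobes), while the Gaussian's decays smoothly, which is the behavior the comment above is pointing at.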
Zeke Hausfather | July 7, 2014 at 12:14 pm |
Hi Bruce,
Anomalies only use infilled data in the fourth case examined (QC + TOBs + PHA + infilling). In all other cases missing months during the baseline period are simply ignored. They are rare enough that the effect will be negligible.
The very thing Steve Goddard was slated for.
OK, but I still have two concerns:
1. Can purely formula adjustments be fully adequate? That is, wouldn't it be better to look at the actual characteristics of each weather station over time? (Granted, that's a big job.)
2. How much variation is added by the adjustment process? Is this variation reflected in various models? My impression is that this source of variation is ignored; that models take the adjusted values as if they were actual certain readings.
David, you said at WUWT, If you want to understand temperatures changes, you should analyze temperature changes, not temperatures. You are right, and that is what Motl did on the HADCRUT3 dataset.
http://motls.blogspot.ca/2011/07/hadcrut3-30-of-stations-recorded.html
"OK, but I still have two concerns:
1. Can purely formula adjustments be fully adequate? That is, wouldn't it be better to look at the actual characteristics of each weather station over time? (Granted, that's a big job.)"
Be more specific.
A) instrument changes. A side by side test was conducted on the
LIG versus MMTS. MMTS was demonstrated to introduce a bias.
That bias has a mean value and an uncertainty. This correction
is applied uniformly to every station that has the bias.
What would you suggest?
B) how do you handle stations that started in 1880 and ended in 1930?
time travel to investigate the station?
C) yes, formula adjustments are adequate.
2. How much variation is added by the adjustment process? Is this variation reflected in various models? My impression is that this source of variation is ignored; that models take the adjusted values as if they were actual certain readings.
A) what models?
B) what do you mean by "variation added"? the best estimate of the bias is calculated. It is added or subtracted from the record.
Roy Spencer does the same thing for UAH; ask him how it works
Absolutely, I vote for time travel. Then we can educate all those farmers about ISO 9000.
Steven Mosher | July 7, 2014 at 11:35 am |
This made me curious, and is probably more rhetorical as opposed to being actual questions. Did they not calibrate them in a metrology dept? And did they compare more than one? You could be adding a half degree adjustment for an issue with potentially only a subset of the actually deployed thermometers.
Which is yet another reason why, IMO, any adjustment after the fact is based on less information than was available when the record was recorded, at least generally. I understand why you want to correct the data, but as I tell my data customers, at some point, after enough changes, it's not your data anymore; it's made up. I'll even go as far as saying it's probably more accurate, but the error of that data is larger. It has to be.
Why does figure 5 use 1900-1910 as the reference period when the graph it is trying to emulate uses 1900 to 1999?
It's not the "reference period".
1900-1910 is used to show the difference over the whole series, so you can clearly see the change from the beginning
You didn’t answer my question.
yes I did bruce.
read harder.
and just to remind you, don't forget the QC flags like you did with the Environment Canada data.
ISO9000 for you!
It all sounds very logical except for the assumptions, e.g. assuming current measurements are more accurate. And what I can see from studying this for close to a decade now is that the 'revisions' always seem to make the past colder, to the point that they are now in conflict with non-NOAA and non-NASA temperature records. There is no way I would believe that the data is not being manipulated to some degree without an 'independent' and openly published study.
“There is no way I would believe that the data is not being manipulated to some degree without an ‘independent’ and openly published study.”
See BerkeleyEarth.
“It all sounds very logical except for the assumptions e.g. assuming current measurements are more accurate.”
There are 114 pristine stations called CRN that have been in operation for a decade. These stations are stamped with a gold seal by WUWT.
Guess what happens when you compare these 114 to the rest of the stations: NO DIFFERENCE.
“USCRN absolute national maximum temperatures would tend to be about 1.5°F less”
http://www.ncdc.noaa.gov/temp-and-precip/national-temperature-index/faq
A visual aid for adjustments in Kansas.
http://sunshinehours.wordpress.com/2014/06/29/ushcn-2-5-kansas-mapped-july-1936-and-2012/
where is your code bruce.
Iso9000 for you.. get crackin.
Are these comments really necessary? They seem more like a past issue that Mr. Mosher cannot get over.
FTA: Are these comments really necessary? They seem more like a past issue that Mr. Mosher cannot get over.
There is that problem that “seems” is in the mind of the beholder. It seems to me that sunshinehours1 and some other people are posing the same misunderstandings over and over (ignoring the substantial statistical literature on methods of estimation and their error rates), forcing Steven Mosher and some others to make the same statistical points over and over.
No FTA.
I hold all people to the same standard.
where were you when we badgered Hansen for code?
in your mother's basement?
Matthew, then wouldn't it simply be better to refer readers to that fact? The comments from Mosher simply don't help the dialogue along and instead turn it combative and nonproductive.
Mosher – you don't seem capable of being civil, from my perspective as a newcomer to this topic. I'll note you as an ideologue and focus my interest in learning on others such as Zeke (who presents an excellent article and continues to answer professionally).
It appears there should be a limited number of stations that did not change their TOBS. How does the trend of those stations, assuming they wouldn’t require a TOBS adjustment, compare to the trend of the stations in the same region where the adjustment has been made? Has this analysis been done? If there is no difference the TOBS corrections are probably accurate. If not why don’t they match up?
tobs has been validated by out of sample testing TWICE. see the references.
Stepwise differences due to USHCN adjustments.
http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_pg.gif
As one can clearly see in this breakdown, straight from the horse's mouth, without the TOBS and SHAP adjustments there is no warming trend in the US instrument record.
Zeke addressed that. read harder.
That's the old graph for USHCN version 1 that I was trying to update in my Figure 5. It's fairly old and refers to adjustments (SHAP, for example) that are no longer made.
The raw data didn’t change so it remains true that there is no temperature trend in the raw data.
I can’t seem to find where Zeke said “There is no temperature trend in the raw data”.
Cool! Thanks for writing this. I look forward to working through it.
“The large contribution of adjustments to century-scale U.S. temperature trends lends itself to an unfortunate narrative that “government bureaucrats are cooking the books”.
I think the genie is out of the bottle. At best, we can conclude that the very fact that adjustments are being made means there hasn't been, and is not, a good process in place for measuring and reporting temperature.
Andrew
Without making too much of it, I would have to agree. Unfortunate for the hotties that the stations switched time of day.
TOB has to be difficult. Living in Colorado tells me that. Without knowledge of the thunderstorms, the temperature adjustment has got to be incredibly difficult. I can't reach the FTP site yet. Interested in the further discussion of TOB.
Bad Andrew: I think the genie is out of the bottle. At best, we can conclude that due to the fact that there are adjustments being done, means there hasen’t been and is not a good process in place for measuring and reporting temperature.
That is just plain ignorance.
“infilling has no effect on CONUS-wide trends. ”
Not true.
http://sunshinehours.wordpress.com/2014/06/29/ushcn-2-5-estimated-data-is-warming-data-usa-1945-to-1980/
http://sunshinehours.wordpress.com/2014/06/29/ushcn-2-5-estimated-data-is-warming-data-usa-1980-2014/
Zeke, you should be ashamed.
sadly you don't post your code so I can't find your error.
unlike the time you botched the Environment Canada data, when your error was obvious.
Zeke can easily recreate my graphs if he wants to.
I’ll apologize if I am wrong.
But he has to graph the differences between Estimated, Non-Estimated by month for tmax.
It would take him a few minutes. And since we’ve been arguing about this since May sometime it seems strange.
June 5th at the Blackboard (not May).
And don’t forget trendlines (which he did forget on his infilling graph),
In view of all you’ve written Zeke, should the record ever be used to make press releases saying ‘warmest on record’ or unprecedented when no matter how honest the endeavour, the result has to be somewhat of a best guess? Especially when the differences between high scorers are so small.
If I ran the organisation doing these stats and anyone even so much as implied anything “good” or “bad” about the temperature, I’d kick them out so fast that their feet would not touch the ground.
That is what you need in an organisation doing these stats. Instead, it is utterly beyond doubt that those involved are catastrophists using every possibility to portray the stats in the worst possible light.
That is why I'd kick the whole lot out. The principal aim, indeed perhaps the sole aim, should be to get the most impartial judgement of the climate.
Instead we seem to have people who seem no better than greenpeace activists trying to tell us “it’s worst than we thought”.
Yes, it’s always worse than they thought – but not in the way they suggest. It’s worse, because nothing a bunch of catastrophists say about these measurements can ever be trusted.
Press releases claiming ‘warmest” or “coolest” are rather silly in my mind.
precisely for the reason you state.
now, back to the science.
Steven, in what I think must have been March 2007, while I was waiting for the February HADCRUT figure to come out, there was a deluge of climate propaganda, so that nightly the news was full of climate-related stories. Then eventually (I would guess more than a week late) the figure came out, and it showed the coldest February in 14 years. Of course there was no official press release, and in retrospect it was obvious the propaganda and the late release of the data were intended to saturate the media with stories so that they would not pick up on the story that global warming had come to an end (at least for that month).
Over the next few months/years that figure has “warmed”. For anyone working in a quality environment, that kind of creeping change is a total anathema. For those producing climate data it seems to be a given that they can constantly change the data in the past without so much as an explanation.
That February 2007 was the point I realised the figures are so bound up in propaganda that even with the best will in the world, the people involved could not be trusted. Climategate proved me right.
Now 7 years later, nothing really has changed. We still have people making excuses for poor quality work. And to see the difference between “trying your best” and “fit for purpose”, see the image on my article:
https://scottishsceptic.wordpress.com/wp-admin/post.php?post=3657&action=edit&message=6&postpost=v2
None of them are accused of not “trying their best” – it was just that they didn’t produce something that met the requirements of the customer.
Scottish Sceptic: For those producing climate data it seems to be a given that they can constantly change the data in the past without so much as an explanation.
Given the plethora of explanations, why the claim that there has not been an explanation?
More pr comments about pr.
Back to the science
Having worked with many of the scientists in question, I can say with certainty that there is no grand conspiracy to artificially warm the earth; rather, scientists are doing their best
Well it isn’t good enough.
You sound like someone talking about a charity where no one quite knows where the money has gone and some are claiming “they are doing their best”.
We don’t need the academics “best”, what we need is the standard of qualify, accountability and general professionalism you see in the world outside academia.
So:
1. Fully audited methodology and systems
2. Quality assurance to ISO9000
3. Some comeback, WHEN we find out they weren't doing the job to the required standard, that doesn't involve putting them in jail.
4. Accountability to the public – that is to say – they stop saying “we are doing our best” and start saying “what is it you need us to do”.
ISO 9000 on readings taken by farmers 100 years ago?
Think of it as repairing cars – the cars may be junk, but that does not mean you can’t do a good job.
ISO9000 cannot improve the original data, but it will create a system which ensures quality in handling that data and the key to the system is the internal auditing, fault identification and correction.
Instead, the present system is:
1. Pretend its perfect
2. Reluctantly let skeptics get data – “only because you want to find fault”.
3. Deny anything skeptics find
4. When forced to admit they have problems – deny it is a problem and claim “we are only trying our best”.
Basically: Never ever admit there is any problem – because admitting problems shows “poor quality”.
In contrast to ISO9000 … only by searching for problems and admitting them can you improve quality.
“We don’t need the academics “best”, what we need is the standard of qualify, accountability and general professionalism you see in the world outside academia.”
standard of “qualify”?
stones and glass houses.
The data is all open
The code is all there.
Yes in a perfect world everyone would be ISO9000. But as you know you are very often faced with handling data that was generated before the existence of ISO9000.
According to ISO9000 how are these situations handled.
Be specific, site the standard.
This is not the scientific standard as I understand it.
http://www.nytimes.com/2014/07/07/us/how-environmentalists-drew-blueprint-for-obama-emissions-rule.html?_r=1
What do you say?
@ Steve Mosher
“The data is all open
The code is all there.”
And as Zeke went to great lengths to point out, the actual data stinks. Without going into motivations, the simple fact is that the actual data is being heavily massaged and used to produce headline after headline that states some variation of ‘Year X or Month Y is the hottest year/month of the last thousand years (or some other long period), beating the old record by a small fraction of a degree, and proving that we need to take action now to control ACO2 to avoid catastrophic climate change.’. And no amount of correcting, kriging, infilling, adjusting, estimating, or any other manipulation of sow’s ear data is going to turn it into silk purse data capable of detecting actual century or multi-century anomalies in the ‘temperature of the Earth’, whatever that is, with reliable hundredth or even tenth of a degree precision. The actual instrumentation system and the data collected by it is not ‘fixable’, no matter how important it is to have precision data, how hard the experts are trying to massage it, or how noble their intentions are in doing so. Using the previous analogy of the auditor, if the company to be audited kept its books on napkins, when they felt like it, and lost half of the napkins, no auditor is going to be able to balance the books to the penny. Nor dollar.
We are told that anthropogenic climate change is the most important problem facing the human race at this time and for the foreseeable future. If so, why don’t the climate experts act like it?
Want to convince me that it is important? Develop a precision weather station with modern instrumentation and deploy a bunch of them world wide.
Forget the 19th century max/min, read them by hand thermometers and deploy precision modern instruments that collect data electronically, every minute if necessary, buffer it, and send it back to HQ at least daily for archiving. Make sure that they include local storage for at least a year or two backup, in case of comms failure. Storage is cheap, in the field and at HQ.
Deploy the stations in locations where urban heat is not a factor and in a distribution pattern that guarantees optimum geographic coverage. It is no longer necessary to have humans visit the stations for anything other than routine maintenance or, for really remote sites where electronic data forwarding is not feasible (Where would that be nowadays?), periodic data collection.
Set up a calibration program; follow it religiously. Ensure that the ACTUAL data collected is precise enough for its intended purpose and is handled in a manner that guarantees its integrity. If data is missing or corrupted, it is missing or corrupted. It cannot be ‘recreated’ through some process like the EDAC on a disk drive. It’s gone. If precise data can be generated through kriging, infilling, or whatever, why deploy the collection station in the first place?
Collect data for a long enough period to be meaningful. Once collected, don’t adjust, correct, infill, krig, or estimate the data. It is either data or it isn’t.
Oh, and give up the fiction that atmospheric CO2 is the only important factor in climate variability, the climate models that assume that it is, and the idea that we can ‘adjust the thermostat of the Earth’ by giving the government, any government, taxing and regulatory authority over every human activity with a ‘carbon signature’
You know that Mosher, your repeating “the data is open, the code is all there” doesn’t relieve you of responsibility. You act as if this absolves you and your colleagues. Those of us out here in the real (regulated) world find that attitude arrogant and counterproductive. My advice to,you is to develop an ISO9000 QMS system and have it audited. That would buy a lot of credibility. Until then, your snide remarks are undoing what credibility you may have had.
site the standard?
stones and glass houses indeed!
Bob Ludwick
+1000
Bob Ludwick: the simple fact is that the actual data is being heavily massaged and used to produce headline after headline that states some variation of 'Year X or Month Y is the hottest…' and blah, blah, blah.
The BEST team is doing the best possible with the records that exist. Silk purses and sow’s ears are not in the picture. That some people may be motivated to prove global warming and others may be motivated to prove there is no global warming, there is no justification for ignoring the temperature record outright or using purely naive methods.
Whether CO2 is important or not, getting the best inferences possible out of the data that exist is the best approach.
You are not advocating that the whole temperature record be ignored, are you? If not, what exactly is wrong with the BEST team using the best methods?
“You know that Mosher, your repeating “the data is open, the code is all there” doesn’t relieve you of responsibility. You act as if this absolves you and your colleagues. Those of us out here in the real (regulated) world find that attitude arrogant and counterproductive. My advice to,you is to develop an ISO9000 QMS system and have it audited. That would buy a lot of credibility. Until then, your snide remarks are undoing what credibility you may have had.”
1. Who said we were relieved of responsibility.
2. you find it arrogant. boo frickin hoo. your job is to find the mistake.
you dont like my attitude, see your therapist. get some meds.
3. What makes you think that ISO9000 is even the right standard?
4. No amount of process will change your mind. You are not the least
bit interested in understanding. Look you could be a skeptical hero.
go do your own temperature series.
5. credibility. Whether or not you believe me is immaterial. You dont matter. get that yet? when you do work and find the problems, then you matter. or rather your work matters. Appealing to credibility is the flip side of an appeal to authority.
Is the current product worth the price paid?
“2. you find it arrogant. boo frickin hoo. your job is to find the mistake.
you dont like my attitude, see your therapist. get some meds.”
Okay Mr. go read a book. Go read this book:
http://www.amazon.com/How-Sell-Yourself-Winning-Techniques/dp/1564145859/ref=sr_1_3?ie=UTF8&qid=1404838524&sr=8-3&keywords=selling+yourself
Some relevant quotes:
“Communication is the transfer of information from one mind to another mind…. Whatever the medium, if the message doesn’t reach the other person, there’s no communication or there’s miscommunication….
We think of selling as being product oriented….Even when there’s a slight price difference, we rarely buy any big-ticket item from someone we really dislike.
Ideas aren’t much different. The only time we pay close attention to an idea being communicated by someone we don’t like is when we have a heavy personal investment in the subject….
Don’t waste your time with people on your side. They’re already yours…Forget about trying to convince the people on the other side. You’re not likely to make a convert with a good presentation. They’re already convinced that you’re wrong, or a crackpot, or worse. The only people who matter are the folks who haven’t made up their minds. The undecided. And how do you win them? By presenting yourself as a competent and likable person.”
You can thank me later.
@ Matthew R. Marler
"……….headline that states some variation of 'Year X or Month Y is the hottest…' and blah, blah, blah.
The BEST team is doing the best possible with the records that exist. Silk purses and sow’s ears are not in the picture. That some people may be motivated to prove global warming and others may be motivated to prove there is no global warming, there is no justification for ignoring the temperature record outright or using purely naive methods.
Whether CO2 is important or not, getting the best inferences possible out of the data that exist is the best approach.
You are not advocating that the whole temperature record be ignored, are you? If not, what exactly is wrong with the BEST team using the best methods?”
WHY is the BEST team doing the ‘best possible with the records that exist’? Why is it important that multi-century old data, collected by hand using data handling procedures that in general would earn a sophomore physics student a D-, at best, using instruments wholly unsuited to the task, be massaged, corrected, infilled, kriged, zombied, and otherwise tortured beyond recognition in order to tease out ‘anomalies’ of small fractions of a degree/decade, if NOT for the ‘……..headline that states some variation of ‘Year X or and blah, blah, blah……’? What OTHER purpose justifies the billions of dollars and thousands of man-years of effort? Were it not for the headlines, and the accompanying demands for immediate political action to control ACO2 to stave off the looming catastrophe that it will cause if we don’t control it, all citing the output of the ‘best efforts’ of the BEST team and others as evidence, would anyone notice that we are, as we speak, being subjected to the ongoing ravages of ACO2 driven climate catastrophe?
Are you actually claiming that the ‘best efforts’ of the data massagers are able to not only tease out temperature anomalies with hundredth degree resolution for the ‘annual temperature of the Earth’ going back a thousand years or more, all but the most recent couple of hundred years based solely on a variety of ‘proxies’, but, having teased them out, are able to successfully attribute them to some specific ‘driver’, like ACO2?
Nickels. You are not the customer.
I am not interested in selling to you or anyone else.
Folks who want the data get it for free.
Psst
You did a bad job of selling the book.
Perhaps you should reread it
Sorry Mosher, I didn’t realize your efforts were all mental masturbation. By all means, carry on both with your efforts to create information from data that isn’t up to the task and at trying to convince whomever it is you are trying to convince of whatever it is you are trying to convince them. Because honestly, most of the scientific world doesn’t believe you or your data.
Good luck with that.
k scott denison wrote:
“My advice to,you is to develop an ISO9000 QMS system and have it audited. That would buy a lot of credibility. “
I think anyone who has worked in the regulated world has an appreciation for that comment, but can also see the fleeting sardonic smile on your face when you wrote the above.
Thanks for the sensible post Zeke…you may not get the kindest reaction here for suggesting there’s no massive conspiracy.
Zeke is doing a good enough job of proving there is a small conspiracy to mislead.
Lewandowsky loves skeptics like you.
Mosher, Zeke has had since June 5th to prove me wrong by emulating my graphs.
http://rankexploits.com/musings/2014/how-not-to-calculate-temperature/
If I’m wrong I will apologize.
Sorry Bruce, averaging absolute temperatures when the network isn’t consistent gives you screwy results. The graphs in this post are nearly identical to those in Menne et al 2009, and use a method (anomalies + spatial weighting) used by pretty much every published paper examining surface temperature data.
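A deliberately noise-free toy example of the point: the two stations, their climatologies, and the dropout year below are all invented, but they show why averaging absolute temperatures over a changing network misleads while station anomalies do not.

# Toy network: a cool mountain site and a warm valley site share the same true
# trend, but the mountain station stops reporting halfway through the record.
N_YEARS = 60
TREND = 0.02    # deg C per year, the "true" signal
mountain = [5.0 + TREND * i for i in range(N_YEARS)]    # reports only for years 0-29
valley = [15.0 + TREND * i for i in range(N_YEARS)]

def trend_of(series):
    n = len(series)
    xm, ym = (n - 1) / 2.0, sum(series) / n
    return (sum((i - xm) * (v - ym) for i, v in enumerate(series))
            / sum((i - xm) ** 2 for i in range(n)))

# Method 1: average the absolute temperatures of whichever stations report.
absolute = [(mountain[i] + valley[i]) / 2.0 if i < 30 else valley[i]
            for i in range(N_YEARS)]

# Method 2: convert each station to anomalies from its own first-30-year mean first.
m_base = sum(mountain[:30]) / 30
v_base = sum(valley[:30]) / 30
anomaly = [((mountain[i] - m_base) + (valley[i] - v_base)) / 2.0 if i < 30
           else valley[i] - v_base for i in range(N_YEARS)]

print(f"true trend            : {TREND:.3f} deg C/yr")
print(f"absolute-average trend: {trend_of(absolute):.3f} deg C/yr")  # inflated by the dropout
print(f"anomaly-average trend : {trend_of(anomaly):.3f} deg C/yr")   # matches the true trend

The absolute average jumps when the cool station drops out, inflating its trend; the anomaly average is unaffected, which is why published analyses use anomalies plus spatial weighting.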
Zeke, you wrote in this blog post: ” infilling has no effect on CONUS-wide trends.”
Yet you won’t post a graph with trendlines or post the trend difference.
And your graph has a -0.2 to 0.5 scale and the data barely gets away from 0.
We could be arguing about the trends if your post had numbers.
The graph has a scale consistent with all the other graphs. The impact of infilling is pretty much trend-neutral (rather by definition since it mimics spatial interpolation). The big adjustments are TOBs and the PHA.
sunshinehours1: Zeke is doing a good enough job of proving there is a small conspiracy to mislead.
This is total ignorance. You plainly do not understand how the statistical analysis procedure works. And your evidence for a conspiracy to mislead is that your demonstrably inferior inferences differ in some cases?
Chris,
From the outside looking in, the direction the adjustments almost always go seems pretty “convenient.” But the implication that most of us believe AGW is a “massive conspiracy” is also convenient. Seems the true conspiracy whack jobs are on your side of the fence.
Would you care for a little cream and sugar with your straw man?
see the sunshine.
Uh, when Obama says he is making $1 billion available to fight “climate change”, just who in academia do you think will get this through grants? Anything even REMOTELY skeptical will not even be allowed the light of day. Yes, that computes to MASSIVE…..
Should Marcott get more grant money?
DAYHAY
changes the subject. not interested in understanding science
Mosh, you don’t want to be accused of being only interested in those who change the subject ;-)
phatboy
I think I could build a bot to parse comments and classify them
So I assume my graphs are not wrong, you just disagree with me on their significance.
What do you mean by "screwy results", since you left the trend lines out of your infilling graphs?
http://sunshinehours.wordpress.com/2014/07/07/misleading-information-about-ushcn-at-judith-currys-blog/.
Chris Colose,
In a development which is nearly as shocking as Nixon going to China, for once I agree with you; Zeke has done a good job of explaining a fairly messy process. I also agree he won't convince some people of anything, but at least he has laid out a clear explanation. Let's hope it influences the less strident.
You left-wing scientivists love conspiracy theories far more; eg every skeptic apparently receives money from Exxon or the Koch Brothers (who?) or about the USA going to war with Iraq because of oil. I bet most of you believe some other big whoppers too. Where did the expression Big Pharma come from anyway? So physician heal thyself!
Of course conspiracies do actually happen, but I don't believe you are a conspiracist. I believe you and your fellows genuinely believe the planet is warming dangerously due to manmade emissions. The main problem is that nature fundamentally disagrees with you. This is actually a very common occurrence in the history of science and is perfectly normal, even necessary for science to progress. It is also perfectly normal to find it difficult to admit you have been teaching (or been taught) the wrong thing for years. So conspiracy no, cognitive dissonance hell yeah!
We have now conducted the experiment of adding a large slug of manmade CO2, and planet earth just shrugged it off. This experiment tells us that CO2 is clearly no more than a minor feedback to the climate system. Never mind the skeptics; that is what the actual data is screaming at you. You and your cronies just refuse to believe it, for reasons that likely have nothing to do with climate, should you bother to think about it objectively.
I agree that Zeke’s post is sensible and helpful. It underscores the absurd nature of the task of trying to make sense of massive amounts of data collected in a haphazard way over the course of many many years by a lot of different groups. To further assert that the results of analyzing the data are adequate to determine that CAGW is real and the most important problem facing mankind is troubling.
Chris Colose: Thanks for the sensible post Zeke…you may not get the kindest reaction here for suggesting there’s no massive conspiracy.
Thank you for that.
Zeke,
I’m a bit confused by figure 3, the distribution of Tobs over the USHCN. There are now only ~900 actual stations reporting rather than ~1200. However, the total station count in figure 3 appears to remain constant near 1200. How can a Tobs be assigned to a non-reporting station?
Zombie stations getting TOBs adjustments?
Zeke, which version of USHCN was used? Because USHCN recalculates a lot of its temperatures daily I always try to put version numbers on the graphs.
http://sunshinehours.wordpress.com/2014/06/28/ushcn-2-5-omg-the-old-data-changes-every-day-updated/
The changes tend to warm the present as usual.
Bruce,
USHCN v2.5 downloaded July 2nd.
“Nearly every single station in the network in the network has been moved at least once over the last century, with many having 3 or more distinct moves”
What is the major cause of station moves?
Is the general trend to move from a more urban environment to a more rural environment?
Can we surmise that just after the move of a station the data is likely to be less wrong than at any other time in the station history?
In the 1940s there was a big transition from urban rooftops to more rural locations. When MMTS instruments were installed most stations had to move closer to a building to allow for an electric wired connection. Other station moves happen frequently for various other reasons.
Surely in this situation the adjustments to the raw data for an individual station should only apply at the point in time the change in location/instrument/TOBs took place?
Zeke, that you mentioned you had worked with many of the people involved would prevent you from any analysis in the private sector. By definition, you are biased not only because of this, but also because you and Mosher have declared yourself to be warmists/lukewarmers on multiple occasions. Did you honestly believe you’d be viewed as objective?
“By definition, you are biased not only because of this, but also because you and Mosher have declared yourself to be warmists/lukewarmers on multiple occasions.”
The problem with this is that you haven't read any of my comments on the issue of adjustments between 2007 and 2010.
in short I was highly skeptical of everything in the record.
until I looked at the data.
Then again perhaps we should use your rule.
Anthony is a non warmist. he is not objective
Willis is a non warmist he is not objective.
All humans have an interest. We cannot remove this.
We can control for it.
How?
Publish your data. Publish your method. let others DEMONSTRATE
how your interest changed the answer.
Oh, two years ago WUWT published a draft study. no data. no code.
and you probably believe it.
Scafetta argues it’s the sun. no data. no code. you probably believe it.
bit chilly,
They are only applied when and where the breakpoint is detected. However, because these breakpoints tend to add a constant offset going forward (e.g. 0.5 C max cooling when switching to MMTS), you need to either move everything before the breakpoint down 0.5 C or everything after the breakpoint up 0.5 C. NCDC chooses the former as they assume current instruments are more accurate than those in the past, though both approaches have identical effects on resulting anomaly fields.
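To see why the two choices are equivalent in anomaly space, here is a minimal Python sketch (synthetic monthly data and a made-up 0.5 C step; this is not NCDC code):

# Minimal sketch (not NCDC's code): apply a detected breakpoint offset to a
# station series either by lowering the earlier segment or raising the later
# one, and confirm the resulting anomaly series are identical.
import numpy as np

def adjust_before(series, break_idx, offset):
    """Shift everything before the breakpoint down by the offset."""
    out = series.copy()
    out[:break_idx] -= offset
    return out

def adjust_after(series, break_idx, offset):
    """Shift everything at/after the breakpoint up by the offset."""
    out = series.copy()
    out[break_idx:] += offset
    return out

rng = np.random.default_rng(0)
temps = 15 + 0.01 * np.arange(120) + rng.normal(0, 0.5, 120)  # synthetic monthly Tmax
temps[60:] -= 0.5          # simulate a 0.5 C MMTS-style step change
a = adjust_before(temps, 60, 0.5)
b = adjust_after(temps, 60, 0.5)

# Anomalies (relative to each series' own mean) are identical either way.
print(np.allclose(a - a.mean(), b - b.mean()))  # True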
“you and Mosher have declared yourself to be warmists/lukewarmers on multiple occasions”
I’ve pointed this out many times. The chances of them presenting information that contradicts their own declarations is zero.
Andrew
And by the same logic any chance of you accepting information which contradicts you own declarations is also zero. So basically none of us can ever really learn anything, or educate others, so we may as well give up on any hope of improving human knowledge.
“I’ve pointed this out many times. The chances of them presenting information that contradicts their own declarations is zero.”
Actually not.
see my declarations about adjustments and UHI and microsite before I actually worked through the data. I used to be skeptical. I declared that.
I was dead wrong.
The chances of you looking at my past declarations is zero.
“see my declarations about adjustments and UHI and microsite before I actually worked through the data”
Why don’t you post one in a comment and link a reference to it? Should be easy.
Andrew
easy
start there
http://climateaudit.org/2007/06/14/parker-2006-an-urban-myth/
there are tons of other.
read much.
Mosher,
Why do I have to dig for it? Why don’t you just quote what you had in mind?
Andrew
Andrew, why not grow a pair and do your own leg work.
I looked through Mosher’s link to CA and there are no “declarations” from him concerning adjustments and/or UHI.
Thanks for nothin Mosher, as usual.
Andrew
Adjust this:
http://evilincandescentbulb.files.wordpress.com/2013/09/uhi-effect.jpg
Such is, the Socio-Economics of Global Warming!
Easy.
Zeke shows you how in his paper. The sum total of UHI in the US is around
.2C. Correctable.
However, linking to a chart from the EPA that has no documentation of its source data, effectively one data point, is just the sort of science one expects from Wagonthon.
one data point. from an EPA chart. that doesn’t show its source..
man, if you were Mann trying to pull that sort of stunt, Steyn would write a column about it
Kristen Barnes (Ponder the Maunder) at 15 years old could figure this out. Making decisions based on a climate model that is a simple construct of, “a climate system,” according to Pete Spotts of, The Christian Science Monitor, “that is too sensitive to rising CO2 concentrations,” would be like running a complex free enterprise economy based on the outcome of a board game like Monopoly. There is a “systematic warm bias” that, according to Roger Pielke, Sr., “remains in the analysis of long term surface temperature trends.” Meanwhile, the oceans that store heat continue to cool.
“Kristen Barnes?”
you realize that her work was really done by someone else..
hmm maybe I should dig those emails up..
US Temperatures – 5year smooth chart.
As a layman I cannot comprehend how “adjustments” to around 1935 RAW can generate a 0.5C cooling to the RAW recordings. Sorry, but I just do not believe it and see it as an attempt to do away with 1935 high temperatures and make current period warmer all in the “cause”. As stated above, it is suspicious that all adjustments end up cooling the past to make the present look warmer.
Unfortunately, if it does happen, it will damage science’s credibility for centuries…
You mean, among the less than 50% of the population that actually puts a value on truth?
This is entertaining, a tweet from Gavin:
Gavin Schmidt @ClimateOfGavin 1m
A ray of sanity in an otherwise nonsensical discussion of temperature trends and you won’t believe where! http://wp.me/p12Elz-4cz #upworthy
Oh geez. You’ve poisoned the well by saying Gavin liked the post.
Judith, this is hardly a trivial matter. You are yet again trying to defend a culture which does not allow outside scrutiny to ensure it is producing quality work by saying “they are trying their best”.
In my experience in industry almost everyone “tries their best”, but that in no way guarantees quality. Instead it is those in a culture that accepts rigorous inside and outside scrutiny, has a system to identify and correct problems, and then drives through improvement, who achieve the highest quality.
And in my experience, those that “sweep problems under the carpet” and have a general culture of excusing poor quality because they are “trying their best” are usually the ones with the greatest gap between the quality they think they are producing and the actual quality of what comes out.
“defend a culture which does not allow outside scrutiny to ensure it is producing quality work by saying “they are trying their best”.
outside scrutiny?
Zeke doesnt work for NOAA
They provided him ( and you) access to their data
They provided him ( and you) access to their code.
you dont work for NOAA.
Zeke applied outside scrutiny
You can apply outside scrutiny and you are not even a customer.
Zeke has the skill
You have the skill ( If I believe what you write)
Take the data
Take the code.
Do an Audit
Be a hero.
The comments prove Gavin right, again.
Chris Colose: The comments prove Gavin right, again.
Very droll. They are an instance of his not being wrong.
I really hope sunshinehours1’s questions do not get lost in the comment thread. The answers to them should lead the discussion.
Here is an idea.
Start a thread. Collect ALL the questions you think need answering
That is what we did at Climate Audit when we questioned Parker’s paper
http://climateaudit.org/2007/06/14/parker-2006-an-urban-myth/
he responded
http://climateaudit.org/2007/07/10/responses-from-parker/
Time to inject some humor. A nice quote from your second link above. Ironic, or just plain funny?
” and I don’t like this business of “in filling” data ” ;)
Jeepers. The denizens are not showing their best side in the comments. “Consider that you may be mistaken.”
In the UK there is a sale of goods act that gives us the right to ask for our money back for goods or services that are “not fit for purpose”.
We are just trying to exercise that right – except there is an academic cartel of like minded catastrophists who are stopping a reliable and impartial organisation coming in to do the job in a way that can be trusted.
Let me put it this way. A cowboy builder comes in and puts up your house without proper foundations. They may well have done “the best they are able”, but that doesn’t mean it was good enough.
We want people in charge of these temperature measurements who stop trying to excuse bad quality work – instead, an organisation that takes quality seriously.
And to start – they have to understand what quality means – so Judith, go read up about ISO9000.
Then tell me how many of those organisations doing these temperature figures even know what ISO9000 is let alone have it.
Scottish Sceptic: We are just trying to exercise that right – except there is an academic cartel of like minded catastrophists who are stopping a reliable and impartial organisation coming in to do the job in a way that can be trusted.
You continue to miss several important points. (1) the statistical methods used by BEST are in fact the best available; (2) they have described their methods in published papers and have made their data and code available to anyone who wishes to audit them; (3) no one is stopping anyone from coming in to do the job in a way that can be trusted.
They are all experts.
And they forget their feynman about the ignorance of experts.
Note how NONE of them address the science.
Note how many commented before reading the papers zeke linked to.
Note that none took time to look at the data or the code.
Why?
because they are not interested in understanding.
period.
Actually I designed temperature control and monitoring systems, ran a factory with several thousand precision temperature sensors, and then went into meteorological weather stations for the wind industry.
From that experience I learnt that it was impossible to reliably measure the temperature of a glass slide about 1 cm across to within 0.01 C, let alone an enclosure a few tens of cm across.
Then I came across a bunch of academics who told me the end of the world was nigh because they were absolutely certain global temperature had risen since the days of hand-held thermometers to the modern era of remote instrumentation.
… and I laughed … until I realised they were serious … and worse … people actually took them seriously. And then I was down right despairing when I saw that rather than the carefully planned sites I had imagined, there were sensors in parking lots.
And then when those responsible said that none of that mattered and then started calling us “deniers” – in any other walk of life, ministers would resign and those responsible would go to prison.
really sceptic?
I dont believe you.
show your data and code.
appeals to personal experience and authority by someone who calls themselves a sceptic..
tsk tsk.
also, your iso9000 certs.
thanks Ill wait
One of the issues you’ve ignored is how the picture has been changed in the last few years. Back in 2000 the US temperature plots showed clearly that the 1930s were warmer than the 1990s, with 1936 0.5C warmer than 1998. Since then this cooling has been removed by the USHCN adjustments. This is Goddard’s famous blinking gif that appears regularly at his site. On the other hand it still seems to be acknowledged that most of the state record highs occurred in the 1930s (there are lists at various websites).
Paul,
Until this year the climate division dataset used raw rather than TOBs corrected and homogenized data, which led to some folks creating record lists based on raw data and others based on homogenized data. As of March 2014 all of the products should be using the same underlying data, which should help reduce confusion.
Isn’t this the current data?
http://www.ncdc.noaa.gov/extremes/scec/records
HaroldW,
Figure 3 ends in 2005, when there were still about 1100 stations in the network reporting.
Zeke,
I agree with your point that figure 3 goes only to 2005, but that doesn’t explain the situation. From figure 2, the station count in 2005 was between 1000 and 1100, say 1075.
Reading the most recent (2005) values from figure 3:
AM: 750
PM: 350
Midnight: 120
Other: 10
The total is over 1200. There’s a minimal error involved in reading these values under magnification, and it’s not large enough to reconcile this total with an active station count below 1100. Non-reporting stations were associated in Menne with a time of observation, which is puzzling.
HaroldW,
It’s hard for me to effectively eyeball the numbers, but it is an interesting question. I’ll look into it.
Pingback: Did NASA and NOAA dramatically alter US climate history to exaggerate global warming? | Fabius Maximus
I am unconvinced of the need to “adjust” the data. There are thousands and thousands of data points and associated error margins. The results are by their very nature statistical.
“Adjustments” invariably invite abuse, whether intended or not.
Mike, I think Zeke’s explanation for why the adjustments are absolutely essential for calculating temperature changes over space and time was clear and compelling. I find it difficult to think of a cogent argument against it.
Pingback: Have the climate skeptics jumped the shark, taking the path to irrelevance? | Fabius Maximus
Pingback: Comment threads about global warming show the American mind at work, like a reality-TV horror show | Fabius Maximus
BS baffles brains….you can bet every apostrophe was double checked on this message to say as little as possible.
“But I want to say one thing to the American people. I want you to listen to me. I’m going to say this again: we did not screw around with the temperature data”
The Fig. 8 caption appears to be incorrect.
Shouldn’t it say Pairwise Homogenization Algorithm adjustments?
Good catch. Asking Judy to fix it.
should the years also be 1900-2010 period ?
The adjustments are shown relative to the start of the record to better show their cumulative effects over time. This is following the convention from the USHCN v1 adjustment graph on the NCDC website to use a baseline period of 1900-1910. In reality, what matters is the impact of the adjustments on the trend, so the choice of baseline periods is somewhat irrelevant and only really impacts the readability of the graph.
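A quick way to check the claim that the baseline only affects readability, not the trend, is a short sketch like the following (the adjustment series here is synthetic, not the actual USHCN adjustments):

# Minimal sketch: re-baselining an adjustment series shifts it vertically but
# leaves the linear trend unchanged. The series is synthetic, not USHCN data.
import numpy as np

years = np.arange(1900, 2011)
adjustment = 0.005 * (years - 1900) + np.random.default_rng(1).normal(0, 0.02, years.size)

def rebaseline(series, years, start, end):
    """Express a series as departures from its mean over [start, end]."""
    mask = (years >= start) & (years <= end)
    return series - series[mask].mean()

early = rebaseline(adjustment, years, 1900, 1910)   # convention used in the post
modern = rebaseline(adjustment, years, 1990, 2010)  # alternative baseline

trend_early = np.polyfit(years, early, 1)[0]
trend_modern = np.polyfit(years, modern, 1)[0]
print(np.isclose(trend_early, trend_modern))  # True: only the vertical offset differs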
If someone could explain why, after the initial adjustments are made to raw data (assuming they are valid/correct, which may or may not be the case), additional adjustments are made on a nearly annual basis, I might accept that there is “good faith” in making these adjustments.
Read the section on PHA again.
Then go get the code.
Zeke’s graph fig 5 shows that the total effect of the adjustments is a warming of about 0.5C from the 1930s to now.
There is a graph at Nick Stokes’s Moyhu blog, also at Paul Homewood’s, showing 0.9F, ie the same.
And I think this is what Goddard says also, so maybe that’s something everyone agrees on?
Indeed, everyone agrees that adjustments increase the CONUS trend. Goddard just errs in ascribing them to infilling rather than to TOBs corrections and homogenization.
Infilled data has about a 0.1C/decade higher trend than non-infilled Final USHCN tmax data from 1895 to 2013.
http://sunshinehours.wordpress.com/2014/07/07/misleading-information-about-ushcn-at-judith-currys-blog/
However, most of what infilling does is try and continue a trend.
So from 1998 to 2013 the raw data trend was -1C/decade.
The infilled data was about -0.5C/decade.
Infilling was trying to keep the 1980-1998 trend going.
hey bruce,
how about some sunshine for your code.
cough it up.
All Zeke has to do Mosher is graph Estimated vs non-Estimated with trend lines for tmax.
It would be better by month.
His graph in this article left out the trendlines. Why?
bruce all you have to do is link to your code.
Mosher, does Zeke actually have code? Remember, he’s the one who wrote this blog post. Shouldn’t he archive or include it?
Using current methodology, if the time series was extended by 500 years at either end, with random data drawn from the existing data as input, would these adjustments even out to create a realistic manufactured temperature record, or would past temperatures continue to decline and future temps continue to rise at a similar rate?
If so, whilst your current methodology may be the best mathematically possible, it would indicate a problem.
Andy Skuce of SkS tweets:
Andy Skuce @andyskuce 13m
Great piece by @hausfath at @curryja blog, but don’t read the crazy comments. http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/ …
And be careful about trusting what Zeke claims about infilling.
http://sunshinehours.wordpress.com/2014/07/07/misleading-information-about-ushcn-at-judith-currys-blog/
We should trust you when:
A) you botched Environment Canada – forgetting to apply QC flags.
B) you don’t publish code.
C) you haven’t even submitted anything anywhere.
Submit a paper, if it gets rejected you’ll be a hero.
Why did Zeke leave out the trend lines in his infilling graph Mosher?
It should take a couple of lines of code to add a trend,
And a couple more to do it by month.
bruce, go ahead and release your code.
I want to check that you did the trend right.
I’m waiting for Zeke’s trendlines and code. He did archive his code … didn’t he?
“I’m waiting for Zeke’s trendlines and code. He did archive his code … didn’t he?”
you got zekes code.
cough up your fur ball..
or should it come out the other end?
An excellent and informative post. This is a “must read” by anyone who would hope to understand the complexities of this subject. Thanks for taking the time to write this, and thanks to Judith for providing the opportunity!
Do a count of denizens who actually engage the science.
you know, a count of those who want to understand.
Do a count of denizens who:
a) invoke conspiracy
b) question Zeke’s motives
c) derail the conversation
d) say they don’t believe but provide no argument
e) refuse to do any work with the data or code, and yet call themselves engineers, e.g. springer.
Mosher, you spend a lot of time attacking me instead of the graphs I post.
Maybe you should politely ask Zeke to add trendlines to this infilling graph. And change the scale a little. And do it by month.
Cut it out Mosher. Defensiveness is unbecoming. We didn’t say we didn’t believe Zeke. We said he is not in a position to be objective. Tell us you agree with that!
I’ve read the emails of a lot of denizens. I’ve read takedowns of the remarkably poor quality of their work. They are totally untrustworthy people. Anyone who relies upon or has endorsed their work, knowing that they are untrustworthy, is also untrustworthy.
sunshine.
your graphs come from your code.
in the past you made boneheaded mistakes.
I’ll comment on the graphs when I study the sources and methods.
See. I treat every problem the same.
Zeke makes a claim. I go to the sources. FIRST
You make a claim. I want to go to the sources. FIRST
So, cough up your code. I will audit you and let you know.
“Cut it out Mosher. Defensiveness is unbecoming. We didn’t say we didn’t believe Zeke. We said he is not in a position to be objective. Tell us you agree with that!”
huh. I already said that.
Every human including you has an interest.
none of us are objective, none of us are free from interest.
We CONTROL for this by releasing our data and code.
that way you can look all you like to see if you can DEMONSTRATE
any piece or part where our interest changed the result.
Doing science means you accept that individuals are not objective.
Now, can I be objective about my judgements about Zeke’s objectivity?
Can you be objective about your observations?
there’s a paradox for you. go think about that.
No code yet Steve? No trend for infilling?
R. Gates: An excellent and informative post.
I agree.
Please don’t be afraid of space again.
The root cause of the bias between MMTS and LIG measurements was not determined past some generalities: closer to buildings, wood temperature changed via coating type. I didn’t see any testing that swapped or paired the thermometers in the housings. Nor were housing maintenance and temperatures paired. It’s not unusual for some instrumental methods to have biases with some changes, for example, gas chromatography. However, there are methods to correct those biases. I didn’t see any of that here.
I haven’t seen the description of QC procedures for the instruments. Were they calibrated to some traceable reference standard once or periodically? If the latter, then what adjustments and annotations were made to the data based on calibration and drift corrections? If this hasn’t been done, then you don’t know the accuracy of the measurements. I’ve been required by government or customers to recertify NIST traceable thermometers, including the master reference thermometer, at 2-5 year periods and check the ones I used for actual measurements periodically. Anything like that going on with these measurements?
Continuously adjusting past data products to match some current activities? I think it is a poor practice and in some cases, such as environmental data, it could be quite problematic. The same goes for infilling missing data. You either have the data for that station or you do not. It may be a reasonable assumption that the temperature at stations 10-30km apart will be similar, but you don’t know that and the estimate has to add significantly to uncertainty.
“It may be a reasonable assumption that the temperature at stations 10-30km apart will be similar”
Actually, the Pielkes studied a region that included multiple stations and found that sites even a few km apart show very different climate records. And none of the stations replicated the regional averages.
Hi Ron C.
Do you recall which paper that was exactly?
http://cires.colorado.edu/science/groups/pielke/pubs/
Thanks,
John
John Kennedy
This is the one I read:
https://pielkeclimatesci.files.wordpress.com/2009/10/r-234.pdf
On your link, they go into the details on R-107
Thanks Ron C.
Ron C,
I’d be very surprised if stations a few km apart showed significantly different longer-term trends unrelated to localized biases (TOBs, instrument changes, etc.). Generally speaking, anomalies are pretty well spatially correlated in the U.S.: http://rankexploits.com/musings/2013/correlations-of-anomalies-over-distance/
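For readers who want to check this sort of thing themselves, a minimal sketch of the calculation behind such a plot might look like the following; the station coordinates and anomaly series below are made-up placeholders, not USHCN data:

# Minimal sketch: compute pairwise correlations of station anomaly series
# against their separation. All stations and series here are invented.
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 6371.0 * 2 * np.arcsin(np.sqrt(a))

def correlation_vs_distance(stations):
    """stations: list of (lat, lon, anomaly_array). Returns (distance_km, r) pairs."""
    out = []
    for i in range(len(stations)):
        for j in range(i + 1, len(stations)):
            lat1, lon1, s1 = stations[i]
            lat2, lon2, s2 = stations[j]
            out.append((haversine_km(lat1, lon1, lat2, lon2), np.corrcoef(s1, s2)[0, 1]))
    return out

# Toy example: a shared regional signal plus independent station noise.
rng = np.random.default_rng(0)
shared = rng.normal(0, 1, 30)
stations = [(40.0 + k, -100.0 + 2 * k, shared + rng.normal(0, 0.3, 30)) for k in range(5)]
for d, r in correlation_vs_distance(stations):
    print(f"{d:7.0f} km  r = {r:.2f}")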
Zeke, your study is a macro view of the dataset. Pielke Sr. et al were looking at the microclimate in physical detail. They concluded: “There were many geographic anomalies where neighbouring weather stations differed greatly in the magnitude of change or where they had significant and opposite trends. We conclude that sub-regional spatial and seasonal variation cannot be ignored when evaluating the direction and magnitude of climate change.”
Note they are describing landscape anomalies, not statistical constructs. Their analysis showed that geographical differences lead to different weather patterns and trends despite proximity.
For more of the details, see this:
Influence of landscape structure on local and regional climate
http://pielkeclimatesci.files.wordpress.com/2009/09/r-107.pdf
Hi Bob,
I dug into the MMTS issue in much more detail a few years back here: http://rankexploits.com/musings/2010/a-cooling-bias-due-to-mmts/
The best way to analyze the effect of the transition is to look at pairs of otherwise similar stations, one of which transitioned to MMTS and the other of which remained LiG. There is a pretty clear and consistent drop in maximum temperatures. The rise in minimum temperatures is less clear, as there is significant heterogeneity across pairs. I’ve suggested in the past that the difference in min temperature readings might be a result of the station move rather than the instrument change, as many MMTS stations are located closer to buildings than their LiG predecessors.
Thanks Zeke,
I read that from the link in your post. It sounds like a reasonable way to estimate a bias in the absence of basic QC validation of the equipment change. For all the data massaging going on with this, I’d expect the adjustments to be made using a higher level of QC. Instrument/method validation is a pretty standard QC practice. Did they put the MMTS out thinking any difference was minor for the purpose (agriculture) and now we are trying to force fit it into something more serious, like a data source for rearranging economies?
Bob,
The MMTS transition was dictated by the desire of the National Weather Service to improve hydrologic monitoring and forecasting. The climate folks at the time were very unhappy with this choice, as they wanted a consistent record, but climate monitoring was presumably less of a priority than weather monitoring back in the 1980s, and the stations were used for both.
Also, Bob, here is a good side-by-side study conducted after the transition: http://ams.confex.com/ams/pdfpapers/91613.pdf
Did no one think to run MMTS and LiG measurements in parallel at the same location for a few years (hell, days) to estimate the bias?
“The root cause of the bias between MMTS and LIG measurements was not determined past some generalities: closer to buildings, wood temperature changed via coating type. I didn’t see any testing that swapped or paired the thermometers in the housings”
read Qualye and then comment.
Specific link or should I just find the first Qualye on google?
Bob proves that he did not read zeke.
had bob read zeke and followed all the references
he would have found Qualye
instead bob wants me to do his homework
Here is the link that zeke provided
http://rankexploits.com/musings/2010/a-cooling-bias-due-to-mmts/
read everything there. you are qualye hunting now.
Hubbard and Lin are a bit more recent: http://onlinelibrary.wiley.com/doi/10.1029/2006GL027069/abstract;jsessionid=26B071CC2FFBCEB1C12076F4503F4A25.f01t04
Zeke, thanks for the Hubbard-Lin link, clarifies for me what has been done
Mosher, (a) I had “read zeke” (b) It’s “Quayle” not “Qualye” (c) I suppose it is easier to make cryptic remarks than actually put up a link and discuss the what you consider important.
My questions on root cause analysis of the differences seems to be answered. It wasn’t done. Instead, comparisons were made using large numbers of stations and only one proximate set (CSU). In Quayle, mention was made of some stations having both CRS and MMTS for a while, but the data were ignored for months 0-5. I assume, but don’t believe it was mentioned, that they may not have been recording both. The differences between the stations are conjecture: liquid separation (but no documentation of readings with this), differences between heating of shelters (but no documentation), siting (but no documentation). No discussions of instrument drift, calibrations or any of those messy QA/QC things.
I’m late to this game and my questions were an attempt to form an opinion on the quality of this high-quality dataset and the adjustments. As has been said, the system wasn’t designed for what it is being used for.
Mr. Greene, these measuring instruments were not put into place to monitor climate change, as Zeke explains. They were pressed into service decades later. This has caused problems, obviously. Many of those problems have been cited by skeptics for a decade now. I think Zeke in this post has gone a long ways towards answering the questions posed by most and does, in my opinion, serve as an honest guide for anyone with an open mind.
From: Tom Wigley
To: Phil Jones
Subject: 1940s
Date: Sun, 27 Sep 2009 23:25:38 -0600
Cc: Ben Santer
“It would be good to remove at least part of the 1940s blip,
but we are still left with ‘why the blip.'” and
‘So … why was the SH so cold around 1910? Another SST problem?
(SH/NH data also attached.)’
So they “fixed” the Southern Hemisphere as well.
Well that certainly proves “good intentions” to me.
The early 1940’s blip was related to precautions taken by ships to avoid getting blown out of the water by u-boats and kamikazes.
And there was no land blip
There is a blip in the land-only data too, and both blips occur around 1940. It seems to be a robust feature of the data, even if we do have to make a bucket correction for some of the SST measurements.
thisisnotgoodtogo, see this:
http://www.columbia.edu/~mhs119/Temperature/
Wood for trees comparison:
BEST, CRUTEM3 and HadSST2
The argument that it’s an artifact does not seem to be a plausible one.
The peak of the AMO was around 1944.
http://www.woodfortrees.org/plot/esrl-amo/from:1909/to/plot/hadcrut4gl/from:1909/to/plot/esrl-amo/from:1909/to:1944/trend/plot/hadcrut4gl/from:1909/to:1944/trend/plot/hadcrut4gl/from:1977/to:2010/trend/plot/esrl-amo/from:1977/to:2010/trend
Get ready for 35 years of declining temperatures,
Hi Carrick.
“There is a blip in the land-only data too, and both blips occur around 1940. It seems to be a robust feature of the data, even if we do have to make a bucket correction for some of the SST measurements.”
Yes, there is. WHUTTY was trying to slide stuff by again.
We see that Tom and Phil were confabulating on how to adjust by figuring how much they wanted to take away from appearances. Like this: “Must leave some because there is a land blip, how much removal can we get away with?”
WWII was nasty. It affected measurements in ways that we will never quite figure out. The SST bias is well known and the data is patchy, the land measurements are possibly biased as well . But since the ocean is 70% of the global temperature signal, that is the one that clearly stands out.
WHT wrote
“The early 1940’s blip was related to precautions taken by ships to avoid getting blown out of the water by u-boats and kamikazes”
As noted by Tom and Phil , and circumlocuted by WHT, that does not explain the land blip.
His response:
“since the ocean is 70% of the global temperature signal, that is the one that clearly stands out”
Clearly ! And getting rid of it by off-the-cuff figurings on what they could get away with, would affect Global average so much more ! Perfect.
ClimateGuy commented
The early 40’s blip was due to a warm AMO and a warm PDO overlapping.
Partly, and that is accounted for in the natural variability. There is still a tenth of a degree bias due to mis-calibration as military vessels took over from commercial vessels during WWII.
Chuck, the mail is about SST.
This post is about SAT.
Note another skeptic who can’t stay on the topic of adjustments to the LAND data.
doesn’t want to understand.
When Zeke shows up to discuss land temps, change the topic to SST.
Steven Mosher: doesn’t want to understand.
Assume good faith, and a range of intensities in “want”. Point out the error and then stop.
Matthew,
How about this.
How about YOU police the skeptics.
Spend some time directing them to what the real technical issues are.
Yea Marler, during WWII the navy and merchant marine took over the responsibility for collecting SST measurements. Do you have any clue as to the calibration issues that resulted from that action?
What are they supposed to say in emails? That Hitler and Hirohito really messed things up?
steven mosher: How about YOU police the skeptics.
I read most of your posts and I skip most of the posts of some others. I’d rather not be distracted by the junk that you write.
“Assume good faith” was taken from Zeke Hausfather. I guess you don’t think it’s a good recommendation.
I know what the post is about. I am questioning whether some of the players have “good intentions.” (No aspersions are being cast on what Zeke and even you, despite your drive-by cryptic arrogance, are doing.)
Mosh, these deniers see exactly what they want to see. Amazing that they can put blinders on to WWII — it’s almost a reverse Godwin’s law.
WebHubTelescope: That Hitler and Hirohito really messed things up?
Well they did, dontcha know?
Matthew
Again,
how about you police the skeptics.
give it a shot.
show your chops.
it’s good practice to call out BS wherever you see it.
be a hero.
Mosher: it’s good practice to call out BS wherever you see it.
I can’t do it everywhere. In particular, I try to ignore people who are always wrong. There are a couple who are right or informative just barely often enough, but others whom I never read.
Hey Steve–the skeptics don’t need to be policed. Some of them might benefit from being ignored a bit…
That’s right, you don’t “police” little kids that make a mess of the house and get chocolate all over their face.
“Hey Steve–the skeptics don’t need to be policed. Some of them might benefit from being ignored a bit…”
yes, you ignore them and they show up to say that their questions were never answered, their demands never met, that Zeke is hiding something, blah blah blah.
I suggest that people who suggest ignoring should start by ignoring me as I play wack a mole.
Its fun
I get to have fun.
Chuck L. :From: Tom Wigley
To: Phil Jones
Subject: 1940s
Date: Sun, 27 Sep 2009 23:25:38 -0600
Cc: Ben Santer
Why exactly is that relevant to Zeke Hausfather and Steven Mosher and the BEST team?
Pingback: Misleading Information About USHCN At Judith Curry’s Blog | sunshine hours
Zeke
Well done for writing this long and informative post. It warrants several readings before I would want to make a comment. I do not subscribe to the grand conspiracy theory, nor to the idea that scientists are idiots or charlatans or that a giant hoax is being perpetrated on us. Which is not to say that I always agree with the interpretation of the data; often extremely scant and dubious data is given far more credence than it deserves.
I will read your piece again and see if I have anything useful to say, but thanks for taking the time and effort to post this.
tonyb
+1000.
Tony seeks understanding.
In my opinion, Zeke and Mosh are just two more “scientists” who are trying to change history by waving their hands. Leave the 1930s alone! You are no better than Mann and Hansen.
don’t address the science, attack the man.
sceptical Lysenkoism
Steve,
I appreciate what Zeke has done here, and consider both him and you basically reasonable and trying to be honest. However, this last comment is strange, since 99% of those that attack the scientists are attacking skeptics (Lindzen, Christy, Spencer, etc.), and they do exactly that: attack the man, not the science. It is a fact that many skeptics (including myself) started out accepting the possibility of a problem, and by studying the facts in depth came to the conclusion that CO2 effects are almost certainly small, dominated by natural variation, and mainly desirable. I agree that there has been warming in the last 150 years, and a small part of that is likely due to man’s activity. I really don’t care if it was 0.5C or 0.8C total warming, or whether man contributed 0.1C or 0.4C of this. The flat-to-down temperature trend of the last 17 or so years, and the likely continued down trend, clearly demonstrate the failure of the only part of CAGW that is used to scare us: the models. I think the use of data adjustment, and then making an issue of 0.01C as a major event, is what bugs many of the skeptics here.
leonard.
good comment.
here is the problem.
there is all this skeptical energy. it should be focused on the issue that matters.
how can I put this. After 7 years of looking at this stuff.. this aint where the action is baby.
+1 to Leonard
I totally agree. The focus on the “measured” temperature record is akin to mental mas…bation.
So where do you think the action is?
Leonard Weinstein: The flat to down temperature trend of the last 17 or so years, and likely continued down trend clearly demonstrate the failure of the only part of CAGW that is used to scare us: The models. I think the use of data adjustment and then making an issue of 0.01C as a major event is the bug in many of the skeptics here.
It is useful to address the measurement and temperature problems, and then to address the modeling and theoretical problems separately. Some of the people who have posted “skeptical” comments here clearly (imo) do not understand the statistical methods that have been employed in the attempt to create the best attainable temperature record. That’s independent of whether the same people or different people understand any of the CO2 theory or its limitations.
This thread initiated by Zeke Hausfather is very informative about the temperature record and the statistical analyses. His next two promise more information about the temperature record and the statistical analyses.
“don’t address the science, attack the man”
Ah. Like you did earlier attacking me for calling myself an engineer? Actually it was my employers since 1981 who insisted on calling me an engineer. I prefer to call myself “Lord and Master of all I survey.”
You are such a putz, Mosher. Of course you know that already.
Regardless of whether these adjustments are made in good faith or not, I would like NASA to run some experiments. Take the pre global warming scare algorithms, and run them against the 1979 – current temperatures. Compare these to UAH. Then take today’s algorithms. Compare them to UAH. At least then the amount of adjusting that’s going on would be known.
Hi Ed,
You can do one better: compare raw data and adjusted data to UAH. Turns out that over the U.S., at least, UAH agrees much better with adjusted data than raw: http://rankexploits.com/musings/wp-content/uploads/2013/01/uah-lt-versus-ushcn-copy.png
rankexploits doesnt like my ip…
The UAH trend for USA48 by month is all over the place.
1998 – 2013
Jan -0.14
Feb -0.7
Mar 0.66
Apr 0.17
May -0.46
Jun 0.33
Jul -0.1
Aug -0.05
Sep 0.03
Oct -0.28
Nov -0.21
Dec -0.27
You know that this argument is not valid. Why use it again and again?
http://img215.imageshack.us/img215/5149/plusuah.png
Well ed?
Zeke answered your complaint.
Are you interested in understanding? Can you change your mind based on evidence?
It was your question..
Second, you realize that UAH is highly adjusted.
right?
you realize that the UAH record has changed many times by adjusting for
instrument changes..
right?
Zeke,
Yeah, absolute temperatures are interesting, but I’m mostly interested in the change in the shape of the graphs. If modern day adjustments more closely follow the UAH shape than the algorithms of ten, or twenty years ago, then that gives food for thought. Specifically, I’m thinking UAH methodology is completely different from NASA’s, and so it’s unlikely errors in one are identically reflected in errors in the other. If the modern day adjustments more closely reflect UAH, that’s a good indication the approach is getting better. On the other hand, if modern algorithms yield cooling in the 1980s and warming in the 2000s vs. UAH and this effect is pronounced compared to earlier NASA algorithms, then that could indicate bias in either NASA or UAH algorithms, though probably in the NASA algorithms since now the previous NASA algorithms must be wrong and UAH too must be wrong.
Why look at previous NASA algorithms? In my view bias is a subtle thing, and even people with very solid credentials and the best of intentions can get snookered.
Mosher:
What complaint am I making?
Ed
UAH and SAT are two different things.
Suppose I had a method for calculating unemployment
Suppose I had a method for calculating CPI
both methods require and use adjustments.
You don’t learn anything by comparing them.
“You don’t learn anything by comparing them.”
How then do you interpret Zeke’s comment within the prism of your claim?
“You can do one better: compare raw data and adjusted data to UAH. Turns out that over the U.S., at least, UAH agrees much better with adjusted data than raw:”
I can think of several, one being he doesn’t agree with you. Here, Zeke is using UAH to bolster the idea that NASA adjustments make for a better temperature record. If you agree with that, then these are comparable data-sets. If not, take it up with Zeke.
Meanwhile, I’m still waiting for you to explain my “complaint.”
pretty simple Ed. Zeke is using your argument against you.
next.
“pretty simple Ed. Zeke is using your argument against you.
next.”
What argument did I make, Mosher?
the one up thread.
Pingback: The Skeptic demands: temperature data (draft) | ScottishSceptic
Good post Zeke, but I’m curious: if you have several readings per day and average them to a daily mean, wouldn’t that wipe out any need for a Tmax or Tmin adjustment?
Dale,
If you had hourly readings you would no longer need TOBs adjustments. You would still have to do something about station moves and instrument changes, however. I’m a bit more of a fan of the Berkeley approach of treating breakpoints as the start of a new station record, rather than trying to conglomerate multiple locations and instruments into a single continuous station record.
Dale, hourly data is what is used to estimate the TOBS correction.
See for example this post from John Daly’s site:
http://www.john-daly.com/tob/TOBSUMC.HTM
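The basic idea is easy to reproduce: from hourly data you can compute what a once-a-day min/max observer would record for any reset hour and compare it against a midnight-to-midnight reference. Here is a minimal sketch using a synthetic diurnal cycle rather than the USCRN hourly data the follow-up post will use:

# Minimal sketch of the TOBs idea with made-up hourly data. A min/max
# thermometer read and reset at `obs_hour` records extremes over the 24 h
# ending at that hour.
import numpy as np

def daily_minmax_means(hourly, obs_hour):
    """(Tmax+Tmin)/2 for successive 24 h windows ending at obs_hour."""
    n_days = hourly.size // 24
    means = []
    for d in range(1, n_days):                        # skip day 0: no full prior window
        end = d * 24 + obs_hour
        window = hourly[end - 24:end]
        means.append((window.max() + window.min()) / 2)
    return np.array(means)

# Synthetic summer month: diurnal cycle peaking mid-afternoon plus day-to-day swings.
rng = np.random.default_rng(3)
days = 31
hours = np.arange(days * 24)
weather = np.repeat(rng.normal(0, 3, days), 24)       # synoptic variability
hourly = 25 + 8 * np.cos(2 * np.pi * ((hours % 24) - 15) / 24) + weather

afternoon = daily_minmax_means(hourly, obs_hour=17)   # 5 pm observer
morning = daily_minmax_means(hourly, obs_hour=7)      # 7 am observer
midnight = daily_minmax_means(hourly, obs_hour=0)     # calendar-day reference

print("afternoon bias: %+.2f C" % (afternoon.mean() - midnight.mean()))
print("morning bias:   %+.2f C" % (morning.mean() - midnight.mean()))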
Judith Curry
When I have had to change instruments, I’ve run concurrent outputs for the same experiment to see if the results are the same: i.e., overlap.
When I see that there has been a change using Liquid in Glass and two automated systems which necessitated physically moving the automated systems closer to buildings as well as Time of Observation changes, I am curious as how long the readings run concurrently so that there is overlap in using all of the instruments.
For example: TOB, when there was a switch to AM from Afternoon, how long (and I am assuming there was overlap observations) was the observation period that had morning and afternoon recorded, a season? a year? a decade? ongoing?
When the switch from LiG to MMTS or ASOS, how long was the overlap field observation? Or was this another in lab experiment?
“NCDC assumes that the current set of instruments recording temperature is accurate.” Electronics don’t drift? Go haywire? Issues with my computer tell me otherwise.
I am first concerned with the fundamentals/integrity of the observations vs the fiddling with the outputs. Output fiddling is the game of statisticians, on whose integrity I am dependent.
RiH008:
There have been a number of papers published looking at differences between side-by-side instruments of different types. This one for example: http://ams.confex.com/ams/pdfpapers/91613.pdf
The NCDC folks unfortunately had no say over instrument changes; it was driven by the national weather service’s desire to improve hydrologic monitoring and forecasting. Per Doesken 2005:
“At the time, many climatologists expressed concern about this mass observing change. Growing concern over potential anthropogenic climate change was stimulating countless studies of long-term temperature trends. Historic data were already compromised by station moves, urbanization, and changes in observation time. The last thing climatologists wanted was another potential source for data discontinuities. The practical reasons outweighed the scientific concerns, however, and MMTS deployment began in 1984.”
Zeke Hausfather
Thank you for your response. As I understand it, NWS made the decision to change the instrumentation and in some cases location of the observing stations.
I did not see anywhere how the transition took place.
A 20 year retrospective analysis of one station in Colorado:
“Is it possible that with aging and yellowing of the MMTS radiation shield that there is slightly more interior daytime heating causing recent MMTS readings to be more similar to LIG temperatures. But in a larger perspective, these changes are very small and would be difficult to detect and explain, except in a controlled co-located environment. Very small (less than 0.1 deg F) changes in MMTS-LIG minimum temperatures have also been observed, with MMTS slightly cooler with respect to LIG. The mean annual MMTS-LIG temperature differences are unchanged.
Just as in the early years of the intercomparison, we continue to see months with larger and smaller differences than the average. These are likely a function of varying meteorological conditions, particularly variations in wind speed, cloud cover and solar radiation. These are the factors that influence the effectiveness of both the MMTS and LIG radiation shields.”
If I am understanding what the article you provided said: there were no side-by-side comparisons of LiG and electronic observations in a prescribed way. There may have been some side-by-side comparisons, and there are anecdotes, but the transition was not geared for climate research, particularly longitudinal. The instrument-period observations are influenced by meteorological conditions that were not quantified.
It appears to me that the instrument period, at least from the transition onward, is spurious because of that transition. The adjustment mechanisms are ill designed and ill suited to this data set, and there are ~0.5 C adjustments based upon a best… estimate. This is all here in the USA. What happened around the world?
I am still curious.
RiH008,
There is no prescribed 0.5 C adjustment for MMTS transitions. It’s handled by the PHA, which looks for breakpoints relative to neighbor difference series. Instrument changes tend to be really easy to pick up using this approach, as they involve pretty sharp step changes up or down in min/max temperatures.
In that particular case it’s pretty clear that there is a ~0.5 C difference in max temp readings between the instruments. I looked at many other examples of pairs of stations here: http://rankexploits.com/musings/2010/a-cooling-bias-due-to-mmts/
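For intuition only, here is a toy version of that idea; it is not the actual Pairwise Homogenization Algorithm, just a single-breakpoint scan on a synthetic target-minus-neighbor difference series with an assumed 0.5 C step:

# Minimal sketch: difference a target station against a neighbor and scan for
# the split point that maximizes the step between the means before and after.
import numpy as np

def find_step(diff_series, min_seg=12):
    """Return (index, step size) of the most likely single breakpoint."""
    best_idx, best_stat = None, 0.0
    for k in range(min_seg, diff_series.size - min_seg):
        step = diff_series[k:].mean() - diff_series[:k].mean()
        if abs(step) > abs(best_stat):
            best_idx, best_stat = k, step
    return best_idx, best_stat

rng = np.random.default_rng(4)
months = 240
regional = rng.normal(0, 1.0, months)            # shared climate signal
neighbor = regional + rng.normal(0, 0.3, months)
target = regional + rng.normal(0, 0.3, months)
target[150:] -= 0.5                              # e.g. an LiG-to-MMTS step in Tmax

idx, step = find_step(target - neighbor)         # the shared signal cancels out
print(idx, round(step, 2))                       # roughly 150 and -0.5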
In case anyone wondered whether the Karl 1986 TOBS paper had good data …
http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/#comment-605470
See the follow up paper.
Which one? Is the data any better?
read zeke again.
take notes.
write down the references.
read the papers
get the data.
get the code.
write your own code.
compare the results.
write a paper.
be a hero.
Second paper
Ooops:
“Data for the analysis were extracted from the Surface Airways Hourly database [Steurer and Bodosky, 2000] archived at the National Climatic Data Center. The analysis employed data from 1965–2001 because the adjustment approach itself was developed using data from 1957–64. The Surface Airways Hourly database contains data for 500 stations during the study period; the locations of these stations are depicted in Figure 2. The period of record varies from station to station, and no minimum record length was required for inclusion in this analysis.”
Wow. The stations could and would have moved spatially and in elevation.
yes bruce.
and the station moves are part of the reason why the error of prediction is
what it is.
If you had been reading my comments from 2007 to 2010,you’d know
how important the error of prediction is.
Its not that hard to understand.
give it a try.
you could actually go through the records and find the stations that moved. its pretty simple.
show us your chops.
Oh, when you do, tell Roy Spencer he uses the same data without accounting for the moves.
After the appalling comment by Judith “that they are only trying their best”, it seemed to me rather than saying what is currently wrong with the present system, what I really wanted to do is to say what we needed instead. So, I’ve decided to “list my demands” on my own website. I would welcome any comments or additions.
https://scottishsceptic.wordpress.com/wp-admin/post.php?post=3657&action=edit&message=6&postpost=v2
ScottishSceptic: I had a problem with your link
read ISO9000 for starters. thats my advice.
Scottishsceptic
Your link goes to a place which asks for my email AND a password.
I have no wish to create yet more passwords. When I bought underlay for my carpet online I was required to create a password so these days I tend to steer clear of new places that require one for no good reason.
Tonyb
he needs ISO9000 for his links
Mr Hausfather,
I tend to agree with some comments regarding the lack of credibility caused by the “scientific community´s” bad apples as they try to evolve into “scientific manipulators”. I can see they are giving you a headache.
The problem, as I see it, is that data manipulation is quite evident. They do tend to treat the public with a certain contempt.
And I´m not referring to the temperature adjustments. I´m referring to the use of the red color palette by NOAA to display worldwide temperatures, and similar issues, or the use of tricked graphs and similar behaviors. You know, if we use a search engine and start searching for climate change graphs and maps, there´s a really interesting decrease in the number of products after 2010. It seems they realized the world´s surface wasn´t warming, and they stopped publishing material. This is seen in particular in US government websites. Is the US President´s “science advisor´s” political power reflected in the science they show us?
Anyway, I realize this thread is about temperature adjustments in the USA. But I do wonder, does anybody have a record of the temperature adjustments by independent parties, for example Russia and China? Do you talk to personnel in the WMO Regional Climate Center in Moscow?
If you dont like the colors download the data and change the palette.
Mr Mosher, I´m sophisticated enough to catch “palette bias”. I don´t need to download the data. However, US government websites intended for the general public do have a significant bias. And it´s not reasonable to expect individual members of the public to understand there´s a bias, download the data, and plot it using software most of them lack.
I´m extremely cynical when it comes to honesty by government leaders in general, and this applies to the whole spectrum. Thus my social criticism isn´t aimed at a particular population of politicians (although I do admit I have an issue with real red flag waving communists).
Take US politics. Those of us who are smart enough realize we got lied to about the Tonkin Gulf Incident, that Clinton lied about genocide in Kosovo, that Bush lied about WMD in Iraq, etc etc etc.
Therefore I´m not really surprised to see government agencies toe the line and use deceit to plug the party line du jour. On the other hand, I do write and talk to explain these deceptions do go on. During the Tonkin Gulf Incident I was sort of innocent and I wasn´t too aware of what went on out there. Later, as I realized things were being distorted, I made it my hobby to research what really went on. And what I found wasn´t so nice.
This climate warming issue is peanuts. How do you like the fact that we spent $1 trillion invading Iraq looking for those fake WMD and here we are 11 years later watching a Shia thug allied with Iran fighting a civil war against a bunch of Sunni radicals? This climate warming issue is peanuts compared to the lies and the mistakes the US government makes when it lies to the people to justify making irrational moves.
“Mr Mosher, I´m sophisticated enough to catch “palette bias”. I don´t need to download the data. However, US government websites intended for the general public do have a significant bias”
Show me the experiment you did to prove the bias.
If you dont like the palette, do what I do.
change it.
Fernando – excellent. It went completely over Mosher’s head of course so his instinct was to simply repeat the unreasonable demand.
Jennifer Marohasy has documented “cool the past and/or warm the present” adjustments for specific stations in Australia by its BOM (equivalent to NCDC), in their so-called High Quality (HQ) data set. The bias was so obvious that a national audit of HQ was demanded under Australian law. The BOM response was to drop HQ and commence with a new homogenization program.
In New Zealand, NIWA has aggressively and apparently unjustifiably cooled the past. A lawsuit was filed seeking technical disclosure. It got rebuffed at the highest court level on dubious legal grounds similar to Mass. v. EPA. Appeals courts are not well positioned to determine matters of fact rather than law, and depending on how laws are written they have to defer to fact finders like EPA or NIWA even if biased.
Frank Lansner’s RUTI project has similarly documented at least regional warming bias in HadCRUT.
Steirou and Koutsoyiannis documented warming homogenization bias in the global GHCN using a sample of 163 stations. The paper was presented at EGU 2012 and is available online from them. Quite a read.
Topic is NOAA.
Topic is USHCN.
Mosher and Zeke,
After all the adjustments, how do you determine if the information is more accurate than before the adjustments?
Andrew
simple. out of sample testing.
With TOBS what you do is this ( this is how it was developed)
you take 200 stations
you make two piles
You use 100 to develop the correction.
you predict what the other 100 should be recording.
you compare your prediction to the observations.
You see that your prediction was solid
You publish that paper years ago.
Then you watch skeptics avoid reading the paper, and you watch them demand proof.
When you point them at the proof, they change the subject.
When you point out that they are avoiding reading the proof they demand, they get nasty and attack Zeke’s motives.
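For what the out-of-sample procedure described above looks like in practice, here is a minimal sketch with entirely synthetic stations and an assumed time-of-observation bias; it is not the published TOBs model, just the develop/predict/compare pattern:

# Minimal sketch of out-of-sample testing with synthetic data: fit a
# per-observation-hour correction on one pile of stations and test how well
# it predicts the bias measured at the other pile.
import numpy as np

rng = np.random.default_rng(5)
n_stations = 200
obs_hours = rng.choice([7, 17], n_stations)                 # morning or afternoon observers
true_bias = np.where(obs_hours == 17, 0.4, -0.2)            # assumed "true" TOBs bias, C
measured_bias = true_bias + rng.normal(0, 0.1, n_stations)  # what a station comparison shows

train, test = np.arange(0, 100), np.arange(100, 200)        # two piles of 100

# Develop the correction on the first pile: mean measured bias per obs hour.
correction = {h: measured_bias[train][obs_hours[train] == h].mean() for h in (7, 17)}

# Predict the second pile and compare prediction to observation.
predicted = np.array([correction[h] for h in obs_hours[test]])
rmse = np.sqrt(np.mean((predicted - measured_bias[test]) ** 2))
print({k: round(v, 2) for k, v in correction.items()}, "RMSE:", round(rmse, 3))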
“you take 200 stations
you make two piles
You use 100 to develop the correction”
Doesn’t sound very scientific to me. Just sounds like you are making group A more like group B. There is no scientific reason to do this.
Andrew
This assumes the stations are independent of each other and not affected by independent variables, which is not always the case. If the in sample and out of sample data consistently read incorrectly the same way, a “confirmation” could still occur. Out of sample testing can be very useful, but there are many ways to do it wrong and sometimes no way to do it right depending on the data sets available. Not saying it was done wrong here, only saying that stating OOS testing was done is not a blanket confirmation. Certainly better than not doing it at all.
Another example, if one claimed the post 1980 divergence issue in tree rings was out of sample confirmation data, then it would fail and clearly invalidate the tree ring proxy record. So we have an OOS failure but the reconstruction still holds for many.
“Then you watch skeptics avoid reading the paper, and you watch them demand proof.
When you point them at the proof, they change the subject.
When you point out that they are avoiding reading the proof they demand, they get nasty and attack Zeke’s motives.
”
All the good work Zeke is doing to help improve communication on this issue…..
another “just saying”….
“you make two piles
You use 100 to develop the correction.
you predict what the other 100 should be recording.
you compare your prediction to the observations.”
It seems to me the only way to actually verify a “correction” for a change in equipment, location or procedure, is to continue taking temps at the same location(s) using both methods/instruments over an extended period of time. If you do that, with enough stations, and the change in each is the same within a certain range, it seems to me that that gives you your correction with error bars for that change. (You could then use it to “predict” the change in temps at other sites, but I don’t see the purpose. How do you know the temps/average temps/trends of the other stations remained the same?)
Is this what “develop the correction” means?
If on the other hand, you are making a statistical “correction” based on assumptions and then comparing it against other stations to see if your “predictions” are correct, I don’t see the value in that at all.
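That overlap approach is straightforward to express. Here is a minimal sketch with invented numbers (the 0.5 C offset and the noise levels are assumptions, not measurements) showing how per-site differences from co-located instruments yield a network correction with error bars:

# Minimal sketch of the overlap approach: paired LiG and MMTS daily Tmax at
# the same sites give a per-site offset, and the spread across sites gives
# the error bars on the network-wide correction. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(6)
n_sites, n_days = 25, 365
true_offset = -0.5                                   # assumed MMTS-minus-LiG Tmax offset, C

lig = 20 + rng.normal(0, 5, (n_sites, n_days))       # shared weather at each site/day
mmts = lig + true_offset + rng.normal(0, 0.3, (n_sites, n_days))

site_offsets = (mmts - lig).mean(axis=1)             # one estimate per co-located pair
mean_offset = site_offsets.mean()
stderr = site_offsets.std(ddof=1) / np.sqrt(n_sites)

print(f"MMTS - LiG = {mean_offset:.2f} C  (95% CI +/- {1.96 * stderr:.2f} C)")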
The time of observation ate my global warming.
Priceless.
If Zeke is to be allowed three long guest posts here, how about allowing Goddard to write one?
Another skeptic who changes the subject.
Nothing is stopping Goddard from submitting something to Judith. Unlike sites like WUWT, she probably won’t post it if the science doesn’t make sense to her. Her earlier posts show she has an open mind on the subject.
Have you seen the two posts, so far, by Tom McClellan? Pure foolishness. Goddard’s stuff is at least wrong.
Zeke should post his code.
Here you go. Note it’s in STATA, which not everyone uses…
If you have STATA or can get your hands on it and have any questions, let me know and I’ll walk you through it.
https://www.dropbox.com/s/a2bs6a0i9knww8n/Code%20for%20Climate%20Etc%20Post.zip
Nice try.
You can also open it in plaintext with any text editor. There are some conventions somewhat unique to the language, but most of it should be pretty easy to follow.
waiting for you bruce?
I have my trial license for STATA today. Give me a day to see if Zeke’s code works. Had to see eye doctor today.
My code is pared down and ready to post if his works. I’ll find a spot for the data and code.
One question Zeke.
The 1900-1910 climatology baseline you use has the highest percentage of estimated data in the whole USHCN record. Over 30%.
Do you use Estimated data in the climatology baseline?
Paul Matthews:
Why not somebody who is technically competent instead?
JeffID maybe?
> Why not somebody who is technically competent instead?
I’d suggest Carrick.
Pingback: Adjustments To Temperature Data | Transterrestrial Musings
Skeptics are better off barking up another tree than the temperature record.
I trust they can read a thermometer without letting their political activism get in the way. This is one measurement area where attempting to corrupt the record would be easy to identify, as opposed to the paleo record which is a mess of assumptions, guesses, and questionable statistics.
One problem I have with the temperature record is when it is presented without a vertical scale in the media, which seems to happen much more often than one would expect. The same goes for sea level rise.
Another issue is when it is shown only from 1950 or 1980, which hides the fact that the first half of the 20th century had significant warming which was not AGW based. This is such old news that it is never discussed anymore, but I think it is significant relative to how much natural forces may be responsible for the last 50 years of warming.
Presenting the magnitude of the temperature change over the past century relative to how much the temperature changes on a daily or yearly basis can be quite an eye opener to many people who seem to believe this warming is “dramatic”.
http://instituteforenergyresearch.org/wp-content/uploads/2012/05/Nordhaus-4.png
There’s no problem with the temperature record. The problem is with the ‘adjustments’, which with each revision add in more and more warming. The first USHCN version added in 0.5F warming, now they are adding 0.9F. It’s the so-called scientists who can’t read a thermometer without their political activism getting in the way.
“It’s the so-called scientists who can’t read a thermometer without their political activism getting in the way.”
I’m a libertarian.
Your theory is that liberal scientists are making stuff up because of their activism.
TOBS was first done in 1986, before the IPCC.
I’m a libertarian; where is my activism?
So much for your theory.
More bad science from you.
You have excelled yourself here. It’s all about you! As with climategate, you seem to have a delusional view of your own importance.
“So-called scientists”? No ad homs here. No sirree.
Paul Matthews:
I thought you were smarter and better informed than this.
Of course there are problems with the (raw) temperature record. Given the manner in which the data were collected, the issue isn’t whether the data should be adjusted to correct for the errors, but whether sufficiently good adjustments could ever be made, and whether we could know that they had been made.
@Carrick
… and how much error is added to the data with each estimated correction and adjustment and how much uncertainty flows to the results of the analysis.
” and how much error is added to the data with each estimated correction and adjustment and how much uncertainty flows to the results of the analysis.”
That is a good question.
One thing that I droned on about for maybe 3 years was the propagation of errors due to adjustment.
It’s one reason I like the Berkeley approach better. It’s top-down.
AND we have much larger errors than Jones.
He flipped out when he read this and could not understand the math.
Tom Sharf: Skeptics are better off barking up another tree than the temperature record.
I agree, but I am glad that other people are watching this with energy and alertness.
I would certainly say that “miraculously” many temperature adjustments seem to make the past colder and the present warmer, and the adjustments mostly trend that direction over time. This certainly brings confirmation bias into question, but you have to look into what they actually did, and I don’t see any authentic corruption here.
Enough people have looked into it (particularly BEST in my opinion) that it seems good enough to me and not likely to get much better, or change much from here on out.
Nice point about the presentation. I’d thought of that but your link was the first time I had seen the presentation in a normal scaling… Telling, eh?
Thanks, Zeke and Judith, for this post. It is exactly the kind of thing I look for on climate blogs: basic information to better my own personal understanding (and with less hype, even if the lower hype makes it a bit less exciting than the latest post ostensibly threatening to up-end the field).
The USCRN doesn’t seem to be working properly:
http://www.forbes.com/sites/jamestaylor/2014/06/25/government-data-show-u-s-in-decade-long-cooling/
Adjustments will be needed.
Not really; USCRN actually has a higher trend than USHCN over the recent period, though the two are nearly identical: http://rankexploits.com/musings/wp-content/uploads/2014/06/Screen-Shot-2014-06-05-at-1.25.23-PM.png
Regarding your above link, I think that’s a winner, along with the UAH comparison.
They both need adjusting.
Zeke Hausfather, thank you for your post, and the responses to comments. I look forward to your next posts.
Steven Mosher, thank you for your comments as well.
From Zeke: Their methods may not be perfect, and are certainly not immune from critical analysis, but that critical analysis should start out from a position of assuming good faith and with an understanding of what exactly has been done.
Yes to understanding exactly what has been done.
“Assuming good faith” is a problem. The assumption should be that errors have been committed, and that the people who made the errors will be very defensive about having them pointed out. Sorry. It’s hard to avoid thinking that a check of your work (or my work) is an assault on your integrity or value as a person (or mine). Assuming good faith is why journal editors generally have trouble detecting actual fraud; everybody makes mistakes, and the reputation of academia is that they do not do as good a job checking for errors in programs as do the pharmaceutical companies, who have independent contractors test their programs. “Assuming good faith” ought to be reciprocal and about equal, and equally conditioned.
Should FOIA requests be granted the “assumption of good faith”, however conditioned or qualified? Say the FOIA requests made to the U of VA by news organizations and self-appointed watchdogs for the emails of Michael Mann? Or perhaps the re-analyses by Stephen McIntyre of data sets that have had papers published about them? It’s a tangent from your post, which is a solid contribution to our understanding.
““Assuming good faith” is a problem. The assumption should be that errors have been committed, and that the people who made the errors will be very defensive about having them pointed out.”
Err no.
Assuming good faith is not a problem.
you do work for me. I assume you will make mistakes. that is not bad faith.
you do work for me. I claim you must have made mistakes because you
are self-interested and because someone across the ocean made mistakes in a totally different field, and I refuse to look at your evidence
until you prove you are a virgin. that is what most skeptics do.
Mosher: Err no.
Assuming good faith is not a problem.
you do work for me. I assume you will make mistakes. that is not bad faith.
you do work for me. I claim you must have made mistakes because you
are self-interested and because someone across the ocean made mistakes in a totally different field, and I refuse to look at your evidence
until you prove you are a virgin. that is what most skeptics do.
How you do go on.
There are professionals whose work is always audited. I mentioned the pharmaceutical companies, whose programs are always checked by outsiders. Financial institutions have their work audited; professional organizations like AAAS and ASA have their finances audited; pharmaceutical and other scientific research organizations maintain data audit trails and they are subject to audits by internal and external auditors.
Whether the auditors assume good faith or not, mistakes are so prevalent that it ought to be assumed by auditors that they are present.
“Whether the auditors assume good faith or not, mistakes are so prevalent that it ought to be assumed by auditors that they are present.”
I can tell you with CERTAINTY that there are mistakes in our product.
It is not a can of Pringles.
Let’s start from the top.
1. De-duplication of stations.
We decide algorithmically when two stations are the same or different;
starting with over 100K stations, we reduce this to 40K unique ones.
There WILL BE errors in this, even if our algorithm were 99% accurate.
Central Park was a nightmare of confused source data.
Another user pointed out an error that led to a correction of 400 stations.
There are errors in the EU where the metadata indicate two stations
and some users insist that historically there was only one.
These errors don’t affect the global answer, but the local detail will
not be the best you could do with a hand check of every station record.
2. The climate regression. We regress the temperature against elevation
and latitude. This captures over 90% of the variation. However, these
two variables don’t capture all of the climate. Specifically, if a station is in an area of cold-air drainage, the local detail will be wrong in certain seasons.
Next, because SST can drive temps for coastal stations and because the
regression does not extract this, there will be stations where the local detail is wrong. However, adding distance-to-coast doesn’t remove any
variance on the whole, so the global answer doesn’t change. If you’re really interested in the local detail, then you would take that local area and do
a targeted modelling effort.
3. Slicing. The slicing can over-slice and under-slice. It relies on metadata
and statistical analysis, so there will be cases of both. This is one area where we can turn the slicing knob and see the effect; there will be a local effect and a global effect (a toy sketch of the slicing idea follows at the end of this comment).
4. Local detail. One active research question is how high a resolution we can drive to. Depending on the choices we make, we can oversmooth or undersmooth the local detail. Some groups, like PRISM, drive the resolution down to sub-30-minute grids; this tends to give answers that are thermodynamically suspect. On the other hand you have CRU, which works at 5 degrees.
Now, you can play with this resolution, from 5 degrees down to 1/4 degree.
What you find is that the global answer is stable, but the local detail increases.
The question is: is this local detail accurate?
The question of bad faith is this: are these errors, which we freely admit, the result of my libertarian political agenda, or of Zeke’s more liberal political agenda? Please decide which one of our agendas created these errors that we freely admit to.
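For readers who want a concrete picture of the “slicing” (scalpel) idea in item 3, here is a minimal, hypothetical sketch in Python. The station series, the artificial step, and the single metadata breakpoint are all invented; the actual Berkeley Earth implementation also detects breaks statistically and handles them far more carefully.

```python
# Toy sketch of "slicing": rather than adjusting a record across a suspected
# break (e.g. a station move), cut it into segments and treat each segment as
# its own record. Hypothetical data; the real method also uses break tests.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2011)
temps = 0.01 * (years - 1950) + rng.normal(0, 0.1, years.size)
temps[years >= 1985] += 0.6          # artificial step from a (made-up) station move
documented_moves = [1985]            # metadata-supplied breakpoint

def slice_record(years, temps, breaks):
    """Split one station record into independent segments at the given breaks."""
    edges = [years[0]] + sorted(breaks) + [years[-1] + 1]
    return [(years[(years >= a) & (years < b)], temps[(years >= a) & (years < b)])
            for a, b in zip(edges[:-1], edges[1:])]

for seg_years, seg_temps in slice_record(years, temps, documented_moves):
    print(f"{seg_years[0]}-{seg_years[-1]}: {seg_temps.size} yrs, "
          f"mean {seg_temps.mean():.2f} C")
```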
New Rule: Anyone who doesn’t trust the temperature data, can’t use that data as evidence for the Pause.
Which set of data?
It’s like this davey, if the Soviet Union admitted one year that production of cement had declined, you could believe them.
This is a very nice illustration of how confirmation bias works.
David, you are expecting consistency from skeptics.
they will question the record when it fits their agenda
they will endorse the record when it fits their agenda.
They will ignore that the very first skeptical attempt to construct a record
(jeffid and romanM) actually showed more warming
Pointing out that their record shows no warming is not necessarily endorsing their record. You know that.
Don, citing the record AS PROOF of a pause,
or citing the record AS PROOF that CO2 is not the cause,
requires, logically, endorsement.
Merely pointing is one thing; citing as proof is another.
I own a gun.
You find your enemy dead.
The bullet matches my gun.
You argue against the match; you raise doubts.
You find your dog dead.
The bullet matches my gun.
You argue I killed your dog.
Mosher will be denying the Pause any moment now.
no bruce.
I’m pretty clear on the pause.
wishful thinking.
1. If you assume that the underlying data-generating model is linear,
2. and you fit a straight-line model to the data,
3. the model will have a trend. Not the data; the data just is.
4. The trend in that model will have an uncertainty.
5. Depending on the dataset you select and the time period, you can
find a period in the recent past where the trend of the assumed model is “flat”.
Some people refer to this as a pause, hiatus, slowing, etc.
It’s just math.
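To make the point concrete, here is a minimal sketch of steps 1–5 with toy anomaly data: fit a straight-line model and report that model’s trend together with its uncertainty. The numbers are invented, and published trend calculators (like the one linked further down in the thread) additionally widen the interval for autocorrelation, which this sketch does not:

```python
# Toy example: the "trend" is a property of a fitted straight-line model,
# and it comes with an uncertainty. Made-up anomalies; real trend tools
# additionally widen the interval to account for autocorrelation.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1995, 2015)
anoms = 0.01 * (years - years[0]) + rng.normal(0, 0.1, years.size)

# Ordinary least squares: anomaly = intercept + slope * (year - start)
X = np.column_stack([np.ones(years.size), years - years[0]])
beta, *_ = np.linalg.lstsq(X, anoms, rcond=None)
resid = anoms - X @ beta
sigma2 = resid @ resid / (years.size - 2)            # residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)                # covariance of the estimates
slope, slope_se = beta[1], np.sqrt(cov[1, 1])

print(f"modelled trend: {slope * 10:+.3f} +/- {2 * slope_se * 10:.3f} K/decade (2-sigma)")
```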
“The data just is” with no properties? What a strange concept of reality!
You are making sweeping generalizations about skeptics, Steven. Maybe you should say ‘some skeptics’ blah…blah…blah. That’s to differentiate yourself from apple and the rest of that mob.
“David Wojick | July 7, 2014 at 3:25 pm |
“The data just is” with no properties? What a strange concept of reality!
Yes, David, data doesn’t have trends.
The data is what it is.
You produce a trend by making an assumption and doing math.
Hmm, go read Briggs. Then come back.
No link for you; you have to do the work for yourself.
Hint: you have to choose a model to get a trend. The trend is not ‘in’ the data;
the trend is in the model you apply to the data.
“they will question the record when it fits their agenda
they will endorse the record when it fits their agenda.”
Yeah, that’s why I put “pause” in quotes, and refer to the “reported” temperature record.
A fair number of skeptics I have read doubt, as I do, that anyone knows what the global average temperature/heat content is with the accuracy and specificity claimed. Let alone knows past averages and can predict future temps with the same precision.
It is totally different to show the flaws in the reported averages (i.e. UHI, uniformity of adjustments, etc.). The argument “Even assuming you are right about A, you are clearly wrong about B” does not admit that you are correct on A.
“Your reported temperature trends are garbage, but even your own reports undermine your overall theory because they don’t show the warming you all uniformly predicted.” See how it works?
But of course, you know all that. You’re just being an obscurantist.
Steve Mosher,
Stop feigning indignation! When did marketing presentations get accepted without argument?
I enjoy you, but Zeke’s effort speaks for itself. A damn good effort so far (but you weaken his argument by being so over the top). The methods are worth discussing (some questions are fair and some are not), but what is new about that in climate discussion? As R.G.B. at Duke points out continually to everyone (you on several occasions), there are weaknesses in the physics and arguments on both sides. Deniers?
Business as Usual?
The 1988 projections for CAGW (science-is-settled talking heads) were pretty tough on everyone (even those much smarter than themselves; F. Dyson, etc.). Zeke is doing just fine without winning every point.
Don
Seriously, you should note that Zeke is answering every good question
with patience and good humor. He amazes me.
Me? I get to police skeptics who are out of line.
You could always do that;
you could be nice and gentle about it.
But quite frankly Zeke puts a lot of effort into this stuff. Normally at Lucia’s
there is 1 troll for every 10 good questioners. But here the ratio is reversed.
If me pounding on a few off-topic people bugs you, then pull them aside and do it yourself.
Steven, Zeke is doing fine. He is answering almost every question with plausible explanations. You are not helping him by echoing davey apple’s tarring of skeptics with a broad brush. Isn’t that an off topic distraction?
Nobody has answered my question on why the warmest month on record changes from July 1936 to July 2012 with great fanfare, then changes back to July 1936, without a peep from NOAA. Have their own algorithms stabbed them in the back while they are blissfully unaware?
https://www.google.com/search?q=july+2012+hottest+month+ever&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a&channel=sb
http://wattsupwiththat.com/2014/06/29/noaas-temperature-control-knob-for-the-past-the-present-and-maybe-the-future-july-1936-now-hottest-month-again/
I didn’t see any comment on Anthony’s post by you or Zeke. Would either of you care to comment on the unreported flip flop?
According to NOAA one state set a record high temperature in July 2012, while 14 states had their record high temperatures recorded in July 1936. Yet when homogenized and anomalized, July 2012 was declared the warmest month on record.
http://www.ncdc.noaa.gov/extremes/scec/records
David Appell:: New Rule: Anyone who doesn’t trust the temperature data, can’t use that data as evidence for the Pause.
Why? If the pause persists despite (possibly motivated) adjustments, does that not warrant greater credence in the pause?
Don Monfort wrote:
Speak for yourself, monfie.
Does the inverse of this rule also apply?
“Anyone who trusts the temperature data can’t deny this as evidence for the Pause.”
Tom Scharf wrote:
No, not quite. The temperature data by itself aren’t the evidence. You will have to provide some analysis of it, like demonstrating that there is a “Pause” based on some statistical metric. No?
I have an analysis for you, perlie. The pause is killing the cause.
And that is all fake skeptics have to offer.
I don’t have time to waste on pause deniers, perlie. That’s all you get.
I know, since you are actually not interested in the scientific question at hand. You are just an ideologue, like fake skeptics in general, who try to further their anti-science propaganda whatever their particular economic interest or political or religious motivation is for doing so.
Truth.
The anomaly data already represent an analysis
phatboy wrote:
So, tell me then. How do you derive the assertion about the alleged “pause” from the anomaly data themselves? How do you recognize the “pause”? You don’t need any trend analysis, any statistical metrics, nothing?
That’s right, perlie. We are all motivated by some combination of ideology, profit and religion. Very scientific. You are going to save the world with that crap.
Perlie hasn’t heard:
google.com/search?q=the+climate+pause&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a&channel=sb
A graph of the anomaly data is effectively a trend in itself. So you just need to use your eyeballs. Trying to get a trend of what is effectively a trend produces all sorts of wonderful results, as you would know from following the comments of certain individuals.
I can look at the temperature trend over the past century and state this trend is increasing over the past 100 years.
I can look at the same trend over the past 20 years and say this same trend is essentially flat.
Can you not bring yourself to do that? At all?
Arguments that 20 years are too short for this analysis, or that other forces are causing this phenomenon, are worth debate, but simply ignoring the trend slowdown (when it was supposed to be accelerating with BAU CO2) is not a very convincing argument.
Equivocating that the pause means something other than the flat temperature trend-line in the much monitored and accepted global trend(s) is moving the goalposts.
Don Monfort wrote:
Obviously, monfie thinks one can prove an assertion regarding a scientific question as true by being able to present a list of links from a Google search for a combination of keywords related to the question. He should try to get a paper published, applying such an approach.
I know one too:
https://www.google.com/search?client=ubuntu&channel=fs&q=alien%2Babduction%2Banal%2Bprobe&ie=utf-8&oe=utf-8#channel=fs&q=aliens%2Babduction%2Bprobe
The pause is a reality, perlie. We don’t have to show you no stinking trends. Everybody knows about it. Google it. Try to catch up, perlie. Stop being a nuisance.
Tom Scharf wrote:
I suspect, here you actually mean “positive” instead of “increasing”? The more important fact here is that the trend of the surface temperature over the last 100 years is not just positive (ca. 0.073-0.085 K/decade), it is also statistically significant with more than 13 standard deviations.
To do what? To state falsehoods? Why would I do that? The trend over the last 20 years is not flat. These are the trends (in K per decade) over the last 20 years for the various surface temperature data sets together with the 2-sigma intervals:
GISTEMP: 0.116+/-0.096
NOAA: 0.097+/-0.089
HadCRUT4: 0.101+/-0.094
Berkeley: 0.126+/-0.094 (ends in 2013)
HadCRUT4 krig v2: 0.143+/-0.099
HadCRUT4 hybrid v2: 0.15+/-0.109
(http://www.ysbl.york.ac.uk/~cowtan/applets/trend/trend.html)
All positive, and all even statistically significant with more than 2 standard deviations.
Who are supposed to be the ones who allegedly said that the trend for every 20-year period would always be larger than the previous one, moving forward year by year? Please provide a quote and proof of source.
The temperature trends over same-length time periods, e.g. 20 years, have a frequency distribution, too. The individual trends will lie around a median value. In about 50% of the cases they will be larger than the median value, and in the other cases they will be smaller (or about equal to the median value). The shorter the time interval, the wider the distribution (with sufficiently short time periods, zero or even negative trends will be part of the distribution also). No one has claimed that the trends will always only be increasing, just as no one has claimed that CO2 is the only factor influencing temperature variability. This is the next “skeptic” strawman often presented in this context, and also hinted at by you here.
The logical fallacy of “moving the goalposts” applies only if the one who is allegedly doing it had previously defined a normative criterion, which is then changed once it is fulfilled. Have I done that? Otherwise, your accusation that I am committing this logical fallacy is false.
Appell, if you say there is no pause, then it’s you who can’t use temp data to say temps rose.
Mosher, cluttering the thread with very many similar comments, says:
“David, you are expecting consistency from skeptics.”
While showing his own inconsistency by directing his criticism only to skeptics, as per his agenda, and ignoring Appell’s position that there has been no pause.
I beat on David Appell all the time.
Today is his lucky day.
It’s pretty simple: police your own team.
Yet in this thread you chose to not notice what he did.
Instead you chose to protect your investment.
Police yourself, Mosher.
I thought you said you don’t have a team, Mosher. Why police the team you aren’t on?
I don’t have one.
Who could you be talking at?
Wrong. The pattern of the global temperature indices is probably roughly correct, only the trend, especially the late 20th warming (the AGW period) may be exaggerated. Furthermore, as Don Monfort correctly says, pointing out that the record shows no warming is not necessarily endorsing the record.
Example:
http://www.woodfortrees.org/plot/hadcrut4gl/plot/hadcrut4gl/from:1950/detrend:0.4
Do we know what the warmest year on record for the U.S. is, today?
2012, in raw, TOBs-adjusted, homogenized, and homogenized + infilled data alike. Easy enough?
But not the hottest (Tmax) February, March, May, June, July, October, November or December.
I was just wondering if this had affected the story:
http://dailycaller.com/2014/06/30/noaa-quietly-reinstates-july-1936-as-the-hottest-month-on-record/
It just seems to me that if they can’t get a month right, they might not be so sure about the whole year.
It was really hot in the 1930s:
http://www.ncdc.noaa.gov/extremes/scec/records
2012 thanks to underestimated UHI.
Per Bush, the skeptics misunderestimate all this stuff.
Zeke:
First, I want to thank you for your posts – here and elsewhere. I always read them and learn something, and I really appreciate the time you are contributing.
I have a couple of questions.
1. On the issue of adjusting for MMTS versus LiG – I was not clear on whether the LiG (the older style) is being adjusted to conform to the MMTS or vice versa. Could you clarify?
Also, is one type of instrument more accurate than the other?
One would assume the MMTS is more accurate than the LiG (just because it is newer) – however I am just guessing that.
It would seem to make sense to adjust the less accurate to conform to the more accurate, but I just want to clarify which way the adjustment runs.
2. Time of Observation. This is probably a stupid question – but are the measurements being taken more than once per day? Moving the time of observation from afternoon to morning sounds like we are shifting the time we look at the temperature (like one time) – but that doesn’t make sense to me. I assume we want to capture the minimum and maximum temperature at each site daily – which would seem to require more measurements (hourly or even more frequent). So could you clarify that point?
In a perfect world – with automated stations, going forward, I would assume we would capture data fairly frequently. In 100 years, with data every minute (or 5 minutes or whatever), we would capture the min/max – is that where we are going?
3. As to the “changing the past” issue – that is deeply unsettling to me and I assume many others. What is the point of comparing current temperatures to past temperatures if the past changes daily?
How about doing it both ways and providing a second set of data files where they adjust the new relative to the old in addition to the old relative to the new. I would love to see how that would feel (see the data over time adjusted compared to the old) just to see the difference.
4. UHI adjustment. When you write your third post could you perhaps explain the philosophy of this adjustment. I don’t get it. From my point of view we pick a spot and decide to plop a station down. For years it is rural and we have one trend. Then over a decade or so, that spot goes urban and there is a huge warming trend, then once it is urban the trend settles back down and is what it is (just warmer than rural).
Why do we adjust for that? That station did get warmer during that decade – so what are we adjusting it to? Are we trying to forever make that station be adjusted to read rural even though it is now urban? Or change the rural past to read urban? I just don’t understand the reason for this adjustment if the instrument was accurately reading the temperature throughout its history.
What if something (like a hot spring forming or a caldera forming) were to change the reading – would we adjust for that also?
Anyway – thanks in advance for looking at my lay person questions and hopefully responding.
Rick
Rick,
NCDC makes a general assumption that current temperature readings are accurate. Any past breakpoints detected in the temperature record (e.g. due to an instrument change) are removed such that the record prior to the breakpoint is aligned with the record after the breakpoint. In this sense, MMTS instruments are assumed to be more accurate than liquid in glass thermometers for min/max temperature readings.
As far as TOBs go, both LiG and MMTS instruments collect a single maximum and minimum temperature since they were last reset. The issue with TOBs isn’t so much that you are reading the temperature at 10 AM vs. 4 PM, but rather that when you are reading the temperature at 4 PM you are looking at the max/min temps for 12 AM to 4 PM on the current day and 4 PM to 11:59 PM on the prior day. This doesn’t sound like much, but it actually has a notable impact when there is a large temperature shift (e.g. a cold front coming through) between days. I’m writing a follow-up post to look at TOBs in much more detail, but for the time being Vose et al 2003 might be instructive: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/vose-etal2003.pdf
TOBs isn’t relevant to modern instruments that record hourly temperatures, and certainly not to the new Climate Reference Network that records temperatures every 5 minutes or so.
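To illustrate the double-counting mechanism Zeke describes, here is a toy sketch using synthetic hourly temperatures (not USCRN data, and not the NCDC adjustment code). The same hourly series is summarized with a midnight reset and with a 5 pm reset; the afternoon reading time typically inflates the mean Tmax because a hot afternoon can set the maximum of two successive observation windows:

```python
# Toy illustration of the TOBs effect: min/max registers are reset when read,
# so an afternoon observation window straddles two calendar days and a hot
# afternoon can set the maximum of two successive windows. Synthetic hourly
# temperatures, not USCRN data and not the NCDC adjustment code.
import numpy as np

rng = np.random.default_rng(3)
n_days, hours = 365, np.arange(24)
daily_mean = 15 + 10 * np.sin(2 * np.pi * np.arange(n_days) / 365)
daily_mean += rng.normal(0, 3, n_days)                # day-to-day weather swings
diurnal = -5 * np.cos(2 * np.pi * (hours - 4) / 24)   # coolest ~4 am, warmest ~4 pm
hourly = (daily_mean[:, None] + diurnal[None, :]).ravel()

def mean_tmax(hourly, reset_hour):
    """Average Tmax when the max/min register is read and reset daily at reset_hour."""
    windows = np.roll(hourly, -reset_hour)[:-24].reshape(-1, 24)  # 24 h windows
    return windows.max(axis=1).mean()

print("mean Tmax, midnight reset:", round(mean_tmax(hourly, 0), 2))
print("mean Tmax, 5 pm reset    :", round(mean_tmax(hourly, 17), 2))
```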
For “changing the past”, either way results in identical temperature trends over time, which is what folks studying climate are mostly interested in. It’s not a bad idea to provide both sets of approaches, though it might prove confusing for folks.
UHI is fairly complicated. The way it’s generally detected is if one station (say, Reno NV) is warming much faster than its more rural neighboring stations, it gets identified as anomalous through neighbor comparisons and adjusted back down after it diverges too far from its neighbors. Menne et al 2009 has a good example of this further down in the paper: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/menne-etal2009.pdf
Our recent paper looked in more detail at the effect of pairwise homogenization on urban-rural differences. It found that homogenization effectively removed trend differences across four different definitions of urbanity, at least after 1930 or so, and did so even when we only used rural stations to homogenize (to reduce any chance of aliasing in an urban signal).
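As a rough illustration of the neighbor-comparison idea (not the actual NCDC pairwise algorithm, and with entirely synthetic data): subtracting the mean of nearby stations largely cancels the shared climate signal, so a localized step at one station stands out in the difference series and can be estimated and removed:

```python
# Rough sketch of neighbor comparison (synthetic data; not the NCDC pairwise
# homogenization algorithm). A localized step at one station shows up clearly
# in the target-minus-neighbors difference series and can be removed.
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1950, 2011)
regional = 0.01 * (years - 1950) + rng.normal(0, 0.15, years.size)   # shared climate
neighbors = np.stack([regional + rng.normal(0, 0.1, years.size) for _ in range(5)])
target = regional + rng.normal(0, 0.1, years.size)
target[years >= 1990] += 0.8            # made-up local artifact (move, UHI step, ...)

diff = target - neighbors.mean(axis=0)  # shared climate signal largely cancels

# Pick the year that maximizes the before/after shift in the difference series.
shifts = [abs(diff[:k].mean() - diff[k:].mean()) for k in range(5, diff.size - 5)]
k = int(np.argmax(shifts)) + 5
step = diff[k:].mean() - diff[:k].mean()

adjusted = target.copy()
adjusted[k:] -= step                    # align the later segment with its neighbors
print(f"detected break at {years[k]}, estimated local step {step:+.2f} C")
```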
Zeke:
Thanks for the answers.
So for UHI – it sounds like it only gets adjusted relative to its neighbors during the transition from rural to urban and then once fully urban, assuming its trend is similar to its neighbors no further adjustments would need to be made. Is that correct?
RickA,
Yes and no. If a switch from rural to urban introduces a step change relative to neighbors, that will be corrected. If an urban-located station has a higher trend than rural neighbors due to micro- or meso-scale changes, that will also generally be picked up and corrected. It’s not perfect, however, and some folks (like NASA GISS) add additional urban corrections. For the U.S., at least, it seems to do a reasonably good job at dealing with UHI.
Zeke, I don’t suppose that there were stations with overlapping max/min thermometer + LiG readings and then overlapping LiG and MMTS readings?
Doc,
This study should fit the bill: http://ams.confex.com/ams/pdfpapers/91613.pdf
Thanks Zeke, so even a correction for the LiG and MMTS is non-trivial; this is not a simple offset problem, as the two instruments give different Tmax/Tmin offsets in different months.
Conspiracy theorists wonder:
“Can anyone reach either rankexploits or http://ftp.ncdc.noaa.gov?”
I can’t.
I’d like to read this stuff….
Odd, both ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/ and http://rankexploits.com/musings/ seem to work fine for me.
???? rankexploits seems to have a hyperactive ip blocker…
I guess unless some other people can’t reach them we’ll just assume it’s my setup here…
I’ve had my IP blocked by Lucia’s blog a number of times.
She uses a blacklist to block IP addresses associated with malicious behavior. Unfortunately, those IP addresses often belong to ranges owned by ISPs who serve many customers. Since any customer can get any (dynamic) IP address within the ISP’s IP range for their area, people can often wind up using IP addresses which have previously been responsible for malicious behavior.
Blocking IPs isn’t so great. Tor can get you an IP anywhere in the world you would like, and anyone really up to something…
nickels, Lucia also blocks Tor connections.
ah, nifty. cleverer than your general ip blocker!
But not a very useful site for links since they are blocked…. :(
nickels, you can e-mail lucia. She’s pretty good about helping legitimate users access her site.
As for the other site, it may be a coincidence, but you provided the link as http://ftp.ncdc.noaa.gov. Zeke responded by saying ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/ works for him. As you’ll note, his link begins with ftp, not http. That’s because the link is to an FTP server. That may be why you are having trouble. (Of course, you could have used the right link but typed the wrong one here.)
I realize I should have been more careful typing that one:
ncftp -u anonymous 205.167.25.101
fails.
Must be something weird in my firewall. My main intent for the post was just in case something was down, in which case there would have been some chiming in. I do need to get the TOB papers, but I guess I’ll wait until that post comes out and email someone!!
nickels there are lots of sites that don’t trust or permit raw ftp. You might try a proxy server for that.
Lucia blocks anonymizers.
I’m not into conspiracy theory, but links that don’t work don’t help….???
I couldn’t reach it either, but the other one I could
If you contact Lucia by email and explain, she may unblock your IP address. She’s done it for me a couple of times when I have been overseas in ‘bad’ regions. She says: ” If you need to contact me: my name is lucia. The domain name is ‘rankexploits.com’. Stuff an @ in between and send me an email.”
Given the fragmented references I find to rankexploits, that would be a bit of a pain… but it’s a nice offer… if it’s a critical paper I’ll do it.
Say you have two piles of stations, a la Mosher’s comment, you compare the two, and they give similar results.
So, it could be that all the thermometers have no problem, and they compare well.
Or, it could be that many thermometers in both piles have similar problems, but they still compare well.
So, what degree of confidence does this sort of testing give me? Only that the results from the two piles are similar, not that the overall result from all of them is accurate or meaningful.
jim2, now that we have accurate hourly/daily records, we can predict what the time of observation bias (TOB) would be, if we were still recording temps the same way today as 50 years ago. Which means we can build a model for the TOB *just* off of the high quality CRN stations, if we want. Then we can apply that model to the old records, get the adjusted temps, and compare those temps to those of the gold-standard stations. They should match up.
This is a pretty good test, since the ‘piles’ are different. It’s *out*-of-sample testing, not in-sample testing.
There are other ways to test the TOB adjustments. One way the TOB shows up in the temperature record is with a fingerprint of reduced intradaily variability. It’s from max or min temperatures effectively being counted twice, and how often the double-counting occurs depends on the time of day that temperatures were recorded, as well as how quickly temperatures change from one day to the next.
Based on the modern hourly/daily records, we can say that if the temperature was recorded at, say, 4 pm every day, then we should have X number of double-recorded days. So look at the historical data for days recorded at 4pm. Do we see a number close to X? Yes.
Or we can turn it around. Can we just look at the X, and infer the time of day that the data was recorded? Also yes.
There may be other ways of checking the TOB adjustments that I’m missing. These are just a few off the top of my head and from reading some of the papers that Zeke linked.
Thanks, Zeke, your efforts are appreciated by at least some of us.
Zeke: Have there been any adjustments to the USHCN data based on USCRN observations?
Nope, while the full co-op network is used in the pairwise homogenization process, the USCRN network is not. However, from a CONUS-wide standpoint USCRN and USHCN have been identical since USCRN achieved U.S.-wide coverage in 2004/2005: http://rankexploits.com/musings/wp-content/uploads/2014/06/Screen-Shot-2014-06-05-at-1.25.23-PM.png
@ Nick Stokes
I followed your link to Watts 2011 post and found this comment, which is a pretty good summary of my opinion of climate data and the analysis thereof. I might add that the adjusting, infilling, correcting, kriging etc described by Zeke are superimposed on the basic problem described by Ms. Gray in her comment to Watts: We have no intrinsic method of separating signal and noise, even given pristine temperature records, and the existing records are anything BUT pristine. And can’t be made so.
“Pamela Gray says:
March 6, 2011 at 8:16 am
When I was searching for a signal in noisy data, I knew that I was causing it. The system was given a rapidly firing regular signal at particular frequencies. By mathematically removing random brain noise, I did indeed find the signal as it coursed through the auditory pathway and it carried with it the signature of that particular frequency. The input was artificial, and I knew what it would look like. It was not like finding a needle in a haystack, it was more like finding a neon-bright pebble I put in a haystack.
Warming and cooling signals in weather noise is not so easy to determine as to the cause. Does the climate hold onto natural warming events and dissipate it slowly? Does it do this in spurts or drips? Or is the warming caused by some artificial additive? Or both? It is like seed plots allowed to just seed themselves from whatever seed or weed blows onto the plot from nearby fields. If you get a nice crop, you will not be able to say much about it. If you get a poor crop, again, you won’t have much of a conclusion piece to your paper. And forget about statistics. You might indeed find some kind of a signal in noise, but I dare you to speak of it.
This is my issue with pronouncements of warming or cooling trends. Without fully understanding the weather pattern variation input system, we still have no insight into the theoretical cause of trends, be they natural or anthropogenic. We have only correlations, and those aren’t very good.
So just because someone is cleaning up the process, doesn’t mean that they can make pronouncements as to the cause of the trend they find. What goes in is weather temperature. The weather inputs may be various mixes of natural and anthropogenic signals and there is no way to comb it all out via the temperature data alone before putting it through the “machine”.
In essence, weather temperature is, by its nature, a mixed bag of garbage in. And you will surely get a mixed bag of garbage out.”
Zeke, thank you for an explanation of what is going behind the scenes. I’ll need time to digest your text. Meanwhile, one question is in my mind: Is the treatment of data that you describe a standard statistical technique? Can you estimate how many professional statisticians are involved?
Hi Curious,
David Brillinger was involved in the design of the Berkeley approach. Ian Jolliffe and Robert Lund are involved in the benchmarking process for homogenization through the International Surface Temperature Initiative. I’m sure there are a few more folks that are “professional statisticians”; I know a number of the scientists have degrees in mathematics, but aren’t professional statisticians.
Zeke – I could not find a BEST paper listing Brillinger among authors. Can you please help?
Here is Anthony Watts celebrating the role and demeanour of Prof Brillinger.
Oops, I had an unfortunate typo in the article. When I said “There are also significant positive minimum temperature biases from urban heat islands that add a trend bias up to 0.2 C nationwide to raw readings”, I should have said “There are also significant positive minimum temperature biases from urban heat islands, with urban stations warming up to 0.2 C faster than rural stations”. The two are not the same, as not all the stations in the network are urban.
Climate Etc readers are invited to verify for themselves that auditors require a 180 page code of ethics to even *BEGIN* to grapple with ‘adjustment practices’ of the financial world that are *ACCEPTED* and *LEGAL*.
In a nutshell, nowhere in business or finance or insurance do we *EVER* encounter numbers that are “unadjusted.”
Conclusion: Skilled climate auditors like Zeke Hausfather and Steven Mosher — and team efforts like Berkeley Earth (BEST) and the International Surface Temperature Initiative (ISTI) — deserve all of our appreciation, respect, and thanks … for showing us a world whose warming is real, serious, and accelerating.
Of course, there are *PLENTY* of Climate Etc conspiracy theorists and/or astroturfers who *INSIST* that Zeke and Steve and BEST and ISTI are one-and-all agents of a vast conspiracy.
Of course, no amount of reason and evidence *EVER* convinces a conspiracy theorist, eh Climate Etc readers?
But what Zeke and Steve and BEST and ISTI are showing us *is* enough to convince the next generation of young scientists. And in the long run, that’s what matters, eh?
Good on `yah, Zeke and Steve and BEST and ISTI!
Real, yes, to some degree debated here concerning USHCN. Partly unreal owing to homogenization, also debated here with respect to the quality thereof. That’s what happens when the world emerges from an LIA caused by natural variation.
Serious depends on other context, not debated here.
Accelerating, no. That darned pause again, even showing up in BEST.
FAN, ISTI is really cool.
It is everything we asked for after Climategate.
Even more cool is that they have 2000 stations that we don’t have.
So,
our approach makes a prediction about what “would have been recorded” in every location where we had no data.
Now, thanks to data recovery, ISTI has additional sources,
sources that we did not use in constructing our prediction.
Do you think skeptics will make their own predictions about what this out of sample data says?
I think not.
“In a nutshell, nowhere in business or finance or insurance do we *EVER* encounter numbers that are ‘unadjusted.’”
Yes, ENRON adjusted its financial numbers, and so does Berkshire Hathaway.
Saying everybody adjusts numbers tells you precisely nothing about how accurate the adjustments are.
The primary problem skeptics have in temperature trends is that virtually every reported adjustment in trends results in lower figures for the past, and warmer figures for the present. The most famous “adjustment” being the hokey stick. (Yes that’s paleo, not temp measurements, but the principle seems to work the same in both.)
As an industry, the CAGW consensus is always “discovering” that “it’s worse than we thought,” including in temperature reports. And the apparent total lack of any skeptic involved in generating these adjustments just makes that less acceptable as mere coincidence.
But the alternatives are not an evil conspiracy of BEST, NOAA, et al, and pure, pristine, accurate, precise temperature trends. Confirmation bias, faulty shared assumptions, shared over confidence in the raw data and the accuracy of the adjustments are more likely to cause bad results than any conspiracy.
For example, I don’t see Mosher as being willing to engage in any conspiracy even if there were one. But I also know that he has tied his entire sense of self to defending climate models and temperature reports. He has spent years ridiculing those who disagreed with him or questioned the results he defends. So I simply do not see him as a credible check on his fellow tribesmen.
So challenge those biases and assumptions. Point out flaws in the methodology. Improve it!
This is how science progresses. Get educated on the problem, then make it better. Don’t just sit around wringing your hands and talking about potential biases.
I was unconvinced, so I got educated on the subject. I read the literature, checked the data, checked the calculations, and now I’m pretty satisfied with the adjustments. But that takes work, and most people aren’t going to bother doing it. It’s far easier to just be suspicious than it is to do your DD.
No one’s saying that the one-sidedness of the adjustments is the result of coincidence. They’re the result of how we recorded data in the past and how we record it now, and the well-documented biases that result.
“Mosher
Do you think skeptics will make their own predictions about what this out of sample data says?
I think not.”
I expect to see waves of heat crashing into the Western and Eastern seaboards, matching the Atlantic and Pacific ocean warming/cooling cycles.
You can go to the McDonalds website and enter a Zip code and it will give you the nearest 5 McDonalds and the distance.
My guess is that if you prepare a McDonald index, Dist 1/1 + Dist 2/2 +Dist 3/3, you will find that the areas with the highest McDonalds index have the greatest level of warming.
“The primary problem skeptics have in temperature trends is that virtually every reported adjustment in trends results in lower figures for the past, and warmer figures for the present.”
Yes, exactly as they should.
For example: when you change instruments from type A to type B,
you can expect there to be a bias. The bias will be up or the bias will
be down. If the bias is zero, well then that’s no bias. Duh.
So the change to MMTS caused a bias.
How much?
What direction?
Easy: test them side by side.
Yup, that science was done.
Read it for a change of pace.
A fan of *MORE* discourse: Of course, there are *PLENTY* of Climate Etc conspiracy theorists and/or astroturfers who *INSIST* that Zeke and Steve and BEST and ISTI are one-and-all agents of a vast conspiracy.
Plenty?
Fan once again brings the AICPA ethics links. Keep those coming! Many adjustments are of the kind, No you’re not worth quite as much as you think, and No you didn’t make quite as much as you thought. Some of these are timing differences. If the client is asked to show a bit less income in the current period, generally at least some of that income will simply be pushed into the following time period though this is an extremely simplified example and each client has a unique situation. This conservative approach has served them well for a long time.
Absence of correlation = absence of causation.
There is no correlation between planetary climate (Earth’s paleoclimate, or Venus) and CO2 concentration. Your theory (and your models) may say that there “should” be warming, but the real world says it ain’t happening.
The hypothesis “CO2 causes warming” is falsified by this lack of correlation (except in reverse — warming driving increased CO2). This is why I rule in favor of those -protesting- data diddling — no matter how noble the purposes or intentions of the data-diddlers.
Data-diddling to try to show that CO2 causes warming is AT BEST some true believer trying to salvage his or her career claims with fancy hand waving. (“At worst” is left as an exercise for the reader.)
A scientist worthy of the name says, “Oh, look at that, the hypothesis was wrong” and moves on.
mellyrn,
The temperature record should stand on its own, regardless of any imputed effects from CO2 or anything else. It’s a non-sequitur to say that the adjustments are wrong because scientists are trying to show that CO2 causes warming.
You should either find a legitimate problem with the adjustments, or you should accept them.. but your acceptance of the temperature data should not be based on what you think about CO2.
Just focus on the data. That’s how science is done.
Off topic.
This is about adjustments to the temperature record.
People who don’t want to understand change the topic.
“TOBs adjustments affect minimum and maximum temperatures similarly, and are responsible for slightly more than half the magnitude of total adjustments to USHCN data.”
I’m just a novice at this stuff, but how is this possible?
If you take a reading at 5 PM, I can understand how a hot day might be double counted, and thus influence the average Tmax for the month. But how would the Tmin for the month possibly be affected?
If you take a reading at 7 AM, I can understand how a cool morning might be double counted, and thus influence the average Tmin for the month. But how could Tmax for the month possibly be affected?
For a station that switched observation time from late afternoon to morning, there should be a TOBS adjustment to reduce the Tmax prior to the switch, and a TOBS adjustment to raise the Tmin after the switch. Once a station is reading at 7 AM, there should be NO additional TOBS adjustment applied to Tmax. Likewise, there should be NO TOBS adjustments applied to Tmin prior to the switch.
Yep, that’s right. We used to record temps in the afternoon back in the ’30s, and later that was changed to the morning. So the raw data had a hot bias in the past, and a cold bias now.
“TOBs adjustments affect minimum and maximum temperatures similarly”
I’d wager this means that the hot bias from measuring near the hottest part of the day is about as big as the cold bias from measuring near the coldest part of the day. Same magnitude, opposite sign of bias.
It seems that a much simpler and more logical way to estimate the trend over time would be to track the change in Tmin temps prior to the switch, then the Tmax temps after the switch, where NO ADJUSTMENT would be necessary.
Why pollute the dataset by using averages that require adding in temps that are clearly biased by the time readings are being taken?
write it up KTM
get it published
be a hero
How is this possible?
1. Read the posts on the skeptical site run by John Daly. It’s explained.
2. Read the posts on CA. It’s explained.
3. Read the papers Zeke linked to. It’s explained.
4. Wait for the second post in the series, where it will be demonstrated for the umpteenth time.
I guess my main critique is how the data is being presented. According to the graph, the Tmax TOBS adjustment was near zero in the past, and is currently near +0.2C. This makes no sense, Tmax TOBS adjustments should be large in the past and near zero today.
I think it would be much more informative and accurate to show what the actual TOBS adjustments are for Tmin and Tmax over time. The two curves would not overlap, since they are being applied very differently over time.
I also question the logic behind making all these adjustments, since it is possible that even at a midnight reading you could get double-counting of cold temps on two consecutive days. Why set the standard for USHCN at midnight when the vast majority of observations are being made at other times?
Also, where are the error bars for these graphs?
“This makes no sense,”
it does make sense.
read harder.
Now is the time to repost this:
“A C Osborn | July 2, 2014 at 2:34 pm | Reply
You jest, BEST Summaries show Swansea on the South West Coast of Wales in the UK a half a degree C WARMER than LONDON.
Now anybody living in the UK knows that is not correct due to location and UHI in London.
It also shows Identical Upward Trends for both areas of over 1.0C since 1975, obviously BEST doesn’t know that the west coast Weather is controlled by the Ocean and London by European weather systems.
So what does the Met office say about the comparison, well they show that on average Swansea is 0.6 degrees COOLER than London.
So who do you believe, The people who live in the UK and the Met Office or BEST who have changed the values by 1.1 degrees C?”
The values are not changed.
You are looking at an expected value, not data.
Next, this post is about NOAA.
Stay on topic.
According to the Icelandic WXmen, the adjusted average temperature in Reykjavik 1940 was 5°C. According to GISS, it was 3°C.
Hint. GISS is not NCDC.
the topic is NCDC, or NOAA take your pick
I never look at the numbers from GISS, and I do not read the (very long) posts by Mr. Hausfather.
Alexei
If you do not read the information, I hope you will not complain if they show something that you do not agree with?
Zeke has gone to a lot of trouble to post information; the least denizens can do is read it.
Tonyb
On NASA-GISS I would refer to Astronauts Schmitt and Cunningham.
On the “lot of trouble” I agree, but my experience is this: If you really, really understand something, you can explain it in one paragraph.
Alexei Buergin: If you really, really understand something, you can explain it in one paragraph.
You can be terse, clear, accurate, and complete, but generally not more than 2 at a time. Zeke Hausfather achieved an excellent balance: not too long, real clear, accurate, and with links to more complete details.
No. You can really understand something but be unable to describe it adequately because of poor communications skills.
Equally, you can be a good communicator but with poor understanding of your subject.
another example of a denizen who does not want to understand.
Actually, I would like to understand how anybody could get the results mentioned (Reykjavik, Swansea/London). But nobody wants to (or can) explain that, and they are obviously wrong.
Huh? I explained.
Go read harder.
Your “explanations”:
Reykjavik: “GISS is not NCDC.”
We agree that GISS is producing Dreck?
Swansea/London: “expected value, not data”.
If by “expected value” you mean the sum of T(i)*p(i), that should not change the fact that Swansea is cooler than London (and the ridiculously named BEST is nonsense here).
“Swansea/London: “expected value, not data”.
If by “expected value” you mean the sum of T(i)*p(i), that should not change the fact that Swansea is cooler than London ”
No.
There is no changing of the fact.
We create a model to estimate the temperature WHERE IT WASN’T MEASURED. To do that we create a model:
T = C + W + e
The climate of a place is estimated via regression as a function of latitude, altitude, and time of year (season).
The raw data is used to create this surface.
This surface is subtracted from the raw data to create a residual;
the residual is W, the weather.
Now, since the model is simple (lat, alt and season), the residual WILL contain some structure that is not weather but is actually climate.
These cases can be handled two ways:
A) increase the terms in the regression — like coastal/non-coastal;
B) keep a simple regression, because these cases are small in number
and zero-biased.
We do B. That means you will find a small number of cases
where the expected value of the model deviates from the raw data.
This happens in places where the climate is NOT dominated by latitude, altitude and season: for example, places where coastal/seasonal effects dominate.
To test this we add a variable for coastal to the regression.
Yes, we see local changes, BUT the R^2 stays the same; no additional variance is explained. So adding it to the model doesn’t change the overall performance of the estimate.
We have a couple of ideas for how to squeeze some more explanatory power out of the regression, but we would only be fiddling with local detail and not the global answer.
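A hedged sketch of this decomposition with synthetic stations may help. The regression below treats “climate” as a linear function of latitude and elevation plus a seasonal harmonic, and calls the residual “weather”; the lapse rate, latitude gradient and noise are invented for illustration, and this is not the Berkeley Earth code, which uses kriging and a much richer model:

```python
# Hedged sketch of the T = C + W decomposition with synthetic stations. The
# "climate" C is regressed on latitude, elevation and a seasonal harmonic;
# the residual is treated as "weather" W. All numbers are invented.
import numpy as np

rng = np.random.default_rng(5)
n_obs = 5000
lat = rng.uniform(30, 50, n_obs)                       # degrees north
elev = rng.uniform(0, 2500, n_obs)                     # metres
month = rng.integers(1, 13, n_obs)
temp = (30 - 0.6 * lat - 0.0065 * elev                 # cooler poleward and aloft
        + 10 * np.cos(2 * np.pi * (month - 7) / 12)    # seasonal cycle, peak in July
        + rng.normal(0, 2, n_obs))                     # "weather" noise

# Climate surface C(lat, elev, season): linear in lat/elev, harmonic in month.
X = np.column_stack([np.ones(n_obs), lat, elev,
                     np.cos(2 * np.pi * month / 12), np.sin(2 * np.pi * month / 12)])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
climate = X @ beta
weather = temp - climate                               # residual = local "weather"

r2 = 1 - weather.var() / temp.var()
print(f"variance explained by latitude + elevation + season: {r2:.1%}")
```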
Speaking of adjusters, does Gavin Schmidt still believe that the MWP did not really exist…?
As a global phenomena happening all over the world at the same time?
In terms of global phenomena, rather than looking at regions (which have always cooled and warmed during global warming or cooling trends), the metric used should be rising sea levels, which have been occurring throughout our current interglacial period (roughly 10,000 years).
So one could compare the rate of sea level rise in the MWP, the LIA, and the current period in which we are recovering from the Little Ice Age: the time period after 1850.
Claimsguy
Is the modern warming period happening synchronously everywhere in the world?
Tonyb
“Before the most recent Ice Age, sea level was about 4 – 6 meters (13 – 20 feet) higher than at present. Then, during the Ice Age, sea level dropped 120 meters (395 ft) as water evaporated from the oceans precipitated out onto the great land-based ice sheets. The former ocean water remained frozen in those ice sheets during the Ice Age, but began being released 12,000 – 15,000 years ago as the Ice Age ended and the climate warmed. Sea level increased about 115 meters over a several thousand year period, rising 40 mm/year (1.6″/yr) during one 500-year pulse of melting 14,600 years ago. The rate of sea level rise slowed to 11 mm/year (0.43″/yr) during the period 7,000 – 14,000 years ago (Bard et al., 1996), then further slowed to 0.5 mm/yr 6,000 – 3,000 years ago. About 2,000 – 3,000 years ago, the sea level stopped rising, and remained fairly steady until the late 1700s (IPCC 2007). One exception to this occurred during the Medieval Warm Period of 1100 – 1200 A.D., when warm conditions similar to today’s climate caused the sea level to rise 5 – 8″ (12 – 21 cm) higher than present (Grinsted et al., 2008). This was probably the highest the sea has been since the beginning of the Ice Age, 110,000 years ago. There is a fair bit of uncertainty in all these estimates, since we don’t have direct measurements of the sea level.”
http://www.wunderground.com/blog/JeffMasters/sea-level-rise-what-has-happened-so-far
Changing the subject.
Doesn’t want to understand.
Wagathon, your personal endorsement of the Harold Faulkner/Save America Foundation climate-change worldview and the novel economic theories of its associated Asset Preservation Institute are enthusiastically supported by the world’s carbon-asset oligarchs and billionaires.
That’s how it comes about that *EVERYONE* appreciates the focus of your unflagging efforts, wagathon!
It should be noted that there are at least two uses of the word “data”:
– information output by a sensing device or organ that includes both useful and irrelevant or redundant information and must be processed to be meaningful.
– information in numerical form that can be digitally transmitted or processed.
The raw temperature measurements, along with instrument quality information and locations, are data in the first and second senses. Adjusted temperatures and anomalies are data only in the second sense.
To adjust historic instrument readings seems sloppy measurement practice and will produce poor scientific thinking — eg, the adjusted results are estimates of the temperature record, not the record itself. These estimates should be reported with error estimates that had better span the measurements too.
Sure, but it’s rather useless data by itself, sans adjustment. Even a spatial average of temperature is some sort of “adjustment”; to get to the national temperature chart we have to start applying math. And once you start using math, it’s math all the way down. ;-)
Basically, you can’t take something like a 7am temperature reading in New Jersey and two 4pm temperature readings in Illinois and build a national temperature out of them. You have to adjust for spatial weighting of the records, for the time of day that the temperature was observed, etc. Otherwise it’s an apples-and-oranges comparison of data.
It’d be far worse to not adjust them.
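To make the "math all the way down" point concrete, here is a minimal sketch in Python (illustrative only, not NCDC's or Berkeley Earth's actual code) of the most basic step: binning stations into grid cells and weighting the cells by area so that a cluster of nearby stations does not count more than a lone station elsewhere. The coordinates and readings are invented.

# Minimal sketch: why raw readings can't simply be averaged into a "national temperature".
# Stations are binned into lat/lon grid cells, each cell is averaged, and cells are
# weighted by cos(latitude) so dense clusters of stations don't dominate the result.
# Station coordinates and values below are invented for illustration only.
import math
from collections import defaultdict

stations = [  # (lat, lon, temp_C) -- hypothetical readings
    (40.7, -74.0, 11.2), (40.8, -74.1, 11.5),   # two nearby stations
    (41.9, -87.6, 9.8),                          # one distant station
]

def gridded_average(stations, cell_deg=5.0):
    cells = defaultdict(list)
    for lat, lon, t in stations:
        key = (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        cells[key].append((lat, t))
    num = den = 0.0
    for readings in cells.values():
        mean_lat = sum(lat for lat, _ in readings) / len(readings)
        cell_mean = sum(t for _, t in readings) / len(readings)
        w = math.cos(math.radians(mean_lat))   # area weight for the cell
        num += w * cell_mean
        den += w
    return num / den

print(round(gridded_average(stations), 2))

A naive mean of the three readings would count the two clustered stations twice as heavily as the lone one; the gridding step is what prevents that, before any of the homogenization adjustments even enter the picture.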
But “estimate” vs “record”? Not really relevant. Any record is itself just an estimate, as no data-recording equipment is completely perfect. We aim for good enough, not perfect. We don’t need to be measuring temperature to millionths of a degree and every few milliseconds in order to get pretty solid data about the temperature.
So there’s not much point in being pedantic about whether something is an “estimate”. It’s all estimates. The question is always “how good are they?” And here, they’re pretty good.
Your reply “all are estimates” shows that you could use several courses in experimental physics.
Of course measurements are estimates; now who's being pedantic? The point missed is that measurements occupy a unique position in physics reasoning. Measurements should not be changed, but that doesn't mean they must all be treated alike in estimating the past. It means only that your estimate and estimate error should reconcile with the measurements they estimate.
"Measurements should not be changed, but that doesn't mean they must all be treated alike in estimating the past. It means only that your estimate and estimate error should reconcile with the measurements they estimate."
I agree. And as far as I can tell, that’s being done. They identify the errors, they derive the adjustments, and they test the adjustments, giving a range on the errors for the adjusted data.
I appreciate the BEST work, which generally includes error bars on their temperature charts. I’d love to see that done more consistently by the other groups, though, and not just see the adjustment error estimates left in the literature.
But I don’t think it makes a lot of difference for the big picture. The errors in the adjustments are relatively small.
Phillip.
You do realize that many of the raw records are not information output by a sensing device.
Prior to the automation of reporting, a human walked out to a thermometer, looked, rounded, and wrote a number down.
And none of the reports actually report the physical property of temperature.
Mosh,
I think you must be confused about what a measurement is, unless you think an old physics professor of mine at Ga Tech was wrong to teach measuring the length of objects using the human eyeball and a meter stick, recording rounded values with estimates of error. Humans can be and were part of the sensing and recording of measurements, and these measurements are what you have, so work with them.
As to whether "the reports actually report the physical property of temperature," I have no idea what you mean. Is it that thermometers don't really measure the same property that today's devices do? If so, we need a whole new discussion.
No, Phillip, I was just trying to make sure you actually understand what the records are.
As for what they measure:
tell me how an LiG thermometer works.
“If one station is warming rapidly over a period of a decade a few kilometers from a number of stations that are cooling over the same period, the warming station is likely responding to localized effects (instrument changes, station moves, microsite changes, etc.) rather than a real climate signal.”
Until the 1970s, there were fewer than 1000 stations in the US according to this NOAA chart, and less than 2000 until about 2005. (I don’t know if all of these were used in generating the USHCN data sets, but if there were fewer, my following questions remain).
http://www.ncdc.noaa.gov/oa/climate/isd/caption.php?fig=station-chart
In how many locations are there “a number of stations” “a few kilometers” from one another?
There are approximately 9.6 million square kilometers in the continental US. (If Alaska is included in the network, the number obviously goes up.)
By my rudimentary math, with 1000 stations, that's 9,600 square kilometers per station. (Whew, I need a nap.) With 2000 stations (here the math gets hard), that's 4,800 square kilometers per station.
If you have one station "within a few kilometers" of several other stations, and being generous and defining "a few" as four, then you have three or more stations within a 16 square kilometer area.
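For what it's worth, that back-of-the-envelope arithmetic is easy to check with a few lines of Python, using the same assumed area and station counts as above (these are the numbers in this comment, not official NCDC figures):

# Back-of-the-envelope: area per station and the implied spacing if stations
# were spread evenly, using the numbers assumed in the comment above.
CONUS_AREA_KM2 = 9.6e6   # the figure used in this thread

for n_stations in (1000, 2000):
    area_per_station = CONUS_AREA_KM2 / n_stations       # square kilometers per station
    implied_spacing = area_per_station ** 0.5             # km between stations if evenly spread
    print(f"{n_stations} stations: {area_per_station:,.0f} km^2 each, ~{implied_spacing:.0f} km apart")

Spread evenly, 1000 stations would sit roughly 98 km apart and 2000 roughly 69 km apart, which is what the questions below are getting at.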
Now I can see how that would happen in the real world; you want measurements where the people are. But it raises a couple of questions that may well have been answered, but I have not seen the answers and am curious.
Here in the Chicago area, a few kilometers can make a real difference in temperature regardless of time of day. And the differences are not uniform – Skokie is not always warmer than O’Hare; the Chicago lakefront is not always cooler than Schaumburg.
So:
Question 1: Are the numbers above correct, or even close, as far as area covered per station?
Question 2: Don’t urban stations require more and broader adjustments, not just for UHI, but in general?
Question 3: Are urban stations weighted differently because of their proportionally greater number in determining trends?
Question 4: Are stations not “within a few kilometers” of several others, ever similarly adjusted, and if so, how?
“Until the 1970s, there were fewer than 1000 stations in the US according to this NOAA chart, and less than 2000 until about 2005. ”
WRONG.
Those are just ISD stations.
The entire population of stations is substantially larger.
If you seek understanding do not pull random charts from the internet.
Go to sources.
All the sources.
At least Zeke Hausfather mentioned that the larger number of stations is used for homogenization, rather than your obscurantist tack of implying they are included in the average.
His figure 1 in the main post referenced "Global (left) and CONUS (right) homogenized and raw data from NCDC and Berkeley Earth." That is why I sought the number of NCDC stations.
I missed this reference in the post: “A subset of the 7,000 or so co-op stations are part of the U.S. Historical Climatological Network (USHCN), and are used to create the official estimate of U.S. temperatures.”
But while the number of stations changes the math, it does not answer the underlying question. Whether 2,000, 7,000 or 10,000, I do not see how all the stations, as he says elsewhere in this thread, have several others within “a couple kilometers” of them.
The first sentence should have been deleted, poor editing. I saw that the reported average does include 7,000 stations.
“At least Zeke Hausfather mentioned that the larger number of stations is used for homogenization, rather than your obscurantist tack of implying they are included in the average.”
I implied no such thing.
In a discussion about USHCN, you linked to an unverified chart of a different dataset entirely.
Obfuscator.
Mosher,
That point I caught myself, as I noted in my second comment. I just failed to delete the snark before posting. My bad.
But of the 4 questions I asked, Zeke Hausfather half answered one and neither of you addressed the other 3. Which is fine. No one is under any obligation to respond. But I read this thread as an attempt to address the concerns skeptics have regarding reported temps. An admirable goal. Sort of like Gavin Schmidt agreeing to answer all questions at Keith Kloor’s…once.
But no answers are of course required.
I am guessing his claim that each of the stations is “within a couple kilometers” was just a bit of hyperbole. I just don’t see that sort of coverage given the numbers.
"Here in the Chicago area, a few kilometers can make a real difference in temperature regardless of time of day."
Heck, you can get big changes in temperature over just a few hundred feet, if the elevation change is big enough. I grew up at the base of a hill in Florida, and the top of the hill was consistently warmer than the bottom.
But the real question is the temperature anomaly. Does the temperature at the top and the bottom of the hill change in sync? Yeah, pretty well. The correlation between them is pretty high.
And that holds across most of the country. Temperature stations that are a few hundred miles apart still have very well-correlated anomalies, though I expect things like lakes and mountain ranges may tend to interfere with this.
Also, in searching for data on this, I found this past post from Zeke:
http://rankexploits.com/musings/2013/correlations-of-anomalies-over-distance/
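As a toy illustration of what "well-correlated anomalies" means, here is a short sketch with made-up annual means for two nearby sites (the linked post does this properly with real station pairs):

# Toy illustration of correlated anomalies at two nearby sites: the absolute
# temperatures differ (say, top vs. bottom of a hill), but the year-to-year
# departures from each site's own mean track each other closely.
# All values are invented for illustration.
import statistics

top    = [14.1, 14.6, 13.9, 14.8, 15.0, 14.3]   # hypothetical annual means, deg C
bottom = [12.0, 12.4, 11.8, 12.7, 12.8, 12.2]

def anomalies(series):
    base = statistics.mean(series)
    return [t - base for t in series]

a, b = anomalies(top), anomalies(bottom)
r = statistics.correlation(a, b)   # Pearson correlation (Python 3.10+)
print(f"correlation of anomalies: {r:.2f}")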
Windchaser,
I am not sure using anomalies as proxies for temperature simplifies the matter. I understand they give results more in line with what the consensus measuring them expects, but I think the prospect of determining the actual average temperature in one given location is more complex than plotting anomalies.
In prior blog threads some time ago I asked whether there was any experimentation to determine the accuracy and precision of anomalies as a proxy for temperature. Did anyone ever take actual hourly temperature readings at a range of sites over a period of time, and compare them to the average inferred from the anomalies? How do you know how accurate the long-term temp trend against which you are calculating the anomaly is?
At any rate, my questions are not about what is the best way to determine temperature trends. My questions are about whether any of the methods give the accuracy and precision claimed by those reporting them.
GaryM:
We don’t use anomalies as a proxy for temperature. Rather, we use the anomalies to show how the temperature has changed.
It’s actually somewhat difficult to define the average temperature of a region, because of things like the changes in temperature with elevation over even short distances. But it’s a bit easier to define the average anomaly, and besides, this shows us what we’re concerned with – how the temperature changes over time.
Whoa, anomalies aren’t calculated against long-term trends, but against a baseline temperature.
If you subtract some temperature X from the temperature record, you get the anomaly: the temperature relative to some baseline temperature X. If you subtract the linear trend, though, you get something else entirely: the detrended data, which shows you how the temperature diverges from the trend. It's not really that useful in comparison.
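A short sketch of the distinction being drawn here, with invented numbers: subtracting a fixed baseline gives the anomaly, while subtracting a fitted linear trend gives detrended data.

# Anomaly = temperature minus a fixed baseline (e.g., a reference-period mean);
# detrended = temperature minus a fitted linear trend. Different operations,
# different uses. Values below are invented for illustration.
import statistics

years = list(range(2000, 2010))
temps = [10.1, 10.3, 10.2, 10.5, 10.4, 10.6, 10.8, 10.7, 10.9, 11.0]

baseline = statistics.mean(temps[:5])              # stand-in for a reference-period mean
anomaly = [t - baseline for t in temps]

slope, intercept = statistics.linear_regression(years, temps)   # Python 3.10+
detrended = [t - (slope * y + intercept) for y, t in zip(years, temps)]

print("anomalies :", [round(a, 2) for a in anomaly])    # still shows the warming
print("detrended :", [round(d, 2) for d in detrended])  # trend removed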
Windchasers,
The results are reported as "average temperature" according to figure 1 in the main post. The plot shows a trend, but it is a trend of temperatures.
As for what is used to determine an anomaly, I know Mosher hates it when people link to those dang internet sites, but:
“The term temperature anomaly means a departure from a reference value or long-term average.”
http://www.ncdc.noaa.gov/monitoring-references/faq/anomalies.php
The underlying question is not whether anomalies are consistent over large distances, but whether average temperatures are, because that is what is being sold to the public as the basis for public policy. That is why I refer to anomalies as a proxy for average temperatures, and why I ask if there is any research confirming their accuracy and precision as proxies.
I have read the arguments behind their use, but I have not seen any testing to verify them. Not saying there isn’t any, just that I haven’t seen it. (And I don’t mean statistical comparisons to model generated data, I mean comparisons to actual temp measurements.)
Aye. It's the spatial average, and it shows a temporal trend, with temporal anomalies. (Note the y-axis label.)
Aye. So you get the anomaly by subtracting a reference value. I just want to distinguish that from subtracting the trend.
The temporal averages definitely aren’t consistent over long distances. The average yearly temperature in Winnipeg is pretty different from the average yearly temperature in Miami.
The spatial averages? Well, they’re spatial averages, so it doesn’t make sense to talk about how they vary in space. The number is derived for an entire region. The average US temperature is the same no matter where you go. You could be in Moscow, and the average US temperature would still be the same.
I feel like I’m missing your point. The anomalies aren’t proxies for temperature in the same way that, say, the tree ring data is. The anomalies are just the temperature data with some number subtracted from the entire temporal series. Calculating the anomaly just shifts the entire temperature chart up or down, and doesn’t change how the temperature changes with time.
Gary.
Are your questions about NOAA or BEST?
If you can be specific, then Zeke or I can answer or get an answer.
Steven,
Either one. I would be interested in the answers as to any data set.
I will answer on BEST
Question 1: Are the numbers above correct, or even close, as far as area covered per station?
Area "covered" by a station varies widely across the surface of the earth.
In some places the stations are dense (say, on average 20 km apart); in other places (the South Pole) they are sparsely sampled.
Question 2: Don’t urban stations require more and broader adjustments, not just for UHI, but in general?
The UHI effect (on average) is much smaller than people imagine.
Part of the reason is that the media and the literature have focused on UHI max rather than UHI mean.
In terms of adjustments, I haven't looked at the number of adjustments for urban versus rural. More generally, I just eliminate all urban stations and look for a difference.
Question 3: Are urban stations weighted differently because of their proportionally greater number in determining trends?
"Urban stations" is a misnomer. There isn't a clear or validated way of categorizing urban versus rural. Several methods have been tried.
Rather than a categorical scale, I prefer a continuous scale.
For example, rather than saying, as Hansen does, that urban = population greater than X, whereas rural = population less than X, it makes more sense to just use population as a continuous variable.
So there isn't any specific weighting applied on the basis of "urbanity".
What we did was A/B testing: two piles, one urban, the other rural.
No difference.
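For readers who want the gist of that kind of A/B comparison, here is a toy sketch (not Berkeley Earth's actual procedure; the trends and urban flags are invented for illustration):

# Toy A/B comparison: split stations into two piles (urban vs. rural) and
# compare the mean temperature trend of each pile. All numbers are invented.
import statistics

stations = [                      # (trend in deg C per decade, is_urban)
    (0.21, True), (0.19, True), (0.23, True),
    (0.20, False), (0.22, False), (0.18, False), (0.21, False),
]

urban = [t for t, u in stations if u]
rural = [t for t, u in stations if not u]

print(f"urban mean trend: {statistics.mean(urban):.3f} C/decade")
print(f"rural mean trend: {statistics.mean(rural):.3f} C/decade")
print(f"difference      : {statistics.mean(urban) - statistics.mean(rural):.3f} C/decade")

In practice the comparison also needs a defensible urban/rural split, which is exactly the problem the continuous-population point above is getting at.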
Question 4: Are stations not “within a few kilometers” of several others, ever similarly adjusted, and if so, how?
There isn't an adjustment.
GaryM,
To answer some of your questions, the homogenization process uses the full co-op network (~8,000 total stations) rather than just the USHCN stations (1218 total) to detect breakpoints. It also only covers the conterminous U.S. (not Alaska and Hawaii). For all but the very early part of the record (pre-1930s), there are multiple nearby analogues for pretty much every station.
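For what it's worth, the basic idea behind a neighbor-based breakpoint check can be sketched in a few lines. This is a deliberately simplified illustration, not the NCDC pairwise homogenization algorithm, and the series are invented:

# Simplified breakpoint detection: difference a target station against the mean
# of its neighbors, then find the split point that maximizes the shift in the
# mean of that difference series. Not the NCDC pairwise algorithm, just the idea.
import statistics

target    = [10.0, 10.1, 9.9, 10.2, 10.1, 10.8, 10.9, 10.7, 11.0, 10.9]  # step at index 5
neighbors = [[10.1, 10.2, 10.0, 10.3, 10.2, 10.3, 10.4, 10.2, 10.5, 10.4],
             [ 9.9, 10.0,  9.8, 10.1, 10.0, 10.1, 10.2, 10.0, 10.3, 10.2]]

neighbor_mean = [statistics.mean(vals) for vals in zip(*neighbors)]
diff = [t - n for t, n in zip(target, neighbor_mean)]

best_split, best_shift = None, 0.0
for k in range(2, len(diff) - 2):                       # candidate breakpoints
    shift = statistics.mean(diff[k:]) - statistics.mean(diff[:k])
    if abs(shift) > abs(best_shift):
        best_split, best_shift = k, shift

print(f"breakpoint near index {best_split}, shift of {best_shift:+.2f} C")

Because the difference series removes the shared regional climate signal, a localized step change at the target station stands out even when both stations are warming or cooling together.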
Zeke Hausfather,
Thanks for the answer. Even using 10,000 stations, that seems like it would be an average of about 900 square kilometers per station.
I still don't see how each station can have several others within a few kilometers, other than urban stations.
And are you saying that stations that are not suitable for inclusion in the reported average are used to homogenize those that are?