Our algorithm is working as designed. – NOAA NCDC
Recall that in the previous post, Skeptical of skeptics: is Steve Goddard right?, Politifact assessed Goddard’s claim as ‘Pants on fire.’
Over the weekend, I informed Politifact that this issue was still in play, and pointed to my post and Watts’ post. Today, Politifact has posted an update, After the Fact, drawing from the blog posts and also additional input from Zeke. They conclude:
In short, as one of the experts in our fact-check noted, the adjusted data set from the government is imperfect and it changes as people work on it. However, the weight of evidence says the imperfections, or errors, have little impact on the broader trends.
Anthony Watts has a new post NCDC responds to identified issues in the USHCN. Apparently the NCDC Press Office sent an official response to Politifact, which Watts obtained:
Are the examples in Texas and Kansas prompting a deeper look at how the algorithms change the raw data?
No – our algorithm is working as designed. NCDC provides estimates for temperature values when:
1) data were originally missing, and
2) when a shift (error) is detected for a period that is too short to reliably correct. These estimates are used in applications that require a complete set of data values.
Watts wrote that NCDC and USHCN are looking into this and will issue some sort of statement. Is that accurate?
Although all estimated values are identified in the USHCN dataset, NCDC’s intent was to use a flagging system that distinguishes between the two types of estimates mentioned above. NCDC intends to fix this issue in the near future.
Did the point Heller raised, and the examples provided for Texas and Kansas, suggest that the problems are larger than government scientists expected?
No, refer to question 1.
Steve Goddard has a post on this, entitled Government scientists ‘expected’ the huge problems we found.
From the comments on Watts’ thread, Rud Istvan says:
The answer is in one sense honest: “Our algorithms are working as designed.”
We designed them to maintain zombie stations. We designed them to substitute estimated for actual data. We designed them to cool the past as a ‘reaction’ to UHI.
Wayne Eskridge says:
As a practical matter they have no choice but to defend their process. They will surely lose their jobs if they allow a change that damages the political narrative because that data infects many of the analyses the administration is using to push their agenda.
Wyo Skeptic says:
The Climate at a glance portion of the NCDC website is giving nothing but wonky data right now. Choose a site and it gives you data where the min temp, avg temp and max temp are the same. Change settings to go to a statewide time series and what it does is give you made up data where the average is the same amount above min as max is above avg.
Roy Spencer noticed it first in his blog about Las Vegas. I checked it out of curiosity and it is worse than what he seemed to think. It is totally worthless right now.
JC comments
As Wayne Eskridge writes, this issue is a political hot potato. I hope that the NCDC scientists are taking this more seriously than is reflected by the statement from the Press Office. I hope that NCDC comes forward with a more meaningful statement in response to the concerns that have been raised.
I’m hoping that we can see a more thorough evaluation of the impact of individual modifications to the raw data for individual stations and regions, and a detailed comparison of Berkeley Earth with the NOAA USHCN data sets. We can look forward to some posts by Zeke Hausfather on this topic.
A new paper has been published by the NOAA group that is unfortunately behind a paywall: Improved Historical Temperature and Precipitation Time Series for U.S. Climate Divisions (you can read the abstract at the link). The bottom line is that the results from v2 are much different from those of v1. Presumably v2 is better than v1, but this large difference reflects the underlying structural uncertainty associated with the models used to produce fields of surface temperature. When the adjustments are of the same magnitude as the trend you are trying to detect, the structural uncertainty inspires little confidence in the trends.
NOAA needs to clean up these data sets. Most importantly, better estimates of uncertainty in these data are needed, including the structural uncertainty associated with different methods (past and present) for producing the temperature fields.
UPDATE: Brandon Shollenberger has a very helpful post Laying the Points Out, that clarifies the four different topics related to the USHCN data set that people are talking about/criticizing.
“Our algorithm is working as designed.”
Well that’s a relief.
Location, location, location!
Is site selection categorized as being part of the algorithm activity?
You didn’t read it properly: “Our AlGoreithm is working as designed.”
You can’t make a silk purse from a sow’s ear.
This was, is, and will always be the bottom line of the instrument temperature record prior to the satellite era. You can’t go back in time and improve the instrumentation to make it produce data with accuracy, precision, and spatial coverage it was not intended to produce.
> You can’t make a silk purse from a sow’s ear.
Still lots of demands for silk purses.
Fancy that.
Can you decrypt that from gobbledygook so it makes some kind of sense to sane people, Willard? Thanks in advance.
Demanding a silk purse when all one got is a sow’s ear may not be the way to live a satisfied life, Big Dave. Sooner or later, one may need to accept that all one got is a sow’s ear. One may need to come to peace with one’s daddy first, but that’s none of my concerns.
Thanks for asking.
Heh, there are plenty of silk purses woven from the sow’s ear of past data. There was demand, you see.
================
Sometimes I think Willard’s really a skeptic, but went with the other side for the challenge.
Heh, the once and future willard.
=================
David Springer: You can’t make a silk purse from a sow’s ear.
I am glad that’s settled.
What you can do is make better estimates than before from the extant imperfect data.
Sometimes I think commenters like shytibby are sock puppets.
I wouldn’t be surprised. The name has a nasty anagram.
Matthew R Marler | July 2, 2014 at 11:54 am |
David Springer: You can’t make a silk purse from a sow’s ear.
I am glad that’s settled.
What you can do is make better estimates than before from the extant imperfect data.
————————————————————
You can just as easily make them worse too. Which seems to be the case here.
oops, sorry. lame name from another forum.
“Choose a site and it gives you data where the min temp, avg temp and max temp are the same. Change settings to go to a statewide time series and what it does is give you made up data where the average is the same amount above min as max is above avg.”
For 4 or more years the gsod average temp is the average of min and max.
Based on the responses you highlighted from Watts’ political blog, are you insinuating that NCDC scientists adjust data for political reasons?
That does appear to be what Judith is insinuating.
Why not? Obama lies about CAGW for political reasons. Indeed, many many politicians do.
I don’t think Obama lied. He reported the lies of his science advisers. Presidents don’t have the time to study the nuances of climate science. HOWEVER, presidents do have a responsibility to make good appointments. —> FAIL. Judith Curry has testified to congressional committees. She is a much better judge than I am, but I have listened to those hearings, and it appears that some few members of congress, some on those committees, actually understand the complexities and uncertainties of climate science. Almost all the others, on both sides, IMO, are just mouthing their political party’s talking points as Obama did.
Jeremy and others, have you tried to help someone understand the climate science issues? I find it almost impossible to do in even a few hours. Few people have the time or interest to spend even a few hours trying to understand, so they merely repeat the same old talking points.
This manipulation did not start in 2009. Please keep your bile in check and spill it when seemingly appropriate.
Obama lies about everything. Why should climate be any different?
Experts have planned for immigrant help in advance.
http://www.factcheck.org/2012/11/did-fema-create-a-youth-army/
You don’t need a weatherman to tell which way the wind is blowing.
A follow up Dr. Curry, could you name the individual scientists who are adjusting data for political reasons?
No. The evidence was on some hard drives that crashed.
If the adjustments aren’t valid, it is going to be difficult to prove the adjusting wasn’t nefarious. Sorry about that, alarmist warriors.
========================
“…could you name the individual scientists who are adjusting data for political reasons?”
No reason to and anyway it may be “who defined how to adjust” rather than “who adjusted”.
“When the adjustments are of the same magnitude of the trend you are trying to detect,…better estimates of uncertainty in these data are needed, including the structural uncertainty associated with different methods (past and present) for producing the temperature fields.”
Just so Judy – plenty have said for years that this is a MAJOR issue that seems to have been sidestepped, ignored or downplayed for what appears to be political reasons. In this context, “political reasons” does not imply
“party political”. Uncertainty less than estimated adjustments is ridiculous – defending same is… is… reckless at best, and it only gets worse from there!
crickets
“As Wayne Eskridge writes, this issue is a political hot potato. I hope that the NCDC scientists are taking this more seriously than is reflected by the statement from the Press Office. I hope that NCDC comes forward with a more meaningful statement in response to the concerns that have been raised.” – JC
‘Political hot potato’ !???
There is no ‘political hot potato’, just a data collection issue.
There is an attempt to politicise a straightforward issue, by people trying to hype this up and label it a “political hot potato”.
Tsk, tsk.
Are you an idiot or just pretending to be one, Michael? The entire climate issue has been politicized for at least 20 years. And it is obvious that people are often slow to admit their mistakes and don’t like to do it openly. They will be even less likely to move quickly or be up front about their mistakes if there are potential consequences such as making your superiors unhappy. That is just common sense. I wish you had some.
Bill
I disagree that it has been a political issue for 20 years. Imo, it has only been a political issue for less than 10 years, as we have learned that GCMs are unreliable and that the data upon which predictions of a warmer planet resulting in dire conditions are based have been found to be unsupportable.
Rob Starkey
Maybe to the general public, but as a simulation expert, I’ve been saying for 15 years the GCMs were programmed to presuppose that CO2 drives temps, and there’s no physical proof that this assumption is a fact.
Meh, the politicization has been going on for three decades, only provoking resistance in the last of these.
=================
Could any of you define what ‘politicisation’ is and give real examples?
At the moment we seem to be dealing with a vague term, used as a negative label to insinuate some ill-defined wrong doing.
MiCro-
“I’ve been saying for 15 years the GCMs were programmed to presuppose that CO2 drives temps, and there’s no physical proof that this assumption is a fact.”
If you did not have any data to conclude that the GCMs were wrong, weren’t you the one who was rejecting them without evidence based on a “political perspective”? If 15 yrs ago you had said to wait until the models had demonstrated they were accurate, that would have not been political.
Personally the issue seems political to the extent that many democrats seem uninformed that the CO2 mitigation actions they believe make sense will actually accomplish nothing measurable that those paying for them will ever see, and that they seem to believe money is unlimited. It seems political to some republicans who think any government action is bad and that their religious views are more relevant than science.
Rob Starkey commented
15 years ago I was suspicious of the modelers’ bias, but I didn’t rule it out; they could have been right. At the time only future weather data would tell. About that time I started doing astrophotography, logging nightly temps on clear nights to better process the images. That led me to look for temperature data, to see if there was a reduction in the temp record from a loss of nightly cooling.
Heh, Michael, you know politicization in science when you see it.
=============
Michael – maybe Hansen’s 1988 testimony, where Wirth picked what historically is the hottest day in DC for the testimony, then opened all the windows the night before – http://wattsupwiththat.com/2011/06/25/bring-it-mr-wirth-a-challenge/
TIMOTHY WIRTH: We called the Weather Bureau and found out what historically was the hottest day of the summer. Well, it was June 6th or June 9th or whatever it was. So we scheduled the hearing that day, and bingo, it was the hottest day on record in Washington, or close to it.
DEBORAH AMOS: [on camera] Did you also alter the temperature in the hearing room that day?
TIMOTHY WIRTH: What we did is that we went in the night before and opened all the windows, I will admit, right, so that the air conditioning wasn’t working inside the room. And so when the- when the hearing occurred, there was not only bliss, which is television cameras in double figures, but it was really hot. [Shot of witnesses at hearing].
Maybe Gore’s inconvenient fabrication? Maybe warmists calling skeptics flat earthers, deniers, and all sorts of other ad homs? If you can’t see the politicization, then you are truly blinded by ideology.
kim | July 2, 2014 at 10:53 am |
“Heh, Michael, you know politicization in science when you see it.”
Thankfully kim is immune to confirmation bias.
I wonder how many of the people who comment on this BB have ever taken a manual observation using the equipment provided by the NWS for the COOP network?
I wonder how many have ever talked to a COOP observer and found how dedicated they are, especially in the winter when they collect and melt snow to determine water equivalent of solid precipitation? There is little recognition of their efforts by those who use this data, essentially for free.
It is easy to toss “blogged hand grenades” at NCDC from the Internet for what they have done or haven’t done with the data. Scanning millions of data values from thousands of sites from a variety of sensors is one of the most challenging aspects of what they do. They can’t please everybody all the time.
We must always keep in mind that “The Essence of Quality Control is the evaluation and improvement of imperfect data by making use of other imperfect data.” – Kelly Redmond at the 17th Applied Climate Conference in 2008.
We are in this together, and that is the way we need to work, together.
The NCDC statement doesn’t say whether it is a good design or bad design, only that it works as designed. I would hope that we can agree something more than this vague statement is warranted. I don’t know whether they are doing their job right or not. I do know that something more than “we are in this together” is required.
Who is saying anything about the people who take the data? It’s those who are adjusting the data that are under scrutiny
Think of the data collectors!
A cheap derivative of “Think of the children!”. Spare me.
+1
Ah, those poor adjusters, slaves to the algorithms.
=============
Kim – Slaves To The Algorithms – great band name.
Philbert, Goddard WANTS to use the raw data those heroic observers collect.
He DOES NOT WANT to use adjusted data fabricated by some unheroic AlGoreithm.
Sunshinehours1,
Right you are!
But in the end, if he wants to do anything meaningful with the raw data, he will need to develop an algorithm to sort out the howlers, screamers, flatliners, etc.
It will be a good journey for him through the data minefield. He is likely to use some of the NCDC methods to get to the other side.
Note to David Springer,
If you don’t know the observation methods, then you don’t know the data.
I know the observation methods. A min/max thermometer in a Stephenson screen. Guy goes out, writes it down, resets it once per day. Misplaced decimal points and wrong sign are easy to spot. The thing of it is that errors should be unbiased as to whether they are erroneously high or erroneously low and average out so they’re of no consequence. That’s why the raw data trend is 0.0C/decade and adjusted trend is 0.1C/decade. The network was never designed to detect trends that small so just a tiny bit of bias applied globally to every station turns no warming into some warming. Capiche?
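Purely to illustrate the arithmetic of that last claim (and not as a statement about what the actual USHCN adjustments do), here is a toy simulation with assumed numbers: 100 stations with zero underlying trend plus noise, and a small systematic adjustment of 0.01 C per year added to every station. The raw network average should come out near 0.0 C/decade and the adjusted one near +0.1 C/decade, simply because that is the drift that was put in.

# toy illustration: a zero-trend network plus a small uniform adjustment
import numpy as np

rng = np.random.default_rng(0)
n_stations, n_years = 100, 30
years = np.arange(n_years)

# raw anomalies: no trend at all, just independent station noise
raw = rng.normal(0.0, 0.5, size=(n_stations, n_years))

# hypothetical adjustment: +0.01 C per year applied to every station
adjusted = raw + 0.01 * years

def decadal_trend(series):
    # least-squares slope in C/year, converted to C/decade
    return 10 * np.polyfit(years, series, 1)[0]

print("raw trend:      %+.2f C/decade" % decadal_trend(raw.mean(axis=0)))
print("adjusted trend: %+.2f C/decade" % decadal_trend(adjusted.mean(axis=0)))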
David Springer, on one thing we agree: in the beginning the COOP network was never designed to detect climate change. It should be noted that observations are gender and age independent.
I operate two systems, one manual and one automated. I installed a Davis Vue Pro2 #6163 with a solar powered aspirated thermal sensor in April 2014. The data systems are only about 30 feet apart, but differ in max and min. I am monitoring the data closely, but it is pure chance that the max and min temperatures will be the same. The automated system uploads data every 5 minutes to CWOP. MADIS runs QC on the data.
So, why are we spending all this time detecting climate change from the COOP, or other legacy networks, when we can use 5-minute data to manage climate impacts in real-time? Yes, this is a potentially myopic approach, but this is an emerging, and potentially valuable dataset that needs to be part of the data mosaic.
I’m sure it’s all terribly minor and with no effect on the broad whatsy. On the other hand, I’m prepared to bet that any terribly minor “imperfection, or error” will show some terribly minor warming. Even “as people work on it”. I dunno why I think that. I just do.
But, as Luis Suarez might say, it was more a nibble than a bite.
Pingback: Did NASA and NOAA dramatically alter US climate history to exaggerate global warming? | Fabius Maximus
More unacknowledged Type B Uncertainty
I don’t understand. There appear to be important technical issues remaining for users of the NCDC data. At the very least, NCDC must be more transparent on their processes.
But the NCDC data appears to match temperature series from the Berkeley Earth group, and more broadly from RSS and UAH. Why doesn’t this mean that the net result on a national level from these issues is small?
Also lost in the discussions here and at WUWT is Goddard-Heller’s implication that the NCDC staff are manipulating the data for political purposes — that this is not just a difference of opinion about technical matters. Big claims require strong proof. Without that I’d hope Goddard suffers a serious loss of credibility.
Perhaps this is a test of the skeptics as much as it is of NCDC.
Upvote this. Deserves a response from Judith.
Does anyone think that the adjustments are not being highlighted for political reasons?
Re NCDC staff manipulating the data for political purposes – I don’t buy that. However, the NCDC Press Officer trying to cover up problems with the data for political reasons . . . now that one wouldn’t be too far fetched for me to believe.
I agree that the main reason for confidence in the NOAA data set is the general agreement with BEST. But we have seen climate models agreeing with each other also; what exactly does that tell us? I think both NOAA and BEST use the same TOBS adjustment. Etc. When I compared the average for the state of Maine between BEST and NOAA, there were some pretty significant differences – it seems like the more averaging you do, the smaller the differences become. Regional temperature variability is important for decision making – not just the global averages.
In any event, these things need to be better understood and more work is needed to characterize the uncertainty of these data sets.
Berkeley and NCDC do not use the same TOBs adjustment. Berkeley actually has no explicit TOBs adjustment, but rather treats them as any other breakpoint and cuts the record when they are documented or empirically detected (per Williams et al 2012 the NCDC PHA can also pick up TOBs fairly well with no explicit adjustment, so maybe we will see NCDC moving that way in the future as well).
Thanks for the clarification on this
Judith, the estimating in USHCN does change trends. It warms them.
https://sunshinehours.wordpress.com/2014/06/29/ushcn-2-5-estimated-data-is-warming-data-usa-1980-2014/
sunshinehours1: Judith, the estimating in USHCN does change trends. It warms them.
Well, they did say the AlGoreithm was working as designed, didn’t they?
What more do you want?
I’m a little confused and have a question…
Is the correction algorithm published and/or is the code available?
I do hope that our temperature data is not behind a black box. That would be an NSA-type move…
Is there a paper?
curryja | July 2, 2014 at 9:52 am
They are all based on the same premise.
Actually, not correct. The GISS, NCDC, and HadCRUT ‘surface’ reconstructions all run hotter than UAH and RSS, and all show less of a pause. And I have the comparison data to prove that NCDC since 2007 warmed the present and cooled the past by an ADDITIONAL 0.2C, about half and half, with a break point about 1960.
And since the satellite record only starts in 1979, cooling the more distant past does not allow comparison. One must look at the methods and reasons. NCDC USHCN itself posted on TOBS, which for CONUS justified a cooling of 0.3F before 1960, not the 2-3F seen by comparing recorded to adjusted. And NASA GISS says the correct way to adjust for UHI is to warm the past (so that the current record is congruent with actual). Their website uses Tokyo as the explanatory example, about 1C added to 1930 to compensate for UHI. The net of TOBS and UHI should be slight cooling of rural stations and noticeable warming of urban stations. And given economic development, and using ‘rural’ Kansas for USHCN example purposes, virtually all the 30 stations are ‘urban’. The four zombies are more rural, which is probably why they closed.
Rud, as you go up in altitude, the temperature trend (see the annual variation, for example) gets less and less. At a certain altitude, the trend flattens. Above that, the trend is opposite that near the surface.
So, the lower trop trend will have the same sign as the surface, but the trend itself will be less.
I tend to believe the satellite May temps over the surface records. Better uniformity of data and method.
Rud,
“Actually, not correct. The GISS, NCDC, and HadCRUT ‘surface’ reconstructions all run hotter than UAH and RSS, and all show less of a pause.”
Can you give a cite on this, describing the size and significance of the differences? I don’t follow this closely, as you do. But my impression was …
(1) The differences between the major surface temperature datasets were small, statistically speaking. The small difference with Berkeley Earth has already been mentioned.
(2) UAH and RSS.
I said they were “broadly similar” to NCDC. Which is, I believe, correct. It would be odd if they were too close, given that they’re different instruments, with different geographic coverage, measuring the lower troposphere (not surface).
So I believe that the other datasets define in a rough way the maximum size of any errors in the NCDC record. Since the records are roughly similar, that suggests these issues are relatively minor.
Perhaps we can agree on these two big issues: the lack of transparency and inadequate quality control. I wonder if discussions like these are likely to encourage change.
Hi Rud. RSS and UAH are averages over an air column several kilometers in height. Not sure how comparable that is to temperature 4′ off the ground in a Stephenson screen. Apples and oranges?
@ Springer: especially above a land mass the size of CONUS, which is on the scale of some larger weather systems, as Roy Spencer has noted.
“After The Fact” shows that NCDC and BEST, with independent methods and many different stations, get the same basic results, as mentioned by Zeke H. I don’t think there is much left for NCDC to do given this independent confirmation of their results.
Hell when I needed individual station data (Amundsen) to make a point I went to BEST to get it and found the actual data trend was falling while the adjusted data had a 0.1C/decade rising trend.
A temperature station maintained by scientists and highly trained technicians is apparently not as good as BEST’s algorithms, which saw fit to correct the readings.
What a frickin’ joke. Hear that Mosher? You’re an untrained clown.
David Springer: A temperature station maintained by scientists and highly trained technicians is apparently not as good BEST’s algorithms which saw fit to correct the readings.
Like it or not, that can happen with actual scientists, actual highly trained technicians, actual temperature stations (or other measuring instruments), and actual data. The general theory, with some guidance as to its accuracy, is provided in Samaniego’s book “A comparison of Bayesian and frequentist methods of estimation”, and most other texts on Bayesian methods of statistical estimation. In short, the adjusted data trend might be more accurate than the raw data trend.
Like everything else in conditional probability, probability, and statistics, it is not an intuitive result.
read the read me JC SNIP.
” At the end of the analysis process,
% the “adjusted” data is created as an estimate of what the weather at
% this location might have looked like after removing apparent biases.
% This “adjusted” data will generally to be free from quality control
% issues and be regionally homogeneous. Some users may find this
% “adjusted” data that attempts to remove apparent biases more
% suitable for their needs, while other users may prefer to work
% with raw values.”
Its pretty simple.
There is no adjusting of the raw data.
The raw data is used to create a prediction
T = C + W +e
That is the temperature at any arbitrary time and place is a combination of
C: the climate of the place
W the weather at the place
e an error
When you build a model from raw data you are creating a prediction
or expectation.
Then you can compare what that model predicts “would have been recorded” to what was actually recorded.
If you like raw data, use raw data.
If you want a result that removes apparent biases or error (deviation from the model), then use the expected value.
IF you find a SYSTEMATIC drift in the error.. then that tells you the model can be improved.
Currently we model Climate as F(y,z,t); that is, the climate of a place is not a function of longitude, nor does it use distance from coast.
So given 40000 stations you may very well find systematic structure in the final residual. In fact, you have to find some systematic structure because we know temperature is determined by more than y,z and t
y,z and t explain something like 90% of the variance. Inversion layers for example ( see katabatic winds) can create a structure in the residual where you would get a biased prediction. these geographic areas are typically small.
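For readers following along, here is a minimal sketch of that idea under stated assumptions: a handful of invented stations, a crude design matrix in latitude, elevation and season, and an ordinary least-squares fit standing in for the real geostatistical model. This is not the Berkeley Earth code; it only shows the separation between raw values, expected values and residuals that the comment describes.

import numpy as np

# invented station table: latitude (deg), elevation (km), month, observed T (C)
lat  = np.array([30.0, 35.0, 40.0, 45.0, 50.0, 40.0])
elev = np.array([0.1,  0.3,  1.5,  0.2,  0.4,  0.0])
mon  = np.array([1,    1,    7,    7,    7,    1])
T    = np.array([12.0, 9.0,  15.0, 20.0, 17.0, 2.0])

# design matrix: intercept, latitude, elevation, a simple seasonal term
season = np.cos(2 * np.pi * (mon - 7) / 12)   # peaks in July
X = np.column_stack([np.ones_like(lat), lat, elev, season])

# least-squares fit of the "climate" part C(y, z, t)
beta, *_ = np.linalg.lstsq(X, T, rcond=None)

expected = X @ beta        # the expected (climatological) value at each station
residual = T - expected    # weather plus error; systematic structure left here
                           # would mean the climate model can be improved
print(np.round(expected, 1))
print(np.round(residual, 1))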
mathew
David doesnt get what we are doing.
Springer let me explain in simple terms.
Suppose we have raw data of springer’s height over the years in inches
40 42 44 46 48 50 50 54 56 58 60
from that we build a model
it spits out
40 42 44 46 48 50 52 54 56 58 60
Data set 1 is the raw data
Data set 2 is the expected value
We note that one year springer was 2 inches shorter than expected.
1. This could be a reality
2. this could be a ruler error
We report the expectation.
springer thinks we adjusted his raw data.
the raw data is there.
the expectation is there.
only a clown would confuse them.
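A tiny sketch of the height example, with a straight-line fit standing in for “the model” (that choice is my assumption; any reasonable growth model gives the same picture here): the raw series and the expected series are computed and kept side by side, and the short year shows up as a residual.

import numpy as np

years = np.arange(11)
raw   = np.array([40, 42, 44, 46, 48, 50, 50, 54, 56, 58, 60])   # raw data

slope, intercept = np.polyfit(years, raw, 1)
expected = np.round(slope * years + intercept).astype(int)        # expected value

print(raw)             # [40 42 44 46 48 50 50 54 56 58 60]
print(expected)        # [40 42 44 46 48 50 52 54 56 58 60]
print(raw - expected)  # the year that reads 2 inches short stands out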
Mosher,
Your David Springer Model assumes that people grow taller. But at some point they stop growing taller.
Temperatures don’t always get warmer, either.
Fail.
Andrew
I really like the Berkeley Earth webpages. You can check on individual stations to see what adjustments and QC decisions have been made for each station. Here, for example, is Amundsen-Scott:
http://berkeleyearth.lbl.gov/stations/166900
John Kennedy (@micefearboggis)
Now, first let me note I only have data from GSoD, and don’t have the other sources listed.
But, based only on GSoD this:
Is wrong. 1987-1999 all have months missing more than 10 days, and all but one of those years has fewer than 111 daily samples. To determine whether this is a real issue, it would be nice to know how many samples BEST has, and which data sets they come from.
“We note that one year springer was 2 inches shorter than expected.”
Some months can be 10C colder or warmer. Why do you always flag the colder month for rejection?
Bad Andrew: Your David Springer Model assumes that people grow taller. But at some point they stop growing taller.
If you are interested in Bayesian hierarchical modeling of growth curves with and without missing data, and the timing of growth spurts, I can supply a few references.
Steven Mosher,
Your height model is deficient if you base adjustments on samples taken of how my height was increasing 50 years ago. I am now at least 1.5 inches shorter than I was then..
Matthew & Mosher, what is the explanation for all the quality control fails at Amundsen being for data that is cold in 40+ years of monthly data?
I seem to have touched a nerve with the data torture apologists. LOL
“same basic results”?
Estimating warms US data.
https://sunshinehours.wordpress.com/2014/06/29/ushcn-2-5-estimated-data-is-warming-data-usa-1980-2014/
We see repeated, indeed constant, changes: what I wonder is whether the changes are to the original data, such that we have an evolving understanding of inconsistencies or at least how to correct them, or whether we are seeing changes to the changed data, i.e. change-error adjustments.
I suspect we are seeing new algorithms that modify old algorithms, that they don’t go back and recalculate everything. But it would be useful to know. Certainly the consistency of cooling the past and warming the present suggests that changes are substantive and additive, not lesser and lesser tweaks of one major fix.
It would be helpful if someone involved would take the time to lay things out in a clear fashion. As it stands, I don’t think a casual reader would actually understand many details of what is going on. Off the top of my head, I can think of at least three different issues. I’ll try to give a quick breakdown.
1) USHCN does its calculation on 1218 records, even though there are not 1218 operational stations. This is due to their methodology. Because they use absolute temperatures, not anomalies, a small number of stations dropping out can have undue effects on the final results. Infilling missing stations can help address this. The process is unintuitive and appears strange, but it also appears this has no notable effect on the final results. This is because infilling “zombie stations” then combining station records is effectively just spatially interpolating the existing data (a small sketch of this idea follows below).
2) Measured data is being discarded/excluded, at which point infilling is used to estimate it. Again, this appears strange. However, it seems the NCDC is claiming the data which is discarded/excluded is data with quality control issues. If that’s true, excluding the data is entirely appropriate.
3) Adjustments are made to the data which have notable effects on the final results. There’s little to no indication this is from the “adjustments” in 1-2. Instead, it appears other adjustments to the data are responsible.
It’s also worth noting the post by Steven Goddard which Politifact responded to dealt with 3, not 1-2. The only way 1-2 would be relevant to Goddard’s post is if there is some overlap between 1-2 and 3, something nobody has shown thus far.
I think that’s all correct, but do let me know if I got something wrong. The discussions have been very sloppy so it’s tricky to keep everything straight.
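To make point 1 above concrete, here is a minimal sketch of infilling a missing monthly value from neighbouring stations by inverse-distance weighting. The coordinates, anomalies and weighting scheme are invented for illustration and are not NCDC’s actual pairwise-homogenization code; the point is only that an infilled value is a weighted blend of data that already exist, which is why infilling-then-averaging behaves like spatial interpolation.

import numpy as np

# invented neighbour stations: (x, y) position in km and their monthly anomaly (C)
neighbours = np.array([[ 10.0,  0.0],
                       [  0.0, 25.0],
                       [-40.0, 15.0]])
anoms      = np.array([  0.8,   0.6,  0.9])

target = np.array([0.0, 0.0])   # the station with the missing month

# inverse-distance-squared weights (a common simple choice, not NCDC's method)
d = np.linalg.norm(neighbours - target, axis=1)
w = 1.0 / d**2
infilled = np.sum(w * anoms) / np.sum(w)

print("infilled value: %.2f C" % infilled)   # a weighted blend of the neighbours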
Well said, Brandon. This post starts
“Our algorithm is working as designed. – NOAA NCDC”
Chortle chortle.
But their statement is absolutely correct, and is easily seen to be so by anyone not into reflex chortling. In the Texas case referred to there was a cable fault that caused low readings. The algorithm spotted this, quarantined the readings, and when the fault was fixed, restored them. This is indeed working as designed. How anyone thinks they can make a political hot potato of this is beyond me.
The hot potato is that the warming of the twentieth century in the United States cannot be considered an observed fact. The warming trend is obtained entirely through theoretical assumptions (see the first graph here: http://rankexploits.com/musings/2014/how-not-to-calculate-temperatures-part-3/).
This could be correct but it should be proved by actual observations.
Nick Stokes | July 2, 2014 at 4:25 am | Reply
“…This is indeed working as designed. How anyone thinks they can make a political hot potato of this is beyond me.”
That’s because you’re an honest straight-forward fellow Nick.
Others, of an ethically challenged persuasion, see an opportunity to politicise a simple scientific data collection issue, and to that end lament how it’s become so, in the hope that if it’s repeated often enough, it actually will become a “political hot potato”.
It’s a standard tactic in political rhetoric.
“Our algorithm is working as designed. – NOAA NCDC”
This is a valueless truism, for if an algorithm doesn’t work as designed then how does it work? Do algorithms possess intelligence beyond their design? In short: GIGO.
You either have the data from a station or you do not. The data are somewhat subject to error because of siting issues so infilling with questionable data surely adds more to the uncertainty of the report than leaving it out. You should be able to get your daily average ignoring the missing data. If not the results are questionable even if filled.
There is no excuse for creating data for non-existent stations. That’s sloppy at best.
The results of the process do not seem to be random with regard to higher and lower as one would expect. Manipulation of the data such as infilling adds to the uncertainty but no estimates of uncertainty seem to come with the product, so do we assume that the SD is +/- 0.00? How else could 2012 miss being the hottest year by 0.03°?
“The algorithm is working as intended.” What, exactly, did they intend?
““The algorithm is working as intended.” What, exactly, did they intend?”
They frickin tell you.
they designed an algorithm to create COMPLETE SERIES.
that means an algorithm that infills and extrapolates when there is missing data.
it does exactly that.
better question is : should you do that, or is there another way.
Answer. yes there is a better way. skeptics did it first
http://noconsensus.wordpress.com/2010/03/25/thermal-hammer-part-deux/
and later, Berkeley’s head statistician talked to the originator of that approach and came up with a refinement.
Go figure
Thanks Brandon. I guess it takes a vague genie to set the record straight.
bill_c, I can’t believe you actually remembered that reference.
“Because they use absolute temperatures, not anomalies, a small number of stations dropping out can have undue effects on the final results.”
Stop being misleading by parroting the Blackboard crew’s propaganda.
* There are only 50 stations with 30 years of non-Estimated data from 1961-1990 … so anomalies would also be problematic.
* It isn’t a small number of stations. Depending on the year it can be 35% Estimated data. The average over the whole dataset is around 15%.
* Infilling exaggerates trends and really exaggerates warming.
https://sunshinehours.wordpress.com/2014/06/29/ushcn-2-5-estimated-data-is-warming-data-usa-1980-2014/
I’ll give people some advice. Don’t trust Brandon to summarize anything.
Saying a small amount of missing data can have undue effects is not the same as saying there is only a small amount of missing data. If a small amount of missing data can have undue effects, a larger amount of missing data can as well.
The only reason to read my post the way you did is if you want it to be wrong.
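For readers trying to follow the underlying absolute-versus-anomaly point, here is a toy illustration with invented numbers: when one unusually cold station stops reporting, an average of absolute temperatures jumps, while an average of each station’s anomaly from its own baseline barely moves. The station values, and the use of year 1 as a stand-in climatology, are assumptions.

import numpy as np

# invented absolute July means (C) for 5 stations over two years;
# the cold mountain station (index 4) drops out in year 2
year1 = np.array([24.0, 25.0, 23.5, 26.0, 10.0])
year2 = np.array([24.1, 25.1, 23.6, 26.1, np.nan])

print(np.nanmean(year2) - np.nanmean(year1))   # absolutes: jumps by about +3 C

baseline = year1                               # stand-in per-station climatology
a1 = year1 - baseline
a2 = year2 - baseline
print(np.nanmean(a2) - np.nanmean(a1))         # anomalies: about +0.1 C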
Brandon: “The process is unintuitive and appears strange, but it also appears this has no notable effect on the final results.”
And I’ve shown you that you are wrong. I’ve shown you numerous times.
I think you make some fine points Brandon.
One subject for discussion could be an analysis that demonstrates to what extent the infilling methods are valid.
For example, in the case of the Texas station that was determined to be recording cool temperatures due to cable damage, could they show that the infilling with neighbor stations was valid by dropping out stations with good data and seeing if that method recreates the data to some precision?
I think this has been done in other areas.
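A hedged sketch of the kind of test being proposed: withhold stations that actually reported good data, infill them from their neighbours, and score the error. The synthetic temperature field and the inverse-distance infilling below are assumptions for illustration; the real exercise would use the NCDC code and real station records.

import numpy as np

rng = np.random.default_rng(1)

# synthetic "truth": a smooth regional field sampled at 60 random stations
xy = rng.uniform(0, 300, size=(60, 2))                  # station positions, km
truth = 20 + 0.01 * xy[:, 0] + rng.normal(0, 0.3, 60)   # C

def infill(i, xy, vals, mask):
    # inverse-distance estimate of station i from the non-withheld stations
    d = np.linalg.norm(xy[mask] - xy[i], axis=1)
    w = 1.0 / np.maximum(d, 1.0)**2
    return np.sum(w * vals[mask]) / np.sum(w)

# withhold 10 stations with good data, then try to recreate them from the rest
test = rng.choice(60, size=10, replace=False)
mask = np.ones(60, dtype=bool)
mask[test] = False

errors = [infill(i, xy, truth, mask) - truth[i] for i in test]
print("RMS infilling error: %.2f C" % np.sqrt(np.mean(np.square(errors))))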
When it comes to making a TOBS adjustment to the raw data, I’ll suggest again as I did earlier today over at Lucia’s. Their current system seems to be derived from human input of data and times.
A comparison should be made with the totally automatic readings taken from the USCRN network. They have 5-10 years of station data from across the country with which to discover the true bias for readings at any hour of the day, any location.
Until they do that, I have no confidence their TOBS adjustments are correct. It would certainly be a worthwhile check.
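A minimal sketch of the check being suggested, using synthetic hourly data as a stand-in for USCRN’s 5-minute feed (the diurnal cycle, the noise level and the 5 p.m. reset hour are all assumptions): the same temperature series yields different period means depending on when the min/max thermometer is reset, and in this toy the afternoon reset tends to read warm relative to a midnight reset, which is the direction of the documented TOB bias for afternoon observers.

import numpy as np

rng = np.random.default_rng(2)
n_days = 90
hours = np.arange(24 * n_days)

# synthetic hourly temps: diurnal cycle peaking mid-afternoon, plus
# day-to-day weather noise held constant within each day
diurnal = 15 - 8 * np.cos(2 * np.pi * ((hours % 24) - 3) / 24)
temps = diurnal + np.repeat(rng.normal(0, 3, n_days), 24)

def mean_of_minmax(temps, reset_hour):
    # what a min/max thermometer reset daily at reset_hour would report,
    # averaged over the period: the mean of (max + min) / 2 per 24-h window
    vals = []
    for start in range(reset_hour, len(temps) - 24, 24):
        window = temps[start:start + 24]
        vals.append(0.5 * (window.max() + window.min()))
    return np.mean(vals)

midnight = mean_of_minmax(temps, 0)    # calendar-day observer
evening  = mean_of_minmax(temps, 17)   # hypothetical 5 p.m. observer
print("apparent shift from observation time: %+.2f C" % (evening - midnight))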
Why do they feel the need to estimate (= make up) data at all?
In Case 1 – data is missing. Tough – live with it.
Example : I did not take French A level. But based on the A level data for 3 subjects that I do have I will award myself a top grade in French by interpolation. Now I have 4 top grades.
Surely obvious that if I had taken German that too would have been a top grade. Look at the track record! And then Geography, History, Religious Instruction and British Constitution would have easily fallen into my lap. I absolutely loathed woodwork, but ‘Craft and Design’ would have been a shoo-in.
See how easy it is! From an unremarkable three data points, I’ve suddenly become the record-breaking top national A level student of 1973 with a dozen top grades and an interview on TV.
2 – ‘for applications requiring a complete set of data values’. The effort would be better spent on changing the applications so that they work in the real world with all its imperfections, not just in the non-existent perfect world of theory.
Out here in reality-land, far away from climofantasies, making stuff up is considered a pretty bad thing to do. An accountant who makes up an invoice for March because she can find one for February and April but March’s is missing is teetering on the edge of fraud. (case 1) And ‘adjusting’ the books just to keep them neat and tidy (case 2) is definitely not the way forward.
Climos like to present themselves as objective seekers after some independent truth. Why, then, do they so often indulge in ‘professional’ behaviour that the general public find dodgy at best and verging on fraudulent at worst? Do they really think we don’t notice?
Fiddling the books is wrong.
NOAA email server crash in 3-2-1….
=============
the algorithm did work as designed
1. In order to estimate global averages SOME methods such as GISS and CRU require LONG RECORDS.
2. When you have fragmentary records, there are many ways to stitch them
together. See CET for one example of stitching
3. When stations end there are ways to extend them.
The algorithm is used to produce data for these types of applications
” These estimates are used in applications that require a complete set of data values.”
In short, IF you need complete long stations for your application,
Then use the adjusted data.
On the other hand a long time ago skeptics suggested that rather than stitch stations together with adjustments, one should split them.
then skeptics showed us how to estimate the global average using all these fragments.
the answer showed more warming.
That skeptics’ approach (use raw data, split stations) was improved by using kriging (suggested by skeptics) rather than least squares.
oh ya, thanks to the skeptics who made these suggestions.
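For anyone unfamiliar with the split-rather-than-stitch idea, here is a minimal sketch with invented numbers: a station with a documented change (say a move) is cut into two fragments at the break, each fragment is expressed as anomalies about its own mean, and no adjustment is ever applied across the break. This is a cartoon of the scalpel approach, not the Berkeley Earth implementation.

import numpy as np

years = np.arange(1990, 2010)
# invented record: gentle warming, plus a -1.0 C step in 2000 from a station move
temps = 14.0 + 0.02 * (years - 1990) + np.where(years >= 2000, -1.0, 0.0)

# "stitching" would estimate the step and adjust it away; "splitting" just cuts:
fragments = [(years[years < 2000], temps[years < 2000]),
             (years[years >= 2000], temps[years >= 2000])]

# each fragment contributes only its own anomalies, so the step never enters;
# the fragments can then be combined with other stations when averaging
for yr, t in fragments:
    anom = t - t.mean()
    print(yr[0], yr[-1], np.round(anom, 2))

Both fragments retain the underlying 0.02 C per year rise, while the 1.0 C step is absorbed into the per-fragment means instead of being carried into the combined record.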
One other thing they told you was that you should reconcile the data to the satellite data. Have you done that yet?
“One other thing they told you was that you should reconcile the data to the satellite data. Have you done that yet?”
YES
do you read
http://judithcurry.com/2014/02/25/berkeley-earth-global/
this in fact is the first use of AIRS data Version 6 to compare against the ground.
dunce.
here is another
http://rankexploits.com/musings/2014/berkeley-earth-airs/
one more person who is not interested in understanding and who cant be troubled to DAFS
NEXT
hey dunce
here is another.
This was even highlighted at WUWT
http://static.berkeleyearth.org/posters/agu-2013-poster-1.pdf
Did you fix Amundsen yet, clown?
its not broken.
The prediction minimizes error.
Yes it is broken. The expected data is junk. Your algorithm pretends it knows better than highly trained scientists and technicians being paid big bucks to stay year round at Amundsen. I know. I went to school for 8 hours a day for a year in the military, learning how to operate, calibrate, and repair every bit of weather forecasting equipment used at military air stations. I could have taken a job at Amundsen station in the 1970’s or 80’s before my experience got stale. One year minimum tour of duty. Very generous salary and nowhere to spend it. And you were doing what then, still eating Gerber baby food?
Infilling warms
https://sunshinehours.wordpress.com/2014/06/29/ushcn-2-5-estimated-data-is-warming-data-usa-1980-2014/
Has anyone in the whole history of climates science ever adjusted the data to be cooler? (Other than cooling the past to make the present seem to be warming)
Certainly little miss sunshine. The entirety of the SST readings prior to WWII were adjusted to make the global temperature anomaly from before 1900 to now much smaller than it could have been from raw temperature measurements alone.
You would be stunned at how much this reduced the anomaly. The scientist alarmists could have kept it in the raw form to make the situation more alarming but they didn’t, because they are scientists, and not data manipulators like LMS appears to be.
Bob Tisdale has a great article on SST adjustments.
http://wattsupwiththat.com/2013/05/25/historical-sea-surface-temperature-adjustmentscorrections-aka-the-bucket-model/
But getting back to the point … INFILLING in USHCN (and I am only considering INFILLING in the Final dataset) cause the warming trend to increase. The amount of INFILLING is around 11% of the records per month in 1980-2014.
What I don’t understand is where confidence in the adjustments comes from. To feel confident, I would want adjusted temperatures to match some other standard; otherwise, it would seem to be not much more than an educated (as augmented by confirmation bias) guess. The fact that USHCN temperatures diverge over time from RSS is a pretty broad hint that something is badly amiss. At the very least, the difference in trend needs to be explained.
RSS is the divergent one and skeptic Roy Spencer agrees.
go figure
Steven Mosher: Spencer and Christy are releasing an update soon that will reduce the divergence between UAH and RSS TLT during the 2000s.
==> “Spencer and Christy are releasing an update soon that will reduce the divergence between UAH and RSS TLT during the 2000s.”
That will be interesting for those who like irony-watching as a spectator sport.
great. just in the nick of time
Joshua, you know that will reduce the trend right? Is this the irony you mean?
Bill_c –
I anticipate an interesting switcheroo on the two sides w/r/t views about adjustments.
Joshua when Spencer changes the past its ok
“What I don’t understand is where confidence in the adjustments comes from”
verification studies using out of sample data
side by side studies of different sensors
you know science
Yes. Belief in the ignorance of experts, Steven. As a truly great scientist noted.
Did you fix Amundsen yet, dopey?
David Springer: Did you fix Amundsen yet, dopey?
How do you *know* that Amundsen needs to be “fixed”? The estimate of the slope from adjusted data is different from what you expect? If so, on what do you base your expectation? Thorough study of estimation techniques and their corresponding error rates?
jeremey,
feynman was an expert. I believe in his ignorance about believing in the ignorance of experts
Amundsen doesnt Need to be fixed.
1. If you want to know what the temperature was.. USE THE RAW DATA.
2. if you want to know what was EXPECTED, given all the data and a geostatistical model of climate… use the expected field.
pretty simple. First as a user you decide what you want to do.
Then you pick the data set.
So, lets take a guy working on forest maintenance. he will use our raw product because he cares about the exact local detail.
Some one else interested in testing a GCM will use the expected field.
1. Decide your use
2. Understand the math
3. pick your product.
Note… When you predict local detail using a geostatistical model and raw data you WILL generate anomalies at the local level. you have to.
Please note what Mosher wrote.
1. If you want to know what the temperature was.. USE THE RAW DATA.
2. if you want to know what was EXPECTED, given all the data and a geostatistical model of climate… use the expected field.
Which one would you want in doing anything other than Climate “Estimation”?
The value that is REALITY or what some people think it should be.
Can you imagine running any kind of industry with that attitude: the REALITY as we KNOW IT is we need X degrees to melt some Steel, but we are going to use Y because someone has “Estimated” that is what we should use based on the melting point of Plastic, Lead & Copper.
These guys wouldn’t last 2 minutes in a real job.
A C Osborn: These guys wouldn’t last 2 minutes in a real job.
In real settings, the modeled values, if based on well-tested models, are more reliable than the data. If you are into googling, look up small area estimation. Or, I’ll search up some links for you.
As always, the key concept is the random variation in the raw data.
First rule of holes, Steven. Stop digging.
Doubt if it’s a plot. Doubt if it’s a good piece of work on their part, either.
I concur.
You have to explain and defend poor designs.
Any design that requires that data be made up or estimated is a poor design.
Not fit for purpose.
Time to go back to the drawing board.
But it wasn’t designed to be used for the purpose it is now being used for.
If they were doctors we would be jailing them for prescribing drugs off label.
Wait, we don’t do that.
SST is 70% of the historical global temperature series. The continental USA is some fraction of the earth’s land surface, around 2%.
Somebody perhaps wants to clean up the poorly calibrated data during WWII. That is the single biggest issue in understanding the 20th century temperature trend. Yet that cleanup may be hard or impossible considering that the sailors were more concerned about not being blown out of the water by U-boats and kamikazes than by being careful with faithfully reading thermometers.
Get a grip people. Godtard isn’t your go-to guy.
WebHubTelescope (@WHUT) says: “Godtard”
Your continued juvenile (but failed) attempts at wit only undermine your credibility here. Keep up the good work.
Verily.
==> “We designed them to cool the past as a ‘reaction’ to UHI.”
Thanks for passing on Rud’s conspiracy ideation, Judith. Never seen that before.
So unique, that comment – a “skeptic” promoting a conspiracy. Who’d of thunk it?
It’s so valuable that you passed it on. We’d have been left in the dark if you hadn’t.
The evidence is overwhelming (climategate, common sense, knowledge of human nature) – everybody involved in AGW (scientists, journalists, politicians, academia, banks, business…) wants AGW to be true.
Edim,
You are absolutely correct.
Scientists want, as close as is practicable, their results to be true, i.e. approaching reality.
‘Skeptics’ seem unencumbered by such quaint ideas.
Edim –
I think that “wants” is a bit strong. No one that I can think of wants dangerous climate change that causes suffering.
On the other hand, there is no doubt IMO that motivated reasoning affects all those you mentioned. (They are not unique, however, as motivated reasoning affects “skeptics” as well.)
And there’s a big jump between saying that motivated reasoning affects everyone and saying that groups of scientists manipulate data to create an effect that doesn’t exist.
The latter is conspiracy ideation. I find it interesting that Rud sees a conspiracy and that Judith passes on Rud’s conspiracy ideation w/o comment.
Michael,
No. They want (significant) ACO2GW to be true, whether it is or not in reality. They desperately want to confirm it.
Joshua,
I don’t think there’s a big jump between motivated reasoning and manipulation/conspiracy. Read the climategate emails.
“Scientists want, as close as is practicable, their results to be true, i.e. approaching reality.”
Which is why the IPCC models vary so wildly from real world data, I guess?
jeremy,
you’re half right.
Yes, the outputs vary from observations.
But so did the inputs of reality (such as aerosol forcings) vary from those assumed by the models. Fix that, and guess what, they are quite close.
You must be pleased to hear this.
Michael,
Please explain how you know the model inputs (including aerosols) are wrong. Seems strange that modeling groups would not be interested in your insights.
Joshua, there seem to be some environmentalists who are not all that unhappy about the warming.
According to Solitaire Townsend, Co-founder and Chief Executive of Futerra Sustainability Communications:
“I was making a speech to nearly 200 really hard core, deep environmentalists and I played a little thought game on them. I said imagine I am the carbon fairy and I wave a magic wand. We can get rid of all the carbon in the atmosphere, take it down to two hundred fifty parts per million and I will ensure with my little magic wand that we do not go above two degrees of global warming. However, by waving my magic wand I will be interfering with the laws of physics not with people – they will be as selfish, they will be as desiring of status. The cars will get bigger, the houses will get bigger, the planes will fly all over the place but there will be no climate change. And I asked them, would you ask the fairy to wave its magic wand? And about 2 people of the 200 raised their hands.”
———————————————————–
(direct link to textfile)
http://news.bbc.co.uk/nol/shared/spl/hi/programmes/analysis/transcripts/25_01_10.txt
Confirmation bias is entirely different from conspiracy ideation. If you want to talk conspiracy ideation then look closer to home and discuss the stupid obsession of Mann et al with the Koch brothers, Exxon and oil companies in general (most of which have spent/wasted a great deal of cash on green initiatives including funding energy research and climate science).
To eliminate the well-founded notion of confirmation bias that Rud suspects can you tell us any adjustments that resulted in a cooler trend – as would normally be expected to occur at least 50% of the time?
The idea that confirmation bias would affect handling of new temperature data makes no sense. The issues are not of the nature that could allow for that.
Dingbat.
Judith,
Do you find Springer’s comment as helpful? It offends me, and should offend all who hope this site can remain ad hom free.
Shirker. Does your employer know how much time you waste here while on the clock?
Joshua, go plot successive iterations of GISS to see the progressive cooling of the past. Yet NASA’s public web site explains that to correct for UHI, the past is warmed to get climate trends. They use Tokyo as the public example. That is not conspiracy ideation. That is fact. Archived, and I sent Judith archives to prove it.
Now please explain how Maine, Michigan, and California went from essentially no trend in GHCN Drd964x in 2013 to distinct warming trends in the new version of the same data in nClimDiv in 2014. I sent the archive comparisons on California and Maine to Judith also. And Joe D’Aleo separately noted the same for Maine. Again, not conspiracy ideation. Proven fact. State level warming was manufactured by NOAA NCDC with the year-end 2013 switch to nClimDiv.
If you like your temperature record, you can keep your temperature record–NOT.
they almost got it right.
if they used more data and a better method chances are the past would come up cooler and the present warmer.
here 2010.
skeptical methods
http://noconsensus.wordpress.com/2010/03/25/thermal-hammer-part-deux/
Upthread I said it would help if someone laid the issues out in a clear manner so people could easily see what the arguments are. A lot of people don’t seem to understand what the issues are, and there is a fair amount of talking past one another. I thought a simple resource free of rhetoric could help with that.
After I made my comment, I decided I would try to be that someone. I’ve written a post I hope will help explain things to people:
http://hiizuru.wordpress.com/2014/07/02/laying-the-points-out/
I’m sure there are points I didn’t address in it, and it might have mistakes. Hopefully people can chime in to clarify things if so.
youve done a fine job brandon. nobody ( well except you) is much interested in understanding
Thanks!
And yeah, I don’t actually expect many people to be interested in understanding the issues. What’s the point of trying to understand what you’re talking about?
Thanks Brandon this is very helpful, I’ve added a link to this in the main post
I’m glad it could help Curry!
After reading what Brandon was pointing out, although I sort of got it, it left me more confused, not less. People that don’t normally deal with these issues or understand data manipulation can’t be expected to have a wonky interpretation. If you’re not using straightforward real data, you either accept that the people handling the data and using algorithmic calculations are doing so correctly, scientifically and honestly, or you don’t. If it is not straightforward raw data, the consumer then also has to accept that there were no mistakes, or not accept it. So once again people will jump back to their various camps and have their own interpretation, rightly or wrongly. I wouldn’t say I’ve thrown my hands up, but not being educated enough on the subject to properly interpret the questions and reasons behind data manipulation leaves me, and probably the general public at large, at a loss. So then it really does become a question of trust and whose camp you’re in.
Most empirical methods give occasionally erroneous results. To get as close as possible to the true description of the state observed some procedures of excluding or correcting erroneous data must be applied, i.e. the direct observational data must be manipulated. That manipulation may produce its own new errors, but given time such methodological errors get reduced.
Intentionally falsifying new temperature data at a level that makes any difference to policy-relevant conclusions would require a conspiracy of all people involved. Very many people are in the position of observing such an activity sooner or later. The chance of hiding such a conspiracy for long is zero. It doesn’t make any sense for anybody to even attempt that.
I’m interested. It’s a very good post by Brandon.
I agree with Jonathan Abbott – this is a useful and important thing to do, Brandon. Goddard’s posts are often very short, muddled, unclear and exaggerated, though there is usually something valid in there. So there’s a need for setting out the issues clearly.
Steven Mosher
” nobody ( well except you) is much interested in understanding”
Why do you want to undermine your own substantive efforts online by the occasional bullsh*t comment? Guess it is just your Achilles heel…the way you are wired.
The plain fact of the matter remains the rising temperature trend in the US is an artifact of the adjustments done to the raw data. However justifiable those adjustments are it remains true there is no rising trend without them.
Steve Mosher,
I disagree, lots of people are interested in understanding. Brandon’s post is a good starting point. What is still missing are simple and clear explanations (free of jargon) for the main adjustments to the raw data, how those are calculated and how they are verified…. starting with TOB.
The rationale for infilling also needs to be clearly explained, once again avoiding jargon.
I have little doubt the temperature record is reasonably accurate (certainly inside 0.1C); I also have little doubt that better and clearer explanations of the processes used are needed.
Estimating changes the trend in the USA
https://sunshinehours.wordpress.com/2014/06/29/ushcn-2-5-estimated-data-is-warming-data-usa-1980-2014/
The trend of REAL data is 0.23C/decade.
Then they add in about 15% Estimated data with a trend of +0.66C/decade.
The net result is a new trend of +0.33C/decade.
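To see the mechanism being claimed here, below is a toy numpy sketch, not a reproduction of the linked analysis; the station counts, trend values and noise levels are all invented. It pools a majority of synthetic series with a 0.23C/decade trend and a 15% minority with a 0.66C/decade trend, then reports the trend of the pooled average.

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1980, 2015)                 # 35 annual values
n_real, n_est = 850, 150                      # hypothetical 15% "estimated" share

def make_series(n, trend_per_decade):
    # n synthetic anomaly series with the given linear trend plus noise
    base = trend_per_decade / 10.0 * (years - years[0])
    return base + rng.normal(0.0, 0.3, size=(n, years.size))

real = make_series(n_real, 0.23)              # "real" data trend, C/decade
est = make_series(n_est, 0.66)                # "estimated" data trend, C/decade

def decadal_trend(series):
    # OLS slope of the mean series, converted to C per decade
    mean = series.mean(axis=0)
    return 10.0 * np.polyfit(years, mean, 1)[0]

print("real only:", round(decadal_trend(real), 2))
print("combined :", round(decadal_trend(np.vstack([real, est])), 2))

With a uniform mix like this, the pooled trend lands near the share-weighted average of the two trends (about 0.85 x 0.23 + 0.15 x 0.66, roughly 0.29 C/decade), so the exact figure in any real data set depends on how the estimated values are distributed in time and space.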
How many times does it have to be said, Pekka, that there is no need for a conspiracy. The Narrative of Alarmism creates the corrupting influences.
=========================
Will tin-foil hats afford protection from the ‘narrative’?
Inquiring minds want to know.
==========
“I disagree, lots of people are interested in understanding. Brandon’s post is a good starting point. What is still missing are simple and clear explanations (free of jargon) for the main adjustments to the raw data, how those are calculated and how they are verified…. starting with TOB.”
WRONG
We had a TOB discussion at CA years ago.
Before that, JerryB did the work at John Daly’s.
Nick Stokes has even done a post on Jerry’s work after I pointed him to it.
The Karl papers on TOB are online.
The code was made available to anyone who asked for it.
The testing was straightforward, two piles of stations one used to build the model, the other used to estimate the error of prediction.
For years I pounded the table about this error of prediction. nobody cared. nobody listened. nobody ever read the papers or got the code.
mw?
do you care about TOBS?
did you do a fricken search?
Nope
https://www.cac.cornell.edu/about/pubs/IJOC041012.pdf
“Andsager and Kunkel (2002) recently developed their method of estimating monthly observation times in order to assist with the quality control of the National Climatic Data Center’s (NCDC) new Summary of the Day TD-3206 dataset. Using this method, stations were assigned one of two observation schedules, morning or afternoon, with midnight-observing stations falling in the afternoon category. OBT estimates were based on the correlation of the maximum temperatures for a station with surrounding stations. The method was tested on over 4500 CON stations over the period 1898-1947 with at least 95% accuracy at about 50% of the stations, and less than 70% accuracy at fewer than 3% of the stations.”
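To make the “two piles of stations” testing described above concrete, here is a minimal sketch of a split-sample check. This is not the Karl et al. model itself; the predictors, bias values and station counts are synthetic. One pile of stations builds the bias model, the other is used only to estimate the error of prediction.

import numpy as np

rng = np.random.default_rng(1)
n_stations = 400

# Hypothetical predictors: latitude, longitude, observation hour.
lat = rng.uniform(25, 49, n_stations)
lon = rng.uniform(-125, -67, n_stations)
hour = rng.integers(0, 24, n_stations)

# Synthetic "true" TOB bias (deg C) with station-level noise.
true_bias = (0.02 * (lat - 37)
             - 0.3 * np.cos(2 * np.pi * hour / 24)
             + rng.normal(0, 0.1, n_stations))

X = np.column_stack([np.ones(n_stations), lat, lon,
                     np.sin(2 * np.pi * hour / 24),
                     np.cos(2 * np.pi * hour / 24)])

# Two piles: half the stations build the model, half score it.
idx = rng.permutation(n_stations)
train, test = idx[:200], idx[200:]

coef, *_ = np.linalg.lstsq(X[train], true_bias[train], rcond=None)
pred = X[test] @ coef
rmse = np.sqrt(np.mean((pred - true_bias[test]) ** 2))
print("held-out RMSE of predicted bias (C):", round(rmse, 3))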
Steven Mosher: you’ve done a fine job Brandon. nobody ( well except you) is much interested in understanding
You sprinkle your comments with insults in a manner that breeds distrust and anger. It is called “poisoning the well”. For greater effectiveness when many more people read the threads than comment on them, and who are actively seeking information, you should stop it. Everyone here is interested in understanding, but it’s a balance of investments in time and energy toward multiple goals.
You write many good posts. Stop poisoning the well.
Steven Mosher: The Karl papers on TOB are online.
You know the sayings about honey and vinegar.
A link is worth any large number of insults. In less time than you used to write your self-righteous insults, you could have supplied the link.
Supply the link, please.
Really
How interested are people?
really?
i mean really
Oh Gosh, the topic has been covered at CA, folks lost interest.
Oh Gosh Victor devoted a blog post to it
TWO COMMENTS, one from him
blogspot.com/2012/08/a-short-introduction-to-time-of.html
Interested?
1. People don’t download the data prepared by a skeptic at John Daly’s YEARS AGO to illustrate the problem.
2. People don’t read the papers.
3. People don’t comment on blog posts directly related to the problem.
If your theory is that people are interested, there is no evidence that indicates this is the case.
They are interested in pontificating or imputing motives or playing dumb.
There isn’t a single one of you who has read these papers.
and went to get the code
and looked at JerryB’s work
( oh Nick Stokes has.. )
Are you interested? Prove it. I’m skeptical.
Making me or making Zeke or Nick do your work is NOT SHOWING INTEREST IN UNDERSTANDING.
Get off your butt. The data is there. The papers are there.
google is your friend
Matthew R Marler | July 2, 2014 at 12:18 pm |
Steven Mosher: The Karl papers on TOB are online.
You know the sayings about honey and vinegar.
A link is worth any large number of insults. In less time than you used to write your self-righteous insults, you could have supplied the link.
Supply the link, please.
I am not your librarian.
DAFS.
for the dullards who can’t search
Seriously.
Mathew, interested?
I think not.
Here is what will prove you are interested.
1. Get JerryB’s data or read Nick Stokes’ post to prove you UNDERSTAND the problem.
2. get CRN data hourly to validate that changing the time of observation
does cause a problem. Post your study.
3. learn to use google
http://journals.ametsoc.org/doi/pdf/10.1175/1520-0450%281986%29025%3C0145%3AAMTETT%3E2.0.CO%3B2
Steven Mosher: I am not your librarian.
DAFS.
That’s your choice.
Steven Mosher: Mathew, interested?
Please spell my name correctly: Matthew.
comments on spelling?
I take it you will not
1. Download Jerry Bs data and confirm for yourself that it is a problem.
2. Do your own confirming study using CRN
3. read the papers
you will just pretend that nobody pointed you at the materials..
or ask for another link.
Steven Mosher
“mw?
do you care about TOBS?, …”
No, wasn’t mw … maybe stevefitzpatrick. I was bustin’ your chops for shooting innocents. though certainly you have been provoked enough on this topic…
Steven Mosher: comments on spelling?
Yes. It’s my name.
Steven Mosher: 3. read the papers
Sadly I already have a reading list. Those papers will at best come after I read the new Curry et al book on the thermodynamics of clouds.
You were expecting everybody to be your shadow?
:)
Brandon
Good explanation. I would only add to that a few things.
Firstly, in general the climate world has come to rely on data that is not necessarily good in the first place. As an example, the global SST record could not be considered to have any sort of reliable global reach until the 1960s, yet we have calculations of trend based on some mythological beast that supposedly has a reach back to 1860.
Secondly, the general upward trend disguises many nuances. It would be interesting to examine, for example, the fact that not everywhere in the world is warming, let alone at the same rate, so a better breakdown into regions would be a more worthwhile analysis than a one-size-fits-all global temperature.
Thirdly, the upward temperature trend can be detected back to around 1690, with an acceleration around 1720, then various ups and downs but a general upward trend. The modern temperature record is therefore a staging post in the upward trend but not the starting post.
We might understand the modern warming trend better if we understood the reasons behind historic warm and cool periods better.
Last, but not least, I think Mosh and his team do a good job over at BEST. They are trying to unravel a very confused historic ball of string with numerous ends and need time and our support in order to get to wherever the end is. Despite what Mosh says, we are interested in understanding.
tonyb
seriously tony
I don’t like NCDC’s approach to creating long series
but I can understand this
1. They built an algorithm to create long complete series
2. That is something I would not do
3. Their algorithm does what it is supposed to do.
It’s pretty simple.
We can admit that and then have a discussion about the NEED to build long series.
But nobody wants to have that conversation.
Mosh
When you say that nobody wants to have that conversation, who is that ‘nobody’ you are talking about?
tonyb
climatereason, while those might be points worth making, I don’t see any of them as particularly relevant to the current discussions. Nobody seems to be talking about them at the moment.
Brandon
I agree. I’m just saying that if the establishment are going to look seriously at temperature reconstruction there are many other things they need to consider if we are interested in getting to the bottom of the climate.
tonyb
> Firstly, in general the climate world has come to rely on data that is not necessarily good in the first place […]
I’m not sure how you can reconcile the above with “the upwards temperature trend can be detected back to around 1690 with an acceleration around 1720,” tonyb.
Steven Mosher
“It’s pretty simple.”
Yes.
“We can admit that and then have a discussion about the NEED to build long series.”
Yes and that discussion should be held.
“But nobody wants to have that conversation.”
Look harder. Out in the hard cruel totally unthinking world there must be someone other than you that thinks that discussion should occur.
Willard
Changing climate can be detected in a variety of ways, for example glacier movements, crop records and tree lines. If you have quite a few pieces of evidence and observation they are worth considering as a whole with such things as instrumental readings
tonyb
Thank you for taking back your claim instead of dealing with your hot potato, tonyb.
Heh, Brandon; willard talks to himself about it and misunderstands.
==============
Steven Mosher: But nobody wants to have that conversation.
Jeez, you’re tedious.
Simple Mathew.
start here
http://noconsensus.wordpress.com/2010/03/24/thermal-hammer/
ya mathew that was 2010.
in 2010 skeptics got rid of the need for long stations
in 2010 they computed a global trend using GHCN raw
in 2010 they showed more warming than CRU.
then everybody lost interest.
except a few of us.
here tony
Here is the first skeptical approach to calculating a global average
without using ANY approach to ‘creating’ long stations.
Recall, The whole discussion of adjustments and data infilling
comes from the REQUIREMENT for long stations.
GISS uses RSM to get long stations.
CRU uses adjusted data to get long stations
What happens if we define a method that doesn’t need long stations?
well a skeptic did that
http://noconsensus.wordpress.com/2010/03/24/thermal-hammer/
You were in that conversation.
Were you interested in this issue of getting rid of the need for long stations?
Nope. Your comments indicate a desire to shift the conversation away from the core finding.
Mosh
Your 1.29
How is that 4 year old quote shifting from the core message?
You will have noticed I said ‘good work look forward to reading more of this stuff’. How is that shifting the emphasis from the core finding?
Lots of work is going on and more data is being uncovered. That is a good thing, and I support you in your endeavours, as you will have seen above if you hadn’t been preparing your stockpile of brickbats.
Tonyb
Mosher, following on from your comments about creating two data files, a real one with actual temperatures and another of estimated temperatures (or anomalies) based on what is “expected”: how is it possible to justify continually changing what the “expected” temperature in the distant past was as new data is added each month in the present? Should not the agencies involved simply report the data as recorded, rather than changing the past continuously?
Thanks for the overview, Brandon.
Glad to!
Brandon/Zeke/Nick/Mosh- How do you feel about Watts’ proposal to use only the best data from highest quality stations not requiring so many adjustments for long-term climate trend analysis? As long as they provide adequate spatial/field coverage? It seems like that would un-muddy the waters significantly. Could you direct me to any papers in the past where this was accomplished?
“Could you direct me”
I did pretty much that here.
Try Amundsen station, Antarctica. Record goes continuously back to the 1950’s where there were always well trained scientists and technicians making the measurements. aCO2 in the well mixed atmosphere rises there as much as anywhere else. Moreover the effects of other things such as black carbon on snow and water vapor don’t interfere with the effect of aCO2. The raw data at Amundsen are flat to falling for the entire record. However, after adjustments done by BEST Amundsen shows a rising trend of 0.1C/decade.
Amundsen is a smoking gun as far as I’m concerned. Follow the satellite data and eschew the non-satellite instrument record before 1979.
Starts in 1957, runs to 1987, then partial samples per year to 2000 where there’s no data until 2005/2006 which are partial then full years through 2013.
Hmmm, I think it’s a good idea. There are two main schools of thought when it comes to making temperature reconstructions (modern or paleo). 1) Focus on the best data, using the rest only in comparison to it; 2) Focus on all data, using statistical methods to account for problems in it.
The first option is the safest. It’s the hardest to mess up. I don’t know that that means it is the best though. If one did the math right, it might be possible the second option would actually be better. The problem is you have to do the math right. If you don’t, you can wind up making things worse.
Brandon Shollenberger commented
They’re both wrong; both allow the end user to manipulate what you get out. Either you cleverly pick a set of stations to produce the direction of trend you want, or, same thing with statistical methods, the end user has a lot of flexibility over what comes out. Sure, they (usually) think what they’re doing is correct, for good reason. But it leaves a lot of room for adjustments.
I think you have to use the measurements as they are, even if you have to say there are places we don’t have measurements for and therefore don’t know what the temp is there, and the best we can give is an estimate, but that place can’t have the same accuracy as someplace that is actually measured.
“Brandon/Zeke/Nick/Mosh- How do you feel about Watts’ proposal to use only the best data from highest quality stations not requiring so many adjustments for long-term climate trend analysis?”
1. Anthony has already data snooped, like Mann data snoops proxies.
Most people don’t get this.
2. You would need to define “best station” PRIOR to looking at any data.
3. The definition must be supported by field studies. The site selection criteria he has proposed have NEVER been formally and completely tested and documented. In the one field test where site criteria were tested over a short period, no substantial bias was found.
4. The criteria for “high quality” must be objective so we don’t have the issue of human raters.. as in Cook’s paper.
So ya, you could use the data from the best stations.
Take CRN for the past 10 years.
Compare the 110 CRN stations to the thousands of bad stations.
DID THAT. Guess what? Guess fricking what?
When you select CRN, the stations Anthony accepts, guess what happens when you compare them to the horrible horrible stations?
You get the same answer. Why? Because the criteria for a good station HAVE NEVER BEEN TESTED, formally or completely. People have just established “guidelines” based on anecdotes or physical theory.
“However, after adjustments done by BEST Amundsen shows a rising trend of 0.1C/decade.
Amundsen is a smoking gun as far as I’m concerned. Follow the satellite data and eschew the non-satellite instrument record before 1979.”
BEST does no ADJUSTMENT to the data.
All the data is used to create an ESTIMATE, a PREDICTION
“At the end of the analysis process,
% the “adjusted” data is created as an estimate of what the weather at
% this location might have looked like after removing apparent biases.
% This “adjusted” data will generally to be free from quality control
% issues and be regionally homogeneous. Some users may find this
% “adjusted” data that attempts to remove apparent biases more
% suitable for their needs, while other users may prefer to work
% with raw values.”
With Amundsen if your interest is looking at the exact conditions recorded, USE THE RAW DATA.
If your interest is creating the best PREDICTION for that site given ALL the data and the given model of climate, then use “adjusted” data.
See the scare quotes?
The approach is fundamentally different than adjusting series and then calculating an average of adjusted series.
Instead we use all the raw data. And then we build a model to predict the temperature.
At the local level this PREDICTION will deviate from the local raw values.
it has to.
“I think you have to use the measurement as they are, even if you have to say there are places we don’t have measurements for and therefore don’t know what the temp is there, and best we can give an estimate, but that place can’t have the same accuracy as someplace that is actually measured.”
Places you don’t have measurements for make up almost the entire US. Yet you’re estimating the US. So in effect, you integrate your best estimate for all the other points. That will be some kind of interpolation.
That’s the principle here. Everything is estimated. If you have a system that is designed, with correct area weighting, to sum the effect of some number of stations, and you are missing data, then you won’t make it worse by substituting your best estimate of what that data would have been. because that’s what the averaging process necessarily does anyway. You have less real data; you can’t escape that. But you’re doing the best that you can. And 800+ is less than 1218, but still plenty.
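A minimal sketch of this point, on a made-up one-dimensional “region” (the field, station coverage and noise are invented): dropping the empty cells amounts to assuming they sit at the mean of the reporting cells, while infilling each empty cell from its reporting neighbours is usually a better implicit guess, and with adequate coverage neither changes the area average much.

import numpy as np

rng = np.random.default_rng(2)
n_cells = 1000
x = np.arange(n_cells)
truth = (10 + 8 * np.sin(np.linspace(0, 3 * np.pi, n_cells))
         + rng.normal(0, 0.5, n_cells))       # "true" cell temperatures

reporting = rng.random(n_cells) > 0.3         # ~30% of cells have no station
obs = np.where(reporting, truth, np.nan)

report_mean = np.nanmean(obs)                 # simply drop the empty cells

# Infill empty cells by linear interpolation from reporting neighbours.
infilled = obs.copy()
infilled[~reporting] = np.interp(x[~reporting], x[reporting], obs[reporting])

print("true area mean      :", round(truth.mean(), 3))
print("reporting cells only:", round(report_mean, 3))
print("with infilling      :", round(infilled.mean(), 3))

With sparse or strongly clustered coverage the two estimates diverge more, which is where the choice of infill method starts to matter.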
Nick Stokes commented on
I’m sympathetic to this, except there are real measurements, and there is an area that said measurement might be an accurate measure of. The issue as I see it is that temperature is not a linear spatial field*, and any linear extrapolation is error prone; the larger the distance, the larger the likely error. At some point you have to admit you’re just making up numbers. I get that, depending on how the data is processed, infilling makes it more “accurate”, but it also makes the answer less certain.
*When you watch the late night local news and they report the area temps, there’s always a range of temps (at least where I live), sometimes it’s quite large.
I think Nick has done a good job showing that substitution of the estimations for missing values is OK. It would make the error bars bigger if, in whatever the series, those values were substituted for good, measured values. But this doesn’t appear to happen in most temperature series. And no particular series was mentioned in Nick’s post, so this is just a general statement.
Nicely written, Brandon.
Thanks!
Your example on the other blog had too few data points to be valid. Try repeating your matrix until you have 1000 lines or so, and a few missing points (10%-20%) will make no significant difference in either the column average or the grand average.
Bob Greene, that’s not true. 10% missing values can easily be enough to bias results. It’s trivial to show this. The only thing you have to do is use values more like USHCN’s instead of the ones I used. When the numbers are small, like in my example, the effect of missing values is small. When you have larger numbers, like measured temperatures of 70 degrees, the effect of missing values can be much larger.
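A toy sketch of that point, with invented numbers: when averaging absolute temperatures, 10% missing months at the warmer of two stations visibly shifts a naive average of all available values, while per-station anomalies barely notice the same gaps.

import numpy as np

rng = np.random.default_rng(3)
months = 1200
cool = 50 + rng.normal(0, 2, months)          # station with a ~50 F climatology
warm = 70 + rng.normal(0, 2, months)          # station with a ~70 F climatology

missing = rng.random(months) < 0.10           # 10% of warm-station months lost
warm_obs = np.where(missing, np.nan, warm)

def naive_average(a, b):
    # average of every available absolute value, no station weighting
    vals = np.concatenate([a[~np.isnan(a)], b[~np.isnan(b)]])
    return vals.mean()

def anomaly_average(a, b):
    # average of per-station anomalies, each relative to its own mean
    anoms = np.concatenate([a[~np.isnan(a)] - np.nanmean(a),
                            b[~np.isnan(b)] - np.nanmean(b)])
    return anoms.mean()

print("naive, no gaps   :", round(naive_average(cool, warm), 2))
print("naive, 10% gaps  :", round(naive_average(cool, warm_obs), 2))
print("anomaly, 10% gaps:", round(anomaly_average(cool, warm_obs), 2))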
Read my earlier reply to Brandon. Don’t trust his summary.
So people can find that reply, here is a link.
Brandon, I see your point. But when it comes to BEST, they can use their break methodology to compensate for lost data. If a data point is missing from a series, they can break it up into two separate series. This is why I don’t understand why Mosher and Nick keep saying they HAVE to use estimates. (And I DO UNDERSTAND they calculate a temperature field, so the field will supply CALCULATED values anywhere on the grid whether there is MEASURED data there or not.)
And that being said, I’m not clear on how they handle bad data, such as that found in the Luling station with the bad cable. IMO, it should be dropped. If that breaks a continuous series, just make two series from it.
At any rate, I appreciate your explanation.
jim2, I’m not sure why you’re talking about BEST in reference to this. BEST’s approach is basically nothing like what’s done for the USHCN data set. USHCN has to use estimates like it does because that’s the way its methodology was designed. It could use a different methodology, but that’d require making huge changes. There’s no apparent need for such.
Brandon. I simply wanted to talk about BEST. Simple. I think you can tell from my comment that I realize BEST uses different methodology.
Can’t you?
jim2, I wasn’t sure what you intended because it was a random transition. It can be hard to tell why a subject is changed at times. I figured you were just switching to a different topic, but I thought it was best to answer in a way that covered both possibilities.
http://onlinelibrary.wiley.com/doi/10.1029/2003GL018111/abstract
I am sick and tired of linking to this paper over the years
http://onlinelibrary.wiley.com/doi/10.1029/2003GL018111/pdf
These two used to be available on line
Karl, T. R., and C. N. Williams Jr., An approach to adjusting climatological time series for discontinuous inhomogeneities, J. Climate Appl. Meteorol., 26, 1744–1763, 1987.
Karl, T. R., C. N. Williams Jr., P. J. Young, and W. M. Wendland, A model to estimate the time of observation bias associated with monthly mean maximum, minimum and mean temperatures for the United States, J. Clim. Appl. Meteor., 25, 145–160, 1986.
They are REQUIRED FRICKING READING.
If you haven’t read them, YOU DON’T WANT TO UNDERSTAND.
If you do care, then like me, you would have hunted this crap down and read it YEARS AGO, since we’ve been discussing it since 2007 at least
and since skeptics first misunderstood this in 2002.
Thanks for that. I hadn’t seen those before. It might be good if Judy kept a link for critical papers – critical in the sense that they supply needed information on controversial issues.
And the “you don’t want to understand” business is not needed or correct. Some of us have a life full of demands and haven’t thought of some of these things.
I think it’s time to dial down the boorishness.
jim2, there is a reason the key to understanding climate data is evidently buried in obscurity.
Keep peelin’ that onion.
Andrew
New people are becoming interested in this topic, so it is always helpful to provide links
Here you go Steven:
http://www.homogenisation.org/files/private/WG1/Bibliography/Applications/Applications%20(K-O)/karl_etal_1987.pdf
http://journals.ametsoc.org/doi/pdf/10.1175/1520-0450%281986%29025%3C0145%3AAMTETT%3E2.0.CO%3B2
While I don’t see any problems in principle to supply calculated data for dropped stations, it does mean there are fewer measured data points. This means the error bars get bigger.
So, hopefully, when it comes to generating error bounds, they don’t use the zombie stations for that.
I don’t see anything here about error bars.
http://www.ncdc.noaa.gov/oa/climate/research/ushcn/#missing
jim2 “While I don’t see any problems in principle to supply calculated data for dropped stations, it does mean there are fewer measured data points. This means the error bars get bigger.”
I believe the error bars are smaller because the estimated data increases the total number of data points.
jim2, You’re definitely right that infilling means there is less data available than it might seem, and that should increase error levels. I don’t know if it did or not in this case because there are so many issues with these error margins. I don’t think the uncertainty calculations were done well for any temperature reconstruction, modern or paleo.
captDallas
“I believe the error bars are smaller because the estimated data increases the total number of data points.”
Don’t forget you have measurement error for the observed values and estimation error for the interpolated value(s).
mwgrant, “Don’t forget you have measurement error for the observed values and estimation error for the interpolated value(s).”
If you are using anomalies the measurement error is very small which also reduces the interpolation error. With absolute temperatures you have a whole new can of worms to deal with.
CD. Given the uncertainties surrounding the TOB for temp and Max/Min values, I don’t see how you can just say the measurement errors are small. Look at the daily variation of temp compared to the trend. Any +/- error in the absolute temperature is carried over to the anomaly.
jim2, ” I don’t see how you can just say the measurement errors are small.”
Just the nature of the beast. A LiG (liquid-in-glass) thermometer can be off a few degrees in absolute terms, but the variation will still be close even with the mercury separated. Digital is about the same, though near the upper and lower ends of their ranges they can get flaky. There is likely a small warm bias near the poles because of the switch to digital thermometers with batteries that really don’t like arctic temperatures.
captdallas, in a nutshell, anomalies are second class objects, data are first class. We can not observe temperature anomalies. I do not deny the utility of anomalies in some circumstances but they are derived entities. I do not view them as necessities as there are other alternatives.
mwgrant, ” I do not view them as necessities as there are other alternatives.”
What alternatives? If you use a temperature scale you are just picking an arbitrary baseline and then measuring anomalies. No real difference. The nature of the instrumentation is what you should consider. With Liquid in Glass, physics demands that the liquid expands and contracts with temperature. Physics doesn’t demand where the zero is ascribed.
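A minimal sketch of that point, with invented numbers: a constant calibration offset shifts every absolute reading but drops out of the anomalies and leaves the trend untouched.

import numpy as np

years = np.arange(1950, 2015)
true_temp = 15.0 + 0.01 * (years - 1950)      # deg C, a 0.1 C/decade trend
measured = true_temp + 2.0                    # thermometer reads 2 C high

true_anom = true_temp - true_temp.mean()
meas_anom = measured - measured.mean()

print("max absolute error     :", np.max(np.abs(measured - true_temp)))   # ~2.0
print("max anomaly error      :", np.max(np.abs(meas_anom - true_anom)))  # ~0.0
print("trend measured (C/dec) :", round(10 * np.polyfit(years, measured, 1)[0], 3))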
“And the “you don’t want to understand” business is not needed or correct. Some of us have a life full of demands and haven’t thought of some of these things.
I think it’s time to dial down the boorishness.”
If you don’t have the time to devote to studying it yourself,
then maybe you shouldn’t comment. Maybe you should lurk.
Thanks for your opinion on how I should spend my time, Mosher, but it isn’t your business.
captdallas
“What alternatives?”
For fun I have been looking at the spatial variability of temperature (USA) one year at a time. A 2nd-order regression polynomial in location and elevation is fit to the data, leaving residuals which are used to construct experimental variograms for that year. I have stopped at that point since I am only curious about the change in correlation over time and regional (physiographic) effects. This was mostly brought about by BEST’s assumption/use of a constant correlation function over time.
With regard to alternatives to anomalies: in theory I could use each variogram in an ordinary kriging program to generate the estimated temperature field for that year, along with sundry local error estimates and of course a ‘global’ estimate and error for the USA. And this can be done for any year in the database. So one can generate a stack of temperature fields over time and just as easily (figuratively speaking) generate an annual temperature time series for any location, or even a USA estimated ‘global’ temperature series over time. To me that was not worth the effort, because to really make something useful of it there are so many important details about the data I would need to be familiar with, and I am not – QA and error analysis are a pain and my days of doing that drudgery are done. Besides, someone will do something like this sometime – if they haven’t already.
BTW I found what I and others would expect. Yes, the spatial correlations (variograms) vary/change over time; yes, there are variogram differences between flat areas and mountains (duh!) – beyond what can be explained by elevation in the regression; yes, correlation distances are less than what is reported by BEST and Cowtan and Way – I get 400-600 km, with an occasional suggestion of 150-200 km in the flat areas and at times a suggestion of a Gaussian shape indicative of gradual change. None of this is a surprise or a concern. But then again maybe I found it because I was looking for it.
[sorry for any typos, I’m tired.]
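For readers who want to see roughly what that procedure looks like, here is a rough sketch on synthetic stations. The data are invented, distances are naive degrees rather than great-circle kilometres, and this is not mwgrant’s actual code: regress temperature on a second-order polynomial in location and elevation, then bin squared residual differences by separation distance to form an experimental semivariogram.

import numpy as np

rng = np.random.default_rng(4)
n = 500
lon = rng.uniform(-120, -75, n)
lat = rng.uniform(28, 48, n)
elev = rng.uniform(0, 2500, n)                # metres
temp = 30 - 0.7 * (lat - 28) - 0.0065 * elev + rng.normal(0, 1.0, n)

# Second-order polynomial design matrix in (lon, lat, elev).
X = np.column_stack([np.ones(n), lon, lat, elev,
                     lon**2, lat**2, elev**2,
                     lon*lat, lon*elev, lat*elev])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
resid = temp - X @ beta

# Experimental semivariogram of the residuals.
i, j = np.triu_indices(n, k=1)
d = np.hypot(lon[i] - lon[j], lat[i] - lat[j])     # separation in degrees
gamma = 0.5 * (resid[i] - resid[j]) ** 2
bins = np.linspace(0, 10, 21)
which = np.digitize(d, bins)
for b in range(1, len(bins)):
    sel = which == b
    if sel.any():
        print(f"lag ~{bins[b]:4.1f} deg  gamma = {gamma[sel].mean():.3f}")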
mwgrant, “For fun I have been looking at the spatial variability of temperature (USA) one year at a time.”
The changes in the variance in some locations were pretty interesting to me. Amundsen-Scott for example is giving kriging a bit of a challenge. On the whole though, the temperature records are not that bad considering the challenges, and kriging or some other means of interpolating is pretty much required. Unfortunately, GISS and NOAA picked about the worst way to do things, resulting in confusing adjustments.
“Thanks for your opinion on how I should spend my time, Mosher, but it isn’t your business.”
cool. you’ll understand if I don’t waste my time on you.
“Amundsen-Scott for example is giving kriging a bit of a challenge.”
yes for a couple reasons.
Note that the weird data points happen almost exclusively in two time periods: Feb/March and Nov.
Note also that we have the nasty katabatic wind issue
Note also that the closest stations are very far away.
The station has actually given me a way of sorts to find areas that may be subject to inversions that is data error driven rather than geometry driven.
captdallas and steven mosher
“The changes in the variance in some locations were pretty interesting to me. Amundsen-Scott for example is giving kriging a bit of a challenge.”
Bleep happens and we live with it. :O)
It goes without saying that kriging is an interpolation method, not a panacea. The issues Steven noted will trip up most if not all estimation techniques. The winds bring to mind locations along the western and eastern fronts of the Rockies… I’ve wondered about those locations… My last winter in Salt Lake City I saw the mother of inversions…
For the record IMO the use of a single constant global correlation function might impact things but maybe not as much as one might think, in particular because error estimates are developed outside the kriging (in BEST). No close neighbors and one or more factors not caught by the correlation function are more serious.
mw
the list of things we have looked at to handle these corner cases is pretty big.
the inversions will require a super dense DEM. then I have to find valleys or cliffs and such. nasty business. then the lapse rate regression needs to be changed on a seasonal basis. thats if we want to get the local detail right.
This is the fundamental thing that people forget.
Let’s start with TonyB’s favorite series.
That’s one series that comes close to matching the global average.
Add a few more long records and you get closer.
Use the longest 1000 and you get even closer.
Let’s say with 100 stations you are within .1C of the best estimate.
Well, what’s wrong? What’s wrong is the local detail: you have huge grid cells that all have the same temperature.
So you go to 1000 stations: the global average doesn’t change.
What changes… the local detail.
You go to 10000: the global doesn’t change.
What changes.. the local detail.
You go to 40000.. the global doesn’t change.
What changes.. the local detail.
In all of this … once you reach a certain threshold the global stops changing ( over sampling) but the local detail gets richer.
AND the more local detail you have, the more likely you will find something that looks wonky and probably is wonky.
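A toy sketch of the oversampling point (the temperature field, station placement and noise are all synthetic): beyond a certain station count the gridded global mean barely moves, and extra stations mainly refine local detail.

import numpy as np

rng = np.random.default_rng(5)

def field(lat, lon):
    # made-up smooth temperature field (deg C)
    return 28 * np.cos(np.radians(lat)) - 5 + 2 * np.sin(np.radians(lon))

def gridded_global_mean(n_stations, cell=10.0):
    lat = np.degrees(np.arcsin(rng.uniform(-1, 1, n_stations)))  # area-uniform
    lon = rng.uniform(-180, 180, n_stations)
    t = field(lat, lon) + rng.normal(0, 1.0, n_stations)         # station noise
    lat_edges = np.arange(-90, 91, cell)
    lon_edges = np.arange(-180, 181, cell)
    total, weight = 0.0, 0.0
    for k in range(len(lat_edges) - 1):
        w = np.cos(np.radians(lat_edges[k] + cell / 2))          # cell area weight
        for m in range(len(lon_edges) - 1):
            sel = ((lat >= lat_edges[k]) & (lat < lat_edges[k + 1]) &
                   (lon >= lon_edges[m]) & (lon < lon_edges[m + 1]))
            if sel.any():
                total += w * t[sel].mean()
                weight += w
    return total / weight

for n in (100, 1000, 10000, 40000):
    print(n, "stations ->", round(gridded_global_mean(n), 2))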
mwgrant, could you explain this comment:
“For the record IMO the use of a single constant global correlation function might impact things but maybe not as much as one might think, in particular because error estimates are developed outside the kriging (in BEST).”
I could have sworn their uncertainty calculations required using kriging.
Steven Mosher, ““Amundsen-Scott for example is giving kriging a bit of a challenge.”
yes for a couple reasons.”
You should add the most important reason: Amundsen-Scott doesn’t have any peers. It is the Southern Hemisphere’s surface temperature “singularity”, unless you want to add stations within a hundred miles or so.
Brandon Shollenberger,
“I could have sworn their uncertainty calculations required using kriging.”
You are correct, the kriging calculations are used in the jackknife. But note that while one cannot krige with total disregard for the appropriateness of one’s correlation function, there may be some room for slop [see next paragraph], particularly when one considers issues such as Steven brought up that are not easily incorporated into the kriging (and most other common estimation schemes).
Room for slop? There is a rule of thumb that local kriging estimates of a regionalized variable, e.g., the temperature (or temperature residual), are not as sensitive to the variogram [correlation function] as are the local error estimates. Apparently there is some practicality to the rule, because at least one popular commercial contouring package operates/operated this way: if you select the ‘kriging’ option and do not bother with any variography (the heart of the matter) then the program will default to a linear variogram which it can fit by simple linear regression. If you are only worried about making the local point or block estimates, i.e., a grid, that quick and dirty approach may suffice. Maybe BEST is lucky even with its constant correlation function. (That remains open, and I just note that it might be a good compromise for now when some more tough issues are out there. …a step at a time.)
I hope this makes the comment a little clearer. It’s just a perspective, not dogma.
Steven Mosher
“the list of things we have looked at to handle these corner cases is pretty big.”
And knotty.
“the inversions will require a super dense DEM. then I have to find valleys or cliffs and such. nasty business. then the lapse rate regression needs to be changed on a seasonal basis. thats if we want to get the local detail right.”
Nasty indeed…brutal in regards to recognition and assimilation. Sometimes local is pretty far flung too. Thinking again of SLC where drainage winds flow 60-70 miles down the canyons from Wyoming. Maybe eventually something like multi-point geostatistics that incorporates training images will prove useful. (Normal two-point geostatistics is maximum entropy and handles things like connectivity poorly.)
multi-point:
http://geostats2012.nr.no/pdfs/1744859.pdf
http://mmc2.igeofcu.unam.mx/cursos/gest/Articulos/Geostatistics/Multiple-point%20geostatistics%20a%20quantitative%20vehicle%20for%20integrating%20geologic%20analogs%20into%20multiple%20reservoir%20models.pdf
…
“In all of this … once you reach a certain threshold the global stops changing ( over sampling) but the local detail gets richer.
AND the more local detail you have, the more likely you will find something that looks wonky and probably is wonky.”
Look for it and you will find it. It can be an interesting problem–drawing the line–but that makes it worth while.
Thanks for the (partial) list.
mwgrant, that definitely makes more sense.
Unless I’m missing something though, your comment does leave out a significant point. The correlation function used in the kriging process is not the only correlation function used by BEST. They regress out signals for several effects prior to kriging their data (e.g. latitude). Your discussion regarding kriging wouldn’t apply to the effect of those correlation functions.
Brandon Shollenberger
“The correlation function used in the kriging process is not the only correlation function used by BEST. They regress out signals for several effects prior to kriging their data (e.g. latitude). Your discussion regarding kriging wouldn’t apply to the effect of those correlation functions.”
When I refer to correlation function above it is that ‘correlation function’ which indicates pair correlation as a function of separation distance. This is not the same as the ‘correlation coefficients’ that appear in regression. This is an unfortunate collision of terms.
See
http://en.wikipedia.org/wiki/Correlation_function
and
http://en.wikipedia.org/wiki/Correlation_coefficient
In the case of BEST the regionalized variable (link below) is the random residual field after the latitude and elevation contributions to the temperature field are removed. By doing that one arrives at a correlation function (for example Figures 1 and 2 in the supplement to the averaging paper) that is then suitable for use in ordinary kriging:
http://www.scitechnol.com/2327-4581/2327-4581-1-103a.pdf
Anyway that’s the way it looks to me. The same as or similar to regression (universal) kriging: one regresses away the underlying trend and then kriges the residuals, ultimately arriving at estimates by stitching the trend surfaces back together with the kriged residual surface. …likely off in some aspect as BEST is a unique house implementation with a narrow focus, but it should be close to the scheme. (There are some machinations with the latitude and elevation functions in the BEST kriging step that I haven’t yet grokked. Reading that dense paper on screen is difficult on the eyes ;O))
I expect to get better insight on the BEST approach if Mosh, Zeke, and Rohde are able to post here as discussed.
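To make the regression-then-krige workflow described above concrete, here is a rough numpy sketch. This is not BEST’s actual code; the covariance model, range and data are invented. It removes a latitude/elevation trend by least squares, kriges the residuals under an assumed exponential covariance, and adds the trend back at the prediction point.

import numpy as np

rng = np.random.default_rng(6)
n = 200
lat = rng.uniform(30, 48, n)
elev = rng.uniform(0, 2000, n)
x = rng.uniform(0, 1000, n)                   # km, toy map coordinates
y = rng.uniform(0, 1000, n)
temp = 30 - 0.6 * (lat - 30) - 0.0065 * elev + rng.normal(0, 0.8, n)

# 1) Regress out the latitude/elevation "climate" component.
X = np.column_stack([np.ones(n), lat, elev])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
resid = temp - X @ beta

# 2) Ordinary kriging of the residuals under an assumed covariance model.
def cov(h, sill=0.64, range_km=400.0):
    return sill * np.exp(-h / range_km)       # exponential, nugget ignored

d = np.hypot(x[:, None] - x[None, :], y[:, None] - y[None, :])
K = np.vstack([np.hstack([cov(d), np.ones((n, 1))]),
               np.hstack([np.ones((1, n)), np.zeros((1, 1))])])   # OK system

def predict(lat0, elev0, x0, y0):
    h0 = np.hypot(x - x0, y - y0)
    w = np.linalg.solve(K, np.append(cov(h0), 1.0))[:n]           # kriging weights
    trend = np.array([1.0, lat0, elev0]) @ beta
    return trend + w @ resid                  # 3) stitch trend + kriged residual

print("predicted temp at a test point:", round(predict(40.0, 500.0, 500.0, 500.0), 2))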
Brandon Shollenberger
Addendum…
mwgrant, could you explain this comment:
“For the record IMO the use of a single constant global correlation function might impact things but maybe not as much as one might think, in particular because error estimates are developed outside the kriging (in BEST).”
BTW when poking around the BEST papers last night after already replying to your question above I happened across this paragraph at the bottom of page 5 in the Appendix to the averaging paper:
“Though these values were used in the averaging model, the global scale results were found to be quite insensitive to the specific parameter choices in the correlation function. Experiments where dmax was adjusted by large factors (e.g. +100% or -50%) were conducted and the changes in the global annual average were generally smaller than or similar to the uncertainties arising from other factors. This suggests that the Berkeley averaging method is relatively insensitive to the details of R(d). This is not surprising given that the separation distance between stations is often much less than the effective correlation length.”
http://www.scitechnol.com/2327-4581/2327-4581-1-103a.pdf
I had missed that or had just forgotten about it over time. D’oh!
Extremely helpful Brandon. Thank you.
My takeaway:
Goddard’s methodology is wonky. Watts jumped the gun (and should probably continue to stay away from anything related to Goddard). The whole thing, while technically interesting, is a bit of a tempest in a teacup. The past decade, globally, was the warmest on the instrument record, with the past two months the warmest on record. Probably the real reason for Goddard creating this tempest in a teacup.
I’m glad to hear it.
Personally, I don’t get why Anthony Watts sided with Goddard in the way he did. His post was confused, and it misled people. He’s since tried to justify the confusion in his post by claiming the Politifact article he referenced misquoted a bunch of people (or rather, misrepresented their quotes), but I find that justification weird as he didn’t inform people of the supposed deception. I don’t think I’ll be able to get to the bottom of it though, as he censored me when I tried to clarify the issues.
Brandon, your “team” tried to smear Goddard and keep trying to confuse two different issues.
One issue is raw vs. tobs vs final.
Another issue is that estimated data (some from zombie stations) changes the trend.
sunshinehours1, given my entire latest post is devoted to clarifying the different issues, I don’t think you’ll find many people who agree my “team” keeps trying to confuse two different issues.
Then again, I don’t think anyone could possibly know what “team” you’re talking about. I sure don’t.
Brandon Shollenberger,
Please let me add my voice to the chorus of “Thank you!”
Glad to!
But you know, if I keep responding to people saying thanks, I’m going to start feeling like an attention whore.
As software folks have said from Day 2, “It’s a feature, not a bug”.
“Political hot potato”?
In 2002, skeptics Balling and Idso published in GRL a reasonably well researched complaint about USHCN adjustments increasing the trend, saying:
“It is noteworthy that while the various time series are highly correlated, the adjustments to the RAW record result in a significant warming signal in the record that approximates the widely-publicized 0.50°C increase in global temperatures over the past century.”
Sounds like a cold potato. Why should NOAA etc now be jumping because some blogger has looked into a file of USHCN data and without further analysis, showed that in one month in Kansas, adjustments had increased the temperatures?
Circumstances have changed a lot since 2002, which was roughly at the height of the ACO2GW paradigm paralysis.
Nick Stokes: Sounds like a cold potato. Why should NOAA etc now be jumping because some blogger has looked into a file of USHCN data and without further analysis, showed that in one month in Kansas, adjustments had increased the temperatures?
You are putting in a respectable effort commenting here and at WUWT, and I thank you for it?
Does that quote adequately summarize the current case? Following Goddard’s effort, it was discovered that a lot of data had been replaced by estimates. Surely a thorough and adequate explanation is warranted, something more than “Our algorithm is working as designed.” If it is a hierarchical linear model with an assumed normal distribution (or gamma, or whatever) with Bayesian estimation via a MCMC algorithm, that would be useful to know, provided of course that the complete raw data and source code were made available. Expecting a bureau to find and correct its own mistakes is a bit rich.
Pharmaceutical companies, who also use Bayesian inference techniques, have their computer code checked by independent contractors (mistakes are always found) and make the code available to FDA. That is a good example for NOAA to follow in this case. If it has been done, please let us know.
oops — I thank you for your efforts, without question.
“it was discovered that a lot of data had been replaced by estimates”
It’s hardly a discovery; the estimated values are marked with an E. If one wants to make a controversy, the correct claim is that the number of stations currently reported has eroded from 1200 to 800-900. But that is still way more than needed for the area of the US.
Why keep “zombie stations”? It’s a US Historical CN. For most of the century, they provided real information. That’s part of the History.
All this fuss is about the adjusted file – no-one seems interested in the unadjusted. Adjustment is done for a purpose – to reduce bias in calculating a global average. If that’s not your purpose, leave it alone.
Nick Stokes: All this fuss is about the adjusted file – no-one seems interested in the unadjusted. Adjustment is done for a purpose – to reduce bias in calculating a global average. If that’s not your purpose, leave it alone.
I infer that you do think the NOAA response was adequate.
“I infer that you do think the NOAA response was adequate.”
I think it is adequate for what they were presented with. I agree with Brandon that the actual alleged problem has never been spelt out, so they were really dealing with a nothing. If you have a real issue and want a response to that, you have to say clearly what it is.
With the nothingness of Texas and Kansas, it seems to have morphed into a general complaint about stations dropping out. That is not an algorithm issue. The algorithm has to cope with it, and it does so as was designed from the beginning – using FILNET to maintain, with interpolation, a full set of adjusted stations.
ps I missed saying it before, but thanks for the kind words.
The bug identified by Spencer and Wyo Skeptic at the ‘climate at a glance’ NOAA website, where average, max and min all give the same numbers (I checked, that is still the case) is the second mistake to be found recently.
A couple of weeks back Paul Homewood found that the anomaly graphs were out by a factor of 12, because they had forgotten to divide by 12 when averaging over 12 months.
http://notalotofpeopleknowthat.wordpress.com/2014/06/13/ncdc-correct-their-error/
These are the people who claim that they produce “quality controlled” data.
It’s a very cold, icy day, and on very cold, icy days I don’t drive out to the rural site to measure the temperature, as we have good sites in town from which we will calculate a result for the rural site.
It’s a stiflingly hot day so I don’t drive out to the rural site to measure the temperature as we have good sites in town from which we will calculate a result for the rural site.
It’s a pretty average day but I’m not going to drive out to the rural site to measure the temperature as we have good sites in town from which we will calculate a result for the rural site.
Town is usually warmer than rural so what would the algorithm do to the rural data set over 70 years?
What about Zombie stations, where are they located compared to the sites used to calculate their “data”?
Is location, reasons for non-reading etc taken into account when infilling is done?
Reblogged this on Tallbloke's Talkshop and commented:
Judy Curry blogs about the official response from NOAA NCDC
“Nothing to see here, move along…”
In my opinion, NOAA is going to stonewall this at least for a few months.
Whether or not legal and political intervention will be needed to move NOAA is anybody’s guess at the moment.
Only one thing is certain: This is a monumental mess. A similar or identical mess seems to have affected Australian and New Zealand data.
Remember the “Harry_read_me.txt” file from Climategate 1? The temperature data has been a rat’s nest for years.
Harry_read_me is about A TOTALLY DIFFERENT DATASET THAT IS NOT SUITABLE FOR CLIMATE ANALYSIS and is NEVER USED FOR CLIMATE ANALYSIS. The fricking manual for that dataset TELLS YOU AS MUCH.
Algorithms contain the knowledge in a system. A text-based example will allow lay people to understand all the detail. Since, in the climate world, they now contain 30-year-old knowledge (e.g. natural variability is just noise), we can all see that some of it is old hat.
Newer algorithms are in use but the fact remains that the necessary exactitude is absent.
We need new algorithms exclusively based on current knowledge.
Well, since it affects 2% of the globe, these US adjustments don’t mean much overall. However, the GISS cooling of the past, also pointed out and animated by Heller, does seem to affect the global signal. Also, as Heller has pointed out, the gridding is meaningless for US data – it is an issue only for world data – so criticising his methods has zero foundation. Also, Goddard is 100% correct that only the adjustments to the US record cause any warming at all in the US: that is just fact!
So is it all politically inspired? Well, our experience of green-left political statements from many climate scientists, now combined with the complete lack of any cooling adjustments whatsoever, would strongly suggest to anyone truly objective that they are fitting the data to the preferred narrative. Skeptics have been pointing this out for years, however, and nothing changes.
Some have said that an average isn’t workable. Gavin said 50 good, well-distributed stations could give us a good global temperature number. After some discussion, I can see a simple average won’t work.
However, from BEST we have a good idea of regional climate. There are only so many significant combinations of humidity and altitude, what we might call climate parameters. It seems it would be possible to define areas with similar climate parameters, then find 50 stations in as many diverse combinations as possible. Then, weight the trend from the stations based on the climate parameters and calculate the temperature trend from that.
You jest, BEST Summaries show Swansea on the South West Coast of Wales in the UK a half a degree C WARMER than LONDON.
Now anybody living in the UK knows that is not correct due to location and UHI in London.
It also shows Identical Upward Trends for both areas of over 1.0C since 1975, obviously BEST doesn’t know that the west coast Weather is controlled by the Ocean and London by European weather systems.
So what does the Met Office say about the comparison? Well, they show that on average Swansea is 0.6 degrees COOLER than London.
So who do you believe, The people who live in the UK and the Met Office or BEST who have changed the values by 1.1 degrees C?
This is why you can’t do all surveillance and evaluation via math and algorithms.
A.C Osborn
Let them spend a year at The Mumbles and a year in Oxford street then they can tell us which location is the warmest
Tonyb
“obviously BEST doesn’t know that the west coast Weather is controlled by the Ocean and London by European weather systems.”
Yes, one of the local details that is hard to get right in every location is the effect, largely seasonal, of distance from the coast.
The model of climate is C = F(y,z,t)
Note what is missing from that model: distance from coast
But it turns out that if you try to add distance to coast to the model, you improve the fit in some areas and make it worse in others,
so overall the variance after fit is unchanged.
So adding distance to coast doesn’t change the GLOBAL answer; however, because it’s not in the model you will, you MUST, find areas where the local detail is wrong.
Now recently we’ve been working on an improvement that will take distance to coast into account. This should decrease the oddball areas where F(y,z,t) gives you inaccurate results.
remember. Our product is a PREDICTION. a prediction based on
1. the raw data
2. a model of climate C= F(y,z,t)
3. interpolated weather.
That prediction will of necessity in some cases get LOCAL DETAIL WRONG
where the local temperature is dominated by some factor
like an inversion layer or a strong seasonal connection with marine air temps. ( distance from coast)
On the plus side, the effect of the coast falls off very rapidly as one goes inland, so the area affected is tiny. It’s also good that for every place and season where the coast effect cools there is another where it has the opposite effect.
Yes, England is a bad place from which to understand the weather of the rest of the world.
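A toy sketch of the kind of comparison described above (the data, the ‘coastline’ and the effect sizes are invented): when the coast effect has opposite signs in different regimes, a single global distance-to-coast term barely changes the overall fit, even though region-specific fits would improve.

import numpy as np

rng = np.random.default_rng(7)
n = 2000
lat = rng.uniform(25, 60, n)
elev = rng.uniform(0, 2500, n)
coast = rng.exponential(150, n)               # km to coast, synthetic
west = rng.random(n) < 0.5                    # two crude coastal regimes

# Coast cools one regime and warms the other by a similar amount.
coast_effect = np.where(west, -0.004, 0.004) * coast
temp = (30 - 0.6 * (lat - 25) - 0.0065 * elev
        + coast_effect + rng.normal(0, 1.0, n))

def resid_var(X, sel=slice(None)):
    beta, *_ = np.linalg.lstsq(X[sel], temp[sel], rcond=None)
    return (temp[sel] - X[sel] @ beta).var()

X0 = np.column_stack([np.ones(n), lat, elev])
X1 = np.column_stack([X0, coast])

print("overall, no coast term   :", round(resid_var(X0), 3))
print("overall, one coast term  :", round(resid_var(X1), 3))
print("west, its own coast term :", round(resid_var(X1, west), 3))
print("east, its own coast term :", round(resid_var(X1, ~west), 3))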
Mosh
England makes a good weather vane reflecting climate far beyond its borders, as the Met Office, Hubert Lamb, Mike Hulme, De Bilt and Steven Mosher have variously confirmed.
Tonyb
> As Wayne Eskridge writes, this issue is a political hot potato.
I thought Eskridge said:
Do you agree with this, Judy?
I agree that they will certainly be more cautious in making changes or admitting mistakes as they know they could upset people above them. This is the danger in having politicized science. It can sometimes slow down the progress of science.
Thank you, Judy.
We can be sure willard’s job is safe.
===============
“Our algorithm is working as designed.”
To make the line squiggle the way we want it to.
Andrew
Dance little squiggly line, dance.
Andrew
JC, “When the adjustments are of the same magnitude of the trend you are trying to detect, then the structural uncertainty inspires little confidence in the trends.”
Is it possible, just maybe, that there is no trend?
It seems that is the only sane conclusion that can be drawn.
Dr. Curry would seem to be saying there might be no warming.
Can someone please explain to a layman why the adjustments for UHI cool the past? Surely as rural stations transition to urban stations the raw data plots the transition, displaying the increasing trend in temperature, which can be subtracted from these station trends?
Infilling with data from urban stations to cover loss/problems with rural stations would require even greater explanation, particularly as we are not talking about one or two stations. Again, as a layman, I am quite frankly astounded that those in the field are quite happy to be using such a high percentage of what is essentially “made up” data.
Because that comment was not correct, possibly an attempt at humor/sarcasm. The corrections that cool the past are time of observation corrections, if I am not mistaken.
You are correct.
The US is rather unique, but not totally unique, in changing the TOB in the past.
This led to a systematic, largely one-directional bias.
Steve is correct in his statements of data infilling, and his detractors may be correct in that it makes little difference, but it does not matter either way. While I appreciate the great effort to have a correct temperature history determined, the big elephant in the room is that it matters little whether the temperature increase in the last 150 years or so is 0.5C or 0.8C (or whatever). The much longer trend is poorly defined, but appears to show that the present temperature is not unusual for the last 10,000 years, and the present trend is flat to down after an already flat to down period of 17 years. CAGW has been falsified, and AGW is overwhelmed by natural variation.
Lol.
.07C per year over last 204 months. 1998 tied once and exceeded twice. current rate of warming is .64C per decade.
Meanwhile, natural variation, which failed to place the DOWN in downtrend, just gave up.
Should be .07C per decade over the last 17 years (204 months).
I am surprised to have found no reference to atmospheric pressure as a factor in calculating what our planet’s surface temperature should be (or indeed explaining the high surface temperature on Venus). It is routinely stated in mainstream information on Jupiter or Saturn that temperatures inside the planets increase as one descends into their atmospheres due to increasing pressure, but when they explain the surface temperature of Venus a greenhouse model is used, even though atmospheric pressure is 92 times greater than on Earth. Pressure, not back radiation, is a much better explanation of the observed effects of atmosphere on temperature, both on Venus and on Earth. If equations of state dealing with planetary objects are modified to include outside energy received (from the sun), I am sure we will be able to prove a much more elegant and straightforward model than the greenhouse model, and one that will be directly testable without the need of a supercomputer!
Mosher says: “We can admit that and then have a discussion about the NEED to build long series.
But nobody wants to have that conversation.”
Steve Goddard did! The impacts of ADJUSTMENTS (not just one) were the underlying premise in his many posts over these several years.
Many reacted to his presentation of temps as the average of the initial RAW MEASUREMENT DATA (Brandon’s Point 4). As has been found, the individual station readings are where the action/attention should have been focused, to analyze the impact of each and every ADJUSTMENT made to them.
In the end it comes down to Goddard’s fundamental question: why do adjustments to the past data always cool it, while adjustments to the recent data always warm it?
Brandon’s point 1 is fundamental to why data is processed, and although important, not actually meaningful. To do analysis on these steps we NEVER start by processing the data, unless those processes are proven to be true and ALWAYS needed.
Points 2 and 3 are just some of the issues needing resolution to answer Goddard’s question. Point 4 appears to have initiated a turf fight over who does the data processing BEST and WATT needs to be considered in those processes to calculate the average temperature.
But nobody wants to have that conversation.”
Steve Goddard did! The impacts of ADJUSTMENTS (not just one) were the underlying premise in his many posts over these several years.
1. No he did not.
2. The issue is this: do you need long series to calculate an average?
A) GISS & CRU YES
B) Berkeley FOLLOWING THE IDEAS PROMOTED BY SKEPTICS? NO
Heller NEVER discusses this issue. NONE OF YOU EVER DISCUSS THIS ISSUE. Oh yeah, long ago skeptics discussed this issue. They noted that you DIDN’T NEED LONG SERIES. So skeptics built a NEW APPROACH that didn’t need long series. They recommended it. Berkeley used it.
################################
In the end it comes down to Goddard’s fundamental question: why do adjustments to the past data always cool it, while adjustments to the recent data always warm it?
EASY. For the US… for the US.. for the fricking US.. there was one SYSTEMATIC change made to our recording system. Other countries don’t have this problem (with a couple of exceptions); our country, our system, went through one important SYSTEMATIC change. That systematic change just so happens to cool the past. The adjustment for that change has been verified twice by out-of-sample testing.
In addition there was another systematic change in instruments. This change was tested by comparing the sensors side by side. It too affects the record in one direction. It could have been otherwise, but it wasn’t.
Steve,
Thanks for all the time you are spending explaining how the process works and WHY.
If the Feds had a spare $10 million available for improving the process, how would you recommend spending the funds?
10 Million
Fund data recovery: there are millions of records still on paper.
Fund data liberation: China, India, Korea, US mesonets, ag data; it is all held under lock and key.
Fund a worldwide CRN.
Fund a definitive micro-site bias field study.
Fund a metadata clean-up and metadata expansion: exact station locations would be a nice start; they still suck.
I don’t have a sense of how “egregious” these “estimates” might be, but there is no need to attribute any sinister motive to those who are defending them.
We all (and I mean ALL OF US) cling to our beliefs and defend them based upon the information we have which we believe to be reliable. There are always trade-offs and different ways to look at the information.
In almost every situation we encounter, there are positive aspects of the situation and negative aspects. We always present those aspects which reinforce our beliefs and play down those aspects which we deem irrelevant to our point. This is how and why honest people find themselves misled, no matter how well intended the person providing the information. The further removed we are from the raw data, the more likely we are to misunderstand the point being made by those publishing the data.
The one thing I get from the topic, at least, is how thin the historical temperature record is. Which is, of course, just another brick in the wall.
Of course it’s thin; it’s a damn big planet, much of it covered in water, with even less chance of it having been covered by human measurements in the past. That’s why poor-quality data is often used, because ‘it’s better than nothing’.
As in any area, when you have a massive lack of data you really need to ask yourself: can I actually measure it in the first place, or am I attempting to pull a rabbit out of a hat? But you certainly cannot base the spending of massive amounts of money, and the making of massive changes with large impacts, on it. Ironically, especially if you’re claiming it’s ‘the most important issue to ever face the planet’, as some do as a call for action now.
Yeah, agreed.
Any time your research actually has $$ implications you need to get your stuff wired tight. No different from trying to model for a company whose revenue depends on your results, only about a million times larger in scale in this case. Coming down from the ivory tower into the real world is an awakening.
When the proper site characteristics have apparently been decided upon, and many of the stations used do not meet these standards, use of the resulting data is meaningless. The cost involved in establishing 100 proper sites is much less than the monstrous costs of the massive network of “scientific” centres with supercomputers, and should have been put in place decades ago. This would have obviated most if not all of the “adjusting” that is in dispute. The government “team” has turned straight-forward accurate measurement into a complex game of manipulation that will always be questioned. As things stand, I don’t believe anything but the satellite data, and that only goes back to 1979. Beyond that, I think recorded history provides the best guidelines for longer term trends. Farming in Greenland, vineyards in UK, fish movements registered by early fishermen and open Arctic seas in mariners’ logs are better clues than someone’s guess of actual temperatures. This has truly turned into a “game”.
there is no field-tested objective criterion for a “good site”
Now, there are plenty of skeptics who first look at the data and THEN select good sites. Just like Mann and others who do post hoc screening of proxies.
There is nothing wrong with throwing out bad data. Since we don’t have good metadata in some cases, just use your method of comparison with near neighbors, throw out the data thus identified as bad, infill the dropped data points if you have to, then do the temperature field calc again.
jim 2: There is nothing wrong with throwing out bad data.
The problem here is that the data cannot be reliably classed into “good” and “bad” data (for almost all data, that is true). Some of the data are less accurate than others, but all are afflicted to some degree with random variation from diverse causes.
If you decide that some data are “bad” and you exclude them, all subsequent inferences are conditional on the decision to ignore the “identified bad” data; making inferences conditional on removing “identified bad” data can increase rather than decrease the mean square errors of whatever you are trying to estimate.
I understand the implications of dropping data marked as bad by the BEST algorithm. But it would be interesting to see the result of dropping them, infilling only the “bad” data, then running the temp field calc again.
It may not matter.
jim2
we don’t throw out bad data.
Take 10 stations within 10 km.
Say for September they reported:
3, 4, 5, 2, 3, 4, 6, 2, 3, 2
We don’t look individually at all 10 sites and say, oh, this is good, that is bad.
We simply fit a surface to that data and minimize the error.
The data isn’t adjusted.
Now every data point will deviate from the surface. That deviation
may be due to: measurement error, bad siting, instrument drift,
or some ‘real’ factor.
If we saw this:
3, 4, 5, 2, 30000, 4, 6, 2, 3, 2
then 30000 would be removed at the start by a QC check.
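To make that concrete, here is a minimal Python sketch of the idea as described: screen out physically impossible readings, fit a single expected value to the cluster, and look at each station’s deviation from it. The station values are the hypothetical ones above; the actual Berkeley Earth method fits a full spatial surface, not a single local mean.

```python
# Minimal sketch: fit one expected value to a cluster of nearby stations,
# after a crude QC screen for absurd readings. Values are hypothetical;
# the real method fits a spatial surface, not a single mean.

def qc_screen(values, low=-90.0, high=60.0):
    """Drop physically impossible readings (e.g. 30000) before any fitting."""
    return [v for v in values if low <= v <= high]

def fit_local_expectation(values):
    """Least-squares fit of a single constant to the cluster: its mean."""
    return sum(values) / len(values)

readings = [3, 4, 5, 2, 30000, 4, 6, 2, 3, 2]   # one absurd value, as in the example
clean = qc_screen(readings)                      # 30000 removed at the QC step
expected = fit_local_expectation(clean)          # the fitted "surface" value
residuals = [v - expected for v in clean]        # deviation: error, siting, drift, or real

print(expected)    # about 3.44
print(residuals)   # each station's deviation from the fitted value
```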
jim2.
To explain further:
with 40k stations we can decimate the input down to 1000 stations,
then create our prediction, or expected values, from this subsample.
Then you test that prediction by comparing the out-of-sample raw data
with the field predicted by the 1000 stations.
You can do that all day long by picking a random 1000 from the entire dataset.
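A minimal Python sketch of that out-of-sample test, with synthetic station values and a simple mean standing in for the fitted field (the real method predicts a spatial field from the subsample):

```python
# Sketch of the out-of-sample test described above: fit on a random
# 1000-station subsample, then score the prediction against the held-out
# stations. Station values are synthetic.
import random

def out_of_sample_test(station_values, k=1000):
    """Fit on a random k-station subsample, score on the held-out stations."""
    idx = list(range(len(station_values)))
    random.shuffle(idx)
    fit_idx, test_idx = idx[:k], idx[k:]
    prediction = sum(station_values[i] for i in fit_idx) / k   # stand-in for the fitted field
    errors = [station_values[i] - prediction for i in test_idx]
    return sum(e * e for e in errors) / len(errors)            # mean squared error

# Synthetic "40k stations": a common value of 15 C plus station-level noise.
random.seed(0)
stations = [15.0 + random.gauss(0.0, 2.0) for _ in range(40_000)]

# Repeat with different random 1000-station subsamples, as described above.
for _ in range(3):
    print(out_of_sample_test(stations, k=1000))
```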
Thank you, Stephen.
So, for example the Luling station, the final result displayed on the BEST web site for that station is the calculated values from the temperature field calculation.
And the original raw data was used for that calculation.
Correct?
Let me get this straight – AGAIN
If you take just the valid USHCN data – no interpolations – you get pretty much no trend at all. If you fabricate data for the missing values, you get a slight warming trend. Then if you are GISS or NOAA and make adjustments for the “time of observation”, you get a strong warming trend. So Joe Six Pack can look at his nearby station history and not see anything out of the ordinary until that station is adjusted to mesh with the CONUS data.
BEST uses a different method and gets just about the same trend as GISS and NOAA because there isn’t a whole hell of a lot of trend to begin with unless you cherry pick some year like say 1951.
“Globally” or “nationwide” adjustments don’t make much difference unless someone tries using the adjusted data to advertise the “warmest” whatever ever. Since the “warmest” whatever ever is based on absolute temperature, not anomaly, that requires a different procedure to adjust for instrumentation and site changes. So nothing Joe Six Pack reads in the news will ever be based on un-manipulated data.
is that about right?
In the good old days, people accepted that this data would be problematic, and in turn this was not a big issue: given that you cannot predict the weather to any good standard more than 72 hours out, it was not a really big deal.
What changed was that, based on the same problematic data, super claims of its accuracy were made to support dramatic demands for massive political changes. As we have seen so often in this area, the factors that caused the uncertainty never went away; what did disappear was any admission of that uncertainty in the first place. ‘The cause’ needed nothing but certainty to prop up what in the end is rather bad science.
You can ask yourself this: given, for instance, that you can get two microclimates on the two sides of a mountain range, even if the actual distance between them is fairly small in miles, how do you think you can use the data from one weather station and ‘smear’ it across a vast area and still maintain its actual value? It’s like saying that because the one dog you own can be proved to have three legs, all dogs must have three legs.
BREAKING NEWS FROM XKCD
And yet the plenty of folks adamantly deny this scientific consensus. Why is that, the world wonders?
Good on `yah, Randall Munroe/XKCD!
It’s plain to *EVERYONE* that no amount of evidence will *EVER* convince true-believers in flying saucers, lake monsters, ghosts, Bigfoot, and (nowadays) climate-science conspiracy theories. And needless to say, the same intransigence characterizes the special interests that benefit from conspiracy theories, ain’t that right?
So how much effort should mainstream science devote to addressing the concerns of conspiracy theorists? The middle-of-the-road answer is “some, but not much.” Because scientists have *SERIOUS* work to do.
Question Judith Curry, what fraction of your Georgia Tech graduate students regard Steve Goddard’s belief in a corrupt global climate-science conspiracy as having substantially more legitimacy than “ghost science” or a “conspiracy to hide flying saucer corpses”?
Conclusion Aye, Climate Etc lassies and laddies, Randall/XKCD plainly has the right of it … at least as far as young scientists are concerned!
The Rest of the Story Alleged ‘Bigfoot’ DNA Samples Sequenced, Turn Out To Be Horses, Dogs, and Bears
We all can reasonably foresee that this evidence *WON’T* substantially alter the worldview of Bigfoot Believers … any more than climate science will alter the worldview of Steve Goddard/WUWT … or halt the astroturfing campaigns of special interests that benefit hugely from climate-change denialism.
*THAT’S* obvious to *EVERYONE*, eh Climate Etc readers?
But Fan … Goddard wants to use the REAL data. The RAW data.
YOUR CULT says raw data is bad. Raw data must be adjusted and infilled.
a fan of *MORE* discourse: Alleged ‘Bigfoot’ DNA Samples Sequenced, Turn Out To Be Horses, Dogs, and Bears
That is important to all readers of Climate Etc who wondered about Bigfoot. It goes into the same discard pile with the controversies over tobacco and aspartame.
“It’s plain to *EVERYONE* that no amount of evidence will *EVER* convince true-believers in flying saucers, lake monsters, ghosts, Bigfoot, and (nowadays) climate-science conspiracy theories.”
I see your point, but it’s not a fair characterization. Some problems are hard. Some may very well be too hard even for science. Especially with our current technology and understanding.
This is difficult for our culture to understand, as we worship science and have all the star trek episodes as a reference point.
Maybe off topic – I have heard of these but have not personally examined them: the available labor rates to provide contract software engineering and programming services to NOAA are very low. A bit of a running joke around here: “why are we bidding that?”
I have been working with NCDC’s GSoD data for a number of years now; I’ve built “reports” with no additional data adjustments for various areas of the world, by day and year. You can find all of the data here.
The anomalies created from this data show no real trend in maximum daily temps; min temps are all over the place. I include min/max temp averages, daily rising temp, following-night falling temp, min/max temp daily difference, humidity, surface pressure, and rainfall. I have areas by continent, by latitude zones, and by lat/lon boxes. It’s all there (including my code) for you to look at.
I’m not sure if I understand this data correctly, but now I am beginning to smell a rat.
Even if the data at the stations is not directly comparable in terms of elevation, time of day, etc., a global temperature increase would be a term that would factor out of the sum, and we should still see the supposed hockey stick in this data, provided the stations are consistent with respect to time, it seems.
*Disturbed**
There is no warming trend in this data.
Except a bit near the south pole….
The algorithm is working as designed.
The slope of temp is now much more aligned,
With our belief
It’s bad, good grief,
So shut up, pay the tax and toe the line.
======================
If some climate scientists feel the need to investigate, it’s because the phenomenon in question isn’t meeting warming expectations. Witness Cowtan and Way. How many look for cooling that isn’t reflected in the temperature record?
You put your finger on why it has recently been so easy to blow up papers that support the alarmist consensus. They are making it up as they go along, and getting sloppier and sloppier as Nature refuses to follow Narrative.
===================
Another comment: if you’re using daily difference calculations (i.e., today’s min temp from station A has yesterday’s min temp from station A subtracted, to get the difference between yesterday and today), you don’t need a TOB adjustment. If station A measures a true minimum temp, then as long as they take the same measurement each day they are always subtracting true minimums; and if the measurement is taken at a specific time, the daily evolution of temp will give as accurate a difference, as long as measurements are taken at the same time each day. I live at 41N, and you can see how the changing length of day affects temps: daily min temp is based on the ratio of day to night, so as the ratio goes up daily temp goes up, and as the ratio falls it goes down. This is a graph of daily difference for the northern hemisphere, 1950-2010: http://wattsupwiththat.files.wordpress.com/2013/05/clip_image022_thumb.jpg?w=864&h=621
While the US is 2% of the Earths land surface, it does have 20% of all of the measured temps in GSoD.
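For what it’s worth, the day-over-day differencing described above is easy to sketch in Python. The code below just takes first differences of one station’s daily minima (a hypothetical series); it illustrates the commenter’s claim that, with a fixed observation time at a single station, differencing consecutive readings sidesteps the time-of-observation adjustment. It is not a statement about the official TOB method.

```python
# Sketch of the day-over-day difference idea: subtract yesterday's min from
# today's min at the same station. The series below is hypothetical.

def daily_differences(series):
    """First differences of a single station's daily minima."""
    return [today - yesterday for yesterday, today in zip(series, series[1:])]

tmin = [10.1, 10.4, 9.8, 11.0, 11.3]   # hypothetical daily minima, degrees C
print(daily_differences(tmin))         # roughly [0.3, -0.6, 1.2, 0.3]
```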
@ Mi Cro
Since all the hoopla is over temperature, can anyone tell me exactly what type of thermometer is installed in a ‘standard weather station’ and a rudimentary explanation of how the data is collected?
I tried a few minutes on the net, got nowhere, and just decided to ask here.
Thanks
Bob Ludwick commented
I don’t specifically know. I know some stations at some point had graphing thermometers, where the paper was collected and sent somewhere; prior to that someone probably had to physically go read a thermometer and log the number; more recently it’s collected electronically. At some point I suspect they started calibrating thermometers, but even with these I’m sure there were times when people read them.
But IMO if you make an anomaly from the same station’s daily measurement, it’s as close as you’re going to get, and random errors should mostly cancel out over time (as long as the measurements are done the same way).
It’s really sort of depressing what exists for good data.
The way normal scientific programmers deal with it is to supply the raw data plus a makefile that produces any version of the corrections you want.
You get the data, the corrections and the history.
As well as any helpful comments about what the corrections are for that somebody types in.
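A minimal sketch of that workflow, written in Python rather than as a makefile (the corrections, helper names, and values below are hypothetical): the raw series is never overwritten, each correction is a named and commented step, and any version can be rebuilt on demand along with a history of what was applied.

```python
# Sketch of the "raw data plus reproducible corrections" workflow described
# above. Corrections and values here are hypothetical.

RAW = [14.2, 14.1, None, 14.5, 14.4]   # raw series, with one missing value

def fill_missing(series):
    """Replace an isolated missing value with the mean of its neighbours."""
    out = list(series)
    for i, v in enumerate(out):
        if v is None:
            out[i] = (out[i - 1] + out[i + 1]) / 2.0
    return out

def shift_pre_move(series, break_index=3, offset=-0.2):
    """Apply a step correction to values before a documented station move."""
    return [v + offset if i < break_index else v for i, v in enumerate(series)]

CORRECTIONS = [("fill_missing", fill_missing),
               ("shift_pre_move", shift_pre_move)]

def build(version):
    """Apply the first `version` corrections; return data plus history."""
    data, history = list(RAW), []
    for name, step in CORRECTIONS[:version]:
        data = step(data)
        history.append(name)
    return data, history

print(build(0))   # raw series, no corrections applied
print(build(2))   # fully corrected, with the list of applied steps
```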
The BEST/Brandon/Blackboard crew are misleading people.
Estimating changes the trend by a significant amount. It makes the trend hotter.
https://sunshinehours.wordpress.com/2014/06/29/ushcn-2-5-estimated-data-is-warming-data-usa-1980-2014/
sunshinehours1:The BEST/Brandon/Blackboard crew are misleading people.
Estimating changes the trend by a significant amount. It makes the trend hotter.
Steven Mosher above explained why this is so, in the US.
“The BEST/Brandon/Blackboard crew are misleading people.”
Lucia herself is noticeably silent, though.
Andrew
Knitting her thoughts, no doubt.
================
For someone who can scrutinize data with the best of ’em, she sure has left her skills absolutely unused on this topic.
Andrew
That adjustments always increase the trend is heating up the potato.
====================
Lucia has better things to do than refight old battles that were won back in 2010: http://rankexploits.com/musings/2010/the-pure-anomaly-method-aka-a-spherical-cow/
I on the other hand seem to be more of a glutton for punishment…
Sure Zeke, in 2010 all concerns with climate data were solved.
That’s why you and I are communicating today.
Andrew
This is snarled up enough that I doubt that anything short of a definitive move by Nature will settle it.
================
Only 50 stations have complete monthly data from 1961-1990. So you want anomalies calculated on infilled data?
I don’t think Lucia considered that.
sunshine needs to understand what a study of methods is.
Infilling causes a warmer trend in USHCN. Looking at the data is more illustrative than looking at some theory about how data should work in a perfect world.
No, sunshine.
As Steve McIntyre and others have argued, you need to test a method
before you use it.
As Lucia showed and as Brandon showed, you should NOT average absolutes when you have missing data.
That is a method test.
1. Skeptics say: test the method on synthetic data FIRST.
2. That is how McIntyre busted the hockey stick.
3. Doing a standard test on averaging absolute temperatures where there is missing data shows you this METHOD is BUSTED. It creates phoney hockey sticks.
So first: if you want to really understand adjustments and global averages you FIRST have to get the method straight. Averaging absolutes is not a good method. Skeptics know this. You and Goddard don’t.
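The method test described here is easy to reproduce on synthetic data. Below is a minimal Python sketch (all values synthetic, and a deliberately extreme two-station case): both stations are flat, one warm and one cool, and the cool one stops reporting halfway through. Averaging absolute temperatures manufactures a warming step; averaging each station’s anomaly from its own baseline does not. This illustrates the general point, not any particular group’s code.

```python
# Method test on synthetic data: two trend-free stations, one warm (20 C) and
# one cool (0 C). The cool station stops reporting in 1970. Averaging absolute
# temperatures then produces a spurious warming step; averaging each station's
# anomalies from its own baseline does not.

years = list(range(1950, 1990))
warm = {y: 20.0 for y in years}                  # flat series, warm site
cool = {y: 0.0 for y in years if y < 1970}       # flat series, drops out in 1970

def average_absolutes(year):
    vals = [s[year] for s in (warm, cool) if year in s]
    return sum(vals) / len(vals)

def average_anomalies(year, base=(1950, 1960)):
    anoms = []
    for s in (warm, cool):
        if year in s:
            baseline = sum(s[y] for y in range(*base)) / (base[1] - base[0])
            anoms.append(s[year] - baseline)
    return sum(anoms) / len(anoms)

print(average_absolutes(1960), average_absolutes(1980))   # 10.0 -> 20.0: fake warming
print(average_anomalies(1960), average_anomalies(1980))   # 0.0 -> 0.0: no trend
```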
sunshinehours1: Looking at the data is more illustrative than looking at some theory about how data should work in a perfect world.
The theories are about what to do with data in the imperfect real world. Looking at the data without knowledge of those theories, the methods derived from the theories, and the tests of the methods is the most error-prone approach.
Mosher: “you should NOT average absolutes when you have missing data.”
How do you calculate a baseline for say 1961-1990?
You average the absolute data and produce an absolute value for each month.
Only 50 stations in USHCN have complete data from 1961-1990.
Using your advice, we should not average absolutes to create a baseline if there is missing data.
Matthew, bad data exists even if there is no underlying theory to explain why it is bad data. Sometimes you have to look at the data before you come up with a theory.
“How do you calculate a baseline for say 1961-1990?”
you don’t need to. putz.
What are you calculating the anomaly against?
Quoting form Lucia’s post (which is what we were discussing)
“So, he creates an anomaly as follows: For the “warm” series, compute the average temperature over the first 20 years. Subtract that from all temperature from every warm temperature in the series.”
Just substitute 1961-1990 for “first 20 years”.
You can name-call all you want (Judith lets you), but that doesn’t change the fact that to calculate a “baseline”, whether “1961-1990” or “the first 20 years”, it would be helpful if the data were actually there in the first place.
Me: “How do you calculate a baseline for say 1961-1990?”
Mosher: “you don’t need to. putz.”
BEST: “Temperatures are in Celsius and reported as anomalies relative to the Jan 1951-Dec 1980 average.”
Rude and dumb.
1951-1980????? That is dishonest.
sunshine,
you don’t get it.
1. There are more than 50 stations in the period.
2. You don’t need to calculate an anomaly for that period.
A) You can leave it in absolute temps.
B) You can choose any period you want.
putz
“Only 50 stations in USHCN have complete data from 1961-1990.”
1. We use GHCN DAILY
2. We use GCOS Daily
3. We use GHCN-M
4. USHCN is a SUBSET of GHCN-M
So we would ONLY use USHCN raw WHERE the station was not in GHCN-M (they all are) and only where the station was not in GHCN-Daily.
To repeat:
USHCN derives from, is a subset of, is built from, GHCN-M.
GHCN-M derives from, is a subset of, is built from, GHCN-D.
USHCN has 1200 stations
GHCN-M has around 7K
GHCN-D has 20K plus for the US
We process data from the daily sources FIRST
If a station only has GHCN monthly, we use that
if a station is only found in USHCN we use that
Hint: USHCN stations are all in GHCN-M and GHCN-D.
During 1960-1990 there are more than 50 stations
http://berkeleyearth.lbl.gov/auto/Regional/TAVG/Figures/united-states-TAVG-Counts.pdf
WHY?
because we use raw daily.
repeat that.
we use raw daily.
Mosher: “there are more than 50 stations in the period.”
Yes. But only 50 with a complete set of data. So Zeke’s standard anti-Goddard graphs use 1961-1990 as a baseline and yet … so few stations with a complete set.
And Zeke doesn’t tell anyone.
Yes, you could use a different baseline. And some times Zeke mixes and matches.
But I don’t think changing baselines would improve things. The data is missing all over the place.
“Yes. But only 50 with a complete set of data. So Zeke’s standard anti-Goddard graphs use 1961-1990 as a baseline and yet … so few stations with a complete set.”
wrong.
you are still focused on USHCN.
Mosher The Rude: “you are still focused on USHCN.”
The topic of this blog posting is USHCN.
sunshinehours1: Matthew, bad data exists even if there is no underlying theory to explain why it is bad data. Sometimes you have to look at the data before you come up with a theory.
That’s different from what you said before. You should never trust your intuition without studying the statistical literature.
> You should never trust your intuition without studying the statistical literature.
And afterwards, you even know why never to trust your intuition anymore.
I hope they don’t lose their jobs. That would be taking accountability too far. What would the rest of the government bureaucrats think? It’s not like global warming is actually science. And, no one is pretending anything they do is for the public good. Give them a break. If they really gave a hot damn do you think they’d be working for the government?
Heh, safe from the wrath of the market but not the wrath of Mother Nature.
============================
What Goddard and Homewood and others are doing is a well-respected procedure in financial accounting. The Auditors must determine if the aggregate corporation reports are truly representative of the company’s financial condition. In order to test that, a sample of component operations are selected and examined to see if the reported results are accurate compared to the facts on the ground.
Discrepancies such as those we’ve seen from NCDC call into question the validity of the entire situation as reported. The stakeholders must be informed that the numbers presented are misrepresenting the reality. The Auditor must say of NCDC something like: “We are of the opinion that NCDC statements regarding USHCN temperatures do not give a true and fair view of the actual climate reported in all of the sites measured.”
NOAAgate (July 2014): government caught — making adjustments to the systemically-biased raw data, then adjusting the adjustments, then fabricating data altogether — to hide the decline.
Zeke and Mosher are involved in producing a climate science product. Does anyone here really think that they are going to comment anything that undermines their position?
http://www.businessdictionary.com/definition/conflict-of-interest.html
Andrew
Bad Andrew: Zeke and Mosher are involved in producing a climate science product. Does anyone here really think that they are going to comment anything that undermines their position?
The BEST team have made all their data and code available for independent scrutiny. They have responded appropriately (some mods, some defense), in what is ongoing work.
“we disagree on all manner of things.”
Like? Anything relevant to this topic?
Andrew
“The BEST team have made all their data and code available for independent scrutiny.”
Does this mean they are magically immunized from conflict of interest problems?
Andrew
Yes.
The whole point of sharing code is to allow other people to identify issues of conflict of interest.
Your interest is different from my interest.
You can prove my interest matters to the result by FINDING
how that interest skews the results.
I don’t see you logging into SVN.
How about this Mosher… you make a simple declarative sentence or two of what your personal interest is in this discussion of temperature record manipulation.
Maybe we can draw some conclusions from that.
Andrew
My interest is simple. work on cool problems.
1. I started work on this in 2007 for no payment from anyone.
2. I thought I could find problems with GISS and CRU and fix them.
3. I like fixing problems.
4. In 2011 I started looking at what Berkeley was doing. I was bugged
by some of the stuff I saw
http://climateaudit.org/2011/12/20/berkeley-very-rural-data/#comment-317667
5. They then asked me to join. I fully expected my classifier of rural
stations would PROVE that there was a UHI effect
6. I was wrong. My classifier didn’t change the results. Crap.
7. In 2013 I joined the team formally.
Thanks Mosher. You didn’t really answer my question straightforwardly, but…
“7. In 2013 I joined the team formally.”
A team member who doesn’t toe the line isn’t a team member for very long, eh?
Andrew
“A team member who doesn’t toe the line isn’t a team member for very long, eh?”
Actually, no.
We disagree on all manner of things.
It’s actually encouraged.
Bad Andrew: Does this mean they are magically immunized from conflict of interest problems?
Magically? No.
We the public are immunized against the conflict of interest problems, in this case, because everything the BEST team says can be checked.
Bad Andrew: How about this Mosher… you make a simple declarative sentence or two of what your personal interest is in this discussion of temperature record manipulation.
You are barking up the wrong tree. If the possibility of some kind of bias bothers you, review the data, code, and output and see whether the bias you suspect makes a difference.
An old question to Mosher as I have yet to see it answered:
Rural versus urban is not the right metric to look for UHI. The right metric is the degree to which a station’s surroundings are being developed or have been developed. How has the BEST team been estimating the degree of past and ongoing development around stations, and where will one find that metric?
“The right metric is the degree to which a station’s surrounding is being developed/has been developed.”
I think that implies a view that we should be trying to estimate the temperature of an unpopulated land. The right thing is to measure the average temperature of the land as it is. If it has developed generally, and warmed thereby, the measurement should reflect that. You can then argue about whether it’s AGW. But you need a right measure first.
@Nick,
Earlier this week, during the afternoon on one of the hot days, I got out my IR thermometer and measured the grass in the shade in the low 90s F, the concrete sidewalk in the upper 90s to 105 moving towards the sunlit portion, to 115 for the sunlit concrete, to 135 F for sunlit asphalt. I’ll also note that the shadow of a large tree over the road will easily make a 10 degree change in temps that you can feel on a motorcycle.
That sounds reasonable on the surface, Nick, but tell me how one deconvolves the influence of ongoing development from CO2 and from natural variation in your scenario. And, more importantly, how one then honestly communicates that to policy makers.
I don’t remember Obama mentioning that warming might be due to urbanization versus CO2. Another case of if you like your…
“Does anyone here really think that they[Zeke and Mosher] are going to comment anything that undermines their position? “
Absolutely, yes, I think that they would, if in a serious context they are convinced by argument or new evidence presented by others or that they themselves derive, or if it is just a matter of disclosure of limitation of an approach/methodology under discussion. That, Bad Andrew, is not even a topic worth discussing.
http://www.businessdictionary.com/definition/integrity.html
Bad Andrew
I do not think people, be they ethical or not, have the luxury of picking and choosing their situation. The expectation, and the trick, is to remain ethical in situations not entirely of your choosing. For example, disclosure is a good tool in the effort to be objective.
Besides I do not think that there is a person living who is not constrained by conflicts of interest.
yes mw
we have interests,
like goddard, like watts, like hansen, like every human being.
“Absolutely, yes, I think that they would”
mwgrant,
Let’s see one them do it, just for fun. ;)
Andrew
That’s under your control…just convince them of the error of their ways on a particular point! …well I didn’t say that would necessarily be easy.
“just convince them of the error of their ways on a particular point!”
mwgrant,
If they are here to promote their product, what are the chances of that?
This is why, ethically, people avoid situations that put them in a conflict of interest if they want people to see that they are objective.
Andrew
Bad Andrew … reply in wrong place,
http://judithcurry.com/2014/07/01/ncdc-responds-to-concerns-about-surface-temperature-data-set/#comment-603687
should be here.
Bad Andrew: This is why ethically, people avoid the situations that put them in interest conflict if they want people to see they are objective.
That is an absurd standard.
As to questions about the BEST work, you could challenge them openly, with reference to their published data, code, and results, here at ClimateEtc any time you had something worth our reading. You are not oppressed or shunned or isolated.
If you want to actually hear da man:
Goddard explains it all here in a very lucid, well-presented exposition. A very clear speaker… he should represent the skeptic side in mainstream media.
https://soundcloud.com/jim-poll/wjr-2014-07-02-1025am-stevegoddard-readscience-proc02
Enlightened voters believe we are the new boss of weather and must join together to elect those who understand the evil that AGW will bring if we are not told what to do. Government scientists tell us AGW skeptics are Holocaust deniers, children will never know what snow is, rivers will run red and “oceans will begin to boil ” — Earth will be like Venus – global warming is not a Left vs. right issue and, unlike our ignorant ancestors, we will be led to survival by high priests in green robes with computer models chanting anti-energy and anti-food slogans (with us receiving our marching orders from the headquarters office of New Liberal Utopia that is swinging around the Sun on the dark side of the comet Hale-Bopp).
John Kennedy from UKMO tweets today:
Taking central estimates, May 2014 was globally the warmest May on record. Factoring in uncertainty, we can say it was a top 10 May.
This is from HadCRUT4, presumably they have just completed their May analysis. Thanks to John Kennedy for putting this in context of the uncertainty!
Funny that UAH does not agree with that. Wonder why?
Further evidence that the troposphere temperature pause is ending, as foreseen by James Hansen’s Communication of Jan 21, 2014: Global Temperature Update
Conclusion The physics of radiation transport, and the thermodynamics of energy balance jointly suffice to tame the climate-change uncertainty monster.
This is good news for young climate-science researchers!
Needless to say, XKCD’s science-minded readers understand this already … and conversely, no amount of physics, thermodynamics, or observational data can *EVER* suffice to convince true believers (and special interests) that climate-change science is anything but a global conspiracy.
`Cuz *EVERYONE* understands how conspiracy-theorists think — and how special-interest operatives are professionally exploiting them — eh Climate Etc readers?
Thanks for the references Fan. Great choice.
Your appreciation and thanks are very welcome, sunshinehours1.
Here is some follow-on material, for you and Climate Etc readers:
• XKCD comments “Cold“
• XKCD comments “4.5 Degrees“
• Denialist Astroturfing Exposing the dirty money behind fake climate science
Enjoy broadening your scientific horizons, sunshinehours1 !!!
XKCD is a comic strip. Excellent choice for the cult now that real data is not making you happy.
XKCD’s author, Randall Munroe, is a Quaker NASA roboticist who receives lots-o-love from the STEM community.
Why is sunshinehours1 working so hard to disrespect XKCD/Randall Munroe, Climate Etc readers wonder?
Students (like Judith Curry’s) especially wonder!
Conclusion Denialism that mocks Randall Munroe/XKCD is fated to immediate rejection and long-term extinction!
That’s *OBVIOUS* to *EVERYONE* — young folks especially — eh Climate Etc readers?
I used to read XKCD. Now I don’t.
https://sunshinehours.wordpress.com/2014/01/30/xkcd-and-global-warming-and-st-louis/
For a Simple Reason the climate-dice are evolving to be more-and-more strongly weighted in favor of the XKCD/Hansen climate-change world-view.
It has been FOMD’s pleasure to encourage *YOU*, sunshinehours1 — and encourage Climate Etc readers too (both young and old) — to resume (or begin) enjoyment of the wonderful science-respecting world of Randall Munroe’s XKCD!
Judith, was any tweeting done about February?
HADCRUT4 Feb 1878 0.403C
HADCRUT4 Feb 2014 0.299C
not sure (hard to search twitter), but I guess check what HadCRUT4 currently has to say about Feb 2014
Nope:
https://twitter.com/search?q=February%20from%3Ametoffice&src=typd
Judith Curry’s students (and Climate Etc readers too) may also wish to review other “sunshinehours1” scientific analyses
• Genocidal Wind Farms: Golden Eagles Will Be Slaughtered And Obama Approves
• AGW is a Cult: India and Greenpeace Are Economic Terrorists
• DUH!!! Animals and Plants are Learning to Adapt!
Young climate-scientists (especially) can learn much about the practice of climate-change denialism, by studying “sunshinehours1″s remarkable methods of hypothesis-testing … that mathematically are known as “backtest overfitting.”
Thanks Fan. The adaptation one is fascinating:
“As the Earth heats up, animals and plants are not necessarily helpless. They can move to cooler climes; they can stay put and adapt as individuals to their warmer environment, and they can even adapt as a species, by evolving.
The big question is, will they be able to do any of that quickly enough? Most researchers believe that climate change is happening too fast for many species to keep up. (Related: “Rain Forest Plants Race to Outrun Global Warming.”)
But in recent weeks, the general gloom has been pierced by two rays of hope: Reports have come in of unexpected adaptive ability in endangered butterflies in California and in corals in the Pacific.”
From Denier Central: National Geographic
http://news.nationalgeographic.com/news/2014/05/140506-climate-change-adaptation-evolution-coral-science-butterflies
“Most of the models that ecologists are putting out are assuming that there’s no adaptive capacity. And that’s silly,” says Ary Hoffmann, a geneticist at the University of Melbourne in Australia and the co-author of an influential review of climate change-related evolution. “Organisms are not static.”
Climate models. Dumber than you even thought possible.
It’s odd … why are the world’s disinterested citizen-scientists so adamant in their rejection of everything that sunshinehours1’s weblog advocates?
The world wonders. Young scientists especially!
Suddenly Fan loses interest now that I show my references are not on his hate list.
LOL… let’s you and me, and Climate Etc readers too, *ALL* go read XKCD/Randall Munroe.
`Cuz Randall’s gentle humor, sophisticated mathematics, and solid science are just the message that climate-change discourse needs!
Now *THAT’S* obvious to *EVERYONE*, eh sunshinehours1?
John Kennedy is a serious scientist who I have a lot of time for. A top 10 position for May 2014 sounds right! With the caveat that it is a short record.
Tonyb
We definitely need more John Kennedys!
Even though UAH lower trop is taken at a higher altitude, the trend will show the same sign. And comparing Mays from one year to another should also reflect the temperature near the ground, just a bit cooler in absolute terms, still comparing apples to apples should work.
Do you not believe the UAH results for the various May values?
Taken as a decadal average, what’s the uncertainty as to whether the past decade was the warmest decade on instrument record? Or, said in John Kennedy’s way: the period of 2004 to 2014 is in the top 10…top 5…top 3…or can we say we’re 95% certain the decade is the warmest?
Anything less than a decadal average contains way too much ENSO noise to say much about a longer-term trend.
Good point R,
Using USHCN Final Tmax and comparing July in the 1930s to July in the 2000s:
the 2000s were 0.79C colder on average.
422 stations were warmer in the 2000s than in the 1930s.
796 stations were colder in the 2000s than in the 1930s.
I guess if John Kennedy had noted how cold February was on HADCRUT4 I might consider his commentary balanced.
Its not every month that is colder than the same month in 1878.
It would normally be quite newsworthy.
Hi sunshinehours1,
I don’t make any particular claim to be balanced. I tweet what interests me or what I think might be interesting.
I usually tweet HadCRUT4 anomaly values without providing any specific commentary just to remind people that it has been updated. We don’t have a regular update schedule so I thought people who use the dataset might appreciate that.
However, over the past few weeks, I’ve seen a lot of reports, articles and tweets saying that May 2014 was the warmest May on record without a whisper that these kinds of statements are uncertain. I wanted to highlight the fact that there is uncertainty around rankings at monthly time scales (at annual time scales as well for that matter). That’s all.
You do raise an interesting point by comparing February 1878 with February 2014 and again it’s a point about uncertainty and why ranking individual months, or, in this case, comparing individual months is tough.
Was February 1878 warmer than February 2014? Based on HadCRUT4, we’d have to say: maybe. However, the estimated uncertainty on the global temperature for February 1878 is around ±0.34 degC, a little more than twice the uncertainty for February 2014. Consequently, the overall ranking of February 1878 is very uncertain. Does that mean February 2014 was very cold? No, I don’t think it does. To answer that question we’d need to compare February 2014 to the full spectrum of Februaries, not just one single February.
The central estimate for February 2014 sits somewhere around 21st warmest (143rd coldest if you prefer) and once you get that deep into the “pack”, the spread of possible rankings even for small uncertainties can encompass a wide range of positions. This, incidentally, is another reason I don’t usually say anything about rankings of monthly temperatures: saying a particular month was between 10th and 40th warmest is fundamentally a bit dull. Or, if not dull, woolly.
Cheers,
John
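To make the ranking-uncertainty point concrete, here is a minimal Monte Carlo sketch in Python. It uses the two February anomalies quoted upthread (1878: 0.403 C, 2014: 0.299 C) and the roughly ±0.34 C uncertainty mentioned for 1878. Treating those ± figures as approximate 95% intervals, and taking the 2014 uncertainty as about half the 1878 one, are assumptions made purely for illustration; this is not the Met Office’s actual calculation.

```python
# Monte Carlo illustration of ranking uncertainty for two monthly anomalies.
# Central values are the ones quoted in the thread; uncertainty treatment is
# an assumption for illustration only.
import random

random.seed(1)
sigma_1878 = 0.34 / 1.96     # convert the quoted ~95% half-width to a std. dev.
sigma_2014 = 0.17 / 1.96     # assumed: about half the 1878 uncertainty

trials = 100_000
warmer_1878 = sum(
    random.gauss(0.403, sigma_1878) > random.gauss(0.299, sigma_2014)
    for _ in range(trials)
)
print(warmer_1878 / trials)  # fraction of draws in which Feb 1878 comes out warmer
```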
John Kennedy
Having Tonyb and Curry,J respect gives you a lot of credibility in this neighborhood.
Scott
Can’t fault this comment. It’s not baa, not baa at all.
===========
The key point in this whole story seems quite clearly to be a confirmation that
________
the warming trend of the twentieth century appears only through theories.
________
Anything that looks like observations shows nothing of the sort. This is true for temperature readings as for proxies for US as for the world.
To be absolutely correct I should have written:
Anything that looks like observations of regional temperatures shows nothing of the sort. This is true for thermometers as for proxies for US as for the world.
Now that we are back to July 1936 being the hottest month on record again (the summer of 2012 resuming its place in history as just another also-ran) (NOAA… July 1936 now hottest month again), the consensus of opinion should reflect the fact that we live in a less dangerous world despite increases in atmospheric CO2.
@ Wagathon
“Now that we are back to July 1936 being the hottest month on record again (the summer of 2012 resuming its place in history as just another also-ran)……”
To continue the flogging of the moribund horse, do you actually believe that our temperature records are complete, precise, and accurate enough to make meaningful comparisons of the ‘monthly temperature of the planet’, to hundredths, or even tenths of a degree over multi-decade to century time frames, as is done routinely to generate these ‘scary’ headlines? Rhetorical question with you, I assume, but does ANYONE?
So, you’re saying we’re not safe. Got it.
Hmmmm… Considering that the surface of the earth is a set of measure zero when it comes to the atmosphere and ground layers that contribute to temperature (I’m thinking of how my basement stays cold into the summer)…. Snow on the ground keeps things cold for months, etc….
How much does surface temperature even mean? This problem of stored energy…. It seems like an insanely difficult problem to estimate heat content and or to say anything about this decade being the warmest on record, etc…..
Again, respect for the difficulty of the science…. trust is another matter
If the data doesn’t match the models, tweak the data, right?
“Hide the Decline!”
What’s the big deal? I thought surface temps were not important as far as CAGW is concerned? What really matters is the ocean heat content – in areas we hardly measure. (It’s just a coincidence that we have to have even more “processing” of data to get those numbers, I am sure.)
And didn’t Mann’s model that generated the hokey stick work just as intended as well? Not sure I would use that as a selling point.
“When the adjustments are of the same magnitude of the trend you are trying to detect, then the structural uncertainty inspires little confidence in the trends.”
Absolutely correct. If the error margins on your data prevent you from measuring finely enough to detect the alleged effect – you can’t rely on that data no matter how you massage it. In science this means admitting you can’t prove your theory, and as a scientist you develop a better method of measuring the effect which will allow you to either prove or disprove your theory. Many university professors in climate studies do not appear to understand science.
And they say there is no God…
See: NOAA | Lesson Plan: Climate Change and Current–Grade Level: 9–12; Subject Area: Earth Science |
BREAKING NEWS
Wagathon offers high praise for Hansen’s worldview:
That was well-posted, wagathon!
Conclusion It’s mighty good to see Climate Etc’s own *WAGATHON* now is praising science-respecting lesson-plans that solidly agree with James Hansen’s climate-change worldview!
… would that be a world view comprising fear of global cooling before fear of global warming or vice versa?
Wagathon, physical science tells us ‘poking’ the climate-tiger with CO2 gets it *HOT* … and paleo-history says the same … and so the ending of the pause is a final nail in the coffin of the “uncertainty monster.”
It is an ongoing pleasure to help improve your climate-science understanding, wagathon!
The new climate theory places natural climate variability at the centre of climate science where it should always have been. While the atmospheric physics of greenhouse gases suggest warming of the atmosphere – this has taken place against a backdrop of abrupt climate shifts in 1976/1977 and 1998/2001. The amount of surface warming from the ocean and atmospheric circulation state between these shifts is deniable. The available satellite data – the Earth Radiation Budget Experiment (http://www.image.ucar.edu/idag/Papers/Wong_ERBEreanalysis.pdf) and the International Satellite Cloud Climatology Project (http://isccp.giss.nasa.gov/projects/browse_fc.html) – are consistent and say that most of it was cloud changes in the period.
Moreover – it creates the expectation of several climate shifts this century with even an approximate timing and scope for change that is unknowable with present day science.
This mystical belief in the magic of statistics as a means of creating data where there is none baffles me.
In order to make certain statements about long-term temperature trends, we need long-term temperature records. It seems to me that the rational response is to say: well, we can’t make those statements then, because we don’t have the data. (BEST does it differently, but that is not the topic of the above post. BEST has its own logical and verification problems in my opinion, but again that is not the issue du jour.)
But our current crop of scientist/polemicists are not satisfied. There is too much at stake (most importantly their bloated egos), so we have to create long term temperature records where there are none.
But don’t worry we are told- we will compare our statistically generated data against other statistically generated data, and thereby verify its accuracy and precision.
The simple fact is there is no way to genuinely test their data.
And this trope about “skeptics argued for this method” is delusional. Skeptics looked at the claims to precision and accuracy of the various temperature reports, and identified numerous problems, including UHI, station moves, breaks in records, etc. Some like Watts proposed solutions to some of those problems.
But I am unaware of any who then claimed that if those improvements were made, the temperature reports would be accurate and precise enough to justify the grandiose claims made of their suitability for absolutely huge policy decisions based on such crappy data.
No one knows the average temperature of the surface of the Earth to within tenths of a degree on any given day,
No one knows the average heat content of the oceans of the Earth with equivalent precision on any given day.
No one knows the historical trends in global average surface temperature or average global ocean heat content with similar precision for the past decade, century, or millennium.
The fact that the progressives of this world desperately want such data to support their policy prescriptions is irrelevant to the fact that they don’t have it.
Dr. Curry writes above that she doesn’t think any errors in creating the algorithms were intentional. This is probably true in the lying/conspiracy sense of the term.
But I would wager a princely sum that before Mann got his hokey stick, he ran a number of models, some of which did not give him the results he already “knew” to be true. I suspect he simply kept working until he got the result he deemed “accurate.”
I would be shocked if something similar has not happened repeatedly throughout the “global average temperature” industry. After all, what “scientist” wants to publish any data that is “wrong”?
Confirmation bias is not lying, and it’s not a conspiracy. But it’s not scientific either.
“This mystical belief in the magic of statistics as a means of creating data where there is none baffles me.
In order to make certain statements about long term temperature trends, we need long term temperature records”
Wrong.
Here is a time series of your IQ over the past 10 years
120 120 120 120 NA 120 120 120 120 120
Using stats and making some assumptions I can make a prediction of
what we WOULD HAVE SEEN had we measured your IQ in year 5.
I predict 120.
We can support that prediction by studying a bunch of people.
we could study 1000 other people and find for example that NONE
showed a random drop or gain in IQ over a 10 year period.
we could confirm that prediction by finding a lost record.
Next, we don’t need long records. Skeptics proved that.
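A minimal Python sketch of the infilling argument, using the hypothetical IQ series from the comment: predict the missing year from the observed values, then check that style of prediction against a population of similar, drift-free series (the “study a bunch of people” step).

```python
# Sketch of the infilling argument above, using the hypothetical IQ example.

series = [120, 120, 120, 120, None, 120, 120, 120, 120, 120]

def infill(series):
    """Predict each missing value as the mean of the observed values."""
    observed = [v for v in series if v is not None]
    fill = sum(observed) / len(observed)
    return [fill if v is None else v for v in series]

print(infill(series))   # year 5 is predicted to be 120

# Supporting check, as described: in a population of stable series, how far
# does the observed-mean prediction miss the held-out true value?
population = [[120] * 10 for _ in range(1000)]   # 1000 people, no drift
errors = [abs(p[4] - infill(p[:4] + [None] + p[5:])[4]) for p in population]
print(max(errors))      # 0.0 for this idealized, drift-free population
```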
I think your example is a little bit simpler than computing the temperature for the entire earth.
I’ve seen a lot of these simplified analogies that we are supposed to extrapolate to the earth system, not sure what to make of it.
Everyone in science should be required to work on a modelling project that fails. It’s a huge eye-opener, one that the academics have obviously never been through.
“I think your example is a little bit simpler than computing the temperature for the entire earth.”
Lil’ bit.
Andrew
Having looked back over all the comments, though, I do appreciate Mr. Mosher’s effort at addressing all these topics….
By inference I assume you do not believe increasing atmospheric CO2 levels affect IQ among deniers. How about among alarmists?
“We can support that prediction by studying a bunch of people.
we could study 1000 other people and find for example that NONE
showed a random drop or gain in IQ over a 10 year period.
we could confirm that prediction by finding a lost record.”
OK, so is there a paper somewhere where that has been done with temperature records? Showing that infilling and kriging and other statistical legerdemain were used, then compared against actual temperature data for sites with long-term, precise, accurate records? And that the trend of the manufactured data was within a tenth of a degree of the trend of the actual measured data?
Because that would be fascinating.
“Next we don’t need long records. skeptics proved that.”
Funny, I thought science didn’t involve “proof.” But I would love to read the paper that “proved” that any model for estimating temps and trends was demonstrated to be sufficiently precise to meet the claims of precision by comparison to actual data. The lost station that confirmed all the data manufactured for it was spot on. That’s what you’re implying.
But it seems to me that if there were any such evidence, or papers, they would be rather widely discussed.
Actually, if you were to do an IQ test every year you would not be likely to get 120 each time, because of variations which are natural to the test process. But then these are accounted for if they are not statistically significant, as this is not an area that claims to be ‘unquestionably perfect’, and no one is demanding massive changes to society and the spending of vast amounts of money on the back of it, unlike climate ‘science’.
As a professional working in science, if you want to make great claims you had better have great evidence to support them; otherwise, as with any undergraduate handing in an essay, you’re going to get called out on them, even if you’re ‘saving the planet’ in your own mind.
“Our algorithm is working as designed. – NOAA NCDC”
I can think of no more damning a phrase than that provided by NOAA itself!
Jeeze, if it was broke, at least they could fix it … mebbe.
The algorithm is not the point.
The several USHCN samples analyzed so far show that older temperatures have been altered so that the figures are lower than the originals. For the same sites, more recent temperatures have been altered to become higher than the originals. The result is a spurious warming trend of 1-2F, the same magnitude as the claimed warming from rising CO2. How is this acceptable public accountability? More like “creative accounting.”
So, in addition to all of the lying, added to all of the uncertainty about anything that involves the divining of our future, we have institutional incompetence at the highest and most basic level. As Lindzen says, it’s not that we expect disaster, it’s that the uncertainty is said to offer the possibility of disaster: implausible, but high consequence. Somewhere it has to be like the possible asteroid impact: Live with it.
The algorithm that is working here is pretty simple to explain: it’s the ‘public sector’ that is the recipient of tax dollars versus the ‘private sector’ that is responsible for picking up the tab.
Ding dong.
(See, jump-to article: Overconfident predictions risk damaging trust in climate science, prominent scientists warn, by Roz Pidcock)
Post on this topic coming within a few hours
Judith Curry, please don’t neglect to cite Naomi Oreskes’ cogent arguments that over-conservative climate science has *ALREADY* exerted severely harmful effects, for reasons set forth in the recent Oreskes/Conway article (and forthcoming book) The collapse of Western civilization: a view from the future (Daedalus, 2013).
Oreskes’ work goes a long way toward teaching us to respect “the uncertainty monster” … and to appreciate its true nature and potentially lethal “bite”.
FOMBS seems almost to have got it. Let’s move beyond cycles to abrupt climate change.
The theory of abrupt climate change is the most modern – and powerful – in climate science and has profound implications for the evolution of climate this century and beyond. A mechanical analogy might set the scene. The finger pushing the balance below can be likened to changes in greenhouse gases, solar intensity or orbital eccentricity. The climate response is internally generated – with changes in cloud, ice, dust and biology – and proceeds at a pace determined by the system itself. Thus the balance below is pushed past a point at which stage a new equilibrium spontaneously emerges. Unlike the simple system below – climate has many equilibria. The old theory of climate suggests that warming is inevitable. The new theory suggests that global warming is not guaranteed and that climate surprises are inevitable.
http://watertechbyrie.files.wordpress.com/2014/06/unstable-mechanical-analogy-fig-1-jpg1.jpg
The question then is what to do about it. Here are a dozen phenomenal ways to build a vibrant and resilient global culture this century. Each is consistent with social, environmental and economic progress.
1. Achieve full and productive employment for all, reduce barriers to productive employment for all including women and young people.
2. Reduce by 50% or more malnutrition in all its forms, notably stunting and wasting in children under five years of age.
3. By 2030 end the epidemics of HIV/AIDS, tuberculosis, malaria and neglected tropical diseases; reverse the spread of, and significantly reduce deaths from, tuberculosis and malaria.
4. Achieve universal health coverage (UHC), including financial risk protection, with particular attention to the most marginalized, assuming a gradual increase in coverage over time, focusing first on diseases where interventions have high benefits-to-costs.
5. Ensure universal access to comprehensive sexual and reproductive health for all, including modern methods of family planning.
6. By 2030 ensure universal access to, and completion of, quality pre-primary education.
7. By 2030 ensure equal access to education at all levels.
8. By 2030 ensure increased access to sustainable modern energy services.
9. By 2030 phase out fossil fuel subsidies that encourage wasteful consumption
10. Build resilience and adaptive capacity to climate induced hazards in all vulnerable countries.
11. Promote open, rules-based, non-discriminatory and equitable multilateral trading and financial systems, including complying with the agricultural mandate of the WTO Doha Round.
12. Improve market access for agricultural and industrial exports of developing countries, especially Least Developed Countries, and at least double the share of LDCs’ exports in global exports by 2020.
http://watertechbyrie.com/
As always, these arguments about temperature trends are double or triple-edged. For example, increasing the temp trend before, say, 1945 reduces the contemporaneous correlation between CO2 and temp. Raising temps in the late 1990s reduces the correlation after that point. So the inferential implications of these alterations are not always what people think.
Even if this multi-edgedness weren’t the case, there is also the question of why the Tmean trend is considered rather than Tmax or Tmin. If you’re worried about heatstroke deaths, for example, Tmax probably matters most. Frost damage, probably Tmin. In its initial release BEST reported smaller trends in Tmax than previous surface-record products, and I think bigger trends in Tmin. That would imply that for damage from literal extreme temps “it’s better than we thought,” although of course for other impacts the mean would matter.
Steven Mosher | July 2, 2014 at 5:32 pm |
“Thanks for your opinion on how I should spend my time, Mosher, but it isn’t your business.”
cool. you’ll understand if I don’t waste my time on you.
************
That is your prerogative, Stephen.
Oh, I’m sorry, it’s Steven, isn’t it.
If you compare the adjustment of USH00415429, Luling, with the close neighbor USH00417945, San Antonio, you find that here the adjustment is the opposite, downwards. For whatever reason, Steve Goddard did not comment on this.
There are several kinds of observational errors, and it is not at all clear which are referred to in the above. The important ones are probably spatial, temporal and sampling. Look at each in turn. Spatial: whether one is measuring national or global temperature, it is important to have a uniform system, say one thermometer in the centre of each 50 km square across the entire country or world. If this is not available one has to interpolate from the available stations, and that can lead to errors, particularly when fronts are moving through. Of course some global temperature error is inevitable, because uniform coverage of the vast reaches of the southern oceans, Arctic and Antarctic would be impossibly expensive. Temporal: it is difficult to make simultaneous measurements at the same time across the world. At present many measurements are made at midnight, and as the local midnights vary, they are not simultaneous. Sampling: in this day and age measurement of temperature could and should be continuous, as should the calculation of average daily temperature, so sampling error should not be a problem.
Smoothing can be regarded as introducing error, or as removing error due to supposed randomness in the data. Inertia smoothing tends to time-shift high-frequency data and so should be avoided. I find an 11-year centred moving average is a good compromise, as it tends to cancel sunspot effects, if that is what you want.
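A small sketch of the time-shift point, on a synthetic series with an artificial step: a centred 11-year moving average keeps the step roughly in place, while a one-sided exponential (“inertia”) filter delays it. The data and filter constant are made up for illustration.

```python
# Sketch comparing an 11-year centred moving average with a one-sided
# exponential ("inertia") filter on a synthetic series, to illustrate the
# time shift the comment warns about. All numbers are invented.
import numpy as np

years = np.arange(1900, 2014)
signal = np.where(years >= 1970, 1.0, 0.0)          # a simple step in 1970
series = signal + np.random.default_rng(1).normal(0, 0.2, years.size)

# Centred 11-year moving average: symmetric window, no phase shift.
centred = np.convolve(series, np.ones(11) / 11, mode="same")

# One-sided exponential filter: each value depends only on the past,
# so the smoothed step arrives noticeably later than 1970.
alpha, ema = 0.15, np.empty_like(series)
ema[0] = series[0]
for i in range(1, series.size):
    ema[i] = alpha * series[i] + (1 - alpha) * ema[i - 1]

print("centred average crosses 0.5 near", years[np.argmax(centred > 0.5)])
print("one-sided filter crosses 0.5 near", years[np.argmax(ema > 0.5)])
```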
So subsampling proves that the bias is uniform throughout the station population?
The only significance to be drawn is that this is an argument along a partisan divide about data that is not capable of determining energy changes at the surface without also measuring enthalpy.
Useless for climate studies obviously – so what is the point?
What we need is a nice little tropospheric record. Wonder where that is kept?
Judith
Advertising the Truth and Truth in Advertising
There is a dichotomy here which needs exploring.
The problem stems from what the USHCN data really means and how it is managed and interpreted. Its website states: “The United States Historical Climatology Network (USHCN) is a high quality data set of daily and monthly records of basic meteorological variables from 1218 observing stations across the 48 contiguous United States.”
Steve Goddard commented that it was the Coldest Year On Record In The US Through May 13 2014.
Zeke Hausfather, a data scientist currently a Senior Researcher with Berkeley Earth, chucked fuel on the fire when he wrote a series of articles,
How not to calculate temperatures, parts 1, 2 and 3, stating that Goddard was wrong.
The U.S. Historical Climatological Network (USHCN) was put together in the late 1980s, with 1218 stations chosen from a larger population of 7000-odd cooperative network stations based on their long continuous records and geographical distribution. The group’s composition has been left largely unchanged, though since the late 1980s a number of stations have closed or stopped reporting.
And here is the crux. Mr Goddard reported real raw data, possibly with flaws in that missing temperature records were not counted. Zeke replied with an artificial model which was not designated as an artificial model [see the blurb above from the website: the USHCN “is a high quality data set”], yet treated this model data as if it were the real data.
Steven Mosher, to his credit, has consistently said that these are estimations only, whereas other commentators like Nick Stokes have said that it is a virtually true data set. Steven unfortunately ignores the fact that the USHCN is put out as a historical data set when it is neither truly historical nor truly data.
Further to this, a deeper truth is hidden. The number of stations in the USHCN [a subset of the GHCN] is 1218, originally 1219, selected in 1987 with continuous records back to 1900. A large number of stations have closed over this time, dropping the number of real stations reporting in March and April 2014 to 833.
Zeke has further suggested the number of real stations could be as low as 650. Some stations have been added to make it up to the 833 current stations. This implies that up to 40% of the data is artificial, made up by programmes that would be as adept at making a poker machine reel spin.
The data is adjusted in two ways, according to Zeke, Steven and Nick. The first is infilling from surrounding stations when a current temperature appears erroneous, with no comment on how low or high a value is allowed to go before it is infilled. The past historical data is also altered, so that the further back in time one goes the lower the so-called historical record becomes, yet it is not promoted, advertised or gazetted as a guess or estimate; it is put out as the truthful, correct reading. Worse, all these readings change each day as new readings are input daily or monthly [or by mid next month computer input for the missing stations].
The second is a TOBS (time of observation) adjustment together with an adjustment for changes of thermometers.
This results in a subtle underlying lowering of the whole historical record, again presented as true historical data when it is anything but. Further, it enables TOBS changes to be made to all missing data: comparing a missing station to its surrounding stations gives an average reading, but since the site itself was not working, a TOBS adjustment is possibly made for that station even though there is no proof that its observations were done at the same time as those of the other stations.
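For readers unfamiliar with the mechanics, here is a toy sketch of anomaly-based infilling of the kind described above: a missing or suspect monthly value at a target station is estimated from the average anomaly of its neighbours added to the target’s own long-term mean. This is only an illustration of the general idea, not NCDC’s actual algorithm, and all numbers are invented.

```python
# Toy sketch of anomaly-based infilling, only to illustrate the idea being
# discussed; this is NOT NCDC's actual algorithm. A missing monthly value at
# a target station is estimated from the average anomaly of its neighbours
# added to the target's own long-term mean for that month.
import numpy as np

# Hypothetical July values (C) for the target and three neighbours.
target_climatology_july = 27.4          # target's long-term July mean
neighbour_july_2014 = np.array([28.9, 29.3, 28.5])
neighbour_climatology_july = np.array([27.8, 28.4, 27.1])

# Each neighbour's departure from its own normal...
anomalies = neighbour_july_2014 - neighbour_climatology_july

# ...averaged and added back onto the target's normal gives the estimate.
estimate = target_climatology_july + anomalies.mean()
print(f"Infilled (estimated) July value: {estimate:.2f} C")
```

The output is an estimate, not a measurement, which is precisely the labelling point being argued in this thread.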
Steven Goddard’s graphs may be flawed by missing real data; he says this effect is small. His temperature representations are at least real and accurate data.
Not an estimate dressed up as a drag queen of data, or worse as historical data, when it is neither of those things.
USHCN addendum
It contained a 138-station subset of the USHCN in 1992. This product was updated by Easterling et al. (1999) and expanded to include 1062 stations. In 2009 the daily USHCN dataset was expanded to include all 1218 stations in the USHCN.
This is quite a concern. If the 1992 version only used 138 stations for its graphs, could it be that these stations still exist and could still give a graph? Why were the others discarded? How many of these best-located stations have died the death, and why? Did the addition of the new and massively infilled stations, with TOBS adjustments, cause the so-called historical rise in temperatures?
Final note: this question of truth, of what is data and what is modelling, of which is historically true and which has been written by the winners, will persist until the agencies concerned label their models correctly and give raw data graphs, warts and all, to the general public.
angech
This is the $64,000 question!
That is the most comprehensive and accurate description of the situation I have read anywhere, angech. Thank you for posting.
Angech, right on!
Once you accept that facts and figures in the historical record are changeable, then you enter Alice’s Wonderland, or the Soviet Union, where it was said: “The future is certain; only the past keeps changing.” The apologists for NCDC confuse data and analysis. The temperature readings are facts, unchangeable. If someone wants to draw comparisons and interpret similarities and differences, that’s their analysis, and they must make their case from the data to their conclusions. Usually, when people change the record itself it’s because their case is weak.
Angech
Thanks for the good explanation.
Does the unmodified data exist on any site? If temp A is adjusted, where is the tracking of the original data and the justification for the change?
Scott
Can I just leave this here: http://stevengoddard.wordpress.com/2014/07/03/onwards-and-upwards-at-the-us-government/
Reblogged this on I Didn't Ask To Be a Blog.
Pingback: Have the climate skeptics jumped the shark, taking the path to irrelevance? | Fabius Maximus
“When the adjustments are of the same magnitude of the trend you are trying to detect, then the structural uncertainty inspires little confidence in the trends.”
Bingo. Yahtzee.
It is a strong claim, considering the fact that there is provably no way to design an algorithm that would decide, for any pair of programs, whether they are equivalent (realize the same algorithm) or not.
It does not mean one cannot prove equivalence for a specific pair of formal descriptions, but even when that is possible it needs creativity and much work in each case. What is more, for such a proof one needs an accurately formalized specification; otherwise there is nothing to compare the code against.
I doubt NOAA has such a specification available, and even if it does, the code almost certainly was not checked against it in a rigorous manner.
In a case like this, what they can and should do is publish any specification they may have without delay, along with their source code, and let the world debug it for them.
On a higher level, even if the code conforms to specification perfectly, one may want to scrutinize the specification itself, because there are plenty of unsound designs in the software industry.
So. Is the specification published? If not, there is literally nothing to talk about.
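To make the distinction concrete: proving equivalence of arbitrary programs is undecidable, but checking one concrete implementation against a written specification over sampled inputs is routine. A minimal sketch, with both functions invented as stand-ins rather than anything from NOAA’s code:

```python
# Minimal sketch of checking an implementation against a published
# specification: undecidable in general, but straightforward for a concrete
# pair of functions over sampled inputs. Both functions are hypothetical
# stand-ins, not NOAA code.
import numpy as np

def spec_monthly_anomaly(values, baseline):
    """Reference behaviour as a specification might state it in prose:
    subtract the baseline mean from each value."""
    return [v - sum(baseline) / len(baseline) for v in values]

def impl_monthly_anomaly(values, baseline):
    """Candidate 'production' implementation using numpy."""
    return (np.asarray(values) - np.mean(baseline)).tolist()

rng = np.random.default_rng(42)
for _ in range(1000):                      # property-based style spot check
    vals = rng.normal(15, 10, size=12).tolist()
    base = rng.normal(15, 10, size=30).tolist()
    assert np.allclose(spec_monthly_anomaly(vals, base),
                       impl_monthly_anomaly(vals, base))
print("implementation matches specification on 1000 random cases")
```

None of this is possible, of course, unless the specification and the code are actually published.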
Pingback: The worrying business of temperature measurement « DON AITKIN
Somewhere along the line, the fundamental distinction between bona fide empirical data and manufactured numerical values seems to have been lost entirely. This is apparent from recurring claims that:
1. Long measurement records are unnecessary to obtain long time-series of global climatic variation.
2. We “have” climate and correlation “fields” that allow us to predict such variation even in locations and at times without any measurements.
3. The proper programming of an algorithm is sufficient for it to act as the arbiter of questions of physical reality.
That we only have available variously flawed, often systematically biased, and geographically sparse measurements at discrete and largely unrepresentative locations is swept away by an avalanche of double-speak and hubris. But that seems to be par for the course in the age of video games and salesmanship.
This is a guess only to explain the continuing, increasing and divergent lowering of records in the past.
The algorithm for changing TOBS in the past is still incorporated for changing stations in the present.
This results in a subtle underlying lowering of the whole historical record, again presented as true historical data. TOBS changes are made to all missing data at stations: comparing them to surrounding stations gives average readings, but as those sites were not working, a TOBS adjustment is possibly made for them even though there is no proof that their observations were done at the same required time as at the other stations. Hence all current TOBS readings, and there are quite a few, have an inbuilt rise in temperature applied to the average temperatures when they are calculated. Even worse, this then forces backwards changes on all the past recorded TOBS stations, dropping them lower.
Otherwise the past records would have stayed the same [Zeke says they are adjusted each day] and only the current data would be modified. Take out the link changing backwards and the system becomes much fairer, though still historically wrong, and all USHCN graphs should still be required to be labelled as estimates, not as true data.
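A toy sketch of why anchoring corrections to the most recent segment keeps changing the past: if a step change is detected and the earlier segment is shifted to be continuous with the present, every value before the breakpoint moves. The breakpoint year and offset below are invented, and this is not the actual pairwise homogenisation code.

```python
# Toy sketch of why homogenisation that anchors to the present shifts the
# past: if a +0.5 C step (e.g. a station move) is detected in 1995 and the
# record is corrected to be continuous with the RECENT segment, every value
# before 1995 is raised or lowered accordingly. Purely illustrative.
import numpy as np

years = np.arange(1950, 2014)
raw = np.full(years.size, 15.0)
raw[years >= 1995] += 0.5                 # artificial inhomogeneity (step up)

adjusted = raw.copy()
adjusted[years < 1995] += 0.5             # align the past with the present

print("raw 1950-1994 mean:     ", raw[years < 1995].mean())
print("adjusted 1950-1994 mean:", adjusted[years < 1995].mean())
print("recent values unchanged:", np.allclose(raw[years >= 1995],
                                              adjusted[years >= 1995]))
```

Every newly detected break re-anchors the series in the same way, which is why earlier values can keep shifting even though no new historical measurements exist.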
I find it strange that you would need to adjust anything. So these scientists are telling us that back hundreds of years ago their recording devices were as accurate as they are today??? I very much doubt that. Plus, how can a recording device that was once in a small town, which is now a city, be expected to show anything but increased temperatures due to its proximity to concrete and steel? Boggles the mind!!
When your agenda is to find warming then warming you will find. I bet if they were tasked to find cooling trends they would find them too.
Interesting that Bob Koss is the only one to bring up the USCRN data on this thread. It easily shows that the adjustments, not to mention the “cleaned” data that NCDC and NOAA fob off as RAW, are overheated.
http://www.drroyspencer.com/2012/08/spurious-warmth-in-noaas-ushcn-from-comparison-to-uscrn/
http://wattsupwiththat.com/2014/06/07/noaa-shows-the-pause-in-the-u-s-surface-temperature-record-over-nearly-a-decade/
While I think that the prediction of average global surface temperature is an attainable objective, I have no such confidence regarding regional temperatures. Of course an increase in the accuracy of the latter should follow. Meteorologists on the spot will do better.
The measurement of global surface temperature, particularly by satellite, will improve, so historical records will be better.