by Monte Naylor
A comparison of NOAA-computed temperature trends with the “raw” historical temperature data.
The global historical surface temperature record is presented as the fundamental evidence for global warming. Official agencies such as NOAA and NASA report that the global surface temperature has increased about 1.25 °C (2.25 °F) over the past 120 years.
They also propose that the driving force for atmospheric warming is the increasing amount of atmospheric carbon dioxide from human industrial emissions, occurring mostly in the latter half of the 20th century and continuing to the present day. The argument that industrial emissions of carbon dioxide are significantly warming the atmosphere is not without controversy (3), but the official reporting of the historical surface temperature record is broadly accepted as correct by the public and many scientists.
Some climate scientists, however, are skeptical that the NOAA and NASA historical surface temperature reporting is accurate. My study examines a specific area in the United States, roughly 1600 square miles of the northern Colorado Front Range, revealing NOAA’s methods of temperature trend computation and their temperature trend conclusions for this area. Additionally, I compare NOAA’s temperature history to the ‘raw’ temperature data, appropriately adjusted.
For the 48 contiguous United States, NASA claims that the surface temperature in the last 120 years has increased about the same amount as the global temperature, 1.2 °C (2.1 °F), although the temperature trend is not as linear as the global trend.
THE OFFICIAL COLORADO SURFACE TEMPERATURE HISTORY
Specifically, for Colorado, NOAA reports that the mean surface temperature has increased about 1.3 °C (~2.4 °F) in the last 120 years, slightly more than the official global temperature increase.
CASE STUDY OF THE NORTHERN COLORADO FRONT RANGE TEMPERATURE HISTORY
One way to verify the veracity of these officially reported temperature trends is to obtain the “raw” historical temperature data of individual long-term weather stations for a specific area and attempt to recreate the temperature histories by implementing logical, easily understood computational methods.
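As a minimal sketch of such a "logical, easily understood" method, one could average the raw station series for a region and fit an ordinary least-squares line to the result. The station values below are synthetic, for illustration only; they are not the Front Range data used in the study.

```python
import numpy as np

def regional_trend(years, station_series):
    """Least-squares trend (degrees per century) of the unweighted
    mean of several station temperature series."""
    regional_mean = np.mean(station_series, axis=0)      # average across stations
    slope, _intercept = np.polyfit(years, regional_mean, 1)  # degrees per year
    return slope * 100.0                                  # degrees per century

years = np.arange(1900, 2016)
# Two synthetic stations: one warming at 1.0 deg/century, one flat.
station_a = 10.0 + 0.01 * (years - 1900)
station_b = np.full_like(years, 12.0, dtype=float)
print(round(regional_trend(years, [station_a, station_b]), 2))  # -> 0.5
```

The point of the sketch is only that the arithmetic is transparent: anyone with the raw series can reproduce the number.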
In this study of the historical temperature record of the northern Colorado Front Range, I examine the original temperature data of this area, NOAA’s methods of adjustment, and their final rendering of the temperature history.
I also present a simplified statistical averaging version of the temperature history for the preceding 115 years for the northern Colorado Front Range. Here is the summarizing graph of this study; the difference between the two temperature trends is significant. The NOAA temperature trend, determined from an average of their homogenized USHCN station histories for the northern Colorado Front Range, shows an increase in temperature from 1900 to 2015 that is 2 °F greater than the trend my study revealed. Why do these two historical temperature trend calculations of the same region of Colorado vary so much?
The complete study, a narrated PowerPoint video, can be viewed at https://vimeo.com/196878603/b9ea716a74
Biosketch: Monte Naylor is the Owner of GTS Energy Consulting in the Denver area. He has a B.S. in Geology from Idaho State University and an M.S. in Geophysics from the Colorado School of Mines.
Moderation note: As with all guest posts, please keep your comments civil and relevant.
Monte, a nice work. I am working on TOB adjustments. Could you please link to data used in your study?
Hello George. Sorry for the delay in answering your question. The hourly temperature data that I used in my study is downloaded from the NOAA website database, Climate Reference Network data, found at https://www.ncdc.noaa.gov/crn/qcdatasets.html. Select the “Hourly FTP Client Access”, then the year, and finally right-click the CRN station in which you are interested, and save to your computer.
Be sure to also retrieve the data format description by returning to https://www.ncdc.noaa.gov/crn/qcdatasets.html, right-clicking the “Hourly02 documentation” tab, and downloading that text file to your computer. In my TOB study, I used the field #9 data: T_CALC [7 chars], cols 58-64, the average air temperature, in degrees C, during the last 5 minutes of the hour. In this way, I obtained a temperature, a 5-minute average, that I assigned to the hour starting at the end of those 5 minutes (field #5, LST_time).
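As a minimal sketch of pulling that field out of a fixed-width CRN hourly record: columns 58-64 are 1-based inclusive, so in Python they become the slice 57:64. The sample record below is fabricated padding, not real station data, and the missing-value flag of -9999.0 is an assumption based on the CRN documentation conventions.

```python
def t_calc(record):
    """Extract field #9, T_CALC (average air temperature in deg C over
    the last 5 minutes of the hour), from one CRN hourly record line.
    Columns 58-64 (1-based, inclusive) -> Python slice 57:64."""
    value = float(record[57:64])
    if value == -9999.0:   # assumed CRN missing-value flag
        return None
    return value

# Fabricated one-line record: pad out to column 57, then a 7-char field.
sample = " " * 57 + "   12.3"
print(t_calc(sample))  # -> 12.3
```

Anyone replicating the TOB work would parse field #5 (LST_time) the same way, using the column positions given in the Hourly02 documentation file.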
If you have further questions, I will try to answer them more quickly in the future.
Looking at the raw data, a straight line from 1954 to 2014 looks like we’re headed for another ice age… Yikes!
There is no purely raw data presented here.
The large trend in the Fort Collins data is in the raw data, whereas this author’s data is adjusted, by a TOBS method and by selectively picking which stations to use. This isn’t a raw vs adjusted argument.
A far grander bias than TOBs is introduced when comparing data spanning a period when changes in solar activity had such a profound influence on Earth’s climate, i.e., the 3000-year solar activity record that we recently lived through. As it turns out, “the modern Grand maximum (which occurred during solar cycles 19–23, i.e., 1950-2009),” says Ilya Usoskin, “was a rare or even unique event, in both magnitude and duration, in the past three millennia.”
Truly hard to reach a conclusion when tiny mention is made of “appropriately adjusted.” Adjustments by either side, unless very explicitly explained and eminently rational, make conclusions questionable. In an analysis I performed last year, I adjusted nothing, took daily max and daily min temps for 6 weather stations around the US for the past 100+ years, and graphed 30 randomly selected days throughout the calendar year. The average slope of the 360 regression lines (6 stations x 30 days x 2 temps) for each plot was flat. It seems quite contradictory to AGW that the massive growth in industrial output and attendant air pollution and reported CO2 increases over the past 115 years couldn’t lead to a positive slope. Given that I’m just a dude with a home computer and not a climate scientist, I am hopeful that someone somewhere with a credible scientific reputation would replicate my work, or debunk it.
This author argues that we need to modify the raw data in this region to remove the urban heat island effect. This goes against what most people here seem to want (use raw data because raw has to be good), even though the outcome would be to show less warming, which is what most people here want.
There are a few studies (including one by BEST co-authored by Judith Curry) that show overall an effect different from what is seen in these several Colorado stations. That effect is that while the low temperature readings in urban areas have risen over time relative to the nearby rural areas (fake warming for low temps), supposedly there has been on average a greater countering effect in the high temperature readings. This particular Colorado region, Fort Collins, supposedly didn’t show it, but other regions supposedly have. So if these other studies are correct, it seems the overall UHI effect is a cooling trend, not a warming one. One such paper: https://www1.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/menne-etal2010.pdf
Here are 60 more graphs taken from peer-reviewed papers from 2016 showing (a) that there has been little to no warming since the 1940s, and/or (b) modern temperatures are still much colder than most of the last several hundred to thousands of years.
In this new era, if anyone is looking for reasons to start asking questions, there is a lot of material to start with right here. Let’s see how curious and motivated they are.
You have to examine SST data particularly for E Atlantic 1N to 20N. 1945/64 mean tells it all.
Similar analyses by Jennifer Marohasy have shown similar results for homogenised data released by our BOM in Australia. Thus far BOM has been extremely reluctant to release details of the methodology employed, other than that it is very similar to NOAA’s.
“Thus far BOM has been extremely reluctant to release details of the methodology employed”
Complete nonsense. There is a running enquiry, incorporating some very eminent statisticians, called the ACORN-SAT Technical Advisory Forum. Their first report is here. On documentation, they say:
“In general, documentation on data methods is available easily within PDF files supplied from the ACORN-SAT page “
But for more detail:
“The Forum notes and commends the transparency offered by access to computer code, which is available (in the language Python) from the Bureau on request. This fact is advised on the ACORN-SAT pages on the Bureau of Meteorology website at http://www.bom.gov.au/climate/change/acorn-sat/#tabs=Methods&-network=, referencing the e-mail address Helpdesk.Climate@bom.gov.au.”
I believe that last link may have moved.
I have followed up on your links, Nick. I even downloaded the code. They simply do not support your assertions. Perhaps you should look yourself instead of making such arrant assertions.
“Similar analysis by Jennifer Marohasy have shown similar results for homogenised data released by our BOM in Australia.”
No, it has not. Marohasy cherry-picked some stations out of the entire record.
Go away ‘twotter’ . . .
I would like to see a critical study on NOAA adjustments to raw temperature data and elimination of some temperature measurement locations, such as done in the Karl et al. Science Express paper in 2015 (that came in the run-up to the Paris COP21 and the release of the Clean Power Plan). Perhaps it will be done under the new administration.
“I would like to see a critical study on NOAA adjustments “
Remember this one, the “sceptic scientific audit”?
So what happened? No report. No submissions published. Nothing found. That has been the history – BEST started as another Koch-funded “let’s get the facts”. That time they did.
Correct first link is here.
And here are temperature graphs taken from Agee, 1980, showing that both Indiana and Iowa had temperature declines of more than -2 C after the 1940s:
Evidence has been presented and discussed to show a cooling trend over the Northern Hemisphere since around 1940, amounting to over 0.5°C, due primarily to cooling at mid- and high latitudes. Some regions of the middle latitudes have actually warmed while others, such as the central and eastern United States, have experienced sharp cooling. A representative station for this latter region is Lafayette, Ind., which has recorded a drop of 2.2°C in its mean annual temperature from 1940 through 1978. The cooling trend for the Northern Hemisphere has been associated with an increase of both the latitudinal gradient of temperature and the lapse rate, as predicted by climate models with decreased solar input and feedback mechanisms.
Rogers (2013) has no net warming (in fact, a significant cooling) for the entire southeastern U.S. since the 1940s:
Karl et al. corrections shown here. BK = Before Karl; AK = After Karl.
Source? (Sounds like Goddard). What on earth is it a graph of?
NASA GISS monthly land + sea temperatures through May 2015 https://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt vs late 2015, posted before and after Karl et al published NOAA corrections in Science Express in mid 2015. http://science.sciencemag.org/content/early/2015/06/03/science.aaa5632.full As said, this is the difference between before and after the corrections. Before-Karl data from the NASA GISS website posting, March 2015; after-Karl from the July 2016 posting on the same website. The y-axis is the difference, representing the correction. NOAA made revisions during this period, and I saw that they continued making revisions after the July 2016 update posting.
The number is the adjustment (lowering) made by NOAA in the NASA GISS land + sea temperature.
I meant adjustments INCREASING the NASA GISS temperature record. The monthly temperature adjustments began with Jan 2001 and thereafter.
You may want to see this 5 minute video if you don’t understand the reasoning behind the adjustments made by Karl. It’s hard to argue that they are worse than the old NOAA averaging https://www.youtube.com/watch?v=hnyX32nkYBs . It’s an improvement and matches the warming seen by buoys, satellites, and boats when these are taken individually.
As of 1976, the National Academy of Sciences had temperatures for the NH as about -0.1 or -0.2 C colder than 1880. Now, in NASA/NOAA graphs, the mid-1970s are +0.3 C warmer. Why do observed temperatures need to be adjusted by 0.4 or 0.5 C?
Pretty sure science and logic are unwelcome elements in the “climate change” religion. Suspect the Democrats will demand Mr. Naylor be burned at the stake.
So what do we learn from all those plots? That you can find places that warm less than others, or maybe cool? You can see that more comprehensively here, with a globe map shaded according to trend over various periods. Here’s a snapshot of trends since 1979 (click to enlarge).
” Why do these two historical temperature trend calculations of the same region of Colorado vary so much?”
Why is it that these studies are done so amateurishly, and without looking up what is already known? NOAA give extensive data sheets for each of their 7280 station histories, including graphs before and after adjustments. They are here: Boulder, Fort Collins
I have extracted those graph parts. You can see the adjustment history for Boulder is significant, Fort Collins less so.
And you can see a lot of metadata history here. It says, for example, that Boulder had a substantial move (looks like about a mile) in Jul 1947, and another in 1958. A move in about 1980 raised its altitude by about 160 ft, but it subsequently came down again. The same site will tell you about Ft Collins.
You don’t have to be so helpless. Get in while the data lasts!
You can click on the plot to enlarge. As far as I can tell, there, homogenisation actually reduced, not increased, the combined trend.
Why not just throw out sites in cities or that wander about? The whole “adjustment” and “homogenization” approach strikes me as not particularly confidence-inspiring, but then I have little faith in government agencies.
Nick Stokes, you fail to see the obvious.
Homogenization smears Fort Collins UHI warmth to Boulder thus getting a representation of Northern Colorado Front Range temperature history that is grossly in error.
Funny thing is that by NOAA Boulder is in need of downward adjustments while Fort Collins is not, although being badly UHI contaminated.
Has NOAA done it amateurishly? No. They’ve done it purposefully. That is, with a bias.
I think you have no idea how homogenisation works. But if it was just between FC and B, the sum wouldn’t change.
In this case homogenization produces politically, not scientifically, correct results.
You should watch the attached narrated PowerPoint video. From your comments I take it that you have not done it.
I started to watch it, but it was interminable. If there is a properly presented written version, I’ll happily read it.
Nick Stokes | January 21, 2017 at 3:45 am |
“I think you have no idea how homogenisation works”
Oh, I think you’ll find that an increasingly large proportion of us outside the “profession” of climate “science” have a very good idea about how “homogenisation” works, and precisely what its purpose is…
And I suppose you’ll find it extra sinister that here they cunningly arranged for the trend to go down.
Analyzing Northern Colorado Front Range temperature history, as a first step you should dismiss Fort Collins as an outlier.
However, if your goal is to paint a picture of a catastrophic warming then you would choose Fort Collins to represent regional temperature history and put weight on it.
NOAA Boulder (ID:425000508480) QCA (Quality Controlled Adjusted) overlaid on GISS Boulder before GISS homogeneity adjustment.
Reference period 1901-2000.
Reference period 1951-1980.
Reference period 1981-2010.
Why the discrepancies? Are all the adjustments made by NOAA (TOBs, PHA, Infilling, …) shown in NOAA QCA graph?
Nick Stokes, do you know why the trend over 1900-2016 changes from essentially zero degC in NOAA Boulder QCA to about 1.5 degC in GISS Boulder before GISS homogeneity adjustments? Is it perhaps that NOAA Pairwise Homogenization Algorithm (PHA) adjustments are not shown in the QCA graph?
Well, according to this,
“QCFLAG: quality control flag, seven possibilities within the quality controlled unadjusted (qcu) dataset, and 2 possibilities within the quality controlled adjusted (qca) dataset.
Quality Controlled Adjusted (QCA) QC Flags:
A = alternative method of adjustment used.
M = values with a non-blank quality control flag in the “qcu” dataset are set to missing in the adjusted dataset and given an “M” quality control flag.
X = pairwise algorithm removed the value because of too many inhomogeneities.”
it appears that PHA is shown in the QCA graphs, or at least that some sort of inhomogeneities check is made.
If GISS do only their own homogeneity adjustments why the mismatch between trends?
“do you know why the trend over 1900-2016 changes from essentially zero degC”
I can’t understand the plots you show; when I ask GISS to plot, I see something quite different, and which I don’t understand very well either. It has a legend with four curves, but only one is in strong color, the last. And they give this annotation:
*GHCN-Unadjusted is the raw data as reported by the weather station.
*GHCN-adj is the data after the NCEI adjustment for station moves and breaks.
*GHCN-adj-cleaned is the adjusted data after removal of obvious outliers and less trusted duplicate records.
*GHCN-adj-homogenized is the adjusted, cleaned data with the GISTEMP removal of an urban-only trend.
The NCEI adjustment is PHA, and that will be GHCN adjusted. Then they do cleaning, which is probably not much and probably relates only to GHCN V2. Then they do their removal of the urban trend. In the case of Boulder, this was the subtraction of a linear function of time, with slope 0.7°/century.
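As a toy sketch of that kind of linear urban correction: subtract a linear function of time from the station series. The 0.7 deg/century slope is the figure quoted above for Boulder; the station values and pivot year are made up for illustration.

```python
def remove_urban_trend(years, temps, slope_per_century, pivot_year):
    """Subtract a linear function of time (deg/century) from a series,
    mimicking a GISTEMP-style urban-trend removal in the simplest form."""
    per_year = slope_per_century / 100.0
    return [t - per_year * (y - pivot_year) for y, t in zip(years, temps)]

years = [1900, 1950, 2000]
temps = [10.0, 10.5, 11.0]   # hypothetical series warming 1.0 deg/century
print([round(v, 2) for v in remove_urban_trend(years, temps, 0.7, 1900)])
# -> [10.0, 10.15, 10.3]
```

Note the correction changes only the trend, not the anomaly baseline: the 1900 value is untouched and the series still warms, just at 0.3 deg/century instead of 1.0.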
“The NCEI adjustment is PHA, and that will be GHCN adjusted.”
To wrap it up.
1. NOAA QCA graphs do not show all the adjustments.
2. PHA, which GISS adj – cleaned graphs do show, produces false trend to Boulder due to UHI contaminated Fort Collins.
I think this is just a GHCN version difference. Nick’s graphics state version 3.2.2 but the current version as of June 2015 is 3.3. I’ve checked the QCA data in the current GHCN-M package and it looks like sajave’s GISS plots.
1) Yes, QCA does include all adjustments, and the graphs likely showed all the adjustments as of the 3.2.2 dataset.
2) Given that both 3.2.2 and 3.3 use PHA, there’s no evidence that PHA is introducing a false trend here.
GHCN-M V4 beta shows this for Boulder. QCA is now QCF.
This is GHCN-M V4 beta QCU for Boulder.
paulskio, do you have a link to GHCN-M V3.3 QCA/QCF graph for Boulder?
This graph does not (yet) trend toward a politically correct direction?
do you have a link to GHCN-M V3.3 QCA/QCF graph for Boulder?
No, the v3 folder still seems to contain only the v3.2 graphics.
This graph does not (yet) trend toward a politically correct direction?
Perhaps it’ll help you realise that the whole notion of a politically correct temperature trend was fictional in the first place?
I’d like to set aside the issues of adjustments, moves, etc. and just focus on the apparent large difference in trend. These cities are 65 miles apart, within 400 to 700 feet in elevation (move included), and of comparable population (100,000 vs 160,000).
Just from a common sense and physical standpoint how do you explain the quite significant difference in warming or non-warming over 125 years? I would even have trouble reconciling that kind of trend split between Poughkeepsie and Peoria but to have thermometers only 65 miles apart gets beyond anything I can explain.
ceresco, your question was addressed some time ago by the esteemed Dr. Pielke Sr., and his answer in a word: microclimates.
Some years ago Roger Pielke Sr. did excellent research on a set of weather stations in Colorado to investigate a strange phenomenon. The regional average from the 11 stations did not reflect any of the individual records that went into the calculation. The study linked below showed that numerous differences in the landscapes at each site meant that temperature and precipitation measurements differed significantly from one to the other, even when located a few kms apart. Not only absolute differences, such as altitude would create, but also the trends of changes differed due to terrain features. Thus the averages are not descriptive of any of the local realities. In my studies of temperature trends, I took Pielke findings to heart and focused on the pattern of change observed in each specific site.
The paper is available here: http://onlinelibrary.wiley.com/doi/10.1002/joc.706/abstract
“Your question was addressed some time ago by the esteemed Dr. Pielke Sr., and his answer in a word: microclimates.”
That answer makes a bunch of sense; rising mountains, sinking valleys, rushing rivers, turbulent winds, all affecting temperatures differently in different areas.
Thanks for the link and explanation. I expected regions to diverge over such a long period but am surprised that the dynamics are at play in smaller locales. I have seen NOAA trends by state, and the multidecadal variability was quite extensive.
It seems counter-intuitive but if the data show it, so be it.
“I’d like to set aside the issues of adjustments, moves, etc and just focus on the apparent large difference in trend.”
I mentioned (with pic) in another comment a gadget here which lets you look at that for GHCN/ERSST data. It shows as shaded plot but you can display stations and click to get their values. There are local differences, but very large scale patterns in the trend. It is unadjusted data, so the local differences are often due to some inhomogeneity.
As to explanations of the large-scale differences, well, yes, that is interesting. The SE Pacific had a long cooling period recently, etc. There is much to learn.
Nick, could you please publish graphs for temperature, not anomalies? And make it a little more understandable to a non-specialist. Maybe you can see the adjustment history, but I don’t. QCU, QCA? And anomalies to what base? A sliding base?
I showed an extract from the NOAA data sheets, which I linked. They have lots of information; they tell you that QCU means unadjusted, etc. The adjustment history is the difference plot at the bottom (red/blue). Anomaly is what they showed, although that just affects the numbers on the right axis. They don’t say on the sheet what the base is, but their standard was 1961-90.
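A minimal sketch of what an anomaly base period means: subtract the station's mean over the base years (here the 1961-90 standard mentioned above) from every value. Changing the base shifts the whole curve up or down but leaves the trend untouched. The values below are hypothetical.

```python
def anomalies(years, temps, base=(1961, 1990)):
    """Convert absolute temperatures to anomalies relative to the
    mean over the base period (inclusive)."""
    base_vals = [t for y, t in zip(years, temps) if base[0] <= y <= base[1]]
    base_mean = sum(base_vals) / len(base_vals)
    return [t - base_mean for t in temps]

years = [1960, 1961, 1990, 1991]
temps = [10.0, 11.0, 13.0, 14.0]   # hypothetical station values, deg C
print(anomalies(years, temps))  # -> [-2.0, -1.0, 1.0, 2.0]
```

This is why a "sliding base" is not a concern for trends: a different fixed base only changes the constant being subtracted.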
Thanks for that Nick. It seems like a probable UHI-type trend hasn’t been detected by the NOAA method, though note that it has been heavily adjusted in the Berkeley method, which seems to produce a slightly greater overall trend in this region than NOAA’s.
Except there is also local heat contamination (LHI?), along the lines of what Watts’s people found to be so prevalent, that may be far more widespread than UHI, and there is no way to systematically adjust for it.
All that Watts found was a bunch of stations with undocumented changes which tended to artificially cool recent readings compared to those in 1979. I think the simplest way to demonstrate this is the Tmax/Tmin difference. Watts’s station selection exhibited very small Tmax trends but normal Tmin trends. The problem here is that “global brightening” over the 1979-2008 period would cause a larger Tmax trend, so their results are incompatible with known trends in surface solar radiation.
The Watts study found no micro-site effect.
His unpublished work, withdrawn 4 years and 6 months ago, has yet to be updated; slower than Gergis, which was withdrawn the same month.
Since the error he made 6 years ago was simply correctable, per Steve McIntyre, one can only wonder why the paper and data have remained hidden for 78 months.
It appears Ft. Collins’ record is heavily biased by UHI. Instead of adjusting it, it should just be dropped.
What Watts found that’s relevant is that the vast majority of reporting stations suffer from poor siting. Only 71 stations are rated 1 or 2. When homogenizing, there is a tyranny of the majority of poorly sited stations. How did/does this affect trends? Maybe not so much, but what do I know. How did it vary in the past, before such surveys occurred?
One thing to remember, though, is that homogenization has to do with trends, not absolute temperatures. Gross errors will forever live in the absolute temperature sets. One example is the digit bias of recordings. This may not affect trends, but is a good example of human bias that can permeate the records. What other human psychology biases besides digit bias exist in the temperature records?
You could try looking in the literature to find out.
Yes, as I wrote ‘maybe not so much’.
Buried in the paper that verytallguy cited, “Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends” (https://pielkeclimatesci.files.wordpress.com/2011/07/r-367.pdf) as evidence that poor siting has little impact on average temperature trends is another statement that seems kind of important.
Quote: The best‐sited stations show essentially no long‐term trend in diurnal temperature range, while the most poorly sited stations have a diurnal temperature range trend of −0.4°C/century.
I have the impression that most people believe that diurnal temperature ranges are decreasing. Is this because there’s good evidence that the above is not true or has it been that this simply hasn’t got any attention?
Maybe the best solution is to calibrate the net forcing using ocean temperature, which in turn can be partially cross checked with sea level rise? The key may be to have better abyssal temperature measurements and reanalysis projects which take into account geothermal heat flux (??).
Same answer. Same answer. Same answer. There was a lot of excitement about OHC when it was thought it would invalidate GISS Model E. When it did not, it mostly went away.
For comparison with your Front Range exercise, here is a summary of a different methodology on a similar topic for Australian temperatures. We can see only 0.5 deg C warming in 120 years (depending on data sets available) not the approx. 1.0 deg C claimed officially.
Apologies. Crazy Domain web host password problem.
This is mostly similar. Thanks to Jo Nova’s record keeping.
H/T Chris Gilham,
There is a lot of thoughtful Australian data comparison in this essay. It is quite important in the global temperature adjustment debate, IMO, because the implications can fail to appear in some global data sets (I think).
Both sides present good arguments, so it is difficult to know what is better or worse.
It becomes difficult to believe in GISS/NOAA when looking at the changes Ole Humlum has followed since 2008.
The problem with anomalies is that you cannot see whether the reference temperature has changed or the actual temperature now.
If these temperature compilations were not used politically, I don’t think anybody would care. The differences between them are so small relative to the changes we see every day, month and year.
If it was not for climate science and its temperature measurements, no one would notice any change.
Stefan Rahmstorf showed the dishonesty of that Humlum graph here. The GISS History site shows a proper graph of the changes made to the index over the years, not just picking out an extreme and stretching it. It isn’t much:
Yes but only one side presents data and methods.
Thanks, Monte, for your work investigating this. It adds another example of how adjusted station records tend to increase warming trends over the unadjusted data that was submitted by the weather authorities managing the sites. I did a similar analysis of the highest-quality US stations and found that, for this set of stations, the chance of adjustments producing warming is 19/23, or 83%. The warming is produced both by changing the recorded temperatures and by deleting blocks of months, apparently to be replaced by further processing (infilling, homogenization) when these records are used to calculate global mean temperatures.
The US is adjusted up. Africa is adjusted down.
Guess which continent is bigger.
Mosher is right, Africa is cooling.
Well, it’s a little hard to judge, as you’ve provided no details whatever of your methodology, except as a 40 minute video.
But presumably, as you do understand your own methodology, and you also understand that of NOAA, you’ll be able to tell us, no?
One of the most reliable long-term surface temperature records comes from Lincoln, Nebraska, near the geographical center of the USA. You don’t see a great deal of warming at Lincoln:
The monthly data for Lincoln is available here:
In what sense is that one of the most reliable? That station has had like a billion documented moves.
UHI is to blame. We know it was humans who heated our climate.
Nick Stokes, you can read the text in the film.
UHI doubled the trend and NOAA did nothing.
UHI affects low temperatures, as shown.
“Nick Stokes, you can read the text in the film.”
Great! Forced to spend 40 minutes to read a few slides, and then rewind and all that just to reread a slide. Why no simple text? Why???
Our first-order climate network in the USA is largely comprised of ASOS observations. It is all too common to find a warm bias with the ASOS observations. One such warm bias of about +2 F seems to be ongoing now at Astoria, Oregon. The ASOS has been running warmer than the co-located ESRL surface met station since I began comparing the two last week. Here is a description of the ESRL Astoria site (ASTOR) located at the Astoria Airport next to the ASOS:
Don’t the NOAA homogenization algorithms make nearby station data more similar, i.e., if a rural and an urban station are nearby, part of the trend from the warmer station will be transferred to the colder one? Doesn’t this practice “hide” the possible UHI contamination of some stations?
That is an argument put forth by Anthony Watts long ago.
Now here is the difference between fake skeptics and science.
Fake skeptics ASSERT that the trend will be transferred… they don’t
look at data… they just raise doubt and then make assertions.
This, in contrast, is what science does:
It tests Anthony’s argument for him.
Understand… there are always people making suggestions.
Feynman describes fake skeptics perfectly…
This is embarrassing. Only 1% of GW is in the atmosphere; 93% minimum is in the oceans. Soooooo why are you touting this theoretical 1%?
The fig. 2 chart shows the upturn around 1970. I’m looking at 1965 because that is when the AEW cloud mass began to reduce, allowing more SW to heat the equatorial Atlantic. This also affected the Azores high and events elsewhere, such as the Midwest and western seaboard. So why is it so necessary to avoid the very real changes to the AEW system?
Now, compare these with the warming phase of the Roman and Medieval Warm periods and the many other warming periods of the recent ten thousand years. These temperatures are well inside the bounds of past warming periods.
I’m not really sure what this is meant to be a case study of. The author presents this as his station average versus NOAA’s, but his “NOAA average” is based on a couple of stations picked by him, not by NOAA.
Surely the most appropriate comparison to get NOAA’s average is to check the trend in the relevant grid cell of the global dataset. I’ve done that and get a warming of 1.01degC over 1900-2015 compared to the author’s 0.95degC over the same period. Effectively identical. Put simply, the Fort Collins+Boulder presented here clearly is not representative of the NOAA average for this region.
As it happens the Berkeley method does identify and adjust for an erroneous upward trend in the Fort Collins raw data, which could be attributed to UHI. Their average for the region is about the same as NOAA’s, actually slightly higher.
What would make this nice post even stronger is a characterization of the Front Range stations by site quality based on surfacestations.org. There is a spreadsheet with each station ID, type (urban, suburban, rural) and the CRN rating. I took a sample of the 14 CRN 1 (best-rated) stations in the spreadsheet. GISS homogenization appeared to remove UHI from 4 urban stations; Charleston, SC was especially noteworthy. But it added spurious warming to 9 out of 10 suburban and rural stations across the country. I posted the research at WUWT a couple of years ago.
Monte Naylor, How many regions have you studied? There are bound to be some increasing at rates higher than average, and others at rates lower than average. Exploring why some are above average and others below might be worthwhile, but identifying a single region that is different from average isn’t very helpful.
The issue is a head to head regional comparison of NOAA vs raw data. Whether it is warmer or colder isn’t so much the issue as that they are really different.
Indeed, average has nothing to do with it. This is about Federal manipulation of the data, and not in a good way.
“The issue is a head to head regional comparison”
But it isn’t head to head, at least in the article. It compares five unspecified stations without homogenisation (but TOBS), with two different ones that have been homogenised. They are just different places; there is no way to attribute the difference to adjustment, especially as for the two stations selected, adjustment actually reduces the trend.
Judith did you look at his code?
Please post the code and data.
curryja: The issue is a head to head regional comparison of NOAA vs raw data. Whether it is warmer or colder isn’t so much the issue as that they are really different.
I disagree, in part. He has possibly chosen one region out of many where the difference looks big by a criterion not specified in advance (was it?) What happens in all the other regions? The word “average” was inappropriate (i.e., I goofed).
“Whether it is warmer or colder isn’t so much the issue as that they are really different.”
How is this an issue? If the adjustments or homogenization are scientifically sound then all this would tell you is that you shouldn’t be looking at the “raw” data because it has known biases.
Can someone point me to an explanation of what this “homogenization” process is all about? Temperatures, and how they change over time, are very far from homogeneous. Many stations close together show very different trends, so if homogenization is being imposed by changing the data, that is just wrong, a fabulous bias indeed.
Homogenization is simple. Spencer and Christy do it.
Simple example: the observation practice changes,
either in time, or place, or instruments.
You adjust measurements to account for the change.
You test and validate your adjustments.
[Wikipedia] “relative homogenization assumes that nearby stations are exposed to almost the same climate signal and that thus the differences between nearby stations can be utilized to detect inhomogeneities.”
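The idea in that Wikipedia quote can be made concrete with a toy example. Everything below is synthetic and of my own construction (the crude variance-based changepoint test is an illustration, not NOAA’s actual pairwise homogenization algorithm); it just shows how a shared climate signal cancels in a difference series, leaving a station-specific step visible:

```python
# A minimal sketch of "relative homogenization": the shared climate signal
# cancels in the difference series between neighbors, exposing a
# station-specific step change (e.g. a station move or instrument swap).
# All data here are synthetic; real algorithms use many neighbors and
# formal changepoint tests.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2010)
climate = 0.01 * (years - 1950) + rng.normal(0, 0.3, years.size)  # shared signal

station_a = climate + rng.normal(0, 0.1, years.size)
station_b = climate + rng.normal(0, 0.1, years.size)
station_b[years >= 1980] += 0.8          # inhomogeneity at station B

diff = station_a - station_b             # shared climate signal largely cancels

def step_year(diff, years):
    """Split year minimizing total within-segment variance (crude changepoint)."""
    costs = [np.var(diff[:i]) * i + np.var(diff[i:]) * (diff.size - i)
             for i in range(2, diff.size - 2)]
    return years[2 + int(np.argmin(costs))]

print(step_year(diff, years))  # close to 1980
```

In the real pairwise setting, the station showing a step relative to most of its neighbors is the one flagged and adjusted.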
How does this work for the South Pole and McMurdo? Who decides if they are nearby?
Steven Mosher, I don’t believe Spencer and Christy are doing homogenization any more. Or at least not homogenization in anything like the sense that NOAA does to the surface temperature data.
Quote: “So, instead of the past method of calculating LT as a weighted difference between different view angles of MSU2 (or AMSU5), we are now calculating it as a weighted difference between MSU channels 2, 3,
and 4 (or AMSU channels 5, 7, and 9) at a constant Earth incidence angle. This has the very important advantage that all satellite data necessary for the LT retrieval come from the same location.”
So in version 6 of UAH, Spencer, Christy, and Braswell measure the temperature of a patch of the lower troposphere by combining the measurements of three different bands of microwave radiation from the same location at the same time.
As I understand it, the reason they haven’t been doing this all along is that it took quite a few years before they were able to come up with a method, and adjustments for variations in satellite performance, that made this work. The previous approach was used simply because it was easier to make work.
As a side effect of the new approach, tropospheric temperatures are measured at a higher resolution and, of course, the meaning of what is actually being measured is much clearer.
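The weighted-combination retrieval described in the quote is just a fixed linear combination of channel brightness temperatures. The coefficients below are the UAH v6 LT values as I recall them from Spencer, Christy and Braswell’s description; treat them as illustrative rather than authoritative:

```python
# Sketch of the UAH v6 lower-troposphere (LT) retrieval described above:
# a fixed linear combination of three channel brightness temperatures taken
# at the same location and incidence angle. Coefficients are as I recall
# the published v6 values; treat as illustrative.
W_MT, W_TP, W_LS = 1.538, -0.548, 0.010   # MSU2/AMSU5, MSU3/AMSU7, MSU4/AMSU9

def lt_retrieval(t_mt, t_tp, t_ls):
    """LT brightness temperature from mid-trop, tropopause, lower-strat channels."""
    return W_MT * t_mt + W_TP * t_tp + W_LS * t_ls

# The weights sum to 1.000, so a uniform warming of all three channels
# passes through to LT unchanged, while the negative tropopause weight
# subtracts out the stratospheric influence on the mid-troposphere channel.
print(lt_retrieval(252.0, 218.0, 208.0))
```

The point of doing all three channels at the same location is that no differencing across view angles (and hence across locations) is needed.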
The best explanation for this temperature data manipulation has been put forth by Svend. The religious zeal of the researchers to show warming is very strong – “…do you like it master, did I do good?”
Svend Ferdinandsen | The problem with anomalies is that you cannot tell whether the reference temperature has changed or the actual current temperature has.
If these temperature compilations were not used politically, I don’t think anybody would care. The differences between them are so small relative to the changes we see every day, month, and year.
If it was not for the climate science and their temperature measurements, no one would notice any change.
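Svend’s point about anomalies can be illustrated with made-up numbers: an anomaly is a temperature minus a baseline-period mean, so changing the baseline shifts every anomaly by a constant. Trends are unchanged, but the absolute reference level cannot be recovered from the anomaly series alone:

```python
# Sketch of why an anomaly series hides the absolute reference level:
# switching baseline periods shifts every anomaly by a constant offset,
# so trends are unchanged but the "zero" line means something different.
# The temperatures are made-up illustrative values.
temps = [14.1, 14.3, 14.2, 14.6, 14.8, 15.0]   # annual means, degC
base_a = sum(temps[:3]) / 3                     # baseline = first 3 years
base_b = sum(temps[3:]) / 3                     # baseline = last 3 years

anom_a = [t - base_a for t in temps]
anom_b = [t - base_b for t in temps]

offset = base_b - base_a
# Every anomaly differs by the same constant offset:
print([round(a - b, 6) for a, b in zip(anom_a, anom_b)])
```
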
After viewing the video, it appears that Mr. Naylor has uncovered problems with the official temperature reconstruction. It makes sense to use a TOBS adjustment based on local geographic conditions (similar altitude, longitude, humidity, terrain, etc.). Instead, the official version uses a nation-wide average for the TOBS adjustment.
Also, it appears he has found an example of UHI at Ft. Collins.
This does cast doubt on the homogenization technique used in the official reconstruction.
He never validates his adjustments.
I consider the agreement of the few long-record thermometers to be a partial validation. Unfortunately, there is no way to be certain of the temperature at time X unless an actual reading was taken then. Of course, you can apply some statistical techniques, but then you are inputting data from diverse environments. Not satisfying.
Far better to do what the “professionals” do and “homogenise” it to match the output from the computer games climate models.
cat – after fitting the data to a field, pair-wise homogenization, or whatever the “adjustment” is, it’s not too surprising that statistical tests for consistency between two groups indicate they are very similar. I’m guessing that if you just used TOBS-corrected data, any two random groups would agree on some statistics.
Reiterating Rud’s comment: would it not be informative to pull the CRN data for this region in CO?
An excellent post on the adjustments to raw data.
However, it is not only the raw data that are adjusted but also the homogenised data (see the global land station data in the diagram below).
Two points are evident from the diagram above, namely:
The pre-1890 data have been reduced by ≈ 0.4-0.5 °C in GISS 2016 when compared with GISS 2004 and Hansen et al. (1981).
The 1940s warm period has been gradually cooled by ≈ 0.2 °C, which now makes it cooler than 1980 in the current GISS data set.
My question is, what were NASA GISS scientists doing wrong in the 1981-2004 period that they are doing correctly now?
There is a nonsense comment early in this thread suggesting that I cherry pick. No! I’ve worked through example after example of the type of deceit documented here by Monte Naylor.
My most recent effort has been with the temperature record for Rutherglen; the full report is here: http://climatelab.com.au/wp-content/uploads/NW2016.001.PP_.Marohasy.pdf
Monte has shown in the study how the artificial warming bias in the Fort Collins record due to UHI contamination can become embedded into trends from rural sites through NOAA’s homogenisation method. Similarly, the Australian Bureau of Meteorology uses temperature trends from Melbourne and Sydney to ‘homogenise’ rural sites. It’s a disgrace.
Yet, the panel established by the Australian government to consider this evidence refuses to work through a single example of homogenisation. They will not look at one example that shows what Monte has so eloquently explained here for Colorado. Instead, our mainstream climate scientists, and their managers, hide behind peer review and claimed ‘world’s best practice’. Monte has shown in this study what a sham ‘world’s best practice’ is.
One of my more recent blog posts provides an understanding of long-term temperature trends for SE Australia, here: http://jennifermarohasy.com/2016/12/temperatures-trends-southeast-australia-1887-part/ . For those wanting more information, I can email you a pdf copy of the Elsevier book chapter on which this study is based. Just let me know at firstname.lastname@example.org
No, you do cherry-pick.
You also use the Argument from Personal Incredulity fallacy which is absurd.
If you claim to be some sort of scientist, you know what to do. Publish your “findings” in a credible peer-reviewed scientific journal. Otherwise, take your allegations and your anonymous patrons and go away.
Funny, I don’t recall seeing your name at the top of this blog. Yes, true to form another leftist wants to censor those who don’t drink the same kook aid. The violent branch of your clan infested our capital today with the same kind of intolerance.
I rather liked her analysis as I’m sure others will. Only closed minded dogmatists will not. Deal with the facts. Your team lost at the ballot box and the science, in due time, will render a fit verdict for your theory.
Ohhhh look! A troll! Trying to distract and divert from my comments. Tsk.
Dearest Harry Twinotter,
You misrepresent me! I’ve no anonymous patron. My sources of funding are all detailed at the end of my publications.
You can find a list of my most recent publications here: http://climatelab.com.au/publications/ . All of these are peer-reviewed and in credible scientific journals.
No, I do not misrepresent you.
In which journal is “Temperature change at Rutherglen in south-east Australia” published? Does it include your accusations of professional misconduct against the Australian BOM?
I point out that Rutherglen is one location in the ACORN-SAT reference network.
As you may have seen, I have regularly urged that sceptics who query the official temperature records should record their information in peer-reviewed journals, which would allow us to quote them.
My pleas have fallen on deaf ears, and as a result I get it in the neck from both sides.
Jennifer’s paper is available as a pre-publication PDF here.
As far as I can tell, the version printed in ‘new climate’ is not yet available online.
As to the credibility of the journal, I cannot comment. Jennifer appears to have produced a good piece of work. You may disagree after reading it, in which case it would be useful to read your criticisms.
You are confirming that Marohasy’s paper is not published or peer-reviewed, despite what she tried to imply with her comment.
This is very odd behaviour for a scientist who is accusing the Australian BOM of professional misconduct – no wonder no official takes Marohasy seriously. Her past “cash for comments”, her association with a fossil-fuel-friendly lobby group, and her lack of peer review put her into the crank category.
I don’t know why you waste your time with low brow, anti-science twits. I can tell they are desperate by the tactics used. Political correctness taken to the extreme by becoming the thought police of science, weakening its foundation like termites.
Several people have shown how the GISS manipulations change a station’s temperature. A famous example is Iceland, where the GISS temperature differs visibly from Iceland’s own measurements.
It needs some cherry picking to find it, because you have to look at the stations one at a time, and if one station differs without any good reason, that is enough. It doesn’t matter that the big picture is the same; you do not know for sure, even if it is only one station that has changed.
“Several people have shown how the GISS manipulations change a station’s temperature.”
You’ll never get past amateurishness unless you make some attempt to figure who is doing what. GISS does not do any of the adjustments referred to here. NOAA/GHCN publish unadjusted and adjusted data. GISS uses the GHCN adjusted data. And of course, local geniuses can discover that sometimes the adjusted data is not the same as was originally measured.
You really need to look at both min and max temperature trends, because when you do, you see that there is no support for a slow forcing increasing min temps, and that min temps follow dew points, i.e. water vapor blown inland.
In the early ’70s, when the oceans of the northern hemisphere warmed as the decadal oscillations moved warm water north of the equator, there wasn’t a reciprocal cooling in the southern hemisphere (which would have balanced global temps) because of the large asymmetry between hemispheres.
Pingback: Weekly Climate and Energy News Roundup #255 | Watts Up With That?
I think most of the comments, especially the ones that criticize the author, are not related to the subject.
No matter what the author did or did not do with the temperatures, and no matter what the root cause of the outlier is (microclimate, water vapor, UHI or something else), the unanswered question is: why does NOAA define the temperature of a region that is described by six locations with the extreme outlier of the six, when the homogenization process is supposed to do the exact opposite?
“Some climate scientists, however, are skeptical that the NOAA and NASA historical surface temperature reporting is accurate.”
This is quite an assertion. As much as I might like it to be true, are there any references that might point to its veracity? For example, Professor Lindzen is quoted as saying, “I am skeptical that the NOAA and NASA historical surface temperature reporting is accurate.”