by Steven Mosher
Some updates from the Berkeley Earth Surface Temperature project.
A short time ago, Berkeley Earth completed a major upgrade to its website at http://berkeleyearth.org/. The upgrade included a few new sections, improved navigation for looking up stations, new memos, new datasets and of course links to follow us on social media. I’ll briefly cover some of these changes, and in light of Judith’s post So what is the best available scientific evidence anyways, I’ll spend more time on the new datasets.
The new sections include some work in progress. Most notable here is the beginning of the work we are doing on GCM comparisons to observations. Start here and explore. As this is work-in-progress, I’ll answer some general questions as best I can.
New memos are listed here, and the two highlights in my mind are the work that Zeke Hausfather recently posted [link] , and Dr. Muller’s memo on methane leakage [link] .
With 39000 stations in the database, finding what you want can sometimes be difficult. We’ve improved the interface for finding stations and now you can search by name and latitude/longitude [link], or you can use a clickable map [link].
The improvements to the dataset include some substantial changes. We are now in a position to provide monthly updates somewhat automatically. Since the process requires generating well over 100K new graphics every update, this can take a while after the close of a month, but going forward the data should be more current than it has been in the past.
In addition, we’ve made a few important processing changes since publication. Recently we added an additional 2000 stations. We’ve also improved the “breakpoint” code as a result of an exercise conducted for the last AGU [see poster] where we conducted a double-blind test of our method against synthetic data. Finally, we’ve improved the estimation and removal of seasonality in the kriging process. The net result of these improvements doesn’t change any of our conclusions in a material way, although there are some changes in the early record, all within the rather wide uncertainty bounds for pre 1850 records.
We’ve also added a dataset that many have asked for: station records after “scalpeling,” or “breakpoint adjusted” data. Now you can examine a station record before and after breakpoint analysis, viewing the graphics here; you can download the data here.
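To make the “scalpel” idea concrete for readers who haven’t seen it before: rather than adjusting a record across a suspected discontinuity, the record is cut at the breakpoint and the pieces are treated as separate stations. The sketch below is a toy illustration of that idea only, not Berkeley Earth’s actual breakpoint code; the mean-shift test and the thresholds are my own simplifications.

```python
# Illustrative toy "scalpel": split a station's anomaly series at the largest
# mean-shift candidate if it exceeds a crude threshold, then recurse on the pieces.
# Berkeley Earth's real breakpoint detection is far more careful; this only
# shows the cut-don't-adjust idea.
import numpy as np

def scalpel(anomalies, min_seg=24, threshold=0.5):
    """Return a list of segments, split wherever a mean shift > threshold (C) is found."""
    x = np.asarray(anomalies, dtype=float)
    best_k, best_shift = None, 0.0
    for k in range(min_seg, len(x) - min_seg):
        shift = abs(x[k:].mean() - x[:k].mean())
        if shift > best_shift:
            best_k, best_shift = k, shift
    if best_k is None or best_shift < threshold:
        return [x]                                   # no credible break: keep record whole
    return scalpel(x[:best_k], min_seg, threshold) + scalpel(x[best_k:], min_seg, threshold)

# Example: a 0.8 C step halfway through 20 years of monthly anomalies
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0.0, 0.3, 120), rng.normal(0.8, 0.3, 120)])
pieces = scalpel(series)
print(len(pieces), [round(p.mean(), 2) for p in pieces])
```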
The most notable improvement comes in the addition of gridded data. Currently we have 1 degree gridded fields for the entire globe posted on the data page, and we will have .25 degree grids for CONUS and Europe. The latter will be posted shortly as we’ve just begun to explore them.
I’ll give you a sense of what the gridded data looks like by doing some quick comparisons with CRUTEM and GISSTEMP with 250km smoothing:
Because we express temperature as the combination of a climate component (driven by geography) and a weather component, we have the capability of deriving higher-resolution grids where the station density is high. For the CONUS, we are able to generate grids of .25 degrees (roughly 25 km). For comparison, I show CRUTEM and GISSTEMP:
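For readers curious about the mechanics, here is a minimal sketch of the “climate plus weather” decomposition: fit a smooth climate surface from geography and season, then treat the residual as weather, which is the part that gets interpolated. The column names and the crude linear-plus-seasonal regression are my own illustrative assumptions, not the Berkeley Earth specification.

```python
# Hedged sketch of "temperature = climate(geography, season) + weather(residual)".
import numpy as np
import pandas as pd

def split_climate_weather(df):
    """df columns (assumed): temp, lat, elev, month. Adds climate and weather columns."""
    # crude climate model: linear in latitude and elevation plus an annual cycle
    X = np.column_stack([
        np.ones(len(df)),
        df["lat"], df["elev"],
        np.sin(2 * np.pi * df["month"] / 12),
        np.cos(2 * np.pi * df["month"] / 12),
    ])
    beta, *_ = np.linalg.lstsq(X, df["temp"].to_numpy(), rcond=None)
    out = df.copy()
    out["climate"] = X @ beta
    out["weather"] = out["temp"] - out["climate"]   # this residual is what gets kriged
    return out
```

Where stations are dense, that weather residual can be estimated on a grid finer than the station spacing, which is what makes the .25 degree CONUS and Europe fields possible.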
In addition to CONUS, we also have solved the field for Europe at this resolution. Below I show an example of the kind of detail we can extract:
If you have any questions or comments, please do let us know at: steve@berkeleyearth.org.
JC comment: This is a guest post, one of a series on the Berkeley Earth Surface Temperature project that has been contributed by Steve Mosher. Please keep your comments relevant and civil.
Thank you for the methane link. As far as I can tell, we’re seeing substantial improvement in the technology of reducing leakage, and can reasonably anticipate much more. This eliminates (or at least substantially reduces) concerns over methane leakage as a result of a major societal investment in methane for fixed power plants.
Steve…I believe that Dr. Muller has made a statement to the effect that global warming is undeniable, and that man is the main contributor. Do you know if there is anything new in the dataset interpretation to cause his statement to be refuted?
It kind of looks to me like the areas with the main warming are higher-elevation land with major agricultural “cooling” in progress. Thankfully, land use is a negative feedback or we would be seriously frying.
Cappy states that if mankind did not transform lots of the land into an agricultural zone with higher albedo, then the AGW signal would be even higher and into an apparently catastrophic regime, “frying” in his words.
Nice alarmism there Cappy.
leave it to Webster to not recognize sarcasm.
Time permitting, I may be able to look at your agricultural land use hypothesis. I have a land use dataset going back to the 1800s with a resolution of 5 arc minutes.
That said, what would your hypothesis be for the following cases, for stations in the US starting in, say, 1900:
A) agricultural, no transition
B) starting as grass transitioning to agricultural
C) starting as forest transitioning to agricultural
D) grass, no transition
E) forest, no transition
You might want to cheat and have a look at some of the work I did on land SURFACE temperature and land class.
Steven,
A) agricultural, no transition
There would be an early transition from slash and burn to pasture or plow pre 1800 along with changing crops and rotation practices.
B) starting as grass transitioning to agricultural.
Delayed impact due to gradual depletion of soil moisture and carbon. The USSR “Virgin Lands” campaign provides some insight: early cooling, later warming. Most of the native grasses have much deeper root systems than crops.
C) starting as forest transitioning to agricultural
Same gradual transition.
D) grass, no transition
Depends on regional herds and hunting practices. Hard to verify no transition in usage.
E) forest, no transition
Probably the best to estimate CO2 equivalent forcing, but there are few forests that haven’t had some form of change. Large farming acreage can impact areas hundreds of miles away with erosion and local climate modification.
The problem Cappy is that the 3% do not always recognize sarcasm. That’s why we always have to throw it back in your face.
They also don’t recognize your imitation of Professor Irwin Corey, the comedian who pioneered the use of scientific word salad and gibberish to get laughs. Some people thought he was a legit scientist.
Here’s a thought for you, Dallas. Ocean heat transport controls sea surface temperatures, sea surface temperatures control water vapor, changes in water vapor control changes in diurnal temperatures, and sea surface temperature gradients and interaction with the atmosphere control weather patterns.
Reconstruction of the Gulf Stream transport indicates increased ocean heat transport until some point in the latter half of the 20th century. ARGO shows no trend. Water vapor flattens, diurnal temperature gradients flatten, ocean sea surface temperatures flatten, but the feedbacks continue so you still lose ice and weather patterns over land are still affected. Just a thought.
Steven, what controls what, and when, varies, which is just part of dealing with any non-linear dynamic system. To me it is easier to find the “normal” range of variation and reasons for the limits. Freezing provides a lower limit and evaporation an upper limit. Ocean heat transport produces another range, ~3.2 C, that has to operate within the freezing/evaporation limits. You end up with a big box of +/- 2.5 C for the oceans and an interglacial box of about +/- 1.5 C. Different forcings have different impacts depending on where you are in the box and what direction you are moving in the box. The tighter you try to get the range, the more likely you are to screw up.
“Just a thought”
Here is just a thought. Why don’t you pick up a pencil and sketch out a solution? They used to do this in school. We called them “homework problems”.
Use Mosh as a template. He actually gets his hands dirty and digs into the problem.
Web, because Exxon is paying me not to and I have become used to my checks. Let those that get paid to get the right answer figure out why water vapor shows no trend, sea surfaces aren’t warming, and the diurnal difference has switched direction.
captain,
I cannot translate your description into a testable hypothesis.
Here is an example.
If I compare the trends between two classes (A and B), what will I see?
class A: was agricultural in 1900 and is agricultural today
class B was forest in 1900 and is agricultural today.
make a testable statement about the trends for both of those classes.
Preferably one that will force you to revise your thinking if it is proven wrong.
OK, but like most things it is not that simple.
Forest up to 1900 that transitioned to agricultural cropland afterward would tend warmer. Agricultural cropland in 1900 converted to forest would tend cooler.
Agricultural cropland in 1900 that is still cropland today would vary with farming practice. All should be warmer, but some with irrigation and conservation farming would be cooler than the peak.
Then there are higher latitude grain fields. Because of mechanized farming there is a longer growing season. Wheat farmers can spread fertilizers on spring snow to speed snow melt to avoid snow mold. They should be warmer with a noticeable seasonal shift.
Those are the three cases that probably would provide the most information for generic cases. Of course good farming practices would mean less impact, poor farming practices might have more noticeable impact.
http://berkeleyearth.lbl.gov/auto/Local/TMAX/Figures/50.63N-72.25E-TMAX-Trend.png
The Virgin Lands program started in the mid-1950s.
Let’s focus on 1.
Forest to agricultural.
You say warmer.
If so, next: effect size. Your estimate?
Steven Mosher, “If so, next: effect size. Your estimate?”
Boreal forest to plowed soil temperature change IIRC was ~3C for the peak summer months. Depending on length of growing season, about 0.5 to 1.5 C.
I don’t think Exxon pays Mosh.
Actually, I believe he does it on his own dime.
If you don’t like the way somebody does research, do it yourself. As far as I know, your tax dollars do not affect the way another country does climate research.
Web, which part do you think that I should figure out? That ocean heat transport controls sea surface temperatures? Josh Willis is already on that with his ARGO studies and is finding a correlation. SSTs control water vapor? Brian Rose is on that and his models show they do. Water vapor hasn’t been increasing? Vonder Haar is already on that and not finding a trend. Diurnal temperatures not behaving as expected? The BEST team has found that diurnal temperatures are not behaving as expected already. That ice would melt even after the SSTs stopped going up? Hansen says for thousands of years. That ice melting would affect weather patterns? I think that’s why anyone cares about melting sea ice but let me know if they were just concerned their drinks would get warm.
I’m not doing any research when those that get paid to do the research, and are better at it than the amateurish efforts people like you or I could come up with, are already on it. I am just taking all the little puzzle pieces and trying to fit them together. That is what I enjoy. That is my hobby. You don’t get to dictate to me how to do what I do for fun and for free. Capishe?
Wow capt.
It looks like you have a testable hypothesis.
I need to finish this data for our fall submission. Then I’ll go get your data ready.
compound question.
1. On global warming, there is nothing in the new dataset to suggest that it hasn’t warmed since the beginning of the industrial age.
2. On man being the main cause, I never really viewed the data as making that attribution argument in a conclusive way. That said, I do have one guy out there who is working on assessing the impact of the new data on the fit to CO2 & volcanoes.
We’d better hope the attribution comes out to a low sensitivity, otherwise we’d be way cooler than we are now, and stalling out the warming help from AnthroCO2.
==============
“I believe that Dr. Muller has made a statement to the effect that global warming is undeniable, and that man is the main contributor.”
Too funny. Now Web is getting all respectful with the *Dr. Muller.* If he’d said the opposite, it would have been Dr. Chucklehead.
If Muller could defend his wild statement regarding the “main contributor,” he certainly would have by now.
“If Muller could defend his wild statement regarding the “main contributor,” he certainly would have by now.”
His defense runs like this:
He’s explained the rise from the 1800s to today as a combination of CO2 forcing and volcanoes. Other proposed causes (the sun) could not be shown to have any explanatory value. Absent a concrete proposal to the contrary, there is a warrant to accept the explanation proposed and build on it.
In other terms, the explanation is its own defense, absent a more compelling counter explanation one need not answer every what if, what about, blah blah blah. The defense is really a challenge. propose, in numbers, a better counter explanation.
It would be nice to know how tightly land temperatures are coupled to sea temperatures.
I would guess that the East Coast temperature data is very tightly coupled to the Atlantic SST and West Coast to the Pacific SST.The interior may be coupled to both or neither.
“In other terms, the explanation is its own defense, absent a more compelling counter explanation one need not answer every what if, what about, blah blah blah. The defense is really a challenge. propose, in numbers, a better counter explanation.”
Steven: How is that different from “anything that is not sufficiently researched to be quantifiable does not exist”?
“In other terms, the explanation is its own defense, absent a more compelling counter explanation one need not answer every what if, what about, blah blah blah. The defense is really a challenge. propose, in numbers, a better counter explanation.”
I appreciate that Steve. It’s an interesting “get around,” if I can use that phrase. I was not aware of it.
And yet it seems terribly facile to this lay brain of mine. And convenient. Rather removed from the real world with its messy uncertainties, in its “philosophy of science” sort of appeal, if I’m getting the gist properly. Muller knows damn well there’s a great deal we don’t know. Because we can’t quantify these things, we don’t legitimately take them into account as sources of uncertainty? I don’t see anything about uncertainty in his statements as to cause. Only confidence.
“Steven: How is that different from “anything that is not sufficiently researched to be quantifiable does not exist”?”
If I understand you you are asking what is the difference between
1. Unquantifiable things might exist and we cannot rule out their existence
2. Unquantifiable things do not exist
The difference between those should be clear.
one of them makes a positive statement about the non-existence of an entity (known as a negative existential) and the other, more defensible, says that the existence of these things is logically possible but not empirically supported.
So Muller would say 1 not 2.
The practical consequence is that by saying number 1 you allow that some people may want to go out and look for unicorns. They are free to do so.
Steven: You seem to be ruling out the possibility that I was thinking about: that there are things that are known to exist or likely to exist, and are quantifiable in principle, but cannot be quantified in practice (yet). I can see people right now in the building across the street, but if you ask me how many there are in the building, I couldn’t do better than a wild guess.
Of course the warming since the beginning of the industrial age has nothing to do with the industrial age beginning just when the Little Ice Age was ending. You anthropogenic warming boys sure have an uncanny ability to not let facts get in the way of your beliefs.
“I can see people right now in the building across the street, but if you ask me how many there are in the building, I couldn’t do better than a wild guess.”
Sure you could. You could give a bounded guess.
How many stories is the building? What’s the length and width?
What percentage of the building are you observing? 5%? Do you see 5 people? Multiply by 20 and you have something better than a wild guess: you have an estimate based on an assumption.
So, for example, you might say “it’s cosmic rays.” Well, fine. Then even there you have some things to do to look for the effect, say increased cloudiness after Forbush events.. oops, last I looked there was no effect there.
Let’s suppose you make a guess: “Steve, there are 50 people in that building.” I have some options:
1) Cool, you’ve identified a thing and given a prediction of sorts; we can now go test.
2) Hey look, we are miles away from any bus stop or train stop and the building has 5 cars in the parking lot.. I think your guess of 50 is high.
The point is, until you have at least a general idea of what you want us to look at, we can’t read people’s minds.
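For what it’s worth, the back-of-the-envelope version of the building example is a three-line calculation; the sample fraction and head count are the ones used above, and the factor-of-two uncertainty band is purely my own assumption.

```python
# Bounded estimate: scale up an observed sample under a stated assumption,
# and carry an explicit (assumed) uncertainty range rather than a bare number.
visible_fraction = 0.05                                  # assume ~5% of the building is visible
people_seen = 5
point_estimate = people_seen / visible_fraction          # 5 / 0.05 = 100 people
low, high = point_estimate / 2, point_estimate * 2       # admit a factor-of-two spread (assumed)
print(point_estimate, (low, high))
```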
Steven, Checkers: 64; 24…still have the same problems.
Hmmm…so if I can make an educated guess, then it’s quantifiable? OK, that helps.
So Muller’s ‘proof’ is essentially just ‘we can’t think of anything else’, i.e. still just about speculative models rather than definitive fixes on measurements, e.g. the radiation budget or OHC.
Perhaps the WUWT team will find urban heat islands lurking in northern Siberia?
Seriously, that is why the deniers concentrated so much on finding the weapons of mass UHI. They realized that land-based temperature records held the key to predicting the eventual warming that we would see. Thus to find UHI effects was to discredit the AGW theory.
Alas, no UHI impacts were found lurking underneath the table.
Webster, it is not “UHI” all by its lonesome, it is general land use impacts on the water and carbon cycle. Call it the “Suburban Heat Island”; general land use changes the average soil temperature by a couple of degrees and the seasonal snow coverage duration. Much too complex a subject for you to grasp.
No Cappy, that would be scientific studies that you would need to refer to, not some bonefish theory that you always seem to dream up.
Webster, here is one study
http://static.msi.umn.edu/rreports/2008/319.pdf
The biggest impact is that bare soil (plowed, and close-cut grasses, i.e. overgrazed) has ground surface temperatures that run around 2 C warmer than tall grass, forest and well-maintained lawns/pasture. This is one of the advantages of conservation farm practices: the stubble and debris from the previous crop shade the soil and help retain moisture. Crops that have longer periods of bare surrounding soil before developing a full canopy also would cause higher temperatures. At higher latitudes where early snow removal is used there would be a warming impact which was not included in that study.
http://www.uvm.edu/vmc/reports/Snowremoval.pdf
That study estimates a 0.6 C early snow removal impact on forest temperatures in Vermont. Of course that study expects global warming to reduce the snow, and not a bunch of farmers spreading ash, manure and dyed chemicals to reduce snow to prevent snow mold and get an early start planting.
Cappy, Your bonefish theory needs further elaboration. What you always do is set up these weak premises that are far from complete. Someone needs to slap you upside the head.
In the past ten thousand years of a new paradise of more tightly bounded temperatures, it has always warmed after every cold period. The Consensus Climate Experts tell us that we would have not warmed this time. They tell us that we warmed this time, just like all the other times, only because humans raised a trace gas by a fraction..
They have not explained why this time should have been different. They totally ignore this question. I have asked this question many times over five years: if we were not supposed to warm this time, why were we not supposed to warm this time? What is happening is following the data from the past ten thousand years and doing the same thing.
If we were not supposed to warm this time, why were we not supposed to warm this time?
Statistical science triumphs once again and BEST puts to rest the notion that an urban heat island effect has any impact on the long term land warming observed.
The BEST derived climate sensitivity of 3 C warming for doubling of CO2 also supports the mean scientific estimates for AGW.
Bottom line is that more data is a good thing when it comes to a better understanding of what is happening, climate or otherwise.
yep, more data is a good thing. Funny how the models overestimate polar amplification and underestimate “land” amplification. That is a head scratcher ain’t it?
Yup, there are a bunch of head-scratchers in that.. stay tuned.
Cappy, yeah it is funny how land warming is about twice that of ocean surface warming. You must be splitting your sides over that one. ha ha
Webster, not really. When you consider that land use can amplify both CO2-equivalent forcing and natural variability, it makes perfectly good sense. Of course you need to consider the specific heat differences of both land and land at altitude to notice that there is a mid-troposphere hot spot in the NH.
The funny part is the longer-term natural variability, which is avoided like the plague by the Great and Powerful Carbon minions.
http://redneckphysics.blogspot.com/2013/08/the-madness-of-reductionist-methods.html
I appreciate references to Redneck Physics as source material.
Webster, “I appreciate references to Redneck Physics as source material.”
You should, there were links to the original data sources and everything.
Redneck Physics is a step below Moron Science. You were the one that named it, don’t blame me.
While BEST might be the best temperature mashup of CONUS land-based surface thermometers we have, it still might be a bit premature to jump to contusions concerning UHI or other land use effects on the land temp record.
Sure, I give you the same challenge I give everybody else.
There are 39000 stations.
Split them into urban and rural and make a prediction.
Split them into airport and non-airport. Make a prediction.
That’s what I did. What I thought would be found was not found.
I went looking for the unicorn. Couldn’t find it.
Now you, who have never gone unicorn hunting, tell me that I didn’t look good enough. Fine. Give me a map.
@Mosher
How about getting a weather app for your smartphone and looking at the temperatures reported in a major city and the surrounding area. If you’re unsure which city to use, use Cleveland.
Because almost every evening I can see a different temperature for stations 10-20-30 miles apart.
Also, you can ride a motorcycle from a city to a rural area and go from warm air to almost cold in a few miles.
Thank you Steven Mosher for this outstanding scientific effort!
The web page “Summary of BEST Findings” summarizes the strong scientific evidence that (1) relatively simple models and (2) global land temperatures agree outstandingly well with (3) the soon-to-be-published conclusions of Hansen et al in regard to climate sensitivity, sea level, and atmospheric CO2.
Judith Curry’s wise call for “best available science” has been well-met by the trifecta of agreement between BEST data sets, simple climate models, and Hansen-style thermometric analysis. For which, this appreciation and thanks are extended to you and your BEST colleagues, Steven Mosher!
Conclusion The best available science accords with the conclusion of Hansen et al that “Burning all fossil fuels, we conclude, would make much of the planet uninhabitable by humans.”
Thank you, Steven Mosher and BEST, for contributing so many of the “best available scientific elements” that so strongly support this crucial scientific, economic, and moral conclusion.
—————–
Collegial Note The BEST collaboration’s image directory “http://static.berkeleyearth.org/img/” is presently wide open to public inspection. It’s probably best to close this invitation to abusive denialist hacking and/or cherry-picking, and instead provide contextual access via explicit image-by-image URLs.
thanks fan. I’m trying to avoid the superlative.
if somebody wants to hack around the site (they can find the .25 deg data) I don’t have an issue with it.
“James Hansen wrote this in 1999.
in the U.S. there has been little temperature change in the past 50 years, the time of rapidly increasing greenhouse gases — in fact, there was a slight cooling throughout much of the country
NASA GISS: Science Briefs: Whither U.S. Climate?”
http://stevengoddard.wordpress.com/2013/08/16/im-100-sure-that-the-ipcc-is-lying/
And do read the Trenberth email..
There has been no warming.
Fan: Conclusion The best available science accords with the conclusion of Hansen et al that “Burning all fossil fuels, we conclude, would make much of the planet uninhabitable by humans.”
It is no great surprise that politically-funded ‘best available’ science’s findings support the argument for more politics. And bear in mind nearly all climate science funding is political money. And integrity has not exactly been the watchword in climate science, as the official climategate coverups showed.
Collegial Note The BEST collaboration’s image directory “http://static.berkeleyearth.org/img/” is presently wide open to public inspection. It’s probably best to close this invitation to abusive denialist hacking and/or cherry-picking, and instead provide contextual access via explicit image-by-image URLs.
Yes, data-hiding has always played a key part in the “best available science”. Why change a winning formula eh?
The Berkeley website has on its home page a listing of a few “Climate Facts”.
Here’s one:
Climate Fast Facts
In order to effectively address global greenhouse gas emissions, the U.S. must set an example that the developing world can afford to follow.
Is this a fact? Or is it advocacy? Such an attitude tends to destroy our belief in BEST objectivity.
Big Muddy example.
=============
I believe that is a legitimate fact. Of course Berkeley is a US-based project, where a more globally based project would have used “developed nations” instead of the US. Then again, the US has been doing a better job relative to the other “developed nations” as far as CO2 and other pollutant reductions go, with land use and watershed restoration being a big part of the story.
Land use in the US offsets ~17% of US CO2 emissions.
The reason we say the US is because the US has the technology today to reduce carbon emissions in China through assisting them with fracking. There are a number of myths about fracking, and fracking in China, that we are working to dispel; myths that get repeated over and over again in the MSM.
Moshpup:
“The reason we say the US is because the US has the technology today to reduce carbon emissions in China through assisting them with fracking. There are a number of myths about fracking, and fracking in China, that we are working to dispel; myths that get repeated over and over again in the MSM.”
Thanks for reminding me that the Good Doc has “skin in the game!” 8>)
“Dr. Muller has made a statement to the effect that global warming is undeniable, and that man is the main contributor.”
The only fact in that statement is the first claim. The second? Pure advocacy. He might be right in the end, but he’s not shown how, given what we now know, which in the scheme of things isn’t much.
You lays your money down and you takes your chances. Richard Muller didn’t wait to deal, he bet while still shuffling the cards.
===============
As a poker player Kim, I can attest to the fact that betting before the cards are all dealt is very unwise, and likely to end badly.
“The only fact in that statement is the first claim”
Actually, I doubt anyone can demonstrate that any warming of any kind has been ‘global’.
Andrew
Unless it’s a trick deck.
Good information, SM. Good post.
Steven Mosher clearly has an open mind about attribution. That is reassuring in someone who is exploring the meaning of the temperature record.
Richard Muller clearly does not have an open mind about attribution. That is disturbing in someone who trumpets the meaning of the temperature record.
=================
My respect for Mosher has only grown with time.
Agree with the above. Thx SM.
Bts
I don’t think it’s an issue of open-mindedness.
It’s more like this.
We all look at explanations and tend to have certain mental “biases”:
A bias toward acceptance and building upon
A bias toward digging deeper.
So we laid a foundation: Rich looks at it. It’s sturdy, it’s flat. Build on it.
I look at the foundation, and wanna know how flat, how sturdy, can I make it better, will it survive an earthquake, what if this, what if that.
meanwhile the walls go up.
In my mind these are not epistemically driven differences. There is no right and wrong here, there is just human curiosity. One form wonders what it can build on this and finds out by actually building. Another form of curiosity digs a little deeper. Visionaries versus worrywarts? I dunno.
Steven, I feel almost the same way about Building 7.
I look at the foundation, and wanna know how flat, how sturdy, can I make it better, will it survive an earthquake,
Berkeley is transected by the Hayward fault. There is a 66% chance of a greater-than-7 quake at present on the fault (which is somewhat overdue).
“I look at the foundation, and wanna know how flat, how sturdy, can I make it better, will it survive an earthquake, what if this, what if that.” – Steven Mosher.
Pretty good. In Minnesota you want about 6 feet deep I think. Otherwise the frost might heave it. Push it up a bit.
How about a bias toward data? When scientists overstate certainty it confuses the political discussion on both sides and leads to poor decisions. I am still not sure your uncertainty levels are well expressed in any way the layman could understand.
How sure are you UHI is correctly dealt with?
How sure are you that areas with low station concentration are correctly dealt with?
How sure are you that siting issues are cleaned up?
My issue on both sides is that certainty is grossly over-stated.
How sure are you UHI is correctly dealt with?
1. I’ve looked for UHI for about 5 years now. Every time I thought I found a bias it has vanished when I pushed a little deeper. At one point Zeke and I found a .05 C (ish) decadal trend bias, about what I would expect from other things I’ve looked at. In the end that too vanished.
The best I can say is that the bias exists, we know that from individual station analysis and small regional analysis, but the effect is sporadic, highly variable, and so small when averaged over time that it falls below the noise floor of any global analysis. I’m fairly confident that if the effect were large, if the effect were pervasive, if the effect were not temporally sporadic, then you would easily see it by separating stations by a rural/non-rural classification. But there are other effects (like land class) that swamp the signal. For example, the temperature difference between grassland and bare land is greater than the difference between grassland and city (depending on latitude).
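For anyone who wants to try that split themselves, the skeleton of the comparison is straightforward; the sketch below is my own rough outline, not the Berkeley pipeline, and the column names, classification labels, and per-station OLS trend metric are all assumptions.

```python
# Sketch: compare decadal trends between two station classes (e.g. rural vs non-rural).
# Assumes a long-format table with columns: station_id, clazz, year, anomaly.
import numpy as np
import pandas as pd

def station_trend(g):
    """OLS trend in deg C per decade for one station's annual anomalies."""
    return np.polyfit(g["year"], g["anomaly"], 1)[0] * 10.0

def compare_classes(df):
    trends = (df.groupby(["clazz", "station_id"])
                .apply(station_trend)
                .rename("trend_per_decade")
                .reset_index())
    return trends.groupby("clazz")["trend_per_decade"].agg(["mean", "std", "count"])

# If UHI were large, pervasive, and persistent, the "urban" class mean should
# stand clearly above the "rural" one; the point above is that it does not.
```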
How sure are you that areas with low station concentration are correctly dealt with?
1. The notion of “correct” is probably misleading you. The issue is: have you estimated the field optimally for those locations where you have no data; have you minimized the error of prediction as best you can?
I think we have improved on other methods by using standard geostatistical methods. Testing with synthetic data indicates that we handle this problem more optimally than others. There is some headroom for improvement that I’m working on, slowly because it’s tough. We also have a good sense of how well we do by decimating the field and by looking at what happens when we add 2000 new stations.
How sure are you that siting issues are cleaned up?
1. I’m not even sure that there is a siting issue. The assumption is that siting standards were set up because of measured field data. This isn’t the case. There is no published study backing up the effects of siting on measured results, no systematic study that looks at the various factors and their importance. The Leroy standard has no published field data to support its criteria. This is not to say that siting doesn’t matter, but rather that the standards are not based on any firm empirical footing. I would not suggest violating the standards, but assuming that a violation leads to a measurable bias is just an assumption.
Lots of work to do here, but there is a small amount of data (from Leroy and his partner) that suggests the bias is small (less than 1/10th C) and that the biggest effect is an increase in noise.
Again, when I first found the Leroy ratings and passed them on to Anthony, my hope was that I could compare class 1 to class 5 and just show a big difference. Back in 2007-08 we started that at Climate Audit; John V and I worked on it. What we found was really small. So we waited for more site data. Years. Then looking at that we found nothing. Now there is a new site rating scheme. Anthony has the sites classified. I asked for the data last July (2012) and complained that folks would just sit on that data, make blog posts, and never release it.
I was told to calm down. A year has passed. No data, no new paper.
Anyway, I started some work on automated classifications. May return to that.
Just one question: why would we use surface stations at all when they are so unevenly spaced, subject to UHI, and have other issues? Why not just depend on the satellite temp record, which has many fewer issues?
Because the satellite record only started in 1979.
Most climate base records use a 30 year standard. 1979 is thus enough to establish the sat record as the baseline, and then make the land based record reconcile with it. Why is this so hard?
jsg,
“Most climate base records use a 30 year standard. 1979 is thus enough to establish the sat record as the baseline, and then make the land based record reconcile with it. Why is this so hard?”
It shouldn’t be, but because the databases are baseline-dependent there are seasonality issues that can cause problems. You have the same thing with paleo, which can create problems like “dimples” that smoosh the information right out of the data. Since there is no “standard” method, you kind of have to wing it and figure out your own uncertainty. Then of course you are “cherry picking” a baseline because everyone should blindly accept the “science”.
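The mechanical part of putting two records on the same footing is simple enough; the sketch below re-baselines a monthly series to a 1981-2010 climatology month by month (one common convention, not the only defensible one), which is the usual first step before comparing land and satellite series. The function and its defaults are my own illustration.

```python
# Sketch: re-baseline a monthly series to a common reference period so that
# datasets with different native baselines can be compared like for like.
import pandas as pd

def rebaseline(series, start="1981-01-01", end="2010-12-31"):
    """series: monthly values with a DatetimeIndex. Returns anomalies relative
    to the chosen period's month-by-month climatology."""
    ref = series.loc[start:end]
    climatology = ref.groupby(ref.index.month).mean()
    return series - climatology.reindex(series.index.month).to_numpy()

# rebaseline(best_monthly) and rebaseline(uah_monthly) are then directly comparable,
# at least as far as the baseline goes; the seasonality caveats above still apply.
```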
justsomeguy31167 | August 18, 2013 at 12:27 pm |
Actually, the satellite temperature record has many more, not fewer issues.
There is less, not more, transparency in the handling of the data. It is less, not more, open to inspection.
It is known to suffer from technical issues in the 1970’s equipment that will not be repaired and can only be guessed at.
It is known the post-processing between RSS and UAH has produced essentially opposite trends over much of the satellite record, and that neither well corresponds to any terrestrial dataset.
UHI is not resolved by satellites any more than by land-based tracking.
Changes and moves in stations are less pervasive than changes in atmospheric depth, and less easy to account for.
There are too few satellites by far to confirm the data on satellites the way statisticians can parse the surface record.
Less than 64 years is too little to establish credible climate trendology; the satellites only have another three decades to go before that date. It’s doubtful they’ll last.
Satellites don’t measure the surface, they measure the lower troposphere; it’s a different thing.
Satellites are new; we have three hundred years more experience with thermometers at surface stations than with satellite technology, in which to figure out what can go wrong and how to deal with it.
We really can’t trust the satellite guys, there’s so few of them and they are so incredibly secretive, and have been caught repeatedly making crap up.
…and have been caught repeatedly making crap up…
http://www.woodfortrees.org/plot/rss/offset:0.4/plot/uah/plot/hadcrut4gl/from:1979/offset:-1
The correspondences between the datasets are blindingly evident – although showing amplification in the troposphere from increased latent heat transport in El Niño.
This is good. Very informative. Thank you Steven Mosher and the whole Berkeley team.
What is the best (and BEST?) estimate of the ratio of the human contribution of CO2 increase to the total CO2 increase? Murry Salby, in his book Physics of the Atmosphere and Climate, presents the figure 4%. This might seem off-topic, but some CO2 information is presented at the BEST website.
Thanks.
I don’t focus on the CO2 stuff. It’s hard enough staying current with temperature.
In 1979 these creeping bits of clay will put up satellites, I hear.
=====================
BEST has proven to be very useful. I just wish we could use it on Wood for Trees. No update since the first issue.
I sent a note. No response.
I am bothered by global/land and land-only discrepancies which remain. UAH and RSS are in quite close agreement since the beginning of satellite measurement in 1979. Both are also very close to HADSST2. Makes sense, since Earth’s surface is about 71% ocean. BEST (land only) is noisier, but only slightly warmer (variably less than 0.3C using monthlies) up until about 1995. Then the discrepancy widened to now over 0.5C. In consequence, the first three data sets estimate 0.13C warming per decade, while BEST estimates 0.28C, double. A growing disparity between land and sea/planet is not thermodynamically credible over decades by any mechanism I can think of other than growing UHI bias.
Double checking using woodfortrees to access UAH and RSS land only compared to BEST, there was little or no discrepancy until 2000, and now it is about 0.2C. I would recheck the BEST last decade or so rather carefully. Especially since this concerns the pause issue, and so GCM validity, and so indirectly CO2 attribution. All are big, important topics.
Precisely my reaction. Until the surface records can be reconciled with the satellite record, you have a basic quandary: the same radiative physics that drive the greenhouse effect also drive the satellite measurements. Deny the latter and you deny the former. This circle has yet to be squared. And yes, the UHI is the most obvious way to do this.
The problem is: what does a reconciliation look like?
Take three levels of data:
1. the land surface temperature
2. the air temp at 2 meters
3. the air temp at the lower troposphere
Do you expect them all to have the same trend? Why or why not?
Well Mosher, I expect the data to somewhat match reality, which BEST does, though I am not all that sold on the actual confidence intervals.
So I compared the Reynolds Oiv2 SST for the NH, 24N to 70N with BEST Tmax and Tmin.
http://redneckphysics.blogspot.com/2013/08/sea-dog-wagging-best-tail.html
That appears to be close to reality: oceans drive the show, land amplification produces overshoots of the mean, things decay to the mean as both settle out on their own time frames.
UAH NoExt has the same basic relationship since the oceans are still in the driver’s seat. Everything is fine in the data “massage” world :) BEST “Global” land is about 0.2 C higher after 2000 because of apparent amplification, which is how the “surface” temperature world should be. Ain’t it funny how the large thermal mass of oceans and lakes tends to dominate “surface” temperature?
I think it is the sun and the external forcing perturbations which tend to dominate the surface temperature.
Decrease the output of the sun by a fraction and see how the temperature responds. Increase the output of the sun by a fraction and see how the temperature responds.
Of course it is impossible to do this experiment unless it is a controlled environment, and we wouldn’t even if we could. The closest we can come is by taking careful measurements during a total solar eclipse, and monitoring the temperature response.
This is being pedantic I realize, but when you get clowns like Cappy spewing garbage, pedantic is good as a balance.
“Of course it is impossible to do this experiment unless it is a controlled environment, and we wouldn’t even if we could. The closest we can come is by taking careful measurements during a total solar eclipse, and monitoring the temperature response.”
Every night the Sun goes down, and at different latitudes the ratio of day to night changes every day. There are over 100 million daily station records of as careful a measurement as we have to analyze.
That does not provide a global imbalance.
What I described is actually close to a delta forcing function, which is an ideal impulse for characterizing a response against.
Webster, that solar eclipse sounds like just the plan: check for a significant temperature change from about 5 minutes of midday solar impact on a system with millennial-scale lag times. I am sure you can amaze your minion buddies.
https://lh5.googleusercontent.com/-zpvEmGcB7mg/UhTIuFIoFfI/AAAAAAAAJQE/Lsl8-W9sr9I/s720/Solar%2520and%2520IPWP.png
Then again we could look at a solar dTSI reconstruction versus the firebox of the planet, the IPWP, just to see if there might be a little more to solar than the brain-dead minions of the Great and Powerful Carbon might have been led to believe. Now what is that recurrent ~100-year wiggle?
You ever seen that Penn and Teller dihydrogen monoxide skit?
Hmm, I just checked the newest version of BEST global with UAH land only NH and they are right on time. UAH Global Land is ~.2 C lower than BEST global because BEST has a strong NH bias which Webster loves to play with to “prove” his pet theories. I think it requires some fiddling with the seasonality to get rid of the bias..
Cappy, you are projecting again. I don’t have any “pet theories” other than to try to simplify the textbook climate science that already exists.
“lower than BEST global because BEST has a strong NH bias which Webster loves to play with to “prove” his pet theories.”
Huh, NH bias? That claim is not supported by any analysis. It’s designed not to overweight.
Land, Ho!
======
Steven Mosher, “Huh, NH bias? That claim is not supported by any analysis. It’s designed not to overweight.”
It is over-weighted naturally. The bias is due to land distribution. The Northern Extratropics land and/or oceans nearly perfectly match BEST Global, while BEST Global is ~0.2 C warmer than global land and oceans. BEST has a strong NH seasonal cycle because the majority of land is in the NH. Plus you have the worst surface land measurements in the Antarctic.
You have a better term for a “Global” data set that more strongly represents half of the globe?
And while we are at it, “land amplification” is due to the differences in specific heat capacity. “Land abuse” reduces soil moisture content, increasing average soil temperature and increasing the “land amplification”. That is the “suburban” effect. The Minions of the Great and Powerful Carbon have consistently denied “land amplification.”
Webster, “Cappy, you are projecting again. I don’t have any “pet theories” other than to try to simplify the textbook climate science that already exists.”
By using BEST land only, GLOBAL TAVE only, to fit to your favorite sensitivity. BEST actually has Tmax, Tmin and two whole hemispheres.
Land plus Ocean gives the global if one applies the land/ocean areal split. That works for the HadCru data and I would not doubt that it works for the BEST data sets as well.
Cappy has a goal to turn everything into FUD no matter how straightforward the concept is. Gee, I wonder why?
Webster, you are the FUD master. I have already used BEST absolute with ERSST3/Reynolds to estimate that elusive global absolute temperature. I know how it works. What I was doing was comparing northern extent oceans only to the BEST Tmax/Tmin data as is. There you see an amplification of Tmax and a good track with Tmin. Tave includes that land amplification factor. If you can tease out the CO2 part, the natural variability part and/or the “other” part, you can estimate attribution, kinda the goal.
You of course deny that land amplification exists other than due solely to CO2, period, end of conversation, because your pet theory needs it. You were the one that switched to the preliminary BEST data to eke out that almost 3.0 sensitivity, like a good minion on a mission to keep Catastrophe alive in the face of a barrage of 1.6 C estimates.
Why do you call it land amplification where it is clearly ocean sinking that is the comparison? A heat sink will reduce the temperature of that location, making it appear that non-heat sinked regions are rising in temperature.
Once again, you lack the intuitive grasp of clearly defined physical principles.
The thing that you call “amplified” is real heat that exists and is not being dreamed up in your fantasy world.
Deal with it.
Webster, “Why do you call it land amplification where it is clearly ocean sinking that is the comparison?”
Because that is what it is. There is some reduction in heat loss from the oceans, there is some CO2 forcing simply from the concentration increase, and there is some amplification by land surfaces of both natural and anthropogenic impacts. You assume incorrectly that CO2 is 100% efficient. That is a pipe dream. You have to split out the different impacts to estimate attribution. Land use that impacts soil moisture and carbon retention reduces the specific heat capacity of that land surface. That causes an amplification of available forcing.
You can’t estimate any of the impacts without looking at regional responses to compare impacts. The northern oceans from 20 to 90 have a very good correlation with BEST global Tmin and less with BEST global Tmax. There is an increase in the diurnal temperature range since 1985 which is inconsistent with CO2 forcing, and since both the northern ocean heat uptake and SST flatten out with an increase in CO2 forcing, you can’t write that off as oceans sinking heat. I have pointed that out to you before, but you stick with “Global” data out of fear of learning something.
http://4.bp.blogspot.com/-j88jG6E6v-0/UhOOrjbK5wI/AAAAAAAAJPo/8uDteDcLYEo/s640/best+versus+the+northern+oceans+ERSST3.png
That is BEST Tmax, Tmin and the ERSST3 20 to 90N using a modern-era baseline. DTR decreased until ~1980 then reversed to increasing. There was a shift, it is real, it is interesting, and it happens to coincide with increased SSW intensity, like the ocean battery has reached full charge for whatever “cell” or layer was charging.
Bob Tisdale in his e-book ‘Who Turned On the Heat? The Unsuspected Global Warming Culprit, El Niño-Southern Oscillation’ observes that land surface air temperatures, representing only 30% of our planet’s surface, mimic and exaggerate changes in sea surface temperatures. He says that ‘There have always been problems with the hypothesis of anthropogenic global warming and there still are. A big problem with it; how can downward long-wave radiation (infrared radiation associated with greenhouse gases) have any impact with the surface and subsurface temperatures of the global oceans when infrared radiation can only penetrate to the top few millimeters of ocean surface?’
His book relies on satellite data, Reynolds (OI.v2), and records global sea surface temperature anomalies, Nov 1981-April 2012, whose linear trend shows overall warming of 0.084 C/decade (p61), and shows that a large portion of the global oceans (p61) have warmed very little, East Pacific SST anomalies, 90S-90N, 180E-80W, by only 0.006 degrees/decade.
We need to ask Bob Tisdale how he thinks the land warming twice as fast as the ocean, as it has been since 1980, can be forced by the ocean. This is a fingerprint of external forcing. Internal variability would show the land having a muted response to the ocean and lagged. In fact 30-year averaged trends show the land leading the ocean, another sign of external forcing.
“He says that ‘There have always been problems with the hypothesis of anthropogenic global warming and there still are. A big problem with it; how can downward long-wave radiation (infrared radiation associated with greenhouse gases) have any impact with the surface and subsurface temperatures of the global oceans when infrared radiation can only penetrate to the top few millimeters of ocean surface?’”
A lot of research went into Tisdale’s e-book, evidently. In just 2 minutes I found this:
Why greenhouse gases heat the ocean
http://www.realclimate.org/index.php/archives/2006/09/why-greenhouse-gases-heat-the-ocean/
Tisdale goes on to suggest that the warming skin responds by increasing evaporative losses to the atmosphere. That evaporation increases seems logical, but I see no reason to assume it is exactly equal to the increase in downwelling radiation at time T0. Ultimately losses will equal gains as the oceans warm to a conditional equilibrium – and the time lag in this equation is ultimately little understood.
Conditional on all other things being equal. The reality is that forcings – understood as the state of radiative flux at TOA – change considerably as a factor of ocean and atmospheric circulation. Ocean heat content follows closely on changes in TOA flux as the primary agent of change in the climate system.
Anyone who suggests the ocean can’t warm under increased forcing has just said that all the warming has to be on the land. It is not a sustainable situation for the forcing to change and for no part of the surface to warm to cancel it. Of course both the ocean and land respond, but the land surface can respond faster.
Changes in global forcing can only be measured. One of the components is latent heat released high in the troposphere – increasing IR losses to space. ENSO is thus the most obvious source of varied IR losses. Higher SST cools the planet.
Higher land surface temperatures result from lower water availability – thus less evaporation – over land, but mostly because of cooling ocean surfaces from upwelling. It is not an all that important or interesting consideration.
That idea is like Lindzen and Choi, but the problems were many. First, they extrapolated from a region in the tropics to a global sensitivity conclusion; second, they looked at short time scales and extrapolated to climate change; and third, they used El Nino sea-surface warming and extrapolated to CO2 warming effects. Other than that, fine.
This is measured in TOA radiant flux.
http://meteora.ucsd.edu/~jnorris/reprints/Loeb_et_al_ISSI_Surv_Geophys_2012.pdf
I have not the slightest clue what you are on about and don’t particularly give a rat’s arse.
CH, that’s you. I thought Dr Dunderhead was being more reasonable until this. Anyway, as Lindzen found out, few believe ENSO noise can tell us much about climate, even Spencer.
I should think it obvious that in dealing with dunderheads – a different approach is required. Science seems to go in one ear and out the other – so we try harder with yet more science and ever simpler explanations.
You should try reading the study – Jimbo – as it shows the extreme ‘noise’ in climate forcing.
‘Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture. An illustration of the variability in TOA radiation is provided in Fig. 1, which shows a continuous 31-year record of tropical (20S–20N) TOA broadband outgoing longwave (LW) radiation (OLR) between 1979 and 2010 from non-scanner and scanner instruments.’
The ‘noise’ is decadal at least.
e.g. http://www.benlaken.com/documents/AIP_PL_13.pdf
The PDO and ENSO are related – with more frequent and intense La Nina in cool PDO and vice versa.
Spencer?
http://www.drroyspencer.com/global-warming-background-articles/the-pacific-decadal-oscillation/
NASA?
http://earthobservatory.nasa.gov/IOTD/view.php?id=8703
So what have we learned boys and girls?
Spencer makes a clear distinction between ENSO and PDO in their usefulness for climate change. Spencer and Braswell even more explicitly complains that people use ENSO (ahem, Lindzen and Choi). This is what I was referring to, so you can switch to talking about PDO, no problem. Satellites don’t cover a PDO cycle yet, but we can return to it when they do.
ENSO and the PDO are linked in what is known as the Interdecadal Pacific Oscillation – or better yet the Pacific Decadal Variation.
‘This study uses proxy climate records derived from paleoclimate data to investigate the long-term behaviour of the Pacific Decadal Oscillation (PDO) and the El Niño Southern Oscillation (ENSO). During the past 400 years, climate shifts associated with changes in the PDO are shown to have occurred with a similar frequency to those documented in the 20th Century. Importantly, phase changes in the PDO have a propensity to coincide with changes in the relative frequency of ENSO events, where the positive phase of the PDO is associated with an enhanced frequency of El Niño events, while the negative phase is shown to be more favourable for the development of La Niña events.’ http://onlinelibrary.wiley.com/doi/10.1029/2005GL025052/abstract
‘Unlike El Niño and La Niña, which may occur every 3 to 7 years and last from 6 to 18 months, the PDO can remain in the same phase for 20 to 30 years. The shift in the PDO can have significant implications for global climate, affecting Pacific and Atlantic hurricane activity, droughts and flooding around the Pacific basin, the productivity of marine ecosystems, and global land temperature patterns. This multi-year Pacific Decadal Oscillation ‘cool’ trend can intensify La Niña or diminish El Niño impacts around the Pacific basin,” said Bill Patzert, an oceanographer and climatologist at NASA’s Jet Propulsion Laboratory, Pasadena, Calif. “The persistence of this large-scale pattern [in 2008] tells us there is much more than an isolated La Niña occurring in the Pacific Ocean.” ‘ http://earthobservatory.nasa.gov/IOTD/view.php?id=8703
We have most of a warm mode in the satellite and part of a cool mode behaving exactly as hypothesized. Come back to me when it starts warming again.
I too figured Dr Dunderhead, Downunderhead, was Chief.
Sockpuppets are the lowest form of internet scum, and Chief should really knock off the late night drinking binges.
If you have some actual science questions ask Dr Dunderhead. Otherwise p_ss off and do something equally useless – like say not thinking about penguins.
e.g. http://www.urban75.org/useless/bored.html
I’d lay odds on you succeeding at excellence in the field of useless endeavours, interweb tripe and generally looking like a dork.
Have fun, get messy, make mistakes as they say. Then again not making mistakes doesn’t seem to be an option for you. Looking on the bright side – if there weren’t plenty of dunderheads like you I’d have to put my laptop down and get a life.
“Double checking using woodfortrees to access UAH and RSS land only compared to BEST, there was little or no discrepancy until 2000, and now it is about 0.2C. I would recheck the BEST last decade or so rather carefully. ”
Funny you should mention that. Without letting the cat out of the bag, I will say that Zeke, Robert and I are in the process of doing some comparisons across a broad range of datasets. But you should probably move away from simple time series if you want to understand a spatio-temporal thing.
The focus is CONUS, finely gridded data; it includes sat data, various reanalysis datasets, and I hope (but can’t promise) some out-of-sample data.
Oh great, more post-hoc analysis. Just what we need.
“Oh great, more post-hoc analysis. Just what we need.”
Huh.
Let’s see if I can explain.
I construct a field from 10 data points. That field is continuous: it gives a value at locations where I have no measurement.
Like so.
You live in Marin, I live in Mountain View. We use kriging to estimate the temperature in San Francisco. I have no measurements in SF.
My field says “SF was 54F on January 1, 1987.” That’s a prediction.
How do I test that?
Simple: go find the data for San Francisco.
Basically, there are a bunch of stations that are not in the 39K.
Some of them cost money. But I have predictions for what they should be, given their location.
So the field is a prediction; it is not an average. People forget this about models of global temperature. They are not averages.. oh well, some people USE averaging to compute them, and they call them averages, but technically, deep in the bowels, they are predictions.
Every time I get new temperature data I get to test the prediction of the model.
A cool dataset from the early 1800s in the US exists that I’m trying to get, a dataset with full repeated calibration. Now nobody else uses this data because it doesn’t belong to the base period (CRU needs a base period), but we don’t need a base period. Anyway, our model will have a prediction for what these temperatures should be. And if I can weasel this dataset out of its owner’s hands, I can test the model.
Not post hoc.
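To make the "field is a prediction" point concrete, here is a minimal simple-kriging sketch in Python. The station values, coordinates and exponential covariance below are made-up placeholders rather than BEST's actual correlation model; the point is only that the fitted field returns a value, and hence a testable prediction, at a location with no measurement.

import numpy as np

# Hypothetical anomalies (deg C) and rough lat/lon for two stations (made up).
obs = np.array([1.2, 0.8])                       # Marin, Mountain View
pts = np.array([[38.07, -122.53],                # Marin
                [37.39, -122.08]])               # Mountain View
target = np.array([37.77, -122.42])              # San Francisco: no measurement here

def cov(d, sill=1.0, length=3.0):
    # Assumed exponential covariance; BEST's real correlation model differs.
    return sill * np.exp(-d / length)

d_obs = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)  # station-station distances
d_tgt = np.linalg.norm(pts - target, axis=-1)                       # station-target distances

weights = np.linalg.solve(cov(d_obs), cov(d_tgt))  # simple-kriging weights
print(weights @ obs)  # predicted SF anomaly, testable against an actual SF record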
Steve, where can we access the predicted vs actual datasets?
Great. How about giving us the data rather than your implications. Whenever you finish the data massage.
Doc predicted versus actual.
At this stage I am collecting some official out-of-sample stuff. Then I have to verify that no versions of it were used in the construction of the field, and then that I'm not cherry-picking stuff that is easy.
For the existing field, actual versus predicted, I've got that but need to work out how that gets posted. I was basically supposed to work on some more geography stuff (like your coast problem) but am working on a different thing now.
“Great. How about giving us the data rather than your implications. Whenever you finish the data massage”
Data massage? Dear me, Rud, you hoping for a happy ending? Check with someone else.
WebHubTelescope (@WHUT) | August 21, 2013 at 1:47 am |
“That does not provide a global imbalance.”
“What I described is actually close to a delta forcing function, which is an ideal impulse for characterizing a response against.”
The Sun setting over almost every location on the Earth, every day, is not global? An eclipse is also a local phenomenon.
And what you’ve described isn’t going to happen, “Turn the Sun down a little”.
So if you want to see how the Earth reacts to an impulse of solar energy, you have one choice, and we have data.
I think you’d be surprised to see that on average temps drop at night the same amount they went up the day before.
It is interesting that the GCMs' ratio of land to ocean warming ranges from slightly less than 1 for some models to almost 2 for some others. The real ratio is somewhere in between, although it has been near 2 since 1980 from what I have seen. Also, 2/3 of the models underestimate the land trend relative to BEST. I suspect a lot of the difference is in the ocean models used in these GCMs and how quickly their oceans can respond to the forcing. The lack of land response may be a sign that the ocean responds too easily in the models. A great plot to have would be the GCM land and ocean trends on the axes of a scatter plot, with a line showing the BEST land trend. I suspect the points would lie around a line, with the lowest land trends corresponding to the highest ocean trends. The model data is presented in the graphs, so I guess I could do it myself.
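A plot like the one described above takes only a few lines; the trends below are random placeholders standing in for the models, and the BEST land-trend line is purely illustrative, so the real numbers from the graphs would need to be substituted in.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
ocean = rng.uniform(0.05, 0.20, 40)           # placeholder ocean trends, C/decade
land = ocean * rng.uniform(1.0, 2.0, 40)      # placeholder land trends, C/decade

plt.scatter(ocean, land, label="GCMs (placeholder values)")
plt.axhline(0.27, linestyle="--", label="BEST land trend (illustrative)")
plt.xlabel("Ocean trend (C/decade)")
plt.ylabel("Land trend (C/decade)")
plt.legend()
plt.show()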
JimD, “Also 2/3 of the models underestimate the land trend relative to BEST. ”
OMG! It is worse than we thought!
It is a bad sign for those of us who live on land when the ocean is not warming as much as expected, because then the land has to warm more than expected with the model mean behavior.
JimD, “It is a bad sign for those of us who live on land when the ocean is not warming as much as expected, because then the land has to warm more than expected with the model mean behavior.”
That is right, because we all know the model mean is golden and BEST unbiased. No "suburban effect" there, no sirree.
oops, /sarc since some wouldn’t know.
The spread of model sensitivities is somewhat surprising too. Some had very little 20th century warming, much less than observed. Can’t claim bias in those groups.
Jimmy Dee observes:”The spread of model sensitivities is somewhat surprising too. Some had very little 20th century warming, much less than observed. Can’t claim bias in those groups.”
They were just unlucky, or inept. If they don’t catch up with the dogma soon, their funding will disappear.
But they are used in the IPCC reports, blasting the idea that there is a conspiracy.
JIm D writes:”But they are used in the IPCC reports, blasting the idea that there is a conspiracy.”
While the truth is racing around the Internet, to co-opt an oft-used saying, the MSM is still in the bathroom taking its morning crap. Which is to say, they don't care. They rush to summaries proclaiming 95 percent certainties based on nothing more than some sort of internal poll, and that's what they print.
Also, there’s more than one kind of conspiracy. Get a bunch of people together whose continued employment is contingent on the existence of the very thing they’re commissioned to study in order to determine whether it exists or not, what do you suppose is the preferred result?
Jim D, the conspiracy theorists need their theories, stop winding them up by claiming said conspiracy theories don’t exist.
Conspiracies.
A wacky notion seems to have arisen, whereby an organisation working to further its interests is said to involve a "conspiracy".
Drug company science says its drugs are safe, government climatology says the climate is unsafe without more government.
These are not conspiracies, they are Business-As-Usual; people and organizations acting in their own interests – exactly as you would expect.
Turns out the scatter plot is a line. That is, there is a good correlation between land and ocean sensitivity in the 40 GCMs, but not in the way I thought. Land and ocean sensitivity are highly positively correlated with each other, but the land/ocean trend ratio has no visible correlation with global trend. Turns out these models are inherently sensitive or not, but it has nothing to do with the land trend relative to the ocean trend, which is interesting but hard to explain.
“But they are used in the IPCC reports, blasting the idea that there is a conspiracy.”
According to Jimmy Dee, the few laggard GCMs are used in the IPCC reports as sort of token negroes, to show us that the IPCC is not biased towards a certain agenda. Of course, we are not stupid enough to believe that.
Wow between this comment and the “building 7” comment above…
They show even less sensitivity than observations indicate. In fact the models span the observations on both sides. They would be wary of outliers in each direction when using them for projections. This is how they would come up with a consensus.
To echo what lolwot said, just WOW on this
When it is said that nature explains everything, that includes the existence of shamans and witchdoctors among us who appoint themselves as seers of the future.
Shamans and witchdoctors generally do not base predictions on data. Scientists do. There is a difference. Your job? Figure out who is the shaman and who is the scientist in the climate debate.
The current crop of shamans are not predicting that there will be climate, but that the climate will be bad if their fears about it are not indulged and acted upon by all in the manner the shamans prescribe.
So that some of us plebs get it strait concernin’ who is what,
when publishin’ a new paper on climate or an import-aint publick pronounc-meant, these clim-atologists should display the current
letters after their name, fer example, Dr Gym Handsum PHD in Shamanism, M.A in Witchcraft, Hallows University.
A serf..
Steven Mosher,
re: Comparisons with GCMs
Stats are not among my working tools. Are the approaches developed in the following papers valid for checking GCM behavior against data?
Note that the first citation was published in 2001 (the work was very likely completed before then) and offered the possibility that the GCMs would calculate the "global averaged near-surface temperature" to be greater than "measured" values. The possibility seems to have been proven correct. The last citation below was published in 2013.
Can you estimate whether application of the methodologies to the calculated dependent variables at every time step would, or would not, be useful?
All abstracts and quoted material are copyright of their respective authors, not me.
——————
R.B. Govindan, Dmitry Vjushin, Stephen Brenner, Armin Bunde, Shlomo Havlin, Hans-Joachim Schellnhuber, “Long-range correlations and trends in global climate models: Comparison with real data”, Physica A. Vol. 294 (2001) pp. 239–248. http://havlin.biu.ac.il/PS/gvbbhs409.pdf
Abstract
We study trends and temporal correlations in the monthly mean temperature data of Prague and Melbourne derived from four state-of-the-art general circulation models that are currently used in studies of anthropogenic effects on the atmosphere: GFDL-R15-a, CSIRO-Mk2, ECHAM4/OPYC3 and HADCM3. In all models, the atmosphere is coupled to the ocean dynamics. We apply fluctuation analysis, and detrended fluctuation analysis which can systematically overcome nonstationarities in the data, to evaluate the models according to their ability to reproduce the proper fluctuations and trends in the past and compare the results with the future prediction.
From the concluding discussions:
We have also obtained similar qualitative behavior for other simulated temperature records. From the trends, one can estimate the warming of the atmosphere in future. Since the trends are almost not visible in the real data and overestimated by the models in the past, it seems possible that the trends are also overestimated for the future projections of the simulations. From this point of view, it is quite possible that the global warming in the next 100 yr will be less pronounced than that is predicted by the models.
Rudolf O. Weber, and Peter Talkner, “Spectra and correlations of climate data from days to decades”, Journal Of Geophysical Research, Vol. 106, No. D17, Pages 20,131-20,144, September 16, 2001. http://onlinelibrary.wiley.com/doi/10.1029/2001JD000548/abstract
Abstract
The correlations of several daily surface meteorological parameters such as maximum, minimum, and mean temperature, diurnal temperature range, pressure, precipitation, and relative air humidity are analyzed by partly complementary methods being effective on different timescales: power spectral analysis, second- and higher-degree detrended fluctuation analysis, Hurst analysis, and the direct estimation of the autocorrelation in the time domain. Data from American continental and maritime and European low-elevation and mountain stations are used to see possible site dependencies. For all station types and locations, all meteorological parameter show correlations from the shortest to the longest statistically reliable timescales of about three decades. The correlations partly show a clear power law scaling with site-dependent exponents. Mainly, the short-time behavior of the correlations depends on the station type and differs considerably among the various meteorological parameters. In particular, the detrended fluctuation and the Hurst analyses reveal a possible power law behavior for long timescales which is less well resolved or even may remain unrecognized by the classical power spectral analysis and from the autocorrelation. The long-time behavior of the American temperatures is governed by power laws. The corresponding exponents coincide for all temperatures except for the daily temperature range with different values for the maritime and the continental stations. From the European temperatures those from low-elevation stations also scale quite well, whereas temperatures from mountain stations do not.
D Vjushin, R B Govindan, S Brenner, A Bunde, S Havlin and H-J Schellnhuber,”Lack of scaling in global climate model”, Journal Of Physics: Condensed Matter, Vol. 14 (2002) pp. 2275-2282. http://havlin.biu.ac.il/PS/vgbbhs420.pdf
Abstract
Detrended fluctuation analysis is used to test the performance of global climate models. We study the temperature data simulated by seven leading models for the greenhouse gas forcing only (GGFO) scenario and test their ability to reproduce the universal scaling (persistence) law found in the real records for four sites on the globe: (i) New York, (ii) Brookings, (iii) Tashkent and (iv) Saint Petersburg. We find that the models perform quite differently for the four sites and the data simulated by the models lack the universal persistence found in the observed data. We also compare the scaling behaviour of this scenario with that of the control run where the CO2 concentration is kept constant. Surprisingly, from the scaling point of view, the simple control run performs better than the more sophisticated GGFO scenario. This comparison indicates that the variation of the greenhouse gases affects not only trends but also fluctuations.
Peter Talkner and Rudolf O. Weber, “Power spectrum and detrended fluctuation analysis: Application to daily temperatures,” Physical Review E, Volume 62, Number 1, July 2000. http://www.physik.uni-augsburg.de/theo1/Talkner/Papers/T_Weber_PRE_2000.pdf
Abstract
The variability measures of fluctuation analysis (FA) and detrended fluctuation analysis (DFA) are expressed in terms of the power spectral density and of the autocovariance of a given process. The diagnostic potential of these methods is tested on several model power spectral densities. In particular we find that both FA and DFA reveal an algebraic singularity of the power spectral density at small frequencies corresponding to an algebraic decay of the autocovariance. A scaling behavior of the power spectral density in an intermediate frequency regime is better reflected by DFA than by FA. We apply FA and DFA to ambient temperature data from the 20th century with the primary goal to resolve the controversy in literature whether the low frequency behavior of the corresponding power spectral densities are better described by a power law or a stretched exponential. As a third possible model we suggest a Weibull distribution. However, it turns out that neither FA nor DFA can reliably distinguish between the proposed models.
R. B. Govindan, Armin Bunde, Shlomo Havlin, “Volatility in atmospheric temperature variability,” http://arxiv.org/pdf/cond-mat/0209687v1.pdf
Abstract
Using detrended fluctuation analysis (DFA), we study the scaling properties of the volatility time series Vi = |Ti+1 − Ti| of daily temperatures Ti for ten chosen sites around the globe. We find that the volatility is long range power-law correlated with an exponent close to 0.8 for all sites considered here. We use this result to test the scaling performance of several state-of-the art global climate models and find that the models do not reproduce the observed scaling behavior.
Armin Bunde, Jan Eichner, Rathinaswamy Govindan, Shlomo Havlin, Eva Koscielny-Bunde, Diego Rybski, and Dmitry Vjushin, “Power-Law Persistence in the Atmosphere: Analysis and Applications”, http://arxiv.org/pdf/physics/0208019v2.pdf
Abstract
We review recent results on the appearance of long-term persistence in climatic records and their relevance for the evaluation of global climate models and rare events. The persistence can be characterized, for example, by the correlation C(s) of temperature variations separated by s days. We show that, contrary to previous expectations, C(s) decays for large s as a power law, C(s) ∼ s^−g. For continental stations, the exponent g is always close to 0.7, while for stations on islands g ≈ 0.4. In contrast to the temperature fluctuations, the fluctuations of the rainfall usually cannot be characterized by long-term power-law correlations but rather by pronounced short-term correlations. The universal persistence law for the temperature fluctuations on continental stations represents an ideal (and uncomfortable) test-bed for the state-of-the-art global climate models and allows us to evaluate their performance. In addition, the presence of long-term correlations leads to a novel approach for evaluating the statistics of rare events.
Dmitry Vjushin, Igor Zhidkov, Shlomo Havlin, Armin Bunde, and Stephen Brenner, “Volcanic forcing improves Atmosphere-Ocean Coupled General Circulation Model scaling performance”, http://arxiv.org/pdf/physics/0401143.pdf
Abstract
Recent Atmosphere-Ocean Coupled General Circulation Model (AOGCM) simulations of the twentieth century climate, which account for anthropogenic and natural forcings, make it possible to study the origin of long-term temperature correlations found in the observed records. We study ensemble experiments performed with the NCAR PCM for 10 different historical scenarios, including no forcings, greenhouse gas, sulfate aerosol, ozone, solar, volcanic forcing and various combinations, such as natural, anthropogenic and all forcings. We compare the scaling exponents characterizing the long-term correlations of the observed and simulated model data for 16 representative land stations and 16 sites in the Atlantic Ocean for these scenarios. We find that inclusion of volcanic forcing in the AOGCM considerably improves the PCM scaling behavior. The scenarios containing volcanic forcing are able to reproduce quite well the observed scaling exponents for the land with exponents around 0.65 independent of the station distance from the ocean. For the Atlantic Ocean, scenarios with the volcanic forcing slightly underestimate the observed persistence exhibiting an average exponent 0.74 instead of 0.85 for reconstructed data.
C. A. Varotsos, M. N. Efstathiou, and A. P. Cracknell, “On the scaling effect in global surface air temperature anomalies”, Atmos. Chem. Phys., 13, 5243–5253, 2013. http://www.atmos-chem-phys.net/13/5243/2013/ doi:10.5194/acp-13-5243-2013
Abstract
The annual and the monthly mean values of the land-surface air temperature anomalies from 1880–2011, over both hemispheres, are used to investigate the existence of long-range correlations in their temporal evolution. The analytical tool employed is the detrended fluctuation analysis, which eliminates the noise of the non-stationarities that characterize the land-surface air temperature anomalies in both hemispheres. The reliability of the results obtained from this tool (e.g., power-law scaling) is investigated, especially for large scales, by using error bounds statistics, the autocorrelation function (e.g., rejection of its exponential decay) and the method of local slopes (e.g., their constancy in a sufficient range). The main finding is that deviations of one sign of the land-surface air temperature anomalies in both hemispheres are generally followed by deviations with the same sign at different time intervals. In other words, the land-surface air temperature anomalies exhibit persistent behaviour, i.e., deviations tend to keep the same sign. Taking into account our earlier study, according to which the land and sea surface temperature anomalies exhibit scaling behaviour in the Northern and Southern Hemisphere, we conclude that the difference between the scaling exponents mainly stems from the sea surface temperature, which exhibits a stronger memory in the Southern than in the Northern Hemisphere. Moreover, the variability of the scaling exponents of the annual mean values of the land-surface air temperature anomalies versus latitude shows an increasing trend from the low latitudes to polar regions, starting from the classical random walk (white noise) over the tropics. There is a gradual increase of the scaling exponent from low to high latitudes (which is stronger over the Southern Hemisphere).
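Since nearly all of the papers above rest on detrended fluctuation analysis, here is a bare-bones first-order DFA in Python for anyone who wants to try it on a station series; it is a sketch of the generic method, not the exact procedure used in any of the cited papers.

import numpy as np

def dfa(x, scales):
    # First-order DFA: integrate the series, detrend it in windows of size s,
    # and return the RMS fluctuation F(s) for each s.
    y = np.cumsum(x - np.mean(x))
    F = []
    for s in scales:
        rms = []
        for i in range(len(y) // s):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        F.append(np.mean(rms))
    return np.array(F)

# The scaling exponent is the slope of log F(s) against log s, e.g.
# alpha, _ = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)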
Dan,
At this stage we are not focusing on fine-scale temporal characteristics.
Basically the work is focused on larger-scale metrics.
Do the models get amplification right? The global average? The response to volcanoes, changes in seasonality?
Personally I think it's a trivial matter to select small-spatial-scale, high-frequency data and show mismatches, and also to cherry-pick good matches.
Interesting – I don’t see in the BEST data any sign of the Mexican ‘hot-spot’ seen in the GISS data visualisation here http://warmingworld.newscientistapps.com/?utm_source=PA&utm_medium=email&utm_campaign=climate (a screenshot of what I mean is here http://twitpic.com/c6k984)
There is a ‘step’ of about 1.5 deg C in the GISS data from that part of Mexico in about 1980 and I wondered whether it can be seen in the raw data that BEST used. I’m not sure how to look at each Mexican station on the BEST site individually, but the ones I’ve found don’t show this ‘step’. Still puzzled.
Steve: Standard methodology for screening thermometers in temperature stations didn't become widespread until the late 1800s, when station coverage was much lower than today. Is the BEST reconstruction methodology able to handle the biases from those early periods? Can you point to any data simulations or other evidence that supports your answer?
The bottom-up way of approaching this problem would be to create an "adjustment" for the early years. We handle the problem top down.
You can think of it this way:
T = C + W + e
The temperature at any given time and place is a combination of:
the Climate for a location (a function of latitude, altitude and season),
the Weather for that location, and an error.
We iteratively solve this equation by minimizing. At the end you will generate a correlation length. At distance zero, 87% of the variance is explained, leaving a "nugget"; this nugget is interpreted as the total error from all known causes.
So rather than building up an error model from the bottom up, where you assign errors and then sum them, a process filled with guesswork, we solve the field top down by minimizing, and what's left over is the error due to all causes. This number happens to be much larger than the error estimated by others using a bottom-up approach.
In general, the error bars in the pre-1850 era tend to be dominated by the spatial components.
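A rough illustration of what a "nugget at distance zero" means, using an assumed exponential variogram fitted in Python; the semivariance numbers are placeholders and the functional form is a guess, not the model in the BEST methods paper.

import numpy as np
from scipy.optimize import curve_fit

def variogram(h, nugget, sill, length):
    # Exponential variogram: the nugget is the variance left at zero separation.
    return nugget + sill * (1.0 - np.exp(-h / length))

# Placeholder empirical semivariances at a few station separations (km).
h = np.array([10., 50., 100., 300., 600., 1000.])
gamma = np.array([0.15, 0.35, 0.55, 0.85, 0.97, 1.00])

(nugget, sill, length), _ = curve_fit(variogram, h, gamma, p0=[0.1, 1.0, 300.])
print(sill / (nugget + sill))  # share of variance explained at zero distance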
Which assumes that top down is correct and that the measurement itself is a good place to start. The sat record shows that is a fallacy, as do the UHI studies.
You pick this method as it gives the largest temp rise, not because it produces the least error.
“You pick this method as it gives the largest temp rise, not because it produces the least error.”
Yeah justme, that's how it's done. Let's think about that. Error is expressed as a +/- value; the greater the error, the more uncertain the true value. So you can take a position and assume a large error allows one to say the true values in early years were maybe cooler, thus making the rise in temperature look greater, or you could take the opposite position and say it allows one to say the true values were maybe warmer, making the temperature rise look smaller. But neither position is really plausible; larger errors only allow one to say those earlier measurements are less certain as to their true value. So it really doesn't allow one to claim it was done purposely to 'give the largest temp rise'. That is an unsupported assertion where you assume the worst in those doing the work, which says more about you than it does about the BEST team.
“Which assumes that top down is correct and that the measurement itself is a good place to start. The sat record shows that is a fallacy, as do the UHI studies”
The error includes UHI, which is rather small in every study I've done for the past 5 years. The "correctness" of the top-down approach can be seen in the study using synthetic data, and also in the results you get when you add out-of-sample data. If the error estimate were wrong then you'd expect new data to lie outside the error bounds at a higher rate than the specified error of prediction.
Steve: The paper below discusses what you call “bottom up” corrections applied (misapplied?) to a whole region on the basis of studies at one site. If the average BEST station had these kind of biases before 1850, does it make any sense to put pre-1850 and post-1850 records on a single plot?
Bohm et al. Climatic Change (2010) 101:41–67
Abstract: Instrumental temperature recording in the Greater Alpine Region (GAR) began in the year 1760. Prior to the 1850–1870 period, after which screens of different types protected the instruments, thermometers were insufficiently sheltered from direct sunlight so were normally placed on north-facing walls or windows. It is likely that temperatures recorded in the summer half of the year were biased warm and those in the winter half biased cold, with the summer effect dominating. Because the changeover to screens often occurred at similar times, often coincident with the formation of National Meteorological Services (NMSs) in the GAR, it has been difficult to determine the scale of the problem, as all neighbour sites were likely to be similarly affected. This paper uses simultaneous measurements taken for eight recent years at the old and modern site at Kremsmünster, Austria to assess the issue. The temperature differences between the two locations (screened and unscreened) have caused a change in the diurnal cycle, which depends on the time of year. Starting from this specific empirical evidence from the only still existing and active early instrumental measuring site in the region, we developed three correction models for orientations NW through N to NE. Using the orientation angle of the buildings derived from metadata in the station histories of the other early instrumental sites in the region (sites across the GAR in the range from NE to NW) different adjustments to the diurnal cycle are developed for each location. The effect on the 32 sites across the GAR varies due to different formulae being used by NMSs to calculate monthly means from the two or more observations made at each site each day. These formulae also vary with time, so considerable amounts of additional metadata have had to be collected to apply the adjustments across the whole network. Overall, the results indicate that summer (April to September) average temperatures are cooled by about 0.4°C before 1850, with winters (October to March) staying much the same. The effects on monthly temperature averages are largest in June (a cooling from 0.21° to 0.93°C, depending on location) to a slight warming (up to 0.3°C) at some sites in February. In addition to revising the temperature evolution during the past centuries, the results have important implications for the calibration of proxy climatic data in the region (such as tree ring indices and documentary data such as grape harvest dates). A difference series across the 32 sites in the GAR indicates that summers since 1760 have warmed by about 1°C less than winters.
Can I get a clarification of something? The data page of the BEST website describes the dataset used by BEST as having:
Yet the resulting temperature series is described as:
Is BEST saying seasonal fluctuations were removed but seasonality still exists in the series, or is there a mistake somewhere? Also, this post says:
Yet the description of the dataset says seasonality is removed:
While pointing to a dataset that has not had kriging applied to it.
Could I get a clarification of what steps BEST has taken to address seasonality in its data?
Brandon.
First clarification
“Can I get a clarification of something? The data page of the BEST website describes the dataset used by BEST as having:
each series… adjusted by removing seasonal fluctuations.
Wrong.
These are not datasets "used" by BEST as you describe it.
These are datasets produced.
Produced, not used.
There is a difference between using a dataset and producing one. This is an output of the system; it is produced, not used.
“Same as “Single-valued” except that each series has been adjusted by removing seasonal fluctuations. This is done by fitting the data to an annual cycle, subtracting the result, and then re-adding the annual mean. This preserves the spatial variations in the annual mean.”
So, an annual cycle is fit to the data, removed, and an annual mean is then re-added. Think of it as taking an annual anomaly.
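As a rough sketch of that description in Python: fit an annual cycle, subtract it, re-add the annual mean. The harmonic fit is a guess at one reasonable implementation, not the exact BEST code.

import numpy as np

def remove_seasonality(frac_year, temps):
    # Fit an annual cycle (fundamental plus one harmonic), subtract it,
    # then re-add the annual mean, as in the dataset description.
    w = 2.0 * np.pi * np.asarray(frac_year)
    X = np.column_stack([np.ones_like(w), np.sin(w), np.cos(w),
                         np.sin(2 * w), np.cos(2 * w)])
    coef, *_ = np.linalg.lstsq(X, temps, rcond=None)
    return temps - X @ coef + np.mean(temps)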
Steven Mosher:
I’m confused. Why do you say “these are not datasets”? I did not refer to multiple datasets. I said “the dataset used by BEST.” And I said that because that page explicitly states:
It’s silly to say:
As though I didn't understand some simple distinction, while what I stated is the same thing your own webpage states. If the dataset I referred to isn't used by the Berkeley Earth Averaging Methodology, you should say the page is wrong and fix it. You shouldn't dismiss what I say based on an errant point of semantics.
Wrong, Brandon.
The quote you lifted was from a description of the datasets produced.
The datasets USED are here
Source files
The source files we used to create the Berkeley Earth database are available in a common format here.
LOOK at the diagram.
See the bottom, where it says SOURCE FILES. Those are the datasets USED. Follow it up. See the top? Those are the outputs.
Those used are at the bottom under source files.
All the rest are outputs.
Steven Mosher, the quote I “lifted” explicitly says the data set I referred to is “use by the Berkeley Earth Averaging Methodology.” The diagram you tell me to “LOOK at” shows seasonality is removed prior to the formation of the Berkeley Earth Dataset. The descriptions of the various data sets on that page state what is done in the formation of each data set, and not a one says a word about Kriging or the like being used.
Moreover, what I describe is the same as what the BEST results paper says. It describes the scalpeling process then says:
This is a description of pre-processing steps taken before the Berkeley Earth Averaging Methodology is implemented. That is demonstrated by the fact it is followed by:
Everything says seasonality was removed before the Berkeley Earth Averaging Methodology was used. Additional steps may be used as well, but there is no way to deny BEST says it removes seasonality as part of its data set creation.
Wrong again, Brandon.
"Everything says seasonality was removed before the Berkeley Earth Averaging Methodology was used. Additional steps may be used as well, but there is no way to deny BEST says it removes seasonality as part of its data set creation."
See the methods paper and start with equation 3.
The change I describe is basically a change to equation 3.
In the first version of the approach
T = C + W
We "remove" or estimate the climatology C.
That "leaves" a residual.
We then "remove" or estimate the seasonality.
That leaves the weather.
We then krig the weather
and add them all back together.
So seasonality is "removed" so that you can krig the weather field.
Then that structural element is added back in to put out the final answer.
That final answer is the field with seasonality included.
You can now take that final field and remove seasonality if you like and produce an output with seasonality removed AGAIN.
In the new process, which I refer to in the post, seasonality is not represented as a part of the weather decomposition.
Temperature is decomposed as follows
T = C + W + S
And so where before we first decomposed T into C and W and then decomposed W into W + S, we now do it directly, minimizing C, W, and S. We call this "removing," but it's really simultaneously estimating the contribution of each.
Some people like to refer to this as "removing" or detrending, but it's a decomposition that allows us to isolate the W field and then krig it. The "averaging" process then recombines the elements and outputs various data products.
Steven Mosher, you keep insisting I’m wrong, yet you don’t address anything I say. All you do is repeat yourself over and over. If I am wrong, show how what I say is wrong. Don’t simply ignore it while saying it’s wrong.
The BEST website and results paper both say the data set I referred to is used by the Berkeley Earth Averaging Methodology. As such, I say it is used by the Berkeley Earth Averaging Methodology. If it is not, you need to explain why both the BEST website and results paper say it is.
In the meantime, if you’re going to claim the data set I referred to is the output fields of the Berkeley Earth Averaging Methodology, you’ll have to explain why that data set is a list of temperature station data. Why would we be able to pick out individual stations from an interpolated output field? Why wouldn’t we be able to pick out locations not covered by a temperature station? Why would records be of uneven length?
Here's a better question. If it's an interpolated field output, why does the data come with a file named:
Or a Readme file that says:
I’d love to hear how “the main dataset used for the analysis conducted by the Berkeley Earth project” is really just an output field.
Brandon.
I think I see the cause of the confusion.
See my post at the bottom.
There are input data files. These are sources.
There are output files or datasets. These are posted on the website; they have .txt extensions after you unzip.
There are intermediate datasets. These are both produced by and used by the averaging process, depending on the program options that you have set. They are in the SVN dump under data. They have .mat extensions.
Second
“Is BEST saying seasonal fluctuations were removed but seasonality still exists in the series, or is there a mistake somewhere? Also, this post says:
Finally, we’ve improved the estimation and removal of seasonality in the kriging process.
Yet the description of the the dataset says seasonality is removed:
by fitting the data to an annual cycle, subtracting the result, and then re-adding the annual mean. This preserves the spatial variations in the annual mean.
###############
“Finally, we’ve improved the estimation and removal of seasonality in the kriging process.”
This refers to the computation process NOT the output datasets, so you get confused when you compare the two.
The process of estimation works by estimating a Climate for every position and then kriging the residual.
During the estimation of Climate we do the following
T = f(y,z,t), or the temperature is a function of latitude, altitude and time.
The time term is the seasonal component of the climate for that location.
In the previous version this site seasonality was removed via a fundamental and a couple of harmonics. We also had a source fork that removed this seasonality by minimizing. For simplicity we focused on the first path.
In any case you get a climate for every location and time. What's left is the weather. That residual gets kriged.
Now you have two fields, a climate field and a weather field. You add them back together and you have a temperature field. This field has seasonality in it. The first step is to remove the seasonality to leave the weather in its own field. Then after kriging the weather you add it back to the climate field. Since the climate field is defined for all x,y,z,t and the weather field is defined for all x,y,z,t, that result gives you values at all places on the field, even where you haven't recorded temps.
Finally, when it comes to producing OUTPUT data sets we take that field
and produce products of various flavors.
In one of those OUTPUT OPTIONS you can select a field that has the seasonality removed from the final field. In another option you can have the seasonality preserved.
So to recap: the method whereby we calculate the field involves a step where seasonality is removed for the purpose of leaving a residual of weather. That weather residual is kriged. The seasonality is then added back in and you get a final field which is just a field of temperature.
When it comes time to produce output OPTIONS, we provide a bunch of different options: without seasonality, with seasonality, quality controlled, breakpoint adjusted, etc.
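A toy version of that recap in Python, purely schematic: a crude least-squares "climate" in latitude, altitude and season, with the residual standing in for the weather field that gets kriged. None of the regression choices here are BEST's actual ones.

import numpy as np

def fit_climate(lat, alt, month, temps):
    # Crude climate model: T ~ a + b*lat + c*alt + annual harmonic.
    w = 2.0 * np.pi * np.asarray(month) / 12.0
    X = np.column_stack([np.ones_like(w), lat, alt, np.sin(w), np.cos(w)])
    coef, *_ = np.linalg.lstsq(X, temps, rcond=None)
    return X @ coef

# climate = fit_climate(lat, alt, month, temps)
# weather = temps - climate                  # residual: this is what gets kriged
# field = climate_field + kriged_weather     # recombined, seasonality back in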
Brandon
“Can I get a clarification of something? The data page of the BEST website describes the dataset used by BEST as having:
each series… adjusted by removing seasonal fluctuations.”
Your quote comes from here
“Seasonality Removed
Same as “Single-valued” except that each series has been adjusted by removing seasonal fluctuations. This is done by fitting the data to an annual cycle, subtracting the result, and then re-adding the annual mean. This preserves the spatial variations in the annual mean.”
THIS IS AN OUTPUT. It is produced. It is not used.
You wrote: “The data page of the BEST website describes the dataset used by BEST as having:
each series… adjusted by removing seasonal fluctuations.”
Your QUOTE comes from the paragraph describing the data PRODUCED,
not the data used as ingest.
First of all, let’s concede that BEST has given us some valuable new information regarding our “globally and annually averaged land only surface temperature anomaly” record. The new improvements have added to this knowledge and should be commended by one and all.
The project has not performed any formal attribution studies to identify root causes, yet even without these BEST has generally underscored the hypothesis being sold by the IPCC, namely that human GHG emissions (principally CO2) have been the primary cause of recent warming of the land.
The Muller/Mosher logic goes as follows (with Mosher tossing in "unicorns" to replace "unknowns"):
***************************************************************
1. We have a plausible theoretical hypothesis, whereby increased atmospheric CO2, through its IR absorption capability and the greenhouse effect, could have been a significant root cause of the warming observed since around 1850.
2. We can measure changes in direct solar irradiance and can conclude from these measurements that these changes cannot have been a major cause of past long-term warming.
3. We can exclude such things as volcanoes, short-term cyclical variability, etc. as their input is not long-term.
4. We do not believe that other anthropogenic factors, such as land use changes, urbanization, poor siting, relocations or shutdowns of weather stations have contributed significantly to the observed global warming signal.
5. We are unable to explain the observed ~30-year warming/cooling cycles in the observed long-term record.
6. Although we realize that they may very well exist, we are not aware of any longer-term cyclical factors, which may have been responsible for much of the observed past warming.
7. We are unable to explain the current pause in warming despite unabated CO2 emissions and CO2 concentrations reaching all-time records.
8. We realize that there are still many unknowns relating to our climate, but are unaware of any other forces that could have been a major contributor to the observed past warming trend.
9. But despite the many unknowns today in 5 through 8 we conclude that CO2 has been the primary cause of the observed past warming.
***********************************************************************
Does anyone see any flaws in this logic?
Max
I don’t really believe Muller can see it. I don’t believe moshe wants to see it. Neither of them care to address how cold we would now be without AnthroCO2 if attribution to man is as high as Muller ‘wants’ it to be.
=======================
kim, I think we would be at least 0.9 degrees cooler. I would take the 1910 temperature when the sun was about as inactive as now and subtract a tenth for CO2 up to 1910. This gives 0.9 degrees. Would you agree?
Here's a stab at attribution. The century temperature change was 0.7 C. The expected change for a (low transient) 2 C per doubling sensitivity (290 ppm to 370 ppm) is about 0.7 C. Attribution = 100%(!). Hmmm. Where did the skeptics accepting numbers near 2 C per doubling go wrong? It's a puzzler for them to figure out where they stand on attribution now. This is disregarding aerosols, the sun and other GHGs, which may add to or subtract from the 0.7 C.
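For anyone checking that arithmetic, the standard logarithmic relation gives roughly the quoted number:

import math
S = 2.0                                  # assumed transient sensitivity, C per doubling
dT = S * math.log(370.0 / 290.0) / math.log(2.0)
print(round(dT, 2))                      # ~0.70 C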
JimD, “Where did the skeptics accepting numbers near 2 C per doubling go wrong?”
By not considering ocean lags. With the Maunder and Dalton Minimums plus heavy volcanic activity during about 400 years of Little Ice Age, the oceans' battery was running a touch low. Recharging the oceans with a less-than-1 Wm-2 setting on the trickle charger takes a while. That is why there are longer-term pseudo-cycles like 1220, 1070, 400, 208, 150 and 108 yrs, with lots of combinations that need to be considered, plus the 4000, 5000 and 5700 year recurrences associated with precession and obliquity. CO2, land use and black carbon just bumped up the charging rate a little.
http://link.springer.com/article/10.1007%2Fs11434-012-5576-2
There are likely to be a lot more papers like that at newsstands near you.
Of course you can always reduce the problem to one variable and pretend you have a clue.
http://redneckphysics.blogspot.com/2013/08/the-madness-of-reductionist-methods.html
captd, put it this way. Say the sensitivity is 2 C per doubling and CO2 goes from 290 ppm to 370 ppm. What warming do you get relative to what happened in the 20th century? How much is there left to explain? (Clue: I gave the answer above).
JimD, the 0.7 to 0.9 was the estimated total impact. So you need a longer term reference to determine what “normal” should be for some longer time period. Using the Oppo IPWP with BEST CO2/volcanic forcing to compare to the ERSST3 and GISS global data I get a range.
http://redneckphysics.blogspot.com/2013/08/is-picture-worth-trillion-dollars.html
1.6 C per doubling is the best general fit and 0.8 C is the best low sensitivity fit. 3C fits only the instrumental.
captd, you can stretch the numbers as much as you like, the CO2 attribution is in the vicinity of 100%. The IPCC’s “very likely most” was too conservative. It only leaves about a tenth of a degree to explain by other factors and internal variation. A tenth or two is about right for internal variation, so it is no surprise.
JimD, “captd, you can stretch the numbers as much as you like, the CO2 attribution is in the vicinity of 100%.”
That would be perfection. Assuming perfection is how you generate false positives, irrelevant fat tails and false hopes. A 30 to 40 percent efficiency is a more down to Earth estimate. Since the response curve to natural warming is almost indistinguishable, it is easy to confuse what is causing what, but when you assume 100% efficiency you are wrong “most” of the time.
captd, not perfection just mathematics. What do we mean by sensitivity? I thought everyone agreed. Given that, you get 100%, and that’s just the CO2 part. The other parts add and subtract and apparently cancel each other, but each one is not assumed to be zero. Take CO2 away, leave in the other parts, aerosols, sun, other GHGs, internal variability, and you would have no warming. That is what 100% means.
JimD, “What do we mean by sensitivity? I thought everyone agreed”
Everyone is also perfection. Most agree that a doubling of CO2 will produce 3.7 Wm-2 of additional atmospheric forcing. At the actual surface, that forcing could produce up to 1.5 C of temperature increase with no feedback, "all things remaining equal." The best estimate of the "benchmark" no-feedback sensitivity is 3.3 Wm-2 per degree, or 1.12 C per doubling. Since the average DWLR is ~334 Wm-2 and is a result of current atmospheric forcing, an additional 3.7 Wm-2 would produce ~0.8C of "surface" temperature increase. The "surface," though, is actually the atmospheric boundary layer, or the point where water vapor begins to condense. Water vapor condenses at a relatively stable temperature of 0C, or 316 Wm-2 effective energy. That limits the actual "surface" impact, since increased surface temperature increases evaporation where water is present, producing a "surface" cooling and more energy at the atmospheric boundary layer with no temperature change. CO2 forcing would at that point increase upper-level convection. That is a twofer negative feedback without even considering cloud albedo.
If you don't consider the negative water vapor "surface" feedback, then you have ~1.6C per 3.7 Wm-2 of forcing, as estimated by Callendar with a fixed RH and no clue what the average DWLR was.
If you want to do a double check, the "average" temperature of the deep oceans is 4C, ~334 Wm-2, the same as DWLR, and the average temperature of the stratopause is 0C, 316 Wm-2 effective energy. You can even check the "sensitivity" of the stratopause to solar forcing and find that it takes ~5.4 Wm-2 to produce 1 C of change; that would be a "sensitivity" of ~0.8C per 3.7 Wm-2 of atmospheric forcing.
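If the 316 and 334 Wm-2 figures are meant as blackbody equivalents of 0C and ~4C, and the ~0.8C comes from the local slope of the Stefan-Boltzmann law at 334 Wm-2, a quick check under that assumption looks like this:

import math
sigma = 5.67e-8                          # Stefan-Boltzmann constant, W m-2 K-4
T = lambda F: (F / sigma) ** 0.25        # blackbody temperature for a flux F

print(T(316) - 273.15)                   # ~0 C
print(T(334) - 273.15)                   # ~4 C
print(T(334) / (4 * 334) * 3.7)          # dT = (T / 4F) * dF, roughly 0.77 C per 3.7 W m-2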
Without drinking the Hansen Kool-Aid, you can't get over 2 C unless you invoke the water vapor unicorns.
Now that does not mean that black carbon, land use and "other" pollutants can't be contributing, just that atmospheric forcing is limited due to the specific heat capacity of the atmosphere. No mo mass, no mo warming.
captd, now you are just inventing your own theory again. 2 C accounts for a partial water vapor feedback, and is a transient sensitivity because there would still be imbalance (or pipeline) energy at the end of the period (perhaps a third of it depending how fast the forcing changed). It looked like you denied water vapor feedback, and that is fine for you to do. It puts you in a certain category with some others. You have to figure out how a warmer ocean doesn’t have more vapor over it, but that’s just a problem of phase equilibrium thermodynamics, so it shouldn’t bother you too much.
JimD, Nope, I am not inventing anything.
http://climateaudit.org/2013/07/26/guy-callendar-vs-the-gcms/
There is a good discussion of Guy Callendar there. I looked at Arrhenius' paper and compared the latitude bands he predicted with observation. Arrhenius' second estimate, 1.6 (2.1 with water vapor), agrees with Callendar. The only thing I calculated was the sensitivity of the "average" oceans. With the current rate of uptake, 0.8C would take ~316 years.
You can also compare the stratosphere response to SST and even OHC, and notice that the reduced rate of stratospheric cooling has a classic approach shape to a fully charged condition, which agrees with the Oppo 2009 mean for the past 2000 years. The Oppo reconstruction indicates that ocean warming started circa 1700, and the slope agrees pretty well with 0.8C per ~316 years if you allow for volcanic forcing.
There is a growing wealth of information indicating that 1.6C or less is the combined AGW target with virtually zero pipeline.
You realize that Callendar kept water vapor fixed in his calculation (he was a steam engineer not a scientist and overlooked things Arrhenius had considered several decades earlier). He detected global warming in the ’30’s, and was probably the first to do so, but it was more than he expected from his theory.
It seems very likely to me that we will reach atmospheric CO2 levels of around 650 ppmv by 2100.
Let me ‘splain:
This is based on no global climate initiatives being implemented, no new competitive "renewable" fuel, no breakthrough on competitive automotive batteries, no all-out switch from gas or coal to nuclear for new power plants (as Peter Lang proposes), but just BaU, with global population growing to 10.8 billion by 2100 (latest UN estimate), China and India continuing to grow their economies, the poorest nations also improving their quality of life with an energy infrastructure largely based on low-cost fossil fuels, and overall per capita CO2 generation increasing by 30% from today to 2100 as a result.
(Peter Lang’s proposal could reduce this by 60 to 80 ppmv.)
Using the estimates for 2xCO2 climate sensitivity from latest observation-based studies (1.8ºC), this would result in future global warming of around 1.3ºC.
If we use the previous model-based estimates from IPCC AR4 (3.2 ºC), this would result in future global warming of around 2.3ºC.
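Both numbers follow from the standard logarithmic relation, assuming roughly 395 ppmv today:

import math

def warming(sensitivity, c_future, c_now=395.0):
    # Warming from c_now to c_future for a given 2xCO2 sensitivity (deg C).
    return sensitivity * math.log(c_future / c_now) / math.log(2.0)

print(round(warming(1.8, 650.0), 1))  # ~1.3 C
print(round(warming(3.2, 650.0), 1))  # ~2.3 C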
In all honesty, this does not seem to be anything to get too excited about, especially since it seems likely that there will be some switch to nuclear and other new cost-competitive alternates will most likely be developed, so that the CO2 warming by 2100 will most likely be lower than projected.
Like kim, I’d be more concerned about a long-term natural cooling cycle that exceeds the ability of CO2 to counteract it.
Max
Max_OK wants to go back to the LIA.
(Not me.)
Brrrrr!
I like it as it is now – maybe a degree or so warmer would be even nicer.
Max_CH
Jim D | August 18, 2013 at 11:40 pm |
“You realize that Callendar kept water vapor fixed in his calculation (he was a steam engineer not a scientist and overlooked things Arrhenius had considered several decades earlier). He detected global warming in the ’30′s, and was probably the first to do so, but it was more than he expected from his theory.”
The water vapor feedback is very close to what the steam engineer expected. Arrhenius grossly overestimated the tropics and oceans. Where are the biggest model misses?
The steam engineer would be the “Guy” grounded in the reality that in an open system steam doesn’t get hotter. You have to add pressure to improve steam quality. Without adding mass to the atmosphere to increase the effective pressure, the temperature of the steam remains the same and the temperature of the condensate remains the same.
So Callandar mentions that most of the warming should be in the northern hemisphere, where there is snow and ice on the ground, and that warming would likely be beneficial.
This is where the Climate Etc. resident experts and I totally disagree. I, being an HVAC kinda guy, look at the psychrometric charts and consider the volume below the atmospheric boundary layer separately from the radiant physics of the upper atmosphere, where CO2 has its major impact. It is a divide-and-conquer strategy where the physics inside the moist air envelope are well understood. Then I only have to keep track of the energy exchanged with the surroundings. That can be Gibbs or Helmholtz energy if you are into semantics, or heat loss/gain if you are a simple HVAC or steam engineer kinda guy. It is one of the first real laws of thermodynamics: Keep It Simple, Stupid (KISS). Then my moist air envelope is my Frame Of Reference (FOR), the second real law of thermodynamics. All I have to do is remember the third real law of thermodynamics: don't ASS-U-ME.
You ASS-U-ME your ASS off. Try simple Carnot engines to get MAXIMUM limits before assuming you have discovered perpetual warming.
Jim D’s “stab at attribution” : CO2 and temp went up. Therefore the former caused the latter.
Don’t snigger, he’s only aping how 97% of climate scientists argue.
ya the flaw would be in your mind reading.
Well, would you care to express, you know, the output of your mind, how cold we would now be without man’s CO2 if attribution to man is as high as Muller ‘apparently wants’ it to be. You can sneer all you like, but you cannot dodge around this point.
================
Steven Mosher
Reads like a cop-out to me, Mosh.
Just admit that “unicorns” can be real when you don’t know all there is to know.
(Judith calls it “uncertainty” – you call it “unicorns”).
The truth of the matter, Mosh, is that neither you nor the good Dr. Muller KNOW whether or not CO2 has been the major driver of past warming. You just ASS-U-ME this based on the limited knowledge you think you have, by ignoring the data points out there, which do not corroborate that which you ASS-U-ME.
Max
He and Muller are trying to find a foundation strong enough to support a house of cards. I wish them luck, but seek architects elsewhere.
=============
Sure I can dodge, kim.
There is one thing I don't like about Hansen talking about taxes: he's not an expert on them. It's not that it's politics, it's that he's got no standing.
So, you wanna talk about temperature, UHI, the logic of arguments?
I'll help you. I'll talk a little about GCMs, less about paleo, a lot about climategate, some about sensitivity.
Now you want me to hold court on how much CO2 has contributed.
Sheesh, a man has to know his limits. Ballpark? Well, if it is 1C warmer now than in the 1850s, I'd guess 0.5C plus or minus, give or take, pulling a number out of my butt without looking.
Probably wrong, but in my mind I'm just not interested in that question.
I'd rather know what kind of UHI 10,000 people cause.
Steven, when you can fly your AGW models like a B-1, doing 600 knots at 150 feet without pancaking into that ridge right in front of us, I would have to change my mind. I think you would agree, these weather models are not there yet. The price tags do seem similar.
‘Just admit that “unicorns” can be real when you don’t know all there is to know.”
Of course unicorns can be real. Of course there are unknowns. Of course.
“The truth of the matter, Mosh, is that neither you nor the good Dr. Muller KNOW whether or not CO2 has been the major driver of past warming. ”
I can only speak for myself.
I do not know that CO2 has been a major driver. I don't know that the world exists. I don't know that you exist. I am fairly certain that 2+2 = 4. Or rather, I find that by accepting that 2+2=4 I am able to do more things. But, given the inclination, I could doubt that 2+2=4. I could and have doubted that the world exists. And that you exist. But in the end this skepticism could not be lived.
What we have is what we always have. We have an explanation, a story, an account that tries to make sense of everything we do know, understanding that someday a unicorn may show up. But I cannot do anything with the mere possibility that it might be unicorns.
Let's take an example: the neutrino. When it was first proposed it was a unicorn, a thing that would explain some problems IF it existed.
So there is a history of proposing unicorns, BUT with this difference:
people who propose that it might be something else actually tell you what a unicorn LOOKS LIKE.
So with the climate record: you say it might be something other than GHGs. Fine. Unicorns. What exactly does this unicorn look like? That way I can go look for it.
In our meetings somebody suggested the sun could be the cause.
OK, what does that look like? He says, "Look at TSI." OK, we go look for that unicorn. Nope, didn't work.
You do get to believe in unicorns, but to be practical and engage in science you have to describe what folks should look for. What physical cause?
Kim said ” Neither of them care to address how cold we would now be without AnthroCO2 if attribution to man is as high as Muller ‘wants’ it to be.”
____
Temperature would be like it used to be, which was just about right. I never hear old timers claiming it used to be too hot or too cold. Your problem, kim, may be poor circulation. I recommend you invest in thermal underwear, wool socks (Merino is best), and mittens.
moshe minimizes the temperature rise since the LIA, pulls attribution out of his hat, and then divides by half. Please, moshe, get real; it’s real important.
Muller attributes all of the warming since the LIA to HumanGHGs, and here we are beginning to understand the limits of how much CO2 man is likely to be able to put in the atmosphere, temperature rise flattening despite rising CO2, and facing imminent global cooling of unknown magnitude or duration. It’s real important.
lowlot reminds us that he has no idea what the ‘good ol’ days’ were like.
===========
‘moshe minimizes the temperature rise since the LIA, pulls attribution out of his hat, and then divides by half. Please, moshe, get real; it’s real important.”
Attribution is unimportant to me. Very simply, I am not consumed with the question of "how much of the warming from 1700-2013 is CO2-caused." I think that is not very interesting, although many people think it important.
Whether it is 25% or 50% or 75% or 100% doesn't really change my views on the issues that matter to me.
Like so: Mosh what do you think is important?
Mosh: small town UHI.
kim: why?
Mosh: cause it fascinates me and keeps my mind busy.
kim: what about the future of the planet?
Mosh: what planet? I'm focused on this grain of sand; it's all the planet I need. Now quit bugging me.
kim: what about our troubles?
Mosh: sorry, I’m kinda consumed by this grain of sand.
kim: what about the mails?
Mosh: been there done that. I have a new grain of sand
kim: but look at what I'm interested in?
Mosh: nope, I got OCD for other problems.
So basically I'm able to move from field to field because I like to focus. This means that most days I forget to tie my shoes or make sure my socks match. As a kid they called me the absent-minded professor. Meh. I was just focused on other shit. You really don't want to talk to me when my mind is focused elsewhere; I'll say random shit to make you go away.
So check back with me later
Muller's level of confidence regardin' the human element in
global warming, given that temperatures risin’ in parallel with
CO2 emissions … fer 15 years …er … jest …haven’t… seems
kinda strange. So can someone maybe tell me on what Muller’s confidence is based?
Confused serf.
Beth, there are some people whose daughter’s papers I will no longer read. moshe, shy boy that he is, can’t talk to Muller about attribution even though the elephant has left huge tracks through the living room.
Muller’s absurd bias about attribution will ultimately color his whole effort, and that of moshe himself. The grain of sand moshe inspects is at the bottom of a deep hole.
==============
moshe, if Muller’s attribution is correct, then it means we would not have recovered naturally from perhaps the lowest depths of the Holocene, as we have previously done throughout the Holocene. You should overcome your shyness and ask Muller what that means.
==========================
The oceans are a store of solar energy that moderate global climate. The difference in surface temperature over land and ocean is the result of upwelling and reduced evaporation – lack of water – over land.
This schematic tells the story of the relative influence of ocean and land – about 90% ocean.
http://s1114.photobucket.com/user/Chief_Hydrologist/media/DIETMARDOMMENGET_zps939fe12e.png.html?sort=3&o=10
http://journals.ametsoc.org/doi/full/10.1175/2009JCLI2778.1
The land surface temperature rise responds to drought and wind speeds and the ocean/land differential isn’t seen in the troposphere – for obvious reasons.
So a dataset that focuses on surface land temperature – no matter how well kriged – seems utterly inconsequential. Using it to infer attribution is utter nonsense.
Unquantified unicorns? Show it isn’t real.
http://s1114.photobucket.com/user/Chief_Hydrologist/media/Loeb2011-Fig1.png.html?sort=3&o=53
http://s1114.photobucket.com/user/Chief_Hydrologist/media/WongFig2-1.jpg.html?sort=3&o=48
http://s1114.photobucket.com/user/Chief_Hydrologist/media/cloud_palleandLaken2013_zps73c516f9.png.html?sort=3&o=16
My interpretation is that people are so far down the rabbit hole they can see stars at midday. Do we like detail? Of course – but it needs to be put in a proper context. This all seems like climate trivia in great detail rather than the meta-analysis that is the true strength of natural philosophy.
Here is the troposphere – land and global.
http://www.woodfortrees.org/plot/rss-land/plot/rss
Yes, he left the year 2000 out of the OLS. I would like to think it was because he didn’t know how to use woodfortrees correctly. However, if he
left 2000 out on purpose because he liked the chart better that way, he should have explained why.
Before you get carried away with the labeling of observations, just remember observations aren’t explanations.
I would like to think that it is about as relevant as you get. Utterly not so. Oh – I do think that.
‘This paper provides an update to an earlier work that showed specific changes in the aggregate time evolution of major Northern Hemispheric atmospheric and oceanic modes of variability serve as a harbinger of climate shifts. Specifically, when the major modes of Northern Hemisphere climate variability are synchronized, or resonate, and the coupling between those modes simultaneously increases, the climate system appears to be
thrown into a new state, marked by a break in the global mean temperature trend and in the character of El Niño/Southern Oscillation variability. Here, a new and improved means to quantify the coupling between climate modes confirms that another synchronization of these modes, followed by an increase in coupling occurred in 2001/02. This suggests that a break in the global mean temperature trend from the consistent warming over the 1976/77–2001/02 period may have occurred…
If as suggested here, a dynamically driven climate shift has occurred, the duration of similar shifts during the 20th century suggests the new global mean temperature trend may persist for several decades. Of course, it is purely speculative to presume that the global mean temperature will remain near current levels for such an extended period of time. Moreover, we caution that the shifts described here are presumably superimposed upon a long term warming trend due to anthropogenic forcing. However, the nature of these past shifts in climate state suggests the possibility of near constant temperature lasting a decade or more into the future must at least be entertained.’ Citation: Swanson, K. L., and A. A. Tsonis (2009), Has the climate recently shifted?, Geophys. Res. Lett., 36, L06711, doi:10.1029/2008GL037022.
It is just that you are an utter twit and can't begin to understand the explanation.
OK, Chief
Explanation for recent temperature plateau: cooling influences offset warming influences.
Explanation for temperature of my shower: cool water moderates hot water.
Explanation for taste of my coffee: sugar compensates for bitterness.
Well, Chief, I suppose these are explanations, but I hope science isn’t stuck at this level of explanation.
Tsonis posits a change in cloud associated with changes in ocean and atmospheric circulation.
Funny about that – http://s1114.photobucket.com/user/Chief_Hydrologist/media/cloud_palleandlaken2013_zps3c92a9fc.png.html?sort=3&o=15
The ocean and atmospheric patterns have been observed in proxies for millennia.
Here’s another study for you to not read or understand – http://heartland.org/sites/all/modules/custom/heartland_migration/files/pdfs/21743.pdf
Reminds me of what stock market chartists do, because it’s based on the notion history repeats itself with regularity. I guess it’s better than reading tea leaves, but not much.
I was reading 10 top investing tips yesterday. One was to use new chart tricks because everyone already knows the old ones.
The lack of warming and clouds ain’t good enough for you?
‘Unlike El Niño and La Niña, which may occur every 3 to 7 years and last from 6 to 18 months, the PDO can remain in the same phase for 20 to 30 years. The shift in the PDO can have significant implications for global climate, affecting Pacific and Atlantic hurricane activity, droughts and flooding around the Pacific basin, the productivity of marine ecosystems, and global land temperature patterns. This multi-year Pacific Decadal Oscillation ‘cool’ trend can intensify La Niña or diminish El Niño impacts around the Pacific basin,” said Bill Patzert, an oceanographer and climatologist at NASA’s Jet Propulsion Laboratory, Pasadena, Calif. “The persistence of this large-scale pattern [in 2008] tells us there is much more than an isolated La Niña occurring in the Pacific Ocean.”
http://earthobservatory.nasa.gov/IOTD/view.php?id=8703
The fact that we are in a cool decadal mode, and clouds and temperature are doing exactly as hypothesized, is not good enough? You are a waste of time, nincompoop Max.
Looks like the troposphere warmed for two decades and has cooled since 2001
http://www.woodfortrees.org/plot/rss-land/plot/rss/plot/rss/from:1979/to:2000/trend/plot/rss/from:2001/trend
Max_CH, if you hadn’t left out a year, your “cooling” would be puny if it happened at all.
The most incredible thing happened in climate in 1998/2001. Climate spontaneously reorganized into a new – and cooler – configuration. Understanding of the fundamental significance of this climate shift is still emerging.
Climate had previously shifted in 1976/77 – to a warmer mode characterized by a warm PDO and more frequent and intense El Nino. We were aware of this – it was widely known as the Great Pacific Climate Shift. It was speculated that the 1976/77 shift was caused by global warming and that this was a new and permanent state of the Pacific Ocean. The shift back to cooler conditions reveals underlying climate dynamics and transforms those notions into a new paradigm of how climate works.
The old paradigm applies inappropriate methodologies of reductionism to a dynamically complex system. The new paradigm allows progress to a state where the source of uncertainty is known and the new methods of complexity science can be applied.
e.g. http://www.ucl.ac.uk/~ucess21/00%20Thompson2010%20off%20JS%20web.pdf
These multi-decadal modes last for 20 to 40 years in the proxy records.
My comment was about Max_CH not knowing how to use woodfortrees, not about the silly notion global temperature change is a function of time.
There are plenty of numbskulls around Max – but you are as dumb as they get.
Chief, If you don’t know the error Max_CH is making, like him, you should be quiet until you figure it out.
You figure the to and from is an error? Or that this is the most critical aspect of the climate shift in 1998/2001?
Seriously – you are a twit with bells on.
Max_OK
The cooling started January 2001 and has been going strong ever since.
Rejoice!
But don’t forget to pack your woolies – winter’s coming on soon (even in Okieland!)
Max_CH
Warming has sped up since 2008
http://www.woodfortrees.org/plot/rss-land/plot/rss/plot/rss/from:1979/to:2008/trend/plot/rss/from:2008/trend
No cooling here.
Interesting, UAH shows it too
http://www.woodfortrees.org/plot/uah/plot/uah/from:1979/to:2008/trend/plot/uah/from:2008/trend
0.15C/decade warming up until 2008.
After 2008 warming is now 0.24C/decade.
Perhaps a kind climate skeptic could explain where I have erred in my methodology.
Should I pick a different year than 2008? Say 1998? Now why should I do that?
As the climate shifted in 1998/2001 – not merely temperature but ocean and atmospheric circulation – a coherent picture with a working hypothesis – a true, testable hypothesis – emerges, and not just random nonsense contaminated by short-term variability from the largest source of interannual variability.
‘Natural, large-scale climate patterns like the PDO and El Niño-La Niña are superimposed on global warming caused by increasing concentrations of greenhouse gases and landscape changes like deforestation. According to Josh Willis, JPL oceanographer and climate scientist, “these natural climate phenomena can sometimes hide global warming caused by human activities. Or they can have the opposite effect of accentuating it.” http://earthobservatory.nasa.gov/IOTD/view.php?id=8703
The hypothesis suggests no warming – or even cooling – for 20 to 40 years from 2002. How’s that working out ya think?
lolwot, maybe you have caught one of those shifts in the making.
What’s wrong with your methodology? You have none. You’re just playing around with toys, like a toddler playing with sharp knives. Stop messing around with OLS, because it tells you nothing and just serves to make you overconfident in your own biases.
lolwot
In case you missed the news, 2008 was a cold year in comparison with other recent years since 2001.
The cooling trend actually started in 2001 and is still going strong.
Rejoice!
Max
Phatboy complains I have no methodology, but I am using the same "methodology" climate skeptics use. I guess Phatboy is just being tribal, attacking my use of the methodology while giving his pals a free pass whenever they do the same thing.
Manacker happened to pick 2001 for his graph. I picked 2008. Apart from that the methodology is identical. I’ve seen some skeptics start the trend line at 1995. Others at 1997. Others at 1998. Here everyone is picking 2001 (although dr dunderhead picks both 1998 and 2001 confusingly). I’ve also seen skeptics pick 2003 and 2005.
I am not pretending that the methodology is solid. In fact I am holding a mirror up to you guys. If you want to use sucky methodology then I'll be here to show you what the same methodology "shows" if you pick 2008 as your year.
And you can hardly argue the period since 2008 is too short, as you’ve already thrown the “period is too short” argument under the bus.
See it’s a mirror.
manacker argues: “In case you missed the news, 2008 was a cold year in comparison with other recent years since 2001. The cooling trend actually started in 2001 and is still going strong.”
Yet 2001 is a warm year in comparison to years since 1994. See, a mirror. Any complaint you raise against my start point can equally apply to your own.
Dr Dunderhead writes: “As the climate shifted in 1998/2001”
This is adding an epicycle. I’ve shown the trend methodology is junk by providing a contradicting example that climate skeptics don’t like. Dr Dunderhead is trying to make his graph a special case by appealing to information that is outside of the graph.
I am happy for you to try and find out why my graph doesn’t show an acceleration to warming. If you can understand why it doesn’t then you’ll realize why your graphs don’t show cooling or a pause. They are all working off the same misuse of trend lines.
lolwot | August 18, 2013 at 7:52 pm
Warming has sped up since 2008
So the climate establishment, foursquare as they stand behind the CAGW position, and understandably agonising over the Pause, have not noticed this crucial fact – yet you have ??
I've lost count of the number of times I've argued the same point with people on both sides of the divide.
But you do go on and on and on about it – it really seems as though you’re convinced that an OLS trend line tells you something about what the data are doing – it doesn’t.
Firstly, it's extremely risky to apply OLS to any time series because, as has been demonstrated ad nauseam, you can make the lines go in any direction you wish by simply changing the endpoints.
Secondly, it's seldom necessary, as, in a time series, it's normally easy to see what the data are doing just by eyeballing them.
Thirdly, if you absolutely have to use OLS, don’t imagine that the resulting straight line somehow represents the data – it doesn’t.
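To make the endpoint-sensitivity point concrete, here is a minimal sketch on purely synthetic data (a modest trend plus noise, standing in for none of the real series); the start years are the ones quoted in this thread:

import numpy as np

# Synthetic monthly "anomalies": a modest warming trend plus noise.
rng = np.random.default_rng(0)
years = np.arange(1979, 2014, 1 / 12.0)
series = 0.015 * (years - 1979) + rng.normal(0.0, 0.15, years.size)

def ols_trend(t, y, start_year):
    """OLS slope, in C per decade, using only data from start_year onward."""
    mask = t >= start_year
    slope, _ = np.polyfit(t[mask], y[mask], 1)
    return 10.0 * slope

# The start years people keep arguing over in this thread.
for start in (1995, 1998, 2001, 2005, 2008):
    print(f"trend from {start}: {ols_trend(years, series, start):+.2f} C/decade")

With noise of this size the short-window slopes bounce around, and can even change sign, depending only on the chosen start year; that is the whole "mirror" argument in one script.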
“But you do go on and on and on about it”
And I’ll continue going on about it. In every thread. So long as skeptics keep using that methodology, I’ll use it too to show what it shows post 2008.
Dr Dunderhead writes: “As the climate shifted in 1998/2001″
This is adding an epicycle. I’ve shown the trend methodology is junk by providing a contradicting example that climate skeptics don’t like. Dr Dunderhead is trying to make his graph a special case by appealing to information that is outside of the graph.
Not even close. The ‘shift’ is an objective reality in ocean and atmosphere indices. It has happened before – many times. It is not cyclic but chaotic bifurcation and the difference is important. This is mainstream climate science that explains the surface temperature data better and provides a testable hypothesis – no warming or even cooling for a decade to three more. Explaining and predicting in the true scientific method – not trendology at all.
I want to thank Steven Mosher for his contribution to climate science. I trust he will continue to be as productive as he’s been in the past.
Max_OK
I’ll second that.
Mosh is productive, logical and is contributing to the overall knowledge on climate through his work.
As a self-proclaimed "luke-warmer", he's like many of us here: he agrees that there could be some warming from CO2 and knows what the theory says, but is just not sure how much there has really been in the past or will be in the future.
Who is sure?
We can all make estimates or WAGs, but nobody really knows.
Max_CH
Ok guys, I have to duck out for a bit. My HD crashed Saturday AM so I had to go buy a new computer. I need to get back to producing some datasets.
I’ll check back later..
The plot of temperature vs log([CO2]) from Law Dome/Keeling from 1832 to 2012 gives a spectacular fit, r² = 0.8, and gives warming over the last 180 years of 1.7 C, with 1.7 C to come at 560 ppm, due to a North American CE of 3.4 C per doubling. The fit is poorest in the 1920-1960 period, as there appears to be AMO in the signal.
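For anyone who wants to try the kind of fit DocMartyn describes, here is a minimal sketch of regressing a temperature series on log2(CO2), so the slope reads directly in degrees per doubling; the arrays below are hypothetical placeholders, not the Law Dome/Keeling or BEST data:

import numpy as np

# Hypothetical annual values; substitute real CO2 (ppm) and temperature (C anomaly) series.
co2 = np.array([285.0, 300.0, 315.0, 340.0, 370.0, 395.0])
temp = np.array([-0.3, -0.2, 0.0, 0.2, 0.5, 0.7])

x = np.log2(co2)                           # log base 2, so the slope is C per CO2 doubling
slope, intercept = np.polyfit(x, temp, 1)
pred = slope * x + intercept
r2 = 1.0 - np.sum((temp - pred) ** 2) / np.sum((temp - temp.mean()) ** 2)

print(f"fitted sensitivity: {slope:.2f} C per doubling, r^2 = {r2:.2f}")
print(f"implied further warming at 560 ppm: {slope * np.log2(560.0 / co2[-1]):.2f} C")

Whether such a fit means anything physically is exactly what the AMO caveat in the comment is about; the regression itself is a two-liner.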
How much of a coastal phenomenon we are seeing I am unsure. I think you have the lineshape correct, but the amplitude is what it is all about.
Coasts are still effin me up. It is not easy to 'remove' the effect, but you can definitely see that the larger divergences in expectation from the field happen along coasts. The other area is places where there are winter-season inversions, which eff with the lapse rate estimates.
I thought I solved it.. then nope
“I thought I solved it.. then nope”
It was ever thus, I hypothesized from first principles, based on known pathways and rates of enzymes, what effects fructose would have on cells with either fructokinase or hexokinase; I was completely wrong.
Unlike you I can actually do the damned experiments.
A 150-mile band of stations along the east/west axis would be easier to examine than the whole CONUS.
The volcanoes would also give you internal controls for thermal buffering.
Not easy.
DocMartyn, “It was ever thus, I hypothesized from first principles, based on known pathways and rates of enzymes, what effects fructose would have on cells with either fructokinase or hexokinase; I was completely wrong.”
Yep, I thought I had discovered the perfect diet, Scotch and Salad Diet A/K/A, ketosis or cirrhosis, the fat melts away nearly as fast as relationships :)
What he really means is it's time for dinner, then a sit-down at the TV for the next episode of "Breaking Bad."
If their data was reliable in the past, why bother improving it? If it wasn't reliable, let's admit it!
Always work to improve data.
Data is a record of what is really happening.
People cheat and create hockey sticks, but when you have actual data you can remove a hockey stick from an IPCC Report that followed the IPCC Report that included the hockey stick. THAT IS REAL POWER!
The other Computer Output is based on Theories and Models that have produced forecasts that have been wrong for decades.
Does anyone happen to have (or know where to find) a copy of the previous BEST temperature series? I know this is at least the third. I think it’d be interesting to compare them to each other, but I don’t think I saved copies of them.
woodfortrees have not updated their BEST yet. I was hoping this new one would go in there and remain up to date like CRUTEM. Their current BEST goes to 2010 only.
I know. Unfortunately, that’s a single series. The results file I want has multiple series. Specifically, it has uncertainties. I don’t believe WFT has that.
Also, there’s at least one intermediary results file between the preliminary one and the current one.
The preliminary data set is still on the site in text format, bottom of the data page.
Thanks, but that’s the preliminary data set, not the results file. I want the latter.
http://berkeleyearth.org/summary-of-findings
That doesn’t provide what I asked for, at all.
max_far from OK writes : “Temperature would be like it used to be, which was just about right.”
One of the silliest statements I've read in quite a long time, even for you.
There is massive denial when contemplating that the higher the sensitivity, the colder it would now be without man’s input. lowlot enters the realm of fantasy, and moshe wishes he could.
================
Kim, I think you have a brilliant point which really turns this thing on its head. I don’t know skeptic scientists don’t usually talk about it.
Sorry, should be *why* skeptic scientists … I can only surmise they just haven't thought about it.
I was absolutely floored when it first occurred to me, but it’s been simmering for awhile, ever since a long ago thread @ lucia’s and a graph by a ‘julio’.
===============
You folks are starting to think like Hansen. He suggests 350 ppm is the ideal, which is not as cold as it used to be and not as warm as it is getting.
I know why the denial; the implications are horrifying. Far better for us that climate sensitivity turns out low, and we’ve mostly been at the mercy of natural forces.
===============
Interestingly we blew past Hansen’s 350 ppm in 1988 (a big year for him).
kim,
The Warmists know precisely how much colder it would have been. 33C is due to “the greenhouse effect”. Therefore, the “real” temperature without CO2 warming is something around 255K as opposed to around 288K at present.
It’s no surprise that people actually believe that the world has “warmed”. Just consider all the other things that people who otherwise appeared rational and intelligent have believed. Fill in your own examples.
Intelligence and education are no barrier to gullibility.
I spit on your global warming. So there!
Live well and prosper,
Mike Flynn.
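For reference, the 255 K and 288 K figures Mike Flynn quotes come from the standard back-of-envelope calculation; a minimal sketch with the usual textbook values for the solar constant and albedo:

# Effective radiating temperature vs observed mean surface temperature.
# Textbook values; the ~33 C gap is the conventional "greenhouse effect" figure.
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0          # solar constant, W m^-2
ALBEDO = 0.30        # planetary albedo

absorbed = S0 * (1.0 - ALBEDO) / 4.0      # absorbed sunlight averaged over the sphere, W m^-2
t_eff = (absorbed / SIGMA) ** 0.25        # effective radiating temperature, K

print(f"effective temperature: {t_eff:.0f} K")                  # ~255 K
print(f"observed surface mean ~288 K, difference ~{288.0 - t_eff:.0f} K")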
pokerguy doesn’t think global temperature has been just about right. He wants to experiment with it getting warmer. But what if that experiment turns out bad, you might ask? No problem for pokerguy because, being mortal, he won’t be around by then.
Show me the harms of the last 2 deg C of warming, then show me the harms of the next 2 deg C of warming, which is about all we’ll get. Now show me the harms of 8-10 deg C of cooling, and the harm from 1 deg C of cooling on the way there.
Risks of cooling magnificently outweigh the risks of warming. It's as if the whole human race failed to do the maths.
================
Excellent work Mosher. The BEST team is the best! I would, however, like to see more use of the satellite data in verifying the outputs of the latest version of BEST. The reason being that SSTs are not as well covered by the input data sets, and roughly 70% of the Earth's surface is ocean.
Well, that is what three of us are working on, so it's all kinda in progress.
The satellite data aren’t measurements in the Jim Cripwell sense or temperatures in the thermometer sense. You should read how Spencer gets them on his blog.
http://www.drroyspencer.com/2010/01/how-the-uah-global-temperatures-are-produced/
ya its kinda funny
The satellite data would have to be the best and most stable proxy for surface temps (including the oceans), with good spatial and temporal resolution to boot, notwithstanding changes in satellites occurring from time to time. The error margin can be calculated within very high statistical confidence intervals.
Would love to see the same coverage for CO2, as Mauna Loa is too low in resolution to properly measure global CO2 anomalies IMO.
Steve:
In looking at your CONUS temperature map, what years are the baseline? The midwest and upper midwest are severely lacking in heat units, and July was one of the reasons we are lacking.
Thank you.
Camburn.
It's probably 1951-1980. Right now I'm more focused on getting all the various datasets (about 6-7 for comparison) projected into the right projection, masked off properly, and output into the forms that the analyst needs.
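For readers unfamiliar with baselines: an anomaly series is usually the monthly values minus each calendar month's mean over the reference period. A minimal sketch of that step in NumPy, with a synthetic array standing in for a gridded field (an illustration of the concept, not the Berkeley pipeline):

import numpy as np

# Synthetic stand-in for a (year, month, lat, lon) absolute-temperature field.
years = np.arange(1880, 2014)
temps = 14.0 + np.random.default_rng(1).normal(0.0, 1.0, (years.size, 12, 10, 10))

# Climatology: per-month, per-cell mean over the 1951-1980 reference period.
base = (years >= 1951) & (years <= 1980)
climatology = temps[base].mean(axis=0)

# Anomalies: subtract the climatology; broadcasting handles the year axis.
anomalies = temps - climatology

# By construction, the baseline-period anomalies average to ~0.
print(anomalies.shape, round(float(anomalies[base].mean()), 3))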
I went to look at their Graphics web site using your link. I found a number of things there I did not like. Apparently they don't take comments so I will put some comments down here. First, there are three figures, numbers 52, 53 and 54, that should not have been shown at all. They display scattered model outputs that have no coherence and are totally worthless.
Their Figure 9 has an esthetic quality that pulls you in but once you start analyzing it you realize that it is an indictment of the GCM system. These models consistently fail to match past temperature history and therefore should not be used to predict future warming. The example comes from AR4, but the previews of AR5 I have seen do not differ significantly.
Another interesting thing about this figure is its representation of recent temperature history that coincides with the satellite era. The era starts with 1980 on this graph and should include an 18-year temperature standstill at the beginning. This can be seen clearly in the satellite data set, according to which there was no warming before the super El Nino of 1998 arrived. The super El Nino brought a step warming that raised global temperature by a third of a degree Celsius and then stopped. It was followed by the present-day "pause" in warming. The Berkeley temperature graph does not have any of these features, never mind those wandering GCMs. There is no sign of the super El Nino, the highest peak of the century, nor is there any sign of the stoppage of warming in the eighties and nineties, and no sign either of the horizontal warm platform that begins in 2001.
The standstill in the eighties and nineties is covered up by the so-called "late twentieth century warming" which I had proved was a fake. I said so in my book "What Warming?" that came out in 2010, but nothing happened for two years. Then, accidentally, I discovered last fall that GISTEMP, HadCRUT, and NCDC had all stopped showing that fake warming and were rearranging their data for the eighties and nineties to parallel the satellites. It was done secretly and no explanation was given. I regard this concerted action as tantamount to an admission that they all knew the warming was false. You guys have steadfastly refused to use the satellite data, obviously because you did not like what it tells us about temperature. But now you have no excuse for not showing the temperature pause in the eighties and nineties, since it has been accepted into the ground-based data sets that you have used as sources.
And one more thing when you start re-drawing your curves: do not wipe out the El Nino peaks; they are an integral part of temperature, not noise. Their average spacing is five years, so use a resolution of less than a year, preferably a month, for these temperature curves. If you think that is too fuzzy, use a transparent overlay as I did, which does not destroy information.
“I went to look at their Graphics web site using your link. I found a number of things there I did not like. Apparently they don’t take comments so I will put some comments down here. First, there are three figures, numbers 52, 53 and 54, that should not have been shown at all. They display scattered model outputs that have no coherence and are totally worthless.”
RIF: reading is fundamental:
“The new sections include some work in progress. Most notable here is the beginning of the work we are doing on GCM comparisons to observations. Start here and explore. As this is work-in-progress, I’ll answer some general questions as best I can.”
The point of 52,53, and 54 is obvious. Others have figured it out. I will give you a clue. Look at these figures.
http://berkeleyearth.org/graphics-more
One thing you will note is that the models do not agree on the amplification at the poles. That is, there is scatter, as you astutely observed when looking at 52-54. So the scatter and the lack of coherence you see is exactly the point. This series compares observation to model. And you see the scatter in the models. That tells a story.
##########################
“Their Figure 9 has an esthetic quality that pulls you in but once you start analyzing it you realize that it is an indictment of the GCM system. These models consistently fail to match past temperature history and therefore should not be used to predict future warming. ”
1. The point of this exercise is to illustrate the match.
2. Your moralizing (they should not be used) isn't very interesting.
A smarter question would be: given the quality of the match, how can
the models be used, if at all? For example, do they give us an upper limit?
Our best guess? So, saying "should not be used" is non-pragmatic moralizing.
#################################################
As for the rest of your comment about GISS and the others, you need to talk to the Building 7 folks.
Sorry for being off topic here. Jerome Whitington writes here: http://accountingforatmosphere.wordpress.com/2013/05/10/speculation-quantification-anthropogenesis/
“Anthropogenic goes two ways: anthropogenic CO2 has re-made the climate, and now the climate promises to dictate.”
Once in a while I run across a statement that captures so much for me. Not to bring up whether he's correct, but to see the two ends of the spectrum. Some say we've done it and now the climate is some kind of threat to us. Some say, or perhaps wonder, are we really capable of doing that?
Is it non-scientific to hold a bias either way?
The full article from Whitington is here: http://www.anthropology-news.org/index.php/2013/07/09/speculation-quantification-anthropogenesis/
He brings up some interesting points. As the title says: Speculation, there is some of that; Quantification, which is what we are getting more of; and Anthropogenesis – the study of the origin and development of man. I think the last one is part of what's going on.
He also covers some background history about the road to here.
Thanks Steve, and the Berkeley team.
I know you’re constantly dealing with all the issues of confirmation/disconfirmation bias, with all the issues of crap people say about you, and all the issues of being balked both by the technology and the difficulties of getting cooperation from people who should be glad to help.
I wish you all the BEST, and promise to be so scrupulous and meticulous a critic as my narrow skills permit.
Though I’ve been saying, “Curses, foiled again!” an awful lot whenever I’ve thought I’ve found something in your BEST stuff to be skeptical about.
Possibly, the long hours of thoughtful and productive work may be eroding your blog commenting skills, the only criticism I can come up with.
Thanks. Trying to multitask is not happening. Major computer meltdown. Zero sleep. Not good.
moshe, my good man, it’s from trying to convince us that it has warmed, when the question is attribution.
=========================
“trying to convince us that it has warmed, when the question is attribution.” – koldie
But, but,……in almost every other comment you say it’s cooling.
Koldie goes warmist!!
‘has warmed’, Son. Keep an eye on the sunspots.
==============
Your mistake, Michael, was to take the nick from the naif, willard. I don’t think you have any appreciation for the amusement I derive from the irony.
Oh, now I’ve done it.
==============
Wonder, too, Michael, why the Maunder Minimum sunspots were ‘large, sparse, and primarily Southern Hemispheric’. There is a clue there.
===================
“has warmed”
Keep an eye on your tenses.
And sunspots!!
Koldie goes sun crank.
How long can the pause last? Can it last forever?
CO2's voracious appetite for energy can only be satisfied in two ways: kinetic and vibrational energy. We can forget kinetic, because it is no worse than O2 or N2 and it is less than 1% of the atmosphere. The answer has to be in the vibrational modes, of which there are many. When CO2 leaves the cylinders of your car or the furnace of the power station it is over 1,000 C – very hot – and most, if not all, of its vibrational modes will be excited. When it exits the tail pipe or chimney it is still very hot, and we would expect it to rise in the troposphere as a plume of hot gas, passing its heat to the N2 and O2 as it rises. As it rises in the troposphere (like a hot air balloon) it can more readily radiate its heat into space, because the atmosphere above is thinning. So what proportion of heat is radiated into space, instead of heating our planet? As the CO2 cools, its density increases and it will fall again; having perhaps used up all its excitation modes, it can no longer heat the planet. So this simple but apparently little understood chain of events may not be such a threat?
So this explanation of CO2's behavior in the troposphere can explain the pause. So long as the hot, new proportion of CO2 from exhaust or chimney remains below the present level, the pause will continue. Note that this new metric of CO2, if accepted, focuses not on total CO2 but on the proportion of new, hot CO2.
"CO2's voracious appetite for energy can only be satisfied in two ways: kinetic and vibrational energy. We can forget kinetic, because it is no worse than O2 or N2 and it is less than 1% of the atmosphere. The answer has to be in the vibrational modes, of which there are many. When CO2 leaves the cylinders of your car or the furnace of the power station it is over 1,000 C – very hot – and most, if not all, of its vibrational modes will be excited. When it exits the tail pipe or chimney it is still very hot, and we would expect it to rise in the troposphere as a plume of hot gas, passing its heat to the N2 and O2 as it rises. As it rises in the troposphere (like a hot air balloon) it can more readily radiate its heat into space, because the atmosphere above is thinning. So what proportion of heat is radiated into space, instead of heating our planet? As the CO2 cools, its density increases and it will fall again; having perhaps used up all its excitation modes, it can no longer heat the planet. So this simple but apparently little understood chain of events may not be such a threat?"
Molecules in the air collide with other molecules in the air within brief periods of time [nanoseconds].
A hot molecule does not rise; it collides with other molecules.
A group of molecules which are warmed crashes into other bodies of molecules, and the group of warm molecules exchanges its energy with the other groups of molecules.
A warmed group of molecules causes there to be fewer molecules in a given area [they have less density]. It is a group of air which has less density that rises, as an air packet.
But the group members do remain the same in the group.
“Let us calculate the root-mean-square velocity of oxygen molecules at room temperature, 25 °C.
…
= 482.1 m/s
A speed of 482.1 m/s is 1726 km/h, much faster than a jetliner can fly and faster than most rifle bullets.
The very high speed of gas molecules under normal room conditions would indicate that a gas molecule would travel across a room almost instantly. In fact, gas molecules do not do so. If a small sample of the very odorous (and poisonous!) gas hydrogen sulfide is released in one corner of a room, our noses will not detect it in another corner of the room for several minutes unless the air is vigorously stirred by a mechanical fan.”
http://dwb4.unl.edu/Chem/CHEM869J/CHEM869JLinks/www.compusmart.ab.ca/plambeck/che/p101/p01065.htm
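The 482 m/s figure in that quote is just the kinetic-theory root-mean-square speed, v_rms = sqrt(3RT/M). A minimal check:

import math

# Root-mean-square speed of O2 at 25 C, as quoted above: v_rms = sqrt(3RT/M).
R = 8.314        # gas constant, J mol^-1 K^-1
T = 298.15       # 25 C in kelvin
M_O2 = 0.032     # molar mass of O2, kg mol^-1

v_rms = math.sqrt(3 * R * T / M_O2)
print(f"v_rms for O2 at 25 C: {v_rms:.1f} m/s")   # ~482 m/s

The point gbaikie is making stands alongside this: despite those speeds, the mean free path is so short that individual hot molecules thermalize by collision almost immediately rather than rising on their own.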
Gbaikie: Thanks for giving thought to my proposition. It is a somewhat simplified version, because I omitted to say that a lot of N2 goes along for the ride. The air that is drawn into the combustion chamber perforce contains 80% N2, and although it plays no part in the combustion it still gets heated to over 1000 C and is part of the rising plume which eventually gives up its heat to the troposphere. But all the heat comes from the combustion of the 20% O2 with the fuel. Obviously there would be less waste heat and higher efficiency if the fuel were burnt in pure O2. In due course I will amend my theoretical paper (underlined above) to include this new material.
Get this Aussie crapola by Biggs out of here. Something in the water there as Chief has the same misguided notions of CO2 combustion ALONE heating the planet.
WHT-
“Biggs…misguided notions of CO2 combustion ALONE heating the planet.”
No. I seriously doubt that Biggs thinks that.
More along the lines of:
1) CO2 absorption bands are saturated, hence the pause.
2) Higher exhaust temperatures loft CO2 to higher altitudes where it absorbs outgoing IR it otherwise could not, hence increased CO2 warming.
Like I say, more along these lines. But of course my interpretation may be incorrect.
I wondered once about more CO2 at higher latitudes, and still wonder a little now and then.
=================
And then, and then,
Convection when?
=============
Biggs is wrong to neglect the potential energy of the molecule. Webster is wrong in just about all ways possible.
Hi Steve Mosher, about the Berkeley Earth Surface Temperature project, focusing on that figure, Berkeley Earth July 2012 Anomaly: could we say that the increase of temperatures in the northern hemisphere (which, e.g., allows summer Arctic marine shipping) is "balanced" by a decrease in southern hemisphere and Antarctic temperatures?
balanced? I’m not sure what you mean.
Sorry. Let’s try again …
B.E.S.T. claims that: “Global land temperatures have increased by 1.5 degrees C over the past 250 years”.
JC has been posting: “Has global warming stopped?”. On
http://judithcurry.com/2011/11/04/pause/ and many other posts.
My query was about whether we could understand the Earth as a planet:
(1) where temperatures have become globally constant during, at least, the last decade (in land + oceans); and
(2) where heat has been transported (convectively and/or radiatively?) from the south pole towards the north pole in such a compensating way (in such a "balance") that the southern hemisphere is now colder by the same amount that the northern hemisphere is hotter. And whether this heat transport could explain the land temperature measurements of the Berkeley Earth July 2012 anomaly figure, or whether "my" heat transport between hemispheres is too speculative.
Antonio,
see the charts I’ve pointed to folks in my article.
http://berkeleyearth.org/graphics#global-warming-since-1900-winter
and others
Figure 46 of this link is very illustrative. But it is a pity that, while MPI-ESM-MR covers global temperatures over land and oceans, B.E.S.T. only focuses on land surface temperature measurements.
It is also a pity that the temperature measurements in that figure go from 1900 to 1999 and not from 2000 to 2010 (which is what I was interested in in my previous comment: just to check JC's 'pause').
About Earth's heat transport … I understand that it is too speculative to talk about atmospheric and oceanic heat transport, about how the Sun's energy is absorbed by the oceans and how a fraction of it is then radiated …
Anyway, Steven Mosher, thanks for your outreach work.
Regards, Antonio.
Steve, do you have a comparison with satellite temp anomalies? Do you have a view on sat temps vs. land temps? (If I were a more regular reader of WUWT and Judith’s blog, I probably would already know that.)
John, that is what I’ve been working on all weekend amid repeated computer crashes. I’ve looked at them in the past, but this time we are
adding a couple of twists.
On the issue: the methods by which temperatures are inferred from a variety of sensors in space are not as open as I would like them to be. There are at least three versions of the record; all have adjustments and assumptions, and they regularly disagree with each other.
That said, I do like the idea of using them as a “limit” or boundary for the UHI effect. This too requires a couple of assumptions.
It does seem clear, however, that if a skeptic accepts the satellite record, then the possible contribution of UHI during the POR is definitely limited to below some of the wilder claims.
For example: if you believe in UAH and the trend in UAH is, say, .15 C per decade, and the land is .2 C per decade, then all bias
a) microsite
b) land use change
c) UHI
d) etc.
would have an upper limit of .05 C per decade, an effect size that might be hard to detect.
amplification also plays in this argument
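The arithmetic behind that bound is simple enough to write down. A minimal sketch, with illustrative numbers and a hypothetical amplification factor to show where that knob enters (none of these values are BEST or UAH results):

# Rough upper bound on combined land-record biases (UHI, microsite, land use)
# implied by accepting a satellite tropospheric trend, per the argument above.
sat_trend = 0.15       # C/decade, tropospheric trend over land (illustrative)
land_trend = 0.20      # C/decade, surface land trend (illustrative)
amplification = 1.1    # hypothetical tropospheric amplification over land

implied_surface = sat_trend / amplification    # surface trend consistent with the satellites
bias_upper_bound = land_trend - implied_surface

print(f"upper bound on combined biases: {bias_upper_bound:.3f} C/decade")

With amplification set to 1.0 this reduces to the .05 C per decade in the example above.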
Thanks, Steve.
I'm very far from having expertise on this issue, but in principle I like the satellite data for the amount of coverage it gets, and because it doesn't have to continually be trying to adjust individual sites for UHI and changes in location.
I understand what you say about satellite data also having assumptions; that seems inescapable.
The fact that satellite data seems to be in close agreement with radiosondes also seems to speak in favor of the satellite data.
If the satellite data constrain possible UHI effects, that doesn’t matter to me if the satellite data are the most accurate measures we can get.
I think you would probably say that we should emphasise the “if” in the previous sentence.
If you have more observations on satellite data, I’m all ears.
"I'm very far from having expertise on this issue, but in principle I like the satellite data for the amount of coverage it gets, and because it doesn't have to continually be trying to adjust individual sites for UHI and changes in location."
Huh. The whole satellite reconstruction of temperature is filled with individual adjustments: changes from satellite to satellite, changes to the position of the platform (orbital decay).
Look at the record of changes. I suppose one could also look at the differences the various groups have between each other.
John,
One of the issues with satellites is that they are not really measuring temperature, and converting their measurements into temperature (plus adjusting for any orbital decay or diurnal measurement issues) is not necessarily clear-cut. The historical adjustments dwarf those applied to the surface temperature records (after all, until the early 2000s UAH showed global cooling). That's not to say that the satellites are necessarily wrong; rather, like the surface record, they are not infallible. There has also been some reanalysis work done on satellite records that produces trends much more similar to the surface records, like Zou et al.: http://onlinelibrary.wiley.com/doi/10.1029/2005JD006798/abstract
Two methods, both subject to narrative abuse. When will we ever learn?
==========
Zeke, yes, I knew that some recalibrations had to be done in the mid 2000s because people not working with Christy or Spencer found that some of the necessary adjustments weren’t correct — was it lack of adequate correction for satellite drift? And, yes, after that was fixed, the satellite data showed warming, not cooling.
But I haven’t seen any major issues after that one, and I assume that if there were major problems, folks on the other side of the debate would have looked for and found them, as they did with the issue of drift.
That isn’t to say that there aren’t remaining issues regarding adjustments — how could there not be? — but at this point probably not major ones, is my current view.
I still find the apparent closeness of results with radiosondes pretty validating, though. Is there reason to not see things that way?
Not all land use changes could be included. For instance irrigation and draining of wetlands (although this paper doesn’t address wetland draining) might affect the troposphere.
http://nldr.library.ucar.edu/repository/assets/osgc/OSGC-000-000-003-834.pdf
Steve Mosher…In his book: ‘Energy for Future Presidents”, Richard Muller wrote: “Global warming, although real and caused largely (and maybe 100 %) by humans, can be controlled only if we find inexpensive (or better, profitable) methods to reduce greenhouse emissions in China and the developing world.” Which suggests to me that the US, although having cut CO2 emissions significantly, cannot do it all alone. Obviously, only world recognition of AGW, and world efforts to reduce GHG emissions can be successful in saving our species. Is the BEST program, and its data, continually updated, so any temperature reductions can be identified and quantified going forward??
Walter,
The Berkeley project is moving toward a system of automating monthly updates. Right now the process is somewhat ad-hoc, every three or four months.
As far as identifying the effects of CO2 reductions in the temperature record, that is easier said than done, as one would need to create a compelling counterfactual scenario to figure out what would have happened if emissions had remained at a particular level. At that point you are already using climate models (with all their associated uncertainties), so you might just be better off comparing climate model runs with different forcings…
Heh, every three or four months is about as often as Richard Muller can afford the agony of his attribution unraveling, or ravelling, as you like it.
============================
Zeke…Scripps Institution has CO2 measurements going back to 1958, so if the numbers go down, why wouldn't they be able to quantify it?? And, who would care about the 'what if' scenario of emissions remaining at a particular level?? It won't matter what the climate models predict, as long as global temperatures follow CO2 levels down. The cause and effect link will show cause and effect.
“Obviously, only world recognition of AGW, and world efforts to reduce GHG emissions can be successful in saving our species.” Really? Saving our species? Muller is an extreme alarmist?
Well, that was Walter, being ironic about the gist of it. But yes, Muller is an alarmist and always has been. That was what made his 'skeptical conversion' so fabulous, and so phony. You let it all hang out, and it'll all come back.
==============
I said extreme, mildly shocked. There are certainly CAGW believers that don’t think the human race will be wiped out. The other, perhaps even more damning interpretation, is that Muller is like Rajendra Pachauri, who seems not to give a rat’s ass whether his pronouncements are at a precision level that makes them worth taking seriously.
Dagfinn,
Muller didn’t say “Obviously, only world recognition of AGW, and world efforts to reduce GHG emissions can be successful in saving our species.” That was Walter.
Zeke, you’re right. Got my quotes mixed up. My bad.
Zeke, a moment about attribution and another about the rabidly religious, including Muller, on the alarmist side?
============
Profitable. Tax everyone on the face of the Earth, just for exhaling.
“Global warming, although real and caused largely (and maybe 100 %) by humans, can be controlled only if we find inexpensive (or better, profitable) methods to reduce greenhouse emissions in China and the developing world.”
The spoken word will become too expensive for US to complain anymore.
I find it interesting how many Neros there are here on this blog. While the flames rage beneath their feet, they look skyward for any reason, no matter how irrelevant or minuscule, to seize on to justify their avoidance of reality. NASA has reported that 2012 was the 9th warmest year and all nine warmest years have occurred since 1998. Nitpicking each 0.1 C rise, year after year, avoids the problem that while the Earth is warming and population rising, few are coming up with programs to ameliorate these worldwide problems. Shall we all keep fiddling?
Walter, you write "the Earth is warming".
As of August 2013, this statement is just flat-out wrong. The earth has warmed recently, but that warming has ceased. Temperatures have been flat for about 15 years on the basis of all 5 data sets. And in the last 5 years, according to the satellite data, temperatures have been falling.
http://notrickszone.com/2013/08/17/over-a-quarter-of-the-worlds-population-1-8-billion-people-have-never-seen-global-warming-in-their-lives/
Sorry Walter. The warmists have all sorts of pal-reviewed papers to support their hypothesis, but empirical data trumps expert opinion every time.
Hah, hah, you weren’t being ironic.
=================
Here’s a clue, Walter; people who use your argument usually don’t even conceive of attribution, about which there has been some discussion here.
So, you have ignorant arguments and cite rabidly alarmist blogs. Your pants are on fire, but all you see is smoke and mirrors.
====================
Jim…What you state, "Temperatures have been flat for about 15 years on the basis of all 5 data sets," is blatantly FALSE. IF you think that you are correct, you should IMMEDIATELY tell NOAA and NASA that they need to change their news release, which stated: 2012 was one of the hottest years EVER. See: http://www.noaanews.noaa.gov/stories2013/20130806_stateoftheclimate.html
Walter, you write “you should IMMEDIATELY tell NOAA and NASA that they need to change their news release, ”
If I did I would be committing a scientific error. The paper you quoted is correct, but makes the same mistake you did. If there is an error, it is an error of omission. It is axiomatic that if you have a warming period followed by a cooling period, the cooling period ALWAYS starts at the MAXIMUM of the warming period. There are NO exceptions. So the fact that 2012 is among the years with record high temperatures is exactly in accordance with the idea that we are starting a new cooling period following a warming period. You are not understanding the meaning of the present tense in the English language. Currently temperatures are at a historically high value, but are now cooling.
Jim…there is no doubt in my mind that you live in a house with many mirrors. This causes you to never see reality for what it is. Of course, IF the world climate begins to cool, it would start during a warm period. However, for a cooling period to begin, there would have to be climatic indicators. What do you cite as the climatic indicator that CLEARLY (as established by scientific evidence) announces a cooling period has begun?? If it hasn't begun YET, when will it start, and what is your scientific evidence for when it will start??
‘Specifically, when the major modes of Northern Hemisphere climate variability are synchronized, or resonate, and the coupling between those modes simultaneously increases, the climate system appears to be thrown into a new state, marked by a break in the global mean temperature trend and in the character of El Niño/Southern Oscillation variability. Here, a new and improved means to quantify the coupling between climate modes confirms that another synchronization of these modes, followed by an increase in coupling occurred in 2001/02. This suggests that a break in the global mean temperature trend from the consistent warming over the 1976/77–2001/02 period may have occurred.’
ftp://starfish.mar.dfo-mpo.gc.ca/pub/ocean/CCS-WG_References/NewSinceReport/March15/Swanson%20and%20Tsonis%20Has%20the%20climate%20recently%20shifted%202008GL037022.pdf
The cool mode started in 2002. Horror of horrors – a tipping point involving changes in ocean and atmospheric circulation and resultant changes in cloud. These modes last 20 to 40 years.
http://s1114.photobucket.com/user/Chief_Hydrologist/media/cloud_palleandlaken2013_zps3c92a9fc.png.html?sort=3&o=15
Walter, you write “However, for a cooling period to begin, there would have to be climatic indicators.”
You are changing the subject. I objected to your statement "the earth is warming". Whether the earth has started cooling is irrelevant. The empirical data shows that the world is at record temperatures and is cooling. Whether this cooling is going to continue I have no idea. But at the moment the earth is not warming.
Jim…I'm sorry, I can't have a discussion with you if you insist on bobbing and weaving. I'm good at hitting a moving target, but not in this blogosphere. If you say the whole Earth climate is warming, you can't also say it is starting a cooling period WITHOUT having SOME climatic indicator that shows cooling. I have seen NO credible source suggesting that the whole Earth climate is cooling; I have seen some here suggesting that their personal knowledge shows it is not. For example, read what Chief Hydrologist wrote below: confusing disinformation with no clear statement. Anyone can take good data and run it through a poor model and come up with useless results that appear to contradict some other model. But, Jim, you stated "the warming has ceased", to which I responded that NOAA/NASA has shown the 9 hottest years since 1998 – clearly warming HAS NOT ceased. So you respond: well, it is the end of a warming cycle and a cooling cycle has begun, to which I ask for proof. AND, you have no clear proof. Call it what you will, but your arguments have no substance.
You ask for science – which you then reject out of hand and without any degree of understanding at all. Something that is quite obvious.
Pathetic and pointless.
Walter you write “to which I responded that NOAA/NASA has shown 9 hottest years since 1998-clearly warming HAS NOT ceased.”
The fact that there have been 9 hottest years since 1998 is NOT proof that the warming has not ceased. That is the error in your logic
Jim…I am convinced that the error is entirely within YOUR logic.
I’m very impressed, Walter; unfavorably.
====================
Guys…it's been fun chatting with a bunch of blogosphere DENIERS. However, while you challenge mathematical models and kibitz about them, I have been relying on expert testimony. Obviously, I'm not thinking of you as expert witnesses. IF you refuse to accept what the majority (what is it – 97%??) of the real experts conclude (in peer-reviewed study results), I just have to give up on you (remember, the definition of insanity is doing the same thing repetitively and expecting different results – and I'm not there yet).
Well, I've given up on your irreducible ignorance. You have last century's lamest arguments. Pause? What pause? Roll on, juggernaut thermageddon; don't worry about the Walters being crushed.
=========
kim…'irreducible ignorance'. Reminds me of the advice given lawyers: if the law is in your favor, argue the law; if the FACTS are in your favor, argue the facts; if neither is in your favor, assassinate character. Of course, that shows YOUR character; guess that's what runs around here in the denier blogosphere. Lots of Neros.
Walter is a cult of AGW groupthink space cadet. As usual very little understanding of science and very many dogmatic assertions from the official playbook. Stuff and nonsense and not worth the effort of engaging.
Chief…well said. So, goodbye.
Mr Mosher, I look forward to reading your response to the comments by Willis Eschenbach concerning BEST’S data manipulation and assumptions, Monthly Averages, Anomalies and Uncertainty: http://wattsupwiththat.com/2013/08/17/monthly-averages-anomalies-and-uncertainties/#more-91832
Paddylol,
I'd suggest reading the comment thread of the post you linked if you want Steve's response.
Willis Eschenbach, blogger with a certificate in massage and a B.A. in Psychology. Has worked recently as an Accounts/IT Senior Manager with South Pacific Oil. A profile can be found at desmogblog.com/willis-eschenbach. Has produced no peer-reviewed papers on climate science according to the criteria set by Skeptical Science…
With this bio, I , personally wouldn’t bother reading Willis’ comments.
That's how we've got here, by ignoring, suppressing and laughing at anything that dilutes the AGW message. It's a very dogmatic religion. Luckily, you cannot fool nature.
Even willard’s progressed further than you have.
=========
I personally think that Willis often (not always) makes sense in respect of climate, and that it's what is written that is most important, not the status of the person writing. Even scientists at the top of their game in their field do not influence my judgment on the substance or otherwise of their comments.
A sobering reminder of why CO2 emissions continue, and will keep continuing, to climb:
http://www.economist.com/news/briefing/21583245-china-worlds-worst-polluter-largest-investor-green-energy-its-rise-will-have
Look at the graphic comparing U.S. and Chinese emissions the last decade (“Emissions statement”). From 2005 through 2011, the six year increase in Chinese CO2 emissions is equal to half of ALL U.S. CO2 emissions in 2011. Chinese emissions have climbed since 2011, and will continue to climb.
Unless the satellite data are reconciled, this dataset is compromised. There are also several papers about urban bias from Christy, McKitrick, Michaels and Pielke that remain unresolved. The ROW is much worse than the US in this respect. The two areas largely unaffected by the UHI effect are the Arctic and the US48. Both show warming now is only as bad as it was in the thirties, and hence both can be used to verify the solar link.
Explain why this time we were not supposed to warm like we always warmed after every cold period in the past ten thousand years.
I know why. Computer Curve Fits do smooth the data of the past ten thousand years. None of the up and down cycles should have been there. Computer output is right, and Mann was right to take out the Natural wiggles in the data that could not have really happened.
They smoothed out the past cycles and the future cycles. Now, this normal cycle is not normal.
So, what we have is a hockey stick, but this hockey stick is ten thousand years old.
Herman…You are NOT going to like my answer; when you ask:
Explain why this time we were not supposed to warm like we always warmed after every cold period in the past ten thousand years
The obvious answer is: an overabundance of CO2, generated over the last half-century by increasing use of coal (cubic miles per year) and petroleum (billions of barrels per year). Check this out with the Energy Information Administration.
I am alarmist, hear me roar.
==========
Kim…or a realist with head NOT stuck in the sand !!
The sand is cooling, Walter; for how long even kim doesn’t know.
================
This must be the BEST post ever. Really Steve, you're (it's) wonderful.
On a lighter note, have you wondered about the lack of discrepancy in your 5 years of searching? 5 years and not one hint of UHI. Fantastic, utterly believable; after all, it's weather measurement we are talking about here, isn't it?
Even Neven has the ability to admit results vary from the expected. "What weird weather we are having in the Arctic recently."
But not Mosher: I have not found one example of UHI in 5 years, not one, not a half, not a skerrick.
Inconceivable, in the best Princess Bride tradition.
Either you're being had, or you're in bed.
Either way you're still the BEST.
Cheers mate.
Seems relevant to put this here: I was just trying to get to the BEST results paper from this page but clicking on the scitechnol.com link awoke my kaspersky antivirus, telling me that the object is infected with a trojan.
Mosher, Has BEST published its Surface Temperature Methods and Results in mainstream peer-reviewed journals in the Climate Science field?
Yes.
MORE ISSUES WITH BERKELEY EARTH
Beyond the criticism of Watts (regarding station classification for UHI analysis) and McKitrick (regarding the implicit assumption that dUHI increases with UHI), I stumbled over additional issues in the Berkeley Earth methods and UHI papers.
Methods paper
http://www.scitechnol.com/GIGS/GIGS-1-103.php
UHI paper
http://www.scitechnol.com/GIGS/GIGS-1-104.php
Issues are:
1. UHI paper, figure 2: plot of very rural stations / other stations.
All stations in Greenland, Arctic, almost all in northern Canada and northern Siberia are classified “very rural”.
Almost no “very rural” stations in India, few in southern South America.
Issue 1:
“Very rural” stations are grossly overrepresented in the high North, where warming has been faster than anywhere else (warming trends from figure 7 in the methods paper).
There also seems to be an underrepresentation in regions such as India and southern South America, where warming was small.
-> UHI paper figure 3: the temperature trend histograms are distorted, as “very rural” stations are grossly overrepresented in fast-warming regions and underrepresented in slow-warming regions.
Issue 2:
At least some high latitude airport stations obviously suffer from strong UHI, such as Svalbard or Maniitsoq, Greenland
http://wattsupwiththat.com/2013/08/10/how-not-to-measure-temperature-part-94-maniitsoq-greenland-all-time-high-temperature-rescinded/
-> Evident UHI from the high North is not included in the UHI estimation.
Issue 3:
India has almost no “very rural” data points. The “very rural” gridded temperature may have been computed with inflated trend data around India.
2. Estimation of the size of UHI trend
The UHI paper claims that the effect of UHI is approx. 0 deg/century and that this would be consistent with some other results (HadCRU 0.05, NOAA 0.06, GISS USA 0.01).
Even if we ignore above issues and assume all “very rural” stations are perfect, this claim is not properly founded.
Issue 4:
This claim is made by computing the difference between “very rural” stations reconstruction and all stations reconstruction.
However, the reconstruction is deliberately tailored to remove at least some of the UHI effect (see methods paper: “scalpel”, “outlier weighting”, “reliability weighting”).
-> The difference does not give an estimate of UHI, because part of the UHI has already been removed before comparison.
3. Missing plot in UHI paper
There is no plot showing UHI on a world map and only total land averages are given.
Such a plot is indispensable to assess the overall result.
Significant, implausible local results would immediately display issues with station classification.
Such a plot is also indispensable to compare with the McKitrick paper.
Issue 1 is a valid concern.
Issue 2: Maniitsoq is a very short record in our system (8 years), and Svalbard shows less warming.
Issue 3: Not much you can do about India; they keep a huge number of their stations as a state secret.
Issue 4: Precisely. When we find a station that has a break with its rural neighbors, it is going to be adjusted down. Take the US for example: we find a UHI trend of around .04C per decade, but after breakpoint adjustment we find no UHI. Similar to Zeke’s paper. The issue is not whether UHI exists. It does. The issue is whether you can estimate the temperature in a way that is not impacted by it. The goal was not to ESTIMATE UHI; the goal was to see if you could find it after applying the best averaging methods.
“The goal was not to ESTIMATE UHI”
——————————-
The paper says otherwise.
The first sentence in “Discussion” is an estimate of UHI.
“We observe the opposite of an urban heating effect over the period 1950 to 2010, with a slope of -0.10 ± 0.24°C/100yr (2σ error) in the Berkeley Earth global land temperature average.”
A little further down, this not-an-estimate is compared with estimates from other groups.
“Our results are in line with previous results on global averages despite differences in methodology. Parker [30] concluded that the effect of urban heating on the global trends is minor, HadCRU use a bias error of 0.05°C per century, and NOAA estimate residual urban heating of 0.06°C per century for the USA and GISS applies a correction to their data of 0.01°C per century.”
The conclusions are then
1. The estimate given is not an estimate. Thank you for agreeing.
2. The comparison with other “real” estimates is pointless. This is an important issue with this paper. Being “in line” with other near-zero estimates was used to argue that these results and those of other groups were correct.
This is way late, but I happened to come across something, and I had to post it. Upthread Steven Mosher insisted a data set was not used by BEST, but rather, it (and several others like it) were “produced, not used.” He explained seasonality was not removed in a step prior to the Berkeley Earth Averaging process, but rather:
In other words, the data sets on the page I referred to were made after kriging was applied to the data. I pointed out several problems with this claim, and things went nowhere as Mosher failed to address anything I said. I rehash this because I just stumbled across a discussion of the data sets being referred to on Mosher’s blog, where he said:
This is exactly as I described. In Mosher’s own words, seasonality is removed prior to the Berkeley Earth Averaging methodology being used. It’s done before kriging is used. It is done in the pre-processing of data.
Not only has the issue I raised remained unresolved, Mosher’s dismissal of it contradicts his own words. And since he both avoided any actual discussion and left the “discussion” we were having, it looks like things aren’t going to progress any further.
Brandon
You have raised a number of interesting issues, as did Manfred regarding UHI on the post immediately above yours. I hope your paper with Mosh on this subject goes ahead as it is a hotly contested subject which needs clarifying.
As you have burrowed deep into the Berkeley bowels you may know the answer to this, if not when Mosh comes along perhaps he can answer.
We want to graph CET alongside BEST land for as far back as it will go.
There seem to be a number of different BEST databases that are all slightly different from each other. Have you discovered which is the most ‘unadulterated’ set so we can compare like for like, preferably in the following separate formats:
1 Global
2 Northern Hemisphere only
3 Southern hemisphere only.
Presumably the latter will not reach as far back as number 2, as so few SH records are available.
tonyb
climatereason, results files are available for all three of those. Global is here. Northern hemisphere is here. Southern hemisphere is here. Those are all the “official” series for the regions you asked about.
I’m currently working on getting from the raw data to those results, but I suspect it won’t be possible. I can accept that the code released wasn’t remotely close to turnkey, but as it stands, I don’t think it can even produce the results they have. Until about a month ago, there was a different set of results uploaded. When those results were updated, the code wasn’t. That means there is one code release for two sets of results. Best I can tell, they’ve simply released new results without releasing code for them. That makes any attempt to check their work difficult.
Especially since there is a glaring problem with the BEST results I pointed out ages ago. And I just mean in their results. It’s not a methodological criticism; it’s simply an observation. They’ve released data that defies all logic. I think I first pointed out the problem over a year ago now, and all that’s happened since is it’s been made worse.
I should have worked through the code sooner to figure out the problem, but every time I’ve tried, I got frustrated with how poor BEST’s code releases have been. The fact that I also haven’t been able to get a single meaningful response from any BEST member, despite attempting to discuss issues with them many times, has left me with little interest. Plus, the BEST team failed to do even the simplest of testing on a number of their results. All that combined has kept me from having much motivation.
Sorry for the rambling. None of that really has to do with the paper Mosher suggested he and I write. I’ve just spent the last hour looking into BEST’s results, and I’m annoyed. I’m starting to think either I’m delusional or the BEST team members are blind. That, or they simply didn’t bother to check their work.
Brandon
Many thanks for that. We will play around with them over the next few days and see how it compares with what we have already extracted. Trying to work through any of these databases is not straightforward. Raw data is very hard to find, as there is always a reason to amend it, sometimes considerably.
Have you ever heard of the IMPROVE project?
The EU gave Phil Jones, D. Camuffo and their team 7 million Euros to look at 7 historic temperature data sets. The end result bears no relation to the initial readings (no doubt for reasons they could defend). Personally, having been looking at historic temperatures for years, I think so many have been changed or were suspect in the first place that the historic temperature record as we know it has a very large question mark over it as a precise measure of what happened.
As Hubert Lamb said about them; ‘We can understand the tendency but not the precision.’ Accurate to a few tenths of a degree? Hmmm.
tonyb
On the upside, I did come across this comment in the main entrance to the BEST code:
Even the code says BEST removes (or at least claims to remove) seasonality as a pre-processing step. I have no idea why Mosher is so insistent that I was wrong to say it does.
I’ve never heard of that project, but what you say doesn’t surprise me. I think one could “justify” just about any change one wanted when using a statistical meat-grinder (phrase shamelessly stolen from McIntyre). That’s especially true if one’s methodology isn’t worked through to demonstrate its appropriateness (which BEST didn’t do).
Your comment about raw data made me check something. It seems BEST released a new data dump. That means its raw data is available and up to date. That means its raw data and its final results are available, but the actual code used to go between the two isn’t. It’s weird.
By the way, who is this we you mention?
“Your comment about raw data made me check something. It seems BEST released a new data dump. That means its raw data is available and up to date. That means its raw data and its final results are available, but the actual code used to go between the two isn’t.”
Yes, and the code will probably not be available until we decide what other additions we want to make to it: when to integrate the ocean work, the high-res work, the daily fields, and further refinements to the regressions, plus some more tweaks on seasonality and lapse rate.
“The fact that I also haven’t been able to get a single meaningful response from any BEST member, despite attempting to discuss issues with them many times, has left me with little interest.”
Do you really want me to post the only 8 emails we exchanged?
Oh, did you ever write to Henrik as I suggested?
Did you find a solution to your problem with R.matlab?
Steven Mosher:
Go for it. People will see you interpreted my question in a way that couldn’t be justified by what I wrote. They’ll see you then said you’d look into the issue (yet apparently never did). In fact they’ll see the only useful thing you said was to ask if I had written to the maintainer of the package which would have been my next step anyway.
Yes. Henrik and I talked a bit, and he found there was a bug in his code. He updated his package to fix it.
The public is good enough for it? That you like us enough? That it’s ridiculous to release results predicated on substantial changes to your methodology that nobody could possibly know about? That it’s unacceptable to publish your results on a page that describes them with false information?
Here brandon
“Mosher, I’m afraid we’re talking about different things. The data you’re talking about is found via this page, and it says it hasn’t been updated in a year (since February 2012). The data I’m talking about is in the January 2013 update to this page. If you decompress the file found there, there are many .mat files in it. Those are what I haven’t been able to read into R.
The files you refer to are simple .txt files. They’re not compressed. That makes them relatively easy to read into R. The proprietary compression used by Matlab in making .mat files is what causes my problem. I don’t know how to read them without having Matlab. I’ve tried using the R.matlab package to read them, but I get an error every time. I can confirm I’ve fully extracted the files as some plain text is viewable if I examine the .mat files in a text editor, and the R.matlab package recognizes them are appropriate files, but I can’t actually read them into R.”
Ah ok. sorry my bad. I’ll look into that today/tonight.
No problem. I hope you have better luck than I did.
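For context, the failing step was essentially a call of this form (a minimal sketch; the file name is illustrative, not an actual BEST file name):

library(R.matlab)                 # CRAN package for reading .mat files into R
dat <- readMat("stations.mat")    # this is the call that was producing the error
str(dat, max.level = 1)           # on success: a named list of the MATLAB variables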
Did you write to the maintainer of the R.matlab package?
If you have uncovered a bug he should know.
Henrik is the maintainer. He is very good when it comes to support;
he is one of the gurus of R.
Here is your last message to me
################
I can send him an e-mail, but I’m not sure if the problem is in his package. It might be something about the file. All I know is the error says:
Error in readArrayFlags(this) :
Unknown array type (class). Not in [1,13]: 15
Then again, I have read the package isn’t compatible with files written by some versions of Matlab. I’ll try sending him an e-mail and see if he has any insight.
#########
So let’s see. I misinterpreted your question. Again, since you were reporting a problem to me, I assumed it was my package.
I apologized and sent you to the PERSON YOU SHOULD HAVE ASKED FIRST. You were using his package, and then you report his bugs to me and get upset when I don’t figure that out.
Then you say that you doubt the problem is his problem.
THEN you never come back to say that I was right and the problem was his. Now did you?
So: you use another guy’s package.
You report a problem with that package to me.
I assume you are reporting a problem with MY PACKAGE.
You tell me it’s R.matlab.
I tell you to read the fucking manual and write to the maintainer of the package.
You surmise that it’s not his bug.
You email him and figure out that it is his bug.
AND you never report that back to me.
Is that about it?
Brandon
“The public is good enough for it? That you like us enough? That it’s ridiculous to release results predicated on substantial changes to your methodology that nobody could possibly know about? That it’s unacceptable to publish your results on a page that describes them with false information?”
Huh?
I thought I was clear in the post that this is work in progress. Also,
the changes are not material; otherwise one could probably get a paper out of them. Basically,
we added more stations, we improved our homogenization, and we changed the way we remove seasonality. Those changes are not substantial.
Steven Mosher:
Even if you were, this post is not the BEST website. Given that website is where the results are hosted, that’s where any notification would need to be given.
But you weren’t clear at all. You said:
We’re not talking about new sections. We’re talking about the underlying methodology used to generate the BEST results. Nothing in your post indicates the BEST temperature record, which has had multiple papers published about it, is a “work in progress.” In fact, you directly implied it is not a work in progress by only specifying that the new sections have some work in progress.
Actually, they are. And if I can work out just what you’ve done, I intend to write a post about a significant change I’ve found in the results.
Brandon, you still don’t get it.
The data we use is in the source files.
No Mosher, I do get it. Once I realized you guys published new results without disclosing the changes you’ve made, it was pretty obvious why I was confused. My earlier confusion arose because I assumed the public documentation of BEST’s work provided on the same page as the results actually described those results.
I’ll admit it took me a little while to catch onto that issue, but to be fair, you could have just told me the BEST site gives false information about the methodology used to generate the results on said site.
Or, you know, you could have updated the site.
Brandon, check the date on my blog discussion.
The approach has changed substantially since then. Until the code is published it’s going to be very hard for you to follow any discussions or descriptions. The datasets and their descriptions are all being updated, so you are going to continue to be confused; referring to old blog posts won’t help you, it will just confuse you.
I’ll spend some time with Robert and see if we can’t write a clearer description for you, but I have to choose between helping you or helping a couple of guys who are actually getting the code running.
Steven Mosher states the obvious:
This is why you don’t publish results if you can’t publish the code to support those results. Mosher has been fond of saying papers aren’t science, they’re advertisement for science. I agree. The same is true of this blog post and for the latest data release on the BEST website. Neither is science. Until BEST releases its code, none of this is science.
And that’s okay by me. If BEST wants to release results we can’t check, that’s fine by me. BEST can even take down the previous results, which had code to go with them, in favor of unverifiable results. We can do what Mosher has said about papers that are released without data or code: ignore them for not providing science.
But if BEST is going to do this, it really ought to say that’s what it’s doing. It shouldn’t prominently say its work is open and verifiable, like on its homepage:
Brandon
Do you understand the difference between publishing a paper with all the data and code to support it, and releasing work in progress? Seriously.
If you want to understand the new bits, have a look at the code and look for the routines called experimental. It’s marked clearly in the code.
You’ll even see some things that we haven’t announced. Of course that means other folks could go take that and get a paper published first, but what the heck.
But until a paper is published explaining it all, you are going to have to read the code. And if you install it, then you should be able to pull from SVN.
Steven Mosher:
I do. I also understand when you release work in progress, you label it as work in progress. You also state any caveats necessary because of it being a work in progress. BEST hasn’t done anything like that.
So it’s okay to have false information on your website, code that doesn’t produce your results, and no information for people to tell them the code won’t produce your results… because they could find code buried somewhere and run it? That’s an interesting position.
What’s also interesting is I can’t find anything labeled experimental in the code. I did a tree command and there is no file or folder with such a label. I did a search in Matlab, and it didn’t find any such routines. In fact, a search of all the files found only one instance of experimental, and that was in a line commented out. So you might want to think twice when you say:
The first thing I did is download the code release and look at every file. It’s possible I missed something, but I doubt it.
“This is exactly as I described. In Mosher’s own words, seasonality is removed prior to the Berkeley Earth Averaging methodology being used. It’s done before kriging is used. It is done in the pre-processing of data.”
Wrong, this is the way we used to do it. Actually, prior to the publication of the first paper we started looking at a different approach to seasonality estimation. That approach was completed during the submission process of the first paper. So the paper describes the way it used to be done, and the website describes the way it used to be done.
Here is how it is done now
The temperature is modelled as a combination of Climate, Weather, and Seasonality. This is solved simultaneously.
So the old step-by-step process described on my web page, and on the current web page, is out of date.
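In symbols, the decomposition being described is roughly the following (the notation is illustrative, not Berkeley Earth’s own):

T(x, t) \approx C(x) + S(x, m(t)) + W(x, t)

where C is the local climatology, S the seasonal cycle as a function of the month m(t), and W the weather (anomaly) term. The change described above is that C and S are now fitted jointly in the averaging step rather than S being stripped out in a separate earlier pass.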
Bravo Steven Mosher. After a great deal of pestering, you’ve managed to finally say what I asked you to say before. Of course, you said I was wrong at the same time:
In my very first response to you on this page, I said:
We’ve now clarified that the page is wrong. It’s wrong because BEST changed its methodology without updating its description of that methodology. The entire problem I raised on this page comes from that undisclosed change. All you ever had to do was “say the page is wrong and fix it.” Instead, you said things like:
Over and over you insisted I was wrong because I wasn’t aware of a change BEST hasn’t disclosed. The entire time, I accepted my understanding could be wrong if the descriptions I was reading were wrong. I explicitly asked you to say if those descriptions were wrong. You could have. You didn’t.
Rather than give a simple, straightforward response to address confusion created by BEST making undisclosed changes, you insisted I was wrong for having the audacity to trust BEST’s documentation of its work.
And you’re still saying I’m wrong!
I should point out the figure Steven Mosher directed me to was, like the description I quoted, outdated. Rather than just say the website was wrong and needed to be updated, Mosher pointed me to a figure on that website that showed exactly what I said and claimed it showed I was wrong! That takes courage.
Brandon.
I don’t know how I can be any clearer. The whole point of this post is to point out some updates. There are new datasets, there are new stations, there is an improved method, and there are a bunch of changes to the web site. The changes will continue to happen as we continue to do upgrades. There will be figures out of date, links out of date, and text out of date. The data goes up first because of the folks who need to use it. Everything else will catch up in due course. Most people just send me email with their suggested fixes and questions. They get a thank you and the fix goes up.
Steven Mosher, I asked a question about a contradiction I found on the website. I then asked you to tell me if the website is wrong. You didn’t. In fact, you pointed me to outdated aspects of the site without any warning that they were outdated. Are you seriously saying you “dont know how [you] can be any clearer”? How about this: Tell people when the sources they’re relying on are wrong. That’s a good first step.
As for the idea of a website being outdated, that’s ridiculous. There are maybe five pages that need to be updated. Most of the changes would be minor. I could do it in 20 minutes. Are you seriously saying you guys released a major update to your data and results, but you can’t be bothered to take 20 minutes to correct your descriptions of it?
If that’s the case, I’ll do it for you. And if people have questions about inconsistencies in what they read in BEST’s material, rather than being an unhelpful prick, you can direct them to me.
Brandon
“Over and over you insisted I was wrong because I wasn’t aware of a change BEST hasn’t disclosed. The entire time, I accepted my understanding could be wrong if the descriptions I was reading were wrong. I explicitly asked you to say if those descriptions were wrong. You could have. You didn’t.”
You are still not getting it.
The description is accurate if you are talking about the data values in the file. But the file is not used by the program, as you tried to insist; the file is produced by the program, as I’ve told you now several times. If you are interested in the actual physical dataset that gets used, then that’s in the source build.
It will have the same base name as the output file, but it will have a different extension. The program uses .mat files. When we post to the web, we write output files. These will NOT be .mat files. I don’t know how many different ways to say this: we use .mat files, we output .txt.
So, when you look at the BerkeleyAverage() function you will find that it gets data from the loadTemperature function. The load function gets its data from the registered datasets. Those are all .mat files. The files on the web are not .mat files. If you want the .mat versions, they are in the nightly build.
So if you are hunting around the code asking yourself where these .txt files that we post get ingested, you’ll be frustrated, because those files are not used. The versions that get used have .mat extensions.
So again: the files on the web are not used by the program. You won’t find code that says “read in this .txt file”. Those files get produced by the program.
Brandon
If you continue to think that the program uses the .txt files, you will never understand the code. It uses the .mat files.
The .txt files are posted on the web site. The .mat files are in the source files.
You are just wrong when you insist that the program uses the .txt files. It doesn’t. Those are output to the web for people who don’t use Matlab.
Here is the function that loads data. Study it.
function [se, sites, site_table, site_index] = ...
loadTemperatureData( archive, class, version, kind )
It loads data from the registered datasets.
Those are all .mat files, not .txt files.
Steven Mosher, what are you smoking?
I pointed to what the BEST website says it uses. The fact data sets can come in many different formats has no bearing on anything. I have never said anything to justify these comments:
This is no better than me saying if you continue to beat your wife, you deserve to go to jail. In the same way, when you say:
You’re full of it. I’ve never insisted the program uses .txt files. I said it uses that data set. The fact it uses that data set in a different format may be relevant in some technical regard, but it has no bearing on any point I’ve raised. You’re going on and on about some inanity that has no relevance to anything I’ve said. And then you have the audacity to use your red herring to say:
Try this. Read what I write and stop being a prick. Not only have I run that code, the loadTemperatureData.m routine that preps data for it says exactly what I said about this data set.
Let’s see if I can clear up some terminology.
1. Source files. Source files are the temperature files that we read in. These are the files that we use. You can see what is loaded by loadRawTemperatureData.
2. Output files. These are the files that we write. Those files are listed on the web site. These files are not used by the process; they are output by the process.
3. Intermediate datasets. Intermediate datasets are in the Data folder of the code drop. They have a .mat extension. These datasets are both written and used by the program depending on how you execute it (see the options flags).
As I said, Brandon, seasonality is “removed”, or rather estimated, before kriging. The confusion seems to center around the difference between:
source files: the temperature data we read in;
output files: the data we put on the web that you see;
and intermediate datasets, which are only in the data directory of the source code drop. These have a .mat extension.
Mosh
Not intended to be snark, but above you said:
“The temperature is modelled as a combination of Climate, Weather, and Seasonality.. This is solved simultaneously.”
We are trying to graph the correlation between BEST and CET for as far back as possible.
If Stockholm was originally recorded as having a mean average of, say, 8.62C in 1781 and 8.71C in 2003 (just examples and not real), by the time you have carried out the above exercise is it STILL 8.62 and 8.71C?
tonyb
Wait a second… Steven Mosher’s habit of making many separate comments may have caused me to miss something important. I didn’t catch this comment until I saw climatereason’s response to it. That’s bad as it says:
But that’s not what he said originally. His post explicitly says:
“In the kriging process” is not “before kriging.” That’s a significant change (if it wasn’t just a typo). Half of my original comment was devoted to discussing whether seasonality was removed during kriging or in a pre-processing step.
Brandon
Yesterday I mentioned the EU IMPROVE project, designed to produce 7 new series from 7 existing historic data sets.
here is a link;
http://www.isac.cnr.it/~microcl/climatologia/improve.php
I have had the book out from the Met Office library three times. It’s a fine piece of detective work, but whether the end result is highly accurate is another matter.
Tony
Tony, I may be wrong, but the IMPROVE project strongly resembles a straw-clutching exercise.
Tony
Brandon hasn’t really helped much to get you to an understanding of what’s going on.
First, the files.
As the website notes, there are “matlab” files and then the .txt files we release on the web.
Brandon has tried to insist that we use the .txt files. We don’t. We output those for people who don’t have Matlab.
The Matlab files are now output in the source tree, which is built and posted daily.
So, from a data inputs and data outputs standpoint: the data that we input is in the source files. The data that we output is in the .txt files.
The .txt files are for folks who don’t want to work with Matlab.
Mosh
So what data should we fairly use in order to get back as far as possible with BEST on a global, northern and southern hemisphere basis?
Tonyb
Steve, do the matlab and txt files contain identical data? IOW is it only the format that’s changed?
“phatboy | September 8, 2013 at 3:37 pm |
Steve, do the matlab and txt files contain identical data? IOW is it only the format that’s changed?”
Yes, with the old method they should contain the same values, but with the new changes made to the estimation of seasonality, that is something that I would want to check rather than assume. It might not even make sense to post those files anymore, as it leads to confusion, and perhaps we should just move to leaving that data in .mat format.
That way there would be no confusion about which data values are actually read in and used by the program. The thought was that reflecting the intermediate data in an open format would be helpful, but it’s really cost me more time and effort than it’s worth.
Steven Mosher:
I have never claimed or thought any .txt files were read by the code. All I did was point to a data set Mosher agrees “should contain the same values,” that the BEST website says is used by its analysis, as being the data set used by BEST’s analysis.
In retrospect, it seems Mosher’s responses to me revolve around the fact these files are in a different format than the .mat files actually read by the code. As in, I said a data set was used (as did the BEST website), but Mosher insists I’m wrong because identical data stored in a different format was used instead. Mosher is flat-out making things up when he says:
I haven’t done anything of the sort. I pointed to a public version of a data set and said it is the data set used in BEST’s analysis. I didn’t worry about the technical detail that BEST’s analysis actually uses the exact same data, but in a different format. I think that’s reasonable. And apparently, so does BEST. After all, it describes the data set I pointed to as:
The only person who has cared about the distinction between the .txt and .mat versions of the same data set is Mosher. Nobody except him has been confused or even interested in this issue.
Steve, why can’t the matlab files be translated directly into txt format? Shouldn’t be too difficult?
phatboy, they can. That’s how the .txt files on the BEST website were produced.
For that matter, before I posted anything on this page, I pulled about a dozen station records from the .mat files and compared them to copies of the .txt files I had on my machine. Try to reconcile that with Mosher’s claim that I insisted the .txt files were used in the analysis. If I had insisted the .txt files were used, why would I have spent time matching records in .txt files to records in the .mat files?
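In case it helps anyone else, that cross-check amounts to something like the following rough sketch; the file names and the structure of the .mat contents are assumptions, not the actual BEST names.

library(R.matlab)
mat_dump <- readMat("stations.mat")          # hypothetical file name from the code drop
str(mat_dump, max.level = 1)                 # locate the station array of interest
txt_rec <- read.table("station_record.txt", comment.char = "%")   # hypothetical posted file
# then line the two up by year and month and compare values, e.g. with all.equal()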
Can we get a mediator?
Tonyb | September 8, 2013 at 3:32 pm |
Mosh
So what data should we fairly use in order to get back as far as possible with BEST on a global, northern and southern hemisphere basis?
Tonyb
############
As best as I can figure, Brandon is trying to reconstruct all the various stages that the data goes through, which is why he apparently is focused on intermediate data.
Let’s see if I can walk you through it; maybe it will help him as well.
1. Multiple sources are read in.
2. Those sources are turned into a common format
http://berkeleyearth.org/source-files
NOTE: these are txt versions of the matlab data structures.
Why do we output this common format?
A) so that you can check that we read in the FTP sources correctly;
B) so that you can run your own process for doing averaging.
3. Then the sources are assembled into a multi-valued dataset.
This shows the data before we do duplicate removal.
Again, the files posted are in .txt format.
If you want the Matlab versions, they are in the nightly build.
4. Single-valued
Duplicate stations are removed and you get the single-valued file.
The .txt versions are on the web site. The Matlab versions are in the nightly build.
Why do we output both the multi-valued and the single-valued data? Primarily so that you can check that we did the de-duplication correctly. This is where folks who specialize in certain stations can make a contribution, because the problem of de-duping is inexact. That is, we use algorithms to discover duplicates, and those algorithms are never as good as a human expert.
5. Quality flags applied
We have our own QC process. The .txt files show what the station data looks like after QC. There are also Matlab versions, which are the data the source code uses.
Here too people can compare the single-valued data and the QC’d data to look at what we do for QC.
6. Seasonality removed
First let’s talk about the old process. In the old process, seasonality was ‘removed’ by simply fitting a sin() and a couple of harmonics to each station. This removes most of the seasonality (a short sketch of that kind of harmonic fit follows this walkthrough). That dataset is then saved (as .mat). Then the averaging is applied to the residuals after climatology and seasonality are removed. Then the climatology, seasonality, and kriged field are added back together for final output.
On the web we post a .txt version of this file. That’s an output of the program. The ingest version is a .mat file that’s in the source code dump.
Now the new process: in the new process we do the fitting of the climatology and the seasonality in the same step. So I’m not sure if the files under “seasonality removed” have the same meaning that they did before. In any case, the .txt files are not inputs. Hopefully I’ll have an answer on what the .mat files represent (if anything), or even whether we should be posting them.
7. Homogenized
This is a new dataset; it shows what a station looks like after the entire process. Again, the .txt files are outputs.
##########################
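To make the old step 6 concrete, here is a minimal sketch of that kind of per-station harmonic fit, written in R rather than the project’s Matlab; the synthetic data and variable names are illustrative only, not BEST’s actual code.

# Fit an annual cycle (1 and 2 cycles per year) to one station's monthly
# temperatures and keep the residuals for the averaging step.
fit_seasonality <- function(temp, month) {
  w <- 2 * pi * month / 12
  fit <- lm(temp ~ sin(w) + cos(w) + sin(2 * w) + cos(2 * w))
  list(seasonal = fitted(fit),      # estimated seasonal cycle
       residual = residuals(fit))   # what would go on to the averaging/kriging step
}

# Example with a made-up 30-year monthly station record:
month <- rep(1:12, times = 30)
temp  <- 10 + 8 * cos(2 * pi * (month - 7) / 12) + rnorm(length(month), sd = 1)
out   <- fit_seasonality(temp, month)

Under the new process described above, this is no longer a separate pre-processing pass; the climatology and seasonality are estimated jointly with the averaging.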
Now on to your question: what is the best way to compare CET to Berkeley Earth?
For this you want to look at various outputs. Comparing to CET is a nice check for us, because the biggest impact of the code changes was in the early years. For us that’s the biggest source of uncertainty. The IPCC just cuts off everything before 1850, so one of my goals is to get to the bottom of what we can really say before 1850. There are a few things I don’t like about the current version of the early years, so any criticism you have is quite welcome.
1. Global: this is the global field.
http://berkeleyearth.lbl.gov/auto/Global/Full_TAVG_complete.txt
2. NH
http://berkeleyearth.lbl.gov/auto/Regional/TAVG/Text/northern-hemisphere-TAVG-Trend.txt
3. SH
http://berkeleyearth.lbl.gov/auto/Regional/TAVG/Text/southern-hemisphere-TAVG-Trend.txt
England (not including territories)
http://berkeleyearth.lbl.gov/auto/Regional/TAVG/Text/united-kingdom-%28europe%29-TAVG-Trend.txt
and you can look at estimates for various grid areas
http://berkeleyearth.lbl.gov/auto/Local/TAVG/Text/52.24N-2.63W-TAVG-Trend.txt
http://berkeleyearth.lbl.gov/auto/Local/TAVG/Text/52.24N-0.00W-TAVG-Trend.txt
Ideally, if you had the lat/lon extent of the measuring area, we could extract that from the field.
Or, I can use the new EU 1/4 degree gridded data. The new 1/4 degree fields are probably the best option, but since we are just starting to work with them ourselves I’m not sure I want to post them until we actually finish going through the vetting work. The addition of new stations, a new homogenization, new seasonality and new gridding scheme is a lot of changes. The checks for the US 1/4 degree are going on, so it might be nice to compare to CET and see if the answer is making sense.
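For what it’s worth, here is a minimal sketch of pulling one of these posted .txt series into R for that comparison. It assumes the usual layout of these files: comment lines beginning with “%” followed by whitespace-separated columns, with year, month and monthly anomaly first; check the file header before trusting the column positions.

# Read the posted global TAVG series; "%" marks the header/comment lines.
url  <- "http://berkeleyearth.lbl.gov/auto/Global/Full_TAVG_complete.txt"
best <- read.table(url, comment.char = "%")
names(best)[1:3] <- c("year", "month", "anomaly")   # assumed column order

# Annual means, for lining up against an annual CET series.
annual <- aggregate(anomaly ~ year, data = best, FUN = mean)
plot(annual$year, annual$anomaly, type = "l",
     xlab = "Year", ylab = "Anomaly (deg C)")

The same recipe should work for the NH, SH and UK files linked above, assuming they share that layout.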
Reading the above thread, I get the impression that a very simplistic development protocol has been followed, one that involved changing all sorts of things that should probably have been left unchanged, resulting in confusion and difficulty of replication and testing.
While a full design for a regression testing environment is beyond the scope of a comment here, I can offer some pointers that I would use in designing such an environment, if it were my charge:
1. Never change any dataset. If there is a version of data (of any type) that will be part of the input to any part of the process, whether original or intermediate, it should be explicitly versioned, e.g. xxxxdata.version3.run5. This includes the output from all test runs, since they will necessarily be input to (probably automated) comparison processes to pin down exactly what changes to the output were produced by changes to software or intermediate datasets.
2. When a new release of software is produced, it should not replace the older release, but should become the newest release/version in a sequence reaching back to the beginning of the process.
3. Test runs should create logs, as part of the versioned output, that explicitly identify the versions of input and output datasets from the run, as well as the date, time, and ideally any relevant comments.
4. Unit test and associated code changes can be excluded from this process, but when a set of software reaches the level of system test, all the above protocols should be followed. This will facilitate identification of errors where an incorrect version of one module was advanced to acceptance testing (or whatever equivalent is used in the specific environment).
5. Versioning and logging should be fully automated; there should be no manual processes to be forgotten, creating an overlay situation in which old runs cannot be recreated because the original input has been lost.
6. For versioned datasets, redirect URLs can be created that always point to the latest version of a specific dataset, including code, for the use of people who only want to run with the latest data/code. When the specific structure or meaning of fields in the dataset changes, users of redirect URLs would have to make changes to keep up, but they would always have the option of changing their inputs to the specific version they were last using, to avoid confusion WRT changes.
7. Documentation regarding what each dataset represents, and how to use data and code, should also be versioned with older versions retained as an audit trail. Although I haven’t always done this, I’ve often found myself wishing I had, when I came back to an older version 6 months later and couldn’t remember exactly where I’d been in the process and what I’d been doing. For larger projects, where staffing changes would have to be accommodated, this would be even more valuable.
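A minimal sketch of the sort of automated versioning and run logging described in points 1, 3 and 5 above, written in R for consistency with the rest of this thread; all file names and paths are illustrative.

run_and_log <- function(input_file, output_dir, version, run) {
  tag      <- sprintf("version%d.run%d", version, run)
  out_file <- file.path(output_dir, paste0("results.", tag, ".txt"))
  log_file <- file.path(output_dir, paste0("runlog.", tag, ".txt"))

  result <- read.table(input_file)   # stand-in for the real processing step

  write.table(result, out_file, row.names = FALSE)   # a new, uniquely named output every run
  writeLines(c(paste("run:      ", tag),
               paste("input:    ", input_file),
               paste("output:   ", out_file),
               paste("timestamp:", format(Sys.time()))),
             log_file)
  invisible(out_file)
}

The point is simply that every run writes a uniquely named output plus a log tying it to its inputs, so no earlier result is ever overwritten.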
My experience is that such a system, while much more expensive to set up at the beginning, will quickly pay for itself in saved time during development and testing, not to mention improved software quality. Although there have probably been exceptions, my recollection is that whenever I perform a manual process “because it’s only going to happen once or twice”, I tend to get burned and end up wishing I’d set up an automated process from the start.
The enormous benefits in terms of ease of use by outside users/auditors would be a bonus.
While I’m not familiar with the current array of open-source products that address this type of issue, I strongly suspect there are products that can be configured to set up precisely this sort of environment.
AK, I agree wholeheartedly with this sort of approach. I think it’d be great, and I don’t get why BEST hasn’t done anything like it. That’s especially true since Steven Mosher has promoted this same idea. Given how hyped up BEST was, I’m at a loss as to why they didn’t do anything like this. I get not going all the way with it, but they haven’t even created the most basic of archives. If someone wants to compare the current data set to the last, they’re out of luck. Heck, if they want to compare the current BEST temperature series to past versions, they’re out of luck. As far as BEST allows, you can’t even look at their previous results (though I think each version is available via the Wayback Machine).
Not only do they not have even the simplest of archives, they don’t even inform people of when changes will happen. I visited the BEST website a month before this latest release. I had no idea there was going to be a change. If I had, I’d have grabbed files for future reference.
But it gets better. Before this latest release, there was another release. I wanted to compare the data used for that set of results to the data used in the original. I couldn’t. It turns out they updated the code and packaged data for it (in a proprietary format that requires Matlab), but they didn’t update their data page. Even better, they didn’t update the results file promoted on their Findings page.
To look at their data, one would have to have known to go to their Code page. To look at their results, one would have to have known to go to their Data by Location page and find the global record there. Going to the Data page would only show the old version’s data. Going to the Findings page would only show the old findings.
And now they apparently have a new version of code they simply aren’t releasing. Because BEST, the ideal project designed to finally resolve concerns about the surface temperature record, is less competent at releasing and documenting work than an amateur video game modder. (I speak from experience.)
By the way, Mosher justified BEST’s decision to not release its newest code in part by saying:
Apparently we’re supposed to just take him at his word. BEST doesn’t have to release its code because one member says the changes aren’t important. And he says this, not on the BEST website or in any forum officially endorsed by BEST, but on a blog somewhere. Can anyone imagine the reaction if a climate scientist had argued this position?
Besides, it’s not true. I found a glaring (new) problem after having the data for less than five minutes. I wouldn’t have found it as quickly except I was checking the status of a problem that existed in earlier versions. Still, the fact it took me less than five minutes shows BEST is doing something really wrong.
And yes, I’m being intentionally vague here. I’ve tried discussing issues with BEST before. One thing I noticed is the BEST temperature record had a strong seasonal cycle, stronger than any other major temperature record. That struck me as ridiculous. It’s one of the easiest steps in creating a temperature record, and BEST did a worse job of it than any other group.
I pointed it out a number of times, including in an e-mail to Robert Rhode (who I was told to contact with concerns by another member of the BEST team). I even made ACF graphs to make it undeniable. I was ignored. It was annoying, but whatever, right?
Wrong. It turns out the latest release by BEST has removed that seasonal cycle from its temperature series. That means more than a year after I raised an issue multiple times, with two different members of the BEST team and in an e-mail to a third, the problem was fixed without any acknowledgment it ever existed. I don’t care about getting credit,* but I do care about problems being hidden like that. I’m not exactly inclined to help them do it.
*Besides, several others pointed it out as well.
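For anyone who wants to repeat that check, here is a minimal sketch of the ACF test described above; the file name and column position are assumptions, so check the header of whichever results file you use. A residual seasonal cycle shows up as spikes at lags of 12, 24, 36 months.

# "%" marks comment lines in the posted .txt files.
best <- read.table("Full_TAVG_complete.txt", comment.char = "%")
anom <- best[[3]]                          # assumed: monthly anomaly in column 3
acf(anom, lag.max = 48, na.action = na.pass,
    main = "ACF of monthly anomalies")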
AGW science today…
ho·mog·e·nize
həˈmäjəˌnīz/
verb
verb: homogenize; 3rd person present: homogenizes; past tense: homogenized; past participle: homogenized; gerund or present participle: homogenizing; verb: homogenise; 3rd person present: homogenises; past tense: homogenised; past participle: homogenised; gerund or present participle: homogenising
1.
subject (milk) to a process in which the fat droplets are emulsified and the cream does not separate.
“homogenized milk”
Biology
prepare a suspension of cell constituents from (tissue) by physical treatment in a liquid.
2.
make uniform or similar.
Got Milk? Want some more?
@Brandon Shollenberger…
Well, I do. It’s a huge front-end expense, not just in money but in time. All my (larger project) IT work has been in a corporate environment, with strong incentives to get the stuff right, everybody knowing that we’re going to have to go back and change things in a year or two when the business model changes, and a programming staff that, by and large, is happy enough to spend the extra time as long as they get a paycheck. Even there, I’ve always had to fight to get authorization to set up a regression testing environment. (And often lost.)
Contrast that with scientists working on software for the sake of a paper, or even a few papers in sequence: they want to get it done and get to the science. I can understand their feelings, and it’s very hard to really perceive the longer-term benefits that justify spending an extra 6 months before you can publish your paper. I understand, I just don’t agree. From my perspective.
So we are left with: synthetic homogenized weather models.
Honestly, AK. The raw data, massaged data, and code already exist in file form. How much effort does it really take to copy the files to a folder available using FTP? Not much.
AK, getting something as structured as you described set up is definitely a huge undertaking, but getting a small version of it set up is not. For example, it’s incredibly easy to make an archive directory with a subdirectory for each version you release (or even internal versions). It’s also incredibly easy to keep logs of what code has changed between versions. Do those two things, make the archive publicly accessible, and you’re set. People can go to the archive to find anything they need, and a direct link is available for anything people want to look at (a great boon for discussions).
If you don’t automate it, you’re talking about maybe ten minutes of work per release. That’s nothing.
Disciplined programmers keep code in a code repository, like SVN for example. It would be a no-brainer to set up a read-only copy in an FTP folder. That way, you could see all the code changes.
1. Never change any dataset. If there is a version of data (of any type) that will be part of the input to any part of the process, whether original or intermediate, it should be explicitly versioned, e.g. xxxxdata.version3.run5. This includes the output from all test runs, since they will necessarily be input to (probably automated) comparison processes to pin down exactly what changes to the output were produced by changes to software or intermediate datasets.
######################################################
Perhaps you don’t understand how the system works.
1. We don’t control the inputs that we read. Without notice, NOAA and others can drop stations, add stations, and reformat the data.
2. There are multiple test runs made while algorithms are being refined or tested. There are test runs where code fails because of upstream data errors made by data providers.
2. When a new release of software is produced, it should not replace the older release, but should become the newest release/version in a sequence reaching back to the beginning of the process.
The code is in SVN along with the data. The nightly build is posted. If you want access, I can look into the request.
3. Test runs should create logs, as part of the versioned output, that explicitly identify the versions of input and output datasets from the run, as well as the date, time, and ideally any relevant comments.
Download the code and install it. The datasets are registered and recorded into SVN. Knock yourself out.
4. Unit test and associated code changes can be excluded from this process, but when a set of software reaches the level of system test, all the above protocols should be followed. This will facilitate identification of errors where an incorrect version of one module was advanced to acceptance testing (or whatever equivalent is used in the specific environment).
See my SVN comments
5. Versioning and logging should be fully automated; there should be no manual processes to be forgotten, creating an overlay situation in which old runs cannot be recreated because the original input has been lost.
Sorry, there are manual processes and checks which cannot be automated, especially in the cases where station latitudes and longitudes are misidentified.
6. For versioned datasets, redirect URLs can be created that always point to the latest version of a specific dataset, including code, for the use of people who only want to run with the latest data/code. When the specific structure or meaning of fields in the dataset changes, users of redirect URLs would have to make changes to keep up, but they would always have the option of changing their inputs to the specific version they were last using, to avoid confusion WRT changes.
Have a look at the data manager code and see if it does what you want here. If not, then go ahead and modify the system to do what you want.
7. Documentation regarding what each dataset represents, and how to use data and code, should also be versioned with older versions retained as an audit trail. Although I haven’t always done this, I’ve often found myself wishing I had, when I came back to an older version 6 months later and couldn’t remember exactly where I’d been in the process and what I’d been doing. For larger projects, where staffing changes would have to be accommodated, this would be even more valuable.
Yup, if I were selling a system that’s a good idea. But here we are basically building a tool for our own use. We post that tool AS IS.
If you want to take that tool and modify it, you are welcome to.
My experience is that such a system, while much more expensive to set up at the beginning, will quickly pay for itself in saved time during development and testing, not to mention improved software quality. Although there have probably been exceptions, my recollection is that whenever I perform a manual process “because it’s only going to happen once or twice”, I tend to get burned and end up wishing I’d set up an automated process from the start.
The enormous benefits in terms of ease of use by outside users/auditors would be a bonus.
While I’m not familiar with the current array of open-source products that address this type of issue, I strongly suspect there are products that can be configured to set up precisely this sort of environment.
AK, I agree wholeheartedly with this sort of approach. I think it’d be great, and I don’t get why BEST hasn’t done anything like it. That’s especially true since Steven Mosher has promoted this same idea. Given how hyped up BEST was, I’m at a loss as to why they didn’t do anything like this. I get not going all the way with it, but they haven’t even created the most basic of archives.
1. If you want the published results (e.g. the peer-reviewed science), then I’ve pointed you at that data.
2. The newest material is still in progress, with even more changes in progress. You can ignore it if you like; you can print it out and line your bird cage with it. But if you are interested in the things we are up to, you can download it. If you find bugs, as others have, then you can send a mail and say: “Steve, I found this.” As an example, it took about 3 or 4 months to sort out one issue with the gridded files. Those guys were easy to work with. They spotted an issue and sent documentation of it. Those folks documented what they did, I was able to duplicate their problems and eventually get it fixed.
In the past, all you’ve done is ask me about a bug in another guy’s R package, and then never followed up with me when you found out that I was right about the bug being his.
3. The path to turning all of this into turnkey reproducible science is not an overnight thing. However, if you want to donate time or raise money to get there, knock yourself out. I’ve donated a huge amount of time getting folks from a position where they don’t share data and code to one where they do. Taking it to the next level of polish and support will take people willing to work on the problem, like doing proper bug reports or taking the code base and adding the features they want to see.
jim2, to be fair, Steven Mosher has referred to an SVN, so apparently one exists. It’s just not mentioned on the BEST site, and users of the last code release aren’t told about it. Specifically, the readme for the code release says:
That refers to the use of SVN in the future tense. Similarly, the information for where the SVN is hosted is absent (the line is left blank).
I’ve waited for about eight months now to hear about an SVN for the project so I could use it. I’ve read everything on the BEST website, and I’ve gone through everything in the latest code release. As far as I’ve seen, this page is the first time the existence of an SVN has been brought up in public. I’m not sure what to make of that. And I’m really not sure what to make of the fact Mosher says I should be able to pull from the SVN when the code BEST has given out says otherwise.
jim2, I may have just resolved the SVN issue. I was confused by Mosher’s earlier comment which indicated I should be able to access the SVN:
Given this was directed at me, it was a clear indication I ought to be able to use the SVN. However, in his response to AK, Mosher said:
If one has to request access and be approved, clearly I should not “be able to pull from SVN” at the moment. I certainly can’t with what BEST has provided.
By the way, Mosher reminded me of an interesting point:
BEST has published a number of papers based on their data set. They did not specify versions of their data set in their papers. Given they don’t even post previous versions, one has no clear connection between their publications and the underpinnings of their work. In fact, their publications are tied to results no longer available on their website. That strikes me as odd.
“Apparently we’re supposed to just take him at his word. BEST doesn’t have to release its code because one member says the changes aren’t important.”
Err, no. When we actually publish science using this code then, as I’ve always said, we’d have to publish the code. As it stands these changes haven’t shown us anything new. If you like the old code, use it. When we get to a good point to update all the new code, we will update it. I’m guessing that as AGU comes around and we finish some of the publications, that will be a good time.
Brandon.
Rather than pretend that this is the first you heard about SVN, you might want to consider the following.
If you actually looked at the code you would see SVN information in the installer, including the password.
And over a year ago you were on this thread where I talked about it.
http://rankexploits.com/musings/2012/berkeley-earth-results/#comment-100652
So, the system is set up to allow folks to pull from SVN. It should work, but with the recent changes in the websites folks might have to ask for access.
To be clear, the system is set up to support it. It should work, but with some of the changing URLs, folks might have to ask for access.
Now understand that the whole purpose of providing SVN was so people could work without bugging others. You know, like reporting bugs in R.matlab to me when you know I’m not the maintainer. If you are going to use access to the SVN to pester scientists with your programming questions, then don’t be surprised if you get no answers.
“I’ve waited for about eight months now to hear about an SVN for the project so I could use it. I’ve read everything on the BEST website, and I’ve gone through everything in the latest code release. As far as I’ve seen, this page is the first time the existence of an SVN has been brought up in public.”
Brandon: “I’ve gone through everything in the latest code release”
WRONG.
The latest code:
“function temperatureInstaller()
% temperatureInstaller installs the data and program files used by the
% Temperature Study. User will be prompted for path and other information.
% If an SVN client is present, user will have the option to download code
% and data under SVN version control (allowing for easy updates).
global install_path psep
global BerkeleyEarth_username password
program_name = 'Berkeley Earth Surface Temperature';
default_path = 'Temperature Study';
upath = userpath;
svn_repository = 'http://berkeleyearth.lbl.gov/svn';”
“this page is the first time the existence of an SVN has been brought up in public.”
Wrong. Last August, on Lucia’s, you were there.
###########
So Brandon, you made two claims.
1. You had been through the code and found nothing about SVN.
Wrong. See the installer.
2. This was the first time it was mentioned.
Wrong. I mentioned it over a year ago, on a thread and site you visited.
Now, go ahead and install the code and see if it works for you.
I think that access might need to be granted, but maybe not. Not sure. But stop pretending you read all the code, and stop pretending this is the first you’ve heard of it.
IT’S IN THE CODE YOU DIDN’T READ. It’s been there for a while.
But you never read the installer.
http://berkeleyearth.lbl.gov/svn
Brandon
“It’s just not mentioned on the BEST site, and users of the last code release aren’t told about it. Specifically, the readme for the code release says:
During the next couple of months the Berkeley Earth group intends to move to a more user friendly distribution platform with online SVN, dedicated installer, and better examples and documentation.
The hope is that this future version will be more directly useful for other research programs.
That refers to the use of SVN in the future tense. Similarly, the information for where the SVN is hosted is absent (the line is left blank).”
1. You claim to have read the code.
2. You claim no mention of SVN in the code.
3. The note you quote above mentions a dedicated installer.
DID YOU LOOK FOR THE INSTALLER?
4. The installer code that you apparently didn’t read shows:
A) the SVN URL
B) the username “installer”
C) the password “temperature”
D) committed on Aug 17th, 2012
Apparently you didn’t read the code.
At the end of last July on Lucia’s I told you guys about switching our SVN for our public release.
So, if you had downloaded the source code in the past year and read it, you would have seen the access to SVN.
But you didn’t read the code, did you?
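If anyone actually wants to try it, this is roughly all that pulling from the SVN amounts to, scripted from MATLAB. This is a sketch, not the installer itself; it assumes an SVN command-line client on your path and that access has been granted.
% Sketch only: check out the repository using the open credentials from the installer.
repo = 'http://berkeleyearth.lbl.gov/svn';            % repository named in the installer
dest = fullfile(pwd, 'Temperature Study');            % default path the installer uses
cmd  = sprintf('svn checkout --username installer --password temperature "%s" "%s"', repo, dest);
status = system(cmd);
if status ~= 0
    warning('Checkout failed; access may still need to be granted.');
end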
So question for you AK.
Last July (2012), I mentioned, in a thread that Brandon was on, that we were switching our SVN for public release. I’ll check around, but I talked about this for a while.
In August 2012 we committed the installer to the code. That installer gives people access to the SVN: open password, open username.
Now it’s September of 2013, and Brandon claims
A) to have read all the code
B) that this is the first he has heard of SVN
But SVN access has been there
I think he didn’t read the code, or maybe he forgot.
In the grand scheme of things I’ve been trying to move people to a place where it’s a more common occurrence for scientists to give access to their SVN, hell, to even use SVN.
In light of the experience here, where you have a guy who can’t even read the code we posted or tell the truth about what is in there, do you think other people will say, “ya, open your SVN to people”?
Or will they say, “shit, those guys opened their SVN and wannabes consumed their bandwidth”?
Or this. Should I spend time being Brandon’s personal “grep” to help him find stuff in the code, or should I help tonyb with his CET work?
Who would you spend time with? A person who is focused on improving our understanding of CET and climate history, or somebody who is given the keys to the code and then drops them? Especially since we have been fighting to get other people to give access.
“…between their publications and the underpinnings of their work. In fact, their publications are tied to results no longer available on their website. That strikes me as odd.”
It’s in the spreadsheet I pointed you at a while ago.
In fact, in the redesign of the website I made sure to leave that there, especially for you.
@Steven Mosher…
I’m not sure opening your SVN for access over the Internet would be a good idea bandwidth-wise. And I’m not sure I’m recommending it. My list was of design principles I’d use if I were designing a regression test system starting from scratch. Using an existing system such as Subversion, I’d probably look for how well I could meet the actual needs of the project by configuring my version control system before I thought about modifying it. But for offering open access, I’d probably think about copying the relevant objects out of SVN to a Web Server where anybody could get to it. If it’s like most Apache products I’ve looked into, I suspect this could be done with a combination of a few (fairly) simple batch files and configuration settings.
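To make that concrete: since the project’s tools are MATLAB anyway, the same idea could even be scripted from MATLAB rather than batch files. A rough sketch (the web root is made up, and this isn’t anything BEST actually does); run on a schedule, it would keep a read-only copy in sync:
% Sketch only: refresh a read-only copy of the repository under the web root.
repo     = 'http://berkeleyearth.lbl.gov/svn';
web_root = '/var/www/html/best-code';                 % hypothetical directory served by Apache
status = system(sprintf('svn export --force "%s" "%s"', repo, web_root));
if status == 0
    fprintf('Read-only copy refreshed at %s\n', web_root);
end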
And my primary interest is how the actual project is organized, not the quality of the open access provided to wannabes.
I am interested in your comment about the original source data always changing: my first step in designing any project using such data would be to create a front-end capture step that copies it into a versioned input dataset, which my actual code runs against. The next step (my first code) would be a data edit and massage process that looks for bad data (stuff that will cause downstream programs to crash) and replaces it with appropriate default values. I strongly suspect your system has such a step, in one form or another.
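In code, the shape of that front end might look something like this. A sketch only; the file names, field names and sanity limits are all made up, not anything from BEST:
% Sketch only: capture the raw source into a versioned input set, then massage it.
function capture_and_massage(source_file)
    stamp   = datestr(now, 'yyyymmdd');                   % version the input by date
    capture = sprintf('input_%s_raw.csv', stamp);
    copyfile(source_file, capture);                       % 1) capture the source as-is
    T   = readtable(capture);                             % 2) edit/massage pass
    bad = T.tavg < -90 | T.tavg > 60;                     % readings that would break downstream code
    T.tavg(bad) = NaN;                                    % replace with a safe default
    writetable(T, sprintf('input_%s_clean.csv', stamp));
    fid = fopen('edit_log.txt', 'a');                     % permanent record of what was done
    fprintf(fid, '%s: %d bad values defaulted in %s\n', stamp, sum(bad), capture);
    fclose(fid);
end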
As for manual tweaks, many of those can be automated (preferably as part of that edit/massage process). For instance, suppose the latitude/longitude of a particular station has to be modified: there can be a step that says “if [the latitude/longitude] of a particular data line is (still) [x], change it to [y]”, along, perhaps, with other massaging. The step would write an edit report/log, which would become part of the permanent record. This way, if you need to come back to the run 6 months later, you can.
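One such tweak, with the “(still)” check and the log, might look like this (the station id and coordinates are invented for illustration, and the file name is the hypothetical one from the sketch above):
% Sketch only: apply a manual correction automatically, but only while it still applies.
T   = readtable('input_20130901_clean.csv');              % output of the massage step above
idx = strcmp(T.station_id, 'XYZ123');
if any(idx) && all(abs(T.lat(idx) - 41.20) < 0.001)       % only tweak if the bad value is *still* there
    T.lat(idx) = 41.12;                                   % corrected latitude (made up)
    writetable(T, 'input_20130901_clean.csv');
    fid = fopen('edit_log.txt', 'a');
    fprintf(fid, '%s: XYZ123 lat 41.20 -> 41.12\n', datestr(now));
    fclose(fid);
else
    fprintf('Tweak for XYZ123 no longer applies; time to ask why, and maybe retire it.\n');
end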
There are several benefits to such a process. Not only does it effectively eliminate the risk that the step will be forgotten or done wrong in any particular run, it also leaves a permanent record of it. Perhaps more importantly, the actual analysis around the “(still)” above can turn up all sorts of interesting questions and answers about why the data is being massaged, and under what conditions the massaging needs to stop or be modified.
This isn’t to say that some manual tweaks won’t remain. In smaller projects I’ve never completely eliminated manual processes, simply because at any specific time the effort doesn’t seem worth the benefit. I get burned fairly often, though.
I find it awkward to be basically called dishonest and a “wannabe” by a person who is currently supposed to be working on writing a paper with me. Call me crazy, but if you’re supposed to be collaborating with someone, I think you should refrain from publicly insulting them at the same time.
climatereason, that’s an interesting project. I didn’t see the data posted on the site though. Do you know if it’s available without buying the book/CD? I’m especially curious because the site says:
This is something that’s always struck me. The larger the departure from an average, the easier it is to observe. At the same time, if there is a local extreme, it will be reduced due to the smoothing introduced when combining temperature series. This is because most algorithms use a single correlation measurement between series. That misses the distinction between records that correlate well in the short term and ones that correlate well in the long term.
Oops. This landed in the wrong spot. It should be just above, in response to this comment.
By the way, I’m starting to think we should do a test run of the mediator idea. This page has seen a lengthy disagreement between me and Mosher, and it all stems from basic, factual issues that can be verified. It might be useful to see if we can resolve something that simple before trying to move onto something as large as writing a paper.
Brandon
It’s why I borrowed the book, as I couldn’t find the data online. It’s well worth a read, but highly detailed and heavy going. My interest in it was piqued by a comment I haven’t managed to find since, along the lines of a conversation between Phil Jones and one of the authors, when it was said that the models showed warm compared to the observational data, to which the answer was that they would go with the model results.
Trying to find climate extremes in Britain back to 1088 AD is one of the projects I am currently involved in. Trouble is, I have to keep laying off my team of researchers if the cheque from Big Oil fails to arrive.
In trying to work out a mean average prior to the CET instrumental record from 1659 (as I am currently doing), you have to specifically ignore the extremes, as a one-day event is not necessarily indicative of the season.
I had an email from a scientist at the Met Office on this subject last Friday (no names, no pack drill):
“I’m still cautious about this though (great variability around 1730), especially the further back you go, because the sources of data become more seasonal and likely to pick up on extremes rather than more moderate events. This may bias an annual average if not many of the more moderate days are included.”
tonyb
You should definitely be critical of extremes if the data isn’t reported consistently. One of the most common forms of bias is remembering/reporting remarkable things in greater proportion than they actually occur.
Still, I know a person who lives a couple hundred miles away. We get a lot of the same weather. Most weeks, I could tell you what his weather is like just by looking outside. But that’s only most weeks. While seasons change and heat waves come around at the same time, there are times he’ll get thunderstorms while it’s hot and dry where I am. Should the fact that my area correlates well with his in the long term mean that short-term fluctuations we don’t share should be suppressed? What happens if we have similar winters but very different summers? What effects does using a single correlation coefficient, possibly measured over thirty years, have on things like this?
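To make that question concrete, here is a toy calculation (synthetic series, not real stations): two sites that share a seasonal cycle and a slow trend but have independent month-to-month weather.
% Toy example only: one correlation number hides the timescale it comes from.
rng(1);
m      = (1:360)';                                    % 30 years of monthly values
season = 10*sin(2*pi*m/12);                           % shared seasonal cycle
trend  = 0.002*m;                                     % shared slow warming
siteA  = season + trend + 2*randn(360,1);             % independent "weather" at each site
siteB  = season + trend + 2*randn(360,1);
rM = corrcoef(siteA, siteB);                          % monthly correlation, dominated by the shared cycle
rA = corrcoef(mean(reshape(siteA,12,30))', mean(reshape(siteB,12,30))');   % correlation of annual means
fprintf('monthly r = %.2f, annual r = %.2f\n', rM(1,2), rA(1,2));
The single monthly number says the two sites track each other almost perfectly; the annual one says their year-to-year behaviour barely co-varies. An algorithm that keys off the first number alone will happily smooth away differences that only show up on the second timescale.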
I mostly bring this up because it’s tied to the UHI effect. UHI is not a constant influence. It affects temperatures more on some days than others. Some months it matters more than others. Some years it matters more than others. That raises questions for any algorithm that only looks at one type of correlation. Those algorithms will smear information, and there’s no simple way to know what effect that will have on the results.
By the way, this same issue crops up in paleoclimatology. The scales are just different. I find that interesting.
Brandon
There is a current belief that extremes are much worse than they used to be, probably because the older records aren’t known about or are discounted as ‘anecdotal.’ Having looked at tens of thousands of records, I’d say things were much more extreme (at times) prior to 1850 than after it.
Over the last couple of decades the IPCC and the Met Office have promoted the idea of constant ‘climate stability’ until man started affecting things around 1900. You may have seen my article on ‘noticeable climate change’ recently carried here, and I followed that up with a post at WUWT, from which this graph is taken. Climate stability my foot.
http://wattsupwiththat.files.wordpress.com/2013/08/clip_image010.jpg
(closed blue line at top indicates periods of glacier retreat, closed at bottom indicates advance.)
The point I wish to make is that a whole alternative climate narrative has grown up over the last few decades. IMPROVE is part of it. Constant amendments to other historic records are another. The promotion of modern extremes as exceptional, and of a constancy in Arctic ice until satellite records began, is yet another. (How on earth did the Vikings move around if the Arctic was as choked with ice then as Kinnard et al. claim?)
We know from Roman accounts that UHI affected the imperial city 2000 years ago, so how it has so little effect these days is something that needs clarification, which is why I hope your project with Mosh goes ahead. There are very many contradictions in the climate story and they need picking at one by one.
tonyb
Eh. I’m as suspicious of claims that the past was more variable as I am that it was less. I think it’d take a lot of work to show either case holds over any meaningful period.
Brandon said:
‘Eh. I’m as suspicious of claims that the past was more variable as I am that it was less. I think it’d take a lot of work to show either case holds over any meaningful period.’
I’m looking for funding here, so obviously I meant to say:
‘it appears very likely that extremes have become much GREATER since 1850 but further research is needed….’
BTW I agree with your 7.47. Misunderstandings happen.
tonyb
TonyB, are there any historical climate data from Asia, for instance? Surely the Western world is not the only one that kept temperature records?
MHastings
There is a wealth of information in China dating from the Qing dynasty, as the emperor in the early years insisted on documentation for everything, and that included a lot of weather-related accounts, partly because if there was harsh weather the regions might ask for assistance and proof was required. The dynasty was overthrown in 1911 and there has been a lot of confusion since, which is not conducive to record keeping.
India, especially under the British Raj, was well documented, as were other parts of the empire, and voyages by sea were especially well documented as regards weather. This latter is gradually being prised from the archives. We are still at fairly early days with regard to gaining information from the archives for countries such as India.
As I know from my research at the Met Office archives, there is still a great deal to be found out about Britain, let alone other countries.
Tonyb
Years ago I wondered if there were cricket chirping records in China.
=====================
Steven Mosher has an annoying habit of posting many comments in response to a single comment. This sometimes leads to there being many forks, making discussion cumbersome, if not impossible. As such, I’m going to gather a couple key points of discussion into one comment in a new fork.
1) I began this entire discussion by referring to a particular data set and the descriptions given of it. Specifically, I quoted the BEST website saying this data set was used in its analysis. The link I provided gave reference to .txt files for the data set. Mosher says:
This is baseless. As I pointed out in my response to the latter comment:
I had made the exact same point 12 hours prior to Mosher repeating his claim. Mosher didn’t respond there. And while he commented in the same fork as me half an hour after I pointed out he was making things up, he has still not responded to this issue.
To put it simply, I say (as does the BEST website) a set of data is the set used in BEST’s analysis. This data set comes in two formats, .mat and .txt. The .mat format is read by the BEST code. Because that data is identical to the data in the .txt version, (BEST and) I refer to the .txt version as the same data set as the .mat version. Mosher has somehow warped this into me claiming the .txt version is actually used by the code, thus:
2) Mosher has leveled implicit accusations of dishonesty against me that are baseless. He massively overstates a case against me. Specifically, I advanced the idea there was no way I could have accessed the SVN service for BEST. I supported this view by saying I had read all the BEST code and couldn’t find certain information needed to access it. This was wrong. I made a mistake. However, Mosher claims:
I claimed to have read all the code. Mosher says I’m only pretending to have. In reality, I read the BEST code across many sittings, and I did not have the same issues in mind each time I looked at it. I most likely read the file he refers to one day and simply didn’t remember it when thinking about SVN on another day. I made a mistake, but that does not mean I’m pretending to have done anything.
In a more absurd argument, Mosher misquotes me to say:
He leaves off the qualifier at the beginning of my sentence, “As far as I’ve seen.” That makes it seem my statement is far more absolute than it was. More importantly, he says he talked about the SVN service on a thread where I posted. The argument here is I must have seen what he wrote, therefore I’m only pretending to have not known about the SVN.
This has a number of problems. First, Mosher is wrong to say that I was there. The last time I posted in that thread was about half a day before the comment Mosher referred to. I never posted after. It’s perfectly possible I didn’t return, and thus never even saw his comment existed. Second, Mosher “talked about” SVN in only the most passing of ways:
He didn’t talk about it in any meaningful way. It was a single sentence made in passing, in response to a third party, obscured by Mosher’s horrible formatting for his quotations. All this in a comment I may have never seen. And that is the basis for Mosher claiming I’m pretending not to have known about the service.
I made a mistake about the SVN. Maybe I failed to take note of it in the code. Heck, maybe I even failed to read that code because I accidentally skipped a directory. Whatever. It doesn’t matter. There is still no basis for claiming I am pretending to have done anything. I’ve told the simple truth to the best of my ability, and had I known BEST’s SVN was available for public use, I’d have made use of it a long time ago.
I don’t appreciate effectively being called a liar, especially on such flimsy grounds.