by Judith Curry
The preparation of a new land surface temperature record was heralded last week in this news article entitled “Professor counters global warming myths with data.” The article states:
The Berkeley Earth Surface Temperature Study was conducted with the intention of becoming the new, irrefutable consensus, simply by providing the most complete set of historical and modern temperature data yet made publicly available, so deniers and exaggerators alike can see the numbers.
So what is this all about?
Context
The analysis of surface temperature used in the AR4 is the HadCRUT data set as described by Brohan et al. (2006).
Prior to the release of the CRU emails, I (along with pretty much everyone else in the climate research community) used the HadCRU and NASA GISS surface temperature data sets in a range of different applications, citing the published error bars. My colleague Peter Webster started questioning these analyses in the tropics during summer 2009, which motivated his request to Phil Jones for the data set. After the release of the CRU emails, many people started questioning the surface data sets. Several independent analyses of the land data found essentially the same global average variation as the HadCRU and GISS analyses.
However, the surface temperature data sets have continued to be scrutinized and questioned. Over land, issues include station quality and interpretation of the urban heat island effect. Many concerns have been raised about the ocean surface temperature data sets, including adjustments to the actual measurements themselves and also the infilling of data using EOFs based on 1960-1990 temperatures. The issue of how to analyze the data statistically also needs to be addressed. David Wojick describes his concerns about the surface temperature data sets as follows:
The so-called anomalies are first calculated for individual thermometers, most of questionable accuracy. The overall sample is a convenience sample, in no way statistically representative of the earth. Individual thermometer anomaly averages are then averaged for all the thermometers in individual grid cells, covering the earth. The statistical weight of a thermometer is inversely proportional to how many there are, another violation of statistical sampling theory. Many cells have no fixed thermometers so various kludges are used to fabricate grid cell averages. Then these averages of averages are averaged again to get the global average. The overall process is so statistically bizarre that no one knows how to carry the error bars of individual averages, or grid averages, or interpolations, etc., forward to even estimate the likely error.
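To make the chain Wojick describes concrete, here is a minimal sketch in Python (my own toy illustration, not the actual CRU or GISS code): station anomalies are averaged within grid cells, and the cell averages are then combined into a global mean.

```python
import numpy as np

def station_anomalies(temps, baseline_years=slice(0, 3)):
    """Anomalies for one station: subtract that station's own baseline mean."""
    return temps - np.nanmean(temps[baseline_years])

def grid_cell_mean(station_anoms):
    """Average all station anomaly series that fall inside one grid cell."""
    return np.nanmean(np.vstack(station_anoms), axis=0)

def global_mean(cell_means, cell_lats_deg):
    """Combine grid cells, weighting each by cos(latitude) as a proxy for area."""
    weights = np.cos(np.radians(cell_lats_deg))
    return np.average(np.vstack(cell_means), axis=0, weights=weights)

# toy example: three stations in two cells, five "years" of data
rng = np.random.default_rng(0)
s1, s2, s3 = (15 + rng.normal(0, 0.5, 5) for _ in range(3))
cell_a = grid_cell_mean([station_anomalies(s) for s in (s1, s2)])
cell_b = grid_cell_mean([station_anomalies(s3)])
print(global_mean([cell_a, cell_b], cell_lats_deg=[45.0, 60.0]))
```

Whether the error bars of such averages-of-averages can be propagated honestly is exactly the question Wojick raises; nothing in a sketch this simple settles it.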
Response by the climate community
From the Surface Temperatures web page:
To deliver climate services for the benefit of society we need to develop and deliver a suite of monitoring products from hourly to century timescales and from location specific to the global mean. As the study of climate science increases in its importance for decision and policy making – decisions that could have multi-billion dollar ramifications – the expectations and requirements of our data products will continue to grow. Society expects openness and transparency in the process and to have a greater understanding of the certainty regarding how climate has changed and how it will continue to change. Necessary steps to deliver on these requirements for observed land surface temperatures were discussed at a meeting held at the UK Met Office in September 2010 attended by climate scientists, measurement scientists, statisticians, economists and software / IT specialists. The meeting followed a submission to the WMO Commission for Climatology from the UK Met Office which was expanded upon in an invited opinion piece for Nature. Meeting discussions were based upon white papers solicited from authors with specialist knowledge in the relevant areas which were open for public comment for over a month. The meeting initiated an envisaged multi-year project which this website constitutes the focal point for. As work continues both this site and the accompanying moderated blog (used primarily to disseminate news items and solicit comments) will form the central focal point of the effort.
The envisaged process includes as its first necessary step the creation, for the first time, of a single comprehensive international databank of the actual surface meteorological observations taken globally at monthly, daily and sub-daily resolutions. This databank will be version controlled and seek to ascertain data provenance, preferably enabling researchers to drill down all the way to the original data record (see Figure). It will also have associated metadata – data describing the data – including images and changes in instrumentation and practices to the extent known. The databank effort will be run internationally and for the benefit of all. The effort required in creating and maintaining such a databank is substantial and the task is envisaged as open ended both because there is a wealth of data to recover and incorporate and because the databank will need to update in real-time. Novel approaches to data recovery such as crowd sourcing digitisation may be pursued. In the interests of getting subsequent parts of the work underway it is envisaged that a first version of the databank will be ready in 2011. This will definitively not mean that the databank issue is closed or resolved.
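As a rough illustration of what “provenance down to the original record” could look like in software terms, here is a minimal sketch (my own hypothetical schema, not the databank’s actual design): every value carries the chain of sources and transformations that produced it.

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceStep:
    source: str        # e.g. an archive name, a scanned logbook image, or a script
    description: str   # what was done at this step

@dataclass
class TemperatureRecord:
    station_id: str
    date: str                   # ISO date of the observation
    value_c: float              # observed temperature in degrees C
    provenance: list = field(default_factory=list)

    def add_step(self, source, description):
        self.provenance.append(ProvenanceStep(source, description))

# hypothetical example: a value traceable back to a digitized paper form
rec = TemperatureRecord("EXAMPLE0001", "1936-07-15", 23.4)
rec.add_step("scanned_logbook_p17.png", "manual keying by volunteer")
rec.add_step("qc_v0.1", "range check passed; no duplicate found")
print(rec)
```

Version control of the databank itself would sit on top of records like these, so that any published analysis can state exactly which snapshot it used.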
Presentations from the Exeter Workshop are here. Blog discussions on the workshop can be found at:
Steve Easterbrook states:
Now, it’s clear that any new temperature record needs to be entirely open and transparent, so that every piece of research based on it could (in principle) be traced all the way back to basic observational records, and to echo the way John Christy put it at the workshop – every step of the research now has to be available as admissible evidence that could stand up in a court of law, because that’s the kind of scrutiny we’re being subjected to. Of course, the problem is that not only isn’t science ready for this (no field of science is anywhere near that transparent), it’s also not currently feasible, given the huge array of data sources being drawn on, the complexities of ownership and access rights, the expectations that much of the data will have high commercial value.
Not feasible, eh?
The Berkeley Earth study
The website for the Berkeley study is here. The rationale for this project is stated as:
The most important indicator of global warming, by far, is the land and sea surface temperature record. This has been criticized in several ways, including the choice of stations and the methods for correcting systematic errors. The Berkeley Earth Surface Temperature study sets out to do a new analysis of the surface temperature record in a rigorous manner that addresses this criticism. We are using over 39,000 unique stations, which is more than five times the 7,280 stations found in the Global Historical Climatology Network Monthly data set (GHCN-M) that has served as the focus of many climate studies.
Our aim is to resolve current criticism of the former temperature analyses, and to prepare an open record that will allow rapid response to further criticism or suggestions. Our results will include not only our best estimate for the global temperature change, but estimates of the uncertainties in the record.
Their data set:
The Berkeley Earth Surface Temperature Study has created a preliminary merged data set by combining 1.6 billion temperature reports from 10 preexisting data archives (4 daily and 6 monthly). We hope to be able to make the data set publicly available on this site by the end of 2010.
Whenever possible, we have used raw data rather than previously homogenized or edited data. After eliminating duplicate records, the current archive contains 39,390 unique stations. This is more than five times the 7,280 stations found in the Global Historical Climatology Network Monthly data set (GHCN-M) that has served as the focus of many climate studies. The GHCN-M is limited by strict requirements for record length, completeness, and the need for nearly complete reference intervals used to define baselines. We believe it is possible to design new algorithms that can greatly reduce the need to impose all of these requirements (see section on “Our Proposed Algorithms” under “Methodology”), and as such we have intentionally created a more expansive data set.
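To make the “reference interval” point concrete: a conventional anomaly method needs each station to report through most of a fixed base period, or there is no baseline to subtract and the station is discarded. A minimal sketch of that rule (my own illustration of the general idea, not GHCN-M’s or Berkeley’s actual algorithm):

```python
import numpy as np

def anomaly_vs_fixed_baseline(temps, years, base_start=1961, base_end=1990,
                              min_coverage=0.9):
    """Classic approach: anomalies relative to a fixed 1961-1990 mean.

    Stations with too little data inside the base period are dropped, which is
    one reason conventional products use far fewer stations than actually exist.
    """
    in_base = (years >= base_start) & (years <= base_end)
    n_base_years = base_end - base_start + 1
    coverage = np.count_nonzero(np.isfinite(temps[in_base])) / n_base_years
    if coverage < min_coverage:
        return None  # station rejected: no usable baseline
    return temps - np.nanmean(temps[in_base])

# a short record starting in 1995 is unusable under this rule, even though it
# carries real information about recent change
years = np.arange(1995, 2011)
temps = 10 + 0.02 * (years - 1995) + np.random.default_rng(1).normal(0, 0.3, years.size)
print(anomaly_vs_fixed_baseline(temps, years))  # -> None
```

Relaxing this rule, presumably by aligning short or fragmentary records against neighbouring stations rather than against a fixed base period, is what lets a much larger station count be used.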
The project has been organized under the auspices of the nonprofit Novim group.
Novim’s mission is to provide clear scientific options to the most urgent problems facing society, to explore and explain the feasibility, probable costs and possible consequences of each course of action, and to distribute the results without advocacy or agenda both quickly and widely.
Novim builds on a set of scientific collaborative tools developed at the Kavli Institute for Theoretical Physics at UC Santa Barbara.
Novim’s motto is interesting food for thought:
The greatest challenge to any thinker is stating the problem in a way that will allow a solution.
The project is funded by:
- The Lee and Juliet Folger Fund
- Lawrence Berkeley National Laboratory, Laboratory Directed Research and Development (LDRD) Program
- William K. Bowes, Jr. Foundation
- Fund for Innovative Climate and Energy Research (created by Bill Gates)
- Charles G. Koch Charitable Foundation
- The Ann & Gordon Getty Foundation
- a number of private individuals
Project participants are:
- Robert Rohde, Physicist (Lead Scientist)
- Richard Muller, Professor of Physics (Chair)
- David Brillinger, Statistical Scientist
- Judith Curry, Climatologist
- Don Groom, Physicist
- Robert Jacobsen, Professor of Physics
- Elizabeth Muller, Project Manager
- Saul Perlmutter, Professor of Physics
- Arthur Rosenfeld, Professor of Physics, Former California Energy Commissioner
- Charlotte Wickham, Statistical Scientist
- Jonathan Wurtele, Professor of Physics
A view from a (sort of) insider
I was invited to be a member of this team last March. The impetus for inviting me to join the team was my interview in Discover Magazine, which brought me to their attention. My motivation for joining the group is that I thought the project was badly needed, and that it was especially important to have a group of scientists take a new, independent look at this and recreate the data set in a way that is transparent and unbiased. I was particularly impressed by the credentials of the team they put together, and their optimism for raising funds.
At the time I joined the project, they were researching sources of temperature data and looking through the literature to understand the data and past methodologies. By July, the data had been mostly collected, and they had identified their first source of funding. Efforts were made to communicate with personnel at NASA GISS and NOAA about the project. A representative of the group attended the Exeter meeting discussed above.
I’m not exactly sure what my originally intended role in this was, other than that they viewed me as a person who was concerned about uncertainties in the temperature data set, relatively unbiased, and making public statements about the need for transparency and openness in the data sets. I participated loosely in this project, mostly as a resource person calling their attention to any new papers or blog posts that I thought were relevant and as a sounding board for ideas. As they have begun analyzing the data, I have completely refrained from commenting on the process or preliminary results; I have only made suggestions regarding where they might publish their analyses, etc.
Apart from building a comprehensive and (soon to be) publicly available surface temperature data set, with both raw and analyzed data, the most significant aspect of this project is the attention given to eliminating bias. They are testing all of their algorithms and analysis methods on only 2% of the total data, so that no unintended biases sneak in with respect to what the final result looks like. I have brought up this issue a number of times, in terms of the possibility for bias in the GISS analysis (since the same group makes predictions of next year’s temperature anomaly and uses the data to evaluate their climate models), and also the CRU data set (e.g. the Jones-Wigley discussion of the 1940s temperature bump). Further, there are some serious heavy-hitter statisticians on the Berkeley team, and we can expect a more defensible statistical analysis of the data and its errors.
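For readers curious what the 2% device amounts to in practice, here is a minimal sketch (my own; the Berkeley team’s actual procedure may differ): methods are developed against a small random subset of station identifiers, and the rest of the network stays untouched until the methodology is frozen.

```python
import numpy as np

def development_split(station_ids, dev_fraction=0.02, seed=42):
    """Randomly set aside a small development sample; keep the rest blind."""
    rng = np.random.default_rng(seed)
    ids = np.array(station_ids)
    rng.shuffle(ids)
    n_dev = max(1, int(dev_fraction * ids.size))
    return ids[:n_dev], ids[n_dev:]          # (development set, held-back set)

dev, blind = development_split([f"ST{i:05d}" for i in range(39390)])
print(len(dev), "stations for method development;", len(blind), "held back")
```

The point of the split is not statistical power; it is that nobody can, even unconsciously, tune the algorithms toward a preferred global answer they have already seen.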
I have no idea whether any of their analysis methods will turn out to be “definitive;” the important thing is that there will be some new analysis methods on the table, and open data and open methods will enable all to assess the methods and contribute to the further development of improved methods.
I am holding my breath to see what the final results turn out to be. (FYI: I have not received any funding for my minor participation in this project).
Well Berkeley is where Unix was developed. So it must be good, right?
Not strictly correct. Unix came from Dennis Ritchie and Ken Thompson at Bell Labs. Berkeley became associated with a much-used open source variant, BSD, because of Thompson’s sabbatical there in 1976-77. BSD was a key moment in the development of open source licenses though, which is pretty relevant to a new way for SST.
Sorry for the TLA inexactitude at the end there: I meant of course ST, not just the ocean contributions (though I very much support tallbloke’s call for the same deal for ARGO below).
Richard, thanks for the clarification. In my elliptical way your point about open source was what I was getting at. I think of Jim Hansen and Gavin Schmidt’s GISS product as the Microsoft Windows of the surface datasets. Very old code, endlessly patched, with a jazzy user interface to paper over the cracks.
Let’s hope Judith’s involvement can lead to an expansion of the Berkeley product to include ocean data too, so it can provide global coverage.
If an ocean gets 1 degree warmer and a nearby continent gets 1 degree cooler then what happens to the average and does this make sense?
While you’re about it, can we have an open and transparent set of ARGO data too, please?
I first learned about this project at WUWT here http://wattsupwiththat.com/2011/02/11/new-independent-surface-temperature-record-in-the-works/. It’s already generated a lot of discussion. It’s badly needed and I’m looking forward to the results.
I read through the study methodology and my only concern currently is the treatment of UHIE. I wasn’t able to find any discussion on that topic.
Would you be able to comment on that at this time?
GregP: I agree that UHI is an important topic, and would look forward to a discussion on the issue. What I like about the Berkeley project is that they are ignoring that for now. UHI adjustments should be considered, but also independent, I think, of a clear record of raw measurements. After the Berkeley folks are done with this project, they can address UHI … and so can anyone else, if the actual records are open and transparent. All in all, I think this project will be extremely valuable.
I hope it can live up to the goals stated by its developers. I wish them luck.
Berkeley Earth Surface Temperatures site is at http://www.berkeleyearth.org/
This is an image of their station count
http://www.berkeleyearth.org/images/Figure1.jpg
The little dip takes place in 1972 and is an artifact of the ISH/ISD/GSOD data set.
http://rhinohide.wordpress.com/2010/05/23/gsod-global-surface-summary-of-the-day/
http://rhinohide.wordpress.com/2010/06/29/gistemp-with-gsod-round-2/
If we were restricted to only two metrics for global temperature change, I believe the best choices would be sea surface temperature and ocean heat content (including the deep oceans). Currently, both datasets are imperfect, with OHC even more problematic than SST. Land data will be interesting for understanding climate dynamics and regional variation, but much less important for ascertaining global trends.
Fred and others, this was discussed in another thread on OHC. Dr. Willis has emailed Dr. Pielke a preliminary, unpublished ARGO analysis of OHC into 2010.
Dr. Curry. I am honored by your quoting me but in order to get that honor I must mention my name is actually spelled Wojick. Your rendition — Wojcik — was the original Polish version, which my father abandoned. I too look forward to the results of this project, especially how the error aggregation problem is handled.
Are UAH and RSS about to be shot out of the sky?
No, but if the guys from Berkeley know what they’re doing HadSST2 and HadCRUT3 are.
“We believe it is possible to design new algorithms that can greatly reduce the need to impose all of these requirements”
More adjustments?
Well, no doubt people will first want to see what answer gets produced before they decide whether this is a good thing or not, much like the shift that occurred from UAH as the best record to Hadley CRU (at 95% significance only; no warming for 15 years, don’t you know).
Looking forward to learning from another SST dataset.
“… every step of the research now has to be available as admissible evidence that could stand up in a court of law, because that’s the kind of scrutiny we’re being subjected to. Of course, the problem is that not only isn’t science ready for this (no field of science is anywhere near that transparent), …”
Absolute nonsense!
Speaking as someone who routinely performs environmental monitoring, the results of which are expected to be of a quality and provenance admissible as evidence in a court of law (because that is precisely where those data are often expected to end up), I can say that not only is that level of transparency possible in the field of environmental science, it is commonplace.
But not in ‘climate science’, evidently.
Opposition oversight, chain of custody, verification replicates, public data archiving … climate science needs to learn these concepts.
And more. This new surface temp dataset is interesting. It attempts transparency. It at least speaks to bias. But structurally, how much different is it from the previous efforts?
And Tallbloke’s question about the ARGO data is crucial. Now that the surface temp record is no longer conforming to the alarmist ‘just so’ story about ‘global warming’, people are casting about to find the ‘missing heat’. They are looking to the ocean to find it, and 100% of our reliable ocean heat data is coming thru one man. And that man has already reversed a prior estimate of ocean cooling into one of ocean warming, by revising the past cooler and the present warmer. Is this legitimate? Maybe. Who knows? It is a black box to anyone outside of those few at the center of that one project.
The status of this whole field is one of ‘thousands of climate scientists agree’ based on data passing thru the hands of a very small number of gatekeepers, with key players known to be highly politicized.
Data collection needs to be completely transparent, and occur under recognition of the semi-adversarial nature of the end use. Raw data (i.e. instrument data) needs to be published in near real time, with full provenance.
Data collection needs to be structurally separate from primary data analyses (i.e. preparation of gridded temp datasets), and that needs to be separate from dependent data analyses (such as modeling). Multiple entities should be replicating these analyses, and those entities should not only be ‘independent’ but competitive, and/or operating under some incentive structure that aims for truth.
The identified participants on the new surface temp project are all academics. Academics are well and good and their skillsets are certainly necessary to a project like this. But if the goal is to produce data that are supposed to be robust for use in an adversarial environment, where are the scientists that have experience doing just that? (Note: Actual scientists, not ‘extended peer groupies’ or ‘extended fact providers’ or other PNS nonsense).
JJ – one of the obstacles encountered by anyone calling for transparency is the supposedly commercially privileged nature of much of the data. Your response?
Tom,
In any situation where the results are to be used to support an argument, ‘secret data’ is a contradiction in terms. If it is secret, then you don’t get to use it. If you use it, then you don’t get to keep it secret.
This affects very little of the data that people would like to use for ‘global warming’. At any rate, you don’t use it for analyses that inform policy. Figure out a way to get the owner to authorize public disclosure, or toss it.
“Figure out a way to get the owner to authorize public disclosure, or toss it.” Precisely – and thank you.
ARGO data are non-commercial, so no problem there. Most of the weather station data is freely available on line or in the literature, and so is essentially all the SST data, so no problem there either.
I think that “commercial privilege” is largely evasion.
‘I think that “commercial privilege” is largely evasion’
It was shown to be in the case of CRU. In one case (Sweden) there was indeed a clause that prevented the CRU’s data for Sweden being passed on.
But not, as was claimed, for reasons of commercial confidentiality; rather, because the Swedish authorities did not want ‘CRU-adjusted’ data being passed off as genuine Swedish data. I guess they didn’t want to have their reputation as good scientists tarnished by allowing ‘dodgy data’ to be thought of as the real McCoy.
The CRU team tried to spin this as being out of their hands due to confidentiality. But an FoI revealed the truth when the original agreement was located (down the back of an old discarded filing cabinet??). And it was CRU b…gg..ing around with the data that the Swedes were frightened of.
Of the 200 odd agreements CRU claimed to have that cited ‘commercial confidentiality’, they could actually produce only 4 when challenged. And the Swedish one was one of the four.
Draw your own conclusions about the true prevalence of commercial considerations in these matters. And about the veracity of statements by CRU.
“Draw your own conclusions ”
Or rather “confirm your own biases”
I laid out the evidence of the situation as documented and provable.
Which of your many biases would you like to confirm from them?
I agree. There are lots of branches of science and engineering where the quality of the product can stand up in a court of law. Climate science should be the same, not built on hidden data and hidden methods that have not been replicated or verified.
It’s pretty easy to borrow the techniques already developed in areas like environmental science. The biggest problem is the drudgery of execution and maintaining detail discipline.
Well, given that ‘climate science’ is an environmental science, you really wouldn’t be ‘borrowing’ techniques. :)
Agree that it adds a level of data management effort, especially over that of say, Phil Jones’ SOPs.
JJ
Josh Willis, thru’ his interactions with Pielke Snr, has shown that he can take an objective view on the matter. At least he’s satisfied me on that issue. He appears to be a man led by the data rather than the theory. I would suggest your objection to his corrections may only highlight that it is the other way round with you. No doubt your complaint about his correction stems from the fact that the data is now less in line with your own particular viewpoint.
If you can find a valid scientific reason why Willis’ corrections should not have been performed then I’ll apologise for my comment about you. But at the moment your criticism of the corrections just looks like so much arm waving.
“I would suggest your objection to his corrections may only highlight that it is the other way round with you.”
I would suggest that your misinterpretation of my post highlights something very similar about yourself.
Please reread my post, and correctly identify what my objection wrt ARGO actually is. You may then consider apology. You should at least consider the relevance of that objection in the light of your own inability to avoid bias.
We all have that. None are perfect, even those that try to be – and some do not. The question is, how do we handle that reality?
While you are pondering that, you may wish to think a bit about the task that you are inclined to lay at the feet of those who actually do object to Willis’ recent correction of the ARGO data … exactly how do you propose that a person other than Willis would raise a ‘valid scientific objection’ to anything that he does? For that matter, how does one have ‘valid scientific confidence’ in what he does? “Pielke says so” is not scientific…
Perhaps you see what I actually was talking about now?
Separation of duties is a key component to ensure data quality. The principles are much the same as used in double blind studies, to prevent unintended contamination due to sub-conscious knowledge.
If Josh Willis is responsible for collecting the data, and making the adjustments, then that is a problem. The collection should be completely independent from the adjustment, with no possibility of cross contamination.
This implies that the deniers dispute the data. There are many educated engineers and scientists out here who do not dispute the data. We strongly dispute the cause and effect that produced the data. Albedo is very powerful and it is not even mentioned in your discussions. You talk about global warming causing ice melting, all over the world. You never mention that when ice melts, albedo drops and equilibrium temperature rises. That is the major flaw in your theory and models.
NASA GISS Model E takes such albedo changes into account. Where are you getting your information?
Question is, how well does it (the GISS Model E) do that? Take for example a run (or several, if you like) initialized to 2000, and compare the produced cloud cover estimates and snow extents (and whatever else affects the albedo) to what actually happened, of which there ought to be reasonably good estimates at least for the snow and less so for the clouds.
Of course, if such comparisons (i.e. models vs. reality) exist, I will stand corrected and read the paper(s) with great interest. Of course it might be that I have managed to miss these key studies that demonstrate the predictive abilities. Which is a shame, as they are the kind of studies people like me (i.e. suspicious about the doomsday stuff) would like to read. As we have been told – also here – that climate is not weather (I remember the limit being 15 years, although somewhere in the scared-wing blogs they recently proposed to raise the limit to 18 years, wonder why), it is not an initial value problem, not chaotic, all the modelling errors average out, etc, so surely we have results that prove these things somewhere — don’t we?
So any good references for this?
Toxic suspicion hangs over selections and corrections of existing analyses; therefore transparency and comprehensiveness are paramount. You have earned spurs as an outspoken, even controversial, champion of scientific integrity, so your participation is undoubtedly pivotal to the greatest hurdle of all: the problem of restoring universal trust. Good luck.
Hi Judy – Thank you for your post on this. I have just two comments here. First, as some of your readers may not know, we have a paper in the second stage of review on the effect of siting quality in the United States Historical Climatology Network [USHCN], based on the seminal work of Anthony Watts, Evan Jones and their numerous volunteers. Anthony, Evan, and several other well-known climate scientists are co-authors. As soon as this paper completes the final review process (it has been with the Editor for one month so far), we will be communicating our results.
Secondly, I also look forward to the analysis of this independent assessment. I do disagree, however, with the statement that
“The most important indicator of global warming, by far, is the land and sea surface temperature record.”
As I have discussed on my weblog and in papers, e.g. see
Pielke Sr., R.A., 2003: Heat storage within the Earth system. Bull. Amer. Meteor. Soc., 84, 331-335. http://pielkeclimatesci.wordpress.com/files/2009/10/r-247.pdf
Pielke Sr., R.A., 2008: A broader view of the role of humans in the climate system. Physics Today, 61, Vol. 11, 54-55.
http://pielkeclimatesci.wordpress.com/files/2009/10/r-334.pdf
it is the annual global ocean heat content changes that are, by far, the most robust metric to monitor global warming and cooling.
RP, Sr.- You have often commented on the importance of ocean heat content changes. A commenter above suggests more openness in the Argo data communication – could you comment on that?
Don B: If memory serves me well, until the summer of last year (I believe that’s the timing), ARGO data was available to the public from its website, as was a program that allowed users to process it. The ARGO data may still be available; but I don’t think they’re keeping it up to date.
Roger Pielke:
From everything I know (which isn’t much) I agree with you.
However, because I’d like to give the benefit of the doubt to this fascinating initiative (BSD open source meets real physicists and statisticians meets Berkeley leftiness meets money from the hated libertarian Charles Koch) … perhaps the solution would be a simple word change:
“The most important indicator of global warming, by far, has been taken to be the land and sea surface temperature record.”
That surely is true. What does everyone else think? Would Judy feel this is worth mentioning to the guys doing the website? Openness is meant to be a two-way street, you know.
“based on the seminal work of Anthony Watts” – what work is that? Post a link to the published work.
apparently the paper is in press. given the potential for controversy, it is probably a wise idea to wait until the paper is published before publicly providing a copy.
It is a reference to this:
http://www.surfacestations.org/
The project is an inventory of USHCN stations according to the siting criteria of NOAA as published here:
http://www1.ncdc.noaa.gov/pub/data/uscrn/documentation/program/X030FullDocumentD0.pdf.
The researchers and volunteers visited sites, photographed them and checked other information about the site.
given the huge array of data sources being drawn on, the complexities of ownership and access rights, the expectations that much of the data will have high commercial value.
All data obtained with taxpayers’ money belongs to us; it has no commercial value. It’s public domain.
Federally funded research data is not public domain, not in the USA under present law, and this is a common misconception. Normally, but depending on the contract, the researcher owns the data as intellectual property (to be mined for future papers, for example). The government has a right to use the data but not to disclose it. The exception, and it is rather recent, is data specifically developed for regulatory purposes, which must be disclosed. Other countries may differ.
Gotta applaud the brave project leaders. They have some massive mountains to climb, like UHI and extensive areas with no stations. I wish these adventurers a lot of luck!
While this is all very tangential, UNIX was developed at Bell Labs (yes, in NJ, which is perhaps the point dhogaza intended). It was developed for incredibly small machines by today’s standards — with two banks of 64 KB memory, one for data and one for instructions. AT&T (the parent co. of Bell Labs) was for several subsequent years unable to sell software due to being a highly regulated monopoly. As a result, UNIX was for these dozen years or thereabouts only available throughout AT&T, and for government and educational systems, which got it for free, along with full source code, which allowed Bill Joy at UC Berkeley to make some very significant improvements, in particular making it a more suitable operating system for computers with around 1 MB of memory or more. These were in time either adapted or recreated back at Bell Labs. Eventually, with deregulation, AT&T was able to sell UNIX and it gained some popularity in the commercial world.
All of this has very little relevance except as a small bit of evidence to support the assertion that Tallbloke is prone to speak of things he knows not whereof. It doesn’t prove the assertion and there might be much evidence to the contrary (i.e. the assertion that Tallbloke is in general very careful to speak within the areas of his expertise).
Berkeley eventually reimplemented the parts of UNIX that weren’t its original creation, and so became a major source of a free UNIX distribution called Berkeley or UCB Unix. This continues to challenge to some extent the other reimplementation of UNIX, Linux.
That was my first point, yes.
Actually, the *very first* version was written in assembly for the PDP-7. But you’re right, what we would really consider the first Unix – written in C – was written for the PDP-11/45 or 11/70 which you’re describing (smaller versions didn’t even have separate I/D spaces). C constructs like “i++” largely owe their existence to the fact that addressing modes on the PDP-11 included auto-increment and auto-decrement …
I still have source for the first version of PDP-11 Unix that was released, complete with my handwritten changes to get rid of the Multics-style line editing in favor of something more DEC-like that I implemented just after we got it (as a high school kid at a science museum with a PDP-11/45, we qualified as an educational system).
And this was my second …
Is anyone besides me gobsmacked at GISTemp showing that the January global temperature anomaly rose to 0.46C from 0.4C in December? RSS dropped from 0.251C in December to 0.083C, and UAH from 0.18C to -0.009C.
Congress can’t get Hansen and NASA out of the global temperature business fast enough.
Your wish is my command: poof, no Gistemp!
And why are there so many blogs and papers making their most important comparisons exclusively to GISS? It is because it shows less multidecadal dynamics and no recent flattening in the trend.
So the world without gistemp would be quite different from what you are proposing with your spaghetti.
Juakola
Juakola, yes, here is what you are saying.
http://bit.ly/emAwAu
gistemp data adjustments
a) Reduced the 1880s local maximum
b) Increased the 1910s local minimum
c) Reduced the 1940s maximum
d) Did not show the shift from warming to cooling in the 2000s
No, it would look just like the 2nd graph, from which GisTEMP was removed.
Here is GisTEMP versus UAH.
Do you expect the Berkeley series to be wildly different than the satellite series?
The biggest differences (to the extent that there are any) would likely be in the pre-satellite era, IMO
Well, I would much rather see the money spent on improving current coverage of the Earth’s surface and oceans.
Well it is different. GISS shows no flatlining at all, where all of the other datasets do. Try looking at the trendlines from El Nino to El Nino.
Last 31 years GISS vs HadCRUT
http://www.woodfortrees.org/plot/gistemp/from:1979/offset:-0.24/mean:12/plot/hadcrut3vgl/from:1979/offset:-0.15/mean:12
It looks like 1998 is cooler, and 2005 and subsequent years are warmer, in gistemp.
I don’t expect the Berkeley series to be wildly different from the satellite series. But if you think similar trends between the surface and the troposphere equal good agreement, you are wrong:
http://hurricane.atmos.colostate.edu/Includes/Documents/Publications/klotzbachetal2009.pdf
Chuck – The GISTemp and RSS/UAH figures could all be accurate even if each recorded exactly the same temperature. The reason is that they are comparing their figures to different baselines. In other words, they compared the current December and January with a different set of past Decembers and Januarys.
Notice that although this results in different anomalies, it does not significantly affect the magnitude of long term trends, which depend on how the anomalies change over time, particularly when averaged over the course of a year rather than based on month to month comparisons.
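Fred’s point is easy to check numerically: switching base periods adds a constant to every anomaly, which moves individual monthly values but cancels out of any fitted trend. A minimal sketch with synthetic data (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(360)                            # 30 years of monthly values
temps = 14.0 + 0.0015 * months + rng.normal(0, 0.2, months.size)

anom_a = temps - temps[:240].mean()    # anomalies vs an early 20-year baseline
anom_b = temps - temps[120:].mean()    # anomalies vs a later 20-year baseline

trend_a = np.polyfit(months, anom_a, 1)[0]
trend_b = np.polyfit(months, anom_b, 1)[0]
print(np.isclose(trend_a, trend_b))     # True: the offset cancels in the slope
print(round(anom_a[0] - anom_b[0], 3))  # but individual anomalies differ by a constant
```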
Yeah, these people do have a basic problem with middle school algebra …
Fred, your criticism is misplaced. Chuck was not comparing magnitudes between the three temp measures. He was referring to the fact that the GISS anomaly rose over the same period that two other fell.
As you note, differing baselines do not affect trend. It was trend (specifically the sign thereof) that Chuck was talking about. The two satellite datasets dropped to their baselines, while GISS climbed away from its baseline by an additional 15%. Chuck’s point stands.
That they don’t measure the same temperature is more of a factor, on a short term basis the satellite measurements in the troposphere frequently have a lag wrt the surface, particularly during a La Niña/El Niño event.
JJ – The point I made is valid for month to month changes, just as it is for individual months. If you think about it, you’ll understand why.
Fred,
Thinking about it just a little bit more brings the understanding of why attributing a 0.26C Dec-Jan trend difference between the two datasets to baseline difference is to rely on the highly improbable. It would require a 0.26C difference between the Dec-Jan actual value differentials of the two base periods. I would think that would be at least as gobsmacking as seeing it in an annual anomaly.
Further, refraining from winging it and looking at the data, we see that the GISS to UAH Dec-Jan trend difference is not constant (as it would be if due to a difference in base period). To the contrary, it varies from about -0.6C to 0.4C. Chuck has noticed something, and it is not attributable to base period difference.
I haven’t looked at the data myself, but unless I misunderstand your point about which data you refer to, I don’t see why the GISS to UAH Dec-Jan trend difference need be constant for different base periods. In any case, my original point is simply that a GISS/UAH disparity in the direction of change does not necessarily signify an error in either. It may represent a gross error, or it may represent a difference in altitude, or it may represent minor errors for both datasets. I was making a point about the logic, not the magnitude of the difference. I actually doubt that any of these values are error-free.
Which is probably the reason GISS uses another baseline than everybody else (1951-80 instead of the WMO-mandated 1961-90).
You’re probably unaware that in Greenland it was raining, and in the Arctic, the winter freeze has been very, very slow.
And where I live in the PNW, it’s been a warmer than usual winter.
I suspect you may be gobsmacked because you live somewhere where it’s been cold, not warm, while not understanding that huge areas of the planet have also been warmer than normal????
“Huge areas?” According to satellite observations, in January the only positive temperature anomalies were found in far northern Canada, NE Canada, the western half of Greenland, far northern Asia, the southern half of Australia, and eastern Antarctica. Anomalies were negative in most of North America, western Europe, central Asia, most of Africa, most of South America, the central and south Pacific, and the north Atlantic. HadCRUT January 2011 global sea surface temperatures dropped from an anomaly of 0.437C in December to 0.226C. So it is likely that the HadCRUT global temperature anomaly in January will also decrease, leaving GISTemp as the only temperature record showing an increase.
Chuck L wrote (February 12, 2011 at 10:00 pm):
“Congress can’t get Hansen and NASA out of the global temperature business fast enough.”
Translation: “make science I can’t argue against disappear by fiat of Congress.”
It looks like you missed the actual arguments above about the contrasting trends and the different baselines used to obtain such contrasting trends in GISTEMP. It would be good if you actually read all the arguments before you make the claim that the GISTEMP trend is something no one could argue against!!
Simply because for the task they are doing NONE are required, though they do have one in an advisory role. If you think more are needed, and you are one, why don’t you contact them and volunteer?
If the reconstruction doesn’t respect the preconceived belief about AGW, the simple fact that the evil Koch has funded part of it will be used to denigrate the result.
If so, climate science is really in a sad state.
It will be interesting to have an unadjusted, all in, temperature set. Thank you Dr. Curry for participating in its creation.
Getting the raw data will let smart people run their own “corrections” and adjustments. With the right metadata we should be able to learn what effect station moves have had. And, because the dataset seeks to be inclusive, we can look at things like the great thermometer die-off objectively. (With luck we might also be able to pinpoint the locations where thermometers should be sited going forward.)
It would be nice to think that this would already have been happening with the existing datasets but, clearly, it has not. Unless and until we have clean, unbiased, temp data, the rest of the climate science is really just blowing smoke.
Yeah, it’s great, assuming that temps taken at 3 PM are equivalent to those taken at 10 AM when station protocols are changed.
I mean, we always expect morning and afternoon temps to be the same, therefore, any effort to compensate for time-of-day observations is a commie plot to lower the trend.
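The time-of-observation point being made here (sarcastically) can be seen in a toy simulation (my own sketch with an idealized diurnal cycle, not any dataset’s actual adjustment code): resetting a max/min thermometer in the warm afternoon lets one hot afternoon set the maximum for two consecutive observation days, so the recorded means run warm relative to a morning reset.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days = 3650
hours = np.arange(n_days * 24)
# idealized hourly temperatures: a diurnal cycle peaking at 15:00 plus
# day-to-day "weather" that is constant within each day
diurnal = 5 * np.cos(2 * np.pi * (hours % 24 - 15) / 24)
weather = np.repeat(rng.normal(0, 3, n_days), 24)
temps = 15 + diurnal + weather

def mean_of_maxmin(temps, reset_hour):
    """Average of (Tmax + Tmin)/2 over 24 h windows bounded by reset_hour."""
    windows = temps[reset_hour:reset_hour + (n_days - 1) * 24].reshape(-1, 24)
    return ((windows.max(axis=1) + windows.min(axis=1)) / 2).mean()

print("afternoon reset (15:00):", round(mean_of_maxmin(temps, 15), 2))
print("morning reset   (07:00):", round(mean_of_maxmin(temps, 7), 2))
# the afternoon-reset record comes out warmer even though the underlying
# temperatures are identical, so a change of observation time needs an adjustment
```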
??
It was declared earlier that the dataset(s) would contain the metadata describing the observations. Even I, thirty years out of academe but pretty experienced in IT, would manage to think of putting a timestamp in there somehow. Y’know, with a little clock thingie…maybe even put that in the measuring instrument?? /sarc off
Or did I miss your point about the whole thing being a commie conspiracy? It seems an eminently sensible and much needed initiative to me. Something like it should have been done 25 years ago.
But I guess if you want to find Reds Under Your Bed, you always will. Keep on looking! Bring me back one if you find it in this initiative.
Latimer:
I think with dhogaza it is the link with the libertarian Kochs that has him seriously worried.
It is mind-boggling how anyone can be opposed to this effort to prepare a new, more comprehensive, unfiltered, metadata-rich dataset – especially where the taxpayer is not yet being asked to foot the bill – unless of course they realize the likely direction and nature of the distortions in the current data set.
So can anyone please tell me what happened to “the science is settled”, “there is no longer any reason for debate”, “the proof is incontrovertible”? Surely such statements, which have been and continue to be uttered by the IPCC spin doctors, are proof positive that the IPCC have contravened the very basic principles of science and therefore have no credibility.
Steve Easterbrook says:
‘every step of the research now has to be available as admissible evidence that could stand up in a court of law, because that’s the kind of scrutiny we’re being subjected to. Of course, the problem is that not only isn’t science ready for this, (no field of science is anywhere near that transparent), it’s also not currently feasible’
Well, if that’s the case – that the science couldn’t stand up to a court-of-law examination – I don’t think we need to take much notice of it until it can.
Plenty of other fields of human endeavour have worked out ways to verify their work in such a way that it can stand up in court. I see no reason why climatology and climatologists should be any different.
I seem to remember that Steve Easterbrook was also very resistant to any independent scrutiny or tests of climate models. The list of excuses for not bringing climatology into the light of day is growing thinner and less credible by the day.
Wise decision making is based on the best available knowledge, not only knowledge that is ready for court. Excluding the existing scientific knowledge is not a way of using the best available knowledge. Deciding what is the best available knowledge is the responsibility of the decision makers, using whatever help they judge necessary.
Fair comment, but blithely ignoring the fact that the current state of ‘the knowledge’ is based on a very shallow foundation (if any) is no good reason not to attempt to build those foundations a lot deeper.
And ‘court ready’ evidence is good enough for lots of other endeavours. From the trivial to the important – like parking tickets and death penalties.
If climatology is to have the influence it claims and wants to have, it must have a solid and reliable basis of real observations to work with. Until then, by all means use the data that exist – but only when the many limitations of the data are clearly laid out and understood…not swept beneath the carpet or ‘adjusted’ in arcane and undocumented ways (or lost in an office move :-( ) as they have been in the past. And absolutely no more ‘The Science is Settled’ type proclamations.
Latimer,
Correct. If the data are limited, the conclusions that are possible are limited, and they need to be described that way.
Regards
Günter
I don’t think “best available knowledge” means it is enough to make a decision. It may very well be an immature knowledge. Not all “knowledges” are equally trust-able.
Even in the same science you have areas where you feel more comfortable, and cutting-edge investigation where you are more insecure. So “best available knowledge” does not have the hard meaning you imply.
If it’s not ready for the court (use this as a metaphor), you may think it is immature.
Poor data makes for poor decisions, especially if you don’t know the data is bad. In that case it is much better to have no data than bad data, because you at least know where you stand.
For example, you have a fork in the road. The right fork leads to prosperity, the left to certain disaster. If the best information available says take the left fork, you would be much better off with no information.
At least with no information you would have a 50/50 chance of success. If you relied on the best information and it was wrong, there would be no chance of success.
IMO Steve Easterbrook is one of the reasons this project is necessary. Here is his deeply spun description of Climategate and other faux pas:
The scandals were real. Scientists did lose data, threaten to destroy data, and present data in a misleading way at the very least. Plus there were other real abuses.
Easterbrook’s summary is partisan to the point of dishonesty. I realize the D word is wrong by the Lisbon standards of reconciliation, but it’s hard for me to understand what he has written otherwise.
Latimer,
Steve Easterbrook conducted an “independent scrutiny of climate models” and brought practices to “the light of day”. But because you don’t like his answer you are rejecting his audit.
And having done so himself – effectively reviewing the professional practices of his own specialty – he gave them a relatively clean bill of health and declared that no further external scrutiny was needed…and that it would waste his valuable time if anybody were so lacking in deference or politeness as to insist on it.
He, like you, is still failing to grasp the essential point from the outside world. Self-certification is not an acceptable answer for important stuff.
For important stuff, the scrutiny has to be external and independent.
I’m sure that you would not be comfortable if, say, the operators of Three Mile Island self-certified that they had learnt their lessons and all would now be well, so please let them have another go with another reactor.
Nor if the chairman of Lehman Brothers declared that he had now read some accountancy books, converted to Jesus, and asked whether anybody would mind if he took over Bank of America with his newly found insights. And that he promised to be very good this time, honest.
Of course you wouldn’t. But the implications of ‘climate science’, if taken to the extremes some present, would have world-wide consequences many, many, many times greater than either of those examples. It is simply unacceptable to leave the level of scrutiny at a few coworkers patting each other on the back and asking to be left alone in peace. Three years ago that may have been possible. But the world has changed around them.
Climatology and climatologists have forfeited the public’s trust and it won’t be regained without rigorous external scrutiny.
Steve Milesworthy: Easterbrook is unabashedly partisan and vitriolic in the climate change debate — why should he be trusted? (See my post previous to yours.) It’s like taking Oliver North’s word that the Reagan White House was on the up-and-up in the Iran-Contra scandal.
Here’s another Easterbrook greatest hit, taking to task Andy Revkin, a climate change journalist (and sycophant as revealed in the Climategate emails):
So, here’s a challenge for Andy Revkin: Do not write another word about climate science until you have spent one whole month as a visitor in a climate research institute. Attend the seminars, talk to the PhD students, sit in on meetings, find out what actually goes on in these places. If you can’t be bothered to do that, then please STFU [about this whole bias, groupthink and tribalism meme].
http://www.easterbrook.ca/steve/?p=1874
Be assured that Easterbrook spelled out STFU in its full glory before going back later to abbreviate it.
There is nothing like a full primate display of chest-thumping profanity to settle the question of whether scientists might be guilty of tribalism.
It’s nothing like Oliver North, don’t be so stupid. Oliver North was in it to the eyeballs from the outset. You are unashamedly partisan and vitriolic for making such a comparison. Why should you be trusted?
Steve Easterbrook has done his own personal audit and exposed his findings, and you don’t like the fact that he does not disapprove very strongly of what he sees. Complaining about his findings because he has a sharp tongue is ad hominem.
Would you ask a teller in a bank to audit the receipts of the other tellers? No, it would be meaningless because they are not independent. Easterbrook may well have conducted a scrutiny, but training as a climate scientist does not qualify you as an auditor.
Easterbrook is a professor and an expert on software engineering. He is not trained as a climate scientist.
http://www.easterbrook.ca/steve/?page_id=2
Dr Curry, sorry, off topic, but it seems you’re on stage as Dr Diane Cassell in The Heretic at the Royal Court Theatre in London!
Whoever may have provided the inspiration for a heretic female climate scientist – what a vast field to choose from! – there’s an excellent review of the play, with obligatory cartoon, by Josh and his missus on Bishop Hill.
It’s good to see so many physicists involved. I hope they take cognisance of the fact that temperature is an intensive property.
Pick your answer:
1. Because they want to do it properly according to the processes and practices of ‘hard science’ and measurement.
Not with the basic idea that ‘it’s good enough for climatology and anyway we can always adjust the data back in the lab’.
2. Because there are very very few competent climate scientists. And one of them is already engaged.
As a general remark it is excellent to see people with expertise in physics and statistics so heavily involved. Degrees in ‘soft science’ like climatology and its brethren are not adequate preparation for hard problems like this one.
Being slightly more charitable after a nice kipper for breakfast I’ll add option 3.
3. Because this is not the sort of problem that climate scientists are good at. It is a data collection and analysis (metrology, not meteorology) problem. That the data being collected are temperatures – and hence may have a subsequent value in climatology – is coincidental to the techniques needed to do this part of the puzzle correctly.
I think this is exciting news, because it was the reports of the poor quality of the existing records, with lots of filled-in data (I read a figure that 5000 out of 6000 data points were computer generated in one of the data sets – can that really be true?), together with the HARRY_README.TXT set of programmers’ notes from the CRU, that alerted me to the fact that something was seriously wrong.
However, if the new data shows temperature changes that differ from NASA’s, I do worry that the result will descend into another political fight, regardless of the actual care taken over the work.
Don B – I have recommended openness in access to the upper ocean heat data in real time. This should be among the highest priorities in climate science.
Air temperatures are the continuation of water temperatures by other means.
H/t A. Bernaerts.
==========
So not the sun then.
Interesting.
Judith,
Why are you criticising Easterbrook for suggesting that meeting the rather ambitious requirements that he lists are “not *currently* feasible” (my emphasis) before you or others have seen and evaluated the new product?
Judith said just this at the end of a paragraph-length quote from Easterbrook:
“Not feasible, eh?”
At the end of her comprehensive overview of the Berkeley project she said:
Isn’t it hypersensitive to take the first as criticism of Steve Easterbrook, on his behalf? Isn’t it rather Dr Curry drawing our attention to a very significant statement he’s made? One that indeed needs to be questioned, again and again, given the lamentable record of climate science in open data and code. I likewise see no claim that the ‘new project’ will solve all current problems with such openness. Just that having alternatives committed to these vastly important principles has to be very positive.
Given that Easterbrook said “not currently feasible” rather than “not feasible”, Dr Curry is possibly implying that she thinks that this new effort is disproving Easterbrook’s claim. I merely suggest that such a claim should be expanded on.
Why would you interpret my statement as a criticism of Easterbrook? His statement is probably an accurate assessment of the general sentiments of the group at the Exeter meeting.
The “Eh?” sounds like a criticism (of the claim, not of Easterbrook) and its position suggests that you favourably contrast the new effort with the Exeter effort.
This is a better interpretation.
One of the major problems with current surface temperature datasets like GHCN is the lack of good supporting station information. For example, station 42572434000 is listed as “ST LOUIS/LAMB” and has data back to 1836. There was no Lambert Field in 1836, so the data must be a combination of more than one station, but there is no information about where the discontinuities are in the data, and without that information it is not possible to properly use the data. The information does exist, but it is tedious work to find it. If this new surface record addresses those types of problems, it would be a great contribution. Otherwise it will be more of the same.
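A minimal sketch of the kind of discontinuity screening this implies (a generic two-segment mean-shift search on synthetic data, not the method actually used by GHCN, GISS, or Berkeley):

```python
import numpy as np

def best_breakpoint(series):
    """Return the split point that most reduces residual variance when the
    series is modelled as two constant segments."""
    n = len(series)
    best_k, best_cost = None, np.inf
    for k in range(12, n - 12):          # require at least a year on each side
        cost = series[:k].var() * k + series[k:].var() * (n - k)
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# synthetic station: a 1.0 C jump at month 240, e.g. a move to a new site
rng = np.random.default_rng(3)
series = rng.normal(0, 0.3, 480)
series[240:] += 1.0
print("suspected discontinuity near month", best_breakpoint(series))
```

With real records the job is much harder, because documented station moves, instrument changes and gradual urban effects all overlap; that is exactly why the station metadata matter.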
It’s worse than that. For a large proportion of the GHCN dataset the metadata are a mess. Coordinates and altitudes are sometimes grossly wrong, names are misspelled, airfield/non-airfield and urban/rural flags are unreliable, stations are merged without notice (e.g. Lambert Field, as already noted). There are even stations that have never existed.
I would hope that this new project sets up a mechanism for reporting and correcting such errors, which are often obvious to people with local knowledge.
The latest issue of Science has a special collection of articles on data issues, with field-by-field assessments. It is free (on registration) at http://www.sciencemag.org/site/special/data/. There is a perspective article on climate science — “Climate Data Challenges in the 21st Century” by Jonathan Overpeck, Gerald Meehl, Sandrine Bony, and David Easterling.
Here is what I hope the new Berkeley Earth Surface Temperature data will show:
http://bit.ly/emAwAu
1) Confirm the hadcrut3vgl dataset
2) Confirm the following global mean temperature adjustments in the gistemp data
a) Reduced the 1880s local maximum
b) Reduced the 1910s local minimum
c) Reduced the 1940s maximum
d) Did not show the shift from warming to cooling in the 2000s
> a comprehensive and (soon to be) publicly available
> surface temperature data set, with both raw and analyzed data
Their website was down yesterday; in the Google cache they hope to have the database available “late 2010” — let me check now —
Ah, it’s back, and the front page text has been changed to “We hope to have an initial data release available on this website in early 2011.”
http://novim.org/ has a bunch of related sites:
sciencediscourse.org, climatediscourse.org, carbondiscourse.org, energydiscourse.org, healthdiscourse.org, waterdiscourse.org
http://www.berkeleyearth.org/dataset still says: “We hope to be able to make the data set publicly available on this site by the end of 2010.”
About ten years ago, back in my “denier” days, I set out to prove that the surface air temperature time series published by GISS and Lugina et al. were wrong. I selected about 1,000 records from GISTEMP, threw out all of the ones that showed the faintest hint of a UHI impact and averaged the remainder using different area-weighting methods. And what did I get? Almost exactly the same result as GISS and Lugina.
Lugina used about 600 records, I used up to 1,000, GISS used several thousand. Now Berkeley wants to use 39,390. Maybe adding another 30,000 records will make a difference, maybe it won’t. But if it does I would have to question whether the new result was any more reliable than the old one.
We’re barking up the wrong tree with the surface air temperature record. If we want to “counter global warming myths” we should be taking a critical look at the SST series that contribute about 70% of the combined land and ocean series the IPCC and others use. About two years ago I set out to prove that these series were wrong, and this time I succeeded.
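For readers who want to see what the land averaging procedure described above involves, here is a minimal sketch of grid-cell averaging of station anomalies with cosine-latitude area weights. The station anomalies are synthetic and the 5-degree cells are arbitrary; this is not the GISS, Lugina or Berkeley code.

```python
# Minimal sketch of grid-cell averaging with cosine-latitude area weights.
# The station list and anomaly values are synthetic; real analyses also
# handle missing months, baselines, and infilling, which are omitted here.

import math
import random

random.seed(0)

# (lat, lon, anomaly in deg C) -- invented for illustration
stations = [(random.uniform(-60, 75), random.uniform(-180, 180),
             random.gauss(0.4, 0.8)) for _ in range(1000)]

CELL = 5.0  # 5-degree grid cells

def cell_index(lat, lon):
    return (math.floor(lat / CELL), math.floor(lon / CELL))

# 1) average all stations falling in the same cell
cells = {}
for lat, lon, anom in stations:
    cells.setdefault(cell_index(lat, lon), []).append(anom)
cell_means = {k: sum(v) / len(v) for k, v in cells.items()}

# 2) combine cells with weights proportional to cell area (~ cos(latitude))
num = den = 0.0
for (ilat, _), mean in cell_means.items():
    lat_center = (ilat + 0.5) * CELL
    w = math.cos(math.radians(lat_center))
    num += w * mean
    den += w

print(f"area-weighted mean anomaly: {num / den:+.2f} C "
      f"(simple station mean: {sum(s[2] for s in stations) / len(stations):+.2f} C)")
```

The weighting matters because a degree cell near the pole covers far less area than one at the equator, so an unweighted station mean over-represents densely sampled mid-latitude land.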
Wow. Anyone have a list identifying the red dots?
http://www.berkeleyearth.org/images/Figure2.jpg
Is this to be only a “new land surface temperature record” collection?
It’d be a shame to spend all the money on only land (and icecap) surface data.
Is there funding to compile the sea surface and the new large resource of ocean volume data that’s starting to come in fast? — for example http://www.agu.org/pubs/crossref/2011/2010GL046376.shtml
Actually, there is a group (represented at the Exeter meeting) that is digitizing old ship records. IMO the ocean data is in MUCH worse shape than the land data. Getting more data is half the battle; the other half is correcting for known measurement errors and then doing the spatial analysis (the EOF method that they have used to fill in missing data is a disaster IMO). I would say the ocean data also needs an independent analysis such as the Berkeley group is doing for land.
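For anyone unfamiliar with the EOF infilling being criticized here, the basic idea is roughly this: derive spatial patterns from a well-observed period, fit the leading patterns to whatever points are observed in a sparse month, and use that fit to fill the gaps. The toy sketch below uses an entirely synthetic field and is not the procedure of any particular SST product; the criticism is essentially that the reconstruction inherits the structure of the training period.

```python
# Rough sketch of the idea behind EOF-based infilling of a gappy field.
# Everything here is synthetic; it is not the method used in any real SST product.

import numpy as np

rng = np.random.default_rng(1)

n_points, n_months = 200, 360
# synthetic "training" data built from two smooth spatial patterns + noise
x = np.linspace(0, 2 * np.pi, n_points)
patterns = np.stack([np.sin(x), np.cos(2 * x)])          # 2 x n_points
amplitudes = rng.normal(size=(n_months, 2))
training = amplitudes @ patterns + 0.1 * rng.normal(size=(n_months, n_points))

# EOFs = right singular vectors of the (centered) training matrix
mean_field = training.mean(axis=0)
_, _, vt = np.linalg.svd(training - mean_field, full_matrices=False)
eofs = vt[:2]                                            # keep 2 leading EOFs

# a new month with roughly 70% of points missing
truth = 1.5 * patterns[0] - 0.5 * patterns[1] + mean_field
observed = rng.random(n_points) > 0.7                    # True where we have data

# least-squares fit of EOF amplitudes to the observed points only
A = eofs[:, observed].T                                  # (n_obs, 2)
b = (truth - mean_field)[observed]
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
reconstruction = mean_field + coeffs @ eofs

print("RMS error at the missing points:",
      np.sqrt(np.mean((reconstruction[~observed] - truth[~observed]) ** 2)))
```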
The thread is behaving chaotically. Here is another try at the reply function.
“IMO the ocean data is in MUCH worse shape than the land data.”
To what extent does this apply to the satellite era?
Very strange timestamps showing up in this thread.
How can posts from yesterday follow posts from today?
Hank R: I’ve seen it before on this blog. In the past it seems to have been related to deleted comments. Or maybe the blog software just loses track sometimes.
Question.
If the new data set comes up with a significantly different trend during the current GCMs’ tuning periods, what happens? The current GCMs would then not even be able to hindcast their tuning periods, much less their validation periods. Does that mean back to the drawing board, with an admission that they don’t yet have all the factors right? Or would it mean just a retune to the new data and playing with the fudge factors?
I think the most interesting possible outcome of the new data set is the potential impact on the GCMs and their results, such as the effect of aerosols and the climate sensitivity, which as I understand it came from “we can’t get it to match without these factors, so the factors must be right” logic.
However it comes out, the project seems very admirable, sorely needed and long overdue.
Reading the article, they appear to be entertaining some sort of change with respect to UHI. Yippee, as an urban dweller, I’m sick and tired of being beat out by cornfields.
I think this is going nowhere.
Judy,
FYI:
A re-analysis of historical surface temperature records is a component of a planned European Metrology (note spelling!) Research Program on “Metrology for Pressure, Temperature, Humidity and Airspeed in the Atmosphere”. This is an EU-funded program carried out by EURAMET, the European Association of National Metrology Institutes. The relevant research topics are described here: http://www.emrponline.eu/call2010/docs/srt/srt02e.pdf
“The aim […] should be to provide validated and reliable measurements/methods with traceability wherever it is practicable to do so for:
[..]
3. Assessment of the historical temperature measurement data with respect to uncertainties and to the used techniques and temperature scales, and recalculation of the data where appropriate in order to establish comparability through traceability to the international temperature scale ITS-90.
4. Development of traceable measurement methods and protocols for temperature, humidity, pressure and airspeed measurements needed for climate studies and meteorological long-term and wide-scale observations. Possible applications are ground-based measurements, aircraft measurements, radiosonde measurements, balloon measurements – such as traceability for LIDAR systems, improvement of anemometer calibration in wind tunnel.”
Thanks for the link; it looks like a very useful effort.
One link I forgot to post is their general description of methodology; see here:
http://www.berkeleyearth.org/methodology
Judith, somebody may have picked up on this already, but at WUWT the following was written:
“Here’s the thing, the final output isn’t known yet. There’s been no “peeking” at the answer, mainly due to a desire not to let preliminary results bias the method.”
Which makes it sound as though no preliminary results exist.
and you write
” I have completely refrained from commenting on the process or preliminary results”
Which makes it sound like you’ve seen preliminary results but haven’t commented on them. So which is it? Have you seen preliminary results?
(Apologies for the suspicious tone)
As stated in my original post, they are testing their algorithms on a sample that is 2% of the overall data set. Testing of their algorithms is obviously required, but from only 2% of the data there is no way to assess what the result might look like for the entire dataset, and hence looking at this small sample does not bias their analysis, IMO.
I’ve seen Steve Mosher comment on this issue in the past (on WUWT, I think). His point was that if you take as few as 100 stations you probably have a good approximation of the global picture. I wonder whether 2% isn’t already giving you a good idea of the global estimate. I see he’s around; maybe he can comment on that.
Steve’s conclusion was based on homogenized data (I’ll let him clarify). I conducted a sampling study in terms of determining a global average; see this paper (I don’t buy that you can do it with 100 stations).
http://curry.eas.gatech.edu/currydoc/Agudelo_GRL31.pdf
If you take 100 stations that are actually geographically representative (I doubt they exist) and then compute a conventional global average, sampling theory gives you the confidence interval. It is a simple case. If you have to use area averaging because there is no representative sample, then all bets are off.
I guess it depends on how you identify the stations, but I doubt they exist too. Of the 7280 GHCN stations, only 825 of them have no missing data for 1961-1990. Of those, only 443 have data extending back to 1930 or before. Of those, only 131 are outside of the U.S. (75-Japan; 12-China; 12-Australia; 10-Canada; 6-Germany; 6-South Korea; 2-Norway; 2-U.K.; 1-Thailand, Czech Republic, Israel, Spain, Sweden, Switzerland)
Given the chaotic nature of the field in question and several thousands met stations, I am sure you can find a single good station that would match the global average pretty well on a selected time interval. Does it mean that after finding this lucky spot, all other stations should be decommissioned at great savings? Yeah right!
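The disagreement above over whether ~100 stations (or a 2% sample) can pin down the global average is easy to explore with a toy subsampling experiment: draw random subsets of different sizes and look at how much the resulting “global” averages scatter. The sketch below uses synthetic, spatially uncorrelated anomalies, so it will not match real-world numbers; real stations are strongly correlated and unevenly distributed, which is exactly why the sampling-theory answer and the area-averaging answer differ.

```python
# Toy subsampling experiment: how much does a "global" average built from
# n randomly chosen stations scatter around the all-station average?
# Synthetic, spatially uncorrelated anomalies -- real stations are strongly
# correlated and unevenly distributed, so real-world spreads differ.

import random
import statistics

random.seed(42)
all_anomalies = [random.gauss(0.5, 1.2) for _ in range(7000)]   # pretend stations
full_mean = statistics.mean(all_anomalies)

for n in (100, 500, 1000):
    subsample_means = [statistics.mean(random.sample(all_anomalies, n))
                       for _ in range(2000)]
    spread = statistics.stdev(subsample_means)
    print(f"n={n:5d}: subsample means scatter +/- {spread:.3f} C "
          f"around the full mean of {full_mean:+.2f} C")
```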
Whew. That only took three years of complaining to get started.
Thanks for your help on this Judith. The guys working the project got in touch a while back. Solid work.
Of course, some people will never be satisfied. Usually those who could never do the work themselves, or those who could do the work but don’t want to have their opinions changed.
It is quite clear to anyone that the meteorological “surface temperature” does not represent the physics of heat transfer at the Earth’s surface if it is measured at 2 m above the surface (as all current met stations do). Because of the boundary layer, the same 2-m temperature can be produced under different winds, while the actual surface temperature and the corresponding radiative heat exchange can be vastly different. Moreover, the majority of stations are min-max thermometers, which give no meaningful data about the heat-transfer process over the day-night cycle. As other discussions on this blog show, trends in globally averaged “anomalies” are no proxy for warming or cooling either. Even if the data were measured meaningfully, the distribution of sampling points is vastly insufficient to represent the temperature field; the grid is undersampled by a factor of about 100.
It also seems that none of the team members has any expertise in measuring fluid-dynamic FIELDS; mostly they are high-energy specialists and astronomers. The lead scientist of the project has no expertise in anything other than preparing AGW-promoting figures for global warming pages on Wikipedia. Therefore the entire effort seems to be a waste of time and money.
Judith,
Still missing a great deal of interacting data by strictly following temperature records alone.
So, it will fail again.
Scientists still have absolutely no clue about the mechanics of the planet.
Comparison of 20th-century global mean temperature 30-year trends between hadcrut3vgl and gistemp:
http://bit.ly/gva9cX
Results:
For the period from 1880 to 1910, gistemp provides a flat global mean temperature trend
For the period from 1880 to 1910, hadcrut3vgl provides a cooling global mean temperature trend
Global mean temperature for Dec-2010 dropped by 0.51 deg C from its maximum of 0.76 deg C for Feb-1998
http://bit.ly/f2Ujfn
Sign of the start of global cooling?
Fear not.
Somebody will look at that number and ‘adjust it’ to make any instrumentally derived cooling go away in favour of the Authorised Version from a model.
You should not invest in fur coat futures.
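For anyone who wants to reproduce the 30-year trend comparison a few comments up, the core calculation is just an ordinary least-squares slope over each window. A minimal sketch follows; the monthly series here is a synthetic placeholder, so substitute the actual hadcrut3vgl or gistemp anomalies to get the real numbers.

```python
# Minimal sketch: ordinary least-squares trend over a 30-year window of a
# monthly anomaly series.  The data below are synthetic placeholders; feed in
# the actual hadcrut3vgl or gistemp monthly values to reproduce the comparison.

def ols_trend_per_decade(times_yr, values):
    """Least-squares slope of values vs. time, in deg C per decade."""
    n = len(times_yr)
    t_mean = sum(times_yr) / n
    v_mean = sum(values) / n
    num = sum((t - t_mean) * (v - v_mean) for t, v in zip(times_yr, values))
    den = sum((t - t_mean) ** 2 for t in times_yr)
    return 10.0 * num / den

# synthetic monthly series, 1880.0 .. 1909.99 (30 years)
times = [1880 + m / 12.0 for m in range(360)]
values = [-0.3 + 0.002 * (t - 1880) for t in times]          # placeholder anomalies

print(f"1880-1910 trend: {ols_trend_per_decade(times, values):+.3f} C/decade")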
It appears that those chosen for this project, excluding Dr Curry, lack the knowledge and experience to fix the flawed and biased system of weather stations both nationally and globally. So long as land and sea surface temperatures remain the metric for climate change anomalies and the existing data sources are used, sound physics and superior statistical analysis cannot overcome the “garbage” data inputs. The flawed data and sources are well documented and analyzed by the likes of Anthony Watts, Joseph D’Aleo, Spencer, Christy and Pielke Sr.
I find it difficult to expect anything but more of the same from the new data set.
Not 100% sure but I believe Watts and D’Aleo are involved with the effort. Don’t know about the others.
Not to be missed: Joe Romm’s latest rant, on the topic of the new surface temperature study.
http://climateprogress.org/2011/02/14/exclusive-richard-muller-charles-koch-judith-curry-and-the-implosion-of-the-berkeley-earth-surface-temperature-study/
I could only read it halfway; my heart almost stopped and my blood boiled. It would take a full day to make a comprehensive list of the logical fallacies and ad hominem attacks made by Romm. This rant goes in the same category as the Sky Dragon book.
What concerns me is that in other climate-related forums people link a lot to the likes of Romm and actually give him credit. What concerns me even more is that the editors of Time magazine etc. actually read his blog (and presumably believe him).
I also once ran an evil kind of test on his moderation. First I posted a comment in which I questioned the credibility of UAH, and after a while another post in which I criticized gistemp. You can guess which one was moderated and which wasn’t.
Romm’s article is terrible. I wonder if comments that disagree are actually posted. I tried to reply to see what would happen
Without exception they aren’t. At least they haven’t been so far, and judging by his uber-childish style nothing has changed.
I’ve always wondered who reads blogs like this Romm thing, and more importantly, why. So why bother trying to comment on these guys? They’ve made up their minds, and there is nothing you can say to change them. It just seems so… religious. Another thing I’ve wondered is how they pick names for their blogs: “Climate Progress”, “Open Mind” (probably the most hilarious), “Skeptical Science”. Are these names just some kind of parody, or is it that they can’t see the contradiction?
As for this blog and Dr. Curry, whom Mr. Romm seems to be having a busy time playing down, I’ve found the articles and discussion here refreshing, mostly civil and usually well informed, and frankly in many threads way over my level of expertise in this field. What I’ve seen is both sides ‘presenting their case’ without much of the denialist-warmist labelling that has been poisoning the discussion for so long. So why is Mr. Romm having such a hard time with that?
At least I read those because I am a masochist. I have no other explanation for that.
I wouldn’t really drop SkS into the same category as Tamino and Romm, since at least it is _trying_ to be a scientific blog with discussion, debate and no ad hominem attacks (which doesn’t mean they don’t fail miserably, but still…). The biggest problem with SkS seems to be that whoever, like “dana1981”, wants to play the “expert” and write “rebuttals” is allowed to do so. And whenever you are criticizing a particular article you are not allowed even the slightest bit of off-topic, but your opponents are (otherwise their circular logic wouldn’t work).
But as you read those blogs you learn where they go horribly wrong; it also helps you understand where and when the skeptic blogs go wrong. I guess it balances out what you read.
But so much for commenting about blogs when we should be having a discussion about the surface temperature datasets, and particularly the new one.
I can confirm that my post was not published. I did not state anything rude, but simply questioned his conclusions on the hockey stick curve and his comments about Judith.
Judith,
Aw c’mon.
Romm’s rant starts off by bashing M+M for deconstructing the hockey stick. This is ancient history by now, but it appears he is still having problems accepting what happened and turning it loose.
Romm’s comments on the Berkeley team seem to concentrate on some “ad homs” regarding the team members (plus an “ad fem” regarding his “old friend Curry”).
Hardly a very enlightening blog – does anyone really read this rubbish?
Max
Actually, if you go to Romm’s blog stats (say, Alexa), his main audience is in Washington DC: males over 45 who read the blog from work. He is not without influence, courtesy of Podesta. Frightening thought.
I feel sorry that the likes of him have taken you as their target without proper justification. Just stay strong and take care, Judith.
I didn’t find any earlier discussion of Knox & Douglass (2010), “Recent energy balance of the Earth,” in the latest issue of the International Journal of Geosciences.
Based on ARGO data, and referencing four other studies, they conclude: “In summary, we find that estimates of the recent (2003–2008) OHC rates of change are preponderantly negative. This does not support the existence of either a large positive radiative imbalance or a ‘missing energy.’” Admittedly, the warm year 2010 could alter their results somewhat, and the period is short for drawing strong conclusions.
With a better search I found it was actually covered here already.
As I understood what Pekka said elsewhere (in a Finnish forum that you and I both frequent), that conclusion wasn’t robust at all, because they first smoothed the data with running averages, and because the record is short this exaggerated the trend. They calculated the trend with four different methods, and this was their most pronounced result, whereas the “right” method showed no significant trend at all.
So, in short: you can’t actually say the oceans are cooling, but this is nevertheless already the fourth study to suggest that the oceans, at least in the 0–700 m layer, aren’t warming either.
The comment that I made referred to the fact that they calculate the trend in four different ways. One of them is reasonable and correct, one is simply wrong (and this is the one emphasized in the paper) and two are statistically powerless.
I guess this was yet another example (among many recent papers), especially for the laymen, that even when a paper’s main conclusion is wrong, it doesn’t mean the paper itself contains no useful information. You just need to find what it is.
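Pekka’s point about running averages is worth spelling out. Smoothing a short record before fitting a trend makes neighbouring points dependent, so a naive significance test that assumes independent residuals calls “trend” far too often. The toy simulation below uses pure trendless noise, not the actual OHC data, and is only meant to illustrate the statistical effect.

```python
# Sketch of why fitting a trend to a running-mean-smoothed short series can be
# misleading: smoothing makes neighbouring points dependent, so a naive
# significance test (which assumes independent residuals) cries "trend" far
# too often.  Purely synthetic; not a re-analysis of the Knox & Douglass data.

import numpy as np

rng = np.random.default_rng(7)
n_months, window, trials = 72, 12, 2000       # ~6 years, 12-month running mean

def naive_trend_is_significant(y):
    """OLS slope test assuming independent residuals (the naive approach)."""
    t = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = np.sqrt(np.sum(resid**2) / (len(y) - 2) / np.sum((t - t.mean())**2))
    return bool(abs(slope / se) > 2.0)        # roughly a 95% two-sided test

false_raw = false_smoothed = 0
for _ in range(trials):
    y = rng.normal(size=n_months)             # trendless noise
    smoothed = np.convolve(y, np.ones(window) / window, mode="valid")
    false_raw += naive_trend_is_significant(y)
    false_smoothed += naive_trend_is_significant(smoothed)

print(f"false 'significant trend' rate, raw series:      {false_raw / trials:.1%}")
print(f"false 'significant trend' rate, smoothed series:  {false_smoothed / trials:.1%}")
```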
Actually there was quite a discussion here, including comments by, I believe, both of the authors, Pielke Sr, Trenberth, Bob Tisdale, and others.
Finding it is like diving for pearls in the part of the ocean not sampled by ARGO. I noted in a listing of papers accepted in January for publication by a prominent journal that there were two on OHC. In a comment above I linked to new, as yet unpublished information on ARGO. Since then Dr. Pielke has received another email from Dr. Willis that changed Dr. Pielke’s number a bit.
There is no mention of depth there, and I’m curious about that.
Judith,
One of Tallbloke’s blog posts brought back memories of working with optics, splitting and bending laser beams for holograms.
This has me looking into our planet’s atmospheric “lens,” which can change the focus of the sun’s energy. Cloud cover can very much interfere with this.
Joe
Judith
Hooray!
The Berkeley site starts off with:
Whether or not the Berkeley team’s analysis turns out to be “definitive”, it will provide a transparent record that is less open to suspicion of having been deliberately manipulated to show the message the keepers of the record wanted to show. And this will be a significant improvement over the current non-transparent situation at both GISS and HadCRUT.
Thanks for being involved and lots of luck!
Max
Judith, love this blog, particularly the comments. Please keep up the good work, as you are vital to the cause.
I hope you have had a great Valentines Day!
At your service.
But when one of the funding sources is Charles G. Koch Charitable Foundation,
HOW can there be an unbiased study? When I read that yesterday, I couldn’t believe it! A pity I didn’t read CP first and catch the warning about the need for multiple head vises. Have been mopping up brain matter ever since. I am stunned. The Koch family?????
Well, I was surprised to see Koch on the list of funders. But please explain to me how Koch funding would bias the study? Especially when the funding is “countered” by funding from Gates etc.? And especially since the project participants intend to make every aspect of what they have done transparent, for anyone to check? And especially since the scientists involved are from Berkeley, with people over at WUWT concerned that they are typical “Berkeley radicals?” If the project was initiated by the Koch foundation, that would be an entirely different story, and if whatever was done did not have complete transparency, it would have no credibility at all. The Berkeley project is a very different situation.
A balanced study will be damned by both sides, there being no middle ground here.
David
You may be right when you write “a balanced study will be damned by both sides” (one point I can think of right away is that UHI will apparently not be considered).
But I believe that (if done honestly, conscientiously and transparently) it will be a major first step. As Judith has written, transparency will be paramount.
Max
You bring out a valid point that has mainly gone undiscussed. These “environmental” scientists claim an environmental science that is divorced from public review. This does not occur elsewhere. As an environmental engineer for the past quarter century, I can assure readers that such a stance by other environmental scientists is unheard of. The reason is straightforward. Environmental concerns are, by law and fact, a public concern and therefore require public oversight. The legal aspect can be easily understood as well: the emission of CO2 (an invasion of property, even if one considers it communal property) resulting in harm. How can it be hidden? If it is occurring, as claimed, it cannot help but enter the legal realm; it is a harm. This IS what the EPA is doing. In fact, with their finding of harm, the claim is that it has occurred, and the EPA legally can address it. Then to say, well, the evidence is commercial and cannot be used, is to say either that the EPA exceeded its authority, or that it used something else available for the legal determination of harm. Either way, it has reached the point in the US where some stances are no longer supportable. As should be expected, the EPA’s finding is being challenged. It would be ironic indeed if the EPA lost because they used such commercial data and had the evidence thrown out.
Judy,
Glad to hear more work is being done on land temperature records. While I’d be quite surprised if this significantly changed our understanding at the global level, it will help fill in some regional gaps in areas that have relatively sparse coverage in GHCN. Plus, more data is always fun!
To give credit where credit is due, Ron Broberg and Roy Spencer were the first to process GSOD and ISH (respectively) into usable temperature records, and I assume that the majority of new records in the Berkeley set come from those sources.
How can anyone hope to maintain a representative “surface temperature record” for use in measuring “global temperature”?
Gerlich and Tscheuschner already counsel on the flaws of the concept of a global mean temperature.
Specific existing temperature measurements have their uses. Airports need to know the temperature so pilots can figure out how much lift they have. People living in cities need to know the temperature in their city. I can’t see how any existing temperature recording stations can be of any use for measuring “global temperatures,” unless they are sited anew in a geographically/spatially distributed manner.
The only attempt at “global” temperature measurement of any use will be one designed at the outset to do the job. The problem is that it can only be correct and bias-free after it is created that way, and that’s not good enough for the politicians.
In the meantime, we have yet to see a serious consensus on the magnitude of urban heat island bias, and of airport heat island bias over and above urban. But satellite IR photos available on the internet clearly demonstrate both.