by Judith Curry
The preparation of a new land surface temperature record was heralded last week in this news article entitled “Professor counters global warming myths with data.” The article states:
The Berkeley Earth Surface Temperature Study was conducted with the intention of becoming the new, irrefutable consensus, simply by providing the most complete set of historical and modern temperature data yet made publicly available, so deniers and exaggerators alike can see the numbers.
So what is this all about?
The analysis of surface temperature used in the AR4 is the HadCRUT data set as described by Brohan et al. (2006).
Prior to the release of the CRU emails, I (along with pretty much everyone else in the climate research community) used the HadCRU and NASA GISS surface temperature data sets in a range of different applications, citing the published error bars. My colleague Peter Webster started questioning these analyses in the tropics during summer 2009, which motivated his request to Phil Jones for the data set. After the release of the CRU emails, many people started questioning the surface data sets. Several independent analyses of the land data found essentially the same global average variation as the HadCRU and GISS analyses.
However, the surface temperature data sets have continued to be scrutinized and questioned. Over land, issues include station quality and interpretation of the urban heat island effect. Many concerns have been raised about the ocean surface temperature data sets, including adjustments to the actual measurements themselves and the infilling of data using EOFs based on 1960-1990 temperatures. The issue of how to analyze the data statistically also needs to be addressed. David Wojick describes his concerns about the surface temperature data sets as follows:
The so-called anomalies are first calculated for individual thermometers, most of questionable accuracy. The overall sample is a convenience sample, in no way statistically representative of the earth. Individual thermometer anomaly averages are then averaged for all the thermometers in individual grid cells, covering the earth. The statistical weight of a thermometer is inversely proportional to how many there are, another violation of statistical sampling theory. Many cells have no fixed thermometers so various kludges are used to fabricate grid cell averages. Then these averages of averages are averaged again to get the global average. The overall process is so statistically bizarre that no one knows how to carry the error bars of individual averages, or grid averages, or interpolations, etc., forward to even estimate the likely error.
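The pipeline Wojick describes (per-station anomalies averaged within grid cells, then cell averages combined into a global mean) can be sketched in a few lines. The following is a minimal, hypothetical illustration of that generic procedure, not the code of any of the actual analyses; the 5-degree grid, the cosine-latitude weighting, and the toy station values are assumptions made purely for illustration.

```python
import numpy as np

def gridded_global_mean(stations, lat_step=5.0, lon_step=5.0):
    """Average station anomalies within each lat/lon cell, then combine the
    cell averages into one global figure, weighting cells by cos(latitude)."""
    cells = {}
    for lat, lon, anom in stations:  # anomaly already computed per station
        key = (int(lat // lat_step), int(lon // lon_step))
        cells.setdefault(key, []).append(anom)

    values, weights = [], []
    for (ilat, _), anoms in cells.items():
        cell_lat = (ilat + 0.5) * lat_step            # cell-centre latitude
        values.append(np.mean(anoms))                 # stations in a crowded cell share one value
        weights.append(np.cos(np.radians(cell_lat)))  # rough area weight
    return np.average(values, weights=weights)

# Toy data: the first two stations fall in the same cell, so each carries half
# the weight of the lone station in the southern-hemisphere cell.
stations = [(51.5, 0.1, 0.42), (52.0, 0.5, 0.38), (-33.9, 18.4, 0.15)]
print(gridded_global_mean(stations))
```

The point of the sketch is simply that a thermometer's effective weight depends on how many neighbours share its cell, and that empty cells contribute nothing unless they are infilled, which is exactly the sampling issue Wojick raises.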
Response by the climate community
From the Surface Temperatures web page:
To deliver climate services for the benefit of society we need to develop and deliver a suite of monitoring products from hourly to century timescales and from location specific to the global mean. As the study of climate science increases in its importance for decision and policy making – decisions that could have multi-billion dollar ramifications – the expectations and requirements of our data products will continue to grow. Society expects openness and transparency in the process and to have a greater understanding of the certainty regarding how climate has changed and how it will continue to change. Necessary steps to deliver on these requirements for observed land surface temperatures were discussed at a meeting held at the UK Met Office in September 2010 attended by climate scientists, measurement scientists, statisticians, economists and software / IT specialists. The meeting followed a submission to the WMO Commission for Climatology from the UK Met Office which was expanded upon in an invited opinion piece for Nature. Meeting discussions were based upon white papers solicited from authors with specialist knowledge in the relevant areas which were open for public comment for over a month. The meeting initiated an envisaged multi-year project which this website constitutes the focal point for. As work continues both this site and the accompanying moderated blog (used primarily to disseminate news items and solicit comments) will form the central focal point of the effort.
The envisaged process includes as its first necessary step the creation, for the first time, of a single comprehensive international databank of the actual surface meteorological observations taken globally at monthly, daily and sub-daily resolutions. This databank will be version controlled and seek to ascertain data provenance, preferably enabling researchers to drill down all the way to the original data record (see Figure). It will also have associated metadata – data describing the data – including images and changes in instrumentation and practices to the extent known. The databank effort will be run internationally and for the benefit of all. The effort required in creating and maintaining such a databank is substantial and the task is envisaged as open ended both because there is a wealth of data to recover and incorporate and because the databank will need to update in real-time. Novel approaches to data recovery such as crowd sourcing digitisation may be pursued. In the interests of getting subsequent parts of the work underway it is envisaged that a first version of the databank will be ready in 2011. This will definitively not mean that the databank issue is closed or resolved.
Presentations from the Exeter Workshop are here. Blog discussions on the workshop can be found at:
Steve Easterbrook states:
Now, it’s clear that any new temperature record needs to be entirely open and transparent, so that every piece of research based on it could (in principle) be traced all the way back to basic observational records, and to echo the way John Christy put it at the workshop – every step of the research now has to be available as admissible evidence that could stand up in a court of law, because that’s the kind of scrutiny we’re being subjected to. Of course, the problem is that not only isn’t science ready for this (no field of science is anywhere near that transparent), it’s also not currently feasible, given the huge array of data sources being drawn on, the complexities of ownership and access rights, the expectations that much of the data will have high commercial value.
Not feasible, eh?
The Berkeley Earth study
The website for the Berkeley study is here. The rationale for this project is stated as:
The most important indicator of global warming, by far, is the land and sea surface temperature record. This has been criticized in several ways, including the choice of stations and the methods for correcting systematic errors. The Berkeley Earth Surface Temperature study sets out to do a new analysis of the surface temperature record in a rigorous manner that addresses this criticism. We are using over 39,000 unique stations, which is more than five times the 7,280 stations found in the Global Historical Climatology Network Monthly data set (GHCN-M) that has served as the focus of many climate studies.
Our aim is to resolve current criticism of the former temperature analyses, and to prepare an open record that will allow rapid response to further criticism or suggestions. Our results will include not only our best estimate for the global temperature change, but estimates of the uncertainties in the record.
Their data set:
The Berkeley Earth Surface Temperature Study has created a preliminary merged data set by combining 1.6 billion temperature reports from 10 preexisting data archives (4 daily and 6 monthly). We hope to be able to make the data set publicly available on this site by the end of 2010.
Whenever possible, we have used raw data rather than previously homogenized or edited data. After eliminating duplicate records, the current archive contains 39,390 unique stations. This is more than five times the 7,280 stations found in the Global Historical Climatology Network Monthly data set (GHCN-M) that has served as the focus of many climate studies. The GHCN-M is limited by strict requirements for record length, completeness, and the need for nearly complete reference intervals used to define baselines. We believe it is possible to design new algorithms that can greatly reduce the need to impose all of these requirements (see section on “Our Proposed Algorithms” under “Methodology”), and as such we have intentionally created a more expansive data set.
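One of the requirements mentioned above, the nearly complete reference interval used to define a station baseline, is easy to illustrate. The sketch below is a hypothetical filter, not GHCN-M's actual criterion; the 1961-1990 window, the 80% completeness threshold, and the toy records are assumptions chosen only to show why such a rule excludes short or late-starting stations, and why an algorithm that estimates baselines differently can keep them.

```python
def has_baseline(record, start=1961, end=1990, min_fraction=0.8):
    """True if a station reports a value in at least `min_fraction` of the
    months in the reference interval (a hypothetical threshold)."""
    months_needed = (end - start + 1) * 12
    months_present = sum(1 for (year, month, temp) in record
                         if start <= year <= end and temp is not None)
    return months_present >= min_fraction * months_needed

# Toy records: (year, month, temperature) triples.
station_records = {
    "A": [(1961 + i // 12, i % 12 + 1, 10.0) for i in range(360)],  # full 1961-1990 coverage
    "B": [(1995 + i // 12, i % 12 + 1, 12.0) for i in range(120)],  # ten years, all after 1990
}

usable = [sid for sid, rec in station_records.items() if has_baseline(rec)]
print(usable)  # ['A'], so station B is dropped despite a decade of perfectly good data
```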
The project has been organized under the auspices of the nonprofit Novim group.
Novim’s mission is to provide clear scientific options to the most urgent problems facing society, to explore and explain the feasibility, probable costs and possible consequences of each course of action, and to distribute the results without advocacy or agenda both quickly and widely.
Novim builds on a set of scientific collaborative tools developed at the Kavli Institute for Theoretical Physics at UC Santa Barbara.
Novim’s motto is interesting food for thought:
The greatest challenge to any thinker is stating the problem in a way that will allow a solution.
The project is funded by:
- The Lee and Juliet Folger Fund
- Lawrence Berkeley National Laboratory, Laboratory Directed Research and Development (LDRD) Program
- William K. Bowes, Jr. Foundation
- Fund for Innovative Climate and Energy Research (created by Bill Gates)
- Charles G. Koch Charitable Foundation
- The Ann & Gordon Getty Foundation
- a number of private individuals
Project participants are:
- Robert Rohde, Physicist (Lead Scientist)
- Richard Muller, Professor of Physics (Chair)
- David Brillinger, Statistical Scientist
- Judith Curry, Climatologist
- Don Groom, Physicist
- Robert Jacobsen, Professor of Physics
- Elizabeth Muller, Project Manager
- Saul Perlmutter, Professor of Physics
- Arthur Rosenfeld, Professor of Physics, Former California Energy Commissioner
- Charlotte Wickham, Statistical Scientist
- Jonathan Wurtele, Professor of Physics
A view from a (sort of) insider
I was invited to be a member of this team last March. The impetus for inviting me to join the team was my interview in Discover Magazine, which brought me to their attention. My motivation for joining the group is that I thought the project was badly needed, and that it was especially important to have a group of scientists take a new, independent look at this and recreate the data set in a way that is transparent and unbiased. I was particularly impressed by the credentials of the team they put together, and their optimism about raising funds.
At the time I joined the project, they were researching sources of temperature data and looking through the literature to understand the data and past methodologies. By July, the data had been mostly collected, and they had identified their first source of funding. Efforts were made to communicate with personnel at NASA GISS and NOAA about the project. A representative of the group attended the Exeter meeting discussed above.
I’m not exactly sure what my originally intended role in this was, other than that they viewed me as a person who was concerned about uncertainties in the temperature data set, relatively unbiased, and making public statements about the need for transparency and openness in the data sets. I participated loosely in this project, mostly as a resource person calling their attention to any new papers or blog posts that I thought were relevant and as a sounding board for ideas. As they have begun analyzing the data, I have completely refrained from commenting on the process or preliminary results; I have only made suggestions regarding where they might publish their analyses, etc.
Apart from building a comprehensive and (soon to be) publicly available surface temperature data set, with both raw and analyzed data, the most significant aspect of this project is the attention given to eliminating bias. They are testing all of their algorithms and analysis methods on only 2% of the total data, so that no unintended biases creep in with respect to what the final result looks like. I have brought up this issue a number of times, in terms of the possibility for bias in the GISS analysis (since the same group makes predictions of next year's temperature anomaly and uses the data to evaluate their climate models), and also the CRU data set (e.g. the Jones-Wigley discussion of the 1940s temperature bump). Further, there are some serious heavy-hitter statisticians on the Berkeley team, and we can expect a more defensible statistical analysis of the data and its errors.
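To make the blind-development idea concrete, here is a minimal sketch of such a split: the methods are designed and tuned on a small random subset while the rest of the network stays untouched until the procedures are frozen. This is purely illustrative and not the Berkeley team's actual code; the fixed seed and the 2% fraction are used only to show the mechanics, and the station count of 39,390 is taken from their description above.

```python
import random

def development_subset(station_ids, fraction=0.02, seed=0):
    """Set aside a random fraction of stations for developing and tuning the
    analysis; the remainder stays untouched until the methods are frozen."""
    rng = random.Random(seed)            # fixed seed makes the split reproducible
    ids = sorted(station_ids)            # deterministic ordering before sampling
    k = max(1, int(round(fraction * len(ids))))
    dev = set(rng.sample(ids, k))
    held_back = [s for s in ids if s not in dev]
    return sorted(dev), held_back

dev, held_back = development_subset(range(39390))  # 39,390 stations per the project description
print(len(dev), len(held_back))                    # roughly 788 development stations
```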
I have no idea whether any of their analysis methods will turn out to be “definitive;” the important thing is that there will be some new analysis methods on the table, and open data and open methods will enable all to assess the methods and contribute to the further development of improved methods.
I am holding my breath to see what the final results turn out to be. (FYI: I have not received any funding for my minor participation in this project).