by Steven Mosher and Zeke Hausfather
Today the Berkeley Earth Surface Temperature Project released a major update to their temperature data. The update includes:
- Global and regional land temperature estimates back to the 1750s, with estimated uncertainties.
- Temperature figures and data for every country, state, city, and individual station.
- New estimates of the effect of early volcanoes as well as CO2 on the temperature record.
- Globally gridded min, max, and mean anomalies for 1×1 lat lon cells for each month for land areas.
The link to the new paper from the Berkeley Earth group is [here].
Figure 1: Land temperature with 10-year running averages. The shaded regions are the two-standard deviation uncertainties calculated including both statistical and spatial sampling errors. Prior land results from the other groups are also plotted. The NASA GISS record had a land mask applied; the Hadley / CRU record refers to the CRUTEM4 series. Click on image to embiggen.
Figure 1 shows the newly released temperature data from 1750 to 2012 compared to land temperature reconstructions from NASA GIStemp, NOAA’s NCDC, and Hadley/UEA’s CRUTEM over their available records. Berkeley’s record overlaps quite well with existing records from about 1875 onwards, and includes the first ever global land temperature estimates from 1753-1850.
The Berkeley Earth method differs from previous groups in several ways. Rather than adjust ("homogenize") individual records for known and presumed discontinuities (e.g. from instrument changes and station moves), it splits the records at such points, creating essentially two records from one. This procedure, referred to as the scalpel, was completely automated to reduce human bias. The 36,866 records were split, on average, 3.9 times each to create 179,928 record fragments.
The Berkeley data is plotted with uncertainties estimated via randomly subdividing the 179,928 scalpeled stations into 8 smaller sets, calculating global land averages for each of those, and then comparing the results using the “jackknife” statistical method. Spatial sampling uncertainties were estimated by simulating poorly sampled periods (e.g. 1753 to 1850) with modern data (1960 to 2010) for which the Earth coverage was better than 97% complete, and measuring the departure from the full site average when using only the limited spatial regions available at early times.
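The subdivide-and-jackknife procedure described above can be sketched in a few lines. Everything here is a stand-in: the random `anomalies` array replaces the real station data, and a simple unweighted mean replaces Berkeley's area-weighted land average; only the split-into-8-subsets-and-jackknife structure mirrors the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: monthly anomalies for 1000 stations over 120 months.
anomalies = rng.normal(0.0, 1.0, size=(1000, 120))

# Randomly divide the stations into 8 disjoint subsets, as in the text.
order = rng.permutation(anomalies.shape[0])
subsets = np.array_split(order, 8)

# A "global" average per subset (the real analysis area-weights by grid cell).
subset_means = np.array([anomalies[s].mean(axis=0) for s in subsets])

# Delete-one jackknife: recompute the average leaving out each subset in turn.
n = len(subsets)
leave_one_out = np.array(
    [np.delete(subset_means, i, axis=0).mean(axis=0) for i in range(n)]
)
overall = subset_means.mean(axis=0)

# Jackknife variance of the mean, per month, and the resulting standard error.
jk_var = (n - 1) / n * ((leave_one_out - overall) ** 2).sum(axis=0)
jk_se = np.sqrt(jk_var)
```

The attraction of the jackknife here is that it needs no model of the error sources: the spread among the subset averages stands in for all of them at once.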
More details on the methods used are available in the Methods Paper.
Figure 2: Number of station records available for use for each month from 1700 to present for Berkeley (red) and GHCN-M version 3.1 (blue). Click on image to embiggen.
Figure 2 shows the number of station records available for each month in both the existing GHCN-Monthly data (used as the basis for reconstructions by GISTemp/NCDC/CRUTEM) and the new Berkeley data. For the period from 1700-1800 Berkeley uses 27 percent more station months. For the 1800-1900 period, this number rises to 50 percent more station months. The post-1900 period averages 240 percent more station months, rising to about 700 percent more in the present period.
While this additional station data does not result in significantly different results over the past century on a global scale, it helps improve our ability to map regional temperatures and perform analyses that require taking subsets of stations (e.g. comparing urban to less urban stations).
Figure 3: Change in diurnal temperature range from 1900 to 2012.
Figure 3 shows changes in the diurnal temperature range (DTR) over the past century. One noteworthy feature is the uptick in DTR post-1980s, something that has not been present in past analyses (e.g. Vose et al. 2005), which have shown a relatively flat DTR during that period.
Figure 4: The annual and decadal land surface temperature from the Berkeley Earth average, compared to a linear combination of volcanic sulfate emissions and the natural logarithm of CO2. It is observed that the large negative excursions in the early temperature records are likely to be explained by exceptional volcanic activity at this time.
Figure 4 shows temperatures from 1750 with a simple linear fit using records of volcanic sulphate emissions and atmospheric CO2 concentrations. The strong negative excursions in the early period closely match major volcanic events (detected by sulphate deposition in ice cores). This is the first time that major volcanic events prior to 1850 have been matched to estimates of global land temperatures, though it is worth noting that stations available prior to 1850 are primarily located in the Northern Hemisphere, and may amplify the observed volcanic response to Northern Hemisphere volcanoes.
Figure 5: Sample country-level temperature record with uncertainties and station count.
In addition to global results, Berkeley has released new temperature reconstructions of every continent, country, and major city globally, as well as records for every U.S. state. These include various summary statistics, the number of stations used over time, and the data used in the temperature plots in a plain-text format. All of the country data, as well as data for all individual stations used is available for browsing here: http://berkeleyearth.org/locations/
Figure 6: Sample station temperature record along with regional and global record for a particularly iconic station.
The individual station records (coming soon) show the raw station records (in red), the best estimate of the regional record via kriging in blue, and the global land temperature record in gray for comparison.
Berkeley code and data are freely available for people to use.
As a way of clarifying the issues that have been raised we identify the following categories: 1) questions of relevance; 2) questions of method; 3) questions of data; 4) political considerations. The questions of relevance–is the surface temperature the most important climate metric, and is the surface temperature physically meaningful–are not directly addressed in these papers. For example, Pielke's argument that Ocean Heat Content is more meaningful, and arguments that surface temperature is "meaningless," are not addressed directly. The latter argument, however, is addressed indirectly through one of the findings of the method.

One result of the Berkeley Earth kriging approach is the construction of a temperature field for the entire land surface, such that every location on the land has an estimated temperature that is a function of latitude, longitude, altitude, climatology, and a residual – or weather. The field provides an estimate of the local temperature ±1.6C. This is both a means to test the method as new observations are added from historical records and a way of giving meaning to the term "global temperature average." The global average is the estimate that minimizes the error if one wants to estimate the temperature at an unknown location. If you know the location but have no measurements there, the estimate that minimizes the error is given by the field value at that location. Another way to look at this is that kriging gives us an estimate of what we can expect to find in unobserved locations.
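A toy version of the "estimate at an unknown location" idea can be written as simple kriging. The exponential covariance, its 1000 km range, and the demo station coordinates and values below are all illustrative choices, not Berkeley's fitted model; the point is only that the field value at a location is the error-minimizing weighted combination of nearby observations.

```python
import numpy as np

def simple_krige(obs_xy, obs_val, query_xy, range_km=1000.0, sill=1.0, nugget=0.0):
    """Estimate values at query_xy from observations (simple kriging sketch)."""
    def cov(a, b):
        # Exponential covariance decaying with distance (illustrative model).
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-d / range_km)

    K = cov(obs_xy, obs_xy) + nugget * np.eye(len(obs_xy))  # obs-obs covariance
    k = cov(obs_xy, query_xy)                               # obs-query covariance
    weights = np.linalg.solve(K, k)                         # error-minimizing weights
    mean = obs_val.mean()                                   # crude background mean
    return mean + weights.T @ (obs_val - mean)

# Three hypothetical stations (coordinates in km) and a query point between them.
obs_xy = np.array([[0.0, 0.0], [300.0, 0.0], [0.0, 400.0]])
obs_val = np.array([10.0, 12.0, 11.0])
estimate = simple_krige(obs_xy, obs_val, np.array([[100.0, 100.0]]))
```

With a zero nugget the field reproduces each observation exactly at its own location; a positive nugget (station-level error) relaxes that, which is the sense in which the field is an estimate with an attached uncertainty.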
There have also been several persistent questions of method in the debate over the global temperature average: questions about how the Arctic region is handled (the GISS approach versus the CRU approach); questions about how stations are selected and combined (the GISS approach of reference stations and the CRU approach of selecting stations that persist through a common anomaly period); and finally questions about adjustments. We defer the latter to the discussion of data. The Berkeley method relies on kriging, which is mathematically known to be an optimal solution for the problem at hand. This does not make the other methods wrong, merely suboptimal; in fact, the results paper indicates that the other methods provide very similar results. This is not uncommon: much of the literature examining different methods for producing area averages of temperature finds only slight advantages for the various methods. One notable consequence of achieving similar results, for example, is the realization that concerns over how the Arctic is treated in land temperature reconstructions are vastly overblown.
With regard to station combining, Berkeley has taken a diametrically different approach. The RSM method of GISS and the CAM method of CRU both use criteria of temporal overlap to select stations; the spatio-temporal problem is reduced to a spatial problem by selecting or constructing long series. In the Berkeley method all station data are used. Thus the 7,000 stations of GISS and the 4,228 stations of CRU4 are expanded to 36,866 stations, and the spatio-temporal problem is solved simultaneously. In this regard the method is similar to that implemented by skeptical blogger JeffId (with RomanM) and the methods developed by Nick Stokes and Tamino.
This approach also afforded Berkeley the opportunity to generate the result using subsets of station data. For example, only using stations that CRU and GISS do not. Those results match the results achieved with the entire dataset, thus showing that concerns over the “great thermometer” drop out are unfounded, as has been shown before with other analytic approaches (see Zeke, Nick Stokes and Mosher).
One unique feature of the Berkeley approach is the use of the scalpel. In the GISS and CRU approaches stations are selected for their temporal coverage, and in some cases, to achieve temporal continuity, stations are spliced or adjusted so they can be treated as homogeneous over time. In the Berkeley method there is no splicing.
Instead, station records are "scalpeled" when a break point is found. For example, in other methods, if a station moves from location X to a higher elevation, a deterministic adjustment for the change in elevation is made. In the Berkeley method the two segments are treated as two separate stations. That is what they are. One drawback of the deterministic homogenization approach is that the errors and uncertainties due to adjustment are not propagated into the final answer. By slicing station records, this potential error/uncertainty due to adjustment is folded into the final confidence interval.
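A minimal sketch of the scalpel: given break indices (from station metadata or empirical break detection, which is a separate step not shown here), the record is simply cut into fragments, with no adjustment applied. The breakpoint and series below are invented for illustration.

```python
import numpy as np

def scalpel(times, temps, breakpoints):
    """Split one station record into fragments at the given break indices."""
    fragments = []
    edges = [0] + sorted(breakpoints) + [len(times)]
    for start, stop in zip(edges[:-1], edges[1:]):
        if stop > start:
            fragments.append((times[start:stop], temps[start:stop]))
    return fragments

# A 20-year monthly record with a simulated station move at month 120:
rng = np.random.default_rng(0)
times = np.arange(240)
temps = rng.normal(10.0, 1.0, 240)
temps[120:] -= 1.5  # step change from the (hypothetical) elevation change

fragments = scalpel(times, temps, [120])  # two records, no adjustment applied
```

Each fragment then enters the averaging step as if it were an independent station, so any residual error from the break lands in the spread between fragments rather than in a hand-tuned correction.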
Another benefit of the kriging approach is that the total measurement error at the station level is estimated in a top-down fashion, in sharp contrast to the CRU and GISS approaches. In the CRU approach the errors are estimated or built up in a bottom-up fashion, so there is an estimate for thermometer error, for recording error, and so on. This approach, it has been argued, leads to an underestimation of error; for example, siting errors are assumed to be normally distributed with a mean of zero. In the Berkeley approach no such assumptions are made, and the error at the monthly station level is estimated from the top down. The nugget effect is calculated as 0.46C, which is considerably higher than the CRU estimate of 0.06C. That nugget effectively represents the sum of all errors at the average station, including errors due to different instruments, errors due to different observation practices, siting errors, etc. By looking at the correlation of all stations with all other stations, the estimated difference between two co-located stations is 0.46C. While that looks substantially higher than bottom-up estimates, it includes all forms of error by definition.
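The nugget idea can be illustrated on synthetic data: stations share a common signal but carry independent station-level noise, and the semivariance of station-pair differences, extrapolated to zero separation, recovers that noise variance. The station layout, noise level, and linear extrapolation below are all invented stand-ins; Berkeley's actual fit of correlation versus distance is more elaborate.

```python
import numpy as np

def estimate_nugget(coords, series, max_km=200.0):
    """Half the mean squared difference between station pairs, regressed on
    separation distance; the zero-distance intercept approximates the nugget."""
    dists, semivar = [], []
    n = len(series)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(coords[i] - coords[j])
            if d <= max_km:
                diff = series[i] - series[j]
                dists.append(d)
                semivar.append(0.5 * np.mean(diff ** 2))
    slope, intercept = np.polyfit(dists, semivar, 1)
    return intercept

# Synthetic network: 30 stations share one climate signal, plus independent
# station-level "error" with variance 0.49 (sd 0.7) -- the quantity the
# nugget estimate should recover.
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 200.0, size=(30, 2))   # km
common = rng.normal(0.0, 1.0, 500)               # shared signal
noise = rng.normal(0.0, 0.7, size=(30, 500))     # per-station error
series = common + noise

nugget = estimate_nugget(coords, series)         # should land near 0.49 here
```

Because the shared signal cancels in every pair difference, whatever survives at zero distance is, by construction, the sum of all per-station error sources, which is exactly why the top-down number comes out larger than any single bottom-up term.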
The most contentious area of scientific debate, however, revolves around data and data adjustments. There are questions about the amount of data (number of stations), questions about geographical distribution, and lastly questions about using raw versus adjusted data. The methods paper demonstrates what has been widely claimed and widely doubted: the average land surface temperature can be estimated with relatively few stations. Of course uncertainties for small numbers of stations are higher, but the numbers used by GISS, NCDC, and CRU4 are more than adequate. This is due to the correlation length scale, which extends up to 1000 km or farther depending on the latitude and season. As stated before, questions about the global average being a function of "dropping" stations are now definitively answered: the average is not materially impacted by any great thermometer drop-out. That can be demonstrated any number of ways: by including stations that have dropped out of GHCN; by using only stations that remain in the record; and by using random subsamples of the 36,866 stations.
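The subsampling claim is easy to demonstrate on synthetic data: when stations share a large-scale signal, the average from a modest random subset tracks the full-network average closely, so "dropping" stations barely moves the result. The trend, noise level, and network size below are invented stand-ins, not the real network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in network: every station sees the same 0.8C/50yr trend plus local noise.
n_stations, n_months = 5000, 600
trend = np.linspace(0.0, 0.8, n_months)
station_data = trend + rng.normal(0.0, 1.5, size=(n_stations, n_months))

full_mean = station_data.mean(axis=0)

# RMS departure of random-subsample averages from the full-network average.
rms_by_size = {}
for size in (100, 500, 2000):
    idx = rng.choice(n_stations, size=size, replace=False)
    sub_mean = station_data[idx].mean(axis=0)
    rms_by_size[size] = float(np.sqrt(((sub_mean - full_mean) ** 2).mean()))
```

Even 100 of the 5000 synthetic stations reproduce the full average to well within the month-to-month noise, and the departure shrinks further as the subsample grows, which is the behavior the long correlation length scale predicts for the real network.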
The geographical distribution question is also more fully addressed. Stations were added from times and locations where GISS and CRU had no data. If there were a spatial bias in the GISS or CRU sample, we would expect to find differences. With a few exceptions in the early part of the record, our answers match theirs. Where we differ is in our ability to push the record back to 1753. There are records that start and end before CRU's common anomaly period; CRU are forced by method to drop these. There are stations that start after the common anomaly period; those are included in Berkeley Earth as well. The results paper contains a few paragraphs discussing this early record in relation to the volcano record. We suspect this will occasion some heated discussion. Finally, the presence of a longer record allows us to make a preliminary and heavily caveated estimation of climate sensitivity. What is notable in that exercise is that changes in solar output had no discernible effect on the regression. In short, radiative forcing from GHGs and volcanic aerosols explains a great deal of the land record, with a residual that follows a natural cycle: the AMO.
That leaves of course the question about adjustments. CRU4 uses data that has been adjusted by National Weather Services. So for example they use the 207 homogenized station series for Canada provided by the Canadian weather service. CRU, in other words, don’t adjust data; they use data that has been adjusted. GISS, on the other hand, use adjusted data (GHCNv3) and make additional adjustments for the UHI effect.
In the Berkeley approach every attempt is made to use first reports. We avoid the term "raw" data because one can never know that data that purports to be "raw" is in fact "raw." One sign that it is raw is the presence of errors, for example, temperatures of 1000C or -200C. An original report or first report is a report that has no documentation asserting that adjustments have been made. Adjusted data, on the other hand, normally comes with documentation asserting that adjustments were made and often detailing them. First reports are thus taken as "raw" unless there is reason to believe that they have been adjusted. For GHCNv3 data, for example, Berkeley uses the unadjusted data as opposed to the adjusted data that both CRU and GISS use. The largest data sources in the Berkeley Earth approach are daily data, which are typically unadjusted. There are 27,000 stations taken from GHCN-Daily, which is available in a form prior to the application of any QA procedures.
Finally, during the course of merging datasets and eliminating duplicates unadjusted data is given priority over adjusted data, such that data that comes from CRU4 for a given location is not included unless none of the other 15 data sources has data for the same location. That said, the possibility that some unadjusted stations may have been adjusted before entering the source datasets remains. But that mere possibility does not amount to a fact, and the ability to achieve the same result using random subsets argues for the proposition that there is no significant adjusted data contributing to or biasing the result.
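The deduplication priority described above reads like a simple "first source wins" merge. A sketch under that reading (the source names and record contents are invented; the actual Berkeley merge also reconciles metadata and near-duplicate coordinates):

```python
def merge_sources(sources):
    """Merge per-location records, preferring earlier-listed (unadjusted)
    sources; a later (adjusted) source fills a location only if no earlier
    source covers it."""
    merged = {}
    for source in sources:  # ordered: unadjusted sources first, adjusted last
        for location, record in source.items():
            merged.setdefault(location, record)  # keep the first record seen
    return merged

# Hypothetical example: the adjusted source contributes only the location
# that no unadjusted source has.
unadjusted = {"Reykjavik": "first-report-A"}
adjusted = {"Reykjavik": "homogenized-B", "Oslo": "homogenized-C"}
merged = merge_sources([unadjusted, adjusted])
# merged == {"Reykjavik": "first-report-A", "Oslo": "homogenized-C"}
```

Ordering the source list with adjusted datasets like CRU4 last is what implements the stated rule that adjusted data enters only where nothing else covers a location.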
Berkeley results demonstrate that the adjustments made to data do not materially impact the results. The process of identifying structural breaks in the time series and “slicing stations” is fully automated. There is no effort to cool the past or warm the present. Where stations exhibit an objective change in regime they are broken into separate records and then the kriging process estimates the field accordingly.
Finally there are the political or personal issues that have been raised around the subject. We know of no way to see into the hearts of men. Science provides us with some safeguards against personal bias: sharing data and sharing code. See the new website for those resources.
Disclaimer: Both Steven Mosher and Zeke Hausfather are participants in the Berkeley Earth Surface Temperature project. However, the content of this post reflects only their personal opinions and not those of the project as a whole.
JC comment: With regards to the new paper, I strongly disagree with their interpretation of attribution (Fig 4 in the post above). Here is what I have been saying in response to media queries:
The BEST team has produced the best land surface temperature data set that we currently have. It is best in the sense of including the most data and extending further back in time. The data quality control and processing use objective, statistically robust techniques. And most importantly, the data set is online and well documented, with a friendly user interface. That said, the scientific analyses that the BEST team has done with the new data set are controversial, including the impact of station quality on interpreting temperature trends and the urban heat island effect.
Their latest paper on the 250 year record concludes that the best explanation for the observed warming is greenhouse gas emissions. In my opinion, their analysis is far too simplistic and not at all convincing. There is broad agreement that greenhouse gas emissions have contributed to the warming in the latter half of the 20th century; the big question is how much of this warming can be attributed to greenhouse gas emissions. I don't think this question can be answered by the simple curve fitting used in this paper, and I don't see that their paper adds anything to our understanding of the causes of the recent warming. That said, there are two interesting results in this paper, regarding their analysis of 19th century volcanoes and the impact on climate, and also the changes to the diurnal temperature range.
I recommend you check out here for a great paper:
Agree, +1 !
The new Berkeley paper confirms: Big Brother is running scared !
BB belatedly realized certain defeat ahead if government scientists continued to hide or ignore experimental data and observations that falsified cherished government models of reality.
With kind regards,
Oliver K. Manuel
Former NASA Principal
Investigator for Apollo
PS – Congratulations, Professor Curry, for definite progress!
Probably nothing would better connect proponents and skeptics of AGW dogma, including this distinguished Professor Muller, to reality than a few minutes watching this video about our place in the universe:
May we all succeed in this most important endeavor !
– Oliver K. Manuel
Its an interesting paper. Unfortunately, until I have access to the underlying station classifications it will be difficult to verify and test their results.
BEST has a formatting bug for the country data (so far).
If Tmin < -9 you don't correct for the extra digit.
1838 1 -10.688
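If the bug is what it appears to be, values below -9 occupy an extra character and break fixed-column slicing of the country files. Splitting on whitespace instead sidesteps it entirely (a sketch; the real file layout and column meanings may differ):

```python
def parse_country_row(line):
    """Parse a 'year month anomaly' row from a BEST-style country file.

    Splitting on whitespace rather than fixed column offsets handles the
    extra character that values below -9 (e.g. -10.688) occupy."""
    year, month, value = line.split()
    return int(year), int(month), float(value)

row = parse_country_row("1838 1 -10.688")
# row == (1838, 1, -10.688)
```

The same row would be misread by any parser that assumes the anomaly always starts at a fixed column, which is presumably the "extra digit" problem being reported.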
Indeed – AW has put the kibosh on this statement for good.
“The BEST team has produced the best land surface temperature data set that we currently have.”
What a joke. The headline “Global land temperatures have increased by 1.5 degrees C over the past 250 years” implies that they know the average temperature in 1750 to within 0.1 degrees. Then you look at the 95% CI and it goes off the scale!
Precisely. Fails right at the headline. Incredible the hype versus the *science* is so mismatched.
It’s high time climate science got rid of the hype. Was Muller ever so loose in announcements of his earlier findings in physics? But then, did he ever get asked to do a NYT op-ed as he released results in those days?
Corruption comes with the territory, it seems – but kudos again to Dr Curry for distancing herself from the wild claims on attribution.
Muller’s science by Press Release/Op Ed seems to be a fairly recently adopted “strategy”. Last time around (circa Oct. 21/11) it was via Op Ed in the WSJ, duly followed-up/accompanied by media blitz elsewhere – as you may recall ;-)
As I had observed in "Will the real Richard Muller please stand up" once upon a time (circa Dec. 17/03), Muller had opined that:
IMHO, Muller’s NYT Op Ed does not leave one with the impression that he has any “self-doubt” – or that he has “understated” his “conclusions”.
My advice to Muller:
Physicist, heal thyself!
P.S. Harold Ambler has two related pieces that may be of particular interest to lurking laypersons:
Richard Muller has absolutely no idea what it is to be a climate skeptic
Temperature expert comes out swinging
Classy. And the two references are helpful too. Further down this page I suggested we can see three levels of possible skepticism in Muller’s conclusion. But in the paragraph Ambler highlights:
there’s another level evident: the more-extreme-events-caused-by-man skeptic. On the basis of this Muller still is one, just as I am.
So, expanding my levels a little:
a) skeptics that the land temperature record even matters very much eg Christopher Essex, Ross McKitrick, Roger Pielke Sr
b) skeptics believing land temperature has been systematically overstated eg Anthony Watts and many more
c) skeptics believing there’s been no warming since 1750 – virtually none
d) skeptics that man has been proven to be the main cause of the warming since 1750 – eg Richard Lindzen, Judith Curry and many others
e) skeptics that nastier extreme events are a result of warming eg Richard Muller, Roger Pielke Jr
f) skeptics that the IPCC’s got policies of ‘mitigation’ of the highly confused situation in a-e) absolutely right – take your pick.
Against this kind of background we should evaluate the claims of what even Andy Revkin wryly calls the ‘Converted’ Skeptic Professor Richard Muller of Berkeley.
g) Warming is man made and some men are damned proud of it. http://judithcurry.com/2011/07/26/americas-first-global-warming-debate/
“Benjamin Franklin understood climatic forcing factors better than anyone, surmising in a 1763 letter to Ezra Stiles that “cleared land absorbs more heat and melts snow quicker.”
I would add a variant to c): c1) skeptics believing the warming since 1750 from CO2 in the atmosphere cannot be shown to be more significant than warming from man’s land use or from nature.
Peer-rejected unpublished paper celebrated by Muller! Mosher applauds!
Richard Drake | July 29, 2012 at 5:52 pm
Mosher has proven that climatology was / is the oldest profession… Money talks. After they stop the climate from changing, they should ask for money, not a cent in advance… Instead, trillions spent, and they are searching for how to fabricate data…
This is one of the first things I noticed. More accurately, the temperature has gone up by between 0.1 deg C and 2.9 deg C (95% confidence). Doesn’t quite have the same kick to it, does it?
And I suspect their confidence intervals are underestimates as well. As Pielke Sr has pointed out in the past, probably far better not to adjust, but use the raw data and expand the CIs instead, to reflect the difficulties with the data set.
There are no adjustments.
Uhmm? The very method itself adjusts, so how can there be no adjustments? Or do you mean adjustments such as those "arbitrarily done by hand"?
Single stations get split into two, right? That has a statistical consequence. It’s a statistical treatment that must have an effect on the output (otherwise it wouldn’t be done), so it is equivalent in any reasonable statistical sense to an adjustment. Rebranding it as “not an adjustment” doesn’t matter to the statistics. To the stats, it carries the properties of an adjustment.
Secondly: I don’t see where the CIs are inflated to accommodate the imperfections in these adjustments. Jackknifing is good for producing CIs related to sampling issues, but will massively underestimate the uncertainty due to the type of problems I’m describing above.
Over the course of the 5 years I have been doing this on the web I have been one of the people arguing that we need to account for all adjustments. The procedures we are talking about were things like filnet, shap, tobs, uhi adjustments.
Adjustment meant to us the application of an algorithm to adjust a time series for a known problem. For example: we switched from thermometer A to thermometer B.
Scalpelling is not an adjustment.
Hmm at some point I’ll probably post up the before and after.
Yep, I’m using the word “adjustment” in a broader sense. And in that broader sense, I am including scalpelling as an adjustment.
Spence_UK, Scalpelling is one of the BEST things since sliced bread ;)
Actually, it makes a lot more sense than the method GISS used to shift and then “adjust” to the Dec-Nov year.
Spence_UK, I think the range you give in your first paragraph is a bit off. Eyeballing the figure, it looks like you're using the lower uncertainty range for 1750 and the upper uncertainty range for ~1770. Of course, the entire idea of picking two points and differencing them to give a "change in x years" is ridiculous, so I guess it isn't so unreasonable.
Brandon, sure, the figures I give are bounds that we cannot rule out occurring at the 95% confidence interval at some point in the late 1700s.
I guess the CIs could be brought in somewhat if a longer smoothing was applied – say an average from 1750-1800; but then, to compare apples to apples, you'd need the same treatment at the other end. Using the Mk1 eyeball methodology, I'd say that gives little more than a 1 degree C rise, albeit (I speculate) with greater confidence.
Eh, that’s okay. Those uncertainty ranges almost certainly are too small. I’ve seen at least two types of uncertainty they don’t seem to adequately cover: Volcanic eruptions (Zeke pointed this one out) and issues with correlation structures.
I, too, recommend you check out here for a great paper:
The CET, the world's longest and most analyzed temperature record, shows a different picture:
Had you used 1730 as the reference, the temperature rise since would be about 0.15C.
That’s not a record of global temperatures. It’s irrelevant to the discussion.
It correlates with global temperatures. Or the CET might if we had global temperatures with which to compare it.
Here is what England can tell you about the globe:
The correlation at a monthly time scale is 0.41.
The R^2 for annual figures is 0.7.
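The monthly-versus-annual contrast Mosher cites is what one expects when a persistent shared signal is overlaid with short-lived local noise: annual averaging suppresses the noise far more than the signal, so the annual correlation exceeds the monthly one. The series below are synthetic stand-ins, not the real CET or global data, and the year-scale persistence of the shared signal is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
years = 125

# Shared year-scale signal, seen by both series; independent monthly noise.
signal = np.repeat(rng.normal(0.0, 1.0, years), 12)
global_land = signal + rng.normal(0.0, 0.3, years * 12)
cet = signal + rng.normal(0.0, 1.0, years * 12)  # noisier: one small region

r_monthly = np.corrcoef(cet, global_land)[0, 1]

# Annual means: noise variance drops by ~12x, the shared signal is untouched.
cet_annual = cet.reshape(years, 12).mean(axis=1)
glb_annual = global_land.reshape(years, 12).mean(axis=1)
r_annual = np.corrcoef(cet_annual, glb_annual)[0, 1]
r2_annual = r_annual ** 2
```

On this toy setup the annual correlation comes out well above the monthly one, mirroring the 0.41 monthly versus 0.7 annual-R^2 pattern claimed for CET.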
Or it might if we had global temperatures with which to compare it.
The correlation is determined during the period where we have the greatest coverage for the globe. Once you are given this structure you have two choices.
A) operate under the assumption that the correlation structure is consistent over time.
B) Assume, based on no evidence, that the correlation structure changed between 1750 and today, and raise a theoretical objection.
This will be testable when more historical data is brought out of archives.
A) operate under the assumption,based on no evidence, that the correlation structure is consistent over time.
There. Fixed that for ya.
Mosher, you say there is “no evidence that the correlation structure changed between 1750 and today.” That’s an interesting comment. It’s easy to see changes in how regions correlate to your temperature record with your current data. If we’re assuming correlation structures don’t change, that shouldn’t happen.
I never like looking at the correlation of a data set to a subset of itself. Because of that, I decided to do a quick comparison between the UK and US. That score was only .31. It’s not very impressive.
You have just described what is known as an autocorrelation — one of the most useful probability measures for spatial and temporal analyses.
I never can understand how these scientifically uneducated poseurs that inhabit this blog’s comment section are not just completely marginalized.
WebHubTelescope again shows he can level insults at people even when he has no actual criticism of them.
For those who can’t spot the obvious, saying you don’t like doing something in no way indicates you think doing it is wrong. I don’t like doing multiplication on paper with six digit numbers either. Do I deserve to be snidely referred to as “Brandy” and an “uneducated poseur” simply because I don’t enjoy doing something?
WebHubTelescope either cannot read simple sentences, or he willfully ignores them in order to troll people. Yeah, I wonder who deserves to be “completely marginalized.”
No, it is beyond easy to tell that someone is completely out of their depth. You might have taken high school level science at best.
Has someone posted a prize for who can be the rudest ass at Climate Etc?
You’re starting to look like a sure thing.
Rudeness is the weak man’s imitation of strength.
Honesty is Hard; Rudeness is Easy
Hey kids, this isn’t community college we’re dealing with. Sink or swim when it comes to the science.
– Robert is talking nonsense
- Steven, you are supposed to be a true scientist, not cherry-picking the dates; take the 1730s or the 1770s, both pre-industrial, and calculate your difference.
- In stating that the CET doesn't correlate, you are wrong.
There is a good record since the 1880s, and what it shows is not what you suggest; the CET correlates well both with the N. Hemisphere, where most of the global records originate, and even with the global records, as I show here.
The CET is the only reliable, consistent long record available, and anyone dismissing its significance should not be taken seriously.
Huh. Who said it didn't correlate? They all correlate.
Using numbers helps.
Exactly, but using good numbers, not a 'hotchpotch assembly' claimed to be global temperature (there is no such thing; there is global energy content, but that is a totally different story).
So calculate the CET-GT correlation from 1880 using 5-year bin averaging.
P.S. Your statement on natural variability on the decadal scale is grossly misleading; you have about 130 years of good records, so you need to look at the multi-decadal picture. Do the two hemispheres separately and you will find the natural variability very different.
vukcevic | July 30, 2012 at 2:14 am
Can you help Mosher and tell us the temp for New Guinea for 1882-3? New Guinea is larger than Britain – and if you can, include the temp for the area between Easter Island, Hawaii and the Antarctic, please. Either you have it, or you are a compulsive liar, same as Mosher.
stevenmosher | July 29, 2012 at 10:34 pm Then confront the reality: Stefanthedenier against BARKLEY UNI & Dr. MOSHER
Avoiding the two biggest hurdles is not science! Hurdle #1: when the temp close to the ground increases -> the vertical winds SPEED UP accordingly (the natural conveyor belt) and equalize the temp in a jiffy.
Hurdle #2: when temp increases, for any reason -> troposphere expands INSTANTLY upwards and equalizes in a jiffy.
What man-hours and tax $$ wasted – just for ignoring the two big hurdles… Mosher, the laws of physics were the same in 1789 as they are today. This is an official challenge – if I'm correct, Berkeley Uni + you are extravagantly stumbling in the dark. Lots of thin air in your & Berkeley's crystal ball for harvesting from. Cherry-picking science is not scientific.
Prove those two hurdles wrong; you will never see me on the net. stefan
Given the existence of the Watts et al. paper, can one of the authors of this post please take the opportunity to point out why the BEST paper is not flawed?
I have no idea if the Watts paper is rubbish or a first class contribution to the literature (or somewhere in between). However, it's out there and the implications of the work (if correct) are startling. One can't pretend it doesn't exist.
I think a little bit of time to read the Watts paper is reasonable to allow Mosh and Zeke. But I’m sure we all look forward to the ‘synthesis’ that may arise :)
Ordinarily I would agree with you. However, one of the authors of this piece has already left comments talking about “data problems” at WUWT. He clearly feels that criticism can be made now.
I saw that too. But that’s not a full ‘synthesis’ as I called it. That needs a bit of time and thought. And, just as is true with the heroic Watts family and their loss of a vacation, I assume Mosher has never got paid for his troubles. I’m supremely grateful to all these people – even where they end up giving us a different point of view.
Sorry, I was wrong. They used USHCNv2.
More troubling is the lack of data. I’m sure Willis will be along to chastise them, and Steve will release the station list.
For what it’s worth, I’ll be critical if data isn’t forthcoming. I won’t be overly bothered if data isn’t ready immediately, but I’d expect/hope at least a list of stations and new quality rankings to be published fairly promptly.
One thing I’m curious about is if they used pictures not posted to the surfacestations website. If they didn’t, most of their data is already available. A station listing like I said would allow one to check just about all of their work.
(And not that it should matter, but I have asked for that list of stations over at WUWT.)
I hope that if they did a photographic analysis to estimate the area, they did not do it manually. If they did do it manually, then checking the work will require a bunch of additional data. If they did it with a good photo-analysis tool, then one can at least check the work. If they did it manually, then I hope they used trained analysts and kept good notes. They mentioned using Google Earth, which suggests they may have used the polygon tool. At some point I suppose we will see.
“However, one of the authors of this piece has already left comments talking about “data problems” at WUWT”
Ah but why would GHCNv2 be a “data problem”?
One reason I can think of: Why not use GHCNv3?
But another possibility I am leaning towards: He doesn’t have a problem with it, he’s asking climate skeptics who have attacked GHCNv2 why they don’t have a problem with it.
lolwot asks a hilarious question:
Let’s consider why Anthony Watts might not have used GHCNv3. The first thing we have to do is look at what he actually used. When we do, we find he used USHCNv2. When we look at USHCNv2, we find it is a subset of GHCNv3.
That’s right. lolwot is asking why Anthony didn’t use the data he used. The only reason to think Anthony used outdated data is that the USHCN and GHCN version numbers don’t line up.
When a critic makes such a silly comment, you have to wonder about everything else they might be saying.
Brandon, my error. They used USHCNv2, which is a current dataset for the US (about 1200 stations). It would be interesting to compare:
A) Watts spatial average using only his best stations.
B) the 14000 or so stations not in USHCN.
And it would be interesting to see the long-term trend using only the small dataset that Anthony used.
It happens. I mostly thought it was funny because your criticism got uncritically parroted even though it was so obviously wrong. Well, that and the fact you’ve spent so much more time than me on the temperature record. In any event, you at least admitted the mistake. I wonder if lolwot will too.
As for what you think would be interesting, your A) is somewhat covered by Figure 20, but I agree those could be interesting comparisons.
I don’t think Watts did area averaging with inverse density and land masks.
I think he may have just averaged the grid.
But we will never know, since there isn’t any supporting data or code.
I’m kinda shocked that Steve Mc would post something without this supporting data. we
Mosher, it looks like part of your comment got cut off, but I’m not surprised Steve would be involved. Publishing data and code immediately would be desirable (though not even BEST did so), but it isn’t necessary. I imagine he, like I, expects those to be forthcoming and is willing to wait a short while.
“However, its out there and the implications of the work (if correct) are startling. One can’t pretend it doesn’t exist.”
That’s some epic hypocrisy, given that Watts’ mega-hyped “press release” of an un-reviewed un-accepted un-published paper COMPLETELY IGNORED Richard “this should settle the debate” Muller.
Watts’ shiny object is a desperate attempt to change the subject. It’s contemptible.
And, Robert, please tell us who has reviewed the BEST update? And where it has been submitted?
To be fair, the BEST analysis has stood up to a lot of scrutiny; arguably even before the BEST analysis was released, we had mini-BESTs all over the place showing a similar result.
I think I remember the BEST paper was ignored at first with the excuse that it hadn’t been peer reviewed. Oh, and of course Science By Press Release Is Bad ™.
But seriously, I am fine with a challenge being made through a press release, and in this case the change-of-subject thingy is on subject, so it’s valid.
The hyperbole doesn’t help your argument. As it stands, neither the Watts paper nor the BEST analysis has been reviewed.
That’s certainly the false equivalence Watts is selling.
But I nowhere make the argument that we should treat Muller’s account of BEST as if it were a peer-reviewed paper. The interesting thing about it, at this point, is that he’s changed his mind.
If one is inclined to speculate, Muller’s findings are more fruitful ground, because he doesn’t have Watts’ long history of false and misleading claims that didn’t pan out.
You might say I’m . . . skeptical.
Robert: I’ve seen several quotes from 5-10 years ago indicating that Muller was a firm non-skeptic then. He may have had a brief flicker of doubt that has since been erased by BEST, but he definitely didn’t convert from skepticism to believing.
The first four BEST papers have been reviewed, and have been modified in response to these reviews (which led to no change in the basic methodology or conclusions).
Unless you think that Muller’s daughter and team member is an outright liar.
It’s more accurate to say that the Berkeley papers are under review.
If you would compare the first versions with the versions posted now you would see that they are being improved as the result of review.
At least one of the papers was reviewed, rejected, revised, and recommended to be rejected again. See the homepage of McKitrick, Ross.
Muller’s method of comparing temperature change to static population levels, rather than to the change in population levels, is fundamentally flawed at such a basic level that it cannot be accidental.
For example, if you compared the change in the speed of your car (on level ground) with how far down the gas pedal was pressed, you would see no correlation between the gas pedal and your car’s speed.
This is what Muller has done and from this he concludes there is no correlation between temperature and urbanization.
However, if you compared the change in the car’s speed to the change in the gas pedal, you would find very close correlation. This indicates a possible cause and effect relationship between the gas pedal and the car’s speed.
Muller has not done this. He did not look at the change in urbanization. He looked at static urbanization and compared this to the change in temperature.
This is such an obvious flaw that one can only conclude that this was not accidental. Rather that a flawed methodology was knowingly used to try and disprove a connection between population change and climate change.
Thus leading to Muller’s conclusion that in the absence of any other explanation, the cause must be CO2. The logical fallacy of argument from ignorance: if we can’t find the cause, then whatever we did find must be the cause. This was the same argument that led to the burning of witches. If we can’t find the cause, the cause must be what we can find – our neighbors.
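The gas-pedal analogy above can be checked with a toy simulation (this is a sketch of the commenter’s argument only, with invented numbers; it says nothing about what BEST actually did): when speed simply tracks pedal position, changes in speed correlate with changes in pedal position but not with the pedal level itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pedal position wanders as a random walk; speed tracks it proportionally.
pedal = np.cumsum(rng.normal(size=2000))
speed = 3.0 * pedal  # toy steady-state response: speed proportional to pedal

d_pedal = np.diff(pedal)
d_speed = np.diff(speed)

# Change in speed vs. static pedal level: near-zero correlation.
r_static = np.corrcoef(pedal[:-1], d_speed)[0, 1]
# Change in speed vs. change in pedal: perfect correlation in this toy.
r_change = np.corrcoef(d_pedal, d_speed)[0, 1]

print(round(r_static, 3), round(r_change, 3))
```

Whether this toy captures what BEST actually regressed is exactly the point under dispute in the thread.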
‘At least one of the papers was reviewed, rejected, revised, and recommended to be rejected again. See the homepage of McKitrick, Ross.”
Hmm. No Ross recommended rejection.
After his last comment, his commentary was taken in, changes were made per his suggestions, and his points were addressed, actually using data from ross’s own papers and the editor now has a revised copy. There is no final word on that, that I know of
After his last comment, his commentary was taken in, changes were made per his suggestions, and his points were addressed…
I’d maintain that they were *not* addressed – per his second review:
I had given some suggestions about how to fix the problems in the methodology in my earlier review, including one idea that would have been relatively straightforward to implement using easily-available data. Unfortunately the authors have made no methodological improvements, and
the arguments they offered for keeping their technique unchanged are, as I will explain, unpersuasive. So it will come as no surprise that my view of this draft remains unchanged from before.
(Unless, of course, you’re referring to activity after that review…)
Mosher: “Hmm. No Ross recommended rejection.
After his last comment, his commentary was taken in, changes were made per his suggestions, and his points were addressed, actually using data from ross’s own papers and the editor now has a revised copy.”
Muller: “There were no mistakes in that paper. McKitrick had comments and found things he thought were mistakes, but we wrote back to him and told him he was wrong.”
I’m having a hard time resolving these two statements. Help?
Even an idiot should know that turn about is fair play.
On the contrary, only an idiot wouldn’t spot the hypocrisy of one outclassed weatherman who whined and cried and held his breath until he turned purple — only to turn around and do what he claimed to despise.
That’s not “turnabout.” Just standard operating procedure for a liar and a hypocrite.
I’ve been ignoring your rants, but now I see what I’ve been missing. Mea Culpa. You really do view the world through blood-red glasses and think that everyone who disagrees with you is a liar, hypocrite, fool, etc. I’m not sure when you’ll go all Gleick on us, but I will have to watch more closely.
Well, when we first released our preprints we released the data. In fact we gave our stations to Steve McIntyre and he wrote about that dataset. Sadly, the Watts paper doesn’t release the dataset or the raw data (photos) used in the classification, so it’s impossible to check duplicates or audit anything. That said, they do use a proxy for rural that is based on data that is not suitable for use. I’ve discussed this before, but it bears repeating. Finally, they have the amplification figures wrong, as has been discussed at CA. The statistical analysis at the end doesn’t take account of some vital details. It looks rushed. With the spatial distribution they have, they need to control for continentality, or at least show some sort of control for that.
I can believe it was rushed. And that as of now BEST has the edge on openness of data.
Cross-check the acknowledgments, and who is thanked there, against the author list.
Steven Mosher | July 29, 2012 at 7:55 pm
Stefanthedenier against BARKLEY Uni & Dr. MOSHER
All that data was collected to promote the claim that water vapour (H2O) + CO2 are GLOBAL warming gases = ‘’for the cause’’, think green. If you burn a 5-ton tree, 0.5 kg of ash is left – the rest is all gone in smoke as water vapour + CO2. CO2 is food, water is drink for the trees and crops; demonized by Berkeley & Mosher. Do you have the dignity to supply your braintrusts with some truth? ::: http://globalwarmingdenier.wordpress.com/open-pandoras-box/water-vapor-h2o/
New methane (CH4) creation is demonized by Berkeley ‘’scientists / academics’’; the truth is COMPLETELY the opposite. Do you have the guts to supply the truth to them? ::: http://globalwarmingdenier.wordpress.com/methane-ch4/
Students from Berkeley: compare what you have been dished V what is on those threads. You are wasting your parents’ and taxpayers’ money / wasting the best years of your lives to be indoctrinated in voodoo science – people on the streets are learning the truth from those threads. When you graduate you will know less than when you enrolled; carbon mania will not be trendy and popular for very long. Here is another one for you ::: http://globalwarmingdenier.wordpress.com/climate/
The wife and I just spent 6 weeks exploring 8,000 miles of rural America.
Rural doesn’t necessarily mean an absence of UHI. We’ve been through more than one rural town with a population measured in the 100’s and a nice 12-foot-wide paved bicycle path paid for with federal or state ‘grant money’.
Every time there has been an economic downturn in the US, the politicians in Washington open the floodgates with money for ‘shovel-ready road projects’. The approval process for road projects is quite expeditious in ‘rural America’, and they help themselves to a nice chunk of money and pave something.
IMHO Anybody who tries to determine whether a thermometer is polluted by UHI effects based on whether or not it is ‘rural’ is going to fool themselves.
The Watts paper has only been posted online for open review and will be submitted soon.
Hi Judy – I have posted on the new Watts et al study. It undermines a crucial fundamental underpinning assumption of their analyses (and that of NCDC, CRU and GISS), that siting quality does not matter in terms of the trends. Siting quality does matter – see
Also, their statement that
“…the Earth coverage was better than 97% complete..”
with respect to surface temperature samplings, shows a remarkable lack of understanding on their part of the actual spatial variability in surface temperatures, including their anomalies.
Ooh, I misread. I thought they said the coverage was 99 and 37/100ths percent complete.
“It undermines a crucial fundamental underpinning assumption of their analyses (and that of NCDC, CRU and GISS), that siting quality does not matter in terms of the trends.”
Is that one of their assumptions? I thought one of the reasons for homogenization was precisely to deal with siting quality issues.
The spatial variability is best understood by looking at the semivariogram. Or you can look at the temperature field; whether that field captures the variability is a testable claim.
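For readers who haven’t met the term Mosher uses: an empirical semivariogram plots half the mean squared difference between pairs of observations against their separation distance. A minimal sketch on synthetic one-dimensional “stations” (all names and numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "stations": values with smooth spatial structure plus noise.
x = rng.uniform(0, 10, size=300)            # 1-D station coordinates
z = np.sin(x) + 0.1 * rng.normal(size=300)  # observed field values

# Pairwise distances and half squared differences (upper triangle only).
h = np.abs(x[:, None] - x[None, :])
g = 0.5 * (z[:, None] - z[None, :]) ** 2
iu = np.triu_indices(300, k=1)
h, g = h[iu], g[iu]

# Bin by distance to get the empirical semivariogram gamma(h).
bins = np.linspace(0, 3, 13)
which = np.digitize(h, bins)
gamma = np.array([g[which == b].mean() for b in range(1, len(bins))])
print(np.round(gamma, 3))
```

Nearby stations should show small semivariance, rising with distance until the field decorrelates – which is what makes the “testable claim” testable.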
I’d say the Watts et al paper strengthens the case that homogenization or some other approach is necessary to deal with discontinuities in station data due to station moves, instrument changes, etc.
Of course Watts’ spin will be that the observed trend is 50% due to homogenization … and I’m sure RPSr will ride that bandwagon.
Interesting, innit? Mosher contributed to BEST, which confirms his previously stated position(s). Watts contributes to his paper, which confirms his previously stated position(s). Both have supporters and detractors, in roughly the proportions you’d expect from the believer/sceptic ratio of the observers/commentors. Both feel their results are convincing, while the other’s, not so much. Confirmation bias much?
Actually, my first positions in 2007 were pretty skeptical. You might want to ask anyone around in 2007.
I’ll verify that …
Thanks, dhogaza. It’s been a long journey.
And yet, it cools.
> you might want to ask anyone around in 2007
I wasn’t around in 2007, but I can verify what Steven Mosher says as of early 2009. Disappointingly few people try hard to “call ’em as they lay,” IMO Steven and Zeke are two of them.
Find people you respect, notwithstanding disagreements on important topics… more interesting and productive conversations that way.
You noticed that too, eh? Shocking, isn’t it?
“Finally there are the political or personal issues that have been raised around the subject. We know of no way to see into the hearts of men. Science provides us with some safeguards against personal bias: sharing data and sharing code. See the new website for those resources.”
One check against confirmation bias: we posted our data and our approach. Have at it.
Having said that, I like the approach BEST takes of treating such discontinuities as two separate sets of station data, and given that they have a world-class statistician on the team, I have confidence in their work.
The obvious error in correlating rate of change in temperature to static population shows that even a “world-class statistician” is able to make “world-class” mistakes.
See page 3 of both reviews below for an explanation.
In my view, making a claim about what happened 250 years ago based on thermometer readings invalidates the rest of the work. Thermometer coverage for the N.H. wasn’t sufficient until about the turn of the last century, for the S.H. it wasn’t sufficient until about 70 years ago. It’s just that simple. http://suyts.wordpress.com/2012/07/29/muller-never-was-a-skeptic/
“It is ironic if some people treat me as a traitor, since I was never a skeptic — only a scientific skeptic,” he said in a recent email exchange with The Huffington Post.
So in point of fact, he’s saying he was a skeptic of the science, and was not a skeptic of something other than the science, whatever the H that was.
In my opinion Muller was a skeptic. He was skeptical of the global temperature records and that’s why he did the BEST work. He subsequently accepted global warming as a result of his BEST work. In my opinion that makes him a converted climate skeptic. A lot of folk see it that way too, including more than a few headline writers.
Remember that no-one owns the definition of what a climate skeptic is. Folk on here are fond of telling me climate skeptics are not a monolithic group, don’t all believe the same thing, etc. Well the other edge of that sword is you don’t get to dictate to everyone else whether or not Muller was a skeptic. We get to choose for ourselves.
He wasn’t a skeptic, he was only ever a ‘scientific skeptic’. Now that makes me feel a lot better, the bruises from rolling around on the floor helplessly mirthful were starting to ache.
Even muller doesn’t get to decide. If he claims he wasn’t a climate skeptic then I simply disagree with him. In my opinion he was.
I think you have this right. Nobody owns the definition of skeptic. What’s helpful in this case, I think, is to look at how Muller concludes his op-ed:
This suggests three levels of skepticism even in Muller’s mind: a) global warming which in the context means the land temperature record (not the ocean heat as Pielke Sr would prefer) b) its human causes (where Judith Curry also parts company with Muller) and c) what can and should be done about b).
Muller seems to be claiming no longer to be a skeptic on a) and b). He doesn’t make clear if he’s a skeptic on c) – where the most reasonable definition of skeptic I think is someone who doesn’t accept the IPCC’s recommendations of how much human global CO2 emissions should be curbed. I’m a skeptic on b) and c) – as I think is Dr Curry, though I don’t think she would use that terminology.
But fair cop, use the terms as you see fit. Just be totally clear how you are using them.
The sound of you clutching at straws is almost deafening even through me laughing my head off……..
OH, this is fun. :-)
The quotation “”I was never a skeptic” – Richard Muller, 2011″ is a substantial distortion of the truth.
Right…. a silly bit of wordsmithing, but I wanted people to drop down to see the lack of thermometer coverage 250 years ago. By my count, we had about 10 thermometers, all in Europe.
How does he even imagine he can make a claim about the global temps based on this information? He certainly can not make any legitimate claim towards this. Given this bit of lunacy, how can anyone take any other claims he makes seriously?
How does he even imagine he can make a claim about the global temps based on this information?
It’s the trees, stupid. Maybe he talks to them or something…
Yes, a tree version of Dr. Doolittle. :D
Evidently they can make such claims; the error range is on the graph. It isn’t infinite.
I suppose what they did was take a limited subset of readings from various recent years and see by how much the global average varied from that subset.
Put it this way: if global average temperature is within 1C of a small number of european thermometers 95% of the time then european thermometer average +- 1C is your 95% range.
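The calibration idea lolwot describes – use a densely sampled period to learn how far a sparse subnetwork strays from the full average – can be sketched with toy numbers (this is an illustration of the idea only, not BEST’s actual jackknife or spatial-sampling procedure; all sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "modern" period: 600 months, 500 stations sharing a global signal.
n_months, n_stations = 600, 500
field = rng.normal(size=(n_months, n_stations)) + \
        rng.normal(size=(n_months, 1))      # shared global signal per month
global_mean = field.mean(axis=1)

# Pretend only 10 stations existed (the sparse early network).
sparse_mean = field[:, :10].mean(axis=1)

# Empirical 95% bound on the sparse-network error, learned from dense data.
err = sparse_mean - global_mean
bound = np.quantile(np.abs(err), 0.95)
covered = np.mean(np.abs(err) <= bound)
print(round(covered, 3))  # close to 0.95 by construction
```

The real question, raised elsewhere in the thread, is whether ten European thermometers behave like a random subsample of the globe; the sketch assumes they do.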
I was referring to the silly nature of making such a claim and how this reflects on the rest of his work. Yes, I can state it was probably between 40-60 deg F.
We had about 10 thermometers in Europe in 1762. And, he makes a claim towards global temps based on that? People should laugh at him, not seriously address anything he has to say.
Check the additions in the colonial dataset
I’m looking, but what I found was this….. “The BerkeleyAverage procedure allows us to use the sparse network of observations from the very longest monitoring stations (10, 25, 46, 101, and 186 sites in the years 1755, 1775, 1800, 1825, and 1850 respectively) to place limited bounds on the early average.”
Steve, this is laughable….. there’s no two ways around it.
10 sites in 1755 = *global mean temperature*
Seriously? Agree with suyts; this is a huge hit to the credibility of the rest of the work.
suyts: based on latitude, longitude, and altitude, you can in fact estimate the temperature at an unknown location to ±1.6 C.
I found that result pretty shocking – till you test it. Also, there is probably some room for improvement if you include other parameters in the external drift.
In the simplest terms possible here is what you do.
The temperature at a given location is defined as a function of a deterministic process and a random process and an error.
So very simply you create a regression where the temperature is expressed as a function of latitude, longitude, altitude and seasonality.
This equation will have a residual. Think of the residual as the weather.
The “climate” is the deterministic part. The residual is then the weather at a given location, or rather the random part that is not explained by the deterministic equation. Hmm – take a look at this paper, which gives you a flavor of what is involved in these kinds of approaches.
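Mosher’s recipe – a deterministic regression on latitude, altitude, and seasonality, with the residual standing in for weather – can be sketched with synthetic data (the coefficients and variable names here are invented for illustration, not taken from the BEST paper):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# Hypothetical station-months: latitude (deg), altitude (km), month of year.
lat = rng.uniform(-60, 70, n)
alt = rng.uniform(0, 3, n)
month = rng.integers(1, 13, n)
season = np.cos(2 * np.pi * (month - 7) / 12)   # crude seasonality term

# "True" climate: cooler toward the poles and with altitude, plus noise.
temp = 25.0 - 0.4 * np.abs(lat) - 6.5 * alt + 5.0 * season \
       + rng.normal(scale=2.0, size=n)          # residual = "weather"

# Fit the deterministic part by ordinary least squares.
X = np.column_stack([np.ones(n), np.abs(lat), alt, season])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
resid = temp - X @ coef                         # the "weather" left over
print(np.round(coef, 2))
```

In kriging with external drift the residual would then be modeled spatially rather than treated as pure noise; the regression above is only the deterministic half of the story.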
Steven: “latitude, longitude, altitude, and seasonality”. What about proximity to oceans or other local climate? Does longitude somehow account for that? Last year, we went on a vacation to the Pacific Northwest, and I can assure you that a roughly straight latitude line goes from pacific rainforest to desert, from small temperature swings to large ones, in a day’s drive.
Steven, I keep looking for a forum to ask you another question, and can never seem to find it. Perhaps if I could sidetrack the discussion for a moment, to see what you think. I’ve been experimenting with Empirical Mode Decomposition (EMD) to decompose various temperature series. I downloaded the latest BEST from your link — thanks — and ran it through EMD. (R has an EMD package.) Two things I have to examine in depth: 1) boundary effects, and 2) does the decomposition in fact have a physical basis. I am working with the single EMD and with EMD ensembles.
At first glance the (non-ensemble) decomposition shows a trend for BEST temperature which has been increasing since around 1810, with an accelerating and decelerating rate that peaked around 1870 at roughly 0.06 C/decade, then again around 1990 at roughly 0.14 C/decade.
This would seem to support a luke-warmish interpretation: the temperature trend has not flat-lined recently, though the increase is currently decelerating at a rate that (if unchanged) would lead to zero growth by the end of the century. The apparent zero growth of late is the slowing growth rate combined with quasi-cyclical processes (the trend plus the three largest IMF’s).
Any thoughts or suggestions on EMD and temperature?
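For readers unfamiliar with EMD: the core “sifting” step subtracts the mean of the upper and lower extrema envelopes, leaving the fastest oscillation behind. A deliberately minimal single pass with linear envelopes (real EMD packages, including the R one mentioned above, use cubic-spline envelopes and iterate to a stopping criterion; everything here is a toy):

```python
import numpy as np

def sift_once(x):
    """One EMD sifting pass with linear envelopes (toy version)."""
    idx = np.arange(len(x))
    mx = [i for i in range(1, len(x) - 1) if x[i] > x[i - 1] and x[i] > x[i + 1]]
    mn = [i for i in range(1, len(x) - 1) if x[i] < x[i - 1] and x[i] < x[i + 1]]
    upper = np.interp(idx, mx, x[mx])   # envelope through local maxima
    lower = np.interp(idx, mn, x[mn])   # envelope through local minima
    return x - 0.5 * (upper + lower)    # subtract the local mean

t = np.linspace(0, 1, 1000)
fast = np.sin(2 * np.pi * 10 * t)       # fast quasi-cycle
signal = fast + 0.5 * t                 # plus a slow trend

imf = sift_once(signal)                 # first (approximate) IMF
residual = signal - imf                 # what's left approximates the trend
```

Away from the boundaries the extracted IMF tracks the fast component and the residual tracks the trend, which is the behavior the boundary-effect caveat in the comment above is about.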
One question on the assumptions underlying the scalpel: are they still valid if an external forcing leads to a very strong but rather short-term disturbance (e.g. a very strong volcanic eruption) that acts either regionally (a high-latitude eruption) or hemispherically (a tropical eruption)?
Yes, some folks use lat/lon to represent continentality; others use distance from the coast. I’m going to play around with distance from a large body of water. In the end I’ve found it difficult to account for much more than 60% of the variance with the regression or external drift.
No, I have not looked at EMD… sounds interesting… and it’s in R…
Steven, Thanks for the comments. Any thoughts on whether some kind of decomposition approach, like EMD, for finding a trend (the residual after decomposition) might give a better trend than the usual straight lines or moving averages?
The upside of EMD (compared to wavelets, Fourier, etc) is that it seems to find physically meaningful quasi-cycles, that it is data-driven, and that it’s seemingly less prone to boundary effects. The downside is that it does not have an underlying deep theory, hence the “Empirical”.
It seems to me that if there is an actual trend and if it actually is decelerating, there must be significant negative feedbacks that current models do not account for.
Last question: do any of the online or other resources let you see actual BEST scalpel points for certain series? For example, I visit DC’s Reagan National Airport frequently, and that particular station has had a significant move in its past, and significant transitions in terms of surrounding land usage, runways and other manmade surfaces, and an increase in jet travel. (I believe it’s only about 150 meters from the runways, and also adjacent to asphalt.)
“In my view, making a claim about what happened 250 years ago based on thermometer readings invalidates the rest of the work. Thermometer coverage for the N.H. wasn’t sufficient until about the turn of the last century, for the S.H. it wasn’t sufficient until about 70 years ago. It’s just that simple.”
It’s actually pretty simple.
Cressie, Noel (2012). Statistics for spatio-temporal data. Hoboken, N.J.: Wiley. ISBN 9780471692744.
Three cheers for the CET then; if the correlation with the N.H. is good since 1880, there’s no reason for it to be any different before 1880.
Ya, you can go with that. Just put your error bars on it.
Thanks for the advice. That is the way you data-processing people look at it, then: error bars on error bars.
As an engineer I never bothered with the error bars; just look at the two envelopes (top and bottom) and an appropriate moving average in the middle – good enough for all practical purposes.
“Click on image to embiggen”
‘Embiggen’ ? Is that a word? What’s wrong with ‘enlarge’ ?
Embig is bigger than enlarge is large.
A nobel spirit embiggens the smallest man. It’s a perfectly cromulent word.
Look it up
I mean noble of course
thanks, a new word I can use to get my viagra spam through filters
It’s ‘ganz einfach’ (German: ‘quite simple’). One of the co-authors is German. If you spoke it (I do, having lived and worked there for 6 years) the transliteration would have been clear: to enlarge. Get a bit more multicultural.
Probably related to the newish US term “bigging up”, ~= exaggerate
> What’s wrong with ‘enlarge’ ?
Embiggen is from The Simpsons.
That uptick in the DTR is interesting.
Yup… it’s one of those ‘dig here’ things.
Steven, that will never do, it hints at a 30 to 60 year cycle and with the error bars, it may be warming, may be not warming. BTW, how do y’all handle the out of phase oscillations? That SAM dealy is pretty pronounced.
A moment of silence for the Muller stillborn child.
One of the vanishingly few “skeptics” with actual science chops says AGW is real and his findings help settle the matter.
Watts completely ignores the guy he celebrated as a savior, even though in 2011 he proclaimed:
“I’m prepared to accept whatever result they produce, even if it proves my premise wrong.”
So instead of accepting the findings as promised, or explaining why he won’t, he doctors up a “PRESS RELEASE” (which differs from an ordinary blog post at WUWT how?) on an unsubmitted, unreviewed, unaccepted “paper” reiterating the same Surfacestations claims the reviewers shredded last time, forcing Watts into another humiliating walk-back.
Why does he do it? Because he knows people like you will grasp at every straw and fail to apply the most basic skepticism to his shiny object.
Give him this: Watts knows how to manipulate his useful idiots.
‘To be prepared’ to do something in the future is not an absolute commitment to do so. If Watts had written ‘I will accept …….’, you’d have had a point.
But he didn’t and you haven’t.
You’re hiding behind the fig leaf of semantics on this one, I’d say. The leaf isn’t really big enough, and it’s not a pretty sight!
It looks like Watt’s preparations were all one way. He was “prepared to accept whatever result” which went his way but not all “prepared to accept” any result that didn’t.
Actually, Muller only says that in his view the temp trends are OK. He disavows the alarmist crap you wallow in.
Robert | July 29, 2012 at 7:24 pm
One of the vanishingly few “skeptics” with actual science chops says AGW is real and his findings help settle the matter.
Can Robert use his own chops and tell us exactly how BEST helps settle CAGW attribution? Or is this just another one of his legendary content-free emotional outbursts?
Mosher will say a paper is not a thing.
No, this paper is bits in a file. It is a thing. I can tell you where it is and how to find it. You can go see for yourself. If you follow the instructions, you will find the bits. We can compare bits and see that we agree.
no, go get me a cup of science
“No this paper is bits in a file”
Then it’s not science.
Papers are not science. Papers are advertisements for the behavior of scientists: basically, we did this, and we found that.
If the paper is reproducible research, then you can take the bits they provide (the data and methods), repeat their behavior, and see for yourself.
Science is what scientists do. You can look at a paper as instructions for behavior. Take this data, apply this method, see this result.
What these instructions provide for you, the reader, is a way of ensuring that the results are not merely a function of the writer’s bias or interest.
No matter how much we discover, the size of the unknown remains firmly fixed at infinity.
Steven Mosher, what makes you think only material objects are things?
All entities are posits. The operational question is how parsimonious are you and does the posit actually explain anything that cannot be explained otherwise. If you want to discuss immaterial things, I’ll suggest a priest.
“It offends the aesthetic sense of us who have a taste for desert landscapes”
Steven Mosher, you have said “…you’ve got a science that….”
You have said “the science tells us”.
Seems you want it every which way, depending on how you want to argue.
Instructions are a thing. But science is not.
Steven Mosher, how can something that is not a thing say something?
The paper can say something only because it’s a thing. However, you insist, without reasoning, that science is not a thing. Sure, you say it’s what scientists do, but that does not make it not a thing.
Still, you say the science says this or that.
Let me see if I can be clearer for you. In this discussion there are any number of people who speak of science as if it were a thing. A thing that has always existed and never changed. An ideal, something out there independent of society, culture, history. That is the notion I am arguing against. If you want to insist that there really is this thing called science, I’ll suggest you go get some.
Thank you. There are several ways to denude your well decorated topia vacua.
If science is what scientists do, but science is not what has been done, then you could not say that anything has been said when all is said and done.
“If you want to discuss immaterial things, I’ll suggest a priest.”
You appear to be a trickster with this message.
Thanks for responding; you have been busy and yet replied to so many queries.
I think you mistake what ontological commitments are. I have only resisted your claim that science is not a thing.
Even in your explanation you can see that the people you describe are in your mind, referencing a virtual thing. As you do also.
Because people perform the usual bodily functions during the day, and also sleep, and because scientists are people, when they poop, sleep or kick a dog, it’s science. Science is what scientists do.
Unless you are actually talking about that virtual scientist who ONLY does *ACTUAL* science and never poops ??
You just gotta laugh! ☺
They seem to have Denmark in North America
Oh yeah they mean Greenland
em- + big + -en. Ad-hoc coinage, created independently twice: first by C. A. Ward in 1884 in the British journal Notes and Queries (“but the people magnified them, to make great or embiggen, if we may invent an English parallel as ugly. After all, use is nearly everything”) and then by Dan Greaney in 1996 for The Simpsons.
Embiggen, coined by Dan Greaney, has seen use in several scientific publications
English is like that–usage is what determines validity. Probably why it’s so dynamic and rich.
Good that you posted this. It further documents biases in supposedly earnest science papers. First example is your own comment. Second example is purporting to be certain about temperature all the way back to the beginning of the invention of the thermometer. Third example is lack of correction for homogenization bias published last year (Steirou and Koutsoyiannis). Final example is the amazing lack of digging into the models behind future prognostications. GCMs are intimidating, but accessible via their technical documentation. Their results don’t hold up. And when you start to dig into their mathematical guts, you start to see why. See for example Chung et al., Model simulated humidity bias in the upper troposphere, J. Geophys. Res. 116: D10110 (2011).
Data problems are only part of the AGW story. Faulty logic and faulty models are another, less explored but equally rewarding.
Which does not say there is no AGW. The questions are how much compared to natural background variation (hockey sticks); how much more (current thread via projections); so what (sea level, ocean acidification, crop yields, climate extremes). And most of those answers are less, mattering less, than the IPCC consensus. Much less.
” Second example is purporting to be certain about temperature all the way back to the beginning of the invention of the thermometer.”
Guess you failed to notice how large the error bars are? Error bars indicate UNCERTAINTY.
“Third example is lack of correction for homogenization bias published last year (Steirou and Koutsoyiannis). ”
1. Berkeley doesn’t use adjusted or homogenized data.
2. The effect S&K find disappears if you make slight changes to their station selection criteria. Not very robust.
Of course, you do (basically) homogenize your data.
Berkeley fits station fragments (post-scalpel) together to generate a temperature field, with a weighting function based on how well individual fragments match other records available at the time. It doesn’t adjust any individual station’s record per se. It’s really a very different approach to homogenization than, say, Menne et al.
I agree it’s very different than the homogenization we’ve seen before. I just thought it sounded funny when I heard it said you guys “don’t use… homogenized data.” It’s true you don’t use it as an input, but…
“Of course, you do (basically) homogenize your data.”
no. lets take a simple example of homogenization.
You have series X. 1,1,1,1,1,1,2,2,2,2,2,2,2
And looking at metadata you find that series X had a station move from
high altitude to low altitude.
You use a lapse rate to “correct” the series
1,1,1,1,1,1,.9,.9,.9, .9 etc
And by doing so you introduce an error due to adjustment. You hope that these errors average out.
In the Berkeley method, the series are just cut. You have two different stations. No homogenization.
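Mosher’s toy series above can be put in code. This is only an illustration with hypothetical numbers (the 1.1 “lapse-rate” offset is made up to show an imperfect correction); it is not the actual Berkeley code:

```python
# Toy illustration: classic homogenization adjusts a series across a known
# break, while the Berkeley "scalpel" simply cuts it into two records.

def homogenize(series, break_idx, offset):
    """Apply a single-step adjustment after a documented station move."""
    return series[:break_idx] + [x - offset for x in series[break_idx:]]

def scalpel(series, break_idx):
    """Split one record into two shorter records at the break point."""
    return series[:break_idx], series[break_idx:]

x = [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2]

# An imperfect lapse-rate correction (offset 1.1 instead of the true 1.0)
# leaves a residual error of 0.1 in the adjusted values (2 - 1.1 = 0.9).
adjusted = homogenize(x, 6, 1.1)

# The scalpel makes no adjustment at all: just two shorter records.
frag_a, frag_b = scalpel(x, 6)
```

The point of the sketch: any adjustment carries its own error, which you hope averages out; cutting defers the problem to the field-fitting step instead.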
There is a station quality weighting step,
“Another problem is unreliability of stations; some stations show large differences from
nearby stations that are not plausibly related to weather or climate; they could be measurement error
or local systematic effects (such as poor station siting, or excess heating in an urban environment). To
reduce the effects of such stations, we apply an iterative weighting procedure. Weights are applied
to the station contributions that affect the Kriging averages, i.e. the contributions made by
individual stations towards the estimate of the temperature at a given location.”
this weighting is iterated until convergence.
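The iterative deweighting described in that passage can be sketched generically. This is not the Berkeley code; the weight function and the numbers are illustrative assumptions, chosen only to show the shape of the procedure (outliers get small weights, and the estimate is recomputed until it converges):

```python
# Minimal sketch of iterative station deweighting: stations far from the
# consensus estimate receive less weight, and the weighted estimate is
# recomputed until it stops changing.

def robust_mean(values, scale=1.0, tol=1e-9, max_iter=100):
    est = sum(values) / len(values)  # start from the plain mean
    weights = [1.0] * len(values)
    for _ in range(max_iter):
        # Downweight stations relative to their distance from the estimate.
        weights = [1.0 / (1.0 + ((v - est) / scale) ** 2) for v in values]
        new_est = sum(w * v for w, v in zip(weights, values)) / sum(weights)
        if abs(new_est - est) < tol:
            break
        est = new_est
    return est, weights

# One "unreliable" station (9.0) among consistent neighbours: the estimate
# is pulled back toward the cluster, and the outlier's weight collapses.
est, w = robust_mean([1.0, 1.1, 0.9, 1.0, 9.0])
```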
Mosher, you say, “There is a station quality weighting step.” That’s what I was referring to.
Brandon, Then call it what it is.
Homogenization is used to create long series.
It’s used to reduce the temporal-spatial problem to a spatial problem only.
Slicing and deweighting outliers is an entirely different operation.
Mosher, I didn’t say it was homogenization. I said it was basically homogenization. The effects of the two processes have a lot of similarities.
Moreover, nobody has a monopoly on the term “homogenization.” Something can be homogenization without being the homogenization process which was used in earlier temperature records. I think anyone who looks up the definition for homogenization will agree my description was reasonable.
Dr. Curry’s summary says it all.
In effect a decent paper is turned to hype, like so much work associated with the climate consensus.
The really interesting question is why ‘hype’ and ‘climate science’ are increasingly synonymous.
Politics, the thrill of getting public exposure, and ego. For example:
Politics Spreading the Delusion: A Case in Point
Mosh or Zeke
I have been through the various backup papers but can’t find the actual stations used pre-1880 or the data sets utilised. Was the ‘original’ data used, or the adjusted material compiled by such as Phil Jones working for the EU-funded ‘Improv’ project? Can you help? Where is this information located? Thanks
The data is all online, including the colonial dataset. Go to the Berkeley website, or use my R packages.
If you have a problem with the R code just submit a request.
Noticed when you click on higher resolution versions of the two temp graphs you get this and this.
Which is.. that the AMO lags GMT and amplifies it by feedback mechanisms?
Two things about the BEST paper.
First, they show where the warming has occurred since the 1950’s and it isn’t the urbanizing areas (US is one of the least warming areas). This seems to say Watts’ effect won’t do much to the BEST results.
Second, it is interesting that the diurnal range globally over land has started to increase visibly in the last few decades. I can only imagine it is an effect of drying over land areas, which is expected if land warms faster than the ocean.
The “curve-fitting” reveals the 60-year 0.2 degree amplitude cycle that the skeptics are so excited about, but also shows that the AMO lags the land temperature in this frequency range. I have suspected for a while that aerosol and solar variations drove the land temperature changes, which then affected the North Atlantic. There seems to be increasingly more evidence that the long cycle is not ocean-driven, but I think some skeptics will hold on to it somehow.
When BEST Land preliminary first came out, Muller talked about the AMO as a possible natural explanation. My hunch has been it’s a tag-along, not a driver. Almost said it in response to Bart R., but was too chicken.
A starting point of 1950? What a sham and a shame.
Watts only started in 1979. What’s your complaint?
I’ve only done the UK and USA using BEST data, but here are the mean temperature anomalies of certain decades. (And why did they use 11-year decades? Jan 1950 to Dec 1960 compared to Jan 2000 to Dec 2010????)
1930–1940: 0.39
1950–1960: 0.17
2000–2010: 0.65
0.48C over 50 years versus 0.26C over 70 years.
1940–1950: 0.32
1950–1960: 0.17
2000–2010: 0.93
0.76C over 50 years versus 0.61C over 60 years.
Not as big, but still big.
By picking 1950-60 as the starting decade, you start in a big hole and therefore you can tell lies like “1.5F over 50 years”.
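The arithmetic behind this complaint can be sketched with synthetic numbers. The flat annual values below are assumptions chosen only so the decade means match the figures quoted above; they are not real BEST data:

```python
# Sketch: the rise you report depends heavily on which "start" decade you
# difference against. Decades here are the commenter's 11-year spans.

def decade_mean(annual, start, end):
    """Mean anomaly over the years [start, end] inclusive."""
    return sum(annual[y] for y in range(start, end + 1)) / (end - start + 1)

# Flat synthetic anomalies whose decade means equal the quoted values.
annual = {y: 0.39 for y in range(1930, 1941)}
annual.update({y: 0.17 for y in range(1950, 1961)})
annual.update({y: 0.65 for y in range(2000, 2011)})

rise_from_1950s = decade_mean(annual, 2000, 2010) - decade_mean(annual, 1950, 1960)
rise_from_1930s = decade_mean(annual, 2000, 2010) - decade_mean(annual, 1930, 1940)
# Starting from the cool 1950s dip makes the rise look much larger
# (0.48 over 50 years) than starting from the 1930s (0.26 over 70 years).
```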
JimD, Ruralization, how is that for a new term.
Russia started populating Siberia just before 1900 and boosted expansion in the 60s. The CO2 really kicked in after that it looks like :)
Did their influence spread into Canada?
Actually, I think the Canadian variety of winter wheat used back then did start in Russia or Siberia.
They were Russian Germans – Germans invited to farm in Russia. My wife is a Russian German. They were very successful wheat farmers in Russia, and established satellite farms in Canada, the Dakotas, Nebraska, Kansas, and Oklahoma. Essentially the wheat belt.
Band leader Lawrence Welk was a Russian German.
maybe I didn’t notice that they replaced their boreal forests with winter wheat in these places. I thought those forests were still there.
A lot of the forests are, but not all. Russian information is not easy to come by; early in the 20th century, wheat was one of Russia’s main exports, and they sucked as efficient farmers, needing 2 to 3 times the acreage of the US and Canada. They used controlled burns and spread peat dust to help clear land and melt snow. The smoke and wind erosion helped melt a little more for them and can spread around a good bit. You can Google Earth and see the kind of haphazard areas that are cleared further North and East.
“Roughly 60 percent of Russia’s spring wheat is produced in the Ural, Siberian, and Far East districts”
That has gotta have some impact and the expansion, wars and Soviet mismanagement tend to match temperature trends pretty well.
So these hitherto unknown, or perhaps hypothetical, fields of waving barley in northern Canada and Siberia would also have a higher albedo than boreal forests whether or not they have snow cover, which would be a cooling effect. Where do you get the warming from?
Where do I get warming from? Mainly early snow melt from both intentional and unintentional ash and dust fallout. Part of the warming appears to be land use impact on the measurements that may be real or not. Mosher mentioned that there are larger siting issues at the higher latitudes, which makes sense. Would you clear snow near where you lived?
As I mentioned before, the Russians know how to move snow and break ice. They were also not known to be the most conscientious of farmers. Overused farmland tends to have more erosion, compaction and less irrigation, which would lead to more local warming, snow melt and watershed damage. That is a pretty sensitive environment up there. Just being there is bad for the environment.
Call me skeptical. Farming doesn’t add to warming except by replacing trees that are CO2 sinks.
Jimd, here is an interesting tale of how not to farm.
> Did their influence spread into Canada?
the great one. played in LA when I lived there.
greatest player ever.
JIMD, “Call me skeptical. Farming doesn’t add to warming except by replacing trees that are CO2 sinks.”
Poor farming practices most definitely impact climate and compromise carbon and moisture storage in the soil. I believe Chief Hydrologist is up on the subject. The Virgin Lands program was only 35 million acres but the impacted area was likely much larger.
Moving population into the region also increased Siberian timber production, general construction, mining, etc. It was like a state-sponsored land grab, oh wait? It was a state-sponsored land grab :)
That graph does look like the AMO lags in some places, but in other places it looks like temp. lags? Hard to believe the land temp. drives the temp. of the ocean with its much larger heat capacity.
The land responds to the same forcing more quickly than the deeper ocean, so it makes sense that the land leads when they have common forcing, e.g. aerosols and solar.
It’s discussed in the paper – figure 6.
Of course if Watts’ paper holds up to scrutiny, similar analyses will have to be performed on the global datasets, so it is a little early to say it won’t do much to the BEST results.
Figure 4. Coffin, meet nail.
If you think the overly simplistic curve fitting of this paper tells us anything, you’re on shaky ground. If you think it is a “nail in the coffin” of anything, you’re a fool.
Brandon Shollenberger | July 29, 2012 at 7:59 pm |
On the contrary, the parsimony of Figure 4.’s elegant and straightforward presentation, while we are cautioned by excursions from the line of fit to not consider the correlations the only contribution to the system, nor to be wholly deterministic in nature, is explicitly and compellingly what Science seeks first and foremost.
Not “simplistic” but simplification. Look it up. I stand not on the ground, but on the shoulders of Scientists. And all you have, BS, is assertion and fiction.
Bart R, you’re welcome to believe what you want, but I do have a request. Would you please not refer to me as “BS”? That acronym has a rather negative connotation, and I prefer people call me just about anything else.
I will say that figure was the occasion of some good arguments.
My takeaway was that the fingerprint of the volcanic signal was pretty striking. Anybody who questions the accuracy of the record needs to explain how the hell that happened. In the course of that we even had the opportunity to identify some lacunae in publications on volcanic forcing which were missing data.
The simple curve fit illustrates that you do not have to understand all the fine detail of GHG forcing to grasp the big picture. In one way it’s hard for people who defended Scafetta to suddenly get religion about the approach.
Brandon Shollenberger | July 29, 2012 at 8:43 pm |
stevenmosher | July 29, 2012 at 11:05 pm |
This is one of those rare “fingerprint” images that overleaps several levels of reasoning and precaution justifiably. While Scafetta attempts the approach, he also manipulates the marrow of the numbers and contrives Chaos Theory word salad that falls apart on close inspection.
And.. Unduly and uncharacteristically, not because I know and trust that the work has been closely reviewed by people whose skills I regard, but because it’s summer and we all could use a break, I’m not saying “pending review” on this. While what’s come out of BEST is startling, the provenance of the ideas and the data justifies some congratulations prior to review.
It’ll be interesting to see what review does turn up, though.
People’s reactions come down to three:
1. I knew that
3. too simple
Oh, I think number 4 is right up there by now
4. pack of lies
Looking at the Watts paper in a little more detail I noticed the paper says “Comparisons demonstrate that NOAA adjustment processes fail to adjust poorly sited stations downward to match the well sited stations, but actually adjusts the well sited stations upwards to match the poorly sited stations.”
But if you look at figure 4 and figure 20 you can see the adjusted data is warmer than the poorly sited stations. I.e., it doesn’t “match the poorly sited stations”.
Something else is going on with the adjustments if the adjustments are warmer than all categories alone. It looks like a positive adjustment is being made to the raw data that doesn’t have anything to do with homogenization, which renders a direct comparison invalid.
Do the raw data figures in the paper include time of observation bias adjustment and/or any other similar “instrument”-like adjustments?
“Raw” data is used. No adjustments are made to the data.
Where stations exhibit objective discontinuities they are broken into separate stations.
Sorry, if you are talking about the watts paper, then I don’t know.
We would need to compare what the paper says with the actual data.
We do know that anybody can screw up data
Especially if it’s reviewed by friendly reviewers who are only interested in ensuring the right papers are produced to feed in to AR5. In that case errors, no matter how fatal, are acceptable.
In addition to the fact that anybody can screw up data, we have the bias: everybody is less likely to find the error in an analysis that supports his prejudices (or his favored arguments) than in an analysis that contradicts those.
I would classify the Hansen et al. paper on loaded dice as a strong example of this – so strong that also people like Tamino found the result suspect as the effect may have been implausibly strong for them as well. I have similar doubts on the Watts paper as again the effect appears implausibly strong taking into account how difficult it has been to find similar effects in all earlier analyses. I’m confident that people will scrutinize that analysis and either confirm that the analysis is done correctly or point out where the error is. Until that’s done I’m skeptical as I tend to be skeptical on most unconfirmed results by any research group.
lolwot writes with all his usual breathtaking cluelessness:
“Even muller doesn’t get to decide. If he claims he wasn’t a climate skeptic then I simply disagree with him. In my opinion he was.”
lw, I understand you folks are upset, but even you should be able to see how idiotic this statement is.
not if im an idiot
‘not if im an idiot’
Since when has that proposition ever been in doubt?
well it’s not a proposition
Lat — this isn’t one of those sites where you can proposition people!
for being funny and not having anything to do with you being (or not) an idiot.
So we have a 1.5 deg change in 250 yrs or .6 deg a century? I wonder what the trends are pre and post 1900 say? The difference should be the rate of warming that could be attributable to AGW. And where is the explanation for the flat spot starting about 1940? Gunpowder?
USA Tmax. 5 year averages,
It is now colder than several periods in the past and only .57C warmer than a 5 year period in the 1840s.
You mean this? http://berkeleyearth.lbl.gov/auto/Regional/TMAX/Figures/united-states-TMAX-Trend.pdf
In the U.S., there is certainly notably more warming in the min than the max over the past 30 years.
I prefer mine for two reasons.
1) Five year averages give a better sense of how it was warmer in the past.
2) Graphing the monthly anomalies gives you a better sense of climate.
I always associate UHI with a Tmin that warms faster than TMax.
Location location location.
As rpielke says at 29/07 5.55pm, siting matters.
Siting certainly does. That’s why Berkeley tries to use 2x to 8x more sites during any given month (post 1900) to calculate temperature than prior efforts.
The UHI effect is Climatology’s version of Hubble’s bad mirror. When finally launched in 1990, scientists found that the main mirror had been ground incorrectly, compromising the telescope’s capabilities. (wiki)
It is emblematic of the public school system. We’re paying for best results and what we’re getting is not worth the candle. It is what we have come to accept as the ‘best’ we can expect from government.
And Hubble’s bad mirror’s output was adjusted (via corrective optics added while it was in orbit) and the result was the most impressive astronomical instrument ever built.
Likewise, homogenization is meant to adjust for changes and biases in station data, resulting in a dataset useful for climate research.
You missed the point. Hubble was ‘corrected’ to reality. And that was as best as could be accomplished given the circumstances. And we knew what reality was before and after shooting an improperly-ground mirror into space. You cannot make bad data relevant. Garbage-in, garbage-out (GIGO); i.e., you can’t turn a pig’s ear into Flammkuchen.
” Hubble was ‘corrected’ to reality. And, that was as best as could be accomplished given the circumstances. And we knew what reality was before and after shooting an improperly-ground mirror into space. You cannot make bad data relevant.”
In the case of Hubble, the error in the grinding of the mirror was computable because it was due to a misuse of a tool. This allowed optical engineers to compute the corrections that were needed, that worked beautifully.
When we replace old temperature sensors with new ones, any difference in readings can be measured and these differences used to homogenize data in much the same way. Likewise, one can experimentally determine the change in temperature readings due to the introduction of Stevenson screens, and use that to homogenize older and newer data sets. Etc etc etc. There’s a rich literature on homogenization of historical climate datasets. I’m afraid that waving your hands dismissively on a random blog on the internet is unlikely to overturn that body of work …
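The side-by-side comparison described above can be sketched as follows. The sensor readings are synthetic and the overlap is unrealistically short; real practice uses long parallel-measurement periods and seasonal modelling, so treat this only as the shape of the idea:

```python
# Sketch of overlap-based homogenization: run old and new sensors side by
# side, estimate the mean offset during the overlap, and use it to express
# the older record on the new sensor's scale.

old_overlap = [10.2, 11.0, 9.8, 10.5]   # old sensor readings, overlap period
new_overlap = [10.6, 11.5, 10.3, 10.9]  # new sensor, same observation times

# Mean new-minus-old difference over the overlap period.
offset = sum(n - o for n, o in zip(new_overlap, old_overlap)) / len(old_overlap)

old_record = [9.9, 10.1, 10.4]                   # earlier data, old sensor only
adjusted_old = [x + offset for x in old_record]  # spliced onto the new scale
```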
If I understand correctly you’re saying that, given that temperature is an intensive variable and an average of an intensive variable is meaningless, using homogenized data makes it less meaningless, right? But, once you are on the reductionists’ dead-end road to nowhere it’s hard to get off the bandwagon. Unfortunately, the convinced model-makers of climatism who have been beating the drums are so dedicated to the concept of an average global temperature that they no longer even care if their models are tested and fail in every possible way: regionally, seasonally, temporally and historically.
1. we identified stations that are very rural.
2. we used a classification system that is more stringent than the classification system used by Watts
3. we calculated the global land temperature using only rural stations.
Answer: doesn’t make a difference.
There are some people whose daughter’s papers I will no longer read.
Kim, have you read her chapbook? It’s some pretty good poetry; won an award.
I think you can explain homogenization more easily than you can explain Richard Muller.
Like I said, science gives us no way to see into the hearts of men.
data, code. I prefer them.
Why does the raw data always turn up missing? We seem to have more than a homogenization problem: New study shows half of the global warming in the USA is artificial
Those morons used monthly data.
So, you are saying that stations with poor microsite (Class 3, 4, 5) DO NOT have significantly higher warming trends than well sited stations (Class 1, 2); or that this is NOT true in all nine geographical areas of all five data samples; or that the odds of this result having occurred randomly are NOT vanishingly minuscule; or is it your belief that none of these things has been inescapably demonstrated?
I am not sure “stringent” is the right word here. It is a stricter classification system in that fewer stations get top ratings, but it is much less precise at evaluating the actual impact of biases on individual stations.
This lack of precision muddies the water to such an extent that categorization becomes almost meaningless.
That’s the problem.
From my reading, this is the crux of the biscuit.
Question: I’ve seen it claimed that class 2 stations warmed at a higher rate than class 3. Could it be that the UHI effects are a curve with diminishing returns as urbanization increases? E.g., those first roads and parking lots and land use changes have a greater effect than we expect?
If you compare class 1 vs class 2 stations, what does that look like? If class 2 increases more rapidly than class 3, then comparing 1 and 2 to 3 and 4 will actually mask the effect. Is that correct?
The main unexpected and unexplained result is the shrinking of the diurnal temperature range. The Tmax – Tmin averaging is basic to the whole study and could introduce a slowly varying bias to the result. Why is this important? Some mathematical modelling is based on inertial delays and some on transport delays. The inertial ones can only give correct solutions if the real-life data is a true average. This topic certainly needs more research.
I note that the Berkeley group have continued the process of subjective labelling of some global temperature ranges as ‘anomalous’ or normal. This makes the 1940 temperature normal while the 1905 temperature was anomalous. Can anyone believe that classification? IMO the IPCC introduced this classification to justify its erroneous conclusions concerning their explanation of climate during that period. Indeed, what the IPCC failed to learn from climate changes in that period was that climate could change rapidly from rising to falling temperatures, which could only be attributed to limits on the amount of narrow-band earth’s radiation that CO2 could absorb. All resonant systems have limits on the amount of energy they can absorb. The other lesson from this episode is the decades it takes for atmospheric temperature changes to percolate through to the oceans. See my web site.
The omission of reference to data indicating that there had been no global temperature rise in the last decade might be due in part to the fact that the method of smoothing (decadal central moving average) could not include data after 2006. However, other means exist to cover the missing years. Incidentally, why use 10-year averaging when 11 is closer to the sunspot cycle and offers some cancellation of the latter’s effects, while the 10-year choice might over a period erroneously amplify some?
I agree with Judith that the attribution effort is simplistic, wrong, and adds nothing to the paper. Aside from not including a host of known non-CO2 forcings (halocarbons, N2O, methane, solar cycle, etc) they ignore man-made sulfates, which are claimed by at least some modeling groups to have offset up to half of all GHG forcing… Something which is speculative at best, but still ignored. The paper would be stronger without a discussion of attribution to CO2.
” (halocarbons, N2O, methane, solar cycle, etc)”
Without even re-reading what Muller said, I can state that you’re wrong regarding two of these items, methane and solar cycle.
Makes me think you didn’t actually read what Muller said:
I read what he said. In spite of his caveats, it remains a silly exercise. He is simply mistaken about ln(CO2) being a reasonable proxy for combined forcing. It’s not.
Can someone tell me if I’m missing something? The new BEST paper says:
However, the paper explains it used CO2 levels as a proxy for all anthropogenic emissions. If that’s true, the “forcing parameter” they used is not “for CO2 doubling.” Its actual relation to CO2 would be far more complicated. The actual value could be quite different, and the uncertainty range would necessarily be much, much larger.
Am I missing something, or is this estimate basically meaningless?
It’s basically meaningless, as Judith points out above.
The two issues aren’t actually the same. Neither Judith Curry nor I believe the curve fitting for this paper produces meaningful results. However, even if one disagrees about that, the issue I raise here remains. Even if you support the curve-fitting and results it produces, the stated sensitivity isn’t a sensitivity for CO2.
For people who share Curry’s view, the issue is moot. For people who share Bart R’s (and presumably the authors’) view, the issue is critical.
Brandon Shollenberger | July 29, 2012 at 9:17 pm |
For people who share Bart R’s (and presumably the authors’) view, the issue is critical.
That’s funny. I didn’t feel critical about it. Skeptical. But not critical.
Maybe if you spoke for me less, and explained yourself in detail more, it’d work out better for all parties.
Linear trends are unconvincing.
Nonlinear correspondences lasting a quarter millennium, and with a plausible mechanism? That’s the gold standard. It’s not a matter of view. It’s simple math. A coincidence of linear trends has one or two degrees of freedom. A graph like BEST has produced matches across a large multiple of that. It doesn’t happen by accident, and it isn’t chance. It’s not faked, and it isn’t manipulated to procure a correlation like that zodiac guy’s solar nutation hypothesis.
The stated sensitivity is therefore a mere footnote.
Bart R, if you choose to read a person’s comment as wrong, you can almost always find things to disagree with:
I never said you were critical, skeptical or anything else about it. It’s funny you suggest I “spoke for [you]” when in reality you are the one putting words into other people’s mouths.
Yes, you are missing something. Because of the way sensitivity has been defined, per doubling of CO2 concentration, you would have to find a like period of doubling in the past when temperature was also accurately measured, for a comparison. If no such period exists then the definition is meaningless.
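The “per doubling” definition can be made concrete with a toy least-squares fit of the general form discussed in this subthread. Everything here is an assumption for illustration: the 277 ppm baseline, the linear CO2 growth, and the 3.0 slope are synthetic, not the paper’s values. The thread’s point stands regardless: if CO2 is really a proxy for all forcings combined, the fitted slope is not a CO2 sensitivity.

```python
import math

# Generic log(CO2) fit: T(t) = a + s * log2(C(t)/C0), where s is the
# "degrees per doubling" parameter. We generate noise-free synthetic data
# with s = 3.0 and recover it by ordinary least squares.

C0 = 277.0                                   # assumed baseline concentration, ppm
co2 = [280.0 + 1.2 * t for t in range(100)]  # synthetic concentration series
temps = [0.1 + 3.0 * math.log2(c / C0) for c in co2]

xs = [math.log2(c / C0) for c in co2]
n = len(xs)
xbar = sum(xs) / n
tbar = sum(temps) / n
s = sum((x - xbar) * (t - tbar) for x, t in zip(xs, temps)) / \
    sum((x - xbar) ** 2 for x in xs)
a = tbar - s * xbar
# With noise-free data the fit recovers s = 3.0 and a = 0.1 exactly.
```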
So Watts et al have shown that UHI accounts for 2 or maybe 3x spurious Stevenson-box warming in the USA. Extrapolate that worldwide: there is NO AGW. It’s about time this scam came to an end. The people responsible for it must be prosecuted.
Most of the warming is contributed by ocean surface temperature increases. Even if Watts et al are 100% correct, that only means a downward adjustment in the global average of under 15% of the total historical warming. No matter how you look at the data, warming over the last 100+ years is real, how much was due to human influences and how much due to other factors is not really known, but certainly some warming is due to human activities.
A 15% change in temperature trends would have an enormous impact on our understanding of things even though it wouldn’t disprove AGW. The effect it would have on climate modeling alone would be huge.
(And of course, this assumes ocean temperature data is fine.)
A 15% adjustment in surface temperatures would be of interest but hardly “enormous”. Most of the energy going into the Earth system from the addition of more greenhouse gases is going into the ocean. This is many orders of magnitude larger than the energy in the troposphere, making the 15% barely register in comparison.
R. Gates, your response misrepresents my comment. I did not say a 15% adjustment would be enormous as you portray. I said such a change “would have an enormous impact on our understanding of things.” Because of your misrepresentation, you’ve failed to address anything I’ve said.
Mindlessly repeating a talking point (in an arguably hypocritical manner) is bad enough, but doing so even when it isn’t relevant is just obnoxious.
Some warming from humans. Some cooling from humans. On a local scale, easy to measure, but on a global scale it’s difficult to measure.
Satellites had Stevenson boxes? Who woulda thunk it…
You should really do some basic climate study before making such an absurd statement. You seem to want very badly for Anthony to have a major find here, that is not in the cards. Those who think Watts’ paper is going to change any of the basic tenets of AGW will be sorely disappointed.
Watts’s paper may not ‘change any of the basic tenets of AGW’.
But if it is correct, a 50% reduction in the magnitude of a ‘problem’ that many perceive has already been vastly overblown will have huge implications for politics, policy, funding and academic careers.
And the existence (if shown to be correct) of a systematic warming bias within the NOAA will cast huge doubt upon the probity of that – and maybe related – institutions. IMO you cannot get your numbers that wrong for that long just by accident. Especially when the possible existence of exactly such a problem was called out in the blogosphere over five years ago.
It was either serial incompetence or a deliberate tactic to overstate the problem. Neither conclusion reflects well on the participants.
You give far too much credit for what the Watts paper may do in terms of the scope of the actual “problem”. It says nothing about how much extra energy the Earth’s energy system is retaining, as such a very small amount of it would be retained at the surface of the troposphere at any rate.
Anthony hurried along his paper as a reaction to Muller et al., and I strongly suspect this was unwise and that the Watts paper will get some very strong shredding. 3C of warming in the lower troposphere per doubling of CO2 will remain the best estimate of sensitivity, but most of the energy is still going into the oceans.
If the extra energy you believe is present does not manifest itself as a temperature increase that affects us then, to be brutal, I don’t give a shit about it. And not all that much even if it does lead to a modest increase.
Similarly I know that down in the centre of the earth the temperatures are very very hot. But it rarely manifests itself up here. And so is of little concern.
You may worry yourself into a frenzy about it. But don’t be too surprised if very few join you in that anxiety.
The Fraud includes the Government of Australia, which has imposed a carbon tax on its people based on fraudulent research.
I think these two papers are a welcome addition to the debate. The methods and conclusions will be subjected to hopefully careful analysis over the next couple of months. I agree with Judith that their attribution argument is pretty unconvincing. I believe Appel called it data fitting.
This issue of station quality does seem to me to be significant. I have three outdoor thermometers at my house. One is a Davis Instruments station that is NOAA quality, the other two are on the north and south sides of the house. The readings can be significantly different depending on sunshine, etc. My initial reaction is that station quality must be accounted for. It seems questionable to me to try to use statistical analysis to make up for these differences. I would trust a first principles selection of higher quality data more.
Looking at the decadal land-surface average temperature chart brings out some thoughts about temperature curves. First, they use a ten-year moving average to make it look nice. A moving average destroys data, in particular El Nino peaks and La Nina valleys, because their period is about five years. They are an integral part of the temperature curve, not noise, and not some externally imposed factor. A non-destructive way to represent such data would be to use a magic marker to outline the trend, and do it by hand. The exact locations of El Nino peaks are important, and BEST has previously shown that they line up accurately in five different temperature curves from all over the world. One needs to know these locations to evaluate the cooling alleged to accompany volcanic eruptions. I have determined that what goes for volcanic cooling is not caused by the volcano but by dumb luck (What Warming? Satellite view of global temperature change, pp. 17–21). The initial volcanic aerosol cloud first ascends to the stratosphere and warms it. This is followed by stratospheric cooling in a few years, but it never reaches the lower troposphere. Whether a volcanic cooling is observed or not depends on the date of eruption compared to the phase of the ENSO cycle. If the eruption coincides with the peak of an El Nino warm period it will be followed by a cool La Nina valley. This is what happened with Pinatubo: it peaked with the 1991 El Nino, which was followed by the 1992–93 La Nina. Self et al., who reported it, assigned that La Nina to Pinatubo cooling, and that is what you find on temperature charts to this day. But El Chichon was not so lucky – it erupted near the bottom of the 1982 La Nina, which was immediately followed by the 1983 El Nino warming. No one, including BEST, had any idea why El Chichon did not bring volcanic cooling. There can also be intermediate cases where the eruption takes place somewhere between an El Nino peak and a La Nina valley.
Krakatau is one like that, and for its size the little cooling it brought cannot be explained from conventional theory. Which brings us to their theory of volcanic cooling. I quote: “Figure 4 shows temperatures from 1750 with a simple linear fit using records of volcanic sulphate emissions and atmospheric CO2 concentrations. The strong negative excursions in the early period closely match major volcanic events (detected by sulphate deposition in ice cores).” They are completely wrong and have no idea what they are talking about. As I pointed out, volcanic eruptions do not produce cooling, especially not on a regular thirty-year schedule like their graph shows. Apparently they have picked up a climate oscillation and are still oblivious of it. The same thirty-year oscillation has been picked up in tree ring data from the coast ranges of California. The authors of that observation tried unsuccessfully to connect it with ENSO. Most likely the trace is from the PDO, which does have a thirty-year period. Judging by Figure 1 its amplitude was higher during the LIA and since then it has been slowly decreasing.
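The objection above about ten-year smoothing is easy to check numerically. Below is a toy calculation (an idealized 5-year sinusoid standing in for ENSO, not real index data): a 10-year boxcar average spans exactly two full cycles, so it removes the signal essentially completely.

```python
import numpy as np

t = np.arange(0, 200, 0.1)             # 200 "years", sampled 10x per year
enso = np.sin(2 * np.pi * t / 5.0)     # idealized 5-year ENSO-like cycle

# 10-year boxcar moving average: the window spans exactly two full cycles
window = 100                           # 10 years * 10 samples/year
smoothed = np.convolve(enso, np.ones(window) / window, mode="valid")

print(np.abs(enso).max())              # raw amplitude: ~1.0
print(np.abs(smoothed).max())          # after smoothing: ~0, the 5-year signal is wiped out
```

Real ENSO is irregular rather than strictly periodic, so some variance survives in practice, but the attenuation of 3–7 year variability by a decadal boxcar is severe either way.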
The weakest point of the manuscript is indeed the question of relevance, but you stated it in an extremely weak form.
The manuscript attempts an attribution study typical of those of twenty and thirty years ago. The modeling of the volcanic forcings is very handwavy, and the claim that the CO2 forcing goes as ln [CO2] is actually a model result, contrary to the claim that all of the attribution studies only use observations. Yet by far the largest hole is that regional patterns are not used, nor is sea surface data or variation with altitude. In short, Eli would not be surprised to see the referees ask that the entire attribution section be triaged.
As to the Watts manuscript, no bunny has yet explained to Eli how a photograph made in 2009 tells you anything about the state of a station in 1980. Further, since raw data was used, it would be important to know if there was some difference in the TOB between the various classes of stations.
ln [CO2] or just plain CO2 made no difference to the fit.
If folks like, they can take this and fiddle around with more complicated regressions. There were two schools of thought. One school argued for a more complicated approach; the other school argued for a simple approach. Adding complication did not change the answer.
The gross characteristics of the land temperature curve can be explained by a radiative forcing and a volcanic forcing. Adding a term for solar explained nothing. Adding other individual forcing terms, while physically more appealing, didn’t add much.
Think of the simple approach as an answer to the mantra that the climate is too complicated to understand.
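For concreteness, the “simple approach” described above – regressing the land series on ln(CO2) plus a volcanic term – can be sketched on synthetic data. Everything below (the CO2 curve, the eruption spikes, the 3.1 per-doubling figure) is invented for illustration; this is not the BEST data or the paper’s actual fit, just the shape of the regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 260

# Synthetic stand-ins for the real series (NOT the BEST inputs):
co2 = 280 * np.exp(np.linspace(0, 0.35, n_years))        # smooth CO2 rise, ppm
volcanic = np.zeros(n_years)
volcanic[rng.choice(n_years, 8, replace=False)] = rng.uniform(1, 3, 8)  # eruption spikes

true_sensitivity = 3.1  # degrees per doubling, chosen for the synthetic truth
temp = (true_sensitivity * np.log2(co2 / 280.0)
        - 0.4 * volcanic
        + rng.normal(0, 0.1, n_years))

# Two-predictor linear fit: T ~ a*log2(CO2/280) + b*volcanic + c
X = np.column_stack([np.log2(co2 / 280.0), volcanic, np.ones(n_years)])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
print(coef[0])  # recovered per-doubling slope, close to the planted 3.1
print(coef[1])  # recovered volcanic coefficient, negative
```

The point of the exercise is only that two forcing series plus least squares reproduce the gross shape; it says nothing about whether such a fit constitutes attribution.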
The surface station stuff is very good work on your part, but I agree with Eli on attribution. No one really believes anymore that you can do a formal attribution without considering the spatio-temporal patterns.
Personally I didn’t see it as a formal attribution. Somewhere else here I described the three canonical responses to the chart:
1. I knew that
2. Wow, that’s surprising
3. No way, that’s too simple.
WRT spatio-temporal patterns and formal attributions I would agree.
The other interesting thing is that people’s reactions to it are really conditioned by their prior beliefs.
I’d have accepted the simple curve-fitting if it had just been curve-fitting. It wouldn’t have told us anything new, but that’s fine. However, once that curve-fitting was used to estimate the climate’s sensitivity to a doubling of CO2* (with only a 10% margin of error), it lost me completely.
*CO2 was used as a proxy for all anthropogenic emissions yet the estimated sensitivity is listed for CO2 itself. Yeah, that makes sense.
I did not view the sensitivity calculation as anything more than a sanity check. It’s not evidence; it’s not the reason one should believe. It’s just a sanity check.
Mosher, it isn’t a sanity check. It’s insane. It claims to calculate a sensitivity to a doubling of CO2 while using CO2 as a proxy for many things. If I used stock market prices as a proxy for human emissions, found it fit well, then calculated a sensitivity and said, “The planet will warm 3.1 degrees every time stock market prices double,” I’d be laughed at.
It doesn’t matter if CO2 is a reasonable proxy for all emissions. Once it is used as a proxy, any results derived are derived for a proxy. Those results are not derived for CO2 itself.
Oh, Robert Rohde deserves 100% of the credit for all the temperature work. My primary remit is just to help folks who want to use the data – make sure they can get it easily and use it, and explain the ins and outs so that Robert can do real work. So I’m the data monkey, nothing more.
Surely we have moved beyond superimposing curves as a method of climatological investigation? It has added very little to the temperature data and nothing at all interesting to the analysis. Can we go back to the satellites now? Oh – and there seems to be a minor error in the references – I can’t find Tsonis in the body.
‘The close correlation between temperature increase and atmospheric CO2 increase, including a small delay, is indicative of temperature drive mechanism of CO2 liberation. The temperature changes correlate better with CO2 changes for a delay of half to one year, in this series the inverse order did not verify, which is meaningful for temperature driver and not CO2 driver mechanism. The delay is expected in order to heat ocean water thermo-cline lamina, to liberate CO2 and to transfer it to the atmosphere. The absence of correlation for temperature decrease and CO2 decrease means that the process is not reversible as it would be, if associated to less radiation absorption by CO2. The process of ocean uptake of CO2 involves complex and multiple mechanisms of the whole carbon cycle, differing from simple degassing.’
Figure 4 points to forcing functions (volcanic particulates and GHGs) swamping out internal natural variability.
Are you back again? Figure 4 says nothing about the direction of causality. But it is pointless discussing anything with an idiot. Here’s a correlation: http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=Wong2006figure7.gif This one shows that cloud radiative forcing dominates greenhouse gases. You are a hopeless moron.
Apparently a moron has an IQ of between 51 and 70. So, if you are correct about WHT, I would say you’d need to improve your cognitive processes somewhat to catch up.
Oh that’s really so clever – I am so intimidated. It is about on par with your science and politics.
“Oh – and there seems a minor error in the references – I can’t find Tsonis in the body.”
It’s not like there is that much Tsonis in the crap you write. You selectively leave out about half of it.
Go team retarded. Do I care which swamp the consensus trolls crawl out of.
Anastasios Tsonis, of the Atmospheric Sciences Group at University of Wisconsin, Milwaukee, and colleagues used a mathematical network approach to analyse abrupt climate change on decadal timescales. Ocean and atmospheric indices – in this case the El Niño Southern Oscillation, the Pacific Decadal Oscillation, the North Atlantic Oscillation and the North Pacific Oscillation – can be thought of as chaotic oscillators that capture the major modes of climate variability. Tsonis and colleagues calculated the ‘distance’ between the indices. It was found that they would synchronise at certain times and then shift into a new state.
It is no coincidence that shifts in ocean and atmospheric indices occur at the same time as changes in the trajectory of global surface temperature. Our ‘interest is to understand – first the natural variability of climate – and then take it from there. So we were very excited when we realized a lot of changes in the past century from warmer to cooler and then back to warmer were all natural,’ Tsonis said.
Four multi-decadal climate shifts were identified in the last century, coinciding with changes in the surface temperature trajectory: warming from 1909 to the mid-1940s, cooling to the late 1970s, warming to 1998 and declining since. The shifts are punctuated by extreme El Niño Southern Oscillation events. Fluctuations between La Niña and El Niño peak at these times and climate then settles into a damped oscillation. Until the next critical climate threshold – due perhaps in a decade or two if the recent past is any indication.
I tend to quote quite a lot – having read quite a lot over decades. Do you have a line in anything but smarmy consensus troll groupthink? Would you like to suggest where I have misquoted either Wong or Tsonis – or indeed the quotes above? Are you a scientist or just a wannabe poseur like webby and TT? What a silly question. How many Tsonis papers have you actually read? What would you know?
I was trying to be helpful – something that doesn’t appear in the text should not be in the references, obviously. Which way to the intellectual swamp, moron?
Maybe the bunny thought you would know aforehand that deterioration and decay happens over time and that the photo today can only be worse than a photo taken 30 years ago or a year ago. Getting one’s referee opinions from bunnies is probably a post modern science characteristic. I don’t find it in the classic writings.
“Maybe the bunny thought you would know aforehand that deterioration and decay happens over time and that the photo today can only be worse than a photo taken 30 years ago or a year ago.”
There’s some sort of law against cutting down trees near met stations?
Well yes, and you know none have moved and none have been rebuilt and none have new keepers since the year dot. Oh yes, that barbie that featured some time ago was not 100 years old.
If I were to calculate global temperature from looking at my kitchen-window thermometer, I would do the following:
– Make a reading of my thermometer.
– Subtract a few degrees due to latitude.
– Add error bars spanning the whole spectrum of possible outcomes.
Which is essentially what BEST has done with their early temperatures with an exception: BEST imply that the global temperature of 1750 CAN NOT have been as warm as it has been for the last few centuries.
They can of course not know this, so please increase the error bars, guys!
If I were to calculate global temperature from looking at my kitchen-window thermometer, I would do the following:
– Make a reading of my thermometer.
– Subtract a few degrees due to latitude.
– Add error bars spanning the whole spectrum of possible outcomes.
Which is essentially what BEST has done with their early temperatures with an exception ..
That is not how it works. What we do is kinda like this. Let’s take your neighborhood as an example. Let’s suppose that you have 50 neighbors.
For 1960 to 2010 we look at you and all 50 of your neighbors.
It’s 52 at your house, 53 in the neighbors nearby, 62 at the grumpy guy with the sunny lot, 49 here, 48 there, and when we are done we have that 50-year average for your whole neighborhood. And we can note, for example, that if your house is 52, the average of all your neighbors is 53.2, or that 95% of the time the average of the whole is within 1.2 degrees of your house. And we note that if your house is 52 and the grumpy old man’s house is 63, then the total average is 53.4. That map of your neighborhood over 50 years looks pretty cool; we can see the patterns and note what causes those patterns.
So let’s push back the clock to 1959.. and oops, it looks like only 25 houses have temperatures. Well, we can use those 25 plus the information from the field to put limits on what the temperature at the other 25 houses would have been had we measured them. We can actually test this by holding out data.
And we push it back further in time.. down to 10 houses, 5 houses.. etc.
Here is an example: take the temperature in eastern North America and add in the temperature from Europe.
Suppose that is 10C.
Based on what you know, can you guess the rest of the globe?
You know what? 95% of the time during the 20th century the global average is within .5C of the combined average of Europe and eastern North America. Who would have thunk it!
So, if I know eastern North America and Europe I can predict the rest of the world within .5C.
Wanna know something else? If I know the land I can predict the SST.
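The hold-out check Mosher describes can be sketched with synthetic data: a shared large-scale signal plus independent regional noise. All the numbers below are invented, so this illustrates the method of validating such a claim by counting coverage, not the actual .5C result.

```python
import numpy as np

rng = np.random.default_rng(1)
n_months = 1200  # 100 years of monthly anomalies

# Synthetic anomalies: one shared "global" signal plus independent regional noise
global_signal = rng.normal(0, 0.3, n_months).cumsum() * 0.01
regions = global_signal + rng.normal(0, 0.25, (5, n_months))  # 5 regional series

global_mean = regions.mean(axis=0)

# Hold-out check: predict the global mean from just two regions
two_region_avg = regions[:2].mean(axis=0)
err = np.abs(two_region_avg - global_mean)
coverage = (err < 0.5).mean()
print(coverage)  # fraction of months the 2-region average lands within 0.5
```

The shared-signal assumption is doing the work here; the real-world question is exactly how much common signal the regions share, which is what holding out actual station data tests.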
‘So, If I know eastern north america and Europe I can predict the rest of the world within .5C.’
No, you cannot. The SH cannot be correlated to the NH this way. Different animals. Different inter-hemispheric energy flows, different geographical distribution of land, ocean and snow/ice covered surfaces, different energy sources and sinks both in atmosphere and hydrosphere, different insolation depending on celestial parameters.
‘Wanna know something else? if I know the land I can predict the SST’.
You cannot predict, any more than a biased wit can. A constant value of insolation will thus create an imbalance between the hemispheres relating to stored energy. Wanna know something else? Temperature has nothing to do with energy balances on the surface of a planetary body. Your reasoning, and that of the Berkeley Earth Surface Temperature Project, has fallacies.
Steven Mosher says:
“So, If I know eastern north america and Europe I can predict the rest of the world within .5C.”
Will you please show how you validated this claim?
Re: Muller wasn’t a skeptic, he was only ever a ‘scientific skeptic’.
Seems to mean : he always was an alarmist/believer, but nevertheless concerned that the science/data underpinning alarmism was ropey. So he set about looking for better support for alarmism.
Talk of “conversion” is hype.
This looks like a very useful addition to the data pool.
(1) The uptick in DTR may be simply explained by the decrease in cloudiness (and albedo) observed between 1981 and 1999 according to the international cloud project data. At least one paper has used this relationship the other way around to infer some GCR influence i.e. using local measurements of DTR as a proxy for the change in local level of cloudiness. (I can’t be bothered to look up the reference.)
(2) The attribution argument is not just bad, it is an insult to our intelligence which leaves us all poorer. It should never get past any half-way sensible review. The estimate of climate sensitivity and its associated uncertainty (!) adds further insult. If you have any influence over that part of the project I would strongly suggest you use it to get this offending section dumped – or at least reframed so it is not so technically repugnant. I can see some point in including the correlation with some very carefully worded qualification, but only to counter the argument that no such correlation exists. Beyond that, it is a crock of horse manure.
Lets just say there has been some ongoing debate over the attribution angle.
If my name and reputation were going to be associated with such a spurious argument, I would want a piece of 2 by 4 in my hand while I had that “debate”. I think it was very wise of Dr Curry to distance herself publicly from that particular conclusion.
Incidentally, I accidentally created a new sockpuppet “P”, when trying to input my normal handle. There is only one of me, erm, or do I say that there is only one of us.
If you look closely, you will notice that my name is not on the results paper. Not that I disagree with the results per se, but I think that Richard is right to attribute the majority of the warming of the last 50 years to anthropogenic causes but for the wrong reasons.
You and Steven are named in the acknowledgments section.
And the question here for me is not about the truth of attribution; it is about the validity of the argument about attribution included in the paper.
The argument presented in the paper is junk.
Anyway, I agree with Eli. If the JGR reviewers do their job properly, the section should get substantially re-written. If not, then it will harden my already high level of cynicism about the quality of peer review applied in climate science.
“Richard is right to attribute the majority of the warming of the last 50 years to anthropogenic causes but for the wrong reasons.”
Ahem, perhaps another post on that would be nice.
If by wrong reasons you meant to say wrong causes then I agree. Cities are warmer than surrounding rural areas. Everyone knows that. What they don’t really know is that water is the key player. Urban development means lots of impervious ground cover. When rain falls it’s channeled into transports that have little surface area. Evaporation is minimized, and so is the cooling of the dirt and near-surface air that comes with it. Virgin land sweats and cools itself. Urban land has anti-perspirants applied to it. There are other factors, such as albedo change, buildings that diminish surface winds, and anthropogenic heat sources from anything that consumes fuel or electricity to produce work and waste heat, which buildings then help to trap.
“The uptick in DTR may be simply explained by the decrease in cloudiness (and albedo) observed between 1981 and 1999 according to the international cloud project data. At least one paper has used this relationship the other way around to infer some GCR influence i.e. using local measurements of DTR as a proxy for the change in local level of cloudiness. (I can’t be bothered to look up the reference.)”
Yes, I’m familiar with that paper. Unfortunately, I believe it was tied to Forbush events. Those are transients.
To explain a secular increase of cloudiness you need something other than Forbush events.
Thanks for this.
“to explain a secular increase of cloudiness you need something other than Forbush events”
Agreed. I wasn’t trying to explain the reason for the secular increase – merely noting that the change in DTR was consistent with such change.
Ya, well, the change in DTR really fascinated me because of some of the arguments/discussions we had over at Lucia’s about warming/cloudiness/DTR/GCMs.. hmm, I recall you (or SteveF) were there.
If I had to point to the one thing that I thought was noteworthy (other than the pushback to 1753) it would be the DTR finding. Personally I tend to like to focus on the little nits and knobs and bumps. I think back in 2007, when I started looking at this, Gavin said I would never find anything scientifically interesting in the temperature record. I agreed with him, but still thought it was fun to look at. For me it just is. Crap, I looked at 1-minute data the other day.. big data.. kinda cool.
Is this actually the temperature of the landmass (ie are there thermometers embedded in the ground and in objects such as buildings and trees, etc) ?
Or is it the temperature of the atmosphere close to the earth (ie a few feet above it, Stevenson Screen height) ?
It is pretty much universally surface air temperature. It’s called “land temperature” to differentiate it from “ocean temperature” or “global temperature”.
Land temperature is taken in a white box about 5′ up, in the shade. Ocean temperature is the top of the ocean [which would be similar to a white box 5′ up in the shade].
Is the temperature of the actual land of no interest to anyone though ? Surely it too is involved in the whole radiation budget thing ?
Land temperatures themselves are of interest, but as far as I know the only widespread measurements of them have started over the last decade, a period too short to do much trend analysis. In the U.S., the CRN has soil temperature measurements at various depths.
The interesting thing about land temperature, Zeke, is that you can measure it today and figure out what it was at times in the past. Surface temperature changes cause a ripple of diminishing amplitude as you move downward through the strata. By figuring out the exact thermal transfer rate as you go downward in the strata being measured, and taking exact temperature measurements along the way, surface temperature changes in the past can be observed. Decades ago I read about this but it’s not easy finding anything recent. One might presume that technology improvements over the past decades would make this paleo temperature reconstruction cheaper and more precise. But the climate boffins don’t really have much interest in testing their hypotheses in the real world anymore, as nearly every time they do it disagrees with the alarmist narrative-science that pays the rent.
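The physical basis for this borehole technique is downward heat diffusion: a periodic surface temperature signal decays by a factor of 1/e over a “skin depth” of √(2κ/ω). A quick sketch, assuming a typical rock thermal diffusivity of 1e-6 m²/s (a round illustrative value, not a measured one):

```python
import math

kappa = 1e-6              # assumed thermal diffusivity of rock, m^2/s
seconds_per_year = 3.15e7

def skin_depth(period_years):
    """Depth at which a periodic surface signal decays to 1/e of its amplitude."""
    omega = 2 * math.pi / (period_years * seconds_per_year)
    return math.sqrt(2 * kappa / omega)

print(skin_depth(1))    # annual cycle penetrates only a few metres
print(skin_depth(100))  # century-scale signals persist tens of metres down
```

Because the skin depth grows with the square root of the period, slow (century-scale) surface changes survive much deeper than the annual cycle, which is what makes inversion for past surface temperatures possible at all.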
“Is the temperature of the actual land of no interest to anyone though ? Surely it too is involved in the whole radiation budget thing ?”
It’s the average temperature of other planets.
But air temperature in the shade is important information for weather – it tells the public what kind of day or night it is. So climate science is using the infrastructure used for recording the weather. If trying to understand climate, it seems the actual ground surface could be useful. But you have to develop some standard, because different materials have different temperatures.
I think measuring water temperature [for land surface] might be a good idea. So the temperature of some standardized-size pool of water could be useful. Because water is common on Earth and is already kinda a “standard test material”. Its temperature doesn’t fluctuate [it’s slow to heat up or cool down]. And Earth is mostly a water planet.
Focus on radiation budget at or near ground level is the biggest source of misdirection in the whole CAGW charade. Latent flux dominates the action in the lower troposphere by a huge margin.
conduction (thermals) – 24W/m2
latent (evaporation) – 78W/m2
radiation – 40W/m2
I’m not making this up. Straight from the horse’s mouth. First figure, top of page, Trenberth’s famous planetary heat budget cartoon:
It’s all about the water cycle people. This isn’t Frank Herbert’s planet Dune. It’s a water world. We don’t live in the part of the atmosphere where radiation is dominant unless you’re flight crew on a jumbo jet.
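Taking the three surface heat-loss numbers quoted above at face value, the relative shares work out as follows. This is just arithmetic on the quoted figures, not a statement about the full budget (it omits the much larger gross radiative exchange terms in the same diagram):

```python
# Surface heat-loss terms as quoted above (W/m^2)
fluxes = {"thermals": 24, "evaporation": 78, "radiation": 40}
total = sum(fluxes.values())           # 142 W/m^2 across the three terms

shares = {name: 100 * f / total for name, f in fluxes.items()}
for name, pct in shares.items():
    print(f"{name}: {pct:.0f}%")       # evaporation comes out near 55%
```

On these three terms alone, latent heat does carry the majority share, which is the commenter's point.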
Dave, I agree 100%. Evaporation dominates surface heat transfer.
Land temps are of major interest in permafrost areas. In the NH they’ve been rising and the permafrost melting, consistent with the general ocean and tropospheric temperature increases seen over the past century.
The temperature was taken in many different ways back to 1750. There is very little consistency in methodology the further back in time you go, as thermometers were placed in a variety of locations that wouldn’t be accepted today.
For example, there was a vogue for keeping them in a north-facing room, outside first-floor windows, on the ground and ten feet above it. The data has been ‘adjusted’ to suit our notion of what the temperature would have been if it had been taken in modern-day conditions.
Sometimes yr best isn’t good enough. :-(
I don’t trust this guy Muller.
This “BEST” team appears grossly incompetent.
There’s too much hair-splitting interest in “BEST” and land temperature issues more generally. It’s comical to see the insane lengths people go to arguing +0.1 this or -0.1 that. This accomplishes NOTHING towards understanding natural climate variations. Everyone runs from the heavy lifting, looking for light chores.
As for politics? What’s needed is a climate blog focused on exploration of natural variability with zero tolerance for politics.
“Finally there are the political or personal issues that have been raised around the subject. We know of no way to see into the hearts of men. Science provides us with some safeguards against personal bias: sharing data and sharing code. See the new website for those resources.”
You should have read to the end of our article paul.
The point isnt whether you can trust him or not.
There is no neverland free of politics.
Science provides us with other safeguards as well – at least it does in most scientific disciplines. One of those safeguards should have kicked in to prevent the spurious attribution argument from seeing the light of day. Since even an undergraduate statistics student should be able to enumerate its flaws, I can only believe that the argument was included very cynically to meet political expectations, despite full awareness of its fallaciousness. Unsurprisingly, the MSM has already picked up on that particular argument. Not good. Not good at all.
Corruption plain & simple. End of story.
Muller is a UC Berkeley prof. Anyone familiar with that university would have predicted the extreme left bias that inevitably taints everything about the institution, its faculty, and its students. Many of us did predict this and tried to warn Watts that he was going to get screwed by Muller. Had I been participating on Curry’s blog at the time I would have warned her too.
Redirect focus from your politically-motivated phony brand of “science” towards CAREFUL exploration.
At present you’re unqualified to direct climate traffic. Your vision’s inadequate.
Why the BEST papers failed to pass peer review.
Mmm … that was supposed to have been in response to Beth Cooper | July 30, 2012 at 1:43 am .
Specifically, why McKitrick disagreed with the Berkeley UHI results.
McKitrick’s expertise in this area is not worth a bucket of warm spit
sorry for implying on another blog that you would not comment on “attribution” or “station quality”. Obviously, I was 100% wrong.
Thx Streetcred, it’s all coming out … )
Indeed, Beth. It shtinks !
as if i wasn’t hungry enough
Re aspects of above discussion, hey,
“A catalogue of immaterial entities:”
There is broad agreement that greenhouse gas emissions have contributed to the warming in the latter half of the 20th century; the big question is how much of this warming can be attributed to greenhouse gas emissions. I don’t think this question can be answered by the simple curve fitting used in this paper, and I don’t see that their paper adds anything to our understanding of the causes of the recent warming.
The Northern Hemisphere is where most of the records are taken. Natural variability in the N.H. during the last 130 years accounts for about 0.75C, which is half of the 1.5C that is attributed to GHGs by the BEST report.
As it happens, this variability is consistent with the natural oscillations of other geophysical properties which in no way are dependent on the Earth’s surface temperature or any other climatic change.
The BEST report has shown no understanding of the natural variability.
The results paper says:
“An alternative is to assume that some or all of these variations represent a form of natural variability. Using the curve from Figure 6, we can estimate that such variability on decadal scales is no more than ±0.17 C, 95% of the time. This can be understood as a crude bound on the amount of temperature change that might potentially be ascribed to natural variability.”
It looks like the authors have calculated two standard deviations from the 10-year SMOOTHED curve to assess the range of natural variability. If so, this is a serious stats error.
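If the ±0.17 C bound really was computed from the smoothed series, the statistical point is easy to illustrate: for uncorrelated year-to-year noise, a 10-year boxcar shrinks the standard deviation by roughly √10, so a two-sigma band from the smoothed curve badly understates annual-scale variability. The series below is synthetic white noise, purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
annual = rng.normal(0.0, 0.2, 3000)    # white-noise stand-in for detrended annual anomalies

window = 10
smoothed = np.convolve(annual, np.ones(window) / window, mode="valid")

print(annual.std())      # close to the true 0.2
print(smoothed.std())    # roughly 0.2 / sqrt(10), about a third as large
```

Real temperature series are autocorrelated, so the shrinkage factor differs from √10, but the direction of the bias is the same: a spread estimated from a smoothed curve is not a bound on unsmoothed natural variability.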
vukcevic wrote “the BEST report has shown no understanding of the natural variability.”
And that is all that matters.
I’ve written them off. All they’re doing is ignorantly &/or deceptively changing the channel AWAY from deeper climate understanding.
Administrators use such tactics to deflect attention AWAY from core sore points towards the details of endless streams of peripheral technical minutia. The tactic is used TO CONTROL THE DIALOG (micromanaging a controllable, engineered issue).
Naive participants have opportunity to feel invested & engaged WITHOUT accomplishing anything threatening to authorities. If participants are foolish enough to take the bait, they are left at the end of each engaging day hypnotized & drained. There won’t ever be enough refreshed time & presence of superior mind to know how, when, & where to step back to see infinitely more clearly. Residual energy at the end of each day will NEVER be over the threshold necessary for deeply lucid revelation. Potential threats to authority are neutralized by directing the unproductive squandering of precious time, energy, & consciousness. Just give them fake targets and watch them drain themselves shooting. Easy. We just sit back & watch.
Sensible parties WALK AWAY from such efforts to tie-up endlessly at draining, unproductive committee.
Muller & “BEST” are CLUELESS about natural variability. End of story.
Law of Authoritative Ignorance &/or Deception
Mosh or zeke
Upthread I politely asked this question:
Mosh or Zeke
I have been through the various back-up papers but can’t find the actual stations used pre-1880 or the data sets utilised. Was the ‘original’ data used, or the adjusted material compiled by such as Phil Jones working for the EU-funded ‘Improv’ project? Can you help? Where is this information located? Thanks
to which mosh replied
Steven Mosher | July 29, 2012 at 8:06 pm | Reply
The data is all online, including the colonial dataset. Go to the Berkeley website, or use my R packages.
stevenmosher | July 29, 2012 at 11:36 pm | Reply
If you have problem with the R code just submit a request.
Thanks Mosh, but can you please be more specific as to the location?
We are talking about a very few stations for the period I am interested in, and no doubt the information is easily found if you have been working on the project. However I haven’t, and can’t readily find the information. Thanks for your help.
Unfortunately, it's all together in a single file here: http://berkeleyearth.org/data/
I don’t have the time at the moment to parse out only the pre-1880 data and upload it, though you should be able to do it easily enough.
You could also download GHCN monthly, daily, and the colonial weather archive here, which will contain most of the really old records: http://berkeleyearth.org/source-files/
Thanks for that. So to find the actual data for the old records and check its viability will be a substantial job. That's a shame, as a great part of the value of this record is the pushing back of the parameters another 130 years.
I will see if I can disentangle the material, but if I were a journalist wanting to write an article on the authenticity of the historic record, what would you be saying to me to convince me that the older material has some basis in reality?
For the earlier release, I listed the pre-1850 stations here. I doubt if there are new ones.
You can visualise them over time on this KMZ file or this JS gadget.
That is kind of you to provide those references, however I need to know which stations were actually used in this study, not the ones that MIGHT have been used.
As you know, many historic stations do not have data available from their start to the present day, very many have moved, whilst still others have had their data substantially adjusted for projects such as IMPROV.
This study makes much of its extension back to 1750 or so, but omits to tell us overtly what stations were used and why, and what adjustments have been made to the data.
Don’t get me started on the IMPROV can of worms. Who besides you, tony, is critiquing this subtle, likely nefarious, rewriting of history?
It seems that no one is interested in historical climatology unless they have taken the King's shilling. The record back to 1750 is a seething mass of worms, as I have tried to point out over the last five years. If we have a study such as BEST trying to push the boundaries into such suspect territory, they must tell us what stations were used, why, and how they have been adjusted.
It is not reasonable for those constructing the BEST dataset to expect people such as me to have to plunge our hands into the can of worms in the hope of plucking out the live ones used in the study.
You would be far better off constructing your own composite. Here is a map of Europe in 1750:
Select a few critical capitals and look for their records (1750-1860), compare with the CET, and graft onto the Met Office N. Hemisphere data.
I am becoming more than ever convinced that CET is a reasonable (not foolproof) proxy to show the tendency of global temperatures over the centuries, always assuming that a global temperature means anything anyway.
My CET study to 1540 assumed an accuracy of 0.5 C. The BEST study shows error bars of 2 degrees C, which surely makes it useless as any sort of scientific measure?
If the correlation since 1880 is good, I see no reason why it should be any different before, despite Mosher's claim.
It has to be borne in mind, though, that the CET is averaged over a smallish area, so annual variability is greater.
Some of the comments re CET coming from across the pond have roots going back to HRM GIII, but they have fallen in for the Jones' Norfolk (or is it Anglia) global 'turkey'.
I have asked you for data and you have never even had the good manners to respond.
and now you want your hand held in walking through our data.
that's pretty damn cheeky
Our data is online and there is code for you to read it.
If you have problems send me email.
Ah, moshe, if it’s so easy to find, go ahead and show him. For sure it should be easier for you or Zeke to find than for tony. This is ungracious, not what is in your heart.
seriously kim. I write tools and donate my time so that people can help themselves. If they use my tools I am more than happy to give them MONTHS of free labor. But I refuse to help people like tony who do not share data. He knows where my tools are, he can teach himself or ask me to teach him.
Your turn, tony. You are both more gracious than am I, so you should be able to settle this amicably.
How about showing me what he wants, I, who have denied you nothing.
If you didn't do your constant flitting about, where you alight on posts momentarily and then leave, you would see that I have posted the information you requested (or the information I can supply) at least five times on this site and at WUWT. I have pursued you and asked if you have seen the material, but no reply. Either you didn't see it or you ignored it. I choose the former.
I am flattered that you consider the data you asked me for to be on a par with the BEST project data I have asked you about.
I don't need hand-holding. I just need you to tell me what stations were used, why, and how the data was adjusted. The numbers are very small, so surely it's easier to link to it than indulge in some sort of tit for tat on information withholding?
Historical climatology has never been one of your interests up to now (I remember your disparaging comments about Lamb and Parker), so I am surprised to see you dabbling in it now, especially as you knew nothing of the Mannheim Palatine when I made reference to it some weeks ago.
PS I'll dig up the information yet again and post it here. I want it to be on an open forum such as this, rather than on your site.
To put a finer point on it, kim: in the early days of the fight to open data, one of the objections was that if we open data, then people will harass data providers with requests. My response then and my response now is the same. I don't expect help with the data. All I want is access. I will do my own work and NOT BOTHER OR HARASS the person who worked to make it open. Above all I have to remain consistent. I promised never to bother or harass anybody who made data available. I just wanted access. I give Tony no more and no less than I asked for: access. It is up to him to learn. If he chooses to use my tools then I help him with my tools, but I will not play step-and-fetch-it. I don't ask others to be my slave; I won't be his.
Simple. I asked for access and did the work. He asks me to do his work for him, and what's worse is that he does not share and share alike.
That is a core principle of my community and so I keep to it.
Tony, post it on an open archive with free access to all.
Below is my reply to you for the umpteenth time (I have slightly expanded the original to include another link). I can't remember the thread this reply was posted on, but judging by the context you were obviously being your usual cryptic self.
It is complete and utter nonsense to say I don’t share information. I get criticised for the amount of information and references I supply with my articles.
—- ——- ——– ———–
I give you the opportunity of giving me a one word answer and you take it.
There are a series of records from a network of stations that predate GISS by 200 years. One was created by the Royal Society, but the most famous was the network created by the Mannheim Palatine. These used standardised records, methodology and instruments. I got the data from the Met Office, and they are the historic records that Phil Jones and his colleagues are systematically working through and which appeared in CRUTEM4.
If you are interested I will send you some of the Met Office PDFs. I hope to write an article on these historic networks shortly.
I have not specifically looked at 1700 to 1850, but as for methodology and sources, I have quoted this to you three times. I reproduce this from a forthcoming article:
“Those interested in learning something of the nature of historical climatology and how material is compiled, might find this comprehensive article on the subject interesting.
When sufficient data becomes available – as in part 1 of 'The Long Slow Thaw' – 'anecdotal' information is translated into temperature data following the methods detailed by Van Engelen, J. Buisman and F. Unsen of the Royal Met Office De Bilt and described in the book 'History and Climate.' See pages 105-108.
The back up material to carry this out for ‘The long slow thaw’ was contained in ‘supplementary information’ and used in conjunction with the numerous references from that study.
Nesting is awkward here and you might not see this reply, so if I see you hanging around I'll post this reply again.
Here is a link to the Long slow thaw, with all sorts of caveats
This is a link to ‘supplementary information’ from within the article which again contains lots of caveats
At the back of this document are the approximately 150 references I used, some of which in turn led to other information. Within the comments section of the article I make frequent references to the manner in which I gathered and interpreted the data according to the criteria set out by Van Engelen et al.
My reconstruction coincided strongly with the revised reconstruction by Craig Loehle, and also this one by M. V. Shabalova and A. F. V. van Engelen: 'Evaluation of a reconstruction of winter and summer temperatures in the Low Countries, AD 764-1998'.
A version 2 of the Long Slow Thaw is planned once I have gathered more information. In particular I want to examine the two periods of pre instrumental warming I noted, and also to examine the 1700/1730’s warming which seems to approach that of the modern day according to Jones, Lamb and my own research.
Data would be bits in a file, not posted links to Shakespeare in a PDF;
not a list of numbers in a PDF, but a file with numbers and metadata
and a description (code) of how you calculated those numbers.
It's not that hard.
Pointing me to a PDF does not cut it. Visit Climate Audit and ask Steve McIntyre what the meaning of TURNKEY is.
It's pretty damn simple.
the Mannheim palatine.
If you have a link to the data, I’d gladly see about using it. To be included in the database the records have to be online. For example, the colonial records were not online and we worked to get them online.
Records that are kept offline and are not openly accessible to everyone are of zero interest to me. The data have to be online so that anybody can check the work without bothering data producers.
Even data like Environment Canada, which is online, is not included, because it is not in a format that makes it easy for people who want to check.
I can only suggest you read Van Engelen et al., who describe how you take such information and convert it according to a formula that, for instance, produces nine classifications for summers ranging from extremely cool through to extremely warm. The temperature then relates to it.
Putting together often detailed anecdotal information from a number of sources into a classification works well, and it correlated when I did the exercise with known instrumental data from a little later in the record.
The De Bilt reconstruction also seemed to use this method and got accepted as a paper, but I can't comment as to the method Craig Loehle used. I am close to both of them in my reconstruction even though I didn't see either until I had finished my piece, so the Van Engelen methodology seems pretty sound to me.
What's your email address regarding the Mannheim stuff? I can't post PDFs on your site. I also wrote a short article, intended as my own notes, which might provide some context.
Tony, PDFs don't cut it. You can well imagine how we would scream if GISS provided its data in a format that required copying data from a PDF to a file.
How do I verify that it's been copied properly? Don't make me count the times data has been cut and pasted improperly. If you have the file in a spreadsheet or CSV or any kind of format that allows people to import it, that is what is necessary. That is why I am very specific when I ask for the data AS USED and the programs AS RUN to create the document in question. So, if you have the data in a spreadsheet and you use that spreadsheet to create the document, the data AS USED to create the PDF document is the spreadsheet. The program AS RUN is not a pointer to work somebody else did; it is a copy of your math as performed.
The Mannheim stuff comes from the Met Office as a PDF. There is nothing I can do about that. I have had no need to convert the info into any other format. They are very interesting, as they deal with real temperatures from real stations taken at the time, rather than station records that 'borrow' data from others up to 2000 km away, but if you don't want them that is fine.
Ok, if it comes in a PDF to you, then that is what it is.
I will see what I can do to get it in a format that is more shareable
and will share that back to you and you can share that back to the data provider. Ideally, they would post it as they are the originator.
you can email me at moshersteven
The rest is gmail dot you know what
The data is there as a text file, separated neatly by tabs – even the column length is consistent. I’m not sure how it could be more portable? To get the dates you could cut the data into 14 files and import to excel or Calc (which have about a million row limit) and sort there. It would take about 15 mins. You can even just open them in a text editor and regex
and then sort numerically. Admittedly that could take a while with 14 million+ rows, but it is time that could be spent drinking wine.
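The cut-and-sort approach described above can indeed be done in a few lines of code, no Excel required. A minimal sketch, assuming a hypothetical three-column tab-separated layout (station id, decimal-year date, anomaly); the real BEST file has more columns, but the streaming-filter-then-sort idea is the same:

```python
import csv
import io

# Invented sample rows in a hypothetical layout: id <TAB> year <TAB> anomaly.
SAMPLE = (
    "1001\t1753.042\t-1.23\n"
    "1001\t1901.542\t0.41\n"
    "2002\t1879.958\t-0.55\n"
    "2002\t1953.042\t0.10\n"
)

def rows_before(text, cutoff_year):
    """Stream tab-separated rows, keep those dated before cutoff_year,
    and return them sorted by date -- no spreadsheet needed."""
    reader = csv.reader(io.StringIO(text), delimiter="\t")
    kept = []
    for station, year, value in reader:
        if float(year) < cutoff_year:
            kept.append((station, float(year), float(value)))
    return sorted(kept, key=lambda r: r[1])

# Pull out only the pre-1880 records, oldest first.
early = rows_before(SAMPLE, 1880)
```

On the real 14-million-row file you would swap `io.StringIO(text)` for an `open(path)` handle; since rows are processed one at a time, memory stays flat regardless of file size.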
If you think that approach will work, you are more than welcome to get the PDF from Tony, write the program and spit out the data.
Then you can check the data against the PDF to ensure that your approach worked. Or do you just assume that it worked because you wrote it? Is it possible to parse a PDF to pull out numbers? Of course.
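Mosher's two steps here — pull the numbers out, then verify the parse against the source — can be sketched briefly. This assumes the PDF text has already been extracted with a tool such as `pdftotext` (an assumption; real PDF extraction is messier and must be checked page by page, which is exactly his point):

```python
import re

# Invented text as it might come out of a PDF extractor.
page_text = """
Jan 1781   -2.4    Feb 1781   -1.1
Mar 1781    3.6
"""

def extract_numbers(text):
    """Pull signed decimal numbers out of extracted PDF text.
    Years like '1781' have no decimal point, so they are not matched."""
    return [float(tok) for tok in re.findall(r"-?\d+\.\d+", text)]

values = extract_numbers(page_text)

# The verification step: compare a count and a checksum against figures
# tallied by hand from the source document, rather than trusting the parse.
assert len(values) == 3
assert abs(sum(values) - 0.1) < 1e-9
```

The closing asserts are the part that "cuts it": without an independent tally to check against, a regex parse of a PDF is just an assumption that happened to run.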
That approach to sharing data is something I have devoted the past 5 years to changing, for obvious reasons.
Heh, tony, maybe moshe and Zeke are afraid to show you what you request for fear that you will find something wrong with it. There, that oughta work.
I started to go through the country files, and it seems uncannily like Hansen and Lebedeff, whereby data is 'borrowed' from another station up to 2000 km away. It's made worse by there being so few genuinely 'old' stations (let alone continuous ones, untainted by UHI or a station move), so the data for, say, Albania (first readings 1951) is being borrowed from (probably) Bologna in order to stretch it back to 1750.
It is made worse by the fact that ALL the old data has been substantially changed over the years by such projects as the EU 'Improv'. To call this a 'global' record seems very far-fetched, and to call a database with two-degree error bars 'scientific' (whilst honest) is surely stretching the phrase somewhat.
I am double-checking the data, as I find it difficult to believe that such a widely trumpeted database has so little meat to it, so it may be there is much more to it than meets the eye.
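For readers unfamiliar with the 'borrowing' criticised above: the Hansen-Lebedeff scheme amounts to a distance-weighted average of nearby station anomalies. A toy sketch with a linear taper to zero at 2000 km and invented numbers (BEST itself interpolates by kriging, so this illustrates the commenter's complaint, not BEST's actual method):

```python
def borrowed_estimate(stations, cutoff_km=2000.0):
    """Toy Hansen-Lebedeff-style estimate: weight each station's anomaly
    by a linear taper that falls to zero at cutoff_km.
    `stations` is a list of (distance_km, anomaly) pairs."""
    num = den = 0.0
    for dist, anom in stations:
        w = max(0.0, 1.0 - dist / cutoff_km)
        num += w * anom
        den += w
    if den == 0.0:
        raise ValueError("no stations within the cutoff radius")
    return num / den

# A nearby station dominates; one just outside the cutoff contributes nothing.
est = borrowed_estimate([(100.0, 0.5), (1900.0, 2.0), (2500.0, -3.0)])
```

The weights here are 0.95, 0.05 and 0, so the 100 km station almost entirely determines the result — which is why a lone distant donor station (the Bologna-to-Albania worry) matters so much when no close stations exist.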
stevenmosher: Sorry, this thread is not nested enough; I knew that but forgot to clearly specify what I was replying to. I meant, to Tony, that the data in the BEST download is super neat and easy to parse, as it is in a lined-up text format as well as in the MATLAB file. Even regex or Excel can do it, and it is obviously super simple for anything code-based.
I totally agree PDF is a terrible way to share data. Not only is it inaccessible to tooling, but it is internally stored as postscript – which is a language rather than data. That makes it prone to having the displayed data not follow the visual or logical flow. I’m sure everyone has had the experience of selecting text in a PDF and getting all kinds of jumping etc. This makes verification painstaking and absolutely necessary, so parsing a PDF properly usually defies automation (unless it was machine generated in a parser friendly way, which is rare).
Kudos for being so persistent and unwavering in requiring data and code, here and elsewhere. It seems to me that without this everything is lost, and with this, much of the shouting will fade away. At a meta level, this is the most important work being done in climate science, imo.
Gee thanks tngtgo, I’ve always enjoyed vet shows on television. And Tom and Jerry as well!
Tony, yer jest don’t have the time right now, what with the Olympics and everything.
Mosher, I've admired your independence for some time. I have no quibbles with this work, but why, when Prof. Muller says there may have been an MWP warmer than today, would he claim the recent rise in temperature is almost certainly due to CO2? It may be, but the MWP?
I think the answer is that Prof. Muller is trying to have it both ways.
On the one hand, he wants to be “open minded”, yet he is apparently afraid of alienating the “PC” CAGW consensus crowd.
I may be wrong, but this is my personal perception.
Good point, Max.
Certainly, that must be true. Why else would he say this?:
Clearly, the only possible explanation is that he is “afraid” of alienating the PC CAGW consensus crowd.
Muller, like all the other boffins employed by UC Berkeley, is so open-minded that his brain fell out.
Dave Springer, please reflect upon Bill Mackey’s noble example, which has contributed so greatly to the improvement of this forum.
Thank you, Dave Springer! :) :) :)
I don't see any contradiction there. The BEST project does not cover the MWP and has little or no impact on our understanding of just how warm it was or what caused it – Muller's opinion that it may have been warmer than today is obviously based on other areas of research. That doesn't mean that we can't attribute the warming we have seen in recent decades, even if one doesn't find Muller's own analysis of that question convincing. We have a great deal more data about the various factors which have been in play in the modern period than we do about the MWP.
Andrew, I realize that talking about the MWP may be a bit off-thread here, but since we don't seem to know what caused it with any certainty (or the LIA), and since Prof. Muller invoked it in his NYT op-ed piece, I was surprised at his level of certainty about attributing so much of the recent warming to CO2. Some, of course, maybe even all, but the MWP and other warm periods should perhaps keep our theorizing feet on the ground.
I have to admit I hadn’t read Muller’s NYT piece so if he muddied the waters himself by bringing up the MWP then fair enough.
Attribution of past short term events such as the MWP and LIA will always be more difficult than with current climate change because the further back we go the less data we have, both regarding the extent of the changes that took place and the various factors which were in play. But the fact that we can’t confidently attribute the MWP doesn’t logically mean we can’t attribute modern warming – that depends on the information we have about what has happened in the last 100 years or so, not 1,000 years ago. For any particular period whether it is the warming in recent decades, the early 20C warming, LIA or MWP we have to make the best judgement we can based on the information we have available.
The simple fit to CO2 is one of those hmmm things. You know that it is not that simple, but the fit is so nice it looks like obvious proof that CO2 done it. But if you ask yourself, “Self, what else has an ln(2) relationship?” Damn near everything can. So a curious type would look more closely at regions that looked a little too good, which is just as interesting as regions that don’t look as good.
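The "nice fit" under discussion is essentially a two-parameter regression of temperature against ln(CO2). A minimal sketch with invented numbers, showing how easy such a fit is to produce — which is the commenter's point: a good-looking log fit, by itself, proves little, because "damn near everything" slowly rising fits one:

```python
import math

# Synthetic, made-up numbers for illustration only (not BEST's data):
# CO2 concentrations (ppm) and a smooth warming series (deg C anomaly).
co2 = [290, 300, 310, 330, 360, 390]
temp = [0.00, 0.05, 0.12, 0.25, 0.48, 0.70]

# Ordinary least-squares fit of temp = a + b * ln(co2 / 280).
x = [math.log(c / 280.0) for c in co2]
n = len(x)
xbar = sum(x) / n
ybar = sum(temp) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, temp)) / sum(
    (xi - xbar) ** 2 for xi in x
)
a = ybar - b * xbar

# b * ln(2) is the warming per CO2 doubling implied by this fit.
sensitivity = b * math.log(2.0)
```

Two free parameters against any slowly rising curve will yield a plausible-looking `sensitivity`; the fit is a sanity check, as Mosher says above, not an attribution.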
I think Mosher and Zeke have found some job security :)
Zeke volunteers his time and I volunteer my time as well. I volunteered before they invited me to meetings.
I get to sit in the weekly meeting. I represent to the group what other researchers look for in terms of data products. I support people who want to use the data and people who are using it today. I don't write papers. I care that the data be available, in a usable format, to all who want to use it. I don't care about the views of those who want to use the data. My concern is that they be able to use it. Period. There are things I disagree with in the papers and announcements. This is not the Mosher show, so I try to correct the misunderstandings. Good criticisms I will pass on. Personal attacks will be met in standard Moshpit manner.
I don't know. The first thing that people should recognize is that the press release says things that go beyond what is shown in the paper. The paper doesn't do an attribution, although it's been characterized or framed as that.
Personally, I view the curve exercise as a sanity check of sorts. Nothing more, nothing less. Basically: can we explain the general shape of temperatures with a couple of parameters that express radiative forcing?
Can we "ball park it" with just a couple of parameters? Wow, shit, we can.
No you can't. Unless the "ballpark" equates to "anywhere".
Sorry, but you can. It's pretty simple: take the data, repeat the math.
Sorry but you can not.
And you know it.
And you know it well.
Thanks for posting this and for your comments.
It looks to me like the data themselves are an improvement, but the interpretations of what they mean are doubtful, in particular regarding a) the UHI impact on the record and b) the attribution of the observed warming to GHG emissions.
These two points constitute the essence of the scientific disagreement on whether or not CAGW is a real threat.
IOW we are left with the same open question we had before the new BEST report and the “C” in “CAGW” is still purely conjectural, despite the BEST interpretation.
A good summary Max. It seems that Anthony Watts’ most recent paper raises considerable concerns with regard to point (a) and as for point (b) the attribution question seems dubious in the light of Chief’s link, repeated here:
The ice core data indicates that CO2 changes FOLLOW changes in land surface temperatures. There is something (or many things) else out there that is affecting both, and I am still unsure what it is.
Since I do not believe that area averaging of a convenience sample can produce meaningful results, I have little interest in this complex number-crunching exercise. All we really know is that more stations show warming than cooling, and the accuracy of those measures is questionable. Moreover, the satellites do not corroborate these findings.
It's interesting that they have made much use of volcanoes in their latest work, such as El Chichón in 1982 and Pinatubo in 1991 (though it is slightly unfortunate that they have labelled them wrongly in their graph).
Curiously, in their FAQ section "Has global warming stopped?", they try to explain the current lack of warming by pointing out that there was similarly no warming from 1980 to 1995, without mentioning the two major volcanic eruptions that occurred during that period.
What the BEST fans are missing is this. As concrete and asphalt encroach upon a weather station, it is equivalent to a slow station move. Did BEST correct for that? It strikes me as ridiculous that UHI can be claimed not to bias the temperature record upwards. It is a fantasy.
It's worse than that. The SHAP adjustment took the stations with heat pollution and used those to bias the pristine stations upwards. The whole key to this is Leroy 1999 vs. Leroy 2010 as the standard by which stations are rated for quality. BEST used Leroy 1999, which failed to adequately address encroaching heat pollution; Leroy 2010 addresses this. In defense of BEST, their reanalysis was undertaken before Leroy 2010 officially (WMO-ISO) superseded Leroy 1999. They should, however, have known better, as they were shown the station quality problems by Watts in his lengthy volunteer effort, running since 2007, to physically examine hundreds or thousands of GHCN stations for compliance. Over 500 volunteers did the inspections all over the globe. The results clearly showed that NOAA was grossly negligent when it came to making sure that compliance metadata was correct. A congressional investigation also found NOAA's oversight to be in substantial need of improvement.
This promises to be a bigger circus event than Climategate. Pass the popcorn.
What you are describing is better termed Local Heat Island or LHI, not UHI. LHI is what Watts has been looking into. UHI is the difference between urban and rural stations, but LHI occurs in rural stations. BEST ignores LHI.
Agree. I like to call the overall effect ALW.
It’s a win-win – warmists get their anthropogenic warming and science moves forward.
Judith Curry said:
“That said, there are two interesting results in this paper, regarding their analysis of 19th century volcanoes and the impact on climate…”
. . . . . . . . . . . . . . . . . . .
On WUWT, Willis Eschenbach says:
“…The real problem is that many of these (volcanoes) occurred after or during the temperature drop that they are supposed to have caused …”
So BEST joins NOAA, the IPCC, GISS, CRU, and so many more in the long line of climate consensus hypesters with failed claims and/or less than worthy work products.
The hunger for climate crisis is surprisingly large.
lurker passing through, laughing | July 30, 2012 at 8:52 am |
Did you just intentionally self-lampoon, or are you really this absurd and I missed it up to now? Oh. Looking back, I see it.
Are you mocking the anti-science crowd by pretending to be one of them?
Not that it’s easy to tell. When this is all over, I expect a lot of people to say, “But that’s what _I_ was doing!” and few actual dismissalists to be found.
NOAA just produced a big, expensive waste of time claiming silly things about the weather. Hansen just wrote a childishly foolish paper on 'loaded dice'. CRU has been demonstrated to be useless. The IPCC has been well documented to have poor procedures and incorrect claims.
I would suggest that you are the one doing a self-lampoon.
And I deeply appreciate your unintended humor. You are part of a virtual ensemble of accidental comedians.
lurker, passing through laughing | July 30, 2012 at 11:11 am |
So, that’s a “yes” to my question then.
I am wondering if they are going to end up using the “Goldilocks” temperature analogy for volcanic impact or not? In a non-linear system some impacts are strongest at the “just right” point. That is so much easier than going on about Hyperbolic responses in systems with various time constants, inertia and capacities or sympathetic and non-sympathetic bifurcations and such.
BEST study = MANNIAN SCIENCE = hokey schtick science.
Looks like Anthony Watts has won gold in the competitive Climate Science Stakes.
Why all the hurly-burly over the lowest-quality data, covering the smallest of earth's heat reservoirs?
When we look at the best-quality data, covering the largest of earth's heat reservoirs, we see plain evidence of a large and sustained heat imbalance.
The earth's oceans have risen a full centimeter in the last eighteen months. Is this the beginning of the "acceleration of sea-level rise this decade" that climate-change scientists have predicted?
No amount of obsessive quibbling (on both sides) over decades-old north american land-station data is going to answer this key question.
“Nature cannot be fooled” and in particular, quibbling and squabbling and spinning and abusing cannot fool her. Isn’t that plain common-sense? :eek: :eek: :eek:
GRACE data suggested that the 2010 dip in sea levels was due to anomalous amounts of rain on land (La Nina-dependent). I suspect the reversion to the recent trend line is due to the rivers having finally run off the excess.
But Nature produces fools on a regular basis.
Fanny, I'm confused. I was told by no less than Teh Messiah himself that sea level was going to start dropping in 2009. Are you telling me that Teh Messiah lied?
Little grasshopper “Discord”, the whole Climate Etc. forum appreciates your confusion, yet there is little that we can do to remediate it. Fortunately, as you mature, your confusion will sensibly diminish … “tincture of time” is a sovereign remedy.
Therefore cultivate patience, little “Discord”! :) :) :)
I am really surprised people take Ms. Curry seriously. Apparently she didn’t take the university courses on critical thinking.
Sweeping generalizations like this, “In my opinion, their analysis is way over simplistic and not at all convincing,” make her about as credible as Sarah Palin. Any Youtuber could have come up with that.
I came here following up on the BEST paper and I am amazed people think this individual’s juvenile response worth the time of day. Something is definitely rotten in the state of Georgia.
Goodbye, Ms. Curry. I see no need to come back.
Bill Mackey, the absence of content-free personally abusive posts is regretted by few.
Thank you for departing this forum, Bill Mackey! :) :) :)
Ah, c’mon, I like the personal testimonial for ‘university courses on critical thinking’.
Also, Bill, please note that almost four years ago, Sarah Palin had a very wise thought about attribution, when she declined to blame man for all of it.
Grasshopper! We agree!
An extensive post is coming later today, critiquing Muller’s attribution argument.
And exactly what kind of response do you expect me to provide on a short time fuse to a reporter who asks whether or not I agree with Muller?
I believe that Muller’s attribution is incorrect, but he has a problem if it is true. How much colder would we be now than we are?
Warming over the last 250 years has produced powerful and beneficial climate change. Similar warming in the future will also produce powerful and beneficial climate change, since a warmer world has greater carrying capacity for life than a colder one. Granted, there will be regional and local winners and losers in the climate change, but this also happened in the past, and politics can settle those, as it has in the past.
So why would people of Muller’s persuasion bless the past and fear the future? It seems it’s the conscious or unconscious acceptance of unnecessary guilt, and this unnecessary guilt will interfere with future adaptation to climate change, as it already seriously has.
Ah yes, another “skeptical” armchair psychologist, reading into someone else’s psychology (when just yesterday you said you weren’t focused on Muller’s personality) to formulate a diagnosis.
Consider a far more likely interpretation – a concern about warming at rates that are greater than the recent past, extended out for a very long period of time.
It’s always interesting how some “skeptics” decide which concerns expressed by others are derived from guilt. I wonder what Dr. Freud might have had to say about that.
Three times in the last century and a half, temperature rose at the same rate, and only in the last of them was CO2 also rising. Phil Jones heself told me so. Blaming CO2 for the temperature rise of the last quarter of the last century may well be the grandest example yet of the Post Hoc, Ergo Propter Hoc logical fallacy.
Granted, there will be regional and local winners and losers in the climate change, but this also happened in the past, and politics can settle those, as it has in the past.
Can you give some examples of where “politics” “settle[d]” a climate change issue in the past?
Are you including “war” as “an expression of politics by other means”?
Are the politicians allowed to agree to CO2 mitigation strategy and for richer nations and regional winners to support regional losers to adapt? Or is that the wrong sort of “politics”.
Please remember that the net result will be beneficial. To the extent that man can warm the earth it should be rewarded rather than punished.
I do hope that human society can evolve better ways to deal with change than wars and other destructive urges. Misattribution of blame will hamper that evolution.
So no examples then.
Please remember that the net result will be beneficial.
King Elizabeth the IV and the 95th President of the US of North and South America might disagree as the WAIS disintegrates.
If I were a grammar nanny, I’d point out that “way over simplistic” is redundant X 2 – perhaps suggesting a protest that is too much?
Good thing I’m not a grammar nanny.
Goodbye, Ms. Curry. I see no need to come back.
There is always Grant Foster's 'Tamino', where adulation and hero worship are dispensed in shedloads.
Don’t let that door hit you on the way out.
nice Sarah Palin reference.
Now, if I recall correctly, Ms Palin managed to become a Governor and did quite well at it. What exactly are your accomplishments?
Man, there’s a lot to this new release.
I’m going to withhold further comment on this until I have time to really get into it, and am less excited about the opportunities presented.
Other than that: this is very impressive stuff compared with the general level of climatology to date, considering the difficulties of the sources, and in line with what ought to be expected of methodology and rigour, in my opinion.
(That’s very high praise, btw.)
It’s probably worth noting that BEST has looked at land-only temperatures rather than the more usually quoted combined land and ocean figures.
We can see that the warming on land is approx 1.2 deg C since the mid 19th century, as compared with 0.8 deg C for land and ocean combined – or 50% higher. Of course, this isn’t surprising: the sea will generally be expected to be slower to warm. And because the sea covers 70% of the globe, the land temperatures in many maritime areas are also moderated by the effect of the ocean.
I thought Doc Martyn, not usually known for siding with the IPCC, produced an interesting graph, in his comment on the Climate sensitivity thread, where he suggested that the best way to estimate climate sensitivity was to measure it. He came up with a figure of 2.2 deg C for 2 x CO2 on the basis that warming is currently 0.8 deg C. Of course, if you use 1.2 deg C you end up with 3.3 deg C of 2 x CO2 warming on land, which after all is where most of us choose to live.
And if it does warm by 3.3 deg C on land, it will also warm by the same amount everywhere, except that it may take a little longer.
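For what it’s worth, the scaling in the comment above is simple arithmetic; a minimal sketch (the 2.2, 0.8 and 1.2 deg C figures are taken from the comment, and the linear rescaling is the commenter’s assumption, not BEST’s method):

```python
# Back-of-envelope check of Doc Martyn's scaling, as described above.
# Assumption: the sensitivity estimate scales linearly with observed
# warming, so the land-only figure is the combined figure rescaled.
s_combined = 2.2    # deg C per CO2 doubling, inferred from 0.8 deg C warming
dt_combined = 0.8   # deg C, combined land + ocean warming to date
dt_land = 1.2       # deg C, land-only warming (about 50% higher)

s_land = s_combined * dt_land / dt_combined
print(f"Implied land-only 2 x CO2 warming: {s_land:.1f} deg C")  # -> 3.3
```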
“And if it does warm by 3.3 degC on land it will also warm by the same amount everywhere, except that it may take a little longer.” That is an assumption. Water vapor feedback tripling CO2 forcing is another. How does an increasing Diurnal Temperature Range fit with those assumptions?
No, it’s testable experimentally by measuring the sea and land temperatures over the course of a year. There is much greater temperature variation on land than in the ocean, and a much greater variation on land distant from the ocean compared with the variation on land close to it.
Kim on carbon and guilt. The Chifio, E M Smith, has a lovely post re establishing a ‘Church of the Sacred Carbon,’ a place fer positive affirmation and celebration of being a carbon based life form.
Some witty comments on suitable rituals too.
He is actually called ‘The Chiefio,’ (rueful ) lol. It’s late here in OZ.
Bill Mackey is tooo funny — and silly — times ten.
Whaddya bet he lurks, nose pressed to the glass, afraid now, after his drama queen number, to say anything. ….Lady in Red
Apropos this comment by Bill Mackey:
Sweeping generalizations like this, “In my opinion, their analysis is way over simplistic and not at all convincing,” make her about as credible as Sarah Palin…
….I’d like to throw a plug in for the brilliant documentary, Undefeated, about the Alaska governorship of Sarah Palin. It’s available from Netflix and a fine crash course on her political brilliance, Bill Mackey’s “thoughtful” comment aside. …..Lady in Red
The BEST Report . . . Climate Scientology gets a bad case of Kardashian Disease and goes for a cheap Public relations stunt.
Mikey Mann could learn a few lessons on image management.
Because he is really, really, really going to do better.
I think it might be very useful to highlight Ross McKitrick’s evaluation of the review process for the BEST paper, as well as his statistical critique of it. Why did Richard Muller not want Ross McKitrick to publish that he was a reviewer for JGR?
This is looking into the heart of a man, so you can butt out, moshe. Nevermind, I’d like your insight on this matter. Data and code optional.
Before I joined the Berkeley Earth project… guess who was sending me things he should not have been sending me.
hi Mosher, please, can you get the temp for 1789 for New Guinea? Maybe for New Zealand and Tasmania also?
Berkeley is keeping it a secret: how can they tell the temperature for an individual year by taking into consideration only the hottest minute in 24h – that is, temperatures for 365 minutes for the WHOLE year (if a leap year, 366 minutes) – AND COMPLETELY DISREGARDING THE OTHER MILLIONS OF MINUTES? It would be easier to pinpoint the winning ticket in a lottery than to ”GUESS” about the other millions of minutes. You can make much better cash by predicting the winning lottery number every time.
b] 2/3 of the planet’s surface is water – without monitoring… is that really irrelevant? c] if the satellite data is ”reliable” for the sea, why do they use millions of man-hours collecting data on land, when the lot could be done from the Berkeley office, from a satellite photo? I have lots of other questions for you; would you take the challenge, or…?
Moshpit, can you make me a sandwich? Turkey club, not too much mayo?
Yes, most of these people don’t get the very simple standard I try to hold myself and others to. I request the data as used and the code as run.
I don’t ask other people to do my work for me. They are busy. As long as they provide me the stuff they used to do their work, I am more than happy to do my own work.
In the beginning I asked for Hansen’s code. It was pretty simple. There were questions I wanted to ask that he didn’t ask. It’s not my place to demand that Hansen do his work MY WAY. I only ask for the tools to build on his work and add my twist.
Over the years that has morphed into this notion that people who work with data and code should somehow play step-and-fetch-it for anonymous commenters. Bullshit. That was never the point of free access. In fact, one of the objections to free access was that researchers would be bogged down with frivolous requests from random strangers.
My policy is pretty simple. I make my stuff available. It comes with a manual. RTFM, and if that is not enough, send me a personal mail. I help those who play by the same rulz I do, and I don’t ask anyone to do for me what I can do for myself.
Does this mean no sandwich?
Minor typo to fix:
The Berkeley Earth method differs from previous groups in several ways.
I can see the necessity of doing science by press release these days, still, this method is deeply flawed and should be eradicated as soon as practicable.
On the other hand, issuing pre-releases of a paper even before it is submitted to a journal for publication, is laudable, especially if it is done through a proper revision control system and all contributions are made readily visible (and are archived).
I believe these two requirements could be made compatible by appending an appropriate & obligatory Intellectual Property Statement to such pre-releases, which would effectively prohibit references to it in the MSM until such time as it is actually deemed “published” by the authors.
I think this practice should be promoted by all means, and those who are slow to comply are to wear a dunce cap for life.
I remember when Watts first published his data on site quality rankings:
Mosher and his poodle John V were straight in trying to discredit it. See Watts first post on this at Climate Audit in 2007: http://climateaudit.org/2007/09/12/ushcn-survey-results-based-on-33-of-the-network/
Mosher has always been determined to be wrong on the idea that site quality hasn’t impacted the quality of the temperature record. Likewise his insistence that CO2 has any correlation to temperature. Clever, but wrong.
Actually my estimate for the effect was 0.1C to 0.15C for best to worst.
See how that checks out.
Has BEST team overcooked the temperature data?
I’ve read the Watts paper a bit more and I don’t think it applies TOB (time of observation) bias adjustment or other instrument adjustments (?). In the Fall 2011 paper there were graphs that included TOB figures, but in this new paper there is only one short mention of TOB in the data section and no indication whether it was applied.
The problem if it’s not applied (I think) is that the NOAA figures being compared do include it, which raises the question of whether e.g. figure 20 is a valid comparison.
E.g. if you are comparing NOAA US at 0.308C/decade with TOB against good quality sites at 0.155C/decade without TOB, how do you know the difference is due to station quality rather than to the TOB adjustment? (Substitute any non-site-quality-specific adjustments for TOB adjustments.)
I notice in figure 18 (http://wattsupwiththat.files.wordpress.com/2012/07/watts-et-al-2012-figures-and-tables-final1.pdf) the adjusted trends seem consistent* irrespective of station quality. Of course you’d expect that if the adjustments worked. The raw trends however are not consistent across station quality, in particular high quality stations (1+2) are a lot lower in trend but the others don’t seem very different (the category 5 discrepancy is odd, might be due to the uneven distribution in figure 3?).
*There are no error ranges on any of the graphed figures, so it’s unclear which figures are consistent and which are not. That makes it practically impossible to analyze anything.
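As an aside, a per-decade trend figure like 0.308C/decade is just an ordinary least-squares slope fit to a monthly anomaly series; a minimal illustrative sketch with synthetic, noiseless data (the 0.308 figure is reused from the comment above; nothing here reproduces the actual NOAA or Watts calculations):

```python
import numpy as np

# Build a synthetic 30-year monthly anomaly series with a known trend.
rate_per_decade = 0.308               # deg C / decade, true slope
months = np.arange(30 * 12)           # month index, 0..359
t_decades = months / 120.0            # time axis in decades
anoms = rate_per_decade * t_decades   # noiseless anomalies

# An OLS straight-line fit recovers the trend as the first coefficient.
slope, intercept = np.polyfit(t_decades, anoms, 1)
print(f"Fitted trend: {slope:.3f} C/decade")  # -> 0.308
```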
Section 2.1 seems to include TOB, unless I read it wrong
“Whenever I’m working on my own material, I avoid arbitrary deadlines and like to mull things over for a few days. Unfortunately that didn’t happen in this case. There is a confounding interaction with TOBS that needs to be allowed for, as has been quickly and correctly pointed out.
When I had done my own initial assessment of this a few years ago, I had used TOBS versions and am annoyed with myself for not properly considering this factor. I should have noticed it immediately. That will teach me to keep to my practices of not rushing.”
It is interesting to look at the BEST TMAX data for the USA.
I calculate 5-year averages by working back 5 years at a time from the last monthly anomaly.
Top 10 Highest TMAX 5 year averages
The current 5-year period is ranked No. 4 for the United States and is only 0.02C warmer than 1951 – 1956 and only 0.06C warmer than 1931 – 1936.
 1  2001 – 2006  0.82
 2  1996 – 2001  0.66
 3  1986 – 1991  0.57
 4  2006 – 2011  0.43
 5  1951 – 1956  0.41
 6  1931 – 1936  0.37
 7  1936 – 1941  0.36
 8  1976 – 1981  0.20
 9  1941 – 1946  0.20
10  1926 – 1931  0.07
Why isn’t TMAX higher?
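The ranking procedure described in that comment can be sketched as follows; the anomaly series here is synthetic stand-in data (the real exercise would use the BEST TMAX monthly anomalies for the US, which are not reproduced here):

```python
import random

# Synthetic stand-in for a monthly TMAX anomaly series (assumed data,
# not the actual BEST numbers): 112 years of monthly values from 1900.
random.seed(0)
start_year = 1900
anomalies = [random.gauss(0.01 * m / 12, 0.5) for m in range(112 * 12)]

# Work back 5 years (60 months) at a time from the last monthly anomaly,
# averaging each block, as the comment describes.
blocks = []
end = len(anomalies)
while end >= 60:
    avg = sum(anomalies[end - 60:end]) / 60
    last_year = start_year + end // 12
    blocks.append((last_year - 5, last_year, avg))
    end -= 60

# Rank the 5-year blocks by average anomaly, warmest first.
top10 = sorted(blocks, key=lambda b: b[2], reverse=True)[:10]
for rank, (y0, y1, avg) in enumerate(top10, 1):
    print(f"{rank:2d}  {y0} - {y1}  {avg:+.2f}")
```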
STUDENTS FROM BERKELEY UNI:
Earth heat / water – that’s horoscope / zodiac entertainment. Jumping from earth to ocean, to Arctic, to Antarctic, is pretend knowledge, for confusing the ignorant. Official GLOBAL temperature is: ”from the ground and sea-surface, all the warmth to the edge of the troposphere, where oxygen & nitrogen finish – all of it, not 6 feet from the ground, not 50m, or 1234m altitude, but all the warmth”. Monitoring 6 feet off the ground is a sick joke.
Heat that was in the magma 3 days ago is released in the sea – if that counts as sea heat, then the heat in the center of the earth + the heat stored in the plutonium must be incorporated too… cherry picking is not a science.
1] on the bottom of the sea, the earth’s crust is thinner; water gets warmed from the bottom, by the geothermal heat. 2] 97% of the Fault-line is on the bottom of the sea. One part of the tectonic plates gets more active -> hot vents and submarine volcanoes get more active -> they warm up the water -> currents spread that heat, we call it El Nino; then another part is more active – La Nina. Yes, tectonic plates walk same as you, left foot – right foot (bigger movements, and bigger El Nino / La Nina, are associated with earthquakes). Therefore, whether that heat is in the magma or released in the sea is the same as shifting money from one pocket into the other – it will not make you richer or poorer.
It becomes ”official” GLOBAL temperature when it is released from the sea into the air -> the troposphere expands instantly and wastes that extra heat in a jiffy. Oxygen & nitrogen cannot waste that heat before it is released from the sea, or from the volcano, or from the plutonium. Stored heat in the sea, magma, plutonium is called ”STORED HEAT”, not GLOBAL temperature!
The shonks monitor only the ”hottest minute in 24h” anyway. Most of the heat from the sea is released in the other 1439 minutes of the 24h, which is irrelevant for them. Yes, up to 1000 times more heat is released during the ”irrelevant minutes”. Not 1000%, but 1000 times more heat released – the ”irrelevant heat” for the shonks…
They constantly repeat like a scratched record: ”climate sensitivity – climate sensitivity” – but are scared to acknowledge the sensitivity of oxygen & nitrogen in expanding / shrinking with changes of temperature + the speeding up of ”vertical winds” when it gets warmer close to the ground – and what the temp is up there, where they expand to when warmed extra.
Students, ask your ”Berkeley scientist” about that potato. Dr Mosher avoids it as the Devil avoids the cross. Sorry Doc, nothing personal; the truth is paramount, and urgent!!! http://globalwarmingdenier.wordpress.com/q-a/
Pielke Pere shakes it up, explaining the tectonism of the Watts and of the McNider paper. Read it and weep.
re yr comment tngtgo, here’s a book I have on my shelves: ‘What is this thing called science?’ by Dr Alan Chalmers, Uni of Qld Press, 1982… (say, could be a relative of mine.) …Introduction… ‘In modern times, science is highly esteemed… But what, if anything, is so special about science? What is the “scientific method” that allegedly leads to especially meritorious or reliable results? This book is an attempt to elucidate and answer questions of that kind…’
Thanks, I’ve been able to read bits online.
One problem with Mosher’s approach is obvious. In asking for a cup of science, he gives a test for existence.
In so implying that science does not exist, he kneecaps a certain person’s viewpoint, as if they are unwittingly representing an ideal, perhaps not realizing the intangible nature of unthings and nonthings.
However, next he states that science is what scientists do. He has only transferred the problem of holding an idealized object, not removed “the problem”. Now he has an idealized person or group. They are virtual, as they cannot be assigned any characteristics such as “red headed” or “tall” or “just took a poop”. The only characteristic she/he may be assigned is that of doing science. Which returns us to the virtual representation, not existent unless we make it a part of the proposition. Which Mosher does.
Since it is a false proposition that Mosher’s scientist has red hair, or is a certain age, or has any other characteristic assignable I can safely assure him that I can extract a cup of science ONLY if he provides me with a gallon of his virtual scientist.
Did I miss the bit where they validated the 1979 to 2012 part against the satellites? If they don’t agree, then the entire exercise is invalid.
Using Berkeley logic we can conclude that increasing CO2 has dramatically reduced volcanic activity.
“A new release from Berkeley Earth Surface Temperature
Posted on July 29, 2012”
The data presented to the US Senate by Christy shows a clear flattening off of the average global temperature as measured by satellites after 2000. The Berkeley group needs to explain why their new data does not show a skerrick or any sign of this flattening. Does their procedure automatically select ground measurements when they are in conflict with satellite? Do they have some other reason to ignore satellite data? If so they should tell us.