by Judith Curry
A new blockbuster paper published today by NOAA:
These results do not support the notion of a “slowdown” in the increase of global surface temperature.
Color me ‘unconvinced.’
Possible artifacts of data biases in the recent global surface warming hiatus
Thomas R. Karl, Anthony Arguez, Boyin Huang, Jay H. Lawrimore, James R. McMahon, Matthew J. Menne, Thomas C. Peterson, Russell S. Vose, Huai-Min Zhang
Abstract: Much study has been devoted to the possible causes of an apparent decrease in the upward trend of global surface temperatures since 1998, a phenomenon that has been dubbed the global warming “hiatus.” Here we present an updated global surface temperature analysis that reveals that global trends are higher than reported by the IPCC, especially in recent decades, and that the central estimate for the rate of warming during the first 15 years of the 21st century is at least as great as the last half of the 20th century. These results do not support the notion of a “slowdown” in the increase of global surface temperature.
The paper has just been published in Science Express [link] (apparently will be open access).
Here is the main summary diagram from the paper:
JC’s initial reactions
I received this several days ago, from an (international) journalist asking for comments, my quick initial reactions provided below:
The greatest changes in the new NOAA surface temperature analysis are to the ocean temperatures since 1998. This seems rather ironic, since this is the period with the greatest coverage of data and the highest quality of measurements – ARGO buoys and satellites don’t show a warming trend. Nevertheless, the NOAA team finds a substantial increase in the ocean surface temperature anomaly trend since 1998.
In my opinion, the gold standard dataset for global ocean surface temperatures is the UK dataset, HadSST3. A review of the uncertainties is given in this paper by John Kennedy http://www.metoffice.gov.uk/hadobs/hadsst3/uncertainty.html. Note, the UK group has dealt with the same issues raised by the NOAA team. I personally see no reason to use the NOAA ERSST dataset; I do not see any evidence that the NOAA group has done anywhere near as careful a job as the UK group in processing the ocean temperatures.
I am also unconvinced by NOAA’s gap filling in the Arctic, and in my opinion this introduces substantial error into their analysis. I addressed the issue of gap filling in the Arctic in this recent publication: Curry JA, 2014: Climate science: Uncertain temperature trends. Nature Geoscience, 7, 83-84.
Relevant text:
Gap filling in the Arctic is complicated by the presence of land, open water and temporally varying sea ice extent, because each surface type has a distinctly different amplitude and phasing of the annual cycle of surface temperature. Notably, the surface temperature of sea ice remains flat during the sea ice melt period roughly between June and September, whereas land surface warming peaks around July 1. Hence using land temperatures to infer ocean or sea ice temperatures can incur significant biases.
With regards to uncertainty, in their ‘warmest year’ announcement last January, NOAA cited an error margin in the global average surface temperature anomaly of 0.09°C. The adjustments to the global average surface temperature anomaly are within the error margin, but the large magnitude of the adjustments further supports a larger error margin. Yet they now cite a substantially greater trend for the period 1998-2014, one that is statistically greater than zero at the 90% confidence level.
My bottom line assessment is this. I think that uncertainties in global surface temperature anomalies are substantially understated. The surface temperature data sets that I have confidence in are the UK group and also Berkeley Earth. This short paper in Science is not adequate to explain and explore the very large changes that have been made to the NOAA data set. The global surface temperature datasets are clearly a moving target. So while I’m sure this latest analysis from NOAA will be regarded as politically useful for the Obama administration, I don’t regard it as a particularly useful contribution to our scientific understanding of what is going on.
CATO comments
Pat Michaels sent me these comments, which will be posted at the CATO web site:
IS THERE NO “HIATUS” IN GLOBAL WARMING AFTER ALL?
Patrick J. Michaels, Richard S. Lindzen, Paul C. Knappenberger
A new paper, from Thomas Karl and several co-authors[1], that removes the “hiatus” in global warming, will doubtless receive much attention in both scientific and policy circles. As with many scientific publications, Karl et al. prompts many serious scientific questions.
While this will be heralded as an important finding, the main claim[2] that it uncovers a significant recent warming trend is certainly dubious. The significance level (.10) is hardly normative and the use of it certainly will prompt many readers to question the reasoning behind the use of such a lax standard.
The treatment of the buoy sea-surface temperature (SST) data was guaranteed to put a warming trend in recent data. They were adjusted upwards 0.12°C to make them “homogeneous” with the longer-running temperature records taken from engine intake channels in marine vessels. As has been acknowledged by numerous scientists, the engine intake data are clearly contaminated by heat conduction from the structure, and they were never intended for scientific use. On the other hand, environmental monitoring is the specific purpose for the buoys. Adjusting good data upwards to match bad data seems questionable, and the fact that the buoy network becomes increasingly dense in the last two decades means that this adjustment must put a warming trend in the data.
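The density argument in the paragraph above can be illustrated with a toy calculation. The sketch below is entirely synthetic (a flat “true” anomaly, a ship bias of exactly +0.12°C, and a buoy share growing linearly from 0% to 80% are assumed for illustration; none of these are the actual NOAA numbers beyond the 0.12°C offset quoted above). It shows mechanically why adjusting the buoys up to match the ships raises the blended trend relative to the unadjusted blend:

```python
import numpy as np

years = np.arange(20)                    # 20 years of annual means
true_anom = np.zeros(20)                 # assume a flat "true" anomaly
ship_bias = 0.12                         # ships assumed to read ~0.12 °C warm
buoy_frac = np.linspace(0.0, 0.8, 20)    # buoy share of observations grows

# Unadjusted blend: the growing share of (unbiased) buoys dilutes the warm
# ship bias, creating a spurious cooling trend in the blended record.
unadjusted = (1 - buoy_frac) * (true_anom + ship_bias) + buoy_frac * true_anom

# Karl-style adjustment: add +0.12 °C to the buoy data to match the ships.
adjusted = (1 - buoy_frac) * (true_anom + ship_bias) \
           + buoy_frac * (true_anom + ship_bias)

def trend(series):
    """Ordinary least-squares slope, converted to °C per decade."""
    return np.polyfit(years, series, 1)[0] * 10

print(f"unadjusted blend trend: {trend(unadjusted):+.3f} °C/decade")
print(f"adjusted blend trend:   {trend(adjusted):+.3f} °C/decade")
```

Whether the adjustment is right or wrong is a separate question (adjusting the ships down to the buoys would give the same anomaly trend), but the sketch shows why removing the spurious cooling necessarily adds warming to the recent blended record as buoy density increases.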
The extension of high-latitude arctic land data over the Arctic Ocean is also questionable. Much of the Arctic Ocean is ice-covered even in high summer, so that the surface temperature must remain near freezing. Extending land data out into the ocean will obviously induce substantially exaggerated temperatures.
Additionally, there are multiple measures of bulk lower atmosphere temperature that are made independently from surface measurements and which indicate the existence of a “hiatus”[3]. If the Karl et al. result were in fact robust, it could only mean that the disparity between surface and midtropospheric temperatures is even larger than previously noted. Getting the vertical distribution of temperature wrong invalidates virtually every forecast of sensible weather made by a climate model, as much of that weather (including rainfall) is determined in large part by the vertical structure of the atmosphere.
Instead, it would seem more logical to seriously question the Karl et al. result in light of the fact that, compared to those bulk temperatures, it is an outlier, showing a recent warming trend that is not in these other global records.
[1] Karl, T. R., et al., Possible artifacts of data biases in the recent global surface warming hiatus. Scienceexpress, embargoed until 1400 EDT June 4, 2015.
[2] “It is also noteworthy that the new global trends are statistically significant and positive at the 0.10 significance level for 1998-2012…”
[3] Both the UAH and RSS satellite records are now in their 21st year without a significant trend, for example
GWPF comments
Received via email from GWPF:
Key pitfalls of the paper:
- The authors have produced adjustments that are at odds with all other surface temperature datasets, as well as those compiled via satellite.
- They do not include any data from the Argo array that is the world’s best coherent data set on ocean temperatures.
- Adjustments are largely to sea surface temperatures (SST) and appear to align ship measurements of SST with night marine air temperature (NMAT) estimates, which have their own data bias problems.
- The extent of the largest SST adjustment made over the hiatus period, supposedly to reflect a continuing change in ship observations (from buckets to engine intake thermometers), is not justified by any evidence as to the magnitude of the appropriate adjustment, which appears to be far smaller.
1. They make 11 changes (not all are explained) producing the ERSSTv4 Sea Surface Temperature (SST) dataset that includes new estimates for the different ways SSTs are measured from ships (intake or buckets). They also add 0.12°C to each buoy measurement to bring them in line with those taken from ships. These issues have been raised before by the UK Met Office when compiling their HadSST3 ocean surface temperature dataset, see, ‘A review of uncertainty in in situ measurements and data sets of sea surface temperature’
2. The greatest changes are made since 1998, which is interesting because this is when we have the highest quality of data and global coverage using several methods. Only this analysis finds any increase in global annual average surface temperature over this “hiatus” period. The authors have produced a dataset that is at odds with other surface temperature datasets, as well as those compiled via satellite.
3. The authors start their trend estimates in 1998 and 2000. This has long been considered unwise, as 1998 is a very strong El Niño year and 1999-2000 is a much cooler La Niña period. The difference between them distorts their trend estimates. For example, their 1998-2014 trend is 0.106 +/- 0.058°C per decade. Starting two years later (during La Niña-influenced years) yields a trend of 0.116 +/- 0.067°C per decade, as one would expect from starting at a lower temperature. Ignoring these caveats, the authors say their analysis produces twice as much warming for 1998-2014 as earlier estimates. Their conclusion is, ironically, based on inbuilt biases in their analysis.
Their Fig. 1 shows that, even with their updates, the “hiatus” is reduced only with the use of these inappropriate start and end points.
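The start-year sensitivity described in point 3 is easy to reproduce with a toy series. The sketch below uses entirely synthetic numbers (a steady 0.1°C/decade underlying trend with a single hypothetical +0.2°C El Niño spike placed in 1998; this is illustrative, not the real temperature record) to show how moving the start point of an ordinary least-squares trend by two years changes the result:

```python
import numpy as np

years = np.arange(1998, 2015)            # 1998-2014 inclusive
anom = 0.01 * (years - 1998)             # underlying 0.1 °C/decade trend
anom[0] += 0.2                           # hypothetical El Niño spike in 1998

def ols_trend(x, y):
    """Ordinary least-squares slope, in °C per decade."""
    return np.polyfit(x, y, 1)[0] * 10

t_1998 = ols_trend(years, anom)          # window starts on the warm spike
mask = years >= 2000
t_2000 = ols_trend(years[mask], anom[mask])  # window starts after the spike

print(f"1998-2014 trend: {t_1998:.3f} °C/decade")
print(f"2000-2014 trend: {t_2000:.3f} °C/decade")
```

A single warm year at the start of a short window pulls the fitted slope down (and, symmetrically, a cool La Niña start pulls it up), which is why trend claims over 15-to-17-year windows anchored at 1998 or 2000 are so sensitive to the chosen endpoints.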
4. Even with the 11 changes to their SST database and the problem of start and end dates, the authors admit that their results are statistically significant only at the 0.10 level, and in some cases not even that.
“I believe their estimates of the error in their decadal trend figures are far too small. They quote the error in a 15-year period to a precision of one thousandth of a degree C. In their report the authors admit that their error analysis is not definitive and that looking at them another way invalidates their trend conclusions,” said Dr David Whitehouse, science editor of the GWPF.
5. Note that trends that include 2014 and 2015 must be treated with caution due to a recently persistent very warm feature in the NE Pacific that is affecting global SST estimates.
6. In addition, they do not include any data from the Argo array that is our best coherent data set on ocean temperatures. The authors state this is because Argo temperature data is not surface data. However, ship-derived temperatures can be from as much as 15 m below the surface. The Argo array samples 5 m below the top of the ocean. From 2004 to 2013 it shows considerable variation and little trend. The non-ARGO data aptly demonstrates the problem of starting trend analysis in 1998 or 2000.
Source: ‘Unabated planetary warming and its ocean structure since 2006’ Nature Climate Change, 2 February 2015. Black line: 5 m optimally interpolated (OI) ARGO; red lines: NOAA OI SST v2
7. Their conclusions are also at odds with satellite data that shows no trend in the past 16-years or so.
Source: http://nsstc.uah.edu/climate/index.html and http://www.remss.com/research/climate
8. Extending a change in ship observations (from buckets to engine intake thermometers) to the present time had the largest impact on the SST adjustments over the hiatus period, per Karl et al 2015:
“Second, there was a large change in ship observations (i.e., from buckets to engine intake thermometers) that peaked immediately prior to World War II. The previous version of ERSST assumed that no ship corrections were necessary after this time, but recently improved metadata (18) reveal that some ships continued to take bucket observations even up to the present day. Therefore, one of the improvements to ERSST version 4 is extending the ship-bias correction to the present, based on information derived from comparisons with night marine air temperatures. Of the 11 improvements in ERSST version 4 (13), the continuation of the ship correction had the largest impact on trends for the 2000-2014 time period, accounting for 0.030°C of the 0.064°C trend difference with version 3b.”
Ref (18) is a 2011 paper by Kennedy et al. It states (paragraph 3.1) “Dating the switchover from uninsulated canvas buckets to insulated rubber buckets is problematic as it is not clear how quickly the practice of using insulated buckets was adopted. … Based on the literature reviewed here, the start of the general transition is likely to have occurred between 1954 and 1957 and the end between 1970 and 1980.”
A 2010 review article “Effects of instrumentation changes on SST measured in situ” by Kent, Kennedy, Berry and Smith states that “Models of corrections for wooden and uninsulated canvas buckets show the adjustments to be five to six times greater for the canvas buckets.”
So post-1980 adjustments to bucket measurements should be very small (under 0.1°C). Moreover, by 2000 ship measurements were a minority of total measurements, and all types of bucket were a small proportion of ship measurements (see Figs. 2 and 3 of Kent et al. 2010). These facts imply that post-2000 adjustments warranted by the use of bucket measurements on some ships should be negligible.
“The justification given for the change that had the largest impact on trends for the 2000-2014 time period – continuing to adjust ship SST measurements by reference to night marine air temperature (NMAT) data, ‘which have their own particular pervasive systematic errors’ (Kennedy 2014) – i.e. that some ships still continue to take bucket observations, appears to support only a very small adjustment,” said Nic Lewis, an independent climate scientist.
In summary
This is a highly speculative and slight paper that produces a statistically marginal result by cherry-picking time intervals, resulting in a global temperature graph that is at odds with those produced by the UK Met Office and NASA.
Caution and suitable caveats should be applied in using this paper as evidence that the global annual average surface temperature “hiatus” of the past 18 years has been explained.
JC summary
I am posting this at 2 p.m. ET (when the press embargo is released). I will provide all the relevant links as soon as available, and will update with links to the forthcoming articles from the MSM; I am particularly interested in what ‘consensus’ supporting scientists have to say about this.
UPDATES:
Post at WUWT by Anthony Watts and Bob Tisdale [link]
From Carbon Brief:
Prof Sir Brian Hoskins, chair of the Grantham Institute at Imperial College London, says:
“[I]t had been thought that the reduction in surface warming must be due to natural variation in the heat exchanged between the atmosphere and ocean. Now it appears that any such exchange of heat between the atmosphere and ocean has not been large enough to obscure the global warming trend.”
But we shouldn’t dismiss the presence of a “slowdown” in surface warming just yet, warns Osborn. He says:
“There are other datasets that still support a slowdown over some recent period of time, and there are intriguing geographical patterns such as cooling in large parts of the Pacific Ocean that were used to support explanations for the warming slowdown.”
Neither should scientists stop seeking to understand the role of natural decadal variability in influencing short-term trends in climate, Osborn says. Dr Peter Stott, head of climate monitoring and attribution at the Met Office Hadley Centre, echoes this point, saying:
“[N]atural variability in the climate system or other external factors have still had an influence and it’s important we continue research to fully understand all the processes at work.”
On the whole, scientists seem to welcome the new study in terms of its contribution to fine-tuning the global surface temperature record. But the so-called “hiatus” – its causes, consequences and even its very existence – is a multi-faceted topic. Forster predicts:
“I still don’t think this study will be the last word on this complex subject.”
Post from the International Surface Temperature Initiative [link]
Doug MacNeal’s take [link].
Seth Borenstein from the AP:
Several outside scientists contacted by The Associated Press said the new and previous adjustments are sound. Kevin Trenberth, climate analysis chief at the National Center for Atmospheric Research, said the new work was “good and careful analysis” but only confirms what most scientists already knew, that there was no such hiatus.
“NOAA is confirming what we have been saying for some time that the ‘hiatus’ in global warming is spurious,” Berkeley team chief and physicist Richard Muller said in an email. Muller said global warming continues but in “many fits and spurts.”
John Christy of the University of Alabama Huntsville, one of the minority of scientists who dispute the magnitude of global warming, said the Karl paper “doesn’t make sense” because satellite data show little recent warming. “You must conclude the data were adjusted to get this result” of no warming pause, Christy wrote in an email. “Were the adjustments proper? I don’t know at this point.”
From Mashable:
Scientists who have investigated the warming hiatus or are otherwise involved in assessing climate change on various timescales told Mashable that the study’s key shortcoming is that it does what mainstream climate scientists have long criticized the climate contrarians, often now referred to as “climate denialists,” for doing: cherry-picking start and end dates to arrive at a particular conclusion.
Gerald Meehl, a climate researcher at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, told Mashable in an email that while he finds the new study laudable for improving temperature measurements, there are flaws in how the researchers interpreted the data. For example, Meehl says there is still a lower warming trend from 1998 to 2012 compared to the previous base period of 1950 to 1999, “… Thus there is still a hiatus defined in that way.”
Meehl says that adding two years to the time period by including 2013 and then 2014, which was a record warm year, makes the warming trend appear to be 38% larger than previous studies that did not include those two years.
“My conclusion is that even with the new data adjustments, there still was a nominal hiatus period that lasted until 2013 with a lower rate of global warming than the warming rate of the last 50 years of the 20th century,” Meehl says, “and a factor of two slower warming than the previous 20 years from the 1970s to 1990s.”
Lisa Goddard, director of the International Research Institute for Climate and Society (IRI) at Columbia University, told Mashable that the study does not support the conclusion that global warming didn’t slow down for a relatively short time period.
“It is clear that Karl et al. have put a lot of careful work into updating these global products,” Goddard said in an email. “However, they go too far when they conclude that there was no decadal-scale slowdown in the rate of warming globally. This argument seems to rely on choosing the right period — such as including the recent record breaking 2014.”
Another senior climate researcher, Kevin Trenberth of NCAR, says the hiatus depends on your definition of the term. To him, global warming never stopped, as climate skeptics argue, because most of the extra heat from manmade greenhouse gases such as carbon dioxide was redirected deep into the oceans during the 1998 to 2012 period. However, surface temperatures did warm more slowly during this time.
“I think the article does emphasize that the kind of variation is now much more within the realm of expectations from natural variability, but it is a bit misleading in trying to say there is no hiatus,” he said in an email conversation.
Michael Mann, the director of the Earth System Science Center at Penn State University, says the study helps drive home the point that “global warming continues unabated as we continue to burn fossil fuels and warm the planet.”
The new study reveals yet again that surface temperature data has many flaws, says Peter Thorne, a climate researcher at Maynooth University in Ireland. In an interview, Thorne said critics of climate science are incorrect in charging that global warming is an artifact of urban heat islands and other influences on thermometers, but at the same time, our approach to taking the Earth’s temperature needs to be rethought.
Thorne says more investments should go toward establishing redundant, carefully calibrated temperature observing networks where data is currently sparse, such as the Arctic, much of Africa and especially the oceans.
“The uncertainty in the marine records is far greater than in the land records,” he said.
“If we put enough good quality, traceable, redundant observations around the globe we can make sense of all the other observations that aren’t so good,” he said. “There is no need to bequeath onto researchers in 50 years time a similar mess.”
I don’t see anything here about the clear conflict of interest, with NOAA being directly administered by a government that so clearly wants this result.
Well yes this thought occurred to me also. It would have been more convincing if they first provided a detailed analysis of what they did to the ocean data and compared it with the HADSST data sets – in a journal that allowed a lengthy contribution. Instead, they went for a politically desirable headline first.
If you took the politics out of global warming ‘science’, there would be little left to merit concern.
The worst thing about this paper is the data it uses. Or rather the data it doesn’t use.
How about if we use ARGO, actually designed among other things to measure global SST with the needed precision, for the past 8 years and see what it says the global trend is. All other instrument types should be adjusted to match ARGO because it’s the gold standard by far.
Unabated planetary warming and its ocean structure since 2006 ~Nature Climate Change 5, 240–245 (2015)
My emphasis. This is important. SST trend from 2006 is 0.05C/decade from the only ocean temperature sensors, 4,000 ARGO diving buoys, with the range and precision for the job.
Figure 1: Globally averaged SST anomaly.
http://www.nature.com/nclimate/journal/v5/n3/images/nclimate2513-f1.jpg
5-m Argo OI temperature (black), NOAA OI SST v2 (ref. 27) masked to the same area as the Argo OI (solid red), NOAA OI SST v2 without the Argo mask (dashed red). All figures are 12-month running means unless otherwise noted.…
Prof Curry,
I agree. Bundling all these adjustments into one package that produces a politically useful conclusion seems inherently problematic. It’s a tone-deaf action guaranteed to arouse concern about integrity of the process.
Doesn’t this just shift the focus of the debate from the surface temperature trend to the model–observation and surface–lower-atmosphere divergences? Small adjustments to the surface temperature trend seem far less important than these growing divergences, which fundamentally challenge the climate science paradigm.
agreed. This new analysis changes the model-obs comparison story slightly; it reduces somewhat the discrepancy in the hiatus period, but increases the discrepancy for the earlier period.
Question: which is a more serious challenge to the climate science paradigm: the increase in the model–observation temperature divergence, or the increase in the surface–lower-atmosphere temperature divergence? Or both, or neither?
if I understand this correctly, the former suggests that the GCMs’ projections are not reliable (although this might result from small problems in the GCMs). The latter suggests that the models poorly represent one or more fundamental climate processes.
From the bleachers, it seems that neither topic gets much attention in the literature. That is, however, a commonplace with paradigm challenges. They’re not considered key question by the consensus until they produce a compelling alternative paradigm.
There is a detailed description of their SST analysis, with comparisons to the HadSST and COBE data sets:
Boyin Huang, Viva F. Banzon, Eric Freeman, Jay Lawrimore, Wei Liu, Thomas C. Peterson, Thomas M. Smith, Peter W. Thorne, Scott D. Woodruff, and Huai-Min Zhang, 2015: Extended Reconstructed Sea Surface Temperature Version 4 (ERSST.v4). Part I: Upgrades and Intercomparisons. J. Climate, 28, 911–930.
Wei Liu, Boyin Huang, Peter W. Thorne, Viva F. Banzon, Huai-Min Zhang, Eric Freeman, Jay Lawrimore, Thomas C. Peterson, Thomas M. Smith, and Scott D. Woodruff, 2015: Extended Reconstructed Sea Surface Temperature Version 4 (ERSST.v4): Part II. Parametric and Structural Uncertainty Estimations. J. Climate, 28, 931–951.
http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-14-00006.1
http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-14-00007.1
Paywalled for me, but you can find pdfs of both by googling.
curryja says: “Well yes this thought occurred to me also. It would have been more convincing if they first provided a detailed analysis of what they did to the ocean data and compared it with the HADSST data sets…”
NOAA published those papers back in November. See:
Huang et al. (2014) Extended Reconstructed Sea Surface Temperature version 4 (ERSST.v4), Part I. Upgrades and Intercomparisons, and
Liu et al. (2014) Extended Reconstructed Sea Surface Temperature version 4 (ERSST.v4): Part II. Parametric and Structural Uncertainty Estimations.
I discussed the ERSST.v4 data in a number of posts last year, including Has NOAA Once Again Tried to Adjust Data to Match Climate Models?
It’s remarkable that you point out NOAA running to the media and politics ahead of professional peer review…
“While this will be heralded as an important finding, the main claim[2] that it uncovers a significant recent warming trend is certainly dubious. The significance level (.10) is hardly normative and the use of it certainly will prompt many readers to question the reasoning behind the use of such a lax standard.”
It’s clear they began with the question: “What can we do to make it look like there’s no pause?” A terrible way to approach a science problem, obviously, but we have to remember it’s not really about the science.
(aka pokwrguy)
All this buckets vs engine-room intake thing played out in the 1970s. I don’t know how the heck they manage to invoke it to smother the 1998 El Niño. This is such a blatant frig that it’s laughable.
The irony is that it was just that steep warming, at the end of the last century, that got all the alarm-bells ringing. It was trying to reproduce that up-tick that caused the climate models to get it all wrong and run hot ever since.
Now if they want to run with 0.1 deg C/decade since 1950, steady right up to the present, then they need to explain why the model mean is about 0.28 deg C/decade.
Pay attention to the marble..
Obama’s ‘climate’ legacy is being buried in the secret trade deal. He’s got to get enough political propaganda out there to push the trade deal over the Senate ratification line. (Republicans generally favor free trade treaties… what better place to bury misguided climate policy stench.)
More government-academic conflict of interest and blatant lying about it by paragons of the ivory tower intellectually and morally superior climate crusaders:
http://www.breitbart.com/big-government/2015/06/04/harvard-syracuse-researchers-caught-lying-to-boost-obama-climate-rules/
I keep thinking I’ve lost my capacity to be depressed by this stuff. But it’s not true.
I was much happier as an abysmally ignorant believer.
(aka pokerguy)
Who goes into a career in climate science? Treehuggers. No one else in their right mind finds any value in the field (as opposed to meteorology). Right there is the root cause of the extreme bias in the field. Sound snarky? Perhaps, but I wager it’s the truth!
Don Monfort | June 4, 2015 at 2:57 pm | Reply
More government-academic conflict of interest and blatant lying about it by paragons of the ivory tower intellectually and morally superior climate crusaders:
http://www.breitbart.com/big-government/2015/06/04/harvard-syracuse-researchers-caught-lying-to-boost-obama-climate-rules/
—————————————————————
This deserves a separate article here.
So NOAA should never publish any scientific work unless it runs counter to the current administration’s political position? That, of course, being the only way that they could prove they are not unduly influenced politically, if your comment is taken to its logical conclusion.
Just how in your mind does a researcher prove that conflict of interest has not influenced them?
I submit that you should leave this question alone and let the science stand or fall on its merit – something that stands ultimately having a little more substance to it when compared to the vaporous considerations of conflict of interest.
Never is a tall order. Maybe just once would help.
If just ONE TIME they did not adjust the data so it gets closer to what the models say it “should” be.
If as much effort was put into CORRECTING the models instead of ‘correcting’ the data, they may get somewhere.
So we should not indulge in vaporous considerations of conflict of interest in drug research. The science will stand or fall on its merit.
No, they should always make a clear statement of potential conflict of interest, stating very plainly: it is the policy of the administration we work under that climate change exists and is real, it is our job to prove it, and if we don’t, we will likely be fired or have our money cut off. That way people would know just how seriously to take them. Imagine the screaming if someone with funding from, say, the Evil Koch brothers had published this article. The attitude that government scientists are off limits for conflict of interest is silly.
In any activity, “Follow The Money” is always good advice, including for a politicized government agency such as NOAA.
Everything that comes out of the mouths of consensus climate scientists can be summed up in two sentences:
1. This proves that CAGW is happening, we need to limit fossil fuel usage now!
2. Although we know it is happening, we still have much data to torture… er, I mean analyze, so we need more grant money!
Nice try at diverting attention.
Not that it worked.
Frankly, I do NOT trust NOAA to give us the truth because as a government agency, they are now forced to advance government policy rather than be an honest broker. Bear in mind, the shrillness of climate alarmists is increasing as we approach a Paris climate treaty meeting and they realize it is likely to be another failure.
I find it peculiar that NOAA also wants to take on the IPCC which is what climate skeptics do.
Somehow, this doesn’t pass the smell test, but I could be wrong.
George Devries Klein, PhD, PG, FGSA
Let me add one thing. During 1993-1996, I was New Jersey State Sea Grant Director. NOAA’s Marine Fisheries Services (NMFS) was housed in the building next door. During their off-hours, NMFS scientists were involved with the activist movement promoting certain pro-environmental issues. When speaking publicly, they identified themselves as NMFS scientists (giving the impression that NMFS supported that particular environmental agenda item), which at the time was contrary to policy that allowed government employees to speak as citizens but not as representatives of their employer.
George Devries Klein, PhD, PG, FGSA
george, not NOAA wtf NOAA? Mosher says their pooh don’t stick.
This NOAA study is just the latest example of them trying to force data to fit their pre-conceived notion that CO2 is jacking the earth’s temperatures at record rates.
If it weren’t so pathetic, and dangerous, it would be laughable.
It is time for the mainstream media to start doing its job and to challenge these warmists, who are too stubborn and too proud to admit that the hypothesis and models upon which all their beliefs are based are bogus.
This is a very dark period for science.
Ah, adjustments. How I love adjustments! The 2014-2015 winter is record-cold one day, and the next day the science of adjustments finds that old temperature records were imprecise, and after an adjustment the last winter is no longer record cold.
To paraphrase Lenin(?), he who measures temperatures does not create a true permanent record. He who adjusts measurements does.
Stalin (But “Lenin” is close enough for government work, as is sometimes said.)
The GWPF comments contain a couple of misunderstandings. ARGO floats are used for subsurface temperature and salinity measurements, not surface. There is another set of buoys called “drifters” that are used for sea surface temperature measurements. The other misunderstanding appears to relate to the ship-bias corrections. The adjustments during the hiatus should represent the change from ship inlets to buoys (drifters not ARGO), not the change from buckets to ship inlets.
Cheers.
Good point. But aren’t engine water intakes at least a couple of meters below the surface also, to allow for wave action? And those readings are surely biased high by engine heat conduction.
According to a couple of WUWT comments on this matter, the thermometers are located at the front of the intakes, so engine heat isn’t a factor.
If so, then why have these ‘adjustments’ been fretted about for decades?
A ship usually has two sea chests, as the intakes are called, one on the vessel side to be used in shallow waters and one on the bottom, or well below the lowest load line (Plimsoll mark). The reason is to avoid ingestion of sediments when in shallow waters. It is not uncommon for the crew to forget to switch from one to the other. In open water this means sucking air into the system, and thus temperature increases.
The temperature measurement is normally made in a position well away from engine heat, as the temperature is used for engine control purposes. Although this requires accurate measurements, it does not mean “accurate” as in tenths of degrees. And ship instrumentation is wanting in the area of calibration, to put it mildly. I have been a classification society surveyor, specialising in machinery. The idea that automated temperature readings of inlet temps are anywhere near as accurate as required for the purpose of climatology is laughable.
Anders Valland:
Thank you for the excellent description of intake temp measurements.
The position of the intake varies with the ship types.
it’s not a simple problem
http://icoads.noaa.gov/advances/kent.pdf
http://www.ocean-sci.net/9/683/2013/os-9-683-2013.pdf
“While buckets generally sample the upper few tens of centimetres (note that Parker (1993) describes two weighted buckets designed to sample at 1–2 m depth), depths sampled by intakes can be highly variable. Engine intake inlets are usually close to keel depth to ensure submergence under all sea conditions. Actual sampling depth for intakes on container ships and bulk carriers can vary by several metres, depending on shipload (Beggs et al., 2012). Large ships can have dual seawater intakes, one close to keel depth and another a few metres higher (Ecology and Environment, 2007). The deep intake is used at sea and the upper when in shallow coastal waters or canals. Intake depths reported in the early literature are presented in Table 1. Brooks (1926) reports an intake depth of ∼ 7 m on a Canadian Pacific steamship in the 1920s. James and Shank (1964) estimate intake depths of ∼ 3–10 m for various US merchant, Navy and Coast Guard observing ships reporting in 1962 and 1963. They defined relations between intake depth and full-load draft for different hull types and categorised observing ships by hull type to estimate their intake depth. More contemporary intake depths averaged by type of VOS ship reporting this between 1995 and 2004 are presented in Table 5 of Kent et al. (2007). Container ships and tankers were found to have intakes at ∼ 7–9 m depth while intakes on bulk and livestock carriers were found to often exceed 10 m. Kent and Taylor (2006) report that the average intake depth for VOS reporting this in 1997 was 8.4 ± 4.1 m, with the deepest inlet being at 26 m.”
Steven Mosher, variation in depth is the issue with intakes, not the asinine “engine room temperature” crapola. Subsurface versus supra-surface temperature is always going to be an issue, which is why “Global Mean Surface Temperature” is a less than desirable metric. Interpolating a less than desirable metric so as to confuse subsurface and supra-surface even further is not very smart thermodynamics.
You should try to understand more of what Dr. Curry dishes out.
“The position of the intake varies with the ship types.
it’s not a simple problem”
Doesn’t bother the NOAA Kidz. They can use the ships that pass in the night to adjust the buoys. Yeah, that’s a plan.
captain.
Nice change of subjects.
personally I would do a global SAT&MAT product
As for SST?
Tons of ways to look at the data. No such thing as a best metric.
You guys are way too certain about what is important.
Plus most of you have never once looked at ICOADS. Maybe you read about it in the newspaper.
“Doesn’t bother the NOAA Kidz. They can use the ships that pass in the night to adjust the buoys. Yeah, that’s a plan.”
Whether you adjust the ships or the buoys doesn’t matter.
You have collocated measurement systems. you do the best you can
you report the results.
That’s the day job. We are not allowed to throw our hands up and run from uncertainty. Cause, like, there is life beyond the armchair.
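For what it’s worth, the collocated-adjustment procedure Mosher describes can be sketched with synthetic data. The 0.12 °C ship-buoy offset and the noise levels below are illustrative assumptions only, not the actual values or method used in Karl et al.:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic collocated SST pairs (deg C): ship intakes assumed warm-biased
# and noisier than buoys (made-up magnitudes for illustration).
true_sst = rng.uniform(10, 25, size=500)
ship = true_sst + 0.12 + rng.normal(0, 0.5, size=500)  # warm-biased intake
buoy = true_sst + rng.normal(0, 0.2, size=500)         # taken as reference

# Estimate the mean ship-buoy offset from the collocated pairs...
offset = np.mean(ship - buoy)

# ...then remove it from the ship record. (Equivalently, add it to the
# buoys: for the combined anomaly series only the relative shift matters.)
ship_adjusted = ship - offset

print(round(offset, 2))  # recovers roughly the assumed 0.12 offset
```

The point the sketch makes is narrow: given collocated pairs, the relative correction is estimable regardless of which platform you choose to shift.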
This is Mosher’s contribution to the discussion of his friend Ross’s Karl paper analysis on WUWT:
” Steven Mosher
June 4, 2015 at 5:31 pm
Ross
“So what changed in the SST records? Bear in mind that there are very few records of air temperatures over the oceans, especially prior to 1950. So to get long term climate estimates, scientists use SST (i.e. water temperature) data, which have been collected since the 1800s by ships.”
WRONG
Check the source material ICOADS
http://icoads.noaa.gov/advances/parker3.pdf
There are 8 million records for air temps prior to 1900.”
Very profound, Steven. Somebody responded:
“dmh
June 4, 2015 at 10:37 pm
There are 8 million records for air temps prior to 1900.
From the slides you linked to, emphasis mine:
Undigitized marine observations for 1851–1900, UK national archives
Given that they are not digitized, their existence is moot. They are of no value until they are, no matter their number.”
Steven never replied.
“Whether you adjust the ships or the buoys doesn’t matter.”
So you are going with that.
Steven Mosher, “You guys are way too certain about what is important.”
Thermodynamics kinda forces you to use reliable references. That is where the certainty comes from. Since “surface” air temperature ranges from over 50 C to less than -70 C, its average doesn’t meet the zeroth-law sniff test. Average “surface” ocean temperature ranges from -2 C to +34 C, making it a better thermodynamic reference; not perfect, but better. Mixing the two makes zero sense if you are into thermo.
As for ICOADS I have looked at a lot of the data and you still have the zeroth law issues. Dr. Curry’s specialty is atmospheric thermodynamics not data manipulation.
Don.
Go get ICOADS.
The typical record for a ship includes SST and MAT all the way back to the beginning of the record.
Ross is wrong.
or do some reading
http://www.metoffice.gov.uk/hadobs/hadisst/HadISST_paper.pdf
https://www.ncdc.noaa.gov/bams-state-of-the-climate/2009-time-series/mat
Well, let’s just stipulate that your friend Ross was WRONG in the statement of his that you quoted. Is he WRONG about everything in his analysis of Karl?
Mosher, lacking any experience with actual temperature sensing systems, ignores what Valland who has such experience has to say about engine intake temperature sensors.
1) resolution of the gauges isn’t in tenths of degrees
2) calibration of the gauges is lax
3) operators forgetting to switch between shallow and deep intakes
Never getting your hands dirty, Steverino, is a problem you can’t fix by increased bloviation.
Below are the running 12-year linear regression trends of temperature in the mean of the GISTEMP, NCDC, and HadCRUT4 anomalies.
Appears that 2001 through 2013 and 2002 through 2014 have negative trends.
http://climatewatcher.webs.com/Trends12.png
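A running-window trend of the sort plotted at that link can be computed in a few lines. The anomaly series below is synthetic (a steady trend that flattens after 2001), not the actual GISTEMP/NCDC/HadCRUT4 mean:

```python
import numpy as np

# Synthetic annual anomaly series, 1979-2014: warming at 0.015 C/yr
# that flattens after 2001 (made-up numbers, not a real index).
rng = np.random.default_rng(1)
years = np.arange(1979, 2015) + 0.5
anom = np.where(years < 2001, 0.015 * (years - 1979), 0.015 * 22)
anom = anom + rng.normal(0, 0.05, years.size)

window = 12  # years per regression window

# Running 12-year trends: OLS slope over each window, keyed by start year.
trends = {}
for i in range(years.size - window + 1):
    slope = np.polyfit(years[i:i + window], anom[i:i + window], 1)[0]
    trends[int(years[i])] = slope

# Windows lying entirely in the flat period show near-zero slopes.
print(round(trends[1980], 3), round(trends[2002], 3))
```

Note how sensitive the sign of a 12-year slope is to the noise level: with realistic interannual variability, short-window trends scatter widely around the underlying rate.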
Speaking of adjustments, perhaps someone can fill in any missing pieces to the reasoning I’ve heard described before for the following. Apparently the halt in warming that occurred between the 1950s and the 1970s was caused by sulphate aerosol pollution that initiated an albedo effect that has since mostly dissipated. Assuming there was never this pollution source, wouldn’t the hiatus have been even more pronounced than it is today? (ignoring the new NOAA data in the consideration)
Reblogged this on JunkScience.com and commented:
A new way to curve fit, make the data fit the model? I’ve maintained data required by various environmental permits and regulations for years. If I adjusted those data like these folks seem to do, I’d be in jail.
Ah, the latest attempt at killing the pause that is killing the cause. Will this improve the dim prospects for meaningful mitigation from the Paree junket? Let’s ask jimmy dee for his regurgitation of the delusional huffpo perspective.
In retrospect, it always gets warmer. Why didn’t they tell us this a few years ago?
I will always be skeptical of adjustments people say are necessary when they only discover the necessity of those adjustments after they get results they dislike. Especially since the issue of bucket corrections has been discussed for at least 10 years now. If these particular adjustments for them are needed, why did it take them so long to figure it out?
Brandon,
Great point! If successful predictions are the gold standard of science theory, post-hoc adjustments that support theory are the brass standard. While they play a useful role, they’re unlikely to change the balance of opinion among laypeople concerned about the public policy implications of these trends.
I predict there will continue to be more “post-hoc adjustments”. We’ll see how successful that prediction is.
Post hoc. A good point, Brandon.
Well…
Climate science has an integrity problem, it actually has a number of problems.
1. People who are open advocates for a view on the “real look” of the data are allowed to work in the field. They should be fired. Science studies funded by the government should be as free from bias and objective as possible.
2. One team (the team in charge of the climate centers) is allowed to move the goalposts and change the heights of the goalposts at will. They should be fired if this continues. I am not interested in the rationalizations for why the goalposts should be moved 20 yards further away when the other team is kicking.
3. The earth is a heat engine with two sources, one sink, and a working fluid. Climateers measure one sink temperature, the temperature of the working fluid above the other sink, and guess at temperatures in the polar regions to produce a synthetic index. The temperature of the working fluid above a sink will not be an accurate reflection of sink temperature. I’m not sure the synthetic index has a real meaning. There are all kinds of adjustments that will make it do something different that can be justified for {insert rationalization here}.
We have created an artificial index for guiding public policy. There should be a fixed set of rules for creation of the synthetic index. The index should not be a moving target. People who believe the index needs constant adjustment should be encouraged to find employment in another field.
Well, CO2 would seem likely to have an effect on radiative balance.
But from the get-go with GCMs there has been a kind of group denial: the assumption that we could somehow ignore the non-linearities in the climate system and still make predictions. The recent more-than-decade-long cooling trends, and the fact that the hot spot has not occurred, are reminders that the dynamics of the climate system have internal components of change on decadal and centennial scales.
I trotted out this quote, but it bears repeating:
“Thus, the whole climate system must be regarded as continuously evolving with parts of the system leading and others lagging in time. The highly nonlinear interactions between the subsystems tend to occur on many time and space scales. Therefore, the subsystems of the climate system are not always in equilibrium with each other, and not even in internal equilibrium.”
‘The Physics of Climate’ ( Peixoto and Oort )
Pingback: Has NOAA ‘busted’ the pause in global warming? | Enjeux énergies et environnement
Whilst I have the greatest regards for John Kennedy of the Hadley Centre we simply must not continue with the notion that we have any sort of handle on global SST’s back to 1850. We have some degree of accuracy from the 1970’s and no doubt very well travelled sea routes could lay claim to having some merit in their readings back another few decades, but global is not possible. (sorry John-I know you will pop up here)
As for the first article, is not Thomas C. Peterson -one of the co authors, also a co author, with William M. Connolley of the paper purporting to show there was no concern over global cooling in the 1960’s and the first part of the 1970’s?
tonyb
climatereason,
“purporting to show there was no concern over global cooling in the 1960’s and the first part of the 1970’s?”
That’s an exaggeration of what the paper said, which was “An enduring popular myth suggests that in the 1970s the climate science community was predicting “global cooling” and an “imminent” ice age…”
http://journals.ametsoc.org/doi/abs/10.1175/2008BAMS2370.1
The record clearly shows that analysis to be correct. Your statement is, however, also correct. There was concern about the possibility of future cooling — although this was not a “consensus” forecast.
For example, the 1970s cooling played a large role in the creation of NOAA’s climate analysis center. See links to this an other information about the 1970s global cooling scare here:
http://fabiusmaximus.com/2009/12/07/cooling-4/
EOTFMW
I have had this discussion with William amongst others, which is why I chose my words carefully. The global cooling scare had its genesis in the 1960’s when a number of prominent commentators, amongst them Budyko and Lamb, expressed concerns. This was taken up very widely in the scientific literature of the day.
By the time the early ’70s came round, Lamb and others had recognised their concerns were unfounded and, like good scientists, changed their minds after examining the available evidence.
The paper I cited therefore misses the point, as looking at 1970s papers is a largely fruitless exercise: by then the scare was waning. In those pre-internet days it took some time for it to be recognised that the situation had changed.
tonyb
Tony,
Thanks for the additional history. I didn’t know this went back to the 1960s.
The famous letter to President Nixon about cooling described in NOAA’s official history was December 1972. The reduction of aerosols following the Clean Air Act of 1963 and its ever-tightening amendments had their effect, so that a 1974 textbook I cited discussed cooling as a possibility, but with warming more likely. And the rest is history…
“An enduring popular myth suggests that in the 1970s the climate science community was predicting “global cooling” and an “imminent” ice age…”
This is a classic straw man which does not address the actual timeline, but moves forward a decade, after the debate had flip-flopped. The following quotes provide a few time stamps to illustrate the actual timing. So yes, in the 1970s the climatologists, press and politicians were moving on to warming… after they had spent the 1960s warning about cooling:
After a week of discussions on the causes of climate change, an assembly of
specialists from several continents seems to have reached unanimous agreement on only one point: it is getting colder.
-New York Times – January 30, 1961
Like an outrigger canoe riding before a huge comber, the earth with its inhabitants is caught on the downslope of an immense climatic wave that is plunging us toward another Ice Age.
-Los Angeles Times December 23, 1962
The battle to feed humanity is over. In the 1970s, the world will undergo famines. Hundreds of millions of people are going to starve to death in spite of any crash programs embarked upon now.
– Paul Ehrlich – The Population Bomb (1968)
It is now pretty clearly agreed that the CO2 content [in the atmosphere] will rise 25% by 2000. This could increase the average temperature near the earth’s surface by 7 degrees Fahrenheit. This in turn could raise the level of the sea by 10 feet. Goodbye New York. Goodbye Washington, for that matter.
-Presidential adviser Daniel Moynihan, 1969 (later Sen. [D] from New York 1976-2000)
sci guy,
One thing we should have learned from the climate wars: journalists are not reliable guides to the state of a science. Their job is to sell stories, and science is among the most difficult to accurately dress up as lurid headlines.
On the other hand, we have reliable guides in the science literature. It’s just much more difficult to assess.
EOTFMWS wrote: “journalists are not reliable guides to the state of a science. Their job is to sell stories”
Most journalists act as intentional activists first and profit centers only secondarily. For instance, the mission statement of the NYT:
“Mission: The New York Times’s core purpose is to enhance society by creating, collecting and distributing high-quality news, information and entertainment.”
http://www.nytco.com/who-we-are/culture/standards-and-ethics/
So clearly their self-stated mission would be to “enhance society”, not to generate profits or growth for its shareholders by selling stories in the most profitable manner. Especially today, media outlets such as the NYT very clearly attempt to further the goals of certain activist groups, NGOs, and politicians by publishing a stream of headlines with little regard to the actual scientific underpinnings, as long as a meme is furthered.
For instance, why was the Karl paper pre-released to certain journalists allowing for a guarantee of favorable headlines on release day? Was that to spread the science among scientists, or to allow the media to “enhance society” by promoting a meme?
It is very productive to follow the media outlets, for that is where one can observe how activists use current events as a tool to “enhance society”. If scientific thought in a field is being misused in order to further such a mission, then it is up to the “three percent” of dissenting members to stand up and let their reservations be known. And such efforts will eventually find their way into the media output, even if it takes a decade or so for them to appear. If the “three percent” choose to remain silent, then why should a future historian choose to conclude that science did not believe what was being printed daily in the media outlets “of record”?
Sci guy,
“So clearly their self-stated mission would be to “enhance society”, not to generate profits or growth for its shareholders by selling stories”
After decades of reading corporate mission statements, especially those of Wall Street firms, I no longer share your touching faith that they are more than public relations, unconnected to what the organization actually does.
As for the Karl paper, do you have evidence of its selective early distribution — or was it given to all major media who wanted it?
After 3 decades of working from within (the fringes) of large corporations I am acutely aware that mission statements betray the peculiar world-views of upper management, even while trying to mirror their desired customer base. I have seen these statements come and go with CEOs too often to think otherwise.
That the NYT would consider itself an “enhancer” rather than a “recorder” is revealing if we are to consider it a media outlet “of record”.
As to a pre-release copy list, there is of course no easy way to know who was given the details earlier or later… such is the nature of “leaked” information. There is plenty of FOIA evidence that Karl allowed inside access to Andrew Revkin and other media “friendlies” he wished to cultivate, including email copy lists which read like a who’s who of Climategate/Hockstickville.
Sci guy,
I believe your standards are commendable, but unlikely to be found except in Heaven. Down here working the press is just good sense.
==> “purporting to show there was no concern over global cooling in the 1960’s and the first part of the 1970’s?”
Is that what is was purporting to show, tonyb?
Numbnuts chimes in.
As for the first article, is not Thomas C. Peterson -one of the co authors, also a co author, with William M. Connolley of the paper purporting to show there was no concern over global cooling in the 1960’s and the first part of the 1970’s?
http://en.wikipedia.org/wiki/William_Connolley
Connolley is a Green Party advocate famous for defacing Wikipedia climate pages, who was banned by Wikipedia (a very liberal organization) for his activities. His Wikipedia page is pretty mild, but if you look at some of his edits it is pretty clear he is far from unbiased.
A less mild view of his activities
http://newsbusters.org/blogs/lachlan-markay/2010/10/21/wikipedia-bans-radical-global-warming-propagandist-editing-all-pages
Any information product associated with Mr. Connolley should be considered a meritless piece of advocate propaganda.
“(sorry John-I know you will pop up here)”
Pop!
Hi Tony,
By now, you can probably fill in my half of the argument. But anyway…
A lot of work has been done trying to understand uncertainties in the historical record. A flavour of this can be had by reading the papers written about the current generation of in situ SST data sets. E.g.
COBE-SST-2 (Hirahara et al. 2014)
http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-12-00837.1
ERSSTv4 (Huang et al. 2015 and Liu et al. 2015):
http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-14-00006.1
http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-14-00007.1
HadSST3 (Kennedy et al. 2011a and Kennedy et al. 2011b):
http://onlinelibrary.wiley.com/wol1/doi/10.1029/2010JD015218/full
http://onlinelibrary.wiley.com/wol1/doi/10.1029/2010JD015220/full
copies here: http://www.metoffice.gov.uk/hadobs/hadsst3/
As well as various papers looking at the uncertainties specifically e.g.
Kent et al. Effects of instrumentation changes on sea surface temperature measured in situ
http://onlinelibrary.wiley.com/doi/10.1002/wcc.55/abstract
Are observations perfectly reliable? No, never. Can we make reasonable estimates of the uncertainties? Yes.
Best regards,
John
Hi John, thanks much for these remarks
John
Your antenna needs tuning. I made the original remark several days ago and repeated it again today following Beth’s link to a real-life person who took actual real-life measurements. Have you been on holiday or something? :)
Are the observations perfectly reliable? No.
Are there enough accurate measurements pre-1960 or so to give us a very good idea of what is happening globally? No.
Can we rely on the SST’s in a few well travelled areas for short periods? Probably yes.
Should we heed Hubert Lamb’s maxim (applied to land measurements) that we can understand the tendency but not the precision? Most definitely.
My regards to Boggis. I hear that even large rats are in awe of him.
tonyb
Here already is the reaction of Judith Curry and of those who follow her on her blog.
Your response to the problem? Please.
Very interesting, but I suspect that everyone will end up being validated by this news. For some it will be because the numbers are now more supportive of expected warming trends. For others it will be because their fears of official manipulation have been confirmed. The subset of people whose overall perspective is impacted by this news will be vanishingly small. I fear the divide will grow rather than shrink.
The problem with biased science is that it is polarizing rather than informing the public.
Distributing propaganda instead of empirically obtained knowledge seems to have that effect.
Time has come to eliminate government science funding for politically hot topics until the obvious bias in studies has been reduced.
I wonder if there is a correlation between the various reactions to this brand new study, and the attitudes of the respondents to previous AGW data and studies. My cursory spot check indicated a correlation of 1.0. Shouldn’t it be more like 0, since this is a completely new study?
The polar caps will melt, and the oceans will swallow cities, and the biosphere will collapse, and AGW will cost us trillions of dollars, and yet somehow the physical evidence is so infinitesimal and so minuscule that whether or not a 16-year pause in rising atmospheric temperature has occurred cannot be determined conclusively to the satisfaction of all.
Anybody see anything incongruous about this?
If they get into a political tussle over thermodynamics my old steam tables will be worth their weight in gold.
Ross McKitrick weighs in as well.
thx
I wonder if anyone has read Ross’s concise and insightful analysis. Mosher? Little jimmy dee? The SkS Kid who came here bearing gratuitous insult?
Ross is dead wrong about MAT. He has never looked at ICOADS.
pity he should lose his focus
I see that over at WUWT you made what seems to be a nitpicking non-substantive attack on Ross’s analysis that didn’t pan out. What we are talking about here are the Karl adjustments. Do you have anything to say about your friend Ross’s detailed analysis of the Karl adjustments, or are you going to focus on one mistake you allege he made?
This is amazingly tone deaf. It does not look good, and it is at variance with other SST records and with both satellite records. A post hoc change to get the desired headline. But the world is increasingly watching. Would not be surprised if this stunt produces some significant media blowback.
Lots of ‘scientific ‘ acceptance of the pause, with possible explanations like Trenberth’s folly. And now there is no pause? So much for the settled science meme.
The alarmists want us to know that while they haven’t been able to figure out what has happened over the last few decades, they are absolutely certain what will happen in the next century. I suspect that their admission that their understanding of the recent past was flawed might not build a lot of confidence in their claims about the future.
It’s almost as if they are intentionally destroying their credibility.
Aaron,
I’ve often thought that some of the obvious over the top stuff from Warmers were cries for help. But I might be wrong.
Andrew
Not intentional, IMO. This is what happens when panic sets in because Mother Nature did not cooperate with their predictions and Paris looms. And when there is conflict of interest: Obama decreed the science was settled, so government employees have to settle it. And when AAAS plays the MSM PR game as if the internet and blogs with sentient denizens did not exist.
This has a good chance of backfiring.
From what I have seen the science reporters in the media who are covering “man-made climate change” are useless hacks… ready to spread any alarming story put out by the warmists, without any attempt to balance their stories. They know that alarming headlines sell newspapers. They are as irresponsible as the “scientists” who are putting out this utter nonsense (for their own self-benefit).
Avoiding ad hoc buoy and bucket adjustments.
http://www.drroyspencer.com/latest-global-temperatures/
Nic Lewis http://climateaudit.org/2015/06/02/implications-of-recent-multimodel-attribution-studies-for-climate-sensitivity/
Dare I wonder if the correlation of reactions to Lewis’ paper will have -1.0 correlation with the reactions to the paper here?
The pause is deeply embedded in another set of the NOAA’s data.
NASA:
“Variations in Earth’s magnetic field and atmospheric circulation can affect the deposition of radioisotopes far more than actual solar activity. ”
http://science.nasa.gov/science-news/science-at-nasa/2013/08jan_sunclimate/
What could Dr. Tony Philips of NASA be implying?
Is it that the climate change is related to the variations in Earth’s magnetic field?
Let’ have a look at the two variables and a possible correlation.
Relevant Earth’s Magnetic field data from NOAA
http://www.ngdc.noaa.gov/geomag-web/#igrfwmm
The latest global temperature data from:
http://www.cru.uea.ac.uk/cru/data/temperature/CRUTEM4-gl.dat
When plotted together, on visual inspection (‘wiggle match’) there appears to be some relationship.
This is also confirmed by linear regression, giving a coefficient of determination R² = 0.81. The result is presented here.
http://www.vukcevic.talktalk.net/MTC.htm
There is a distinct 1940s ‘bump’ and, more importantly, the PAUSE is there.
Eliminating the PAUSE from the geomagnetic data would not be matter of a minor adjustment.
Even bearing in mind the well-known maxim ‘correlation is not necessarily causation’, the above is unlikely to be just a coincidence.
Possible mechanisms come to mind:
a) radioisotope nucleation and cloud albedo
b) secular change in the magnetic field is simply a proxy for ocean-floor tectonics.
Consider it an alternative perspective on the natural variability.
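For readers who want to check such a claim themselves, the R² of a ‘wiggle match’ is just the coefficient of determination of a least-squares fit. A sketch with synthetic series sharing a common low-frequency component (not the actual geomagnetic or CRUTEM4 data linked above):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic annual series sharing a low-frequency component, standing
# in for a geomagnetic proxy and a temperature index (illustrative only).
t = np.arange(1900, 2015)
common = np.sin((t - 1900) / 18.0)
proxy = common + rng.normal(0, 0.3, t.size)
temps = 0.5 * common + rng.normal(0, 0.2, t.size)

# Least-squares fit of temperature on the proxy, then R^2 of that fit.
slope, intercept = np.polyfit(proxy, temps, 1)
fitted = slope * proxy + intercept
ss_res = np.sum((temps - fitted) ** 2)
ss_tot = np.sum((temps - temps.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(round(r2, 2))
```

A high R² here reflects nothing more than the shared component built into both series, which is exactly why wiggle-matching alone cannot settle the causation question.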
Correct me if I’m wrong, but a prime reason for using the surface-station + in situ SST network is to compute, in anomaly space, a very long time series of temperatures (e.g., 1880–present). We can show a gradual warming trend over the past century that occurs in steps. Again, this is in “anomaly space”.
Since 1979, we have global satellite coverage from a multitude of sensors that measure, directly or indirectly, many aspects of the climate system. Numerical weather prediction techniques like 4D-Var used by the best weather models (ECMWF, UKMET) produce an analysis of the atmosphere-ocean system every day, incorporating so much data from so many sources that imo it’s one of humanity’s remarkable achievements. For the past several decades, I contend that surface-station datasets are insufficient and (perhaps) not well suited to describe the exact global temperature. The methods used by NOAA/NASA/BEST are akin to the quality-control procedures required by every NWP data assimilation scheme to initialize the analysis prior to running a weather forecast.
If I wanted to create a time series of global temperature since the 1990s, then I would use NWP / variational techniques.
I agree.
Good to find you here. Please visit more often.
So if NWP is the gold standard and I match NWP…
the conclusion would be? What would logic say?
Of course if I used NWP and showed the hiatus ending then I suspect
that people would argue that NWP is not data but a model. and they would demand only use observations..
But I probably agree using NWP will get you a physically defensible answer.. But why limit it to after 1990?
I kinda liked MERRA
I dunno.. how close to MERRA did we get?
http://static.berkeleyearth.org/posters/agu-2013-poster-1.pdf
Nit: The satellite SST record only goes back to 1981.
Still, for 1981-present, one can indeed do reanalyses. The CFSv2 reanalysis also used modern (variational) analysis techniques, and unlike purely weather models, included an ocean in its system.
But this will have artefacts too. When the number of satellites available changes, there’s a discontinuity. When the quality of the satellite data changes, there’s another discontinuity. When you add a new type of data (AMSR-E for SST, for instance), there’s a discontinuity. And when you lose that source, such as with AMSR-E dying in 2011, there’s another.
For weather-oriented systems, this is not so much a concern. The emphasis there is on getting the best possible analysis from which to make today’s run. If there’s a good new data source or algorithm, use it.
But for trend analysis, every one of those discontinuities is a problem capable of creating, destroying, misrepresenting the trend you’re trying to study.
In other words, data are messy. Anything based on data is inherently messy. And that includes data assimilation systems and their results.
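The discontinuity point is easy to illustrate with a toy example (all numbers hypothetical): a trendless monthly series that picks up a +0.3 step halfway through, as when one data source silently replaces another, acquires a spurious warming trend of roughly 0.2 per decade.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 240                      # 20 years of monthly anomalies
t = np.arange(n)

# Trendless noise plus a +0.3 step at the midpoint, mimicking an
# undocumented change of data source (hypothetical numbers).
series = rng.normal(0.0, 0.1, n)
series[n // 2:] += 0.3

# Least-squares slope, converted from per-month to per-decade (120 months).
slope_per_decade = np.polyfit(t, series, 1)[0] * 120
```

Neither half of the series has any trend at all, yet the fitted slope comes out strongly positive — which is exactly why undetected discontinuities matter for trend analysis even when they are harmless for day-to-day forecasting.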
If you look at the input data sets for weather models you’d be shocked.
If your life is weather forecasting, of course you think the models are the gold standard.
Basically we have various approaches to estimating things.
It’s fun to compare them and try to improve them.
A good analyst compares them all and makes balanced statements.
Advocates… well, they have a different approach.
@mosher:
Actually, I wouldn’t be at all shocked by looking at the input data for weather (and ocean, ice, land, …) models. Part of my day job involves doing just that. That’s why I say data are messy. I also construct some data which are then used by such models. Even stronger reason I say data are messy.
Forecasters are generally quite skeptical about models, and model guidance, preferring data sources, and, typically, data sources untouched by models. Radar and satellite imagery, for instance. As the forecast lead time increases, they do rely increasingly on model guidance.
Anyhow, one of the data targets I work on is for SST, so I’ll probably write up something over at my blog this weekend. More on the general problem of trying to determine SST than specific details about this paper.
Call in Mosher to explain to us why these important adjustments don’t change the conclusion.
Andrew
Good one.
Pingback: Now they try to deny The Pause (in global warming). Laughable. [Ahora intentan negar La Pausa] | PlazaMoyua.com
If the temperature was about 70 deg. F between the ice ages, then shouldn’t we expect the current temperature to continue to increase until we get to about 70 deg. F since we are still in an ice age? What difference does it make what causes a rising temperature since we always have a rising temperature following an ice age and we are still in an ice age?
Why not manufacture a homogenized ocean set by synthesizing a decontamination of the boat data to conform with the more robust – and scientifically-intended – buoy data? Why did they choose to make the buoy data “more like the boat data” and not the other way around? Am I missing a clear reason, or is the obvious answer staging in my mind the real reason?
Adjusting buoy data to match “ship of opportunity” data is totally bizarre. We spend billions of dollars launching satellites and deploying state of the art ocean buoys and then “correct” their data to match ship obs, really?
The more details of GST estimation I am exposed to, the more they confirm my existing bias that the entire endeavor is silly and subjective.
What will alarmist bureaucracy stoop to next?
Dunno. Cannot think that irrationally.
Extraordinary claims require extraordinary er …
Beth in Aus?, highest regards
Thx, ristvan ‘n likewise.
…Adjustments?
. . . gullibility?
Ah, it seems as though things have become utterly shameless as the Paris meeting nears. This was bound to happen.
I think that a novel aspect of this is some of the new language being used in the AP story. Seth Borenstein (who writes several biased articles every week) uses the term “non-scientist” three times when referring to people skeptical of CAGW. I have to wonder if we’ll see this offensive tactic again. The piece also tries to re-write the IPCC’s handling of the hiatus–as though the IPCC was just humoring the “non-scientists”. I suppose we’ll see that theme again, too.
In any case, the role of activist journalists like Mr. Borenstein interests me and I was wondering if there is anything worth reading about their involvement in the climate change debate. Thanks in advance for any recommendations. :-)
“In any case, the role of activist journalists like Mr. Borenstein interests me and I was wondering if there is anything worth reading about their involvement in the climate change debate. Thanks in advance for any recommendations.”
Well, following their deaths by hanging—as commendably advocated* by Luboš Motl for their crimes against humanity—their thus accelerated obituaries would probably be a good start.
You’re welcome.
* Minor natural extrapolation by me to Schmidt, Mann et al‘s coterie of media henchmen.
The scientific credibility of government global warming scientists is on hiatus.
What this actually busts is the surface statistical models. Booya!
“The extension of high-latitude arctic land data over the Arctic Ocean is also questionable. Much of the Arctic Ocean is ice-covered even in high summer, so that the surface temperature must remain near freezing. Extending land data out into the ocean will obviously induce substantially exaggerated temperatures.”
They did not do this – tell Pat to read the paper again. They provide an estimate of what that would mean but do not include it in their surface temperature product.
“I am also unconvinced by NOAA’s gap filling in the Arctic, and in my opinion this introduces substantial error into their analysis. I addressed the issue of gap filling in the Arctic in this recent publication: Curry JA, 2014: Climate science: Uncertain temperature trends. Nature Geoscience, 7, 83-84. Relevant text:
Gap filling in the Arctic is complicated by the presence of land, open water and temporally varying sea ice extent, because each surface type has a distinctly different amplitude and phasing of the annual cycle of surface temperature. Notably, the surface temperature of sea ice remains flat during the sea ice melt period roughly between June and September, whereas land surface warming peaks around July 1. Hence using land temperatures to infer ocean or sea ice temperatures can incur significant biases.”
Two things. First, based on your statement, it sounds like you’re implying you published a peer-reviewed paper on the subject of infilling in the Arctic, which is certainly not the case.
Secondly, you have never shown any original analysis to counter the temperature analyses performed by Cowtan and Way and Berkeley with respect to the Arctic. In the time since you made your ‘comments’ there have been a number of papers (see below) which have provided validation for our approach from atmospheric reanalysis datasets, isolated weather stations and satellite datasets. We have provided numerous follow-up investigations which once again support the methodology (see below), including assessments against the Atmospheric Infrared Sounding Unit, for instance. You can’t continue to keep hand-waving on the subject. If you want to say it can’t be done then please show us why – and provide your cross-validation statistics on your proposed improvement ;) If not, show us why it’s better to pretend the Arctic is warming at the global average rate (FYI it’s not).
Comiso, J. C., & Hall, D. K. (2014). Climate trends in the Arctic as observed from space. Wiley Interdisciplinary Reviews: Climate Change, 5(3), 389-409.
Dodd, E. M., Merchant, C. J., Rayner, N. A., & Morice, C. P. (2014). An Investigation into the Impact of using Various Techniques to Estimate Arctic Surface Air Temperature Anomalies. Journal of Climate, (2014).
Simmons, A. J., & Poli, P. (2014). Arctic warming in ERA‐Interim and other analyses. Quarterly Journal of the Royal Meteorological Society.
Updates:
http://www-users.york.ac.uk/~kdc3/papers/coverage2013/updates.html
Robert
We can clearly see other examples of arctic amplification, for example the 1920 to 1940 period, the 1818 to 1860 era, the first few decades of the 16th century and of course the extended warming during the Viking period.
As enhanced CO2 couldn’t have been responsible, what do you think could have been their causes?
Tonyb
Tony, it’s the right question, but you’ll get nothing in response from climate science because all the focus has gone into modern-era fossil fuel emissions.
In complex systems, cause is likewise complex
isn’t it … aren’t they? Cascading responses to
things complex in themselves?
This is a pretty astonishing comment:
“First it sounds based on your statement like you’re implying you published a peer-reviewed paper on the subject of infilling in the Arctic which is certainly not the case.”
I was invited by Nature Geoscience to write a piece on the arctic infilling issue, based on my extensive expertise on the topic. You then imply I am trying to mislead people into thinking this is a peer-reviewed paper? Beyond bizarre.
I regard your method for interpolating into the Arctic to be deeply flawed. I have other things that I regard to be more important and/or interesting that I am working on, so at present I don’t intend to pursue this issue beyond critiquing on the blog any publications that I regard to be flawed.
I agree with Ryan Maue that for the recent period (post-1990s), we should be looking at the reanalyses, which provide the basis for a dynamically and thermodynamically sensible ‘interpolation’ into the data-sparse region of the Arctic.
“I was invited by Nature Geoscience to write a piece on the arctic infilling issue, based on my extensive expertise on the topic.”
You have expertise, certainly, but I would question whether it is entirely appropriate in this case, because it’s clear based on your criticisms that you haven’t investigated this subject matter deeply. I presented to you three papers which provide strong indications that our approach is appropriate. There will be an additional paper which shows it is appropriate in the near future as well.
“I regard your method for interpolating into the Arctic to be deeply flawed. I have other things that I regard to be more important and/or interesting that I am working on, so at present I don’t intend to pursue this issue beyond critiquing on the blog any publications that I regard to be flawed.”
And you have not provided any reasonable commentary as to why it is flawed.
“I agree with Ryan Maue that for the recent period (post-1990s), we should be looking at the reanalyses, which provide the basis for a dynamically and thermodynamically sensible ‘interpolation’ into the data-sparse region of the Arctic.”
Well luckily the reanalysis data support our results – we looked into this in rather heavy detail in our analysis and the updates afterwards. In many cases they show more warming than ours rather than less. Have a look at the study I cited above with ERA-Interim – the best performing reanalysis in the Arctic, even against out-of-sample observations.
We also have a reanalysis-guided infilling (MERRA) which gives almost identical results.
“Gap filling in the Arctic Ocean based on land temperatures is unphysical. Here are some things that you would have to include in a physically reasonable analysis: Daily sea ice concentration analysis
Satellite or buoy measurements of open water SST in the Arctic Ocean (e.g. polar pathfinder SST)”
Actually if you compare to the Arctic buoy station data (Rigor et al., 2002) you’ll see that it is ‘unphysical’ to use SST anomalies to extrapolate and that it is optimal to use land anomalies. This is in the Cowtan and Way (2014) paper where we validate against those stations over a long period. Please read the paper and the cross-validation measures used. Reading the paper you will see we do in fact use the sea ice data in the interpolation so that land temperatures are not extrapolated over areas which are not covered by sea ice. Once again it is in the paper and subsequent update documentation…
What do atmospheric reanalysis, drifting buoy stations, land surface temperatures measured from satellites and kriging guided by reanalysis/satellites/land stations have in common?
They all give similar results.
Even if your analysis gives similar results over a certain period, your method is unphysical and I would not expect comparable results going forward. Getting ‘close to’ the ‘right’ answer for a short period of time, for the wrong reasons, frankly isn’t useful.
You use climatological sea ice data, not the daily (or even monthly) varying sea ice. During the summer time, sea ice surface temperatures have nothing to do with land temperatures.
I can probably figure out some algorithm that ‘works’ for awhile based on the price of tea in China.
The inexperienced newbie, not quite PhD, will often make profound discoveries with short-term curve fitting.
Judith
“Even if your analysis gives similar results over a certain period, your method is unphysical and I would not expect comparable results going forward. Getting ‘close to’ the ‘right’ answer for a short period of time, for the wrong reasons, frankly isn’t useful.”
##################################
the problem is estimating areas where there are no data.
All methods of interpolation are “unphysical”, except perhaps reanalysis, and even there there are aspects of estimation where physical processes are represented “unphysically”.
Since the problem is “estimate”, you cannot reasonably challenge a method that passes validation without
A) admitting you have nothing better, or
B) doing a better job yourself.
Leaving the area unfilled is ALSO an estimate. And that estimate performs worse in validation.
############################
You use climatological sea ice data, not the daily (or even monthly) varying sea ice. During the summer time, sea ice surface temperatures have nothing to do with land temperatures.
1. Of course that will introduce errors. All methods have errors.
2. I have yet to see any proof that land temperatures have Nothing to do with the temperature of air over the arctic. Nothing is a pretty strong statement. To show this, you would need to compare the two. Problem: one of them is missing.
###############################################
I can probably figure out some algorithm that ‘works’ for awhile based on the price of tea in China.
I doubt whether the algorithm would validate out of sample.
“I doubt whether the algorithm would validate out of sample.”
Have the warmistas heard about this Mosher principle?
Kriging temperatures (eg over the Arctic)
An unexpected chance to gnaw on this bone, having done a little homework. Kriging seems trickier than it may look! There are two forms.
There is statistical estimation. This requires multivariate normal data with an estimated covariance kernel. BEST make a limited attempt at justifying their methodology in these terms. It is most unlikely this statistical approach could be justified for temperature data – it would be remarkable if temperatures are anywhere near normal (gaussian), let alone with the same covariance kernel over the whole planet.
The second way of using kriging is as purely a functional interpolation in reproducing kernel Hilbert space. No statistical estimation. This is probably the only way it can be applied to temperatures. Not for the mathematically faint hearted I would suggest and needs careful documentation as regards the effective interpolating functions. Not provided for temperature kriging it seems.
Not necessarily a flawed interpolation technique, but much more detailed work needed to justify choices of interpolation functions and how these affect the results.
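To make the second (RKHS) reading concrete, here is a minimal sketch of kernel interpolation in one dimension — the same linear algebra as simple kriging. The Gaussian kernel, station positions, anomaly values and length scale are all my own illustrative choices, not anything from BEST or Cowtan and Way:

```python
import numpy as np

def gauss_kernel(a, b, length=2.0):
    """Gaussian (squared-exponential) kernel matrix between 1-D coordinates."""
    d = a[:, None] - b[None, :]
    return np.exp(-(d / length) ** 2)

# Hypothetical station positions and anomaly values (deg C).
x_obs = np.array([0.0, 1.0, 3.0, 6.0, 10.0])
z_obs = np.array([0.2, 0.3, 0.1, -0.1, 0.4])

# RKHS interpolant: solve K w = z, then predict with k(x_new, x_obs) @ w.
K = gauss_kernel(x_obs, x_obs)
w = np.linalg.solve(K + 1e-12 * np.eye(len(x_obs)), z_obs)

x_new = np.array([2.0, 8.0])
z_new = gauss_kernel(x_new, x_obs) @ w  # infilled anomalies
```

At the observation points the interpolant reproduces the data exactly; as the comment above notes, all of the modelling burden sits in the choice of kernel, which is what would need justifying for temperature fields.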
Regarding Kriging, I read this when the Sherwood paper on RAOB data came out:
“Kriging is based on regionalized variable theory, which assumes that the spatial variation in the data being modeled is homogeneous across the surface. That is, the same pattern of variation can be observed at all locations on the surface.“
Sounds as if sea ice/open ocean versus land makes Kriging a very dubious choice for the Arctic.
I also suspect it is not valid for RAOB data.
> I can probably figure out some algorithm that ‘works’ for awhile based on the price of tea in China.
Go for it.
While you’re at it, an explanation as to why the price of tea in China should be fed to the algorithm might be nice.
Finding two other indicators that give similar results, like Robert Way did, would be even nicer.
“Even if your analysis gives similar results over a certain period, your method is unphysical and I would not expect comparable results going forward. Getting ‘close to’ the ‘right’ answer for a short period of time, for the wrong reasons, frankly isn’t useful.”
I’m really not sure where the unphysical arguments come in. We’re not talking about absolute temperatures, we’re talking about anomalies, and there are far fewer physical properties which can introduce bias when you interpolate anomalies. Even in absolute terms, you can summarize about 95% of the geographic variation in air temperatures across land with a small number of covariates (x, y, z) in, for instance, a thin plate spline algorithm. The idea that anomalies (which are much more autocorrelated at larger distances) can’t be used for interpolation in this manner is somewhat silly. What is unphysical about the interpolation when it’s guided by MERRA-based anomalies or the UAH-based anomalies? I can tell you that the UAH-based ones probably miss some of the near-surface warming which is characteristic of Arctic Amplification.
“You use climatological sea ice data, not the daily (or even monthly) varying sea ice. During the summer time, sea ice surface temperatures have nothing to do with land temperatures.”
Using a climatology inserts a bias but it is smaller than the bias introduced using a variable (monthly) ice mask. These tests have been done. Expect to hear more on this in the future.
“I can probably figure out some algorithm that ‘works’ for awhile based on the price of tea in China.”
As Mosher said it wouldn’t pass cross-validation. If you have an alternative idea then please go ahead and present it – but there’s a certain sense of irony in watching you continually say that you’ve countered our approach when you’ve provided zero evidence to support your views.
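The thin plate spline interpolation mentioned above can be sketched with SciPy’s `RBFInterpolator`; the coordinates and anomaly values here are made up for illustration, not any real station set:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Made-up station coordinates (km) and temperature anomalies (deg C).
pts = np.array([[0.0, 0.0], [50.0, 10.0], [20.0, 80.0],
                [90.0, 60.0], [40.0, 40.0]])
anom = np.array([0.3, 0.1, -0.2, 0.4, 0.0])

# Thin plate spline interpolant of the anomalies (exact at the stations,
# since smoothing defaults to zero).
tps = RBFInterpolator(pts, anom, kernel='thin_plate_spline')

# Infill anomalies at unobserved grid points.
grid = np.array([[10.0, 10.0], [70.0, 70.0]])
filled = tps(grid)
```

In a real analysis the interpolant would be cross-validated against held-out stations, which is the kind of test the comment above is appealing to.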
No, all I have to do is reject your approach and not use your data set; there are other surface temperature analyses that I regard as more reliable.
Robert Way, the “unphysical” part is the thermodynamics. -50C over sea ice represents about 141 Wm-2, -2C over ocean water represents about 302 Wm-2 plus some latent, and 18C over open water represents about 406 Wm-2 plus approximately 100 Wm-2 of latent. Assuming that a temperature anomaly over sea ice is commensurate with one over -2C Arctic ocean water or 18C open water is pretty much naive.
Mathematically, the kriging is impressive. Thermodynamically it is just about useless.
TE.
Yes kriging across the boundary may introduce errors.
That’s not the question.
The question is are the errors significant?
Given the spatial area involved.. No.
We are down to polishing the bowling ball.
If you want a bowling ball analogy: using old temperature series, from instruments never meant to establish global average trends to hundredths of a degree C per decade, is like polishing an old pitted bowling ball with flaws so gross as to make polishing it an exercise in futility.
A more applicable polishing analogy for grossly inadequate temperature data is “you can’t polish a turd”.
Foias.
Yawn.
Read more.
Steven Mosher, it is polishing, but bowling ball wasn’t the first thing that popped into my mind.
If you want to see Judith’s point you need to consider what the polished temperature represents in thermodynamics. If you neglect latent heat, just the range of possible SST gives you a thermodynamic uncertainty of about 0.35 C; that is, SST could vary by up to 0.35 C with zero change in energy. You could play around with converting all the temperature data to S-B energy before taking the anomaly, and you would see that the tropics have a higher energy weighting than Arctic winter temps.
Since there isn’t much other data to use, though, SST and mean surface temperature have to be used; but when you assume that the mean temperature can be reliably converted to S-B energy, you are stepping in the “bowling ball”. There is a “hidden” uncertainty you can’t get away from.
> If you want to see Judith’s point you need to consider what the polished temperature represents in thermodynamics.
Wait, Cap’n. Are you suggesting that unless we include daily sea ice concentration analysis or buoy measurements of open water SST in the Arctic Ocean, no statistical analysis can ever make any physical sense?
Willard, “Wait, Cap’n. Are you suggesting that unless we include daily sea ice concentration analysis or buoy measurements of open water SST in the Arctic Ocean, no statistical analysis can ever make any physical sense?”
No, I am saying there is considerable uncertainty as to what energy that temperature actually means. It makes physical sense when you include the realistic uncertainties. As far as thermo goes, tropical SST plus an allowance for ice coverage would be more meaningful, but you have to use what you have. An average ocean temperature would be the best thermo reference, since it only varies by a degree or so, but it isn’t exactly an easy thing to determine.
https://lh5.googleusercontent.com/-rowNIYdDNyM/VXHCqWpJDRI/AAAAAAAAN1g/SAW0dzgoOUo/w579-h343-no/sst%2Bt%2Bchoices.png
I posted that below, but see how the 0-100 meter temperature anomaly is right in the middle of the pack. That is one gut check. With a planet scale problem I think lots of gut checks are a good thing.
> No all I have to do is reject your approach and not use your data set […]
All one has to do to counter is to reject?
***
> I am saying there is considerable uncertainty as to what energy that temperature actually means.
Judy’s argument was that Robert Way would need to include daily sea ice concentration analysis or buoy measurements of open water SST in the Arctic Ocean, Cap’n. Otherwise she rejects his analysis. That’s all she needs to do, it seems.
Your thermointerpretation of Judy’s argument looks a bit farfetched.
Willard, “Your thermointerpretation of Judy’s argument looks a bit farfetched.”
Pretty telling you would think that. Since it is supposed to be an energy balance problem, perhaps you should read more.
> perhaps you should read more.
I would read all the quotes from Judy that justify your thermointerpretation of her argument.
Until then, all I have to do is to reject your thermointerpretation.
Willard, “Until then, all I have to do is to reject your thermointerpretation.”
Reject away then. Not particularly unexpected. Then, if you think of it as more of a puzzle, you might find some enlightenment. Basically, though, a ten C anomaly over sea ice in winter should not be averaged with a 1 C anomaly over open water, which is why she would rather ignore C&W’s contribution.
Willard your rejection of a temperature series data set is not equivalent to Curry’s rejection of same. She’s a qualified expert user of such data sets and you’re a bored housewife taking anonymous, meaningless potshots. Why are you acting out in this way?
> She’s a qualified expert user of such data sets
I reject that as an appeal to authority.
Only Chuck Norris is so expert in everything that his rejections become counters. His stare suffices to reject almost everything. For anything else, there are round kicks.
I reject you as someone not playing with a full deck.
Willard, Dr. Curry shouldn’t have to remind a “climate scientist” like Robert Way of the most basic of the laws of thermodynamics.
From the Wikipedia page on the Zeroth Law.
“The law is important for the mathematical formulation of thermodynamics, which needs the assertion that the relation of thermal equilibrium is an equivalence relation. This information is needed for a mathematical definition of temperature that will agree with the physical existence of valid thermometers.”
It is really pretty simple. If you want to use temperature as a proxy for energy you have to consider the relationship between your proxy and reality. At the Top of the Atmosphere you have your energy reality. ~240 Wm-2 in/out and ~0.6 +/-0.4 Wm-2 imbalance in “equilibrium”. That reality only includes one form of energy transfer, radiant.
At the “surface” you have 15 C +/- about 1.7 C which is representing ~390 Wm-2 radiant, ~25 Wm-2 convection and ~90 Wm-2 latent heat transfer up/down. You have a temperature ~15C representing an energy of about 505 Wm-2 which should have an “effective” temperature of about 34 C degrees. Your 15 C temperature proxy does not represent all that well your energy. Stephens et al. pointed out that the “surface” uncertainty in energy terms is about +/- 17 Wm-2. I personally think that could be reduced to about +/- 8 Wm-2, but it would be a bitch to get lower.
http://judithcurry.com/2012/11/05/uncertainty-in-observations-of-the-earths-energy-balance/
Your “greenhouse effect” is actually closer to 505-240=265 Wm-2 not 150 Wm-2. I don’t have a reference handy, but Manabe discussed this in the past when he estimated the GHE in terms of temperature could be 90C.
This confusion is Thermo 101 stuff, Frame of Reference. At the “surface” you have ~390, 25, and 90 Wm-2 which are all heat losses, countered by approximately 340 Wm-2 DWLR, your heat loss prevention. That is a 165 Wm-2 “surface window” not an ~40 Wm-2 “window” which actually would be closer to 20 Wm-2 if that estimation were valid. That is also noted in Stephens et al.
https://curryja.files.wordpress.com/2012/11/stephens2.gif
Since Cowtan and Way are incorporating more data less representative of actual energy, they are polishing the wrong turd. Why? Because if there is a usable “equilibrium” of sorts on human time scales, advective heat transport would have to roughly equal TOA heat loss. Your heat engine, the tropics, is a better temperature proxy than an “average” of some inconsistent and poorly defined “surface”. Your polar temperatures are your heat sink. Averaging your heat sink with your heat source is crapola.
What Cowtan and Way are doing is trying to determine engine efficiency by checking the temperature of the tires. They have done a wonderful job of checking the tires and that is about it.
> Dr. Curry shouldn’t have to remind a “climate scientist” like Robert Way of the most basic of the laws of thermodynamics.
You’re putting your own words into Judy’s mouth, Cap’n. Instead of whispering to her your own response, you ought to write a blog post. Until I see Judy’s seal of approval, I reject it as a tasteless trick.
***
> the relation of thermal equilibrium is an equivalence relation.
Good. Now apply the notion of stochastic equivalence, which is the relevant notion for what “scientists” such as Robert Way do, and report.
Willard, “Good. Now apply the notion of stochastic equivalence, which is the relevant notion for what “scientists” such as Robert Way do, and report.”
Willard, they have a great model for a thermodynamically irrelevant entity. That is the whole point, bubba. They done a fine job and I am sure their mommas are proud, but a “global” anomaly by itself is meaningless.
I mentioned before that if they or Mosher converted all the absolute temperatures to S-B energy before figuring the anomaly, they would have a second reference, also with issues, but naturally weighted for regional sensible heat.
Not that hard, or at least is should not be, to understand.
-50C and 34C have an average of -8C, but if you average the S-B energy equivalents and then convert back, the average temperature would be about 1.5C. Basic math. You have a wonderfully accurate equivalent of a teat on a boar hog.
The whole value of an anomaly depreciates as the range of absolute temperatures increases. SST has a comparatively small range of temperature, so it is a more reliable anomaly. For a planetary-scale problem you would want as many somewhat reliable references as possible to estimate a reasonably accurate range of uncertainty. A temperature reconstruction paired with an energy reconstruction would give you an uncertainty range of about +/-0.3 C for the oceans.
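The Stefan–Boltzmann arithmetic behind this point can be checked in a few lines (a minimal sketch, assuming unit emissivity and purely illustrative temperatures): averaging -50C and 34C as temperatures gives -8C, while averaging their blackbody fluxes and converting back gives about 1.5C, several degrees warmer.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux(t_celsius):
    """Blackbody flux for a Celsius temperature (unit emissivity assumed)."""
    return SIGMA * (t_celsius + 273.15) ** 4

def temp(f):
    """Celsius temperature whose blackbody flux is f."""
    return (f / SIGMA) ** 0.25 - 273.15

t_ice, t_tropics = -50.0, 34.0

mean_of_temps = (t_ice + t_tropics) / 2                        # plain average: -8.0 C
temp_of_mean_flux = temp((flux(t_ice) + flux(t_tropics)) / 2)  # energy-weighted: ~1.5 C
```

The gap between the two numbers is the convexity of T^4 at work: the warm end dominates the energy average, which is the sense in which a plain temperature average under-weights the heat source.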
CD:
I might be understanding. The surface of the Arctic ocean is the radiator. When closed, there is sea ice, you place your temperature gauge 2 meters above that. It reads -30.0 C. You open the radiator to run full, the sea ice is not there. The thermometer now reads -1.0 C. One might ask who would place a temperature gauge downwind of the radiator? For a car, on a warm day, that gauge will read higher as the cooling system works harder and maintains the equilibrium.
Gap filling in the Arctic Ocean based on land temperatures is unphysical. Here are some things that you would have to include in a physically reasonable analysis:
Daily sea ice concentration analysis
Satellite or buoy measurements of open water SST in the Arctic Ocean (e.g. polar pathfinder SST)
See this doc from WHOI, a good source of model assimilation and data available in the Arctic Ocean
http://www.whoi.edu/science/PO/arcticgroup/projects/andrey_project2/indexAP.html The ocean assimilation
I have been trying for years to obtain an error analysis of climate models: the error caused by a grid selection; the error caused by an approximation to convection; the error caused by the assumption that the latent heat of water vapor in the air is negligible. Can you please provide a link to such an analysis?
Talk about someone who doesn’t recognize when he’s punching above his weight class.
Robert Way,
I hope you are not seriously expecting to be taken seriously, with statements like this –
“Reanalyses, created by running modern weather models on historical data to infer the state of the atmosphere.”
Amongst a host of questions this begs is this one – “How are modern weather models superior to older weather models?”
Do they incorporate more modern physics? Are they capable of providing the same wrong answers much more quickly?
The results of your reanalyses make absolutely no difference to the present, nor to the future. Seriously pointless, unless you are trying to provide anti fossil fuel propaganda.
Is there a point to the exercise? It doesn’t appear to achieve anything of utility.
> Doug MacNeal’s take [link].
That says quite a lot.
All the usual skeptic uproar whenever published work on dataset improvements appears. And yet I rarely see literature showing that there wasn’t a sound basis for doing so. Clickbait. JP
You need to learn the difference between improvements and plain old fudging. I refer to my physics professor, Hamorski, and his Hamorski fudge factor: a multiplicative value derived by dividing the ‘right’ value by the experimental value, applied to the observed value, before submitting the experiment.
Essay When Data Isn’t in ebook Blowing Smoke. Please return with rational explanations for what is indelibly shown there. Rutherglen and Reykjavik, for example. Or DRX 2013 to NClimDiv 2014 for Maine.
I anxiously await the Curry, Tisdale, Michaels, et al publication that empirically demonstrates the errors in the Karl, et al work. But I’m not holding my breath. JP
You already have it, above. The old canard about not peer reviewed? Behold Marcott (paleo), PMEL and Fabricius (2x ocean acidification), Cazenov (SLR)… All evidence of peer reviewed academic misconduct.
Other essays in Blowing Smoke, which you appear good at.
atarsinc – To put it a bit more simply than ristvan : You will never see anything using the la la la method. (Sorry about the mixed metaphor).
Yeah, Karl et al. is unassailable. As if the 7 billion folks who are not losing sleep over AGW give a flying f—. On to Paree! Good luck with that.
Me?
I am suspending judgement on the paper until I have an opportunity to study it in detail.
The uncertainty monster left the building and some people have decided to judge without looking. Or they judged based on the names on the paper.
Or they judged based on the smell test.
Then read/study faster. Some of us have already paid the paywall and read the thing. After the 1400 EDT embargo. Abysmal.
I am with Mosher on this one. The paper has got to be studied. We have no reason to doubt that a few months before the crucial-to-the-survival-of-the-planet Paree junket, the Gruberites at NOAA have suddenly found out that they have been failing to account for the temperature properly. Of course, the corrections and updates erase the pause. We wouldn’t expect it to be any other way. And we know we can count on the story of this newly discovered faux pas being legit, because we all got to keep our doctors, we are on average saving $2500/yr on our health insurance, we are kicking hell out of ISIS, and Hilly’s Billy didn’t touch that young intern woman, uh, what’s her name?
Ross has read it, Showtime. Go see what he has to say.
Your reflexive defense of the consensus BS is getting comical.
Improvements. Right.
“Kevin Trenberth,[…] said the new work was “good and careful analysis” but only confirms what most scientists already knew, that there was no such hiatus,” contradicting his own 2009 statement: “The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t.”
The pause and pre-pause periods in the various temperature records, including the new Karl et al. Interestingly, the new record is most similar in trends to Cowtan and Way and Berkeley Earth over the 1998-2014 period: http://i81.photobucket.com/albums/j237/hausfath/temperature%20trends%20comparison_zpsagn2clbe.png
Hmm, that image host seems to be having problems; this might work better: http://s28.postimg.org/6xwxipoy5/temperature_trends_comparison.png
“the new record is most similar in trends to Cowtan and Way and Berkeley Earth”
Is that good or bad, Zeke?
Andrew
Just interesting. I’m not convinced their new SST adjustments are better than those in HadSST3, but that’s also not my area of expertise. Either way, their SST adjustments coincidentally have the same effect on the trend as the greater arctic coverage in Berkeley and C&W.
Greater arctic coverage? Is coverage the word you want?
Zeke,
I think you’ve mis-labelled the x-axis in the left-hand panel. Should be 1951-1997 – I think.
Thanks ATTP; fixed version here: http://s17.postimg.org/ggmd032un/temperature_trends_comparison.png
Try embedded tweets:
https://twitter.com/hausfath/status/606581320173494272
Zeke, What I see here is large error bars for the shorter period and lower rates of change than I thought GCMs found for the earlier and longer period. From a scientific perspective, it seems like this whole argument is rather meaningless and about spin as much as anything.
A more meaningful question is why the TLT satellite data and the radiosonde data disagree so much with the surface data. It seems to me with all due respect that this question is like a 3rd rail that most climate scientists steer clear of even mentioning.
As has been pointed out many times, the data is noisy and the signal we seek is quite small compared to the size of the variables and MUCH larger than the truncation errors in the GCMs. Perhaps better data is a very important investment we need to make.
Looks like the plot shows a low of .06 C / decade and a high of .11 C / decade. Looks to be a difference of 0.5 C / century.
Thanks Zeke, that really clarifies what everybody is getting so hot under the collar about. But really I thought the real comparison was the 1970s-1990s warming rate with the ‘hiatus’. It was that rate of warming that matched the expectation for AGW going forward and it’s the failure to match that rate which seems to demand explanation. Along with the model/obs comparison ( maybe they are one and the same thing )
http://s17.postimg.org/ggmd032un/temperature_trends_comparison.png
Perhaps I am missing something. The uncertainty ranges don’t seem to be correct.
The 1998-2014 trend has almost four times the uncertainty of the 1951-1997 trend? Really?
My understanding was all the satellite thingies, automated weather stationy things, and the thousands of bob-in-the-ocean thingies gave us better information in modern times. If they don’t reduce uncertainty why do we waste money on them?
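For what it’s worth, the roughly fourfold gap in trend uncertainty follows largely from record length alone, not instrument quality: with independent year-to-year noise, the standard error of an OLS slope shrinks like n^(-3/2). A toy calculation (white noise only; real temperature series are autocorrelated, which widens both intervals):

```python
import numpy as np

# Standard error of an OLS slope over n evenly spaced yearly values with
# independent noise of standard deviation sigma. Toy model only: real
# temperature series are autocorrelated, which inflates both SEs.
def trend_se(n, sigma=1.0):
    x = np.arange(n)
    sxx = np.sum((x - x.mean()) ** 2)  # equals n*(n**2 - 1)/12
    return sigma / np.sqrt(sxx)

se_long = trend_se(47)    # 1951-1997, 47 years
se_short = trend_se(17)   # 1998-2014, 17 years
print(round(se_short / se_long, 1))  # 4.6 -> window length alone explains the gap
```

Better instruments shrink sigma, but they cannot beat the n^(-3/2) factor, which is why a 17-year trend carries several times the uncertainty of a 47-year one even with superior modern coverage.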
Zeke Hausfather,
I have never really bought into the definition of hiatus or pause being applied to post ’98 records, I have always just looked at it as a lower trend line from ’78. It amuses me that both sides really get caught up in the description battle and it seems as if real science suffers in a silly skirmish. I know politics and propaganda have a lot to do with it and that reflects badly on science IMO.
As far as all the dispute over temperature adjustments, I would blame that on a lack of communication or transparency. In particular you are saying that it is interesting or coincidental that Berkeley Earth, C&W and now Karl et al are all very similar in their upward trending adjustments. Now wouldn’t that seem like an obvious red flag, to any outside observer, if there is a question as to why or how these adjustments are made? Correct me if I’m wrong but isn’t the upward trend of both B.E. and C&W attributed to the addition of arctic temperatures whereas Karl et al is from adjustments to marine records? I don’t know which would be more coincidental. Do you see any possibility that these coincidences could be rationally explained?
The met office mentioned the pause and wrote three papers on it and had an international meeting about it. The ipcc mentioned the pause. I heard Thomas Stocker himself mention the pause
http://www.nature.com/news/ipcc-despite-hiatus-climate-change-here-to-stay-1.13832
Whatever its cause, it existed. Why do some people deny it?
Tonyb
Tony B, It’s not a matter of denial for me; it’s simply a matter of how long you make a trendline. If you go from ’78 to ’98 and then ’98 to present you have a robust upward trend followed by a fairly flat upward trend on most all records. If instead you measure from ’78 to present you get a less robust upward trend. One could say both that there is a pause and that there isn’t, and not be lying.
Whatever its cause, it existed. Why do some people deny it?
Tonyb
They must be the “Deniers” that everybody dislikes.
Bad behavior, along with bad science, is being rewarded. The rewards go to President Obama who has directed his minions at NOAA & NASA & NCDC & others to get rid of his opponent: the hiatus. In true Al Capone Chicago style, Karl et al, with the blessings of Trenberth and others, via data management, have assassinated a personage, a stumbling block, who had been constraining Obama’s triumphant march on Paris. No more hiatus; no more obstacles remain from descending down the progressive rabbit hole. Government by fiat is alive and well in Washington. This date, June 4th, will live in infamy for the destruction of reason and the triumph of bestial power. No smoke and mirrors. Just raw power: this is the way things will be.
A complicit Congressman as in Whitehead sets the tone of intimidation for any and all who thoughtlessly wish to decry such Imperial behavior.
Skeptics of the World, Unite! You have nothing to lose but your shackles.
Whitehouse not Whitehead.
So, the latest NOAA cut reduces the pause but also reduces the longer term warming?
NPR was pretty quick with
‘Skeptics Contradicted – No Pause’
rather than the other equally valid assessment:
‘Global Warming not as bad as previously thought’.
The real significance is not so much the recent cooling trend ( which may end with this year’s El Nino ) but that the observed trends are all less than the low end model predictions and have been declining.
Actually Whitehorse (IIRC).
Oops–scratch that! Whitehouse it is.
Pingback: A new volley in the climate wars: here’s a playbill to the action! | The Fabius Maximus website
NOAA are acting like 5 year old children who have been told they can’t have something.
Pathetic!
One of the more interesting gaps to fill would be Greenland.
Looking at the periphery in the GISTEMP data, one would fill in anomalously high trends:
http://climatewatcher.webs.com/SatelliteEraMap.gif
But, UAH MSU seems to think high elevation Greenland has been cooling
( as well as Antarctica and the Tibetan Plateau ):
http://www.drroyspencer.com/wp-content/uploads/LT-trend-map-v6.02.jpg
Contamination? or real cooling of higher elevation that simple minded interpolations are missing?
Also note the incoherence of trends in the North Polar regions ( warming cells next to cooling cells ).
What is the trend of the Summit station in Greenland?
The station is in the middle of the green (cold) spot.
GIStemp should be showing data since there is a land station there.
Using UAH for the high elevation portions of the ice sheets is incredibly naive. There’s a reason RSS exclude that data.
Using UAH for the high elevation portions of the ice sheets is incredibly naive. There’s a reason RSS exclude that data.
That’s possible, surface pressure at the Greenland Summit would be quite low. However, the cooling trend is interesting.
And would interpolating over Greenland based on peripheral sea level stations also be incredibly naive?
I think so.
Greenland aside, the UAH trend map indicates a fair amount of spatial consistency with the surface temperature trends. Cooling in the Eastern Equatorial Pacific, two warming lobes in the Western Pacific ( consistent with PDO? ), cooling in the Southern Ocean and maximal warming in the Arctic.
And still no scientist (or even climatologist) able to demonstrate the warming powers of greenhouse gases in the presence of unbelievers.
Just more “new analysis” of a mishmash of temperatures from unknown distances above the nominal surface taken under a variety of changing conditions, temperatures from buoys in the ocean, temperatures of buckets of sea water plucked at random from the sea, supposedly accurate engine intake sea temperatures taken at inconsistent positions at inconsistent times, and purporting to be global surface temperatures.
Any real scientist would recoil in horror if asked to stake his or her reputation on the resultant average, as a reliable scientific measure.
And still millions die from disease, starvation, war, torture and other forms of deprivation. How will the reanalysis of the meaningless help to benefit these people? Or is it all a useless exercise demonstrating the ability to frivolously waste tax payers’ money?
Still, the analysts need a job, I suppose. Somebody benefits!
Well I just went and reread the Huang et al paper
http://rda.ucar.edu/datasets/ds277.0/docs/ERSST.V4.P1.JCLI-D-14-00006.1.pdf
I remember spotting this when it came out, but i didn’t pay it much attention since I use HadSST. All this is more depressing than I thought. The 0.12C adjustment to the buoy data makes no sense – I get it that there is a problem with ship data, but why try to solve this by warming the buoy data? And why neglect the ASTR data, which is probably the best SST data there is? And why neglect ARGO, which measures temps at the same depth as ship intake?
If the issue is the hiatus, just look at the best observations you have since 1990, which would mean forgetting ship data, and just look at ASTR, buoys and ARGO.
I haven’t read the companion paper on uncertainty, but with all the stuff they are doing to the data, uncertainty should be quite large.
Have just done the same. Concur with you.
Looks like this was a planned multipaper setup. The fingerprint ‘evidence trail’ is clear. Legal types should be shouting “willful misconduct”.
And people wonder why Lewandowsky wanted to look at conspiracy ideation by “skeptics.”
Dr. Curry:
Don’t get your hopes up.
oh i see, they will use the ships to ‘fix’ the argo data
In other words, if they can find a way to torture the data to get the answers they want they will include it. And if they can’t, they won’t.
Can Zeke or Robert Way or Mosher or anyone here explain this in a way that doesn’t sound absurd? Why are they adjusting the buoy data? So far all I have is some statements by pro-AGW scientists that they’re happy The Pause is gone (and statements by anti-AGW types that this all makes no sense). I’d like to hear a more coherent presentation by the pro folks.
I’ve just read through the paper.
A while back we were looking at doing a new ocean and this buoy adjustment ( in other records) was something that caught our eye.
Technically it doesn’t matter whether you adjust the recent past or the deep past.
If you do an adjustment you will either hold the recent data constant and adjust the deep past, or hold the deep past constant and adjust the recent data.
There is a good argument for adjusting the deep past, especially if the recent period has, for example, a better measurement system. However, when people do this on the land record they are accused of rewriting the past.
Think of it this way.
Suppose you want to adjust for inflation.. say the cost of your house.
You can put the past price of your house in current dollars or the current price of your house in past dollars.
The difference is not material because it is 100% reversible. if you describe your approach, then people can understand what you did.
Obviously if you are selling it you want to work in current dollars.
But when I talk to my 80-year-old mom about housing prices I put it in dollars from when she bought her house.
Finally, since we are talking about anomalies and focusing on trends, it doesn’t matter either way.
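Mosher’s reversibility claim can be checked with a toy example. The prices and deflator below are made up; the point is only that deflating to “past dollars” is a rescaling you can undo exactly, so nothing material hangs on the choice of reference year:

```python
import math

# Toy check of the reversibility claim: converting nominal prices to
# "past dollars" and back recovers the originals exactly (up to float
# rounding). Prices and deflator values are invented for illustration.
prices = [100.0, 110.0, 121.0]     # nominal prices in years 0, 1, 2
deflator = [1.00, 1.05, 1.10]      # price level relative to year 0

in_past_dollars = [p / d for p, d in zip(prices, deflator)]
back_again = [p * d for p, d in zip(in_past_dollars, deflator)]

assert all(math.isclose(a, b) for a, b in zip(back_again, prices))  # 100% reversible
print(round(in_past_dollars[-1], 2))  # 110.0 -> the year-2 price in year-0 dollars
```

Whether the same reversibility argument carries over cleanly to temperature adjustments, where the series have error bars, is exactly what the commenters below dispute.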
The only cogent objection I see is the one over the use of MAT.
“But when I talk to my 80-year-old mom about housing prices I put it in dollars from when she bought her house.”
Your mom has probably noticed the changes in housing prices.
Have you read Ross’s comments on the paper?
With all the warmests’ angst about the MOD-denier-BIG OIL promoted pause, why do you think it took the Grubers so long to make these adjustments? Why hasn’t someone else noticed the need for these adjustments?
It may make no difference in the result, but it is sure likely to make a difference in how the rest of us deal with it. a) Changing the present means that you’re going to need to continue changing the future as well. b) Why are you making changes? Isn’t it because you think that one or the other measuring method is biased? Well, which one do you think is biased, and why? Surely not the present method, no?
I guess it would help to hear a coherent explanation of why this is being done. What do they think needs fixing, and what makes them think so?
Of course I could read the paper, but it might not help.
Steven Mosher,
You seem to love an analogy – good, bad, or indifferent.
Try this one for focussing on a trend. Obscure the windscreen of your car. Pick a straight piece of road.
Now drive your car by looking in the rear view mirror. You’ll just be following the past, which surely can be used to predict the future, can’t it?
Silly. Trends are useless. Try following one, and all it does is bring you closer to any inflection point. Is it not so?
Steven
I understand your point about inflation but that just brings out the dozens of problems with the attempt to make CPI, or any other method, a valid inflation indicator.
Regardless of the field, adjustments are adjustments, with inherently flawed results, in spite of the best intentions by the best minds in the world.
The deeper one digs into this whole attempt at reconstructing reality, the closer to bizarro world it gets. But then the next campaign is starting to heat up and we have all those donors to call, all those fund raisers to attend and all those yard signs to make. Leave no stone unturned and all that stuff.
Steven Mosher:
There are two issues.
One is using interpolation to provide missing data values. That has its own set of issues.
A second issue is using nearby values to adjust another existing value which is deemed to be “wrong”.
On the second issue – whether you use past dollars or current dollars – if you tell your mom that the sale price for the house she sold in 1950 for x dollars has been adjusted (in past dollars) to x times .9 in order to homogenize it with the sale prices of nearby homes (at the same time), I bet she would be pissed.
She might even say – NO my house sold for x in 1950 and it doesn’t matter what the other houses sold for – you should leave the sale price at x.
Mosher’s analogy to inflation adjusted dollars is (unsurprisingly) nonsense. The problem is that old and new dollars have no error bars. They each precisely represent 100 pennies. Old and new temperature data both have error bars, the latter usually with much smaller bars. So the modern “dollar” represents between 99 and 101 pennies while the old “dollar” represents between 75 and 125 pennies.
Mosher makes an astonishing number of mistakes. One after the other in endless succession. Find a different hobby, Steven.
Pause or no, SST monkey business or no,
the slightly longer term perspective remains,
all the global temperature trends are at rates less than the low end model projections:
http://climatewatcher.webs.com/Lukewarming.png
I just tweeted this, no response yet:
If trying to improve analysis in ocean SST during recent hiatus, why not look at ASTR, ARGO and buoy data since 1990 (forget ship data)
The key issue IMO is to sort out these data sets for the recent past (say since 1990) and moving forward.
Let’s use the modern, best datasets, and forget about tying ourselves to the long term continuity of the old data sets!
Sure, somebody can continue with the old data sets for the sake of continuity, but ignoring or mutilating our best data sets in the interests of long term continuity of sub par data sets makes little sense at this point
“Let’s use the modern, best datasets, and forget about tying ourselves to the long term continuity of the old data sets!
Sure, somebody can continue with the old data sets for the sake of continuity, but ignoring or mutilating our best data sets in the interests of long term continuity of sub par data sets makes little sense at this point”
Now I am totally confused.
In the land-based record we choose to adjust the past and hold the current observations constant. So, for example, we have CRN, which is a gold standard, and when we adjust the past observations holding the gold standard constant we are accused of rewriting history.
In this record they choose to adjust the current and hold the past constant.
And now they get criticized for the opposite reason.
The idea that one should ignore the issue of continuity strikes me as close to being malpractice.
If ARGO is replaced with a better system.. the same argument will be made. Just focus on the most recent best data.
You have the information you have, collected in the way it was collected.
The question is how we can reduce uncertainty, how we can gain insight.
If you want to do the adjustment differently, then go do that. And then folks will accuse you of rewriting history.
When the goal is to maintain uncertainty, no good faith attempt to reduce uncertainty will go unpunished.
We should probably stop trying to create a long term arctic ice series because establishing continuity between satellite records and map based records requires that some records be “mutilated”. And that submarine data that Gore wanted.. throw it away too. It could never be brought into alignment with satellite measures.
Sunspots? Throw out Leif’s work.. why try to make a long record?
The main benefit of NOAA’s approach is you get to see the analyst choices.
You get to see what things are in doubt and you get to see how the answer changes as a consequence. you get visibility into the uncertainty that is conditioned by analyst choices.
Every day in business I have to adjust data. Even in a system where every transaction and mouse click is recorded I am constantly adjusting the past and present to get a consistent record. It’s SOP
there are two separate issues:
• long term record
• best available data for recent past and going forward
Trying to accomplish both in the same analysis may not be useful or optimal for many variables
Steven Mosher,
You wrote –
“Every day in business I have to adjust data. Even in a system where every transaction and mouse click is recorded I am constantly adjusting the past and present to get a consistent record. It’s SOP”
Only in America. Companies specialising in adjusting the past – not to mention the present. Combine this with Government bodies adjusting the past – no, not the Bureau of Adjusting History to Suit Present Purposes – try NOAA, and you might wonder why an entire country seems to be preoccupied with the past.
Do you not realise that you cannot change the past? It is gone, history, finished, kaput!
Maybe the record reflects Nature. Inconsistent, chaotic, often totally unpredictable. What do you think?
Mosh “when we adjust the past observations … we are accused of re writing history”.
Well, duh. It is re-writing history.
As Prof Curry points out, NOAA’s adjustments are to perfectly good recent history for the purpose of generating an alt-history. That’s not SOP anywhere but climate science.
“Now I am totally confused.
In the land based record we choose to adjust the past and hold the current observations constant. So, for example we have CRN which is a gold standard and when we adjust the past observations holding the gold constant we are accused of re writing history”
I reject the charge. You have not been accused by me. I am asking, I don’t know who else is asking: why would it make sense to adjust the buoy data to fit the ship data?
Steven,
I usually pay attention to what you post – except when you get involved in spitting matches – but more and more you seem to be speaking in tongues.
And I don’t need to read harder. You need to write more clearly.
Mosher,
I missed your last paragraph.
In certain fields what you describe can be called fraud. Adjusting one’s interpretation of data is one thing. Adjusting the data is another.
Mosh,
There is something you may be missing in why the people you argue with never seem to understand your point of view concerning dataset constructions and data adjustment. I don’t think they have really conveyed this to you either.
The majority of educated climate skeptics have witnessed countless revisions over time to the surface temperature sets by the mainstream climate science groups that maintain them. Old sets are revised or retired, and new versions take their place. Consistently, and likely without fail, every revision of the last 15 years or more ALWAYS INCREASES THE WARMING TREND. This just reeks of a problem, whether conscious or unconscious, and just throws the entire concept of these new and improved data adjustments under the bus.
It does not matter how honestly BEST has worked and presented its results. It does not matter that if you don’t adjust you actually reduce the trend, which may be absolutely true in your and BEST’s work.
All that matters is that something is wrong if revisions by the majors ALWAYS increase the trend. That in itself does not pass the smell test. Tracing the smell to the source may not yet have been done. It may be very tricky to find where subconscious bias distorted process at a deep level.
But it clearly smells.
This is funny:
==> “There is something you may be missing in why the people you argue with never seem to understand your point of view … Consistently, and likely without fail, every revision of the last 15 years or more ALWAYS INCREASES THE WARMING TREND.
Jeez
“The majority of educated climate skeptics have witnessed countless revisions over time to the surface temperature sets by the mainstream climate science groups that maintain them. Old sets are revised or retired, and new versions take their place. Consistently, and likely without fail, every revision of the last 15 years or more ALWAYS INCREASES THE WARMING TREND. This just reeks of a problem, whether conscious or unconscious, and just throws the entire concept of these new and improved data adjustments under the bus.”
Well thats not correct.
Take the Karl paper we are discussing.
The change to the long term trend.. from 1880 to present is the OPPOSITE of what you claim.
See the table in the SI. YUP. the new version has a slightly lower trend.
Let me see if I can explain it to you in a simple way.
Suppose you have a system that measures a variable from 1900 to 2000.
It consistently records 52.
In 1990 through 2015 you start collecting with a different system, and the value for that same thing in the same location is 54.
So for 100 years you have data that says 52,
and then you have overlapping data that says 54.
One measurement system says 52; the other says 54.
Got that? You know, because they are done side by side, that one is offset from the other by 2.
Suppose now that you want to build a long series. What do you do?
A) use raw data.
B) adjust 54 to 52
C) adjust 52 to 54
If you use A you will have a bias in your record; you will introduce a bias in the trend.
If you use the B or C approach, the trend will not be biased.
It doesn’t get any easier than this. We know from COLLOCATED SHIPS AND BUOYS that there is an offset. The question is option B or option C.
So how do you fix that bias? You can change 52 or change 54.
Which is “valid” when both yield the same answer in trends?
Further, you are NOT REWRITING the history. The historical data is still there. Go use it. You are not, as Judith argues, “mangling” the data.
You have two data sets. They are STILL THERE. You use those two to create a third. If you want to create a long record you can only choose B or C. Thankfully the final answer doesn’t change if you choose B or C.
There are practical reasons why you might choose B or C, but those are not scientifically material.
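Mosher’s B-versus-C claim is easy to check directly. Below is a minimal sketch using his toy numbers (52 versus 54, with a +2 offset estimated from the overlap years); none of it is real SST data. Both adjustment directions recover the same trend, while the raw splice (option A) does not:

```python
import numpy as np

# Sketch of the splice problem described above, with invented values:
# an old system covers 1900-2000, a newer overlapping system (1990 on)
# reads 2 units high, and the offset is estimated from the overlap.
years = np.arange(1900, 2016)
truth = 52 + 0.01 * (years - 1900)                  # slow true trend
old = np.where(years <= 2000, truth, np.nan)        # old system, 1900-2000
new = np.where(years >= 1990, truth + 2.0, np.nan)  # new system, reads 2 high

overlap = (years >= 1990) & (years <= 2000)
offset = np.nanmean(new[overlap] - old[overlap])    # -> 2.0, from collocation

def trend(series):
    ok = ~np.isnan(series)
    return np.polyfit(years[ok], series[ok], 1)[0]

# Option A: raw splice (prefer new where available) -> step change in 1990
raw_splice = np.where(np.isnan(new), old, new)
# Option B: adjust new down to old; Option C: adjust old up to new
option_b = np.where(np.isnan(new), old, new - offset)
option_c = np.where(np.isnan(old), new, old + offset)

print(round(trend(option_b), 3), round(trend(option_c), 3))  # 0.01 0.01
print(trend(raw_splice) > trend(option_b))  # True: the raw splice biases the trend
```

The practical difference between B and C is only which segment keeps its familiar absolute values; in anomaly-and-trend terms the two choices are interchangeable, which is the point under dispute in this thread.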
“Take the Karl paper we are discussing.
The change to the long term trend.. from 1880 to present is the OPPOSITE of what you claim.
See the table in the SI. YUP. the new version has a slightly lower trend.”
Yeah, it was necessary to erase the pause and the formula for adjustments they cooked up to do that happened to have an undesired effect on the long term trend. Sometimes you got to make a small sacrifice for the big headline.
The real question is are the Karl adjustments justified, or is it a case of politically motivated headline hunting? What say you, Mosher?
Don, you mean those evil climate scientists are committing fraud?
I will be generous and say that it appears they have committed deliberate premeditated blatant self-inflicted confirmation bias, yoey. The climate science needed that headline. Paree will still be a flopee.
“those evil climate scientists are committing fraud?”
Joseph,
Do you know if they are? Do tell what you know.
Andrew
Joseph | June 5, 2015 at 2:27 pm |
The real question is are the Karl adjustments justified, or is it a case of politically motivated headline hunting?
Don, you mean those evil climate scientists are committing fraud?
“Don, you mean those evil climate scientists are committing fraud, again?”
Fixed it for you.
jeez | June 5, 2015 at 4:35 am |
…
All that matters is that something is wrong if revisions by the majors ALWAYS increase the trend. That in itself does not pass the smell test. Tracing the smell to the source may not yet have been done. It may be very tricky to find where subconscious bias distorted process at a deep level.
But it clearly smells.
I archived all variants of the USHCN data set last year.
This year, on the anniversary of the download, I will download the sets again and plot the difference. The last time I plotted the difference I got a couple of hundredths of a degree Celsius change and a pumpkin-toothed alteration pattern. The difference is about 1.57°C per century. By 2100 the data adjusters will have lowered the early 1900s about 3.14°C relative to 2100, and there will be a 3.14°C difference between the early 1900s and 2100 even if a time traveler would have measured the same temperature.
Now – it is without question that a 3.14°C alteration for no reason is wrong. So the question is how to stop them.
The easiest solution is for Congress to find the budget line item for the department that alters the temperatures and delete it. This will get the adjusters reassigned to other departments or fired, which is really the only solution.
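The archive-and-diff check described above is straightforward to automate. A minimal sketch, with made-up numbers standing in for two downloaded versions of the same series (the real exercise would read the archived USHCN files); the "drift" is just the OLS slope of the version-to-version difference:

```python
# Toy version of the archive-and-diff check: compare two snapshots of the
# same annual series and fit a trend to the difference. The numbers are
# invented; a drifting later snapshot stands in for a real re-download.
v2014 = {1900 + i: 0.0 for i in range(116)}                    # archived copy
v2015 = {y: t + 0.005 * (y - 1900) for y, t in v2014.items()}  # later copy

common = sorted(v2014.keys() & v2015.keys())
diff = [v2015[y] - v2014[y] for y in common]

# OLS slope of the difference, in degrees per year
n = len(common)
xbar = sum(common) / n
ybar = sum(diff) / n
slope = sum((x - xbar) * (d - ybar) for x, d in zip(common, diff)) / \
        sum((x - xbar) ** 2 for x in common)
print(f"adjustment drift: {100 * slope:.2f} degC per century")  # 0.50
```

A persistent nonzero slope in that difference is what the comment calls the “alteration pattern”; whether any given drift reflects bias or legitimate corrections is, of course, the whole argument.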
Steven Mosher | June 4, 2015 at 8:12 pm | Reply
“Now I am totally confused.”
Fixed that for ya!
“The main benefit of NOAA’s approach is you get to see the analyst choices.
You get to see what things are in doubt and you get to see how the answer changes as a consequence. you get visibility into the uncertainty that is conditioned by analyst choices.”
Yeah, that’s why they made those choices. So we would get to see them. At least the four hundred people that read the paper will see them. The rest of the world will see the headlines about the pause being busted in the NYT, the Guardian and all the other hack climate consensus mainstream BS publications. But it won’t do them any good. Paree will still be…busted.
the pause would be busted REGARDLESS of how you do the buoy adjustment.
Really? You might try to esplain that. But I ain’t expecting it.
Simple Don.
The trend don’t care.
Simple Steven,
The trend are useless – well, maybe, less than useless.
Suppose that particular odd buoy adjustment does not become popular, Steven? IPCC apparently didn’t think to trot it out. Ross ain’t impressed. Nobody else proposed using ships that pass in the night to adjust buoys. Did they? Why this, why now? It’s fishy, Steven. No, I would have to say it’s bullcrap.
“The trend are useless – well, maybe, less than useless.”
The hiatus is a lack of trend.
When the question is “Is it real?”
then trends are useful.
Put another way: if the trend in observations doesn’t match the trend in models, then it’s useful.
Thank you for being a useful idi8t
Steven Mosher,
You still believe you can predict the future better than I, otherwise you wouldn’t be wasting your time, I suppose.
You can’t. Your manufactured trends, your models, are all completely useless, as far as I can determine. I can do as well, or probably better!
I might be wrong, of course. You can no doubt point to something of proven benefit to humanity at large, which outweighs, (or even equals), the large amounts of time, effort and money wasted on stupid models and history rearrangement.
And your factual response?
I thought as much.
I was hoping you would provide a useful answer, Steven. Is this paper a politically motivated publicity stunt, in the genre of Cook and Lewandowsky, or is it science? Please read your friend Ross’s analysis and comment with the laudable seriousness and intelligence that you have exhibited on special occasions.
Do you know any of the pilots flying missions over Iraq and Syria, Steven? Do you know that they are repeatedly being sent into harm’s way knowing that on about three quarters of their sorties they will not be allowed to drop their ordnance on the bad guys? They won’t complain about it publicly. Bad for the career. It’s ideology and politics. It’s all ideology and politics with this Clown-in-Chief. The people working at the NOAA have to suck up to the boss, just like everybody else. And that they share his ideology makes it all that much easier.
“You still believe you can predict the future better than I, otherwise you wouldn’t be wasting your time, I suppose.”
1. you don’t have any predictions of the future that I know of, so I would suspend judgement.
2. the only predictions I have are predictions of SAT. Since you haven’t seen them I don’t know how you could even argue that they are different from yours.
You can’t. Your manufactured trends, your models, are all completely useless, as far as I can determine. I can do as well, or probably better!
Lets test that.
I have some data. It’s never been used before in any temperature series.
I will give you the latitude and altitude.
Predict what the temperature series will look like from 1900 to 2015.
go!
###################
I might be wrong, of course. You can no doubt point to something of proven benefit to humanity at large, which outweighs, (or even equals), the large amounts of time, effort and money wasted on stupid models and history rearrangement.
And your factual response?
##########################
1. you assert total uselessness, but assertion is not demonstration.
2. proven benefit is easy to demonstrate. People continue to buy and use the products you say are useless. I believe in markets; you must not. In other words, if you can’t find a use for it, then nothing follows from that. Historical climate is of course used in forestry, in legal cases, in insurance, in warning systems, in agriculture. In some cases these products cost money. In other cases the government and others pay to make it free for the public.
Steven Mosher,
I assumed you thought you could predict the future better than I. Apparently not.
You decide to test our relative abilities to predict the future by predicting the past. Stupid. However, let us set a different test. Let us each make a prediction of temperature, say, for a location equipped with an agreed temperature measuring device, for a particular date in 2016. Let us ensure that the temperature is monitored by an independent source, say a trained meteorological observer. You make your prediction, I will either agree or disagree. If I agree, then your prediction is no better or worse than mine.
On the other hand, if I disagree, for whatever reason, facts will determine who, if anybody is correct. You have the choice of setting the parameter to be predicted, if temperature doesn’t suit you, the date, the error range etc.
I have a presentiment that you will not be willing to wager even a minuscule brass razoo on being able to predict the future better than I, given the same initial conditions. Over to you.
As I thought, you are unable to point to a proven benefit of climatology which outweighs the cost. Instead, you seem to be claiming that because something is purchased, it is useful, in the sense of being fit for purpose. The US Government buys all sorts of useless things, ranging from incredibly poor advice on winning wars, to weapons systems of such poor standard that they are unusable, to economic analysis and advice which has demonstrated its inadequacy.
People pay large amounts of money to psychics, astrologers, dispensers of naturopathic remedies, purveyors of various nostrums and drugs. Governments pour money into companies such as Solyndra, and more recently, Tesla. Useful? If you say so.
Your point about historical climate being used in various areas is just silly. You can’t even define climate in that sense, let alone use it. The past is past. I would agree that a record showing that a particular flood level was reached may well be used in a dispute over an insurance claim. This is not climatology.
The money spent on climatology to date has been almost totally wasted. It has been mildly useful in demonstrating its inability to provide any specific benefits to humanity at large.
Prof Curry,
This is a common problem in other sciences, such as economics. People rely on the high quality data of current economic stats, and assume that historical estimates (paleo-economic data?) are as accurate – when in fact they’re little more than rough estimates.
Unfortunately we have little reliable economic data before WWII, and almost nothing before WWI. For example, we have all seen those wonderfully precise estimates of unemployment during the Great Depression. Guesses. The first survey was in March 1940, other than small surveys done by mailing postcards in 1937 and 1939.
http://www.bls.gov/opub/mlr/1984/06/art2full.pdf
It gets worse the further one goes back. Often famous “lessons from history” are little more than legends dressed up with guesses about the numbers.
Former CEA chair Christina Romer made her first academic splash by showing that the apparent lower volatility of U.S. economic output prior to the establishment of the Federal Reserve was largely an artifact of data weaknesses in the earlier period. I’m not sure what later workers in the field concluded about all that, but it was a remarkable case of findings over philosophy given her pro-demand-management views.
Steve,
Are you referring to her 1999 paper “Changes in Business Cycles: Evidence and Explanations”? I suggest a cautious interpretation.
(1) She constructs bricks from straw, doing esoteric statistical work on scarce and unreliable data, especially before WWI. It makes paleoclimate data look solid. Academics love this stuff, but no sensible person would wager a dime on the accuracy of the results.
(2) Her conclusions are narrow: “recessions only slightly less severe” and “less frequent and more uniform.”
(3) Her analysis does not include our two great post-Civil War deep, long depressions: the Long Depression (1873-1879) and the Great Depression. The first is before her start date; she treats the second as an exception. It’s like asking “other than that, how was the show, Mrs. Lincoln?”
Avoiding depressions is usually considered the primary goal of macroeconomics. That focus was lost during the long post-WWII expansion. The close call of 2008 (quite similar to 1930-1932) refocused people’s minds, as the sight of the gallows usually does.
She pulled a kind of Vesalius v. Galen argument (all our human anatomy knowledge is based on studying gibbons!) by pointing out that the old “GDP” data were sectorally biased, covering only a small number of volatile manufacturing sectors and extrapolating from there. That seemed a sound point to me. The Great Depression would have strengthened her argument, while the Long Depression would have weakened it, so leaving both out in order to look at higher-frequency cycles isn’t a terrible idea.
How many of the 7 billion folks not losing sleep over AGW change their minds over this latest prefabricated pause denial BS? I would guess fewer than 1,000. The Paree climate alarmist partee will still be a disasteree.
The pause is killing the cause. You can tell by how desperate they are to debunk it.
You already have it, above. The old canard about ‘not peer reviewed’? Behold Marcott (paleo), PMEL and Fabricius (2x ocean acidification), Cazenave (SLR)… All evidence of peer reviewed academic misconduct.
Other essays in Blowing Smoke, which you appear good at.
Busted thread. See above.
“Has NOAA ‘busted’?”
Yes Ma’am, they went for a five card trick and went ‘bust’ before three!
Got to love it; future social historians, 50 years or so on, are really going to love today!
Probably class it with ‘Tom and Jerry’.
Yes Ristvan, peer reviewed literature is a “canard”. JP
Atarsinic, whoever you really are hiding behind that avatar nom de plume, yup. My most recent book proved rather vividly that much of the recent climate peer reviewed literature is a ‘canard’. Read it, then prove those examples otherwise. Marcott’s hockey stick abomination would be a good starter. Previously guest posted here. Or, you could try Thomas on extinctions, Fabricius on corals, or O’Leary on sudden SLR.
Doubt you would even try on any, since would ‘lose’ on all.
Today Steve Milloy correctly identified Harvard University’s science.
http://junkscience.com/2015/06/04/harvard-syracuse-researchers-caught-lying-to-boost-obama-climate-rules/
I deeply regret that blunt statements may be necessary to restore sanity to society and reverse the corruption of science.
I don’t think you can infer much from 15-year trends anyway. Look at how much a running 15-year temperature varies compared to the much more stable 30-year temperature. The 30-year temperature has had a trend near 0.15 C per decade since about 1980, and yes, including the whole “hiatus”. The 15-year one is all over the place including 0.3 C per decade just 15 years ago (now forgotten). The 15 and 30 year trends in C/decade are plotted here.
http://www.woodfortrees.org/plot/gistemp/from:1900/mean:120/mean:240/derivative/scale:120/plot/gistemp/from:1900/mean:60/mean:120/derivative/scale:120
And these are the 15 and 30 year temperatures themselves. Somewhat upward at the end there. Tough to spot a “hiatus”.
http://www.woodfortrees.org/plot/gistemp/from:1900/mean:120/mean:240/plot/gistemp/from:1900/mean:60/mean:120
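Jim D’s 15-year vs 30-year point can be illustrated with a quick sketch. This is synthetic data, not GISTEMP: the 0.15 C/decade trend and the noise level are illustrative assumptions, chosen only to show how much more a short-window trend wanders than a long-window one even when the underlying trend never changes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly anomaly series: a steady 0.15 C/decade trend plus white
# noise, standing in for a GISTEMP-like record (values are illustrative).
months = np.arange(1200)              # 100 years of monthly data
trend_per_month = 0.15 / 120          # 0.15 C per decade
series = trend_per_month * months + rng.normal(0.0, 0.1, months.size)

def rolling_trend(y, window):
    """OLS slope in C/decade over every `window`-month span."""
    x = np.arange(window)
    return np.array([np.polyfit(x, y[i:i + window], 1)[0] * 120
                     for i in range(y.size - window + 1)])

t15 = rolling_trend(series, 15 * 12)
t30 = rolling_trend(series, 30 * 12)

# The 15-year trends scatter far more widely around the true 0.15 C/decade
# than the 30-year trends do, even though the trend itself never changed.
print(t15.std(), t30.std())
```

Real temperature series have autocorrelated variability (ENSO, volcanoes), so the spread of short-window trends is even larger than this white-noise sketch suggests.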
I hope you will apply the same logic when it comes to warming trends.
The 30-year temperature has a robust warming trend.
Jim D, using your 30 year trend, natural warming peaked at 0.15C per decade and natural plus manmade warming peaked at 0.20C per decade. Low end is -0.15C per decade. What does that say about manmade warming and the potential for a 0.0 hiatus? To me, a pretty good chance.
The other interpretation is that the early data is garbage.
+10
Jim D,
Once again, use the longest trend there is.
Earth’s creation to now. Result? Cooling.
If you don’t like it, pick a shorter trend that supports your point of view. Use Steve Mosher’s HistoryChanger ™, until you get what you want. So, how much warming do you really, really, want?
The net effect of adjustments is to COOL THE RECORD.
get that? raw data has a WARMER TREND.
Here is a clue. Any time you have to adjust a record you are changing yesterday. You are always adjusting the past, because the future ain’t here. Now, we could choose not to adjust the data, and then it would show a steeper rise in temperature.
What’s the net effect of this “pause busting” adjustment? Add it to the dozens of excuses for whatever is alleged to be offsetting the CO2 warming during the pause, and what we got? I will just take a guess: 1.35652 C. That’s probably too conservative. Now you, Mosh.
Steven Mosher,
It doesn’t matter what sort of graph you draw. Upwards, downwards, warming, cooling, red pencil or blue crayon. It has no effect whatsoever, on either the past or the future.
You cannot change yesterday. It has happened. It is past, gone, will not come again. Pretending that you can predict the future by “correcting” history achieves nothing. The present is no cooler, no warmer, no wetter or drier, than it is. Why bother pretending that you can change history?
I’m not sure what you are attempting to do. I’m happy enough to base my life on the assumption that no one can predict the future better than I, in any useful way.
So far, so good. How about you?
“What’s the net effect of this “pause busting” adjustment? Add it to the dozens of excuses for whatever is alleged to be offsetting the CO2 warming during the pause, and what we got? I will just take a guess: 1.35652 C. That’s probably too conservative. Now you, Mosh.”
#########################
I see nothing terribly important in any pause discussion. We are basically talking about minor tweaking of data, tweaking of models, polishing the bowling ball.
In 1896 we knew everything we needed to know to understand that a policy on CO2 emissions was going to be needed at some point.
The issue is comparison of observed temperature anomaly trends with climate model predictions. Even with new NOAA analysis, climate models still running substantially too hot
Steven Mosher | June 4, 2015 at 10:10 pm |
The net effect of adjustments is to COOL THE RECORD.
get that? raw data has a WARMER TREND.
https://stevengoddard.files.wordpress.com/2012/12/screenhunter_330-dec-06-22-18.jpg
http://davidpratt.info/climate/clim3-8.gif
https://www.climate.gov/sites/default/files/styles/inline_all/public/Fig2_UStrendSeries.png
This makes a great story. It isn’t true.
It isn’t hard to find NOAA plots of US temperature that were significantly warmer in the past and until recently
+100 PA.
That’s the thing Mosh et al just can’t appreciate. Constant nonsense data fiddling is just that – fiddling. It’s the creation of functional product, an alt-history.
I am getting the feeling this nonsense Karl paper is climate science jumping the shark. They’ve written a novel and called it a dataset.
Only in climate science.
curryja | June 5, 2015 at 10:24 am |
“The issue is comparison of observed temperature anomaly trends with climate model predictions. Even with new NOAA analysis, climate models still running substantially too hot”
I suppose it’s worth noting that this depends on how careful one is when comparing observations with models. If you’re really un-careful you’ll exacerbate the discrepancy – this is what most people do. Comparisons are much more nuanced than perhaps is often let on.
Robert Way, I note Ed Hawkins has said precisely what Prof Curry has said http://www.climate-lab-book.ac.uk/2015/global-temperature-comparisons/. It doesn’t seem to me there’s room for nuance, so referring to nuance just seems like yet more climate scientivist weasel-classing wiggle-room.
The 30-year temperature has had a trend near 0.15 C per decade since about 1980,
And the RCP4.5 is modeled to be 0.18C per decade, so the warming is less than the low end of the models – it should be because the forcing is at the low end of scenarios.
360 months to 2006 – .19C per decade
204 months to 2006 – .23C per decade
180 months to 2006 – .26C per decade
120 months to 2006 – .24C per decade
Any reason to stop at 2006 instead of considering the most recent 30 years?
Sure looks like some pausing going on and looks like rates less than low end model predictions.
You will find that 30-year trends starting in 1975 are the greatest, which stands to reason, since the cooling period of 1945 through 1975 ended that year.
But those trends have been decelerating ever since.
Because the system, despite your claim of weakening forcing, is quite capable of producing trends in excess of .2C and could easily do it again soon. If the PDO is in a positive phase, it could produce trends above .3C.
hang on TurbE aren’t we at RCP8.5?
The deceleration has pretty much stopped.
Some of this is b*llshit.
1. There is a strong natural sinusoidal oscillation that contaminates the record. Picking trend intervals of less than 65 years isn’t useful.
2. RCP8.5 was 9.71 in 2013 so we were ahead of RCP8.5 in 2013 and are now slightly behind it. From RCP 8.5:
2014 405.27595
2015 408.90146
Now, it is pretty obvious that this year’s rise of less than 2 PPM is not the 3.63 PPM rise expected by the IPCC, with fossil fuel emissions at almost the RCP8.5 expected amount. So something is going off the rails.
3. Fossil emissions doubled (increased 100%) since 1980. Only 21% of the post-1980 emissions increase is going into the atmosphere.
This is bad news for global warming. If only a tiny fraction of any emissions increase stays in the atmosphere, we will run out of fossil fuel long before we significantly increase the CO2 level in PPM, and 480 PPM seems to be the upper limit.
Between that and the 0.2 W for 22 PPM forcing level, the global warmers have a problem. Past warming has been 0.89 W/m2, or about 0.24°C. The future worst case CO2 level of 480 PPM means only 0.64 W/m2 more downwelling IR, or about 0.17°C of further warming.
So it is going to be interesting. For the rest of the century it really depends on nature whether it gets warmer or cooler.
JCH | June 6, 2015 at 12:58 am |
The deceleration has pretty much stopped.
The warmunists have a few challenges ahead and the view that temperatures have only paused and will soar off into oblivion is unrealistic and misinformed.
JCH,
Because the system, despite your claim of weakening forcing, is quite capable of producing trends in excess of .2C and could easily do it again soon.
Perhaps, but your links don’t support that. The maximum 30-year trend you can find is 0.19C/decade, not in excess.
Here is what the thirty year trends look like:
http://climatewatcher.webs.com/Trends30.png
Close, but no cigar and decelerating.
This wouldn’t be such a big deal, I guess, if the IPCC hadn’t predicted in the AR4 that we’d see 0.2C per decade for all scenarios.
Oops.
hidethedecline:
hang on TurbE aren’t we at RCP8.5?
Emissions were at the high end, but GHG forcing is not.
If you go to the NOAA GHG index, get the values and shift them to the RCP scenarios’ 2000 values (because they use different baselines), the chart below is what you get. The observed (purple) is lower than all the RCPs:
http://climatewatcher.webs.com/NOAA_RCP.png
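The re-baselining step described here is generic: subtract each series’ value in a common reference year so that only changes since that year are compared. A minimal sketch with invented numbers (not the actual NOAA AGGI or RCP values):

```python
# Two forcing series on different baselines can be compared by shifting
# each so its value in a common reference year (here 2000) is zero.
# All numbers below are invented, for illustration only.
obs = {1998: 2.58, 2000: 2.65, 2010: 2.99}   # a NOAA-AGGI-like series
rcp = {1998: 2.10, 2000: 2.20, 2010: 2.62}   # an RCP-like series

def rebaseline(series, ref_year=2000):
    """Shift a {year: value} series so it reads zero in ref_year."""
    ref = series[ref_year]
    return {yr: val - ref for yr, val in series.items()}

obs_shifted = rebaseline(obs)
rcp_shifted = rebaseline(rcp)

# After shifting, both series are zero in 2000, and only their changes
# since 2000 are compared -- the arbitrary baseline offset drops out.
print(obs_shifted[2010], rcp_shifted[2010])
```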
So evidence is mounting that the pause never existed and was just noise in the data.
This can also be seen in NASA data
https://tamino.files.wordpress.com/2015/05/gissmovave.jpeg
And HadSST data is demonstrating why it’s a foolish idea to expect short-term trends to last in any meaningful way
http://www.woodfortrees.org/plot/hadsst3gl
Nebakhet,
Perhaps you could explain, specifically, why it makes sense to adjust the buoy temperatures up to match ship observations, when the buoys are more modern, and designed to measure temps, and ship measurements are a hodgepodge of different methods, and are in no way designed for the purpose of measuring global temps?
Rather than beating on skeptics, who are simply doing their part to vet the “science” that is highly politicized, and should have already been vetted by peer review – how about you simply explain why these particular adjustments make sense and are appropriate?
http://www.giss.nasa.gov/research/briefs/hansen_07/fig1x.gif
Your global temperature is about 0.24°C Computer Generated Anthropomorphic Global Warming (CGAGW). Eliminating CO2 does not stop CGAGW.
Further – if you remove CGAGW it does not appear possible to hit the 2°C target in 2100 so we might as well “burn baby burn” fossil fuels since we will run out of fossil fuel before 2100 and can’t cause dangerous warming. And besides – since CO2 didn’t cause CGAGW, eliminating CO2 emissions won’t stop CGAGW.
This again from the simple mind of a retired high school chem/physics teacher
First thing I saw was the name Tom Karl. Oh no, the same Tom Karl that made the homogenized adjustments, that paid much attention to time of day, change in instrumentation and more, and at the same time paid almost no attention to the continual corruption by an ever increasing heat island effect. But rather, found a way to make it a non-factor.
I looked to see if there was any kind of peer review of the methodology put forth by Tom Karl. I could not find it. It seemed to just materialize. However, I am certainly not aware of all proceedings.
Folks, do you ever check old records which get changed, then changed, and then changed again? Old temps lowered, new ones raised.
The NOAA seems to be at the forefront of temp change artistry.
Recently Roy Spencer had a little piece titled (essentially) “It warms even when it doesn’t warm.” In it he showed the NOAA change of record by a factor of three in the rate of U.S. corn belt warming. That change of record was from 2014 to 2015. I just do not understand how this continual change, and then change to the change to the change, can continue.
What am I missing? At the same time, much of the opaqueness of some procedures is totally transparent. Note here the usual names: Michael Mann, Trenberth et al.
I permanently lost respect for the likes of Trenberth and Dessler with their treatment of Roy Spencer regarding a withdrawn paper. Emphasizing here the treatment of Spencer, not of the paper.
Would it not be a blessing if the scientific community could determine that there was minimal danger in AGW? In fact, added CO2 might very well be one of the greatest blessings of the century. Imagine, an increase in one of the sustainers of life on this planet.
Then again, how many thousands would not be getting grants? How many thousands might be out of jobs?
On both sides of the issue, the focus should be the citizens of the world and not the climate community.
At one time I thought scientists were the sole source of that which is undeniably honorable. Foolish me. There may be some elements of accuracy in this paper. However, it is so obvious that it is an overreach, and that the procedure is to find some way, any way, to torture and manipulate data and scientific procedure to come up with a desired result, regardless of whether it is a correct result. A mindset akin to the hockey stick eliminating the Medieval Warm Period.
To say it much better: ‘Figures don’t lie, but liars figure.’
+1
That just about covers it.
The climate’s been shot. Round up all the usual suspects.
Yer must remember this.The fundamental things apply
as time goes by.
They’ll always have Paris.
Paree, Paree! Sigh, but another bitter memoree.
We are shocked, that this is going on in the climate science. Shocked! Well, if not for Climategate we would be.
+1 to all y’all
This could have been the start of a beautiful correlation.
Pointman
:)
I’m not much good at being noble, but it’s clear that a few degrees or inches don’t amount to a hill of beans in this crazy Holocene.
Reblogged this on This Got My Attention and commented:
Color me unconvinced, too.
Reblogged this on Climate Collections.
Dr. Bouldin took a look at the data:
…“So that’s 0.133 vs 0.086, which is a ratio of about 1.55. If one picks the year 2000 as the demarcation point between the two periods, then it’s a little trickier because Karl et al don’t actually give an estimated rate from 2000 to 2012. But one can estimate it by assuming the rate difference between 1998 to 2014, and 2000 to 2014, both of which they do give, is a decent approximation of the difference between 1998-2012 and 2000-2012. When I do so, I get a ratio of warming rates between the two periods (1951-1999 and 2000-2012) that’s very similar: 1.47. Taking the inverses, the rate of warming between the slowdown period and the pre-slowdown period is about 2/3. Given the large number of data points (grid boxes) that make up each year’s mean value, it has to be essentially certain that, according to these data, there has in fact been a very highly statistically significant difference in warming rates between these two periods, regardless of whether you use 1998 or 2000 as the breakpoint year.”…
https://ecologicallyoriented.wordpress.com/2015/06/05/did-the-rate-of-global-warming-decline-around-year-2000-or-not/
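The division in the quoted passage is easy to check, using only the two rates as quoted; this is arithmetic, not a re-analysis:

```python
# Rates (C/decade) exactly as quoted from the excerpt above.
rate_pre_slowdown = 0.133    # 1951-1999
rate_slowdown     = 0.086    # post-breakpoint period

ratio = rate_pre_slowdown / rate_slowdown
print(round(ratio, 2))       # -> 1.55, matching the quoted figure

# The inverse: the slowdown-period rate is roughly 2/3 of the earlier rate.
print(round(1 / ratio, 2))   # -> 0.65
```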
It is very important not to come on board the Karl adjustment ship after the ship’s initial departure.
The data used in the aforementioned was itself homogenized.
I have over 50 articles from the 1970’s warning of the next ice age.
The most common line to be found: ‘There is a complete consensus (where have I heard that word recently?) among the world’s leading scientists that the earth will be as much as –fill in the blank— (one example: six degrees Celsius) colder by the year 2000.’
Included were graphs, more graphs, and more graphs yet.
They all showed cooling from about the early 1940’s to the late 1970’s in various amounts.
But Tom Karl was a leader in making that all disappear. As I mentioned before, for the fun of it check NASA/GISS temperature records for any one location, recorded say every two years. Then plot the temperature change on the ordinate as a function of the years you checked.
Raw U.S. temperatures in the 1930’s were much higher than today, until Karl and his gang waved their homogenization wand. Somehow my parents would have thought very little of Karl.
(So far I only have access to U.S. records.)
—And if we had drought now like we had then, there would be AGW hysterics. Someone might decide to write a book entitled ‘THE GRAPES OF WRATH’.
“I have over 50 articles from the 1970’s warning of the next ice age”
newspaper articles.
The Myth of the 1970s Global Cooling Scientific Consensus
http://journals.ametsoc.org/doi/pdf/10.1175/2008BAMS2370.1
“Raw U.S. temperatures in the 1930’s, were much higher than today, until Karl and his gang waved their homogenization wand”
Look if you climate skeptics are too lazy to process the data yourselves that’s your problem. You deny BEST, deny NOAA, deny Met Office, deny NASA. All efforts to process the weather data into a global temperature record you deny.
Remind me, why should we care what you say? We ignore people who conspiracy theorize about tower 7. How are you guys any different? You bring no data, no analysis, just conspiracy theories.
nebakhet,
There are the Seebeck effect, the Peltier effect, the Hall effect, amongst others. They have been measured and provide useful, predictable results.
On the other hand, there is the non existent “greenhouse effect”. No measurable effect, no use, and no predictable results.
I’m sure you have the ability to completely believe the non existent. I can offer you a wide range of non existent things to believe in, if you wish. From phlogiston, the luminiferous ether, zero point energy, and orgone energy to n Rays. Free of charge, of course.
Why stop at the greenhouse effect?
greenhouse effect is 100% proven science. Science is settled. Check the textbooks.
What Karl et al did is adjust the more accurate buoy temperature data to match the “bucket” method of measuring temperature, which has an error of +/- 1.7C, and accept a statistical significance threshold of 0.1 instead of the generally accepted 0.05. How is this even remotely scientific?
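Whatever one thinks of the merits, it helps to separate the direction of the offset from its effect on the trend. Below is a toy sketch, not NOAA’s actual procedure or data: the 0.12 C offset, noise levels, and observing-mix schedule are all invented for illustration. It shows why some collocation offset is needed when the ship/buoy mix changes over time, and why choosing ships rather than buoys as the reference shifts only the level of the blend, not its trend:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy illustration of a generic ship/buoy bias step -- not Karl et al.'s
# code or numbers. Ships read systematically warm relative to buoys, and
# the observing mix shifts from ships toward buoys over time, which by
# itself imprints a spurious cooling unless the platform offset is removed.
n = 240                                              # 20 years, monthly
true_sst = np.linspace(0.0, 0.3, n)                  # steady warming
ship = true_sst + 0.12 + rng.normal(0, 0.05, n)      # warm-biased, noisier
buoy = true_sst + rng.normal(0, 0.02, n)

w_buoy = np.linspace(0.1, 0.9, n)                    # buoy share grows

# Estimate the mean ship-minus-buoy difference, then put both platforms on
# a common reference before blending them with the time-varying weights.
offset = (ship - buoy).mean()
blended_raw = w_buoy * buoy + (1 - w_buoy) * ship
blended_adj = w_buoy * (buoy + offset) + (1 - w_buoy) * ship

def trend(y):
    return np.polyfit(np.arange(y.size), y, 1)[0] * 120   # C/decade

# The unadjusted blend under-reports the true trend; the adjusted blend
# recovers it. Adding the offset to the buoys or subtracting it from the
# ships differ only by a constant, so the blended trend is identical.
print(trend(blended_raw), trend(blended_adj), trend(true_sst))
```

The last point follows algebraically: `w*(b+o) + (1-w)*s` and `w*b + (1-w)*(s-o)` differ by the constant `o` at every time step, so the level changes but the slope does not.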
Looks like he’s going to take another look: “Update, 6-6-15: I’m revising this post based on an issue that Gavin Schmidt of NASA made me aware of on Twitter, involving the averaging of trends. It will change the results I’ve given below. Stay tuned.”
“Kevin Trenberth, climate analysis chief at the National Center for Atmospheric Research, said the new work was “good and careful analysis” but only confirms what most scientists already knew, that there was no such hiatus.”
Trenberth’s Travesty was fiction?
Wonderful observation. Apparently warmunists have memories equivalent to those of Alzheimer’s patients. They forget that the web and the Wayback Machine do not.
It gets to be more fun as Time Goes By (thanks, PBS and BBC, for the human comedy, NOT your CAGW scare).
Maybe you are the one suffering an early onset mental illness, given that what Trenberth says now about a pause in surface temperature and what he said years ago about the heat content of the oceans are not in contradiction.
Pingback: NOAA tries to make global warming slowdown vanish | Tallbloke's Talkshop
How do you ascertain which dataset or model is better/more skillful?
Predictive ability? The metrics used are themselves a product of the model…(?) Men formed in a circle it might seem.
When the UAH satellite temperature record was adjusted recently, warmists accepted such adjustment were a valid part of the scientific process, because we are pro-science and rational people.
When the NOAA record is adjusted, skeptics go nuclear, spewing bile and conspiracy theories without any focused argument.
Climate skeptics have steadfastly refused for years to engage with the scientific process, get to the heart of the matter, and process the weather data themselves into a global temperature record, yet have expended massive amounts of time attacking anyone who does. Nor do climate skeptics submit any arguments against the adjustments to peer review. One can only conclude that climate skeptics WANT the issue of adjustments, from their ignorant point of view, to remain a mystery. They get lots of play out of that. If they were to do the work and discover the warming was genuine, they’d lose some of their conspiracy theories.
Climate skeptics aren’t even consistent about whether they accept the records in the first place. They will lean on records like the Met Office record to claim there is a pause or no statistically significant warming for X years, implying that they accept those records are good enough to draw conclusions about the trend of the earth’s warming, but then in the next moment they are telling us the records are manipulated and cannot be trusted.
It seems skeptics’ acceptance of a record is proportional to how convenient it is for their argument. With 2014 and now 2015 breaking new highs, and questioning of whether the pause was just noise in a continued long-term trend, I notice skeptics are falling back to a form of pure denial of the records themselves.
Nice spewing!
nebakhet,
I am unaware of anyone who denies the existence of climate. Maybe you could name one or two.
I am also unaware of anyone who refuses to accept that the Earth has cooled since it was created, but I accept that it is impossible to prove that the Earth was created a second ago.
Whoops! Fat finger again!
To continue –
So four and a half billion years of greenhouse effect trapping or accumulating heat, due to GHGs in the atmosphere, has failed to stop the Earth cooling to its present temperature.
Of course it’s settled science. It is settled that the greenhouse effect has never been demonstrated or experimentally verified.
It’s also settled that you cannot provide a single specific instance of humanity having benefitted from climatology (or phrenology, for that matter, but I might be wrong about phrenology).
More hand waving, and unsubstantiated assertions.
Have you anything useful to offer?
“better have a good climatology to predict how much snow you expect to get.”
This is climate prediction based on historical trends, and is every bit as good as stock market forecasting.
Climate Projection (inference from first physical principles) = nothing at all.
> This is climate prediction based on historical trends, and is every bit as good as stock market forecasting.
The converse might not be true.
Steven Mosher,
I suppose you refer to the benefits which accrued to the operators of Heathrow Airport. They sold much of their snow clearing equipment on the finest climatological advice that snow was rapidly becoming a thing of the past.
They received a severe lesson about the stupidity of listening to climatologists’ forecasts, after record snowfalls caught them with their clever pants down.
I am glad you use climatological forecasts for sales forecasting and inventory control. I hope you aren’t confusing weather forecasts with climate forecasts. As you have said, in your company you revise history on a moment-by-moment basis. Sales history, inventory history, weather history – you can just revise it all.
No one can forecast the climate better than I. My forecasts of both weather and climate in the past have proven 100% accurate.
Maybe all the Californian walnut farmers should have asked for the climatological predictions of drought in their specific areas? Or are you saying you can only make useful predictions after the event.
Your comment about consulting a climatologist before building a new ski resort is just bizarre. Who would actually put money on a climatological forecast? Companies waste money on such follies to demonstrate their green credentials, beg for Government handouts, or lull investors into a false sense of confidence.
I’m not interested in asking Judith about her business, any more than I would ask a drug company about its reasons for selling products of doubtful benefit. I don’t use either product, but what others choose to do is their business.
You made a statement that “we can fix your stupidity”, and I presume you are using the royal “we”, as befits a supremely intelligent and august personage such as yourself.
You are not concerned you may have made a foolish statement, are you? How do you intend to fix my stupidity? A magic spell, perhaps? Possibly a Climatological Intervention ™?
You still haven’t provided a single instance of a documented, measurable, benefit of climatology to mankind. Surely there must be at least one! Even astrology gets things right once in a while.
Sorry Steven, but it still doesn’t appear that climatology is useful. At least belief in climatology doesn’t seem to cause the believer any physical harm!
People like you are the ones in denial about the abysmal failure of the models to predict the influence of CO2 levels on global temperatures. The CAGW hypothesis (not scientific fact, as you warmists like to assert) has not withstood the scientific method. Actual observations have a stubborn way of disproving flawed models and hypotheses. It is laughable the way warmists are now trying to blame every drought, heavy rain, hurricane (or lack thereof), harsh winter, mild winter and every other natural weather/climate event on CO2 increases. Your song is getting old. The temperatures are not going up, so “global warming” is now “climate change.”
“Globalclimatewarmingchange” – the perfect liberal cause, anything can be blamed on it. It can therefore be used to justify massive goblement intervention to save the masses at their expense while enriching the elite for saving the masses from themselves.
“The temperatures are not going up”
get a clue
https://tamino.files.wordpress.com/2015/05/gissmovave.jpeg
“so “global warming” is now “climate change.””
IPCC
get a clue
Neb, it is amusing that on a thread about the inaccuracy of surface statistical models you start off by offering one. According to UAH, which actually measures the atmosphere, that GISS graph is completely wrong. It did not warm 1978-1997, nor 2000-now. The only warming was a small (0.16 degrees C) step up coincident with the giant ENSO cycle.
On sea level, my conjecture is that a straight line is more or less accurate for the last 10,000 years or so.
The sea heat graph is hugely less accurate than the surface models because it is 3D, not 2D, and it uses a far sparser convenience sample.
You have simply missed the point, which is that the Karl adjustment indicates that the surface statistical models are probably junk.
David,
If you want to measure the greenhouse effect, you need to measure the surface temperature, not what is measured by the RSS and UAH datasets.
They may be more accurate, but they are less relevant to the question at hand.
Bob, the greenhouse effect does not happen at the surface so why would I want to measure it there?
bobdroege,
Maybe you should tell that to the people that measure anything but the surface temperature? That is, they don’t actually measure the temperature of the surface. They just pretend to.
Just like the greenhouse effect. Apparently nothing to do with greenhouses, but they didn’t think anybody would notice.
Silly!
“Bob, the greenhouse effect does not happen at the surface so why would I want to measure it there?”
huh?
Read the science. “The” greenhouse effect happens in the oceans, in the ice, in SAT, in the stratosphere. “The” effect is not simple.
The effects of the greenhouse effect are maximum at the surface, so that is where you want to measure it.
If it causes cooling in the stratosphere, you don’t want to measure it there, and at the surface and then average the numbers, that is sure lunacy.
I’ll take a decent modeled temperature of the surface, take a bunch of bi-metallic resistance readings at a thousand various locations, take the anomalies of each location and average, or something like that.
http://sealevel.colorado.edu/files/2015_rel2/sl_ns_global.png
https://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/heat_content55-07.png
How naive do you have to be to believe global warming isn’t happening?
http://www.realclimate.org/images//haysl13.jpg
How naive do you have to be to think man is causing it?
Also – since the satellites are clearly skewed against the tidal gauge trend, the satellite algorithms apparently are going to have to be fixed by legislation, since the satellite analysis teams aren’t smart enough to fix their algorithms on their own.
Neb
Adjustments are to climate science what steroids were to sports. If you can’t win on your own, get a little help from your friends. It is all a pitiful sign of desperation.
Horsepoop.
The pause happening and unhappening… so tough for its true believers. But the AMO is coming!
You forgot to mention Koch Bros. Bush, the tea party and Faux news…oh ans Sarah Palin
nebakhet | June 5, 2015 at 6:27 am | Reply
When the UAH satellite temperature record was adjusted recently, warmists accepted such adjustment were a valid part of the scientific process, because we are pro-science and rational people.
When the NOAA record is adjusted, skeptics go nuclear, spewing bile and conspiracy theories without any focused argument.
This is a warmist fantasy.
The numerous and particularly vicious attacks of the warmunists against any study, data, or scientist who casts the slightest doubt on global warming orthodoxy is legendary.
http://dailycaller.com/2014/03/17/u-s-college-professor-demands-imprisonment-for-climate-change-deniers/
http://scienceblogs.com/illconsidered/2008/11/hansen-wants-skeptics-in-jail/
http://www.weeklystandard.com/blogs/sen-whitehouse-d-ri-suggests-using-rico-laws-global-warming-skeptics_963007.html
The constant calls to jail skeptics by warmunists are offensive to any honest freedom loving American. The time has come to terminate the several $ 10s of billions in annual climate change funding until the advocates learn to operate by the rules of fair and honest debate in a free society.
We have all these warmer scientists publishing papers explaining the pause and here is Tom Karl saying it never happened. This will put the warmers into disarray for sure. Are they suddenly to abandon their statistical base?
Karl by the way is the top proponent of a NOAA Climate Service to match the Weather Service, which he would head of course. His is a bold move but it may backfire. He also chairs the USGCRP committee.
I have long said that the surface numbers are inaccurate, so now comes King Karl saying the same thing, with the ultimate adjustment. It does not get much better than this.
Reblogged this on Quixotes Last Stand.
Reblogged this on ajmarciniak.
Pingback: Has NOAA ‘busted’ the pause in global warming? | ajmarciniak
Pingback: No “pause”? | …and Then There's Physics
Over at Realclimate Gavin writes: “The ‘hiatus’ is so fragile that even those small changes make it disappear”.
I posted a comment there (which I don’t really expect to pass moderation):
How is that statement different from the “denier” viewpoint that any warming over the past two decades is so fragile that just small changes make it disappear? This looks like an admission of Curry’s uncertainty argument to me.
Indeed, these statistical models are very sensitive to small assumptions, which makes them worthless. Should all the papers purporting to explain the hiatus now be retracted for using erroneous data? Should the papers now in the publication pipeline be rejected and the grants terminated? The data is adrift!
“How is that statement different from the “denier” viewpoint that any warming over the past two decades is so fragile that just small changes make it disappear? This looks like an admission of Curry’s uncertainty argument to me.”
Not really. The statement differs in that Gavin’s is true and yours is not.
Gavin’s statement isn’t true, though, is it Mosher. Think harder mate.
The existence of the pause is not robust with respect to varying methods and data. It may be in a few years. Patience.
The existence of a longer warming trend is robust to a wide variety of assumptions about data and methods.
People keep making the same mistake on both sides: isolating specific data sets and focusing on one or two methods.
5 more years..
Mosher, thanks for replying, but 5 more years? The existence of the pause not being robust in the face of varying climate science ‘methods’? Back to the ‘long term trend’, so long as it starts from a cherry-picked nice cold year?
Come on mate. Talk about moving the goalposts.
Climate science says that with as much CO2 about as there is these days we should be much warmer than we are, and we should have been warming at 0.2 degrees centigrade per decade. The pause shows climate science models run hot. It is what it is. After 5 more years of the same, climate science is not robust to the existence of the pause.
5 more years of Karl-quality data fiddling, Schmidt dissembling and persistent NOAA ‘adjustments’, and the climate scientists will be pointing at their stupid graphs and telling us The Grapes of Wrath was never written and there was no dustbowl.
Alt history is a crappy fictional genre. Get out of it, mate.
Steven, how much longer, roughly, is the distance from A to B than the distance from B to A?
Climate science says that with as much CO2 about as there is these days we should be much warmer than we are and we should have been warming at 0.2 degrees centigrade per decade. …
0.2C per decade starting at what date?
“The statement differs in that gavins is true and yours is not”
Gavin showed nothing. Gavin has no std deviation to argue from, nor a std deviation demonstrated to have decreased as 1/SQRT(# of measurements).
> This looks like an admission of Curry’s uncertainty argument to me.
What would be Judy’s argument, Steinar?
Willard, Judy is in my opinion one of the few in this debate who dare to say aloud that uncertainty exists and is a problem. I think Gavin fears uncertainty.
Your probing of Gavin’s mind and your praise of Judy are duly acknowledged, Steinar. However, I reject both as ways to formulate Judy’s argument.
Thank you for your concerns.
Steinar Midtskogen | June 5, 2015 at 7:17 am | Reply
Over at Realclimate Gavin writes: “The ‘hiatus’ is so fragile that even those small changes make it disappear”.
http://www.climate4you.com/images/GISS%20Jan1910%20and%20Jan2000.gif
35% (100*0.24°C/0.69°C ≈ 35%) of warming since 1910 is Computer Generated Anthropomorphic Global Warming (CGAGW). This technically is machine made global warming, not man made global warming and should be noted on all charts where it is included. Note: CGAGW is not caused by CO2 increases and eliminating CO2 emissions will have no effect on CGAGW. CGAGW is caused by computers and can only be stopped by defunding the department involved, scrapping the hardware, and firing the personnel.
http://data.giss.nasa.gov/gistemp/graphs_v3/
0.24°C or 35% is significant. It is about the same as the CO2 contribution to global warming since 1940 which is 0.24°C or about 35%. According to GISS the post 1940 warming is 0.8°C if by land and 0.5°C if by land and sea.
http://wattsupwiththat.com/2012/12/07/a-brief-history-of-atmospheric-carbon-dioxide-record-breaking/
It should be noted that fossil fuel emissions didn’t catch up to the rise in CO2 until 1960 (over 1/2 of fossil fuel emissions came after 1985). So it is unclear when we should really start blaming man for causing warming.
The tendency of government scientists to alter historic data makes attribution among anthropogenic warming, natural warming, and natural cycles difficult, since the constantly changing CGAGW adjustment has to be removed first. Here is how much GISS has distorted data since 2008:
http://www.climate4you.com/images/GISS%20MaturityDiagramSince20080517.gif
> CGAGW is caused by computers
More precisely, CAGW is created by contrarians. That they used computers to create this strawman may only be a correlation.
So much dissembling, isn’t there PA?
Gavin Schmidt writes:
“…if by ‘hiatus’ or ‘pause’ people mean a change to the long-term trends…”
and
“If people use ‘slowdown’ to simply point to a short-term linear trend that is lower than the long-term trend…”
People use ‘pause’ to identify the period during which there has been an absence of the warming that IPCC, CMIP5 models and RCP8.5 scenario told us would occur when CO2 emissions reached the levels in that RCP8.5 scenario. The CO2 levels have been reached but not the warming. Ergo pause.
Schmidt:
“Thus I predict that while contrarians will continue to bleat about this topic (the Pause, as far as Ragnaar can tell), scientific effort on this will slow because what remains to be explained is basically now well within the bounds of what might be expected.”
See more at: http://www.realclimate.org/index.php/archives/2015/06/noaa-temperature-record-updates-and-the-hiatus/#more-18571
What remains to be explained is basically now well within the bounds of what might be expected. Two qualifications. Basically and might.
What remains to be explained is now well within the bounds of what is expected.
He did not say the immediate above. He bounds things, such as 1.5 C to 4.5 C for a doubling. It’s as if I tell you that you have from $1.5 million to $4.5 million in your bank account. You can now find a financial planner and use those numbers, and here’s my bill. I think we had a very good meeting. Thank you.
“Paris is a coming,
And so are the lies,
No warming pause,
Now there’s a surprise!
The weather recorded
Doesn’t fit with the plan;
Adjust temperature records,
Keep blaming man….”
Read more from:
http://rhymeafterrhyme.net/no-warming-pause-now-theres-a-surprise/
Mosher: “Technically It doesnt matter whether you adjust the recent past or the deep past.”
What you’re suggesting is that these important adjustments are just Squiggology.
How predictable.
Andrew
No. I am making a math point.
You have a scale. You weigh yourself every morning, and every morning you weigh 200 lbs.
You buy a second scale. It measures 202. For the next 3 months you measure with both scales. One says 200, the other says 202.
Now I ask you: please estimate your change in weight over the last year.
Your data looks like this
Scale 1: 200, 200, 200, 200, 200, 200, 200, 200, 200, NA, NA, NA
Scale 2: NA, NA, NA, NA, NA, NA, 202, 202, 202, 202, 202, 202
Do you
1. Average the two?
2. Adjust 200 to 202
3. Adjust 202 to 200.
Note that option 1 gives the wrong answer.
Note that the trend doesn’t care whether you use #2 or #3.
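The math point can be sketched in a few lines of Python (a toy illustration using the hypothetical 200 lb / 202 lb readings above; the helper and variable names are made up, not from the thread):

```python
def trend(series):
    """Ordinary least-squares slope of the series against its time index."""
    n = len(series)
    mean_x = (n - 1) / 2
    mean_y = sum(series) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(series))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

scale1 = [200.0] * 9            # old scale, months 0-8
scale2 = [202.0] * 6            # new scale, months 6-11
offset = 2.0                    # bias estimated from the 3-month overlap

# Option 2: adjust the old readings up to the new scale.
merged_up = [x + offset for x in scale1[:6]] + scale2
# Option 3: adjust the new readings down to the old scale.
merged_down = scale1[:6] + [x - offset for x in scale2]

# The two merged series differ only by a constant, so their slopes are equal.
print(trend(merged_up), trend(merged_down))  # 0.0 0.0 for these flat readings
```

Option 1 (averaging the overlapping readings without any adjustment) would splice a spurious 2 lb step into the series, which is the “wrong answer” referred to above.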
I’m not sure why you feel compelled to adjust data, even when weighing yourself. Why not let the data be the data?
Oh… wait…
Andrew
Bad analogy. It should look like this.
You have no scale. you look at old pictures of yourself taken infrequently, over many years, none of which shows your whole body.
You weigh 100 lbs.
You buy your first scale. It measures 100.01 for the next 3 months. But it can only weigh one leg, half an arm and a third of your torso.
Now I ask you: Please estimate your change in weight over the last year
Note that the trend doesn’t care because you have no real measurements of your total weight, just guesses and extrapolations.
Would you give control over your and your family’s entire lives to a government bureaucracy because they think you might have gained .01 pounds in the last year?
No?
DENIER!
The actual criticism is that Karl, et al., chose to adjust the more-reliable buoy data because it was an increasing-over-time proportion of all measurements (which are, to a certain extent, “averaged” to produce global data). So rebutting the criticism is not as simple as your analogy suggests.
The above comment was for Steven Mosher, not Gary M, in case that wasn’t clear.
“I’m not sure why you feel compelled to adjust data, even when weighing yourself. Why not let the data be the data?
Oh… wait…
Andrew
####################
Nothing is done to the data. it remains the data.
The question has your weight changed?
If you change the method of observation, you have to account for a bias.
Suppose I had a thermometer outside a city. It reads 70.
Then I move it into the city. It reads 75.
With your approach to data, that would be a valid trend.
“The actual criticism is that Karl, et al., chose to adjust the more-reliable buoy data because it was an increasing-over-time proportion of all measurements (which are, to a certain extent, “averaged” to produce global data). So rebutting the criticism is not as simple as your analogy suggests.”
WRONG.
Mathematically, for trend analysis it doesn’t matter. There may be PRACTICAL reasons for adjusting the deep past as opposed to the recent past. That is what Karl does with land records. And of course folks criticize him for doing that.
So with the land records they adjust the deep past and people bitch.
With SST they adjust the recent past and people bitch.
The actual difference between the methods is zero. Zip. Nada.
the issue with the buoy adjustment is this: the offset is 0.12C +/- 1.7C. This is not a clean adjustment. Adjusting a relatively clean dataset (buoys) with this very uncertain adjustment seems a strong way to muck up your data set, beyond the original quality of the data
“You have no scale. you look at old pictures of yourself taken infrequently, over many years, none of which shows your whole body.
you weigh 100 lbs. .
you buy your first scale. It measures 100.01 for the next 3months. But it can only weigh one leg, half an arm and a third of your torso.”
#################
You have measurements taken from ships. They are temperature measurements.
You have OVERLAPPING COLLOCATED measurements with buoys.
They are temp measurements.
I don’t like the fact that we may have to pay more taxes because of global warming. Perhaps the skeptics can save us from these taxes.
To DO THAT, they have to have smarter people than you giving smarter arguments. You have an opportunity to make smart arguments. Follow Nic Lewis and do what he does. At least people in power actually listen to him. Why? Because he doesn’t make dumb arguments like you do.
Your friend Ross is a smart guy, Steven. He said this:
“A. Looking at the first adjustment, K15 take the buoy data and add 0.12 oC to each observation. They computed that number by looking at places where both buoy data and ship data were collected in the same places, and they found the ship data on average was warmer by 0.12 oC. So they added that to the buoy data. This is similar to the estimate found by other teams, though the bias is usually attributed to ships rather than buoys:
Recent SST observations are conducted primarily by drifting buoys deployed in the global oceans (Figs. 1, 2). The buoys measure SST directly without moving seawater onto deck or to the inside of a ship. Therefore, buoy observations are thought to be more accurate than either bucket or ERI data… In the present study, we regard this difference as a bias in the ERI measurements, and no biases in drifting buoy observations are assumed. The mean ERI bias of +0.13 oC is obtained and is within the range for the global region listed in Table 5 of
Kennedy et al. (2011).
(quote from Hirahari et al. 2014 p. 61)
That quote refers to a paper by Kennedy et al. (2011 Table 5)[5] which reports a mean bias of +0.12 oC. However, Kennedy et al. also note that the estimate is very uncertain: it is 0.12 ± 1.7 oC! Also, the bias varies by region. This is a key difference between the method of K15 and that of others. K15 added 0.12 oC to all buoy data, but the Hadley group and the Hirahari group use region-specific adjustments.”
Can you tell us if in your opinion adding the .12C to the buoys is justified?
should be:
“…also note that the estimate is very uncertain: it is 0.12 + – 1.7C !”
Steven Mosher,
The trend doesn’t matter. Neither does your slightly odd and irrelevant analogy.
If I attempt to measure the air temperature today, at a particular location with any accuracy or precision, I will be indulging in a fairly pointless exercise.
As Tyndall pointed out over 100 years ago, “To determine the true temperature of the air is, it may be remarked, a task of some difficulty: . . .” Tyndall practised the English habit of understatement quite well.
You are probably not measuring the temperature of the air. Even if you manage to do this, the air is in constant motion. Worse, it is transparent. The air you measured 5 minutes ago may not be the air you measure 5 minutes hence. The environment has most certainly changed in the interim, and assuming that measurements made at regular intervals will ensure the identical environmental conditions for each measurement is not rational.
Averaging anything is fraught with potential silliness. Average car colour? Average telephone number? Average weather at an average locality?
It is becoming apparent to real scientists that supposed temperature measurements from buoys, ships, land based thermometers, satellites, etc., may be sufficiently imprecise to be worthless in relation to past conditions.
So you might as well invent past temperatures. There are worse ways of wasting time, effort and money.
Steven Mosher:
You do not seem to grasp the point of the criticism. It is an accusation that Karl, et al., chose to adjust the “good” data upwards (rather than the “bad” data downwards) with the intention of pushing future trends higher. Whether this is the actual effect of their adjustment process (buoys vs boats) may be debatable but you are not debating it. The critics are referring to this specific case — not a generic “equal choices” hypothesis.
Earlier in the comments you said, in response to a claim the warming trend was increased by Karl, et al.’s, methodology:
Yet when I checked that table I found what appears to be the opposite of your assertion:
Table S1. Trends of temperature (°C/decade)
1880-2014 Global: New 0.068 +/- 0.017; Old 0.065 +/- 0.015
2000-2014 Global: New 0.116 +/- 0.067; Old 0.066 +/- 0.076
As I understand it, the paper’s buoy adjustments impact ERSSTv4. In the noted table, that would apply to the ocean trends, which also show a significant increase in warming trends over ERSSTv3 in recent years (and potentially confirm the critics’ suspicions):
1880-2014 Ocean: New 0.055 +/- 0.017; Old 0.055 +/- 0.015
2000-2014 Ocean: New 0.099 +/- 0.078; Old 0.036 +/- 0.097
http://www.sciencemag.org/content/suppl/2015/06/03/science.aaa5632.DC1/Karl-SM.pdf
Mosher doesn’t want to address the substantive criticisms of the Karl BS. Very disappointing, Steven.
0.12C+/-1.7C
A perfect metaphor for the goofiness that poses as normalcy in a closed system, a system so closed that brains have become oxygen deprived. Strange things happen. Sort of like being in a life raft for days unprotected from the sun.
And they call this science.
Close enough for gubmint work, but could get the perps jailed in the real world.
Hello,
Regarding the 0.12degC number. In our table 5, we presented results from matches between individual ship and buoy observations. Globally, there were 21,870 matches with a mean SST difference of 0.12degC and a standard deviation of 0.85degC. This gives a standard error of 0.01degC to two decimal places.
The question is, what is the appropriate uncertainty – standard deviation or standard error – to assign to that mean difference when adjusting the ship data as a whole? (the “as a whole” bit is key). I think that the appropriate uncertainty is the standard error of 0.01degC rather than 0.85degC. Here’s why…
We know that individual ships will be biased differently and that every measurement from any one of those ships will have additional random errors that change from one measurement to the next. We can estimate the uncertainties associated with these two factors and this has been done several times. A large component of the standard deviation of the difference between the SSTs measured by ships and SSTs measured by buoys will come from those two factors. If the population of ships *as a whole* was unbiased (i.e. the individual ship biases had a mean of zero) then averaging all the measurements together from all the ships would have a mean close to zero. You would expect it, in fact, to be within a few standard errors of zero.
However, the mean of all the observations is 0.12C, which suggests there is some common bias across the fleet of ships. The uncertainty of the mean of that distribution is the standard error. It’s this common bias and its uncertainty we are interested in removing before combining ship and buoy data. The uncertainties associated with the other types of error that affect single observations are also factored in through the uncertainty estimates (in the case of HadSST3) or via the relative weights given to ship and buoy observations (in ERSST).
To make it clearer, we can ask what the same numbers tell us about the bias in a single solitary ship observation. In that case, our best estimate of the bias in that single solitary ship observation is still 0.12degC, but in this case the uncertainty would be the standard deviation of 0.85degC, which is close to the uncertainty associated with errors of a single ship observation estimated in other ways. As I mentioned before, that uncertainty is already included.
The question has also been raised as to whether it is better to adjust the ship data using the 0.12degC or to adjust the buoy data. There are arguments both ways. Adjusting the ship data brings the biased ships into line with the unbiased buoys. On the other hand adjusting the buoy data brings the buoys into line with the ship data, which constitute the majority of the historical record. Either way, when these are presented as anomalies relative to the 1961-1990 base period, you have to take into account the fact that data from the climatology period was mostly ship data. Subtracting 0.12C from the ship data would cool the climatology by 0.12C (more or less) which would mean that the anomaly for a drifting buoy observation relative to that climatology would increase by that amount. If you are looking at anomalies, the net effect of adjusting the ship data is the same as the net effect of adjusting the buoy data.
When we made HadSST3, one of the tests we did was try both. There was, as expected, little difference between the two choices. See part 2 of the HadSST3 paper, section 4.4 “Exploring the sensitivity of bias adjustments”
http://www.metoffice.gov.uk/hadobs/hadsst3/
Cheers,
John
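Kennedy’s standard-error arithmetic can be checked directly from the figures he quotes above (21,870 matches, mean difference 0.12degC, standard deviation 0.85degC); a minimal sketch:

```python
import math

n = 21870          # ship-buoy matches (from Kennedy's comment above)
mean_diff = 0.12   # mean ship-minus-buoy SST difference, degC
sd = 0.85          # standard deviation of the individual differences, degC

# Standard deviation: the uncertainty attached to a *single* ship observation.
# Standard error: the uncertainty attached to the fleet-wide mean bias.
se = sd / math.sqrt(n)
print(round(se, 2))  # 0.01, i.e. "0.01degC to two decimal places"
```

Whether the standard deviation or the standard error is the appropriate uncertainty for the 0.12degC adjustment is exactly the point at issue between Kennedy’s comment and the +/- 1.7C figure quoted upthread.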
@john kennedy
You state the majority of the historical record is from ship data. Okay.
In the same vein the majority of historical time stamps are from sundials.
Would you propose correcting modern time keeping devices because they don’t agree well with sundials?
You aren’t fooling anyone, John. Changing the much more robust, much more recent buoy data changes the most recent trend (read “the pause”) while changing ship data changes the older trend but leaves the pause unaffected.
Your goal was to make the pause go away so you chose to correct atomic clocks instead of sundials. Asshat.
Well, put that way it seems very reasonable. What difference does it make if you pretend that buoys are ships that pass in the night, or you pretend that ships that pass in the night are buoys?
“Globally, there were 21,870 matches with a mean SST difference of of 0.12degC and a standard deviation of 0.85degC.”
Please define “matches”.
Hi Don,
The full description is given in the paper:
http://www.metoffice.gov.uk/hadobs/hadsst3/part_2_figinline.pdf
copied (pdf’s don’t copy and paste well. I think I got all the typos, but check the original in preference) here:
“A database of nearly coincident ship and buoy observations for the period 1998-2007 was created in which ship-buoy pairs were selected that lay within 50km of one another and on the same day. To avoid complications from diurnal heating, only observations taken close to local dawn were used. The average differences were calculated for each ocean basin, and for the globe. The average difference between ship and drifting buoy observations in the period 1998-2007 was 0.12degC, with ships being warmer than drifting buoys.”
Cheers,
John
Thanks, John. That certainly inspires confidence.
“You have measurements taken from ships. They are temperature measurements.
You have OVERLAPPING COLLOCATED measurements with buoys.
They are temp measurements.”
So speaks the defender of all things CAGW.
Funny, I never saw a global temp reconstruction based on bucket temp measurements from ships, particularly for paleo measurements going back as far as claimed by the gods of Thermageddon, on whose behalf Brother Mosher so eloquently obfuscates.
Didn’t Galileo invent the thermometer in the 1600s?
Yet the IPCC puts out, and its acolytes like Mosher defend:
http://www.theguardian.com/environment/2010/feb/02/hockey-stick-graph-climate-change
But these are “OVERLAPPING COLLOCATED!!!”, antiquated, imprecise measurements of a tiny proportion of the global climate area going back a thousand years. So that’s OK.
Silly me, climate tricks are for ‘scientists’.
“I dont like the fact that we may have to pay more taxes because of global warming. Perhaps the skeptics can save us from these taxes.”
Anybody who still believes Mosher is any kind of lukewarmer, email me for a prospectus on Solyndra at http://www.suckers‘r’us.com.
Steven Mosher | June 5, 2015 at 7:05 pm |
“To DO THAT, they have to have smarter people than you giving smarter arguments.”
Nah. The skeptics just need the warmists to have more supporters like Steven Mosher.
From what I’ve read of Nic Lewis, his argument is that accepting the temp records as they are, ECS is lower than the Church of CAGW claims.
My argument is that it is ridiculous to take the temp records as accurately reflecting “global average temperature” at all with the precision and accuracy claimed by the CAGW clergy, let alone as the basis for global decarbonization.
Notice, Mosher never rebuts such an argument, ever that I have seen. But he sure does like to give vent to his inner narcissist in obscuring his lack of an argument.
Don’t forget that the Mosherism that gave birth to this sub-thread was “Technically It doesnt matter whether you adjust the recent past or the deep past”, in defense of which he offers an analogy equating the chimera of temp data to scale measurements.
The farce is strong with this one.
Let’s not forget that Nic Lewis’s science is putting a dent in the bogus climate consensus and ranting Gary M is just an obscure anonymous blog character.
Nic Lewis isn’t doing jack diddly squat to change the debate. Neither is Mosher. Neither has the chops, either politically or as experts in the field. Granted, Lewis has far more credibility than Mosher. Curry is a different story altogether, but still successfully countered by peers. It’s a political football. Science left the arena a long time ago.
Judith
“the issue with the buoy adjustment is this: the offset is 0.12C +/- 1.7C. This is not a clean adjustment. Adjusting a relatively clean dataset (buoys) with this very uncertain adjustment seems a strong way to muck up your data set, beyond the original quality of the data”
That is not a cogent argument.
1. There is a dataset A: ship data.
2. There is a dataset B: buoy data.
The goal is to create data set S.
To create S you take A and B and adjust either A or B in the creation of S.
Dataset A isn’t mucked up.
Dataset B isn’t mucked up.
Neither is mucked up, regardless of how you process them.
They still exist.
Dataset S will be the same regardless of which you adjust.
Nothing is mucked up.
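The invariance asserted here is easy to sketch with made-up numbers: shifting either input series by the estimated offset changes the merged data set S only by a constant, so anomalies (and hence trends) come out identical either way. A toy illustration (the values and names are invented, not from the paper):

```python
ship = [15.12, 15.22, 15.32]   # made-up ship SSTs, warm-biased by 0.12
buoy = [15.00, 15.10, 15.20]   # made-up collocated buoy SSTs
offset = 0.12                  # bias estimated from the collocated pairs

# Build S by cooling the ships, or by warming the buoys:
s_cool_ships = [(s - offset + b) / 2 for s, b in zip(ship, buoy)]
s_warm_buoys = [(s + b + offset) / 2 for s, b in zip(ship, buoy)]

def anomalies(xs):
    """Deviations from the series' own mean, rounded to suppress float noise."""
    m = sum(xs) / len(xs)
    return [round(x - m, 6) for x in xs]

# The two versions of S differ by the constant offset, so:
print(anomalies(s_cool_ships) == anomalies(s_warm_buoys))  # True
```

Whether the absolute level of S, or the uncertainty of the 0.12 offset itself, gets mucked up is a separate question, and it is the one Judith’s quoted comment raises.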
The goal is to bust the pause. Read their press release.
Don
“Mosher doesn’t want to address the substantive criticisms of the Karl BS. Very disappointing, Steven.”
That’s because there are none. That doesn’t mean the paper is correct.
Everything I have seen is from people who haven’t worked with the source data and who have never produced a product themselves.
When you find a good argument, let me know.
Personally, I will wait to see what the Hadley guys say, and see how it compares with some of the work we started. Then, after patient study, I might tell you what I think the weakest points are. Don’t hold your breath,
because that would take well over a year.
I can tell you that deciding whether to adjust ships to buoys or buoys to ships will not top the list of good arguments.
The really sad thing is that we could do a much better job if pirates didn’t exist.
That’s my test. If a commenter doesn’t understand the problem that pirates have caused in metadata, then their comments are not worth reading.
First one on the thread to answer the pirate question wins. (Kennedy can’t play and neither can Zeke.)
You seem to be intent on marginalizing yourself, Steven. I won’t hold my breath hoping for change in your tactics or attitude. I think you are suffering from the Stockholm Syndrome.
“If a commenter doesn’t understand the problem that pirates have caused in metadata, then their comments are not worth reading.”
It’s a trick question. We don’t need no stinkin’ metadata. We can interpolate, extrapolate and krig all the data we need for areas affected by piracy – from tree rings in Nova Scotia. That’s the magic of ‘climate science.’
Does adjusting the ship data instead of the buoy data bust the pause? How about adjusting neither? Where is the rule written that says you have to adjust one or the other? Why haven’t the other fabricators of SST products made the same adjustment? Why should anybody believe that the choices made are not more politics than science?
Steven Mosher,
You wrote –
“The goal is to create data set S;”
One might ask why, but I suspect that the aim of creating data set S is to create a data set that supports a preconceived notion. The notion is, of course, the GHE.
Unfortunately, you apparently do not know whether the original datasets are correct or not. This is a situation where two possible wrongs may not make a right.
About as silly as running 100 computer simulations giving different answers, which ensures at least 99% are wrong, and averaging or adjusting them, claiming that the resulting answer is somehow useful.
You may create as many contrived data sets as you wish. They will not change tomorrow’s temperature at all. Or anything else, for that matter.
Possibly, with less effort, one might devise an experiment showing the ability of CO2 to warm an object. I would certainly change my mind about the supposed GHE if it could be demonstrated.
Just a thought.
from Steven Mosher: “I am making a math point …”
to Don Monfort: “The goal is to bust the pause. Read their press release.” (and all in between)
IMO, the adjustments to the SST data may well be legitimate and correct. The explanations seem not implausible to me. The value of the data, the methods and uncertainty, could be a subject of debate. It is an interesting subject.
However, the Spin, Hype and Hoopla attending the release of this paper certainly gives the appearance of politically motivated science. For this reason, the scientific value of their work becomes irrelevant.
Why did they do that?
+1 The narrative of the paper and the hype that went with it was IMO its main undoing, because the actual data underlying the paper seems to be on the weak side with large error bounds.
Ken:”Why did they do that?”
Because they wanted to bust the pause. They started out with a need and came up with a “plausible” story to do it. They can’t be proven wrong, so why not go for it. It passes pal review, no problem. Standard consensus BS climate science using their well-worn tactic of employing deliberate confirmation bias. Got to save the planet, by any means necessary.
I greatly appreciate John Kennedy’s response to questions about the buoy adjustments. However, I remain unclear as to the impact of the paper’s adjustments on recent trends.
Steven Mosher suggests the impact is a reduced trend since 1880. Yet my reading of the chart in the supplemental information (see #comment-708969, above) is that the trend is actually increased, particularly in recent years.
Am I reading the chart incorrectly?
Don, what is the difference between explaining why we are seeing a “pause” (science) and getting rid of the pause? In this case the explanation might be that the apparent pause was an artifact of the SST measurements.
That’s just some claptrap, joey. That’s a question for the IPCC and the gaggle of well-funded SST data product fabricators who didn’t spot your alleged artifact. The IPCC recently reviewed the science and admitted to the pause. The researchers in the field were aware of the differences between the relatively good buoy data and the pretty sketchy ship data. Nobody thought that .12 had to be added to the freaking buoys, until these clowns got the idea to create a pause busting artifact. Explain why adding .12 to the buoys is a good idea now, joey?
Honestly, I don’t know, Don. I assume that if there is some problem with what they have done, someone will demonstrate why they have erred. I am not so sure why you are so certain about their results. You obviously don’t have much of a background in climate science.
And, Don, you didn’t answer my question. I would appreciate it, if you would.
Did you know the point of science is to discover things that people didn’t know before? If it was already common knowledge, then it probably wouldn’t be published.
I don’t have anything else to discuss with you, joey. Hey, maybe I am just annoyed that they are spending a lot of public money to deploy buoy systems that collect pretty good data and then some public employee clowns come along and decide to make the buoy data match the crappy ships passing in the night data, so they can erase the pause. Just look at their press release. It ain’t science. End of story.
http://www.thecourier.com.au/story/930754/sos-to-naval-ships-as-pirates-bother-buoys/
Interesting comments, Mosher. I get the idea there isn’t a lot of traction available when this kind of issue comes up, dealing with adjustments.
No, it’s not over. If there is fraud here, someone will speak out and prove it. I don’t believe climate scientists or NOAA or anyone else are in on some conspiracy related to climate science that would let something that significant go unchallenged. If there is no conspiracy, things should progress like they always have in any science-related area.
A huge problem with this bias calculation goes back to the stakes and the lack of trust on both sides.
As to trust, it’s hard to trust parties who claim that science is settled (on almost any complex subject area, much less climate) and that anyone who claims otherwise is a flat-earther or corrupted denier (see various recent public statements from the President and the White House staff). It is also hard to trust when the ultimate stakes include the sacrifice of a wide swath of freedoms and huge quantities of wealth for all posterity.
With that in mind, and given the importance of the bias calculation, I have some concerns with a few omissions in K15 when the process is described, most pointedly this one:
“A database of nearly coincident ship and buoy observations for the period 1998-2007 was created in which ship-buoy pairs were selected that lay within 50km of one another and on the same day. To avoid complications from diurnal heating, only observations taken close to local dawn were used.”
Note that the authors did not say that ALL coincidental observations taken near dawn were used. Is that true? How was the database generated,… the actual queries? Is there original data for ALL coincidental observations, ALL taken near dawn? The final data set which was used? Could a third party replicate the culling process? Remember, taxpayers paid for the information and will have to make decisions based on the results, so unless claims of national security are made, taxpayers are entitled to an open book. Anything less and trust simply descends yet another notch.
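For concreteness, the culling described in that passage could be replicated in a few lines of code. This is a hypothetical sketch of the 50 km / same-day matching only; the haversine distance formula and the data layout are my assumptions, not anything published in K15:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def match_pairs(ship_obs, buoy_obs, max_km=50.0):
    """Pair ship and buoy observations taken on the same day within max_km.

    Each observation is a dict: {"date": str, "lat": float, "lon": float, "sst": float}.
    Returns a list of (ship, buoy) tuples.
    """
    pairs = []
    for s in ship_obs:
        for b in buoy_obs:
            if s["date"] == b["date"] and haversine_km(
                s["lat"], s["lon"], b["lat"], b["lon"]
            ) <= max_km:
                pairs.append((s, b))
    return pairs

# Made-up observations: one ship, one nearby buoy, one far-away buoy
ships = [{"date": "2003-06-01", "lat": 10.0, "lon": -40.0, "sst": 26.3}]
buoys = [{"date": "2003-06-01", "lat": 10.2, "lon": -40.1, "sst": 26.18},
         {"date": "2003-06-01", "lat": 30.0, "lon": -40.0, "sst": 22.5}]
matched = match_pairs(ships, buoys)
print(len(matched))  # 1 -- only the nearby buoy qualifies
```

A third party given the raw ICOADS observations could run exactly this kind of query and compare the resulting pair counts, which is the replicability the comment is asking about.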
What do you do when data and models disagree?
In proper science you revise the models. In climate science, you revise the data.
Nobody’s “revise[d]” the data. They’ve only revised the models they use to massage the data.
Torture data long enough and it will confess to what you want.
Or maybe not.
JCH,
Give me some data. Tell me what you want it to be. I’ll bet I can torture it into confessing it was what you wanted.
It’s not really torture, merely “enhanced data interrogation techniques”, or “reanalysis”, if you prefer.
I can see you torturing the data.
The way I see it, one can adjust all one wants up until the satellite era commenced in 1979.
For SST, satellite era starts September 1981. See, for instance, Reynolds OIv2, quarter degree, and its data sources.
This is more of their BS agenda manipulation of data.
I have pointed this out so many times in the past when I have said repeatedly that if the data does not conform to AGW theory it is either IGNORED, WRONG, or MANIPULATED.
This is just another attempt, and will not be the last, to try to keep this soon-to-be-obsolete absurdity alive, that being AGW theory.
The Karl et al paper only brings the temperature record into a consistent position with the recent observed acceleration in sea level rise.
Oh my gawd, how could they do such a thing?
izen, “The Karl et al paper only brings the temperature record into a consistent position with the recent observed acceleration in sea level rise.”
It actually is a good example of how “splicing” various generations of measurements can produce “unprecedented” results. You know that with NASA’s satellite models of hurricane wind speeds, all hurricanes are stronger now than they were with previous measurements. Your drinking water has unprecedented levels of “toxins” compared to 30 years ago as well.
Why do we need thermometers? They just need to be placed out in the ocean, maintained and monitored, and then the actual observations have to be adjusted to bring into a consistent position with sea level rise. We can just figure out the temps from the freaking sea level rise. Save a lot of money. You warmist fellas are really smart.
Izen – looks like a pretty frickin’ linear (no acceleration) slope to me
http://en.wikipedia.org/wiki/Sea_level#/media/File:Recent_Sea_Level_Rise.png
http://www.climatewarmingcentral.com/images/recent_sea_level_rise.jpg
What is pretty obvious is that the sea level rise preceded the CO2, so natural warmth is driving the sea level rise, and when the natural warming stops, the sea level rise will stop.
The MWP had a 6 inch higher sea level, we should expect the sea level to top out 6 inches higher. The sea level isn’t going to rise forever no matter how much the global warmers scream and stamp their feet.
Thanks, PA, for replacing that wickedpedia link with the same one from another source.
Interestingly, sea level keeps going up regardless of global average temperature, which has fallen twice for several decades at a stretch during the same time period.
http://woodfortrees.org/plot/hadcrut4gl/mean:48
“Science” like this is a few rungs below Phrenology, but it doesn’t matter. The MSM are already running with its press release.
https://thepointman.wordpress.com/2013/07/05/the-pause/
Pointman
Isn’t Congress trying to shift climate science/funding from EPA to NOAA? The Karl study now has me confused as to which agency is the frying pan and which is the fire.
Using the 0.10 significance test and other methods to move the goal line is something that EPA perfected long ago. See EPA’s botched analyses on secondhand smoking (environmental tobacco smoke) or residential radon exposure risks for examples. NOAA should have higher standards.
Congress is trying to increase NOAA’s National Weather Service funding because the satellite fleet is decaying. EPA has nothing to do with this goal. Congress wants to cut back climate funding across the board, including not funding Karl’s proposed National Climate Service, but it is an elusive goal, because there is no climate budget per se. USGCRP used to publish a detailed breakdown of each agency’s climate funding but it stopped several years ago. From a budget perspective climate has gone underground.
Well, the time has come to eliminate the EPA. The enforcement powers can be moved to other agencies or the states. The environment is clean enough, and the EPA is a major tool of social progressive mischief.
The EPA in social progressive’s hands is like giving dynamite to a small child. The solution is to dispose of the dynamite.
Plus, if we start eradicating government agencies that misbehave – the ATFE would be next on the list – there are many benefits, including:
1. Smaller less oppressive government.
2. Reduced taxes and regulatory cost.
3. The object lesson of the eradication of some agencies may induce the surviving agencies to straighten up and fly right.
Interesting plot from Ed Hawkins comparing new NCDC data set with HadCRUT4.3
Ed comparing MOHC and NOAA data sets http://www.climate-lab-book.ac.uk/wp-content/uploads/noaa_had_comp1.png …
We’ve nothing to fear about global warming but the lack of it. Everything else is dogma.
Ross McKitrick has a good analysis on this
http://wattsupwiththat.com/2015/06/04/a-first-look-at-possible-artifacts-of-data-biases-in-the-recent-global-surface-warming-hiatus-by-karl-et-al-science-4-june-2015/
Yeah, and Mosher doesn’t want to look at it.
Shock of shocks, Roossy is good. Who could have predicted it? Time for more acting lessons. June anomaly poised to breach .90C.
Huh.
I’ve actually spent a bunch of time with the source data. Ross’ comments suggest he has not.
At Berkeley, of course, there have been a couple of projects aimed at redoing the SST. The technique was pretty cool: basically exactly what McIntyre had suggested to me in one of our conversations.
It’s a pretty cool approach, but I doubt it will go anywhere.
But you don’t make any freaking specific criticisms of Ross’s analysis other than the BS claim that he hasn’t read ICOADS. What has happened to you, Steven?
“I’ve actually spent a bunch of time with the source data. Ross’ comments suggest he has not.”
What a brilliant rebuttal of McKitrick’s arguments. I particularly marvel at the point by point refutation of McKitrick’s list of weaknesses in historic data collection practices, and their impact on the accuracy of the data.
“…the K15 themselves explain that their new assumptions—not new observations—are what introduced the warming trend at the end of their data set.”
And that, in a nutshell, is what CAGW climate ‘science’ is all about.
Your assumptions don’t create the results you need? Change them.
The data doesn’t match your predictions? Adjust it.
The data you need doesn’t exist? Manufacture it.
And then demand control of the global energy economy because of your results.
Here, Don Don:
http://wattsupwiththat.com/2015/06/04/a-first-look-at-possible-artifacts-of-data-biases-in-the-recent-global-surface-warming-hiatus-by-karl-et-al-science-4-june-2015/
That about covers it.
Let’s thank Ross for his concerns.
That post was an unusually well-tempered and non-polemic analysis for the climate-o-sphere. Too bad such non-polemics are the exception not the norm.
Moshe says Ross is wrong.
What’s the likelihood of that? Slim to non-existent.
McKitrick doesn’t say it is wrong to make the K15 correction. He goes halfway to demonstrating why it should be corrected with his first example table, but he misses having a column for the apparent temperature from his mix of observations, which would start near 4.0 with ships dominating and end near 3.0 as the buoy observations dominate later, despite the real temperature going from 3 to 4. In his example, a correction is clearly needed for the changing mix of observations. Why didn’t he say that? Instead he goes off on some tangent.
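The effect described here (a changing ship/buoy mix creating a spurious trend in the raw blend even when the true temperature is flat) is easy to demonstrate with made-up numbers. This toy sketch is mine, not McKitrick’s table or K15’s actual data:

```python
import numpy as np

years = np.arange(1998, 2015)
true_sst = np.full(years.size, 20.0)   # flat true temperature, degC
ship_bias = 0.12                       # ships read warm relative to buoys

# Buoy share of observations grows from 10% to 90% over the period
buoy_frac = np.linspace(0.1, 0.9, years.size)
blend = buoy_frac * true_sst + (1 - buoy_frac) * (true_sst + ship_bias)

raw_trend = np.polyfit(years, blend, 1)[0] * 10   # degC per decade
print(round(raw_trend, 3))  # -0.06 degC/decade of spurious cooling from the mix alone
```

Nothing warmed or cooled in this example; the fake trend comes entirely from the shifting proportion of warm-biased ship readings, which is why some correction (in either direction) is required before comparing trends.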
Here is a cogent comment from WUWT that puts the paper in the proper perspective, yimmy:
“Jquip
June 6, 2015 at 12:16 pm
It needs to be repeated often: This paper is about establishing a new data product. That is, it is focused solely on the errors and inadequacies baked into the HadNMAT2 data product. And if this paper is held as legitimate, it does not speak to global warming as such: It is a refutation of all previous uses of HadNMAT2. Despite what the authors would like to claim, their paper cannot speak to overall global warming when it is solely and strictly a refutation of — and a creation of a new — single data product amongst many.
Much like the ‘ensemble averages’ of GCMs used by the IPCC, we have an ensemble problem with the data products. They cannot, quite obviously, all be the most accurate. So we’re faced with either refuting all but a single data product or refuting any that are inconsistent with the experimentally demonstrable correctives employed in the Karl paper, or refuting the Karl paper — and the validity of peer review along with it.
If this is unpalatable, and I suggest that it is, then the gold standard in Climate Science is to take an unweighted ensemble average of the data products to produce the data product. And that ensemble average, and all its ranges, is then the only valid input to apply to various GCM runs.
And, of course, it remains that if they can’t get the ensemble average of data products to produce a trend that doesn’t straddle nought, then there is no manner in which to claim that there is any warming at all from within the standard practices — valid or invalid — of Climate Science.”
Can you say why this new data product is any better than any of the others, yimmy? Start with UAH and RSS.
It is the newest product, and includes a correction that others may want to apply to theirs in the future. As Peterson asked Watts in their email exchange, if you knew there was this bias introduced by the changing observations, what would you do about it?
Well the main thing that I wouldn’t do is pretend that the error/uncertainty in the end result is small.
Don M, if you have something against correcting for known biases you need to say what it is. Karl is one of the originators of the TOBS correction, now widely applied and even accepted by McIntyre. Ocean measurements are more varied, but untangling them involves finding these corrections that are needed to make sense of the trends. Note that a long-standing warming correction of early 20th century ocean data related to buckets goes in the opposite direction and is accepted by everyone. We only get all this thrashing about at WUWT with ten new postings on it in the last couple of days because the correction went in the wrong direction for them and removed their beloved pause.
The bigger correction was related to ship bucket data that had not been corrected before, not the buoys. I don’t know how the UAH and RSS people calibrate their satellites against the surface, but perhaps they need to pay attention to this.
Schoolmarm is protecting teacher’s pet from having his little feelings hurt. How cute.
Look for the other 97% consensus data product manufacturers to suddenly discover that if they add .12 to the buoys and do that other crap, they can erase the pause that is killing the cause. Another new pause busting data product mooned over by the warmista media every few weeks leading up to the big partee in Paree. I don’t think it is going to help them with the 7 billion folks who are not losing any sleep over AGW. The sound of the next big belly flop you hear, will be coming from Paree. Sorry, jimmee.
No, I’m trying to prevent this blog from becoming a cesspool of insulting comments, which makes a lot of people not want to read the comments or participate here.
Don’s point is you’re not doing it fairly. Do you think it’s a coincidence that both Don and I used the term “teacher’s pet” or maybe instead we see something you refuse to acknowledge? Teacher’s pets, actually. There’s more than one.
JC comment: People who participate here regularly in a civil manner do not deserve gratuitous insults. Jim D is one of those people. Try making your criticisms substantive in terms of arguments, and lose the pejoratives.
[x] Refuses to acknowledge bias
[ ] Acknowledges bias
Jim D,
The Earth has a powerful radiator (the Sun), about 150 million km. away.
After four and a half billion years of heat trapping, the Earth has somehow managed to cool. Not all that much, really.
If you imagine the Earth to be the size of an apple, the solid crust is about as thick as the skin. The rest varies from merely viscous red hot rock, to dazzling white heat at the centre.
So the majority of the heat – better than 99.9% – is hidden, beneath the crust.
The heat from the Sun is unable to stop night time cooling. Winter is colder than Summer, some places freeze during the Winter, some are quite hot.
Antarctica was once ice free. Now it is not. Much of Europe was once ice bound. Now it is not. Pompeii had a severe attack of local warming. Hot gases galore.
Again. Over four and a half billion years, plenty of GHGs, no GHE at all. Just Nature. Sometimes more energy here, sometimes more there. Ceaseless unpredictable magma, hot plumes, volcanism on land, and under the sea. Clouds, atmospheric chaos. Changing land use. Land rising, falling, moving sideways.
Do we really need to invent a GHE? Are we all so dumb that we can’t understand what we see before us?
I’m pretty dumb, but I still prefer facts to figments of the imagination. What about you?
Don M, if you are getting stuff deleted, it is because you project more anger than sense. You need to do something about that. Perhaps McIntyre will look into this and agree, as he did with TOBS, and then the skeptics (mostly) will just move on. I don’t think McSteve is wedded to the pause.
Remove the mendacious comments and those that just insult the readers’ intelligence and you got something to brag about, Judith. Of course, you wouldn’t have a lot of comments.
To whom it may concern:
“There are arguments both ways. Adjusting the ship data brings the biased ships into line with the unbiased buoys. On the other hand adjusting the buoy data brings the buoys into line with the ship data, which constitute the majority of the historical record. Either way, when these are presented as anomalies relative to the 1961-1990 base period, you have to take into account the fact that data from the climatology period was mostly ship data.”
We know from the many “adjustments” with accompanying uncertainties that have been piled onto the sketchy ship data, and from the discrepancy with the more reliable buoy data, that the ship data is not good. Why should we have any confidence that doing these machinations to line up the better buoy data with the funky ship data is sciency and not just another effort to erase the pause that is killing the cause? Adding .12C to the buoys in the pause era is BS. If you want to erase the pause, use the freaking more reliable measurements of SST, or STFU.
As I mentioned the ship data were adjusted for the bucket data a long time ago, and that had the effect of warming the first half of the 20th century. If you want to remove adjustments, go back to the raw bucket data.
Mike F, the main difference between now and 50 million years ago when nowhere was glaciated is that CO2 levels were double back then. Go figure.
Jim D,
It’s amazing!
A cylinder of highly compressed CO2 at 20 C is exactly the same temperature as a cylinder of highly compressed O2 at 20 C.
Even weirder, an empty CO2 cylinder at 20 C is also at exactly the same temperature! How completely amazing is that?
What would be even more amazing is if you could look at them in infra-red wavelengths where they would be completely different, with the O2 being completely transparent, and the CO2 not. It’s just physics.
‘Fraid not, Jim D. All you see in infrared is the cylinders containing the gas, which at equilibrium will be the same temperature as the contained gas. Don’t forget that good absorbers are good emitters (Kirchhoff’s Law). What happens is that if you shine an infrared beam through the CO2 gas, it will be scattered equally in all directions, whereas it won’t scatter going through the O2.
The problem with that is that no one can demonstrate that radiation backscattered by CO2 and illuminating the ocean surface will transfer any heat into the water, because it is completely absorbed in the top several microns of the skin layer, driving evaporation; the energy is transported insensibly to altitude, where it is released through condensation and precipitation. So it warms the atmosphere higher up, but not much at the surface. It also cools the stratosphere, due to CO2 being an equally strong emitter as it is an absorber.
Good point, yimmy. The amazingly precise bucket adjustments. That took care of the mess with the alleged measurements that were haphazardly done on boats of all shapes sizes and colors by various untrained and disinterested peoples of the world plying the sea lanes.
Everybody knows the shipboard measurements ain’t good, yimmy. This allegedly new data product doesn’t change that by adjusting the freaking buoy data to line it up with the crappy ship data.
Those SkS Kidz, Cowpan and Whey, got a lot of warmista media attention a couple years back with their attempted pause buster paper. They turned to UAH to get data to fill in the Arctic. Here is the comment of one of the maroons on the SkS echo chamber when they were celebrating the paper by the SkS Kidz that put the final nail in the coffin of the pause:
“We had been assuming that the reason measured estimates of deep ocean warming didn’t quite cover the ‘missing heat’ was because we still weren’t finding all of it. If this result holds up then the last of the ‘missing heat’ may finally have been found.
The fact that they used UAH satellite data to ‘bridge the gap’… that ‘popping’ sound you hear is Roy Spencer’s head exploding.”
Hey yimmy, why don’t we use UAH to infill data that those ships following the sea lanes miss out on? Better yet, toss the freaking no good ship data and go with the buoys and satellites.
I was refreshing my memory on the Cowplanet and Whey story and ran across this thread on CA. Robbie the grad student made the mistake of participating and he took a beating. Por ejemplo:
http://climateaudit.org/2013/11/18/cotwan-and-way-2013/#comment-449976
That Permalink doesn’t work so well. It’s the comment immediately below Mosher’s:
ilmis
Posted Nov 19, 2013 at 12:08 PM
Jim D,
It doesn’t matter how you look at them, does it? Infrared, ultraviolet, visible light – they stubbornly remain the same temperature.
It’s just physics – no GHE to be seen. Is that amazing, or what?
> Well the main thing that I wouldn’t do is pretend that the error/uncertainty in the end result is small.
Of course not. You’d rather let yourself unconvinced and portray Mr. T like he was seven feet tall.
Corrections usually entail something about errors.
> Robbie the grad student made the mistake of participating and he took a beating. Por ejemplo:
That beating starts with “maybe OT”.
That doxxing was quite thin, Don Don.
You are getting more and more superfluous, willy.
Robert Way does not seem to have taken much of a beating, Don Don. The Auditor walked on eggshells in the comment thread, including the technical comments. URL manipulation can lead to hacking, BTW:
http://www.theinquirer.net/inquirer/news/2079431/citibank-hacked-altering-urls
I reject your characterization of that thread.
Then he did a lot of whining for such a small beating. But you are entitled to your warped viewpoint, willy.
The Auditor has recently declared moving up in the data food chain. Ross has been strangely polite. You just can’t buy a clue, Don Don. Extortion doesn’t work very well either. Your protection is less threatening when your leash gets pulled that often.
Please, do continue.
That’s rather cryptic willy, but I am pretty sure you are offending my poor little self. I hope Judith protects me from you scurrilous attack the in the same way she takes care of little jimmy.
They added 0.12 °C to readings collected by buoys, ostensibly to make them comparable to readings collected by ships. As the authors note, buoy readings represent a rising fraction of observations over recent decades, so this boosts the apparent warming trend.
My reply
Really? Based on what?
In other words, all of the data from various sources that show otherwise are wrong, while this shame of a study is correct.
This is not going to stand, and satellite data going forward is going to keep affirming that this study is a shame and needs to be dealt with and shown for what it is: just more agenda-driven manipulation of the data that does not support AGW theory.
What more could be said.
https://lh5.googleusercontent.com/-rowNIYdDNyM/VXHCqWpJDRI/AAAAAAAAN1g/SAW0dzgoOUo/w579-h343-no/sst%2Bt%2Bchoices.png
Looks to me like Karl et al. have produced some confirmation bias crapola. I am not particularly surprised, since Menne is part of the team. When Menne et al. violated Watts’ surfacestations.org intellectual property rights, it was a pretty solid indication of his character/competence.
How exactly does “confirmation bias” lead to their results?
May anomaly from RSS: .31, which is the correct, non-agenda-driven data.
When did Roy Spencer, UAH, cease being the politically correct source?
I do not know if you are asking me, but his data is not out yet; it is just as correct as the RSS data.
Not even the scientists at RSS would agree with this.
https://twitter.com/BigJoeBastardi
more correct data
How the shame has come about: read below.
But few scientists have believed a hiatus meant climate change had stopped. Trenberth says the slower warming is, in part, caused by unusual currents in the Pacific Ocean. Others have cited volcanic activity.
Now Karl’s team, which is directly responsible for taking the Earth’s temperature, says a technological shift in the way the measurements are taken has also obscured the temperature’s climb.
Here’s why: The single number — average global temperature — comes from tens of thousands of independent temperature readings. And, in recent decades, the technology for getting those readings has gradually shifted.
On land those measurements are made by weather stations; on the sea, the job has generally been done by commercial and military ships for decades. But starting in the 1980s, governments also began dropping buoys into the ocean to do independent measurements.
Karl and his colleagues decided to look at stretches of water where ships pass very near buoys, in order to compare the two temperatures. And they made a surprising discovery.
“The buoys actually read colder than the ships,” Karl says.
Sham, not shame, although they both apply.
Humanity has been egregiously deceived.
NOAA doubles down on their mistake.
Atmospheric CO2 has been identified as a possible climate change forcing. Forcings, according to the ‘consensus’ and the IPCC, have units of Joules /sec/m^2. Energy, in units Joules/m^2, divided by the effective thermal capacitance (Joules/K/m^2) equals average global temperature (AGT) change (K). Thus (in consistent units) the time-integral of the atmospheric CO2 level (or some function thereof) times a scale factor equals the AGT change. When this is applied to multiple corroborated paleo (as far back as 542 million years ago) estimates of CO2 and average global temperature, the only thing that consistently works is if the effect of CO2 is negligible and something else is causing the temperature change.
CO2 has no influence on climate, ocean cycles and solar cycle 24 are both on down-slope. The only way to make it appear that it is still warming is to change the temperature numbers.
See the proof that CO2 has no effect on climate and discover what does cause climate change (explains 97+% of average global temperatures since before 1900) at http://agwunveiled.blogspot.com
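For readers trying to follow the unit bookkeeping in the comment above, whatever one makes of its conclusion, the arithmetic it describes looks like this; every number below is made up purely for illustration:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

forcing = 1.0         # W/m^2, i.e. J/s/m^2 (made-up constant forcing)
years = 10            # made-up integration period
capacitance = 1.7e9   # J/K/m^2 (made-up effective thermal capacitance)

# Time-integral of the forcing gives energy per unit area
energy = forcing * years * SECONDS_PER_YEAR   # J/m^2
# Energy divided by effective thermal capacitance gives a temperature change
delta_t = energy / capacitance                # K
print(round(delta_t, 3))  # 0.186
```

The dimensional chain (J/s/m² integrated over seconds, then divided by J/K/m², yields K) is the only part of the comment this sketch checks; it says nothing about what the actual forcing or capacitance values are.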
Delingpole’s take on this latest dose of CAGW PR disguised as science.
“The thrust of Karl’s paper is this: that far from staying flat since 1998, global temperatures have carried on rising. It’s just that scientists haven’t noticed before because they’ve been looking in the wrong place – on land, rather than in the sea where all the real heat action is happening.
And how did Karl et al notice what everyone else has missed until now? Well, by using a specialised scientific technique called ‘getting your excuses in early before the Paris climate conference in December.’”
http://www.breitbart.com/big-government/2015/06/05/hide-the-hiatus-how-the-climate-alarmists-eliminated-the-inconvenient-pause-in-the-global-warming/
The hiatus/pause is the scientific gift that keeps on giving.
It’d be sort of funny if we weren’t all paying through the nose — putting the economy on hiatus and pausing job growth for a generation — as we all wait for the specialists to work things out.
Ah Paris! …
Non, nous regrettons rien …
data tricks ’n cherry picks,
economy loss ’n economies’ loss,
environmentalist gloss … non,
nous regrettons rien.
Très bien, merci.
C’est un plaisir pour un vrai gentilhomme.
bts
Here lies the pause in global warming.
The AlMightO AMO:
http://www.ospo.noaa.gov/data/sst/anomaly/2015/anomnight.6.4.2015.gif
The acceleration in SLR:
http://www.aviso.altimetry.fr/fileadmin/images/data/Products/indic/msl/MSL_Serie_J2_Global_IB_RWT_GIA_Adjust.png
OHC:
https://itsnotnova.files.wordpress.com/2014/05/heat_content2000m.png
I figure 98.5% by August…
Well, we will see in a couple of years after the La Nina comes through and the dust settles.
It has been almost 2 decades since 1998. Some variation is expected.
https://denierlist.files.wordpress.com/2012/11/gisp2-temperaturesince10700-bp-with-co2-from-epica-domec.gif
Most of the interglacial has been warmer than current temperatures. Claiming current temperatures are unusual or extreme is an outright lie.
No, it's stabilized now, and poised to start going back up. But it's heartening to see at least some skeptics now realize it's impossible that warming stopped 18 years ago: 2005, the middle of the almost completely dead pause that took place after 2006, was the zenith of modern warming, not 1998.
The last Triple Crown… 37 years ago.
Well… we’ll see.
The CO2 warming seems to be about 0.24°C. The total warming since 1900 has been 3-4 times as much, depending on whether you include CAGW. It might get an equal amount warmer due to CO2, but it really depends on what the rest of the climate is doing.
No one has completely modeled the pre-1940 warming correctly or seems to completely understand it.
After a century of warming it is going to take another century for the temperatures of the oceans to even out.
Thirty years or more from now we will know if CO2 is a major climatic influence. We only have reasonable data for about 12 years and somewhat reasonable data back to 1979. We really need to collect data for a full 65 year cycle.
You might be right about warming short term. I was surprised that warming stopped and that does seem to indicate low CO2 forcing. Don’t know if the pause is possible or impossible. The warming had a 20 year run, so until 2020 it is too early to weigh in on the pause. The warmers dance around when it gets a little warmer, the deniers dance around when it gets a little cooler. I left my dancing shoes in the closet.
JCH whistling past the graveyard.
The pause killed the cause. Get used to it.
It really doesn't matter what the next La Nina does unless it is an extremely powerful one, like the one in 2011, which some people say was the 2nd most powerful in the record. If there is a series of very powerful La Nina events, which is highly unlikely, there could be cooling.
Last La Nina (blue) and subsequent period of negative ONI
None of the above should have been that friendly to warming.
If there are normal La Nina events, it’s going to continue warming at around .165C to .245C per decade (30-year trends) for the rest of the century. Just as it has been.
JCH, .245 K/decade 30-year trend? The trailing 30-year trend peaked below .2 K/decade around 2005 and it's declining now.
http://www.woodfortrees.org/plot/gistemp/last:360/trend/plot/gistemp/to:2010/last:360/trend/plot/gistemp/to:2005/last:360/trend/plot/gistemp/to:2000/last:360/trend
The average 30-year trend is ~.05 K/decade.
https://bobtisdale.files.wordpress.com/2015/01/figure-42.png
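For readers wanting to check these numbers themselves: the competing "30-year trend" figures in this sub-thread are just least-squares slopes fitted over trailing 360-month windows of a monthly anomaly series. A minimal sketch, using a synthetic series rather than actual GISTEMP data (download the real anomalies to reproduce the woodfortrees plots):

```python
import numpy as np

def trailing_trend_per_decade(anoms, months=360):
    """Least-squares slope of the last `months` monthly anomalies, in K/decade."""
    y = np.asarray(anoms[-months:], dtype=float)
    x = np.arange(y.size) / 120.0  # elapsed time in decades (120 months/decade)
    slope, _intercept = np.polyfit(x, y, 1)
    return slope

# Synthetic illustration: 50 years of data warming at exactly 0.17 K/decade
months = np.arange(600)
series = 0.17 * months / 120.0
print(round(trailing_trend_per_decade(series), 3))  # 0.17
```

Sliding that window forward one month at a time produces the kind of trailing-trend curve shown in the linked figure.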
McKitrick correctly concludes this: “….small changes in assumptions about how to deal with uncertainties in the data can have a large effect on the final results.” To me this implies that any specific results are largely uncertain, yet these statistical model outputs are repeatedly used as though they were precision measurements. Hottest year on record and all that jazz. Maybe the Karl piece will finally wake people up. There is no record, just a lot of poor data.
Plus a lot of questionable assumptions, of course,
PTR should have been PRT. Slysdexia.
Well… Yeah.
That seems to be an accurate description of the problem.
Do you have any solutions to suggest, or do you see it as an unsolvable problem?
The solution is to live with the problem. The only way to know what is happening is to build the proper systems to measure it. Wanting to know what the average global temperature is requires a lot, although the satellites get us a lot closer than these goofy statistical models. What it once was is unknowable. Many facts about the past are simply unknowable, also about the present for that matter. Science is not omniscient.
Satellites don't measure temperature. Period.
They record brightness at the sensor.
From that data you have to apply models, regressions, first guesses, and then you get an estimate of what the temperature is. If you had ever cared to look at validation you would be stunned.
For example rss adjusted temperature data using a gcm. Uah has different approaches but they all involve adjusting data with models.
worse than that; satellite measures voltage
Even worse than that. Satellites convert the voltages to radio wave modulation and earth-based receivers measure the radio waves.
This is an exercise in absurdity. Stop it. These instruments are physical systems, not abstract models of physical systems. I don't expect Mosher to know better; he's not schooled in any physical science, engineering, or even math, so the line between the physical and the abstract is blurred beyond detection for him. He's an English major and his specialty is semantic pedantry. He's a rock star in that.
That’s shocking.
All true, Mosher, but at least UAH is sensing the atmosphere, which the surface models are manifestly not doing, much less the SST buckets et al.
King Karl says BEST is wrong. Got a reply?
I am reminded of the NAS/NRC workfest back in the late 1990s before the giant El Nino hit. UAH said there was no warming but the surface models said there was a lot (AGW is based on the latter). The workshop conclusion was that the uncertainties in both approaches were so great that they did not actually disagree. I read this to mean that we did not know if it was warming or not. Apparently we still do not, in which case science has nothing to explain.
Technically a satellite measures accumulated charge (that’s why it is charge coupled device).
But a weather station measures resistance or volume of fluid (manual stations) which isn’t a great improvement.
To this point no one actually measures the temperature…
All thermocouples and digital thermometers measure voltage. Most all digital equipment does too. Only mercury thermometers with Mark 4 eyeballs measure temps through calibrated pieces of glass.
Scott
Mosher – if sat data is that bad, why does it agree with radiosonde data so well?
Also, PTRs in modern weather stations measure voltage and current as a function of sensor resistance. Each has to be calibrated independently in order to determine a set of calibration coefficients applicable only to that individual sensor.
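For the curious, the resistance-to-temperature step for a platinum resistance thermometer is conventionally the Callendar–Van Dusen relation. The sketch below uses the nominal IEC 60751 Pt100 coefficients rather than per-sensor calibration coefficients (which, as the comment notes, would replace these for an individual sensor), and is valid for temperatures at or above 0 °C:

```python
import math

# Nominal IEC 60751 coefficients for a Pt100 element; a calibrated sensor
# would substitute its own fitted A, B, R0. Valid here for T >= 0 °C.
A = 3.9083e-3
B = -5.775e-7
R0 = 100.0  # resistance in ohms at 0 °C

def pt100_temperature(R):
    """Invert Callendar–Van Dusen R = R0*(1 + A*T + B*T^2) for T in °C."""
    return (-A + math.sqrt(A * A - 4 * B * (1 - R / R0))) / (2 * B)

print(round(pt100_temperature(100.0), 4))  # 0.0
print(round(pt100_temperature(103.9), 2))  # roughly 10 °C
```

The voltage and current the station electronics actually measure are converted to a resistance first; this is the final resistance-to-temperature step.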
PTR should have been PRT. Slysdexia. (posted in wrong spot the first time.)
All known instruments that attempt to measure temperature actually measure some manifestation of temperature rather than temperature itself… like the volume of a known quantity of mercury, or the brightness (in whatever wavelength of EM radiation one wants to measure), so Mosh's obsession over just what it is that satellites measure seems misplaced.
Of course, he comes at this enterprise from a background in English studies…
> all known instruments that attempt to measure temperature actually measure some manifestation of temperature rather than temperature itself…
All known instruments don’t actually attempt anything. The attempts are actually ours. Actually, temperature itself may not even exist.
Another engineer with parsomatics problems.
Dr Robert E Stevenson, cited in a comment by dennisambler
@ Jo Nova 4th june ’15, 2.12am:
“Sources of 20th Century Ocean Temperatures
I learned to deploy Nansen water bottles and reversing thermometers for deep-sea sampling in 1949. I spent the rest of the subsequent decade seagoing, for the most part. I can't remember how many bottle casts I made, or how many bathythermographs I deployed. There had to be thousands in the waters off coastal California. Other students and post-docs were doing the same farther offshore in the eastern Pacific, from the E.W. Scripps. In the westernmost Atlantic, a similar cadre worked from the Atlantis.
In the 1960s, more ships were out at sea: from Fisheries Laboratories, U.S. Coast and Geodetic Survey (now NOAA), and research institutions at Scripps (La Jolla, Calif.), Woods Hole (Massachusetts), Miami, and Texas A&M (in the Gulf of Mexico). The British sailed the new Discovery, the Germans the new Meteor, and there were small ships sailing from Denmark, Japan, and France. Many cruises were dedicated to the geophysics of the sea floor, where deep-ocean casts for water and temperatures were few and far between.
Surface water samples were taken routinely, however, with buckets from the deck and the ship’s engine-water intake valve. Most of the thermometers were calibrated into 1/4-degrees Fahrenheit. They came from the U.S. Navy. Galvanized iron buckets were preferred, mainly because they lasted longer than the wood and canvas. But, they had the disadvantage of cooling quickly in the winds, so that the temperature readings needed to be taken quickly. I would guess that any bucket-temperature measurement that was closer to the actual temperature by better than 0.5° was an accident, or a good guess. But then, no one ever knew whether or not it was good or bad. Everyone always considered whatever reading was made to be precise, and they still do today. The archived data used by Levitus, and a plethora of other oceanographers, were taken by me, and a whole cadre of students, post-docs, and seagoing technicians around the world. Those of us who obtained the data, are not going to be snowed by the claims of the great precision of “historical data found stored in some musty archives.”
…Yet buoys’ data is adjusted upwards to align with bucket, er,
data?
Beth
You may remember my article here
http://judithcurry.com/2011/06/27/unknown-and-uncertain-sea-surface-temperatures/
The idea that anyone can believe that global SST’s are known to fractions of a degree back to 1860 is completely baffling to me.
They are not precise measurements with any irrefutable wider value to science until probably the 1960's or even the Argo era. That is not to say that some scientific expeditions measuring SST's don't have merit, such as the Challenger and those using reversing thermometers with a rigid methodology of sampling, but other than that there are just too many variables in the mix to have any confidence in the figures. I am surprised John Kennedy, a fine scientist, has not popped up to defend them.
tonyb
So presumably neither Mosh nor Judith would use the satellite record for sea level change? That would be wise, as it's a terrible and inaccurate system.
Tonyb
Tonyb,
Satellites aren’t flying too high to go under the AGW bus.
Andrew
The CATO post puts this tempest in a teapot into an interesting perspective:
“And finally, even presuming all the adjustments applied by the authors ultimately prove to be accurate, the temperature trend reported during the “hiatus” period (1998-2014), remains significantly below (using Karl et al.’s measure of significance) the mean trend projected by the collection of climate models used in the most recent report from the United Nation’s Intergovernmental Panel on Climate Change (IPCC). “
Yup. 0.1C/decade is not significant. It’s noise.
That’s right! 1 deg. per century (assuming the Karl BS passes the laugh test) is not worth destroying the world economy for.
What a laugher. The 30-year trend is .165C per decade. Wait until the El Nino is in the books. It’s going to be over .17C by the end of 2015. If the PDO is in a positive phase, it’ll be well over .2C per decade by 2020.
The pause is paws up. Just like I said it would be.
I meant to put this link in the previous comment:
http://object.cato.org/sites/cato.org/files/wp-content/uploads/karlgraph.jpg
The future is certain. It’s the past that keeps changing.
– Russian Parable
THE ‘NEW’ LOOK:
http://cdn.arsteshnica.net/wp-content/uploads/2015/06/noaa_karl_etal-640×486.jpg
This new paper raises once again the issue of data adjustments, which turn out rather to be data manipulation and corruption. But besides the questionable validity of the new data adjustments, another question should also be addressed regarding the constant "fluctuations" of those adjustments.
1/ With HADCRUT4 data, the Hadley Centre has introduced new adjustments compared to previous HADCRUT3 data series :
http://www.woodfortrees.org/plot/hadcrut3vgl/from:1970/mean:60/offset:0.025/plot/hadcrut4gl/from:1970/mean:60
Curiously, corrections are always in the warming direction… But why?
Has anyone assessed the validity of HadCRUT4 adjustments compared to HadCRUT3 ones?
I guess the answer is, unfortunately, that nobody knows except those who have defined the adjustments.
2/ Data adjustments appear to be obviously "fluctuating" and indeed corrupted.
When looking at US Temperature record as published in Hansen et al 1999 (graph fig. 6) :
http://pubs.giss.nasa.gov/docs/1999/1999_Hansen_etal_1.pdf
Data were also available at the following address but NASA has recently deleted the file… Guess why…
http://www.giss.nasa.gov/data/update/gistemp/graphs/FigD.txt
– Warmest year is 1934
– 1998 only ranks 4th after 1934, 1931 and 1921…
In Hansen et al 2001, new adjustments made 1998 statistically tied with 1934.
This situation has been maintained up to 2007
http://icecap.us/images/uploads/NEW_RANKINGS.pdf
In 2007, NASA GISS made a fruitless attempt to have 1998 oust 1934 as Hottest U.S. Year.
In [Link]
The “trick” has been discovered by McIntyre and NASA had to step back.
But the record published in 2012 finally reached the objective of ousting 1934 as warmest year in the US :
http://data.giss.nasa.gov/gistemp/graphs_v2/Fig.D.txt
Compared to the 2000 publication :
– 1998 average temperature anomaly has been adjusted by +0.35°C
– 1934 average temperature anomaly has been adjusted by -0.21°C
NASA also deleted those inconvenient data, but the resulting curve can be seen in Hansen et al 2010.
3/ looking at individual weather stations, one can also observe significant and questionable adjustment evolutions :
A few examples of how to hide the inconvenient truth that temperatures have been warmer in the past, despite a small anthropogenic signature:
Station Data: Reykjavik (64.1 N,21.9 W)
– Old adjustments : the 30’s are clearly warmer than current period.
– New adjustments : Current period becomes much warmer. But why ?
Station Data: Punta Arenas (53.0 S,70.8 W)
– Old adjustments : 1st half of 20th century is clearly warmer than current period.
– New adjustments : Current period is slightly warmer ??
Conclusion :
Temperature data sets are manipulated and corrupted by questionable adjustments and nice “tricks” whose aim is “to hide the decline”.
When observations’ data do not support your favorite AGW theory then modify the data…
That’s climate junk science.
As long as global warmers keep adjusting historic temperatures we don’t need to do anything about global warming or CO2.
If it was really warming the historic temperatures wouldn’t need to be played with because the current temperatures would be warmer.
If they ever stop adjusting temperatures and it gets 2 °C warmer we might have a problem worth addressing. It isn’t even worth starting to track the trend as long as they keep playing footsie with historic temperatures.
It is unfathomable how climate scientists view historic data as a moving target.
Dear Eric,
Some of the differences between HadCRUT3 and HadCRUT4 are due to the way that the SSTs were adjusted.
If you want to see the overall effect of the SST adjustments on the SST data, see the figures from the HadSST3 paper here:
http://www.metoffice.gov.uk/hadobs/hadsst3/diagrams.html
Anyone is free to download our data sets and do their own assessments. HadSST3, for example, can be found here:
http://www.metoffice.gov.uk/hadobs/hadsst3/
The raw data, also publicly available, are from ICOADS:
http://icoads.noaa.gov/
If you read the HadSST3 paper, you’ll find some tests of the SST adjustments. These are by no means exhaustive, and we’d welcome more. We tried various things: separating out different subsets of the data to see if they were consistent, comparing to air temperatures made at sea, varying different assumptions in the analysis to see if the answer depended critically on any of them.
Since the paper was published we've also compared the SST data to oceanographic measurements. Oceanographic measurements are likely of higher quality, having been made (for the most part) on research cruises:
http://onlinelibrary.wiley.com/wol1/doi/10.1029/2012GL052975/abstract
In the modern period, comparisons have also been done with the best available SST data set derived from satellite sources:
http://onlinelibrary.wiley.com/wol1/doi/10.1029/2012JC008400/full (Figure 14).
The new ERSST paper with its comparisons of various SST data sets is an important part of the ongoing understanding of uncertainty in the historical SST record.
What these studies show is that there are uncertainties in these records, but that they are limited.
Best regards,
John
Just call it what it is- Karl and NOAA started with the result, and then worked backwards from there to devise the method. As a scientist, I find this appalling.
Good grief.
Today’s conditions are what they are. It makes no difference whether yesterday’s temperature was higher, lower, or the same. That was yesterday. Adjust history all you like. It won’t alter the future at all, meteorologically speaking.
All of this kerfuffle is designed to support the wondrously and inaccurately named greenhouse effect, which has never been demonstrated to exist in the presence of skeptics or unbelievers. Rather like Uri Geller’s spoon bending ability, I suppose.
But just for one mad moment assume that the “surface” is warming. Combined with increased CO2 levels, this will result in more plant life, presumably leading to increased biodiversity, and the general greening of the planet.
Who could possibly complain about more food, more pasture, increased useable land? The anti fossil fuel “back to the stone Age” crowd, that’s who!
Bah! Humbug!
I’m cautious about the opinions of Thomas Karl et al. After all, aren’t they just a bunch of climate pause deniers?
ianh001 | June 5, 2015 at 8:57 pm | Reply
I’m cautious about the opinions of Thomas Karl et al. After all, aren’t they just a bunch of climate pause deniers?
http://onlinelibrary.wiley.com/doi/10.1029/1999GL010877/abstract
The record breaking global temperatures of 1997 and 1998: Evidence for an increase in the rate of global warming?
Authors:Thomas R. Karl, Richard W. Knight, Bruce Baker
First published: 1 March 2000
He has a track record of being wrong about the pause, and increasing global warming.
Where’s the Amazing Randi on this paper? Is he skeptical? Or did Bill Nye give him the Evil Eye?
I wonder why the hair dye guy with the bow tie wasn’t a co-author of this remarkable “study”
I think Tiny Tim may have been a ghost co-author.
My attempted communication with Dr Karl in the past has remained without response. Tom Karl could start his own blog, as Judith has done.
And if he had guts enough he wouldn't censor it either.
> And if he had guts enough he wouldn't censor it either
Are you suggesting that Judy’s should become unmoderated?
“Moderated” in the sense of Real Climate, Skeptical Science, Think Progress et al. means, comments from me will not appear
So as long as your comments appear, anything goes.
But RC moderation indeed.
I don't think any real, live, Red-Blooded, Freedom Fightin', Gun Totin' (that's not me) Deniers had the opportunity to review this animal before it hit the press, and I don't think the editors will ever send a proposed paper to one.
good night
Global temperature is like a beautifully presented fruit salad with rotten apples to cover the taste of rotten oranges which cover the taste of rotten apples.
The good thing is that a global temperature, if ever ascertained, would be trivial. Climate being a bewildering flux ‘n all. The Australian Alps are presently carpeted in snow. The globe didn’t cool to cause that. It’s just been snowing down here, that’s all.
In fact, there's been a marked cold snap in much of Oz. That doesn't mean we won't soon be copping a perilous spring with hot inland wind fanning fires. Because short term trends end just like long term trends end… which is why they call 'em trends.
Trends end.
Today’s Australian: “Two of Victoria’s largest snow resorts … are experiencing their earliest snow dumps [> 25 cm] since the record-breaking year of 2000, while at Thredbo in NSW pre-season conditions have also been almost perfect.” … The BOM declared an El Nino event on 12 May; their “climate prediction manager says that there is still reason to be cautious about the year ahead. … Dr Watkins said ‘but having said that we have low humidity and quite cool nights, which is really good for snow making.’”
So we don’t knows if it snows, but we knows we’re headin’ for Armageddon’. I don’t think that I’ll cancel my 2090 skiing holiday just yet.
Faustisnowman
Oops, Thermageddon’.
Pingback: Road to Paris, e la pausa non c’è più | Climatemonitor
Pingback: Global warming pause too much to hope for - Weekly Observer
Robert Way
Several days ago I asked you a question.
'We can clearly see other examples of Arctic amplification, for example the 1920 to 1940 period, the 1818 to 1860 era, the first few decades of the 16th century and of course the extended warming during the Viking period.
As enhanced CO2 couldn't have been responsible, what do you think could have been their causes?'
I note your Inuit background and have read your profile with interest, so good luck with the PhD. Have any of your studies required you to look at the history of the Arctic? If so, you will be well qualified to answer my question. The Arctic seems to change dramatically on a regular basis; why is today any different to past episodes?
tonyb
Thank you Tony for your work. You ask some of the most difficult questions posed at this site. I hope Mr Way attempts to rise to the challenge.
Buried beneath the permafrost I guess, Tony.
Multiple? OK, five will do that show significant human-caused sea level rise.
Tonyb
One major difference, in 1910 to 1940 the PDO progressively grew in intensity whereas from 1983 to 2003 the PDO progressively decreased in intensity.
Gee, and I suppose that lends more support to your absurd argument for AGW.
Yes, it does. Modern warming fought the PDO all the way to 2006. The AMO does very little. We’re about to get very hot. The May anomaly looks like it will exceed .80C, and June is roasting so far.
RSS is out for May. 0.31C less than half your wild ass guess.
Read it and rejoice, global warming is not a problem:
http://www.reportingclimatescience.com/uploads/pics/RSS_TS_channel_TLT_Global_Land_And_Sea_v03_3_01.png
Who gives a squeezed rat's ring about what RSS is out? At the surface, it's garbage, and anybody with an ounce of genuine skepticism should "sense" it.
wandering around lost in the dark in need of a new idea for how to do it
JCH | June 6, 2015 at 9:08 am | Reply
One major difference, in 1910 to 1940 the PDO progressively grew in intensity whereas from 1983 to 2003 the PDO progressively decreased in intensity.
http://cses.washington.edu/cig/figures/pdoindex_big.gif
Well…
I'm not sure I would characterize the PDO as decreasing in intensity; it has changed sign. Between the drop in solar and the decline in the PDO, it could be argued that it is going to cool and perhaps even get cold.
You really don’t have much choice.
More low numbers; fewer high numbers = less impact on the GMST.
Wrong JCH.
Lol. It cannot be wrong. When the PDO index changes trend direction, the GMST follows it like an obedient puppy. Except around 1983. It is you guys who are wrong.
David
I am sure that jch is equally sceptical about the merits of sea level measurements derived from satellites.
Tonyb
I’ve looked at it. There are multiple lines of evidence for sea level rise.
Gravity
Altimetry
Tide gauges
Ice melt
Water storage
Paleo
OHC
physics of thermal expansion
Whatever.
How many times has it been claimed that there has been no warming for 18 years, sometimes more, because of RSS? It's been obviously wrong for years.
JCH | June 11, 2015 at 3:21 pm |
I’ve looked at it. There are multiple lines of evidence for sea level rise.
This claim is dumb.
The tidal gauges are still in use.
If the rate of change of the tidal gauge aggregate hasn’t changed, the sea level rise hasn’t changed.
We don’t care if the subsidence of inland areas due to ground water/oil extraction throws the geoid off.
If the satellites aren’t calibrated to the tidal gauges it is pointless to do satellite measurement of sea level. We should treat satellite measurement of sea level as a different measurement since it is pretty obvious it isn’t measuring sea level.
However, if the satellite sea level is divided by two it is roughly comparable to the real sea level rise. Perhaps the satellite teams should be required by law to divide their sea level anomaly by two so that their result correlates with real measurements, giving the satellite measurements some physical significance by proxy.
JCH said “I’ve looked at it. There are multiple lines of evidence for sea level rise.”
Yes – the sea level has risen 120 meters over the last 20,000 years.
The average sea level rise per year over that time is 6mm/year.
Currently we are only rising at 1/2 the average rate.
Perhaps humans adding CO2 to the atmosphere has slowed down sea level rise (grin).
But seriously, are you talking about sea level rise or sea level rise caused by humans? You have to subtract the natural sea level rise from the total to obtain the human-caused sea level rise. I have never seen a convincing analysis for that number (or any at all, for that matter).
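The arithmetic in the comment above is easy to verify; the numbers are taken as stated in the comment (120 m of post-glacial rise over roughly 20,000 years, and the commonly quoted ~3 mm/yr satellite-era rate), not independently derived:

```python
# Back-of-the-envelope check of the post-glacial average sea level rise rate
total_rise_mm = 120 * 1000   # ~120 m since the last glacial maximum, in mm
years = 20_000               # approximate elapsed time in years

avg_rate = total_rise_mm / years
print(avg_rate)              # 6.0 mm/yr average over the whole period

modern_rate = 3.0            # mm/yr, the commonly quoted satellite-era figure
print(modern_rate / avg_rate)  # 0.5 -- about half the long-run average
```

This confirms only the comparison of averages being made in the comment; it says nothing by itself about attribution, since the rise was heavily front-loaded into the deglaciation rather than spread evenly.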