by Judith Curry
Two new papers that discuss uncertainty in surface temperature measurements.
The issue of uncertainty in surface temperature measurements is getting some much-needed attention, particularly in the context of the HadCRUT datasets. For context, some previous Climate Etc. posts on this topic:
- On adjustments to the HadSST3 data set
- Critique of the HadSST3 uncertainty analysis
- Unknown and uncertain sea surface temperatures
The first paper, by John Kennedy of the UK Met Office, provides a comprehensive and much-needed uncertainty analysis of sea surface temperature measurements and analyses:
A review of uncertainty in in situ measurements and data sets of sea-surface temperature
John Kennedy
Abstract. Archives of in situ sea-surface temperature (SST) measurements extend back more than 160 years. Quality of the measurements is variable and the area of the oceans they sample is limited, especially early in the record and during the two World Wars. Measurements of SST and the gridded data sets that are based on them are used in many applications so understanding and estimating the uncertainties are vital. The aim of this review is to give an overview of the various components that contribute to the overall uncertainty of SST measurements made in situ and of the data sets that are derived from them. In doing so, it also aims to identify current gaps in understanding. Uncertainties arise at the level of individual measurements with both systematic and random effects and, although these have been extensively studied, refinement of the error models continues. Recent improvements have been made in the understanding of the pervasive systematic errors that affect the assessment of long-term trends and variability. However, the adjustments applied to minimize these systematic errors are uncertain and these uncertainties are higher before the 1970s and particularly large in the period surrounding the Second World War owing to a lack of reliable metadata. The uncertainties associated with the choice of statistical methods used to create globally complete SST data sets have been explored using different analysis techniques but they do not incorporate the latest understanding of measurement errors and they want for a fair benchmark against which their skill can be objectively assessed. These problems can be addressed by the creation of new end-to-end SST analyses and by the recovery and digitization of data and metadata from ship log books and other contemporary literature.
Published in Reviews of Geophysics, link to abstract and full manuscript.
Excerpts:
In using SST observations and the analyses that are based on them, it is important to understand the uncertainties inherent in them and the assumptions and statistical methods that have gone into their creation. In this review I aim to give an overview of the various components that contribute to the overall uncertainty of SST measurements made in situ and of the data sets that are derived from them. In doing so, I also aim to identify current gaps in understanding.
Section 2 provides a classification of uncertainties. The classifications are not definitive, nor are they completely distinct. They do, however, reflect the way in which uncertainties have been approached in the literature and provide a useful framework for thinking about the uncertainties in SST data sets. The uncertainties have been tackled in ascending order of abstraction from the random errors associated with individual observations to the generic problem of unknown unknowns.
Throughout this review the distinction will be made between an error and an uncertainty. The error in a measurement is the difference between some idealized “true value” and the measured value and is unknowable. The uncertainty of a measurement [is defined] as the “parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand”. This is the sense in which uncertainty is generally meant in the following discussion. This is not necessarily the same usage as is found in the cited papers. It is common to see the word error used as a synonym for uncertainty such as in the commonly used phrases standard error and analysis error.
Broadly speaking, errors in individual SST observations have been split into two groupings: random observational errors and systematic observational errors. Although this is a convenient way to deal with the uncertainties, errors in SST measurements will generally share a little of the characteristics of each.
Random observational errors occur for many reasons: misreading of the thermometer, rounding errors, the difficulty of reading the thermometer to a precision higher than the smallest marked gradation, incorrectly recorded values, errors in transcription from written to digital sources and sensor noise among others. Although they might confound a single measurement, the independence of the individual errors means they tend to cancel out when large numbers are averaged together. Therefore, the contribution of random independent errors to the uncertainty on the global average SST is much smaller than the contribution of random error to the uncertainty on a single observation even in the most sparsely observed years. Nonetheless, where observations are few, random observational errors can be an important component of the total uncertainty.
Systematic observational errors are much more problematic because their effects become relatively more pronounced as greater numbers of observations are aggregated. Systematic errors might occur because a particular thermometer is mis-calibrated, or poorly sited. No amount of averaging of observations from a thermometer that is mis-calibrated such that it reads 1 K too high will reduce the error in the aggregate below this level save by chance. However, in many cases the systematic error will depend on the particular environment of the thermometer and will therefore be independent from ship to ship. In this case, averaging together observations from many different ships or buoys will tend to reduce the contribution of systematic observational errors to the uncertainty of the average.
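A minimal numerical sketch of this distinction may help (the 1 K noise level, the 1 K calibration bias, and the observation count below are made-up values for illustration, not figures from the paper): averaging beats down the independent random component roughly as 1/sqrt(N), while the shared systematic component survives untouched.

```python
import numpy as np

rng = np.random.default_rng(42)

true_sst = 15.0     # idealized "true value" in deg C (illustrative only)
n_obs = 10_000      # number of observations averaged together

# Random observational errors: independent per measurement (misreading,
# rounding, sensor noise), here with a 1 K standard deviation.
random_err = rng.normal(0.0, 1.0, n_obs)

# Systematic observational error: a shared 1 K mis-calibration, as in the
# thermometer example above. No amount of averaging removes it.
systematic_err = 1.0

avg = np.mean(true_sst + random_err + systematic_err)
print(f"error of the average: {avg - true_sst:+.3f} K")    # ~ +1.0 K, the bias dominates
print(f"random part alone:    {random_err.mean():+.3f} K")  # ~ 0, cancels as 1/sqrt(N)
```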
In Kennedy et al., [2011b] two forms of this uncertainty were considered: grid-box sampling uncertainty and large-scale sampling uncertainty (which they referred to as coverage uncertainty). Grid-box sampling uncertainty refers to the uncertainty accruing from the estimation of an area-average SST anomaly within a grid box from a finite, and often small, number of observations. Large-scale sampling uncertainty refers to the uncertainty arising from estimating an area-average for a larger area that encompasses many grid boxes that do not contain observations. Although these two uncertainties are closely related, it is often easier to estimate the grid-box sampling uncertainty, where one is dealing with variability within a grid box, than the large-scale sampling uncertainty, where one must take into consideration the rich spectrum of variability at a global scale.
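A toy Monte Carlo can illustrate the grid-box case (the within-box variability and observation counts here are assumed purely for illustration): estimating a grid-box average from n scattered observations leaves a residual sampling uncertainty of roughly sigma/sqrt(n), where sigma is the spatial variability within the box.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in grid box: many sub-grid locations whose SST anomalies vary
# spatially with standard deviation sigma (value chosen only for illustration).
sigma = 0.5
field = rng.normal(0.0, sigma, 10_000)
true_box_mean = field.mean()

# Estimate the grid-box average from n random "ship" observations, many
# times over, and watch the sampling uncertainty shrink as n grows.
for n in (1, 5, 25, 100):
    errors = [rng.choice(field, n).mean() - true_box_mean for _ in range(2000)]
    print(n, round(float(np.std(errors)), 3))   # roughly sigma / sqrt(n)
```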
In the context of SST uncertainty, unknown unknowns are those things that have been overlooked. By their nature, unknown unknowns are unquantifiable; they represent the deeper uncertainties that beset all scientific endeavors. By deep, I do not mean to imply that they are necessarily large. In this review I hope to show that the scope for revolutions in our understanding is limited. Nevertheless, refinement through the continual evolution of our understanding can only come if we accept that our understanding is incomplete. Unknown unknowns will only come to light with continued, diligent and sometimes imaginative investigation of the data and metadata.
JC comment: Uncertain T. Monster is VERY pleased by this comprehensive discussion of the uncertainties. The greatest challenges (discussed at length in the paper) are how to assess structural uncertainties in the analysis methods and how to combine all the uncertainties. Any application of these data (including trend analysis) needs to consider these issues.
The second paper attempts to slay the uncertainty monster.
Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends
Kevin Cowtan and Robert Wray
Abstract. Incomplete global coverage is a potential source of bias in global temperature reconstructions if the unsampled regions are not uniformly distributed over the planet’s surface. The widely used HadCRUT4 dataset covers on average about 84% of the globe over recent decades, with the unsampled regions being concentrated at the poles and over Africa. Three existing reconstructions with near-global coverage are examined, each suggesting that HadCRUT4 is subject to bias due to its treatment of unobserved regions. Two alternative approaches for reconstructing global temperatures are explored, one based on an optimal interpolation algorithm and the other a hybrid method incorporating additional information from the satellite temperature record. The methods are validated on the basis of their skill at reconstructing omitted sets of observations. Both methods provide superior results to excluding the unsampled regions, with the hybrid method showing particular skill around the regions where no observations are available. Temperature trends are compared for the hybrid global temperature reconstruction and the raw HadCRUT4 data. The widely quoted trend since 1997 in the hybrid global reconstruction is two and a half times greater than the corresponding trend in the coverage-biased HadCRUT4 data. Coverage bias causes a cool bias in recent temperatures relative to the late 1990s which increases from around 1998 to the present. Trends starting in 1997 or 1998 are particularly biased with respect to the global trend. The issue is exacerbated by the strong El Niño event of 1997-1998, which also tends to suppress trends starting during those years.
Published by the Royal Meteorological Society, link to abstract.
There is a web site with data and metadata [here], and also an explanatory YouTube video.
The Guardian has an extensive article, excerpts:
There are large gaps in its coverage, mainly in the Arctic, Antarctica, and Africa, where temperature monitoring stations are relatively scarce.
NASA’s GISTEMP surface temperature record tries to address the coverage gap by extrapolating temperatures in unmeasured regions based on the nearest measurements. However, the NASA data fails to include corrections for a change in the way sea surface temperatures are measured – a challenging problem that has so far only been addressed by the Met Office.
In their paper, Cowtan & Way apply a kriging approach to fill in the gaps between surface measurements, but they do so for both land and oceans. In a second approach, they also take advantage of the near-global coverage of satellite observations, combining the University of Alabama at Huntsville (UAH) satellite temperature measurements with the available surface data to fill in the gaps with a ‘hybrid’ temperature data set. They found that the kriging method works best to estimate temperatures over the oceans, while the hybrid method works best over land and most importantly sea ice, which accounts for much of the unobserved region.
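For readers unfamiliar with kriging, a bare-bones one-dimensional sketch of the underlying idea follows. This is simple kriging of zero-mean anomalies with a Gaussian covariance; the covariance model and its parameters are invented for illustration and are not those fitted by Cowtan & Way.

```python
import numpy as np

def krige(x_obs, y_obs, x_new, length_scale=1000.0, sill=1.0, nugget=1e-6):
    """Simple kriging (zero-mean anomalies) with a Gaussian covariance.

    A toy sketch only: length_scale, sill and nugget are illustrative,
    not the covariance model used by Cowtan & Way.
    """
    def cov(a, b):
        d = np.subtract.outer(a, b)                 # pairwise separations (km)
        return sill * np.exp(-0.5 * (d / length_scale) ** 2)

    K = cov(x_obs, x_obs) + nugget * np.eye(len(x_obs))
    alpha = np.linalg.solve(K, y_obs)               # kriging weights applied to y_obs
    return cov(x_new, x_obs) @ alpha                # estimates at the unobserved points

# Stations along a 5000 km transect, with a 2000-4000 km gap to infill:
x_obs = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0, 4000.0, 4500.0, 5000.0])
y_obs = np.sin(x_obs / 1500.0)                      # stand-in anomaly field
print(krige(x_obs, y_obs, np.linspace(2000.0, 4000.0, 5)))
```

The estimate at each unobserved point is a covariance-weighted combination of the surrounding observations, which is why the debated question below, over what distances and across what boundaries those covariances remain meaningful, is central.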
Cowtan & Way investigate the claim of a global surface warming ‘pause’ over the past 16 years by examining the trends from 1997 through 2012. While HadCRUT4 only estimates the surface warming trend at 0.046°C per decade during that time, and NASA puts it at 0.080°C per decade, the new kriging and hybrid data sets estimate the trend during this time at 0.11 and 0.12°C per decade, respectively.
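The trends being compared here are ordinary least-squares fits over the stated period, converted to degrees per decade. A minimal sketch of the computation (the anomaly series below is random stand-in data, not HadCRUT4; substituting a real global-mean series would reproduce the quoted 0.046 / 0.08 / 0.11 / 0.12 °C per decade figures):

```python
import numpy as np

def decadal_trend(years, anomalies):
    """OLS linear trend, expressed in deg C per decade."""
    slope_per_year, _ = np.polyfit(years, anomalies, 1)
    return 10.0 * slope_per_year

# Monthly time axis for 1997-2012 and a synthetic anomaly series.
t = np.arange(1997.0, 2013.0, 1.0 / 12.0)
anoms = 0.005 * (t - t[0]) + np.random.default_rng(1).normal(0.0, 0.1, t.size)
print(round(decadal_trend(t, anoms), 3))
```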
These results indicate that the slowed warming of average global surface temperature is not as significant as previously believed. Surface warming has slowed somewhat, in large part due to more overall global warming being transferred to the oceans over the past decade. However, these sorts of temporary surface warming slowdowns (and speed-ups) occur on a regular basis due to short-term natural influences.
The results of this study also have bearing on some recent research. For example, correcting for the recent cool bias indicates that global surface temperatures are not as far from the average of climate model projections as we previously thought, and certainly fall within the range of individual climate model temperature simulations. Recent studies that concluded the global climate is a bit less sensitive to the increased greenhouse effect than previously believed may also have somewhat underestimated the actual climate sensitivity.
This is of course just one study, as Dr. Cowtan is quick to note.
“No difficult scientific problem is ever solved in a single paper. I don’t expect our paper to be the last word on this, but I hope we have advanced the discussion.”
To give a flavor of twitter discussion:
Dana Nuccitelli: This new study kills the myth of the global warming pause
John Kennedy: The irony is that the study being used to bash HadCRUT4 assumes that HadCRUT4 is correct where we have data.
The paper is getting plenty of media attention; I’m also getting queries from reporters.
JC assessment
Let’s take a look at the three methods they use to fill in missing data, primarily in Africa, the Arctic, and the Antarctic.
- 1. Kriging
- 2. UAH satellite analyses of surface air temperature
- 3. NCAR NCEP reanalysis
They state that most of the difference in their reconstructed global average comes from the Arctic, so I focus on the Arctic (which is where I have special expertise in any event).
First, Kriging. Kriging across land/ocean/sea ice boundaries makes no physical sense. While the paper cites Rigor et al. (2000) that shows ‘some’ correlation in winter between land and sea ice temps at up to 1000 km, I would expect no correlation in other seasons.
Second, UAH satellite analyses. Not useful at high latitudes in the presence of temperature inversions and not useful over sea ice (which has a very complex spatially varying microwave emission signature). Hopefully John Christy will chime in on this.
Third, re reanalyses in the Arctic. See Fig 1 from this paper, which gives you a sense of the magnitude of grid point errors for one point over an annual cycle. Some potential utility here, but reanalyses are not useful for trends owing to temporal inhomogeneities in the datasets that are assimilated.
So I don’t think Cowtan and Wray’s analysis adds anything to our understanding of the global surface temperature field and the ‘pause.’
The bottom line remains Ed Hawkins’ figure that compares climate model simulations with surface observations for the regions where observations exist. This is the appropriate way to compare climate models to surface observations, and the outstanding issue is that the climate models and observations disagree.
Is there anything useful from Cowtan and Wray? Well, they raise the issue that we should try to figure out some way to obtain the variations of surface temperature over the Arctic Ocean. This is an active topic of research.
Or perhaps Cowtan and Wray found Trenberth’s missing heat in Santer’s workshop?
How can we celebrate scientists who claim to have found a way to break through all the noise that in the statistics of AGW is represented by huge error bars — to detect a human signal in the greenhouse warming effect — when humanity’s contribution is immeasurably miniscule at best? Sure, sure, charlatans are persuasive. The reason a human signal due to human activities is impossible to detect within the natural variation of the continually changing climate is that there is no link.
Always thought this would be a more prominent point – CO2 lags warming because natural warming creates more life, which leads to more CO2, etc.
http://hockeyschtick.blogspot.com/2013/11/new-paper-finds-ice-core-co2-levels-lag.html
Pencil whipping the data results in more warming than we thought.
What a shocker. Has applying corrections to bad or missing data in consensus climate science ever resulted in less warming than we thought? Innocent mistakes should go both ways. If they don’t, they’re probably not so innocent. Pencil whipping of data is a notorious way to massage it into giving a desired result instead of a result that is closer to the truth.
Wag…I disagree that humanity’s CO2 contribution is miniscule!!! Consider how many cubic miles of coal have been burned over the last century along with the billions of barrels of petroleum (for locomotion and heating) while forests have been cut down to create crop land. Current estimates of CO2 creation exceed CO2 removal by billions of tons per year.
Your examples are like a match compared to the energy of the Sun that global warming alarmists ignore.
Wag…the estimate is that 9 petagrams of CO2 are added each year, but only 3 petagrams are removed by natural vegetation. So, how can you think the atmospheric CO2 levels are not increasing??
My second-hand CO2 is running about 40,000 to 53,000 ppm (parts per million)–i.e., 4% – 5.3% Carbon dioxide (wiki). By comparison, just 0.0387% of the air we’re breathing in is CO2–i.e., about 387 ppm or 0.000387ths by volume.
If we were planning a mission outside the solar system we’d want lots of CO2 to grow healthy plants for our journey to the stars–e.g., growers keep CO2 levels at 1,000 to 2,000 ppm in Earthly greenhouses, which is about the level you’d find in a lecture hall full of students and pretty much what has been normal over most of Earth’s 550 million year history. Plants begin to die below 150 ppm. The Sahara wasn’t always a desert. Dr. Will Happer testified before the U.S. Senate that, “the planet is currently starved of CO2, and has been so starved for several million years.”
Gaia has a real problem, a whole plant kingdom with the vicious and unsustainable habit of virtually permanently sequestering CO2. How did she get so lucky as to develop an animal who could unsequester CO2?
========================
True, the “new approach to environmentalism,” according to Dr. Patrick Moore, “requires embracing humans as a positive element in evolution rather than viewing us as some kind of mistake.”
Wag…as usual, you ignore the big picture and quote useless info. What I stated pertains to the global ecosphere, not a backyard greenhouse!!
If the ‘global ecosphere’ acts nothing like a greenhouse, why confuse things by using it as an example? The Left does just that because it’s just too good not to use as an analogy — even if the Earth really doesn’t work that way — because it fits their narrative — that modernity is heating up the globe (AGW). The Left is pushing AGW for political purposes, even if it means pushing crazy ideas like a backyard BBQ and the SUV you used to pick up the charcoal is contributing to global warming, raising the seas, stirring up more and bigger hurricanes and burying warmth deep in the oceans where it cannot be measured but nonetheless will someday arise to the surface and consume us all like a fiery Phoenix.
How altogether fitting that your ad hom attack should be your final bonfire of the vanities. All genius climate alarmists should try this experiment–
Start a bonfire in your backyard and stare into the flames for an hour.
Ok, now…
Go outside on a cloudless day and stare into the sun for 5 minutes.
You’re blind now, right?
Waggie…you have a serious problem with reality!!! You brought up the greenhouse, not me! And YOU seem to have a problem as you label anyone who disagrees with you as ‘the Left’. How long have you been sooo confused??
So… the atmospheric CO2 levels increasing at a net rate of 6 “petagrams” a year (as you say) is not as worrisome as climate alarmists using the analogy of a greenhouse wish to portray? You have some other reason to be alarmed by the increase? I agree that the increase is so relatively small we must measure it in parts per million but that to you was just meaningless data. What is it that you fear about an increase in atmospheric CO2 if it isn’t heating up the globe through a ‘greenhouse’ effect?
Waggie…you remind me of Scarecrow of Oz fame. Anyway, the answer to your question: “What is it that you fear about an increase in atmospheric CO2 if it isn’t heating up the globe through a ‘greenhouse’ effect?”
Is found in:
scientificamerican.com/article.cfm?id=new-york-state-begins-planning
You are afraid of rising seas. Got it. I heard the election of Obama stopped that.
Surely you mean Santa’s workshop.
It is a double pun…
If not, Santer will beat the heat out of them!
Back to the trust issue. How come the changes when they adjust data are always in the same direction: increased warming today, and yesterday was actually cooler?
Same link from my trust comment.
http://stevengoddard.wordpress.com/data-tampering-at-ushcngiss/
Scott
Perhaps the boats took more measurements when they were travelling Eastwards….
Spot on old chap! Port Out Starboard Home, QED and all that. Going out East, one was all fired up on meeting the fellow travellers, sipping champagne, eh what. Some were learning the intricacies of quoits and some were becoming naturalists measuring the sea temperature. Coming home it was all bridge, deckchairs, malaria and where the hell is my gin and tonic?
:)
they aren’t data adjustments. It’s inferring missing data.
1. The methodology of inferring the surface from SAT data has already been shown to be useful by no less than McIntyre. See O’Donnell and McIntyre.
2. We know that HADCRUT underestimates the warming at the pole from other surface datasets that have more complete coverage. The only question is how badly biased HADCRUT is. Way takes a good first step toward answering that question.
‘they aren’t data adjustments. It’s inferring missing data’
Quite. Infilling missing data in most scientific traditions is a no-no; in my own field it’s a felony.
Some of us predicted that the pause would end when the data was reanalyzed.
In most sciences inferring missing behavior and data gets you a Nobel prize. Einstein and Brownian motion. Bohr and the photon. Etc.
DocMartyn whines that it’s not fair!
“Einstein and Brownian motion. Bohr and the photon”?
Didn’t do history of science, did you? Einstein published a theoretical analysis of movement in liquids that describes Brownian motion. However, Einstein was unaware that Brownian motion, as he described it, had been observed.
“He wrote in May 1905 to his friend and discussion partner Conrad Habicht a famous letter in which Einstein listed four of the five pathbreaking papers on which he was working during his miracle year. The paper on Brownian motion was, after the paper on the light quantum and the dissertation on the determination of molecular dimensions, the third on Einstein’s list, before the relativity paper, which he had only outlined at that time:
“The third proves that, on the assumption of the molecular theory of heat, bodies on the order of magnitude 1/1000 mm, suspended in liquids, must already perform an observable random motion that is produced by thermal motion; in fact, physiologists have observed motions of suspended small, inanimate, bodies, which motions they designate as “Brownian molecular motion.”
http://www.physik.uni-augsburg.de/theo1/hanggi/History/Renn.pdf
This is rather like the discovery of cosmic microwave background radiation by Robert Woodrow Wilson and Arno Allan Penzias, where the theoretical work was being done by Robert Dicke.
Think of all the unfinished symphonies that would still be unfinished if somebody didn’t infer the endings. Inference has its place.
steven, they are inferring missing data under the assumption that the difference between two measurements is predictable. I think they have a small issue with the Arctic due to inconsistent stratospheric warming events, which should create one mongo temperature inversion. That might be worth checking into.
If they are doing something like co-kriging–I suspect this is the case–then they would’ve developed a quantitative model relating the two variables including associated uncertainties. Both the model and those uncertainties could be incorporated into the final kriging calculations.
(The difference in the two measurements is predictable…it just may be lousy (or good). I suspect/hope the limits of predictability are taken up in the paper. That predictability seems to be a key aspect.) This one will be interesting to follow….
Capt.
Nothing is wrong with using present data to create an estimate for missing data. Especially if you do cross validation and if you have a bias if you don’t infill.
And I didn’t see you or other skeptics object when McIntyre, O’Donnell, Id and Nic Lewis used a similar method to improve on Steig.
And nobody bitches when tonyb uses diaries to infill missing temps in CET.
mwgrant, “(The difference in the two measurements is predictable…it just may be lousy (or good). ”
The thing with polar SSW events is that the difference is inconsistent as in reversing phase. I think that can be overcome, but I don’t think it is easy.
Steven, “And I didn’t see you or other skeptics object when McIntyre, O’Donnell, Id and Nic Lewis used a similar method to improve on Steig.”
I didn’t bitch, I just thought that since the Antarctic is out of phase more often than in phase it was an exercise in futility :) Especially when the change is in the -40C temperature range, where each anomaly should count for 50% of the “average” anomaly.
Nothing is wrong with using present data to create an estimate for missing data. Especially if you do cross validation and if you have a bias if you don’t infill.
It’s a legitimate approach; however, there is a significant distance in skill levels between Ruzmaikin and Feynman or Kravtsov et al. and Cowtan and Ray.
http://www.ocean-sci.net/7/755/2011/os-7-755-2011.html
The Wasserstein distance being around 10 DOF
It’s Way. Robert Way.
Mosher says Way does solid work. Presumably that means not like Mann or Lewwhatever. Perhaps we could get a more detailed discussion of the issues at that moderated blog you’ve participated in. (I don’t remember the URL).
I’m not that impressed by the “pause” anyway, or by “global averages”, since the relative importance of temperatures at different times and places probably varies widely and we don’t know how. But if the observed Arctic temperature field can be filled in better, that would be a valuable accomplishment, useful in validating models, even if not relevant to “global warming”.
I know Robert does first-rate work because we’ve been comparing notes and methods and code for well over a year. At one point we spent about 3 months looking at Labrador data from Environment Canada and BEST. He really likes to get down in the weeds.
He’s not your typical desk jockey and does real live field work placing sensors in remote locations.
So, I trust his work. Why? Because I’ve watched him work, watched him discover his own mistakes and my mistakes, and I appreciate his attention to detail.
Of course, folks should double and triple check, but he’s pretty damn solid.
That’s all right, Steven, what I was hoping for seems to be happening here.
> Lewwhatever.
That’s Lew. Simpliciter.
But that doesn’t express my disdain.
“…doesn’t express my disdain.”
Actually, it’s hard to express disdain in a way that satisfies. It’s so viscerally perceived that it seems underserved by mere words. At least to me.
Spitting would work.
> Spitting would.
Go for it, Poker. Don’t forget to wipe your screen with a good lint cloth.
If your screen needs more care, use ammonia or alcohol.
“Spit on your monitor”
I generally sublimate my disdain through fasting and prayer. Also coloring books.
On the unknown unknowns, I couldn’t suppress a chuckle when I read (in the submitted document):
” Donald Rumsfeld memorably divided the world of knowledge into three quarters:….”
I wonder if he intended it to be that funny, and did it make it into the print version?
The John Kennedy paper that is.
“John Kennedy: The irony is that the study being used to bash HadCRUT4 assumes that HadCRUT4 is correct where we have data.”
Pretty much says it all.
Kennedy helped. See acknowledgements.
Steven, being correct where you have data is good, but anytime you have to create data you have potential problems. “Global” surface temperature is not a particularly good metric because “global” temperature is not a particularly reliable indication of “global” energy. Tweaking GMT in regions where a degree of temperature anomaly is equivalent to half a unit of energy for an “average” temperature anomaly is not exactly where I would focus much effort.
Then again, that’s just me.
I believe that should be Robert Way not Robert Wray.
The Cowtan & Way study agrees with the Wyatt and Curry Stadium Wave hypothesis in that natural variability can explain much of the observed variance in global temperatures. I see this as a positive for Wyatt & Curry.
While there may be uncertainty in the global SST, there is no doubt that saving energy makes cents (pun intended).
Check out my entry in the Biggest Energy Saver contest where I compete with every electric customer of Texas’s largest grid operator, ONCOR electric.
http://www.biggestenergysaver.com/vote/
My entry is labeled Jack S.
Note: My solar array was completely owner financed, owner designed and made with 100% USA materials, labor and includes no tax credits or subsidies.
No doubt that this is a plus for the Stadium Wave hypothesis. The discrepancies between the main temperature time series of gistemp and hadcrut are vanishing so that the underlying variability can be focussed on, and the uncertainty on that reduced.
It’s the same as having two clocks with different times. Why stand for that?
Jack,
Thanks for your reasoned posts on HBB. No longer on that site but will try to vote for you in the BES contest. Not on social media either, though looking for a work-around.
Keep up the good fight.
ahansen
Dear Dr Curry
Thank you for your comments. We indeed hope that one of the results of our paper will be to stimulate a vigorous discussion in this area.
With respect to kriging across land ocean boundaries, we note that this is a problem in the paper. Can I draw your attention to our update memo in which we test separate reconstruction of the land and ocean data before blending, which is in our view a better approach. To do this properly would require access to the HadCRUT4 land ensemble which is not currently distributed, but with the CRUTEM4 data (which lacks some corrections) the results of blending pre- or post-reconstruction is almost indistinguishable, even under different ice-coverage assumptions. (There is no reason why this must be the case, it is a result of the distribution of the unobserved regions). Dynamically changing ice is more difficult, and you can’t do it with anomalies as you don’t know what kind of bias you introduce when changing a cell from land to ocean, so we’ll have to leave that problem to the BEST team.
Most interesting is the issue of the UAH data over Antarctica. We’ve recently been looking at this with respect to both Vostok and the Bromwich 2012 Byrd reconstruction. Byrd is particularly interesting – it sits on a cell boundary and is remarkably well modelled by the cell to the north in the hybrid reconstruction. The cell to the south models the year-to-year variations, but not the long term trend. We’ve made some preliminary analysis of what is going on based on differencing North-South transects in the UAH data. Some regions show no significant changes, whereas others show large changes in either direction around 2000. I hope to write this up as another update, and maybe Dr Christy will be able to shed more light on the issue, although I’m afraid everything takes a long time when you’re doing it in your spare time.
So it may be that kriging is a better approach for Antarctica, especially with remediated data from some of the isolated stations – Byrd is critical here, and I want to do some detailed comparisons with BEST too. Against that, the holdout tests actually favour the hybrid approach for most of the existing station locations, including the SP.
Having said all of that, the difference between the hybrid and kriging reconstructions of Antarctica is only really significant around 1998, so it doesn’t greatly affect our conclusions. And the Arctic is sufficiently small that the two reconstructions are very similar. Most of the Arctic coverage bias also arises in the NH winter, when the Rigor result is most relevant.
If I may appeal to your own expertise, there would seem to be a parallel between our results and those of Cohen et al 2012 (doi:10.1088/1748-9326/7/1/014007). Do you think there is a plausible connection?
Dr. Cowtan
Does this mean that the GISTEMP measurements should converge to the HadCRUT with kriging adjustments?
http://imageshack.us/scaled/landing/818/2yd.gif
The chart above is an overlay of GISTEMP on top of your Fig S6, along with a simple model assuming variability.
The differences to GISTEMP appear rather minor and are mainly in the last few years.
Amazing work, congratulations.
Dr. Cowtan
Thanks for coming here and engaging. Though I won’t pretend to understand the science, I respect Dr. Curry and am interested in the unfolding dialog.
Dear Kevin, thank you very much for stopping by to engage here. With regards to the UAH data in the polar regions, there are good reasons why RSS doesn’t show data for the polar regions. While I think UAH is hopeless over sea ice, I do see that there could be some sort of a useful signal over the Antarctic continent.
Can you clarify what you see as the parallel between your results and Cohen et al. 2012?
In any event, it is good to see some new perspectives on this topic.
Actually, the Cohen thing is curiosity and an unhealthy obsession with patterns. Our bias being greatest in winter and the boreal winter cool patterns caught my attention. But it’s way beyond my expertise, and our dataset is probably not the right one for this problem, and experts like you and Jennifer Francis are already doing good work on this, so I’ll sit on the sidelines and watch.
Good critical discussion is invaluable to good science. One of the best things we did with this paper was ask for referees who we thought were best qualified to spot the holes in our work. The discussion here is also very helpful to me in shaping a plan of work going forward. I’m sorry I haven’t been able to engage further; it’s a busy teaching time.
I’m pondering posting what I think are the next steps on dealing with the coverage issue. On the one hand, that will tell people what they can expect from us, and also provide a list of interesting projects which we know we can’t take on. On the other hand, I don’t want to influence other people’s approach to this problem too much. It’s a difficult call.
I really like your approach in engaging publicly on this. Debates and critical discussions are what move science forward. I would encourage you to do a post on next steps to encourage discussion and to generate new ideas; I would be happy to post it at Climate Etc also.
Just curious – why isn’t the HadCRUT4 land ensemble available. Is it still a work in progress?
Do you mean CRUTEM4? It is available.
Dr Cowtan
Thanks for your explanations. Two comments and a query:
1) I do not think the Bromwich reconstruction for Byrd station in Antarctica should be relied on. Almost all the large difference between their fast-warming reconstruction and previous reconstructions for the small grid cell containing Byrd (e.g. that by Steig et al., 2009, and that by O’Donnell, myself, McIntyre and Condon in 2011) arose from splicing the early manned Byrd station and the later automatic Byrd weather station records into a single record with nil offset, despite the long gap between them and the different station location and type. No one else had thought fit to do so.
2) For our Antarctic temperature reconstruction, we used AVHRR data from polar orbiting satellites that measured the skin temperature of the Antarctic surface. We found that these displayed sensible spatial correlations although their trends were unreliable. The MSU atmospheric temperature data that UAH mainly deal with is not really suitable as a proxy for near surface air temperature over high altitude snow covered regions, nor (as I recall) as a proxy for sea surface temperature.
3) Can you clarify exactly what satellite data you used, please? I have been unable to tell from the documents that you have made publicly available – perhaps I have missed it.
Why would you spend the time to publish a paper using methods so easily refuted? Just to get a Nuccitelli-like response from the cheerleading squad? I am far from a conspiracy theorist, but papers like this look more like damage-control propaganda than science. The more I learn about academia the less reputable it seems. Judith, you are an exception.
A bit of a retraction. My first sentence is an actual question, not rhetorical. The rest of my post was a bit ham handed and unfair to Dr. Cowtan and Way. My apologies.
They haven’t been refuted.
“Why would you spend the time to publish a paper using methods so easily refuted?”
They haven’t been refuted.
” I am far from a conspiracy theorist”
This is directly contradicted by the evidence of your own post.
“The more I learn about academia the less reputable it seems. Judith, you are an exception.”
This is strictly a consequence of a) you don’t like what they say and b) you do like what she says … and that’s why you take her comments to be a refutation of the paper, despite your own inability to evaluate either.
First, Kriging. Kriging across land/ocean/sea ice boundaries makes no physical sense. While the paper cites Rigor et al. (2000) that shows ‘some’ correlation in winter between land and sea ice temps at up to 1000 km, I would expect no correlation in other seasons.
Response [1] Actually in the paper we show through rigorous cross-validation tests (see Table 1; Table 2; Figure 3) that kriging is an effective approach for estimating temperatures, even across boundaries. However, the hybrid approach performs better than any other method at reconstructing high latitude temperatures (see Figure 3 – cross validation), even at distances of 1650 km. In the case of sea ice this hypothesis has been tested (see Figure 4), where it is shown that kriging from land regions outperforms kriging from ocean cells.
Second, UAH satellite analyses. Not useful at high latitudes in the presence of temperature inversions and not useful over sea ice (which has a very complex spatially varying microwave emission signature). Hopefully John Christy will chime in on this.
Response [2] As indicated in the response to the 1st comment – we have tested the methodology adopted in this study against both held-out observations and against grounded/floating buoys in the Arctic ocean, often located on sea ice. The results of our study indicate that the performance of the hybrid method is reasonable over ice (Figure 4; Figure S5). We also provide an attempt at showing the impacts of changing sea ice conditions on the reconstruction. Although not available in the supplemental information we have also tested the method in Antarctic against the reconciled Byrd station located in one of the most icebound, isolated places on the planet. The results of this test show very reasonable performance with the hybrid method.
Third, re reanalyses in the Arctic. See Fig 1 from this paper, which gives you a sense of the magnitude of grid point errors for one point over an annual cycle. Some potential utility here, but reanalyses are not useful for trends owing to temporal inhomogeneities in the datasets that are assimilated.
Response [3] Since the paper in question was published there have been significant advances in reanalysis methods. In particular, 4-D methods such as those employed by ERA-Interim have been shown to be much more reliable in the Arctic and Antarctic. There is a series of papers by James Screen at Exeter which delves into many of these issues and examines the performance of reanalysis products in both the Arctic and Antarctic. I would suggest that Dr. Curry take a bit of time to have a look at the results of some of these studies. That being said, the paper does not use reanalysis to infill temperatures, nor do we use it with the kriging; reanalysis is simply presented as an additional source of evidence, in addition to satellites, radiosondes and isolated weather stations, which shows that the Arctic is rapidly warming. Physical evidence is also available in the form of sea ice reduction and glacier changes as well as melt records from high Arctic ice caps. There is a wealth of literature supporting the conclusion that the Arctic is warming rapidly, and this relationship (Arctic Amplification) is clear in the paleorecords.
James, thanks for stopping by and engaging here. I agree that there is evidence of warming in the Arctic, however, I remain unconvinced that your methods are verified in any meaningful way for surface temperatures of open water and sea ice in the Arctic Ocean. I see no reference to papers by James Screen in your paper, I don’t know what papers you are referring to. I have recently done a comprehensive literature survey regarding in situ surface temperature and surface flux measurements in the Arctic Ocean (for a grant proposal). I have not seen any recent studies evaluating reanalyses using these data sets.
James?
Joshua?
@curryja: “however, I remain unconvinced that your methods are verified in any meaningful way ”
Can you explain in your own words what you think those methods consist of? That might shed some light on things.
If Arctic temperatures in the modern era, where we have more and better instrumentation than ever before, have been shown to be poorly estimated, what does that say about estimates of Arctic temperatures before the modern era?
I’ve long held that the data anthropogenic warmists need to find hundredths of a degree of warming in the global average simply doesn’t exist even now, and just gets progressively farther from adequate with every year it steps back in time.
In other words, if the Herculean efforts to produce an accurate GAT were mistaken up until this paper was published in 2013, how mistaken are estimates of what was happening one hundred years ago? How can we possibly compare now to then with any confidence? The answer is simple: we can’t. Yet ideologues continue to massage here, interpolate there, and then present the results as proof of their predetermined conclusions. What a load of BS. The higher and deeper it gets stacked, the less scrutiny it takes to see it. That’s why consensus climate science is losing the war for the hearts and minds of everyone outside the field.
As always, write that down.
Yep, we need more data, not more guesstimates.
Whether kriging works over boundaries (by which I assume is meant discontinuities) surely depends on where you sample.
Provided you sample near both sides of the boundary I don’t see how a problem can arise.
Conversely, if the boundary is far from any samples, I don’t see how knowledge of the respective covariances on each side of the boundary can tell you anything at all about where the boundary is other than that it is between certain samples. Covariances between samples on opposite sides of the boundary are presumably useless other than as a diagnostic that there’s a boundary.
Overall, I would advise that commentators read the full paper and the supplemental materials before making assertions as to the applicability of certain methodologies. The cross-validation steps taken in this paper are very important and the paper shows rather clearly that the Hybrid method in particular appears to be fairly robust even at long distances from adjacent cells.
Can I ask a very simple question?
Did you remove individual stations, at random, then calculate the temperature at that site, then compare the real with the calculated?
I believe that such calculations are the only way to know how well your model captures reality and where and how it fails.
Doc, in Cowtan’s first post, he mentions something about holdouts, so they do seem to be checking the corrections on a holdout sample. People sometimes do what you’re suggesting to estimate the covariance matrix of an estimated parameter vector (eliminate each observation, use the remaining N-1 to estimate, get N parameter vector estimates, calculate matrix using them… in the circles I know they call this “the jack-knife estimator”)
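A skeletal version of the leave-one-out idea NW describes, with a trivial nearest-neighbour predictor standing in for the actual infilling method (which in the paper would be the kriging or hybrid reconstruction); the station positions and anomalies below are made-up:

```python
import numpy as np

def leave_one_out_errors(x, y, predict):
    """Drop each station in turn, re-estimate it from the rest,
    and return the held-out prediction errors."""
    errors = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        y_hat = predict(x[mask], y[mask], x[i:i + 1])
        errors.append(y_hat[0] - y[i])
    return np.array(errors)

# Trivial stand-in predictor: copy the nearest remaining station.
def nearest_station(x_train, y_train, x_test):
    idx = np.argmin(np.abs(x_train[:, None] - x_test[None, :]), axis=0)
    return y_train[idx]

x = np.array([0.0, 1.0, 2.0, 5.0, 6.0])        # station positions (arbitrary units)
y = np.array([0.10, 0.20, 0.15, 0.50, 0.55])   # made-up anomalies
print(leave_one_out_errors(x, y, nearest_station))
```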
There is a link to a YouTube video which very briefly describes what they did.
DocMartyn, you can check the video. They removed large areas as a test to see how well their method infilled the missing data regions.
Good suggestion. Unfortunately the article is pay-walled… and they won’t even tell you the price until you provide them with your credit card. Too bad.
So, let’s ignore it and make uninformed criticisms.
Go team Skeptic!
Michael,
Did I make an uninformed criticism? Nope. But you sure did. Get a life pal.
Steve,
Apologies – I wasn’t meaning you in particular, because you didn’t.
But if people are so interested that they want to critique, why not cough up a few bucks and read the damn thing…..or just be quiet??
From the SI:
Wow. Eight times. It sure is a good thing that we are able to remove that substantial cooling bias from the global temperature trend, for the period 1997-2012.
What about the rest of the record? What do your results show for Arctic temperature bias for, say … 1925 – 1942? Or 1960-1977? How badly was the Arctic temperature bias screwing up the reported global temperature trend then? What does the record show, now that you’ve removed it?
The Arctic warming is a lot faster more recently, witness sea-ice trends.
Jim D
The Arctic warming is a lot faster more recently,
More recently than when? 1925-1942? 1960-1977? Can’t say that without data.
It sure will be nice when Robert Way comes back and shows us what the comparable trends were, now that they have removed the Arctic temperature bias from the temperature records from those periods.
And I wonder – given the see-sawing that seems to occur between the Arctic and Antarctic, when over the last century was there an Antarctic temperature bias that was not previously accounted for in the global temperature record?
I can’t wait to learn what these exciting new results say about these interesting and important questions!
Well, it would appear that the Robert way is to vanish as quickly as one appears. Attention drawn off by more important matters, one supposes. Perhaps a new shipment of uniforms has arrived over at SS, and he is needed for a rather different kind of modeling activity.
Pity.
The paper is getting plenty of media attention; I’m also getting queries from reporters.
Greet David Rose for us!
Ouch.
http://www.breitbart.com/Big-Government/2013/11/12/Oops-Solar-Energy-Plants-are-Killing-Rare-birds
Skyscrapers kill millions of birds each year. Let’s cut them down.
http://www.breitbart.com/Big-Government/2013/11/12/Oops-Solar-Energy-Plants-are-Killing-Rare-birds
Well, Lefties are mass murderers, so it’s not tooo surprising.
The birds will simply have to suffer for the Cause.
“Skyscrapers kill millions of birds each year. Let’s cut them down.”
Maybe that was part of Osama bin Laden’s plan.
To some it makes sense.
Hey Neven, nice of you to pop in!
Speaking of whom, he was all sciencey today, e.g.:
https://twitter.com/DavidRoseUK/status/398831247683633152
A question to David Rose which was met with a payload of crickets:
https://twitter.com/nevaudit/status/400757641884221440
But Mike blocks David.
Somewhat on topic, and it assumes John Kennedy will be scanning the comments. Could you please give us an update on the status of the HADISST2 dataset, John?
Cheers
I think it is awesome that the authors defend their paper on the blog where it was challenged. I am not competent to judge the issues myself, but I’m guessing that I will soon have a clear impression of where people end up. This is a thousand times better than each group publishing proofs in their own echo chambers for their own fans.
+1
Agree mike. Non-scientists are well served in these exchanges. After a while, you can get a sense of who’s on more solid ground, even without fully understanding the science. Sneering elitist warmists like Web, who would, if they had their way, require a Ph.D. in physics to be produced at the voting booth, will never acknowledge that.
Well, an algebra test might not be such a bad idea.
Hey PG, someone got moved into the BoreHole at RealClimate for saying that Dr. Cowtan is a mere X-ray crystallographer.
Well, I got my PhD in electron diffraction and did X-ray work as a post-doc and if there is one thing that these dudes are good at it is in reconstructing reality from a reciprocal space and limited data.
I suppose that is too elitist for you..
“Well, an algebra test might not be such a bad idea”
Barely passed algebra. In my Junior year my homeroom teacher tossed my report card on my desk and said, “Congratulations. Full house.” I opened it up to find 3 D’s and 2 F’s. Never held me back. Had a successful business career, and have achieved some literary success writing personal essays since then.
My general point, obvious as it is, is that there are different forms of intelligence and more than one way to solve problems.
JC there is an author error for Robert Way (who is commenting here).
When different temperature time series are discussed, it’s good to remember that the warming cannot be described by any single time series. The global average surface temperature is just one proxy for the warming, and not necessarily the best proxy by objective criteria.
A good proxy
– Is closely correlated with phenomena that affect us.
– Can be determined accurately and unambiguously from measurements.
– Has little random variability.
From the list given above we can conclude that a good proxy is not strongly influenced by surface temperatures in regions where they vary exceptionally much without affecting anything else as strongly. That may be the case for winter temperatures at high latitudes in areas with highly varying influence of temperature inversion, to give just one example.
I find myself in agreement with Pekka, one of us must be unwell.
Pekka makes many excellent posts. I have been much influenced by his many interesting, informed posts over the years.
Help me understand this. When paper 2 says that HadCrut4 “covers 84% of the globe,” does that mean that 84% is fairly represented by direct measurement while 16% has to be inferred by some method? And, the inference procedure that differs between HadCrut4 and paper 2 just makes inferences for that latter 16% of the globe?
If the answer is yes, then the results stated in the Guardian seem hard to believe. 16% is about one-sixth of the global area. HadCrut4 and paper 2 get a difference of 0.115 – 0.046 = 0.069 dCent/decade in the average warming rate over the entire globe. To get that, HadCrut4 and paper 2 would be asserting a 6*0.069 = 0.414 dCent/decade difference in average warming rates over the unsampled one-sixth of the globe. That’s roughly an order of magnitude faster warming than the claimed global average rate in HadCrut4 (0.046).
I suppose this is possible if polar warming is predicted (say by the models) to be 10 times the global average, but I don’t recall hearing such a big multiplier.
Or am I misunderstanding the numbers?
I wondered the same thing. Either my interpretation is way off or the Arctic and Africa are going vertical.
Land areas are warming at 0.3 C per decade (CRUTEM), so 0.4 C per decade for an area dominated by land and the Arctic is not that surprising.
Jim D
Not true, Jim.
Over the current decade (since 2002) CRUTEM4 (land areas) has cooled by 0.023C.
http://www.woodfortrees.org/plot/crutem4vgl/from:2002/trend
Curiously, HadSST3 has cooled at a slightly greater rate, at 0.029C per decade.
http://www.woodfortrees.org/plot/hadsst3gl/from:2002/trend
And explanations for this?
(Seems to contradict not only your statement but also Webby’s hypothesis that land temperatures change more rapidly than sea temperatures.)
Max
Not my theory, but of all climate science.
Land doesn’t have significant heat capacity so has to respond quickly.
Webby
Your “theory” may well be right (and it sounds reasonable).
But it ain’t working out that way in real life.
Max
Your “theory” may well be right (and it sounds reasonable).
But it ain’t working out that way in real life
False.
The 30-yr linear trend for CRUTEM4 (land only) is 0.28 C/decade.
manacker, you prefer to extrapolate short trends for some reason. Take 30 years and extrapolate that. I suspect the Arctic itself may be even faster, which is what I said.
GISTEMP has the Arctic warming by 1 C per decade. They may have an extrapolation method, but it shows a high enough value to account for the impact of the missing data. This shows the difference between the last decade and the 30 years centered 30 years before.
http://data.giss.nasa.gov/cgi-bin/gistemp/nmaps.cgi?year_last=2013&month_last=10&sat=4&sst=3&type=anoms&mean_gen=0112&year1=2003&year2=2012&base1=1963&base2=1993&radius=1200&pol=reg
Hi NW,
What this means is that the global average calculated by HadCRUT4 is averaging only 84% of the world. The other 16% not observed is implicitly assumed to have the mean temperature of the 84% observed. Of course, when there is a relationship between latitude and warming rates, and high latitude areas are disproportionally missing observations, this can lead to bias.
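NW’s back-of-envelope arithmetic above can be redone under exactly this stated assumption: with 84% coverage, the global trend decomposes as 0.84 × (observed) + 0.16 × (unobserved), which can be solved for the trend the unobserved regions must carry. A quick sanity check using only the figures already quoted in the thread:

```python
# Coverage-weighted decomposition of the global trend (deg C/decade),
# using the figures quoted earlier in the thread.
f_obs = 0.84            # fraction of the globe HadCRUT4 covers
observed = 0.046        # HadCRUT4 trend, treated as the observed 84%
global_est = 0.115      # midpoint of the kriging (0.11) and hybrid (0.12) trends

# global_est = f_obs * observed + (1 - f_obs) * unobserved
unobserved = (global_est - f_obs * observed) / (1.0 - f_obs)
print(round(unobserved, 3))   # ~0.477: same ballpark as NW's 0.414 estimate
```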
@Zeke,
“What this means is that the global average calculated by HadCRUT4 is averaging only 84% of the world. ”
That’s absurd.
What was the reasoning behind using UAH and not RSS data? Additionally, would not a comparison methodology substituting RSS for UAH data be informative?
Of course they need to use UAH data. Slam dunk in da face.
So science is about defeating your political enemies?
No, science is very competitive.
I am sure that a York guy is proud of his one-upsmanship over East Anglia.
Thanks, Professor Curry, for this report.
It encourages me to go ahead and boldly identify the elephant in the living room that has been danced around and ignored by mainstream scientists for the past sixty-eight years (2013 – 1945 = 68 years).
Here is a picture of the tiny, massive, politically-incorrect and forbidden elephant that has lived in the middle of the solar system’s living room for five billion years (5 Ga):
http://www.omatumr.com/Photographs/Suns_core.htm
“with the hybrid method showing particular skill around the regions where no observations are available.”
Incredible [in the Princess Bride sort of way]
Any method must show considerable skill when there are no other comparisons to the results.
Could they try chicken entrails, for example, Steve?
After all, a bad model is better than none.
Or wait.
There are a whole lot of Climate Models at the IPCC just waiting to be compared
angech,
You misunderstand the Way of Warm. Chicken entrails perform best for predicting future data, in lieu of future observations. I have used them many times, and neither the Team, the IPCC, nor even WebHubTelescope are able to disprove the results.
What you need for creating data where none exists, either in the past or the present, is to use “Runestones of Power”. These show incredible skill, not just considerable skill, particularly where the results cannot be confirmed by observation.
Even cheaper is to just guess. Nobody can prove you wrong.
Live well and prosper,
Mike Flynn.
angech, I don’t think that’s what was meant. I think “around the regions” means “at sampled, direct measurement points near the boundaries of the unsampled regions.”
NW, no, I think it means “where no observations are available”.
Cowtan and Way: Your method presents a short-term trend (1997-2012) that’s even greater than GISS. 0.11 deg C/decade and 0.12 deg C/decade for your infilling methods versus 0.8 deg C/decade for GISS.
Part of the GISS warming bias results from their masking of sea surface temperatures in areas where sea ice can exist and replacing that sea surface temperature data with land surface air temperature data.
http://bobtisdale.files.wordpress.com/2012/04/figure-14.png
Full post is here:
http://bobtisdale.wordpress.com/2012/04/13/the-impact-of-giss-replacing-sea-surface-temperature-data-with-land-surface-temperature-data/
How does your method address this bias?
Should read: Part of the GISS warming bias (with respect to HADCRUT4) results from their masking of sea surface temperatures in areas where sea ice can exist and replacing that sea surface temperature data with land surface air temperature data.
isn’t there an SST homogenization issue there too?
Should also read 0.08, right?
Yes, Bill. Should read 0.08 deg C/Decade. Thanks.
Here’s Ed’s third tweet of a series of two:
https://twitter.com/ed_hawkins/status/400683303059329024
Concerning the SST measurements over the last 100+ years, the only strange anomaly I have come across in my own analysis work using the CSALT model is a warming glitch starting in late 1943 and lasting into 1944 before declining.
This spike is only weakly associated with a SOI peak and is suspicious as it corresponds to many missing temperature readings during the war years. It also emerges in the land-only data.
A warming spike also occurs in 1939. The big SOI event occurs in 1941 which does show up in the data.
It may just be coincidence but Kennedy does say that the uncertainties are “particularly large in the period surrounding the Second World War owing to a lack of reliable metadata”.
WebHubTelescope,
“In the context of SST uncertainty, unknown unknowns are those things that have been overlooked. By their nature, unknown unknowns are unquantifiable; they represent the deeper uncertainties that beset all scientific endeavors. By deep, I do not mean to imply that they are necessarily large. In this review I hope to show that the scope for revolutions in our understanding is limited. Nevertheless, refinement through the continual evolution of our understanding can only come if we accept that our understanding is incomplete. Unknown unknowns will only come to light with continued, diligent and sometimes imaginative investigation of the data and metadata.” – Kennedy.
You say you have come across a strange anomaly. Why would you find this strange?
As Kennedy says, you may need a little bit of “imaginative investigation”.
As I become more fluent in the language of the Book of Warm, this means use your model results to create the data which should have existed, according to your model, and then adjust actual observations to fit. Voila!
No more stupid anomaly!
No thanks necessary. I am glad to be able to help.
Live well and prosper,
Mike Flynn.
Willard, “But why would we need models if we can have access to reality by looking at the data?”
You don’t need models for anything other than describing data. Your model may find errors in the observations, but you never assume the model first unless it is as solid as a rock, aka a physical law. So when your model butts heads with physical laws, ya need to proceed with caution.
You seem to think that that attitude is some sort of failing on my part and accuse me of being a cherry picker. Which is extremely humorous.
> So when your model butts heads with physical laws, ya need to proceed with caution. You seem to think that that attitude is some sort of failing on my part and accuse me of being a cherry picker.
Thank you for asking, Cap’n. Perhaps I can clarify two points.
The first is that your complaint may be seen as trivial: all models are wrong. All models will butt heads with some physical laws. Numerical methods oblige.
The second is that I did not wish to accuse you of being a cherry picker, but to show how easy it would be to dogwhistle it with a counterfactual like “If reality don’t suit you […]”. I have no idea if you really cherry pick or not, and quite frankly I don’t care.
Taken together, the two points amount to suggest that it might be more fruitful to argue about the models’ usefulness than to entertain mind-probing counterfactuals on the basis of a trivial property of models.
Hope this is clearer.
Willard, you should be more observant of the two engaged in conversation. Webster’s warmth and charity is a little less obvious than most, which tends to set the tone of the discussion. I sowed the seed of his model being able to find blemishes in the data and am enjoying my harvest.
Web, SST were measured by examining ships’ logs. Before WWII, ships would take the most economical routes. In the run-up to WWII, routes changed in the Atlantic, and also, with the breakdown in relations between Japan and America, in the Pacific.
The war saw ships using different routes, and slow, coal-burning ships were replaced by faster, oil-fuelled ships.
After the war the trade routes used by shipping were completely altered with respect to the pre-war years. Japan and (West) Germany didn’t return to pre-war GDP until 59/60.
And the ships were probably more worried about dodging torpedoes and evading aircraft than slowing down to take accurate temp. measurements.
Kennedy says that “During the war years 0.2K was added to reflect the additional uncertainty during that period”
This is an uncertainty level and not an offset, but it is curious that the only time that the CSALT model residual error stays above 0.1K for any length of time, and actually reaches 0.2K is from the years 1938 to 1945.
The WWII temperature anomaly numbers are suspect as the CSALT model also substantiates.
Webster, “This is an uncertainty level and not an offset, but it is curious that the only time that the CSALT model residual error stays above 0.1K for any length of time, and actually reaches 0.2K is from the years 1938 to 1945.”
BEST “global” is supposed to look into that with kriging, which should answer a few questions. I doubt there will be much change, though, based on the land surface temperatures BEST can use in its kriging.
Webster,
Oceania
http://berkeleyearth.lbl.gov/auto/Regional/TAVG/Figures/oceania-TAVG-Trend.png
Ahh, I can always count on Cappy for a heaping dose of MISDIRECTION.
Other side of the globe buddy.
It appears that the major portion of the WWII anomaly is due to contributions from the Arctic zonal region. The CSALT model residual of the GISTEMP series has dual spikes that straddle a broad Arctic peak during the war years:
http://img198.imageshack.us/img198/1193/hdro.gif
On the right hand side is a finer resolution which highlights the two spikes occurring at 1939 and 1943 in the Arctic and how they line up with the CSALT model residual spikes.
The CSALT model does not use data that is specific to the Arctic, so the temperature anomaly could possibly be due to a mechanism other than one of the CSALT indices, or it could be a measurement error in the Arctic.
The data is described here by KevinC
http://www.skepticalscience.com/print.php?n=1378
Webster, “It appears that the major portion of the WWII anomaly is due to contributions from the Arctic zonal region.”
Oh really? After breaking the SST into 10 degree latitude bands and weighting them by actual area, it kinda looks like 5S-5N, 5N-15N and 5S-15S had the largest impacts, while 55N-65N was actually rising in temperature, though it did likely start the ball rolling in the 1920s. Since the Oceania surface temperatures tend to agree with that, I would say you are trying to blow smoke up someone’s arse :)
Webster, climate explorer ersst3
http://climexp.knmi.nl/data/iersstv3b_-179-179E_-30-30N_na.png
http://climexp.knmi.nl/data/iersstv3b_-179-179E_30-70N_na.png
Figure out the areas
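For anyone following the “figure out the areas” step: the fraction of a sphere between two latitudes is proportional to the difference of the sines of the bounding latitudes. A quick sketch (my own, using the two climexp bands linked above):

```python
# Area fraction of a latitude band on a sphere: proportional to the
# difference of sines of the bounding latitudes.
import numpy as np

def band_area_fraction(lat1, lat2):
    """Fraction of Earth's surface between latitudes lat1 and lat2 (degrees)."""
    s1, s2 = np.sin(np.radians(lat1)), np.sin(np.radians(lat2))
    return abs(s2 - s1) / 2.0  # the whole sphere spans sin(-90)..sin(90) = 2

print(band_area_fraction(-30, 30))  # 30S-30N band: 0.50 of the globe
print(band_area_fraction(30, 70))   # 30N-70N band: about 0.22 of the globe
```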
More misdirection, Cappy?
How quaint.
Arctic amplification amplifies the noise. That is essentially what I am looking at this point.
Webster, “Arctic amplification amplifies the noise. That is essentially what I am looking at this point.”
Then you should notice that there is more amplification in the 30N-60N latitude band because of the “choke point”, or rapid reduction in the sea-surface-to-land-area ratio. 45N has the highest variance and should be a good break point, which is why I used the 65N-45N, 45N-45S and 45S-65S areas to show the “waves”.
The problem is finding a starting point. 1910-1920 is actually a volcanic/solar nearly synchronized push down on a 200 year weakly damped recovery, by the looks of it. That’s why you get an amplified rebound in ~1940, that fairly consistent ~30 year lag.
Arctic amplification “should be” the biggest, but 30N-60N “is” the biggest amplification.
btw Webster, if kriging mapped variance based on actual energy instead of just temperature it would pick up more of the blemishes :)
Kevin Cowtan intersects the work of John Kennedy with this post:
http://skepticalscience.com/hadsst3_a_detailed_look.html
The CSALT model has the largest residual in the early 1940’s:
http://imageshack.us/a/img534/3678/nj7d.gif
This spike sticks out like a sore thumb on the GISS series as well as the other ones.
If what Kennedy says is correct, ship crews didn’t fuss with the trailing buckets and he thinks the temperatures were high during the WWII years because the thermometers were near the engine room intakes.
Knock that down by the 0.2C that Kennedy says is the uncertainty, and the model actually predicts that this is an instrumental measurement error. That’s what models are good for!
Webster, “Knock that down by the 0.2C that Kennedy says is the uncertainty, and the model actually predicts that this is an instrumental measurement error. That’s what models are good for!”
You hit the nail on the head: if reality don’t suit you, model it away. Since standard kriging wasn’t quite good enough, hybridize it with stratospheric readings until you get it right. If you want AGW to start in 1900, declare that 1900 was “normal”. If you don’t like the recovery pop and drop in 1941, ignore the coastal tropical surface station data and massage that away.
That is what models CAN be good for.
> If reality don’t suit you […]
If we had more direct access to reality, Cap’n, do you think we’d bother with models?
You do have more access to reality, you just choose to spend money on supercomputer climate models instead.
Willard, there was a CAN there. Models and reality are both imperfect. What Webster is thinking is that his model justifies ignoring other real evidence, reality, that the 1941 SST event is real. He is letting his confirmation bias get in the way. I am not advocating revising history or ignoring data.
Cowtan and Way had a nice paper that has potential but stopped at the results they liked when they could have taken the next step and had a great paper. Now they seem to think that their model is good enough to rewrite some history. I think history can take care of itself.
In ten years this will be part of history, probably an interesting chapter.
That it remains an interesting footnote in ten years would be a success, Cap’n.
If you like a dataset, just call it reality, right?
Willard, “If you like a dataset, just call it reality, right?”
I understand that your forte is not the hard sciences, so you don’t realize that the C&W paper is effectively changing the freezing point of salt water to ~+4C, which is a physical impossibility, but that is the changing of “reality” that is being proposed.
Logic and esoteric debate is a wonderful pastime, but actually applying it tends to elude some folks.
I love it when you resort to ad homs, Cap’n.
You just got caught conflating your favorite data with reality, you know. Acknowledging this only forces you to admit you also need to rely on something like a model, which means your remark reduces to “my model is better than yours”. But then you’d have to put forward a model, and argue why your model is better than the ones you criticize. For instance, you can take the Lewis gambit and pretend your model is (more) empirically based, as if such empiricism was free of dogmas.
And since you want to play tough, this ain’t your turf at all. Epistemology 101, really.
Willard, “You just got caught conflating your favorite data with reality, you know. ”
That is complete BS, I don’t have favorite data, I look at data. When you switch from a SST metric limited by the physical properties of water to a surface air temperature at the end you will get an anomaly spike. Since the surface air temperature is multiple tens of degrees lower than the sst you have apples and oranges. Now if they provided a reason to mix the metrics, their method would be useful especially for determining the impact of sudden stratospheric warming events. As it is though, it is a misrepresentation of the sst data.
You of course cannot realize this because it is well outside your field of expertise which I am still trying to determine.
Kevin Cowtan has a SkS post from yesterday on SST Bias
http://www.skepticalscience.com/the_other_bias.html
Scroll to the bottom of the comments and you can see my addition of the WWII correction.
There is much evidence that the SST temperatures from 1940-1945 were biased warm by about 0.1C due to less rigor in measurements. This is understandable as the merchant marine were more concerned about attracting nearby U-boats than dragging a bucket behind their ship to get good temperature readings. Because the default thermometers were near the engine intake, the temperatures were biased high until the war ended.
http://img809.imageshack.us/img809/6500/zrj.gif
View it and weep.
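For what it is worth, the flat wartime offset being argued about here is mechanically trivial to apply. A minimal sketch (my own illustration on a synthetic series, not Kennedy's adjustment code; the 0.1 C value and 1940-1945 window are taken from the comment above):

```python
# Apply a flat 0.1 C warm-bias correction to the war years of an annual
# SST anomaly series. The series is synthetic; only the shape of the
# operation is meant to be illustrative.
import numpy as np

years = np.arange(1935, 1951)
sst = np.random.default_rng(0).normal(0.0, 0.05, years.size)  # fake anomalies
war = (years >= 1940) & (years <= 1945)

sst_corrected = np.where(war, sst - 0.10, sst)  # knock wartime values down
for y, a, c in zip(years, sst, sst_corrected):
    print(y, round(a, 3), round(c, 3))
```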
Webster, ” Because the default thermometers were near the engine intake, the temperatures were biased high until the war ended.”
So that is why the cooling anomaly occurred during the war? Then prior to the war the confidence degrades the further back in time you travel.
webster, if it is engine intakes, that would be obvious in each and every area of the ocean. Simple instrumentation error.
https://lh4.googleusercontent.com/-_6Ae9_Qkoek/UofH_DwH5xI/AAAAAAAAKiY/ZKDY2xWO8gQ/w677-h431-no/bucket+bias.png
Where is it?
OMG Webster! It looks like the buckets-intake virus is contagious!
https://lh6.googleusercontent.com/-oXDS9d9Jlog/UofZNxxJkXI/AAAAAAAAKjA/fWfmWoKE9s8/w867-h453-no/best+and+GIS+get+bucket+bias.png
It infected BEST Tmax, Tmin and GISS dTs! Oh the Humanity! Why would a full 0.2C of error just randomly spoil a perfectly good correlation?
captd, I thought BEST was land only. Since when did they use buckets for land?
JimD, “captd, I thought BEST was land only. Since when did they use buckets for land?”
They didn’t, it is sarcasm. Webster believes that the SST anomaly during WWII is an obvious mistake that needs to be removed. Oddly his “mistake” shows up in the land data Tmax, Tmin, just like a real “global” event might. Per Webster, Cowtan and Way’s “superior” kriging method “proves” that the switch from buckets to intakes is a glaring error. I say if the same thing is in all the other data, they just might be wrong.
> I don’t have favorite data, I look at data.
My mistake: you look at data, Cap’n. Then you see or feel reality. But why would we need models if we can have access to reality by looking at the data?
Thus we get back to the first question. That means you’ll have to do better than that if you wish to divert me from it.
Thank you for the other ad hom, Cap’n.
Cappy, you are always so full of it. The peaks are real as the year 1941 had a full-blown El Nino event as evidenced by a strong SOI extremum.
http://img202.imageshack.us/img202/9397/o51.gif
The 0.1C correction comes about because it is understood that the sea temperatures were exaggerated on the warm side during WWII.
Furthermore, as far as the sharp Northern Hemisphere peaks in 1938 and 1944, those are spikes that are not captured by the SOI or other CSALT components.
You can see these clearly in the Atlantic and the land areas here:
http://imageshack.us/a/img585/5273/y6w.gif
I can add an AMO index as the Hurrell difference in pressures to capture that.
CaptnDallas
“OMG Webster! It looks like the buckets-intake virus is contagious
[snip link]
It infected BEST Tmax, Tmin and GISS dTs! Oh the Humanity!…”
Ooooh!!! I like that. [Really] But one nit. I have to keep working on you to stop the slander:
“Per Webster, Cowtan and Ways, “superior” kriging method “proves” … etc. etc.”
I’m watching you.
mwgrant
Webster, “The 0.1C correction comes about because it is understood that the sea temperatures were exaggerated on the warm side during WWII.”
And if they correct for that front peak they will add to the valley at the end. The data “might” have a wart. That wart is inside the margin of error. You live with some warts. Since the BEST Tmin, Tmax and GISS dTs all have a similar wart, with what should be the expected lags, that wart just might not be a wart. Given that each latitude band has its own seasonal oscillation, one really should expect those warts.
Damned strikeouts on ‘kriging’ didn’t take. Fie on WordPress.
mwgrant, my apologies :)
lusers.
MNFTIU
@WHUT
Sorry sweets, Captn’s comment was funny. That is about where you have said ‘touche’ and proceeded. Lighten up.
WebHubTelescope, as a respecter of your model but not yet a believer, I would like to repeat my question from yesterday:
What rates are we talking about? Say the ECS to a doubling of CO2 is 3C and the concentration of CO2 doubles: how long does it take the ocean surface to warm up by 2.8C? A year? two years?
In your thinking globally, but not in your model, there is a long lag between the transient response and the equilibrium response at the surface. If for the sake of argument we take the ECS to be 3C, I think that the mean surface temperature rise of 2.8C would occur in under two years, compatible with your model not entailing much of a lag between CO2 change and near-surface near-“equilibrium” (i.e., bearing in mind that no equilibrium per se ever occurs).
Marler,
You have to understand how the transient response works. On the CSALT model, there is a lag response that you can adjust. Make that longer and the ECS will increase, since the forcing is reduced initially.
In practice, the response is diffusional, which is a fast transient followed by a fat tail.
The paper by Caldeira and Myhrvold describes this, which I blogged here,
http://ContextEarth.com/2013/11/13/simple-models-of-forced-warming
Because of the fast transient, a good approximation is to just use the TCR and then assume a gradual uptake for the fat-tail as it approaches an ECS.
I could use my own diffusional approximation but I don’t think the world is ready for it yet. It is better to keep it at this level of abstraction.
WebHubTelescope: I could use my own diffusional approximation but I don’t think the world is ready for it yet. It is better to keep it at this level of abstraction.
Let me try again: What rates are we talking about? Say the ECS to a doubling of CO2 is 3C and the concentration of CO2 doubles: how long does it take the ocean surface to warm up by 2.8C? A year? two years?
Transient diffusion does not follow first-order, i.e. damped exponential, dynamics. There is no such thing as a conventional time constant when dealing with diffusion.
The fast transient occurs quickly but the rest of the warming occurs slowly. Tell me that you understand this at least, because it is a very elementary aspect of diffusion theory.
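To put numbers on that fast-transient-plus-fat-tail shape, here is a minimal sketch assuming the textbook step response of a semi-infinite diffusive ocean with linear radiative feedback, T(t)/T_eq = 1 - exp(t/tau)*erfc(sqrt(t/tau)) with tau = (rho*c_p)^2 * kappa / lambda^2. This is my illustration, not the CSALT code, and the parameter values are round numbers of the kind quoted later in the thread from Myhrvold and Caldeira:

```python
# Step response of a semi-infinite diffusive ocean with feedback lambda:
# fraction of equilibrium warming reached at time t is
# 1 - exp(t/tau) * erfc(sqrt(t/tau)), tau = (rho*cp)**2 * kappa / lam**2.
import numpy as np
from scipy.special import erfcx  # erfcx(x) = exp(x**2)*erfc(x), overflow-safe

lam    = 1.25   # W m^-2 K^-1, climate feedback parameter (assumed)
kappa  = 1e-4   # m^2 s^-1, vertical eddy diffusivity (assumed)
rho_cp = 4.1e6  # J m^-3 K^-1, volumetric heat capacity of seawater (assumed)

tau = rho_cp**2 * kappa / lam**2  # about 1e9 s, i.e. a few decades
for yr in (1, 10, 100, 1000):
    frac = 1.0 - erfcx(np.sqrt(yr * 3.156e7 / tau))
    print(f"{yr:5d} yr: {100 * frac:5.1f}% of equilibrium warming")
```

On these assumptions roughly 40% of the equilibrium warming arrives within a decade, but the last 10% takes on the order of a millennium: a fast transient followed by a very fat tail.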
WebHubTelescope: Transient diffusion does not follow first-order, i.e. damped exponential, dynamics. There is no such thing as a conventional time constant when dealing with diffusion.
The fast transient occurs quickly but the rest of the warming occurs slowly. Tell me that you understand this at least, because it is a very elementary aspect of diffusion theory.
I understand that the fast transient occurs quickly and that the attainment of the equilibrium, should it exist, takes long.
Now back to my question: What rates are we talking about? Say the ECS to a doubling of CO2 is 3C and the concentration of CO2 doubles: how long does it take the ocean surface to warm up by 2.8C? A year? two years?
In systems that are actually known, like chemical kinetics and pharmacokinetics, the answer is obtained fairly simply. Near equilibrium or near steady-state can occur in some compartments in less than 1% of the time it takes to achieve near equilibrium or near steady-state in all compartments.
You should read that paper by Caldeira and Myhrvold. They show various temporal profiles of the transient thermal response.
WebHubTelescope: You should read that paper by Caldeira and Myhrvold. They show various temporal profiles of the transient thermal response.
Sure thing. If you ever decide what you think best, let us know. With luck, and I am pretty sure luck will be required, it will comport well with your model.
WebHubTelescope: You should read that paper by Caldeira and Myhrvold. They show various temporal profiles of the transient thermal response.
This one? N P Myhrvold and K Caldeira 2012 Environ. Res. Lett. 7 014019 doi:10.1088/1748-9326/7/1/014019
Greenhouse gases, climate change and the transition from coal to low-carbon electricity
Here is a quote: We estimated the change in surface temperature, ΔT, by using a simple energy-balance model. The radiative forcing ΔF supplies additional energy into the system. Radiative losses to space are determined by a climate feedback parameter, λ. We used λ = 1.25 W m^-2 K^-1 [6–8], which yields an equilibrium warming of 3.18 K resulting from the radiative forcing that follows a doubling of atmospheric CO2 from 280 to 560 ppmv. The approach to equilibrium warming is delayed by the thermal inertia of the oceans. We represented the oceans as a 4 km thick, diffusive slab with a vertical thermal diffusivity k_v = 10^-4 m^2 s^-1 [8]. Other parameter choices are possible, but variations within reason would not change our qualitative results, and this approach is supported by recent tests with three-dimensional models of the global climate response to periodic forcing [9]. Our simple climate model treats direct thermal heating in the same way as radiative heating; heat either mixes downward into the ocean or radiates outward to space. To isolate the effects of a transition to LGE energy systems, we consider GHG emissions from only the power plant transition studied. Initial, steady-state atmospheric GHG concentrations are set to pCO2 = 400 ppmv, pCH4 = 1800 ppbv, and pN2O = 320 ppbv, at which ΔF = ΔT = 0. (Use of other background concentrations for GHGs would not alter our qualitative results (SOM text SE1.3 available at stacks.iop.org/ERL/7/014019/mmedia).)
They do not answer my question either: If the equilibrium warming effect is 3.18K, then how long after the doubling occurs will the surface temperature meet or exceed a specified value, such as 2.88K of warming? Notice that they say “heat either mixes downward into the ocean or radiates outward to space”, but there must be some non-negligible amount of heat that is transferred into evaporation, and then convection from the surface to the upper troposphere.
This was the 4th time in one thread that you wrote about a lot of other stuff without answering the question or admitting that you don’t know the answer. The question is obviously important as it relates to the accuracy of your model and other lnCO2 models, and the relevance of the past temperature change at the surface to the future temperature change at the surface — all that “warming in the pipeline” that may hardly affect the surface temperature at all.
Matthew: “you wrote about a lot of other stuff without answering the question or admitting that you don’t know the answer.”
Doncha hate it when that happens?
Matthew R Marler,
I assume you noticed that they tested their results with a 3 dimensional model.
As usual, use a model to test a model. If it doesn’t agree, change one or other until they agree. Success!
Live well and prosper,
Mike Flynn.
Marler,
I know the answer.
(1) On LAND, the equilibrium will be reached quite quickly, within years of the forcing, moderated very slowly by gradual ocean changes
This is the relevant passage by Caldeira and Myhrvold.
(2) In the OCEAN, the equilibrium is reached asymptotically as a fat-tail. This means that at the surface, a fast transient to the TCR is reached quickly according to diffusion kinetics, followed by a gradual climb to the ECS. This could easily take hundreds of years partly because that is the way that Fickian fat tails work and mainly in consideration of how long it will take for the ocean to sink all the heat necessary for the temperature of the bulk to rise.
This is another relevant passage by Caldeira and Myhrvold.
Why you think I am being evasive, I don’t know. I have worked out process diffusion equations my entire career. The SiO2 that is grown on the MOSFET devices that constitute your computer’s RAM and CPU is grown according to the Fickian diffusion kinetics that former Intel CEO Andy Grove wrote up in his PhD thesis in the early 1960’s. You wouldn’t ask a semiconductor engineer how long it would take to grow a thickness of an oxide unless you were being very specific. To grow a micron-thick oxide doesn’t take too long, but to grow a millimeter-thick oxide will likely take millions of times longer using conventional techniques. It is actually insane to even think about that once you realize how diffusion and bulk effects work.
The ocean is a huge heat sink and it will equilibrate very slowly to an external forcing.
Read this again:
http://contextearth.com/2013/11/13/simple-models-of-forced-warming/
Also read the paper on my blog called “Diffusive Growth”.
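The oxide analogy reduces to the usual diffusion scaling: penetration time grows as the square of the length scale, t ~ L^2/D. A back-of-envelope sketch (the solid-state diffusivity is an illustrative assumption; the 1 cm^2/s ocean eddy diffusivity is the figure used elsewhere in this thread):

```python
# Diffusion time scales as length squared: t ~ L**2 / D.
D_oxide = 1e-16  # m^2/s, illustrative solid-state diffusivity (assumed)
D_ocean = 1e-4   # m^2/s, effective vertical eddy diffusivity (~1 cm^2/s)

t_micron = (1e-6)**2 / D_oxide  # grow a micron of oxide
t_mm     = (1e-3)**2 / D_oxide  # grow a millimeter: a million times longer
t_ocean  = 4000.0**2 / D_ocean  # diffuse heat through a 4 km ocean column

print(t_mm / t_micron)             # 1e6
print(t_ocean / 3.156e7, "years")  # roughly 5,000 years
```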
WebHubTelescope: In the OCEAN, the equilibrium is reached asymptotically as a fat-tail.
Of course! The equilibrium is always an asymptotic result. That is why I have asked how long it takes to get 90% of the way to the equilibrium — that is a finite time. Even with simple exponential decay the final state is an asymptotic result, but after 5 half-lives the system is 97% of the way there.
For many purposes, it may be a sufficient approximation to use a one-dimensional heat-diffusion ocean model having just one degree of freedom—in effect, to approximate warming as a simple heat-diffusion process.
No denying that. But what may be a sufficient approximation for many purposes has to be shown to be a sufficient approximation for any particular purpose before its results are relied upon. That’s why there is always so much testing by as many means possible of whether the approximations that have been used are good enough for present purposes.
Why you think I am being evasive, I don’t know. I have worked out process diffusion equations my entire career.
I respect your many years working on related problems. I have worked many years on non-linear differential equation modeling of non-stationary multivariate biological time series, including repeated oral dosing and continuous intravenous infusion of drugs. The calculation I have requested here I have done many times. In this case, the answer cannot be computed: if the calculated equilibrium change is 3C, how long does it take for the surface to change 90% of the way toward the equilibrium value? How long it takes the deep ocean to move 90% of the way toward the new “equilibrium” value is a separate question.
That’s assuming that “equilibrium” is a relevant concept in this case: the current “equilibrium” temp of the earth is 288K, and the hypothetical new equilibrium value after a doubling of CO2 is 291K, and the deep ocean will never equal either of those.
So back to my question and to your model. If the doubling of CO2 actually causes a 3C increase in the earth surface mean temperature, how long does it take the earth surface mean temperature to increase 2.7C (that’s 90% of the way toward equilibrium, but any value could be used)? It’s under a 1% increase of downwelling LWIR, way under a 1% increase in total radiant energy at the surface. In order for your model to be accurate, it has to be a short period of time, such as 1 year, and in that case any “warming in the pipeline” will have little effect at the surface.
That assumes that the “equilibrium” is even relevant. It looks to me like an interesting calculation that has become a great distraction from what is needed.
NW: Doncha hate it when that happens?
I think in the AGW debate the blatant non-answering of important questions, like the gross exaggeration of the importance of every storm and fire, is a losing strategy. It’s one of the reasons that the purveyors of claims of catastrophic CO2-induced global warming are not prevailing in the public policy debates. afaict
Matthew Marler, Web answered that the land responds almost immediately to forcing changes (within a year according to his evidence). We see a near 4 C per doubling TCR when the last 30 years are taken over land, if we can attribute the warming to the CO2 change. This is mainly in the internal and northern continental areas. Everyday evidence of the diurnal cycle shows this difference between land and water. The only mitigating factor for the land response is because air comes in from the ocean areas.
Right JimD, Marler is simply trying to apply rhetorical devices to win the argument.
Practically speaking the ECS of 3C will never be reached and even 90% will not be reached for the global average any time soon. Yet what matters is what the land temperature is doing, and how much accumulated heat that the ocean is absorbing.
The latter especially is verification that the GHG is doing its physically expected thing, based on physical theorizing by physical scientists who have an advanced education in the physical sciences.
Sure, we are fortunate to have a heat sink that big on earth, but like a heat sink that is placed too far away from your computer’s CPU, it’s not going to do a lot of good sitting that far away from the Midwest and the middle of inner Siberia!
IR is absorbed by water only at the surface, and even then the water instantaneously vaporizes. Very little of the energy from IR makes it into the bulk volume of the ocean. SWR is the primary source of heat for the ocean.
Yes, if we imagine a global 100% land cover, the TCR would have tracked the ECS simply because there is no significant storage in the land surface for these time scales. The colder ground layers below have almost no surface influence, due to poor conduction, while upwelling and mixing parts of the obviously dynamical ocean circulation do.
I think the problem is one of misplaced projection attribution.
If you can find one instance in my hundreds of blog posts that I have written in the last ten years that has ever made a big deal out of anecdotal information, I would like to hear about it.
That is actually why I don’t care for the stuff Mr. ClimateReason does in his “research”. All he does is put together subjective, qualitative, anecdotal information and treats it as if it were actual science.
The only thing that I know about the atmosphere and its propensity for more violent storms is that the specific humidity has increased by 4% since 1970
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch3s3-es.html
This agrees with what one can expect from an Arrhenius thermal rate activation based on increasing average SST values. If the highest wind speed in a storm is proportional to water content (increased updraft buoyancy), then a 4% increase in humidity could change a 190 MPH hurricane into a 190*1.04 = 198 MPH hurricane.
It certainly won’t make a storm weaker, eh?
Web
I do nothing of the sort, and if you would just look beyond your well known prejudices and read what is written you might find the material useful. Historical climatology has a very long pedigree and is a useful adjunct to, but does not supplant, other scientific information.
Tonyb
Debunked and discredited assertions originally made by Fred Singer
A Closer Look at Sea Surface Temperature Trends: How Effective is Greenhouse (GH) Warming of SST?
http://www.climatescience.gov/workshop2005/posters/P-GC2.9_Singer.S.pdf
So TonyB, was Marler’s accusation that “the gross exaggeration of the importance of every storm and fire ... is a losing strategy” directed at you?
Some medieval diarist exaggerating the fury of a particular storm and it goes in the ClimateReason database. Spare me.
“Debunked and discredited assertions originally made by Fred Singer”
WHT pronounces this debunked. No proof. Right. I’m a chemist. I can assure you IR won’t penetrate very far through water.
The results are given of an experimental investigation of the evaporation of large water drops in the field of λ = 10.6 μ laser radiation of 40–120 W/cm2 power density. The results obtained are compared with theoretical estimates. It is shown that the efficiency of the process of evaporation of a drop in the field of a laser radiation of λ = 10.6 μ is in the range 40–72%. Small drops were found to fly apart under the action of focused laser radiation and this could considerably alter the cross section of the drop being evaporated.
http://iopscience.iop.org/0049-1748/3/5/A03
Web
Over the years severe storms have been examined by numerous researchers and their likely severity or provenance quantified by such bodies as the Met Office.
One-off, non-cross-referenced storms are always of interest as well, but take second place to ones that can be verified and may have had a scientific study made of them.
In ‘The Long Slow Thaw’ I quoted at length from some 30 science papers and referenced another hundred, but you seem to conveniently ignore all that.
Tonyb
It doesn’t “immediately vaporize the water”. What kind of idiotic assertion is that?
The water at the surface is constantly being agitated and this creates a diffusional effect which will randomly walk the infrared-heated surface volume downward at a diffusivity of around 1 cm^2/second. Jim Hansen understood all the effects of vertical eddy currents and effective thermal diffusivity in 1981.
Of course some of this heat will get re-released as latent heat of vaporization and transferred upward into the atmosphere, but you cannot say that it all gets vaporized. That is just ridiculous.
see:
http://www.realclimate.org/index.php/archives/2006/09/why-greenhouse-gases-heat-the-ocean/
There you go jim2
Tony B, I don’t do subjective and qualitative anecdotal reasoning because guys like Matthew R. Marler will come after me and accuse me of “gross exaggeration of the importance of every storm and fire … a losing strategy”.
Of course Matthew R. Marler won’t go after you TonyB because he is a member of your team. And so it goes.
So the water drops are suspended in the experiment and so can only release energy they collect by radiation and latent heat of vaporization (or by other surface tension releasing mechanisms).
Surface water is connected to the deep you realize. The energy has to go somewhere you know.
jim2, who exactly do you imagine is your audience? Dunces?
Jim D: We see a near 4 C per doubling TCR when the last 30 years are taken over land, if we can attribute the warming to the CO2 change.
I have never denied that the dry land surface warms faster than the ocean surface. My question is, if 3C is the projected increase in the equilibrium temperature of the earth surface, how long does it take for the spatio-temporal mean surface temp to increase 2.8C?
WebHubTelescope: Practically speaking the ECS of 3C will never be reached and even 90% will not be reached for the global average any time soon.
No rhetorical tricks, but repeating a question and an implication of your model. According to your model, the 2015 mean surface temp will be proportional to the ln of the 2015 CO2 concentration; and so also for the 2075 mean surface temp and 2075 CO2 concentration. Your model has no lag, though you believe there is a lag. “Practically speaking”, either the surface response gets to near the equilibrium value fairly quickly (e.g. Doc Martyn’s half year), or your model is wrong — not just your model, but every model in which current T is proportional to current lnCO2; for which the derivative of T with respect to time is 0 when CO2 is constant.
Aside from the fact that you want to avoid answering the question I posed, despite the fact that global surface mean T is important enough for you to model it, you do not want to face the fact that two of your assertions (taking your model as an “assertion”) can’t both be accurate.
WebHubTelescope: guys like Matthew R. Marler will come after me and accuse me of “gross exaggeration of the importance of every storm and fire … a losing strategy”.
No! To you I criticize only what you write, and I quote it exactly.
tonyb and I are not on the same team; we appear to be coordinated because we are responding similarly, though independently, to the “invisible hand” of the information marketplace.
Naomi Oreskes, in the last couple days, exaggerated the importance of Typhoon Haiyan in an editorial in the LA Times. I was criticizing Naomi Oreskes, not WebHubTelescope.
WebHubTelescope: The water at the surface is constantly being agitated and this creates a diffusional effect which will randomly walk the infrared heated surface volume downward at a diffsuivity at around 1 cm^2/second.
Evaporation occurs continuously at the ocean surface, so that diffusional effect does not account for all of the radiant energy incident upon the surface. So the question is: given the ongoing evaporation in the diurnally varying incident radiation as it is now, what happens if there is a 3.7 W/m^2 increase in the incident radiation? That’s a tiny fraction of the night-time radiation, and a tiny fraction of the day/night difference in radiation, and an even tinier fraction of the daytime radiation; but it must be admitted that the “tiny fraction” varies considerably from Equator to poles.
You can try it with CSALT and put in a first-order CO2 lag right there in the interface.
http://entroplet.com/context_salt_model/navigate
It will just make the TCR value larger because the CO2 is being deferred from making an effect until a later time. That is the problem with a single exponential lag (i.e. first order), and what Caldeira and Myhrvold are discussing. You need at least 2 or 3 exponentials of differing time constants to be able to piecewise model the temporal behaviors.
I have not added the diffusional response to the CSALT interface yet because I don’t want to do that until the time is right. Showing the fast transient TCR is enough for me right now.
The issue is that Team Skeptic has these yahoos such as Roy Spencer and Nic Lewis that are intentionally trying to drop the TCR to very low values — and I want to stay conservative so that we can at least debunk their junk.
And so we also have you, Matthew Marler, who keeps trying to catch me in some semantic trap that is completely invalid. It’s annoying but keeps me at least engaged.
Matthew Marler, with continued forcing I think the land would overshoot the 3 C before the global average reaches 2.8 C. Currently we are lagging maybe 0.5 C behind the equilibrium, and that gap is not closing due to the rapid emission rate. For the last 40 years the ocean heating rate has been 0.125 C per decade, which seems to be some kind of limit because it is distinctly falling behind the land. At this rate it takes 80 years per degree. Eventually this might put a brake on the land warming, but so far they are diverging fast with land warming 0.25 C per decade (40 years per degree) in the same period. Consistently, the global average trend has been 0.16 C per decade (60 years per degree). (Numbers from HADCRUT4, CRUTEM4, HADSST3).
WebHubTelescope, quoting me: “I think in the AGW debate the blatant non-answering of important questions, like the gross exaggeration of the importance of every storm and fire, is a losing strategy.”
In that quote, the “non-answering of important questions” might be a criticism of you, though I meant it as a statement of your responses on this thread. “Gross exaggeration of every storm” was a different “strategy”, unrelated to anything that you wrote.
WebHubTelescope: And so we also have you, Matthew Marler, who keeps trying to catch me in some semantic trap that is completely invalid. It’s annoying but keeps me at least engaged.
I don’t perceive a semantic trap. I repeat a simple question: given that 3C or whatever is the “equilibrium” change, how long will it take for the globally averaged surface mean T (which is in your model) to increase 2.7C? And I repeat a related simple question: if there is “warming in the pipeline”, how much warming of the surface mean T will there be? And I repeat a fairly simple assertion or question: if ECS is a lot different from TCS, I do not see how any of those models that have CO2 only through lnCO2 can be accurate.
And lastly, I repeat the result of a simple derivative calculation: if T at time t is proportional to the ln of the CO2 concentration at time t (with high enough accuracy to be useful for planning for the future), then dT/dt = 0 unless dCO2/dt is non-zero.
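For reference, that last claim is just the chain rule applied to a zero-lag log-forcing model (a sketch, writing S for the sensitivity per doubling and C(t) for the concentration):

```latex
T(t) = \frac{S}{\ln 2}\,\ln\!\left(\frac{C(t)}{C_0}\right)
\quad\Longrightarrow\quad
\frac{dT}{dt} = \frac{S}{\ln 2}\cdot\frac{1}{C(t)}\cdot\frac{dC}{dt}
```

So in any such model the temperature stops changing the instant the concentration does, leaving no modeled warming in the pipeline.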
Jim D: with continued forcing I think the land would overshoot the 3 C before the global average reaches 2.8 C. Currently we are lagging maybe 0.5 C behind the equilibrium, and that gap is not closing due to the rapid emission rate. For the last 40 years the ocean heating rate has been 0.125 C per decade, which seems to be some kind of limit because it is distinctly falling behind the land. At this rate it takes 80 years per degree. Eventually this might put a brake on the land warming, but so far they are diverging fast with land warming 0.25 C per decade (40 years per degree) in the same period. Consistently, the global average trend has been 0.16 C per decade (60 years per degree). (Numbers from HADCRUT4, CRUTEM4, HADSST3).
My expectation does not match that, at least not today, it being closer to what Doc Martyn wrote. I can see how you might be right. I hope I can live long enough to find out.
I am sure the land will get higher than the fast transient indicates, since the water vapor coming from the SST heating is contributing to the land increase. As the SST continues to creep up, the land will further warm.
These are part of the medium-slow feedbacks that Hansen talks about. Albedo changes are the very-slow feedbacks and those further contribute to the uncertainty on the high side.
WebHubTelescope: I am sure the land will get higher than the fast transient indicates, since the water vapor coming from the SST heating is contributing to the land increase. As the SST continues to creep up, the land will further warm.
These are part of the medium-slow feedbacks that Hansen talks about. Albedo changes are the very-slow feedbacks and those further contribute to the uncertainty on the high side.
OK. Faster is faster than slower; and sooner is sooner than later. Water vapor effects contribute to uncertainty in albedo changes.
That’s why all the uncertainties are on the high-side of the PDF.
The low-side of the PDF is being attacked by the annoying ankle-biters and that’s really what these simple robust models help to solidify. Right now the low-side barrier is a TCR of 2C and if one sees anything much lower than this, the analysis is suspect.
WebHubTelescope: The low-side of the PDF is being attacked by the annoying ankle-biters
Til next time, be of good cheer.
To krige or not to krige,
That is the question.
Weather ’tis better to derive
A best linear estimation of
assumptions of covariance
based on Gaussian theorem
or take no action against a sea
of troubles, puzzles the will.
And makes us rather bear
The uncertainties we have
than fly to others that
we know not of.
Must give us pause …
With apologies ter the Bard.
The quality of data is not strain’d…
I love it!
So, not having access: did Cowtan and Way in essence perform co-kriging with the hybrid scheme? It also sounds that, unlike BEST, they used the error estimates from the kriging and not an external scheme. Just curious: does anybody here ‘in the know’, i.e., with a kriging background and paper access, know? Nice to see some cross-validation, though again their use of the term may differ from my ‘conventional’ expectations.
…interesting development…
If you would like to dig into it they have a very well organized site for the paper.
http://www-users.york.ac.uk/~kdc3/papers/coverage2013/methods.html
Oops, I put my response in the wrong place…
mwgrant | November 13, 2013 at 7:28 pm
http://judithcurry.com/2013/11/13/uncertainty-in-sst-measurements-and-data-sets/#comment-413018
Sorry about that Captain
mwgrant, can’t help beyond that, but they appear to be kriging the difference between UAH lower troposphere and Hadcrut which I think has some problems due to a shift in the magnitude of NH sudden stratospheric warming events. Their correlation after 1995 appears to be closer to the Northern extratropical stratosphere than the lower troposphere during DJF. That is a lot of temperature but not much energy.
Thanks, Captain. It will unfold in time, but certainly is nice to see the effort pop up at this time.
High accuracy of sea surface temperature (SST) is necessary because the world’s oceans are such an enormous reservoir of heat: very small changes in SST hide very large changes in stored heat. So it is important to know the sources of measurement errors and correct them. Because thermometers are not uniformly distributed over the oceans and land, particularly over the Arctic and Antarctic, sampling errors occur and have to be corrected.
However, none of the errors so far discovered significantly alters Ed Hawkins’ comparison of global average surface temperature with the IPCC-sponsored models. The models continue to exaggerate temperature after 1997. Since the IPCC models can’t replicate present temperatures, what confidence can we have in their future predictions? It appears that basically the IPCC have failed to come to terms with the on/off nature of climate change.
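The reservoir-of-heat point above is easy to quantify roughly. A sketch with round assumed values for ocean mass and specific heat:

```python
# Energy implied by a tiny change in mean ocean temperature.
# Both constants are rounded assumptions, not measured values.
mass_ocean = 1.4e21  # kg, approximate total ocean mass
c_p        = 4.0e3   # J/(kg K), approximate specific heat of seawater
dT         = 0.01    # K, a change far below SST measurement uncertainty

print(mass_ocean * c_p * dT)  # about 5.6e22 J stored or released
```

That is an enormous amount of energy hiding behind a temperature change no thermometer network could resolve.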
The climate consensus marches backward from absolute knowledge to ever increasing ignorance.
Most scientific disciplines start from virtual scratch, and slowly build a base of knowledge that dispels the initial ignorance.
In climate science, we had a couple-year incubation period beginning with Hansen’s 1988 congressional theater of the absurd. In no time, “we” knew what the temperature was within tenths of a degree. We knew that currently, then tens, hundreds and thousands of years in the past. Almost simultaneously we knew future temperature trends to within tenths of a degree per decade. And sea level rise to within a few millimeters a year.
Not to mention the ability to predict droughts, and famines, and pestilence and earthquakes, and lions and tigers and bears oh my.
But as climate science continues its march backwards into reality, we finally “learn” that with hundreds of millions of dollars worth of sophisticated climate models, and an additional 25 years of research, we now know that we know less than we thought we knew. Albeit with much more certainty. (Excuse me while I laugh my a** off for a minute.) (And don’t get me started on the recent warmist fad that Hansen’s 1981 model was more accurate than his 1988 model – which only proves he was becoming more wrong about climate as he went along.)
Is it nice to see some peer-reviewed papers admitting that kriged, assumed, extrapolated, estimated means of anomalies with poor global coverage do not give us “global average temperature” to within tenths of a degree? Sure.
But who seriously needed formal papers to tell them that? This global average temperature myth is like Obama’s “You can keep your plan if you like it. Period!”
Everybody knew he was lying through his teeth.
Is there any commenter here, warmist or skeptic, who actually believes we know the global average temperature of the entire Earth climate system with anything approaching the precision claimed? (And feel free to substitute “total global heat content” for GAT.)
I have no problem using a convenient fiction, like money or average temperature, if it is useful.
This damned paywall is a bit of a blow with respect to analyzing the paper.
If I walk into a dealership with enough cash, I can walk out with a Ferrari. If I try to govern a nation with CAGW’s inflated claims of precision, I will bankrupt it.
It amazes me that folks still continue to ignore the words of people in the instrumentation and metrology fields. Engineers and technicians from those fields frequently post that claimed climate temperature reading accuracy is much too high. These are the folks who have studied, calibrated, and worked with the instruments used by climate scientists. Dismissing their comments as ignorance of statistical methods is plain silly. Folks, taking the average of larger amounts of inaccurate data simply gives you a more precise average of inaccurate data. The original accuracy specifications of the instruments and the test method must be accounted for, not ignored.
The point is that no instrumentation engineer would ever sign off on a claim that instrument calibration errors and drift amounts may be considered random. No instrumentation technician would sign off on the accuracy of an instrument until you brought it to him for a calibration check. Why would they be so picky about something that climate scientists seem to think is of no problem? That is simply because those engineers and technicians see the problems inherent in obtaining and maintaining high accuracy measurement capabilities.
In fields like instrumentation in nuclear power plants, test instruments and gauges are run through a calibration lab both before and after a field calibration or test procedure. The instrument is checked before going out for the test to make sure it starts out within stated tolerance and then again after the test to verify the instrument remained within tolerance during the test. (It might be of interest that for safety analysis in nuclear power plants, using expensive high accuracy and high reliability sensors, overall instrument accuracy is assumed to be no better than +/- 10% of full range.)
SST and sea water temperatures at depth accurate to 0.001 degree Centigrade over months of operation? Not gonna find any technician to sign off on that!
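The statistical point in the comment above is the distinction between random and systematic error: averaging beats down the former like 1/sqrt(N) but never touches a shared calibration bias. A minimal sketch with made-up numbers:

```python
# Averaging N noisy readings: random error shrinks as 1/sqrt(N),
# but a common calibration bias survives at full strength.
import numpy as np

rng   = np.random.default_rng(1)
truth = 15.0  # true temperature, deg C (made up)
bias  = 0.3   # shared calibration offset on every instrument (made up)

for n in (10, 1_000, 100_000):
    readings = truth + bias + rng.normal(0.0, 0.5, n)
    print(f"N={n:7d}  mean={readings.mean():.3f}")  # approaches 15.3, not 15.0
```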
GaryM
re; “You can keep….well, not much”
A comment posted at Yahoo of all places.
rams1956
“While suturing a cut on the hand of a 75 year old rancher, the doctor struck up a conversation with the old man. Eventually the topic got around to Obama and his role as our president. The old rancher said, ‘Well, ya know, Obama is a “Post Turtle”.
Not being familiar with the term, the doctor asked him, what a ‘post turtle’ was. The old rancher said, ‘When you’re driving down a country road and you come across a fence post with a turtle balanced on top, that’s a ‘post turtle’.
The old rancher saw the puzzled look on the doctor’s face so he continued to explain. “You know he didn’t get up there by himself, he doesn’t belong up there, he doesn’t know what to do while he’s up there, he’s elevated beyond his ability to function, and you just wonder what kind of dumb #$%$ put him up there to begin with.”
Instantly thought of many gravy train post grads.
Except many of us who voted for Obama think he is still far, far better than the alternative.
http://sunshinehours.files.wordpress.com/2013/06/dailynormals_2013-05.gif
“Using the stations in Canada with Environment Canada calculated anomalies, here is the month of May visualized using the mean temperature for each station for each day.
You might have to click on the image or refresh the page to restart it.
The black circle in the top left corner represents a 5 Celsius anomaly from the 1971-2000 average.
Blue are below normal. Red above.”
How would satellite data actually predict the data I have shown considering how much it varies from day to day in magnitude and sign?
Can weather be predicted by satellite at every one of those locations?
No way.
Why would any think a satellite is a replacement for a ground station?
@Captdallas
I was at the site before posting my comment. No answer there and I really would not expect it to be there. But thanks for the link anyway. I agree they did a nice job with side dishes at the site but the publisher has the meat.
With cokriging being around for decades, the question would be easily answerable in the form of a simple declarative sentence by someone familiar with both geostatistics at a working level and the paper and any supplemental text. If someone can say, ‘Yes, they co-kriged the SST and UAH’, then I would know what they did in the ‘hybrid’ approach. If the answer is ‘No’, then ‘hybrid’ remains uncertain (at the level of my interest) behind the paywall – no heartburn, just uncertainty. Then maybe I would poke around in the code, but probably not. Lately, I’ve been looking at some variograms using the USA NCDC data and that suffices to keep me busy.
Given what I can learn about the work from a distance it looks interesting. I’ll just wait and see what unfolds.
Regards, mwgrant
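Since the paper itself is paywalled, here is what plain textbook ordinary kriging looks like, so readers can at least see what the ‘hybrid’ scheme is being compared against. This is my own sketch with an assumed exponential covariance model; it is emphatically not Cowtan and Way’s code:

```python
# Ordinary kriging at one unsampled point: solve [C 1; 1' 0][w; mu] = [c0; 1]
# for weights w that sum to 1, then estimate as w . values.
import numpy as np

def exp_cov(h, sill=1.0, length=1500.0):
    """Exponential covariance; h in km (sill and length are assumptions)."""
    return sill * np.exp(-np.asarray(h) / length)

def ordinary_kriging(d_ss, d_s0, values):
    n = len(values)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_cov(d_ss)  # sample-to-sample covariances
    A[n, n] = 0.0              # Lagrange-multiplier corner
    b = np.ones(n + 1)
    b[:n] = exp_cov(d_s0)      # sample-to-target covariances
    w = np.linalg.solve(A, b)  # weights plus Lagrange multiplier
    return w[:n] @ values

# Toy example: three stations and their anomalies, distances in km.
d_ss = np.array([[0.0, 800.0, 1600.0],
                 [800.0, 0.0, 900.0],
                 [1600.0, 900.0, 0.0]])
d_s0 = np.array([500.0, 1200.0, 2000.0])
print(ordinary_kriging(d_ss, d_s0, np.array([0.4, 0.1, -0.2])))
```

Co-kriging extends the same linear system with cross-covariances between two variables (here, the surface and satellite fields), which is why the question of whether the hybrid method is formally a co-kriging matters.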
Step 1: Take a dataset riddled with errors and biases, leading to error bands larger than the effect that you want to “find”.
Step 2: Declare that the dataset is wonderful as is, and perfectly good for informing thousand trillion dollar decisions.
Step 3: Selectively eliminate some of the errors in the dataset that drive the results toward your pre-selected “finding”. It’s worse than we thought!
Step 4: Declare the newly improved dataset is wonderful as is, and perfectly good for informing thousand trillion dollar decisions, while the old dataset is now hopelessly biased and useless for decision making, especially if the recent data in that version is trending in inconvenient directions.
Rinse, repeat.
So, once the data is adjusted properly to match the theory, we are safely back on course for catastrophe, unless Something Is Done Immediately.
Fortunately, it appears that the entire US government has been spurred into action and within a couple of years after the recently announced onslaught of regulations and taxes, we will learn the following:
a. The results of the first, rudimentary efforts at controlling CO2 are in and they show that the Climate Scientists were right all along: global warming has been slowed, as predicted, and the Big Carbon shills posing as ‘skeptical scientists’ have been discredited, once and for all.
b. Although the reduction in CO2 achieved by our tentative first efforts has SLOWED global warming, it has also confirmed that anthropogenic CO2 continues to pose an existential threat that MUST be confronted. Therefore, we will immediately begin implementing the additional taxes and regulations required to achieve the 90+ percent reduction in anthropogenic CO2 that the Climate Experts have been recommending for years, but which have been blocked by extremists financed by the Carbon Industry. We have tolerated the stonewalling of these denialists long enough; we can no longer afford to wait. And won’t.
Bob Ludwick
It is certain that if it gets warmer than normal, for any reason, oxygen & nitrogen expand INSTANTLY and release the extra heat in a jiffy. GLOBAL warming is a concocted myth, by dishonest people that don’t know how to do anything positive for society: http://globalwarmingdenier.wordpress.com/
stefenthedenier,
But, but . . . according to the Book of Warm, oxygen and nitrogen are non radiative gases. If they don’t radiate, they are obviously at absolute zero. Due to the magical radiative powers of CO2, we only “think” that oxygen and nitrogen are actually radiating, and only “think” that the atmosphere is gaseous.
Warmists are an odd lot, Stefan. Maybe aliens stole all their brain waves. I can’t think of any rational explanation for their beliefs. It’s different for religions, but this crew claim to be scientists. Scientivists, more like it.
Luckily, as the money runs out, this mob should be the first to be “let go”.
We can only hope!
Live well and prosper,
Mike Flynn.
Mike, when the soil warms extra -> vertical winds increase and equalize in a jiffy. You cannot feel vertical winds on the ground, because that is their starting point, but higher up they can keep a man on a glider for hours in the air. Those people with hang-gliders can tell you that where the ground is hotter, those winds are stronger – that’s why they don’t like rice paddies and swamps. O&N regulate the overall temperature on the whole planet to be always the same – if one place gets warmer than normal, other places get colder than normal. cheers! http://globalwarmingdenier.wordpress.com/climate/
Don’t know if adding new data points will remove all the “pause”, for certainly the sleepy sun and cool phase of the PDO, along with a moderate increase in natural aerosols, have brought some negative forcing to tropospheric temperatures, but whatever pause this brought, it’s well over now for Australia, with 2013 set to be that country’s hottest year on record:
http://www.bom.gov.au/climate/change/index.shtml#tabs=Climate-change-tracker&tracker=trend-maps
This of course is driving the Aussie climate “skeptic” nutters crazy.
The sea ice extent in the Arctic (much more important to R Gates) has increased dramatically over the same period, causing extreme agitation to Neven, Gatesy and all warmists, driving American and European climate “warmist” nutters crazy and loopy.
By the way, is this the same R Gates who has recently taken to saying that tropospheric temperatures are not reliable, as the heat is stored in the oceans, and we should disregard the pause for this reason?
No, it must be a cherry picking imposter. LOL
It’s winter.
You should have been skeptical of the pause because it made no sense. Unless you had confirmation bias. Then it made a lot of sense.
Heat is stored in the ocean.
Starting at a huge El Nino and ending at a huge La Nina, it never occurred to you that that might possibly be a cherry-pickin’ thing to do?
You must be one of those Aussies confused about how your record warm year fits in with the meme that the “globe is cooling my friends”. Here’s a hint: it doesn’t. One of those doesn’t fit and will cause you increasing cognitive dissonance.
R. Gates
Tell us about it after it has occurred, Gates, not while you are skeptically speculating that it might occur.
Max
Only 6 weeks left in the year, Max, and the Australian summer has been starting out quite a bit warmer than average.
JCH. It’s winter, duh.
Read what I actually wrote.
Re heat stored in the oceans. If the oceans were hotter, i.e. storing more heat, then the atmosphere would be warmer as well.
In other words, if the oceans had been storing more heat, i.e. hotter, for 17 years, the atmosphere would have been hotter for the last 17 years (no pause).
Or heat is shared, you know – basic science 101.
Glad you admit there is a pause, perhaps it will give you pause for thought.
By the way, if a pause is real, it will not matter where the El Niños and La Niñas fall, as there will always be several of both in most 17-year pauses, and your argument can always be made (wrongly), since there will nearly always be an El Niño somewhere near the start of any real pause.
R Gates, this year is on course to be the 5th coldest of the last 10 years worldwide. This will of course change 2012 and 2011 into being the 10th and 11th coldest years this century, from 9th and 10th.
Thank god Australia was warmer for you this year or your global warming would have really gone down the chute. So the last 3 years have been the 5th, 10th and 11th coldest out of 13 years this century.
Seems more like free fall on these cherry picked examples alone.
Angech,
Your examples are even better than Tisdale’s psychotropic cherries. 2013 is on track to be the warmest non-El Niño year on record, and the last La Niña year was the warmest La Niña year on record. Given the huge influence of ENSO on these temperatures, this shows remarkable underlying warming, completely shattering the “globe is cooling” meme, but “skeptics” don’t see it that way through their psychotropic-cherry induced haze.
Warm Aussie temps may “be driving Aussie climate ‘skeptic’ nutters crazy”, BUT
The fact is that since the new millennium started (January 2001), global temperature (HadCRUT4) has been cooling.
And 2013 is set to become the 8th warmest (or 6th coolest) year of the millennium
1 2010 0.547C
2 2005 0.539C
3 2003 0.503C
4 2006 0.495C
5 2009 0.494C
6 2002 0.492C
7 2007 0.483C
8 2013 0.474C
9 2012 0.448C
10 2004 0.445C
11 2001 0.437C
12 2011 0.406C
13 2008 0.388C
And that must “be driving the climate ‘warmist’ nutters crazy”.
Right?
Max
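A trivial way to check that ranking, sorting the anomalies exactly as quoted (the values are as listed above, not independently verified):

```python
# HadCRUT4 annual anomalies (deg C) exactly as listed above.
anoms = {2001: 0.437, 2002: 0.492, 2003: 0.503, 2004: 0.445,
         2005: 0.539, 2006: 0.495, 2007: 0.483, 2008: 0.388,
         2009: 0.494, 2010: 0.547, 2011: 0.406, 2012: 0.448,
         2013: 0.474}

ranked = sorted(anoms, key=anoms.get, reverse=True)  # warmest first
rank = ranked.index(2013) + 1
print(f"2013: #{rank} warmest of {len(ranked)}, "
      f"#{len(ranked) - rank + 1} coolest")
# -> 2013: #8 warmest of 13, #6 coolest
```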
Max said: “And that must be driving the climate ‘warmist’ nutters crazy. Right?”
Max, Doesn’t drive me crazy. The CSALT model is the equalizer:
http://imageshack.us/a/img818/3699/2yd.gif
I had been blogging that the last couple of years of data weren’t matching the CSALT model, yet this paper comes along and it makes sense.
http://img534.imageshack.us/img534/3678/nj7d.gif
Webby
Your CSALT model doesn’t provide evidence of anything.
The HadCRUT4 record (with all its known ex post facto adjustments, warts and blemishes) does provide empirical evidence that the global average temperature (whazzat?) is not rising, but cooling slightly.
And that “must be driving the climate ‘warmist’ nutters crazy” (as Gates puts it).
Max
Max said:
“Warm Aussie temps may “be driving Aussie climate ‘skeptic’ nutters crazy….”
—
Yes it is.
Sure, and when I was there a few weeks ago the farmers were complaining that the grape harvest was being wiped out by the unseasonal frost. I think the heat is of that special gridded, adjusted and interpolated sort that is only noticeable by climate scientists, while us simple peasants only notice that we have to scrape ice off the windscreen.
Gates, if 2013 is the hottest year on record in Australia, please answer the following:
1. How long is “the record”?
2. What, exactly, is “the record” – how many temperature sensors, in the same locations, for how long?
3. What number do you get when you divide the length of “the record” by the estimated age of Australia?
4. What percentage of the total existence of Australia does “the record” cover?
5. In light of 1-4, how much sense does the comment “… with 2013 set to be that country’s hottest year on record” make?
And finally, for extra credit, one more:
6. I assert that 2013 was actually ranked as the 22,013th hottest year ever in Australia. Can you prove me wrong?
I reckon you are wrong.
In the past 500 million years, the planet has been without ice at either pole for about 75% of the time. Even allowing for the fact that the Australian land mass cruises between the south pole and way north of the equator over that time, let’s just assume that for the 75% of the 500 million years when there was no ice at either pole, Australia was warmer than now. On that basis, Australia has been warmer than now for 75% x 500 million = 375 million years.
Therefore, I suggest 2013 was actually ranked approximately 375 millionth hottest year!
:)
;-)
Seems to me that saying that 2013 is the hottest year on record for Australia is like someone who has never driven anywhere but New Jersey saying that exit 3 on the New Jersey Turnpike is the worst road design on record…
Funny that it’s only Warmists that I see talking about “hottest ever”… Wonder why?
When they talk about the hottest year “on record”, they are speaking of the instrumental record. Speculation of temps before the thermometer period goes beyond the scope of what the BOM has stated, and is just noise-making.
“2. What, exactly is “the record” – how many temperature sensors in the same locations for how long.”
The Australian national temperature record starts in 1910. Though there are earlier temperature measurements, there are fewer weather stations reporting before then. For more information, make smart work of your browser search function. Information can be found at the Australian Bureau of Meteorology website.
I’ve recently communicated with the BOM about the alleged, recent record-breaking twelvemonth. It took a fortnight for them to respond, and the answers were helpful, giving links for me to check them out. Any truly interested party can do likewise. Those with more vested interests are still at liberty to be argumentative rather than learn anything.
Something I learned from the exchange is that there is no official (single-number) uncertainty measure for the Australian national temperature record, owing to the difficulty of resolving structural uncertainties, like spatialization. But they offered an annual temperature uncertainty of 0.1C, on par with the global figure (which does have a formal uncertainty measure), on the reasoning that the Australian weather station network is denser than the global one.
That brings up further queries for me. People with agendas instead come up with objections. A subtle, but critical difference in thinking that separates scientific analysis from propaganda.
What good is good data? I just watched MSNBC ‘news’ report that polar bears are threatened because of disappearing arctic ice. NBC has become the Huffington Post of broadcast media. Facts simply don’t matter.
Since ice is actually increasing in the Arctic, the polar bears have turned their attention to logging on to the Obamacare website. Unfortunately, they are getting stressed over not being able to log on or acquire healthcare and are losing their fur…
..so we are back to the “polar bears are in danger” meme.
PS I have inside sources on this info.
– Teddi Bear
Naw, Teddi, the reason they’re losing their fur is because it’s getting so warm there because of your SUV (and my pickup truck).
Max
PS Obama said so.
Given that parts of Alaska are apparently descending into an ice age, it could be that the bears are afraid Canadians will be hunting them for their skins, again.
I find it interesting that both Alaska and the SSTs in the Bering Strait show cooling, yet the constructed temperatures in the adjacent Arctic show strong warming. Perhaps I’ll see if I can find a free copy of the paper to find out why.
It is an interesting approach, and appears to have been carefully done. A couple of initial comments:
1) The reanalysis data seems very far out of line, even compared to the ‘hybrid’ reconstruction. I was particularly struck by regions with reasonably good instrument coverage where the reanalysis data was not close. It may have been prudent to point out in the conclusions that the results suggest the reanalysis data may suffer a substantial positive bias, and so should be used with caution. I am reasonably sure that the reanalysis ‘temperature data’ has been widely used in other studies… and maybe that is not a good thing.
2) The authors correctly note the potential contribution of multi-decadal cyclical behavior to recent arctic warming, along with possible contributions from soot (black carbon) on snow and ice, changes in albedo due to recently exposed land (from ice melt), and of course, polar amplification of GHG driven warming. However, the relative importance of these remains unclear. It would seem to me prudent to extend the kriging reconstruction back through much more of the Hadley temperature record, in order to better evaluate the potential biases from sparsely covered high latitude regions in earlier times. Focusing on only the post-satellite period does not provide a sufficiently broad perspective to evaluate multi-decadal cyclical contributions.
steve,
reanalysis data is sketchy. Sometime after AGU, hopefully Zeke and Robert and I will post our poster comparing hi-res surface measurements to reanalysis (MERRA and NARR) and to RSS and UAH.
Part of the issue may be the data sources they use .
More later
What I notice in the response to this paper by the denizen skeptics is something akin to confirmation bias in reverse. Results are presented that disaffirm your favorite concept (the pause perhaps), and immediately the reaction is to disbelieve the methods and criticize the authors, rather than first trying to understand what they did. Don’t criticize confirmation bias if this is what you do. I, on the other hand find this result gratifying because it does confirm some things I thought should be happening, particularly the magnitude of Arctic warming being missed. It is not confirmation bias at all if it is good science.
Jim,
Highly relevant to what was missing on the last thread – what should ‘skeptics’ do to earn the trust of scientists.
This thread has been a great example of precisely what not to do.
Certainly we’ll have a couple of scientists who now have less reason to trust the input of ‘skeptics’ after their interaction at Climate Etc.
(1) Have the authors done anything to stimulate distrust? Have they violated any of the “skeptics” rules of engagement? Have they followed the suggestions for how to gain trust?
(2) Have the “skeptics” indicated trust?
Answering these questions should help to illuminate why this problem is more complex than outlined in the previous post.
Michael,
I’m not sure whether you can even come up with any reason for anybody at all to earn the trust of a scientist.
Why would a scientist care about a person’s “input”?
Facts are facts. Whether people agree or not, makes no difference at all.
If I am wrong, I am sure you will speedily correct me.
Live well and prosper,
Mike Flynn.
Joshua,
1. In your typical Warmist way, you are trying to play the victim. So the poor authors are not being “trusted” by the readers. Boo hoo. Cry like a baby, have a tantrum – who cares?
2. Who cares?
You are definitely confusing me, Joshua. Confusing me with somebody who cares what you think.
Live well and prosper,
Mike Flynn.
Mike,
There have been calls here for scientists to ‘engage’ more with skeptics.
I’m just suggesting that if they could trust that such an activity wasn’t a complete waste of their time, they might be more inclined to do so.
Michael wrote:
There have been calls here for scientists to ‘engage’ more with skeptics.
Scientists already are skeptics — it’s drilled into them from day 1.
So what is it you’re really trying to say?
David,
Sorry that should be ‘skeptics’.
I think this is an important point. But I hope the authors will ignore the “noise” present in any blog’s comments. Plenty of us were extremely pleased to see the authors show up here and comment, and it certainly made us trust them more. (Mosher’s letter of reference didn’t do any harm, either.)
I think it would be unreasonable of scientists to expect nothing but politeness when they do engage; it’s not the nature of the medium. I hope that they will notice those of us who appreciate their being here.
There are some immediate reactions to papers that elucidate the internal workings of the mind. It is a kind of skeptical normativism that resists change or attacks concepts not fitting it. The “pause” is now an established part of that norm, and this study for sure did not fit with the program.
The “pause” is now an established part of that norm…
+1 … oh, fiddly! +5 !
Has anyone bothered to tell the ‘pause’? “Excuse me, when does the next swan leave?”
Jim D
“I, on the other hand find this result gratifying because it does confirm some things I thought should be happening, particularly the magnitude of Arctic warming being missed. ”
Confirms? The work (Cowtan and Way) is a set of analyses (essentially interpolation), not observations. Certainly it potentially informs future observations and invites further comparison with observation, and that is what it should do. The road is long. I just hope it represents an improvement in the evolving methodology… from BEST to better? ;o). Time will tell.
I do not deny having confirmation bias which is why I use that word. Confirms just means it fits with other independent lines of evidence that I already tend to trust, like what’s happening to the sea ice recently.
“not observations.”
arctic buoys. never used before. guess what?
“arctic buoys. never used before. guess what?”
Touché. It is there in Way’s comment. Thanks, I am a poor scanner. I assume that was limited data used to develop and/or test the UAH surface relationship applied more generally to UAH observations where no buoy was present… icing on the cake, so to speak. Is that correct? To me the present nugget in the paper is the use of coregionalized RVs [or something similar].
I can’t say this is groundbreaking. We have known for some time that Arctic ice has diminished over several years. So the circumstantial evidence is in line with the paper, generally speaking. I don’t know how solid a number theirs is, or the range, but given the paucity of data, I wouldn’t bet the farm on it. And it also raises the question of the cause of the Arctic warming. Still, it is a contribution to climate science.
Jim2
For me life is simpler. Cowtan and Way looked for an approach to more coverage and to the effects of the missing areas of data on estimates. Their approach seems reasonable and consistent with geostatistical practice [I say ‘seems’ because of the paywall constraint – not a complaint, just a caveat on my perception.] From the point of view of the ‘missing data’ regions it makes observation(s)–actually a prediction. This may be subject to testing by future observations. A method for imputing [someone needs to introduce this term!] the missing data is laid out.
Frankly these seem to be all good things to me, and substantive for the times. Good for them. I hope they handle the publicity side well. Too much flash compromises the goods.
“From the point of view of the ‘missing data’ regions it makes observation(s)–actually a prediction. This may be subject to testing by future observations. A method for imputing [someone needs to introduce this term!] the missing data is laid out.”
Precisely.
One of the things that folks should know is that there are data rescue efforts going on. SO, we use methods to predict (estimate) what temperature would be at a location where we have no real measurement.
After data rescue efforts we then have a list of station data from the past that has recently been digitized. We can now compare this to our prior estimate.
For example, Robert Way recently sent me a comparison of what our method predicted for a station in one area of the world and the actual record which was recently recovered. And yes it matched.
“And yes it matched.”
Noted. Matching probably is not/should not be a surprise, but it is always pleasing when estimating, and very nice to say.
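A minimal sketch of the kind of check described above – score a prior prediction against a newly digitized record. The series here are synthetic placeholders, not real station data:

```python
import numpy as np

# 'recovered' stands in for a newly digitized station record; 'predicted'
# stands in for what the interpolation method estimated for that location
# before the record was available. Both are synthetic, for illustration only.
rng = np.random.default_rng(0)
recovered = rng.normal(0.0, 0.5, 120)               # 10 years of monthly anomalies
predicted = recovered + rng.normal(0.0, 0.2, 120)   # estimate with some error

rmse = np.sqrt(np.mean((predicted - recovered) ** 2))
corr = np.corrcoef(predicted, recovered)[0, 1]
print(f"RMSE = {rmse:.2f} C, r = {corr:.2f}")       # low RMSE, high r = "it matched"
```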
Jim D | November 13, 2013 at 10:25 pm:
Forest for the trees, Jim. The data is grossly undersampled geographically, so the authors are trying to find creative ways to fill it in.
The major issue, however, is the minuscule length of the record of reliable temperature measurements that is available.
Even if one stipulates that the temperature trend for the past 50, 40, 30, etc. years is the highest on record, who cares? That’s akin to saying the last 50, 40, 30 yards we’ve driven of Interstate 90 in the US are the worst on record! Well, we’ve only driven on, what, 300 yards, and the road is 3,101 miles long.
Perhaps our perspective isn’t as broad as we think.
We might see that land temperatures have risen 0.9 C in the last three decades, which is unprecedented in the record. Some might not care. That’s fine. We don’t all have to be interested in understanding why this is happening.
Yes Jim D, and the condition of the road in the last 30 yards we drove was unprecedented in the record! OMG, we must tear it all up and start again.
Doesn’t Chicken Little mean anything to you?
When the “pause” goes paws up has always been just a matter of time. Before it even existed will do nicely.
Anomalously, we’re told, snow this early in Chicago, today.
Actually that is only 2 days earlier than average.
As a former Downers Trojan, I know Chicago is rather cold.
As a northern neighbor of Chicago I actually breathe much heavier than I need to… trying to warm the damn place up!
Always fun to see the bellmen outside the Chicago hotels — dressed like “Bear Claw” in Jeremiah Johnson — while out-of-towners stand in line in short sleeve shirts… freezing!
Take your “pause goes paws up” argument to the Sun, it’s not cooperating.
It’s already virtually a done deal. The napping sun is a spineless cooler. All of the coolers are spineless. When it comes to snot knocking, team cool is light in the loafers. And they don’t have finesse either.
JCH,
Are you reading from the Book of Warm?
It defines “virtual” as “actual”, from memory. As in “virtually a done deal”.
This means “it hasn’t happened yet, but we’ll pretend it has. We’ve got away with it for years, and nobody’s woken up so far!”
Then you are supposed to throw in some meaningless phrases. “Snot knocking” is a good one, because nobody knows what it means.
“Light in the loafers” is also good, because you can always pretend it is just another meaningless phrase, which of course it is.
I’m pretty light in the loafers, which hopefully allows me to float like a butterfly and all the rest. Finesse? I’m sure I can leave that to the followers of the Way of the Warm. Here’s one definition : –
Finesse : – To handle with a deceptive or evasive strategy.
Live well and prosper,
Mike Flynn.
SST measurements are one of the best data sets we have for assessing climate in the long term: a longer span and a better spatial distribution over time than any other measurements available because SST was routinely collected by merchant ships. Add to that the integrating effect of the ocean’s mixed layer and the absence of urban heat island effects. Why then is there this concern with “improving” these data? My analysis of the HadSST2 data set showed that it exhibited much greater variance than the output from the HadCM3 model almost everywhere on the globe. This greater variance can mostly be attributed to ocean current boundary variations. It is not due to “measurement error” but rather reflects the essentially stochastic nature of climate, something that is not captured by climate models which are deterministic. This endless reprocessing of SST observations sounds like an attempt to rig the data to fit the models. My paper on this topic can be found at
http://www.blackjay.net/papers/climate-modeling-hypothesis-testing/index.html with some introductory remarks at
http://www.blackjay.net/papers/index.html
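The variance comparison described above is simple to reproduce in outline. A sketch for a single grid cell, with synthetic stand-ins for an observed HadSST2 series and the corresponding model output:

```python
import numpy as np

# Synthetic stand-ins for one grid cell: an observed SST anomaly series and
# a model series with less month-to-month variability. Real data would be
# read from the gridded HadSST2 and HadCM3 files instead.
rng = np.random.default_rng(1)
obs = rng.normal(0.0, 0.6, 600)    # 50 years of monthly anomalies
model = rng.normal(0.0, 0.4, 600)

ratio = np.var(obs, ddof=1) / np.var(model, ddof=1)
print(f"variance ratio (obs/model) = {ratio:.2f}")
# Repeating this cell by cell and finding ratios well above 1 almost
# everywhere is the pattern the comment reports.
```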
True, true – a human signal does not exist at all without manipulating the data and pointing to statistical models that real-world observations invalidate altogether. The only correlation observed between increased CO2 and global warming is the other way around: the historical record shows that increases in atmospheric CO2 follow periods of global warming. The lag time is measured in centuries – 1000±500 years (Wahlen et al. 1999).
If cigarettes in Chicago will be $14.01 per pack, what would the majority on the Left wish to charge per gallon of gas if they could escape the burden?
I wouldn’t propose a tax I wouldn’t be willing to pay. So there is no answer to your question. I could go as high as 3 bucks a gallon, if the proceeds were distributed back to the population as a dividend. With no returns for those making over a certain income.
That number should be less than what this liberal hard working progressive earns producing a product that hopefully will lead to an improvement in the prognosis for those suffering from a common malady. Amyvid and Vizamyl hopefully will lead to treatment some of the denizens on this blog sorely need.
If the gubmint raised my taxes and balanced the budget, I would be happy.
I am no longer willing to pay cigarette taxes, but on other combustible vegetative matter, I would go much higher.
Apparently, smoking is a risk factor for dementia and yet may prevent Alzheimer’s… it just depends on who you ask – e.g., does dietary cholesterol really cause heart disease?
From the quoted paper (WRT random errors);
“Although they might confound a single measurement, the independence of the individual errors means they tend to cancel out when large numbers are averaged together. Therefore, the contribution of random independent errors to the uncertainty on the global average SST is much smaller than the contribution of random error to the uncertainty on a single observation even in the most sparsely observed years”
I am afraid that the climate science community still has a very poor understanding of “errors” and “uncertainties” WRT historical temperature data records. And the community is misapplying the “law of large numbers” in an almost obscene way.
Let me explain with a few engineering examples;
1) I wish to make a piece of metal that is 100 inches long out of one hundred individual 1 inch long pieces (I would never do that, but this is an example after all). So I tell 100 vendors to make me a piece of metal that is 1 inch (plus or minus 1/8 inch) long. Yes, when I assemble those into a finished assembly the “law of large numbers” will likely result in an assembly that is 100 (+/- .01”) inches long. I am counting on the statistical distribution among the vendors’ private errors to cancel out. Each vendor is independent and has differing degrees of “competence”. If I was really “frugal” I could ask 99 really cheap vendors to make me 1” (+/- 1/2”) metal blocks and then I could hire one expensive competent vendor to measure all those blocks and make me one (1) final block that makes the final assembled length equal to 100 (+/- 0.001”) inches. In the engineering trades this is known as “shimming”; we often make an assembly with more cost-effective “mass market” parts (volume is the main driver of ultimate costs) and then adjust the final assembly to meet the required specification with an assortment of relatively cheap parts. In some cases we make “shims” that are explicitly “oversized” so we can remove some material by grinding once we know the final “stack-up” of all the other dimensions in the assembly. The trick is to make sure you have a “shim” with a positive value; skilled as engineers are, we still have not figured out how to make a part that is MINUS 10 thousandths of an inch thick (although it is quite easy to do in a spreadsheet).
2) I wish to know the absolute voltage (hint/hint: temperature) present at one location in my factory. I have an old factory powered by DC electricity (I was buddies with Thomas Edison, like George Eastman was) so we can ignore details like “True RMS Voltage” for a moment. So I purchase 100 voltmeters from several reputable firms. The spec sheet for each meter says it has an absolute accuracy of +/- 1%. So I now proceed to make 100 voltage measurements by connecting/disconnecting each of my meters one at a time to the “measurand” (i.e. the wires whose voltage I wish to determine). So after a few hours I have 100 measurements of the absolute voltage at that point in my factory. Now the question becomes: how well do I know the absolute voltage at that point? Folks that misunderstand the “law of large numbers” would likely say: well, that’s 1% averaged over 100 individual meters, so you know the voltage to 0.01%. WRONG. Without specific prior knowledge of the statistical distribution of the errors in each meter (which you do not have; remember, each meter is only good to +/- 1%, no more) you still (after 100 measurements) only know the absolute voltage to +/- 1%. If you want to know it better you just need one +/- 0.01% meter.
Just one more example of the “law of large numbers”: the machines that make the spheres used in ball bearings were invented by the Germans before WWII (look up “centerless grinding”, and the air raids against Schweinfurt). In fact most ball bearings are still specified in metric sizes because of this heritage. The process is relatively inexpensive and can make great quantities of spheres. However, it can be quite difficult to adjust the machine to spit out millions of spheres that are accurate (knowledge-wise) to 0.0001 of an inch. So the simple manufacturing plan is to make lots of spheres knowing that there will be a large (+/- 0.01”) spread (statistical distribution) of final sizes. Then you sort through the spheres (easy to automate) and pick out the 0.1% that have the highest accuracy. The “law of large numbers” simply says if you make a million spheres one thousand of them will be “very/very” accurate. And then of course you can sell them at a premium, even though they cost you exactly the same to manufacture as the “crappy” (+/- 0.01”) spheres.
The “law of large numbers” does not say that multiple measurements from independent instruments will yield a better accuracy; sorry, but that is an incorrect interpretation by the climate science community.
Cheers, Kevin.
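The distinction Kevin draws between random and shared (systematic) errors is easy to see in a small Monte Carlo – a sketch with illustrative numbers, not a claim about any particular instrument:

```python
import numpy as np

# Averaging N independent readings shrinks the RANDOM error roughly as
# 1/sqrt(N); a calibration bias SHARED by all instruments does not shrink.
rng = np.random.default_rng(42)
true_value = 100.0
n_instruments = 100

# Case 1: independent random errors only (standard deviation 1 unit).
readings = true_value + rng.normal(0.0, 1.0, n_instruments)
print(f"random only: mean error = {readings.mean() - true_value:+.3f}")

# Case 2: the same readings plus a common +0.8 calibration bias.
biased = readings + 0.8
print(f"with bias:   mean error = {biased.mean() - true_value:+.3f}")
# The first mean error sits near 0.1 (~ 1/sqrt(100)); the second stays near +0.8.
```

Which of the two cases better describes a given instrument network is exactly the question the review’s treatment of pervasive systematic errors is about.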
Kevin,
Just another example of the breathtaking unfounded assumptions by the pseudo scientific climatological measurebators.
Live well and prosper,
Mike Flynn.
Mike,
I can access John Kennedy’s draft paper in full but not the Cowtan and Way paper. Has anyone got a link to the full paper as otherwise there seems to be a lot of discussion here about an abstract and articles carried by such sources as the Guardian?
Judith has linked above to my article carried here a couple of years ago on the uncertainty of SST’s which drew a spirited response from John Kennedy which continued into private emails.
I am delighted he has addressed many of my concerns and even used my Rumsfeld quote. John is definitely a scientist whose integrity I would trust but that does not mean I fully agree with what he has written.
The fact remains that historically very few SST’s were taken and of those very few have any degree of confirmed accuracy. The discussion about canvas or wooden buckets is irrelevant compared to the depth the sample was obtained at, how frequently, and how it was measured.
SSTs – apart from perhaps a very few on very well travelled routes – should be taken with a very large pinch of salt until the 1970s. Justifying the accuracy of the record back to 1850 means far more assumptions have to be made than is warranted by the nature of the data, and much of the data has to be interpolated from very sparse records in adjacent grids.
tonyb.
KevinK, but that is the joy of the anomaly. You are not concerned with the finished product, just the deviation of the parts. So with combined measurements you can get a GMST anomaly repeatable to 0.05C even though the actual product is unknown to +/- 1K and the effective imbalance energy to maybe +/- 17 Wm-2.
Even with that reduction, though, you still have 0.05C uncertainty in the latest data and maybe 0.25C in the early data. If kriging can reduce the whole chain of data uncertainty, that would be a big help.
You still have the Wm-2 problem, which is the bugger. If the GMT is 288K/390Wm-2, the average anomaly should represent 5.44 Wm-2/K. For the oceans the range of error is +/- 0.9 Wm-2/K even if you have perfection with the anomaly. With the atmosphere the error is closer to +3.6 to -1.9 Wm-2/K with a perfect temperature anomaly. This is where the law of large numbers gets interesting. You can have nearly 0.5C of “warming” with no change in energy just by redistributing the energy. So if you krige one data set you still need to krige every calculation based on that data set if you want your finished product to match the accuracy of the data.
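The 288K/390Wm-2 conversion is just the Stefan-Boltzmann relation, and the quoted sensitivity is easy to check (blackbody assumed, which is an idealization):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

T = 288.0                   # global mean surface temperature, K
F = SIGMA * T**4            # blackbody flux at 288 K
dF_dT = 4.0 * SIGMA * T**3  # change in flux per degree of warming

print(f"F     = {F:.1f} W/m^2")            # ~390 W/m^2
print(f"dF/dT = {dF_dT:.2f} W/m^2 per K")  # ~5.42, close to the 5.44 quoted
```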
KevinK: The “law of large numbers” does not say say that multiple measurements from independent instruments will yield a better accuracy, sorry but that is an incorrect interpretation by the climate science community.
You are quoting the Law of Large Numbers, but the authors are quoting the Central Limit Theorem. Your critique is irrelevant to the problem at hand.
Kevin’s premise was that thermometers need be neither independent nor identically distributed. Since those are the requirements for the classical Central Limit Theorem, Kevin can justifiably infer from his premise the stronger result that the Central Limit Theorem does not apply, even if he only made a weaker claim.
I would instead question Kevin’s premise. If the temperature records starting in 1850 were so systematically biased by inferior 19th century thermometers as to be meaningless, just imagine how much less meaningful the Central England Temperature record must be for the measurements begun in 1659, whose thermometers would be even more systematically biased.
Vaughan Pratt
As the most scrutinised temperature record in the world, CET has been cross-related to a number of local instrumental records to compile it and then substantiated by means of other records and observations. It bears no relation to most data sets which were assembled and then received limited cross referencing. The exceptions include the 7 long temperature records in Europe examined by Phil Jones amongst others and funded by the EU. If you read the resultant book you will see exactly why the historic records are no more than a guide and need cross-referencing.
See Manley for the compilation of the monthly CET record to 1659 and Parker for the Daily record to 1772.
I had the great pleasure of meeting David Parker when I went to the Met office a couple of weeks ago to discuss CET and my own reconstruction of it, currently from 1659 to 1538. I will be there again next week to assemble more information to try to find the transition dates between the LIA and MWP.
tonyb
Doesn’t the Lindeberg-Feller CLT allow for independent but not identically distributed r.v.s?…The r.v.s have to satisfy the Lindeberg condition, but I thought that was virtually guaranteed if the r.v.s are bounded. The rate of convergence can be slower than root n, but you still get convergence to a Normal.
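A quick simulation of the Lindeberg-Feller setting NW describes – independent but not identically distributed, bounded variables – showing the standardized sum behaving like a Gaussian (illustrative numbers only):

```python
import numpy as np

# 200 independent uniforms, each with its own (bounded) width, so they are
# NOT identically distributed; the Lindeberg condition holds for bounded
# variables like these. Their standardized sum should be close to normal.
rng = np.random.default_rng(7)
widths = rng.uniform(0.5, 3.0, 200)                       # one width per variable
samples = rng.uniform(-1.0, 1.0, (50_000, 200)) * widths  # zero-mean rows

total = samples.sum(axis=1)
z = (total - total.mean()) / total.std()
print(f"P(Z > 2) = {np.mean(z > 2):.4f}  (standard normal: 0.0228)")
```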
Tony,
plus one, (shucks you gave me the franchise fer plus ones,)
fer yr observ-ay-shun on cross referencin’, the anti- doat ter
con-firm-ay–shun bias. )
Context’s the thing whereby
we may unearth the problem
situa-shun of the king, (and troops.)
Situ-ay-shun analysis is able ter
transcend the myopia
of point of view and the
opacity of time and space.
Ref / me serf Sixth Edi-shun ‘History’s Chequered History.’
Beth-the-serf.
@NW: Doesn’t the Lindeberg-Feller CLT allow for independent but not identically distributed r.v.s?
Yes (and that’s (a) why I wrote “classical CLT” and (b) “and” instead of “or” between i. and i.d.), but good luck getting Kevin to agree that thermometers are independent. This seems unlikely for thermometers made by the same manufacturer, and also for thermometers based on the same principle.
My counterargument would be that thermometer manufacturers have had several centuries to learn how to control whatever biases result from either the principle or the manufacturing process. 1850 is not all that long ago in the history of precise measurement, see e.g. [[marine chronometer]] for the lengths people were going to in 1750.
@climatereason: It bears no relation to most data sets which were assembled and then received limited cross referencing
Quite so, Tony, and I wasn’t claiming otherwise. I was addressing Kevin’s concern about the possibility of biases in the measuring equipment, which would apply even more to two centuries earlier. Had he complained instead about the lack of cross-referencing of sea temperatures I wouldn’t have objected.
(For [[marine chronometer]] read marine chronometer. I forgot what forum I was in. :) )
Hi Kevin,
Your critique of random errors is fair – not all errors are random and independent – but my paper discusses other kinds of errors like those you mention: systematic errors and pervasive systematic errors. Most of the work on uncertainty in observational data sets is involved with trying to understand the systematic errors, particularly those varying slowly with time.
I’m intrigued by your examples though. In the block example where each block is an inch long with an eighth of an inch uncertainty, I would have thought that the uncertainty in the total length of one hundred inches would be around one and a quarter inches i.e. about ten times the uncertainty on a single block. How did you arrive at the +/-0.01″ answer?
Cheers, John
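John Kennedy’s arithmetic is straightforward to confirm by simulation, treating the ±1/8 inch as an independent standard uncertainty on each block:

```python
import numpy as np

# 100 blocks, each nominally 1 inch with an independent error of standard
# deviation 1/8 inch. Independent errors add in quadrature, so the stack's
# uncertainty is sqrt(100) * 1/8 = 1.25 inches, not +/- 0.01 inch.
rng = np.random.default_rng(3)
stacks = (1.0 + rng.normal(0.0, 0.125, (100_000, 100))).sum(axis=1)

print(f"mean length = {stacks.mean():.3f} in")  # ~100.000
print(f"std dev     = {stacks.std():.3f} in")   # ~1.250
```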
Hi Dr. Kennedy
Paper ‘Reassessing biases and other uncertainties…’ is an excellent work, but then I am someone with a rather unconventional approach to these matters.
One could forever argue about 1/10 of a degree C in the N. A. SST, but it appears to me that the general decadal trend in the NOAA’s AMO database is a good representation of the reality.
Why?
Well, the atmospheric pressure at Stykkisholmur/Reykjavik has been (accurately) measured from the early decades of the 1800s; it matches the AMO trends closely (see LINK), and furthermore it could be taken as a decadal precursor to the AMO.
After looking into the details, data, and methodology of the Cowtan & Way paper a bit more, I am persuaded by their approach. It does seem consistent and the results reasonable. Because of the lack of data precisely in the region of the planet where tropospheric warming has been the greatest, i.e. the Arctic, some of the “pause” was exaggerated – that is, there was not as large a pause. But I think it is accurate to say that the rate of growth in tropospheric temperatures certainly moderated back to a mean that does reflect the influence of the many factors being researched elsewhere: namely, a reduction in the rate of flow of energy from ocean to atmosphere (the negative PDO effect), a reduction in overall solar output, and a moderate increase in natural aerosols from an overall slight uptick in volcanic activity. What this gets at is that some influence from a positive PDO (and AMO) during the 1975-2000 period should be recognized, but even filtering this out, we see underlying warming of the troposphere in the range of 0.14C per decade globally from GH increases even during the so-called pause, when factoring in the even greater warming of the Arctic. But of course, those of us who think that the 0.5 x 10^22 Joules per year added to the ocean down to 2000m consistently for the past 40+ years, without a “pause” at all, vastly outweighs the rather fickle and far smaller energy changes in the troposphere anyway, find the lack of as huge a pause in the troposphere not all that shocking. These few tenths of a degree in the troposphere one way or another are a tiny fraction of the energy the ocean has been stockpiling away.
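For scale, the quoted 0.5 x 10^22 J per year converts to a fairly modest average flux (standard round numbers for areas and year length; the conversion, not the figure, is the point):

```python
SECONDS_PER_YEAR = 3.156e7
EARTH_AREA = 5.1e14  # m^2, whole globe
OCEAN_AREA = 3.6e14  # m^2, ocean surface

heating = 0.5e22     # J per year, as quoted above

print(f"{heating / (SECONDS_PER_YEAR * EARTH_AREA):.2f} W/m^2 over the globe")
print(f"{heating / (SECONDS_PER_YEAR * OCEAN_AREA):.2f} W/m^2 over the ocean")
# ~0.31 W/m^2 globally, ~0.44 W/m^2 over the ocean surface alone.
```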
Gates – “But of course, those of us who think that the 0.5 x 10^22 Joules per year added to the ocean down to 2000m consistently for the past 40+ years, without a ‘pause’ at all, vastly outweighs the rather fickle and far smaller energy changes in the troposphere anyway, find the lack of as huge a pause in the troposphere not all that shocking.”
With oscillations for everything more numerous than Miley’s twerks, what evidence is there that, for the last 40 years, we are not coming out of an oscillation of lower energy down to 2000m, simply returning to previously high levels through an as-yet-undiscovered oscillation? What data exists to falsify the hypothesis that high levels of energy existed hundreds of years ago that equate to those levels now?
Dennis,
You can have any alternative hypotheses you want about ocean warming, but will it fit the facts as well as the rapid increase in GH Gases does? The oceans have been both warming rapidly and absorbing CO2, exactly as would be expected with the ongoing eruption of the human carbon volcano:
http://phys.org/news/2013-11-scientists-hot-sour-breathless-oceans.html
Massive amounts of carbon are being moved from lithosphere to atmosphere and hydrosphere by human activity. The results of this movement are becoming increasingly obvious.
Gates – Are you not answering my question because you don’t want to, or because you don’t know? The question was whether this warming is part of natural variability and part of an unknown oscillation of unknown periodicity. I have heard all the other CO2 stuff before. If you don’t know the answer, that is alright. It was not a trick question.
Dennis
Gates is not answering your question because he does not know the answer.
He simply ASS-U-MEs that since CO2 is rising and global surface temperature is NOT rising, there must be some “missing heat” somewhere.
And (once the team “corrected” the initial ARGO data, which showed cooling of the upper ocean) ARGO now shows very slight warming (1.4×10^22 joules or 0.05C over the past decade).
So, being a bit less “skeptical” than he claims, he gloms onto this data to explain the “missing heat”.
The fact that the HadSST3 sea surface temperature cooled slightly over this period, doesn’t bother him at all.
And, of course, the postulated ocean warming is projected back to the past 50-odd years (despite the fact that there are no meaningful data to support this) and it’s all because of GH warming.
The whole story is becoming more bizarre day-by-day.
Max
Max – Thanks. I suspected as much but didn’t want to embarrass him. They all have their own pet theories until you probe a little and find out their real understanding is as thin as chiffon. I think it is a legitimate question and goes to the issue of whether the increase in energy is unprecedented.
Max is crazy. A scientist finds out why. They went looking to find the missing heat. It would have been monumental scientific incompetence not to go looking for it.
This attempt to find the missing heat reminds me of separation anxiety.
You’re an accountant. If you have a journal that indicates deposits should be 10,000 dewdews, and the bank says only 7,000 dewdews were deposited, do you just adjust to the bank, or do you find out how many, if any, dewdews actually should have been deposited?
JCH
Your accounting example sucks.
Here’s another one:
– You have a hypothesis.
– It isn’t working out that way in real life.
– So, instead of revising your hypothesis, you try to find some way to keep it alive.
The above example obviously also sucks.
The “truth” is somewhere in between.
Max
It’s best when the bank statements and the journals capture every transaction. I’d prove each deposit in the register against the bank statement. 9 times out of 10 the mistake is going to be in the register. And in that case I’d adjust the journal, given sufficient confidence in the bank statement. The reconciliation of the register to the bank is just another case of doing things twice, like adding a column of numbers twice instead of once. Accountants have it easy compared to climate scientists. We can look at everything in the system. Even with large systems, say in use by General Mills, every transaction is verifiable and associated with the individual who authorized it. The amount of assumptions used in accounting is limited compared to climate science. It seems that accounting in climate science is like the Wild West, with many ways of doing it, as in the Cowtan and Way paper, with so many transactions missing and various approaches to inferring them. As these approaches evolve they hopefully move towards a standard. But lacking agreed-upon standards we get varying answers on total heat content (Balance Sheet) and gains and losses (Income Statement).
We have a poster here who emphasizes TOA data from the satellites. With accurate data, that’s an Income Statement, but there are limits. In the long run, everything passes through the TOA. Everything going on below the TOA is like intercompany transactions. If the IRS watched one thing going on with the climate, they’d be watching the TOA. Once TOA gains and losses are accurate enough, we’ll know reasonably well about changes in total heat content.
Stephens et al (2013) write about an imbalance at the TOA of 0.6 watts/meter^2 with an estimated uncertainty of +/- 0.4 watts/meter^2. Using their SW in of about 340 watts, minus the 100 watts apparently immediately reflected back out, to get to the smaller number of about 240 in order to emphasize any imbalance, we have 0.6 / 240 watts, which I think is a 0.25% net gain above neutral. Flat atmospheric temperatures, if true, would say that that gain is going into the oceans, or perhaps melting ice as well. Anyway, the point is that with solid TOA data, some inferences can be made that some will be comfortable with.
@manacker:
I have just seen that you made the following assertion regarding the data on which the ocean warming over the past 50 years is diagnosed:
You are totally making this up. What scientific publication by whom uses backward projection of the recent data to derive the ocean warming of the past 50 years, which is being used as evidence for the ocean warming (e.g. in the IPCC report)? Name it, please.
Another strange day in climate science with just one more of those “corrections” which ALWAYS only go upwards.
It may be asked why nobody showed comparable interest in the GISS dataset, where land data is extrapolated over the Arctic ocean, thus increasing trends, because land is known to warm faster than the seas.
Or why there is still no breakthrough in the Urban Heat / micro-siting adjustment issue, despite several authors having published that this effect may have inflated land temperatures by a factor of 2.
Now we have the strange situation of 2 data sets (HadCRUT4 and UAH) showing almost no warming separately, but an increased trend when combined.
Worse, surface-based measurement trends are now even higher than satellite data trends, despite basic physics saying they should be lower, much lower, by a whopping factor of about 1.5.
The best conclusion may just be to stick with the satellite data (perhaps RSS may now be better than UAH after the instrument failure) to evade all the issues of data and sampling quality on the surface.
It does not change anything in the overall picture though.
Climate models still fail almost everywhere, and Rosenthal et al 2013 have just demonstrated in their landmark study that ocean heat content is near the very low end of the last 10000 years, and that it would take about 400 years just to recapture the temperatures of the Medieval Warm Period, and only if warming continues.
Well the authors are addressing a real problem–characterizing a central metric for climate change. While much is behind the pay-wall they appear to implement an established methodology, include imputation of missing values, cross-validation, and they have results. The methodology can be reviewed and critiqued, the results can/will inform the observation process, and future testing moving beyond the present cross-validation in theory can be done. No climate models or their results are directly involved here.
From this perspective then, what warrants “the best conclusion may just be to stick with the satellite data”? For now, where is the beef? Who cares about comparable interest in GISS? Irrelevant here. The paper is ultimately at its core a methodology paper exploring bias in the estimation process under the condition of missing data. It seems a pretty responsible thing to do.
Nothing changes? Sure it does. The work is incremental unless we get a black swan. But you can’t anticipate those.
– The best conclusion may just be to stick with the satellite data
You also have NOAA / Reynolds SST, with a glorious 16-year pause – not seen in the models. It may need some correction, though, if we think about it. ;)
Yeah, yeah, Rosenthal. Yeah, yeah, Marcott. We are at the cold end of the Holocene and may have already slipped into the deep ice chasm of glaciation but for the carbon steel dioxide crampons bitterly clinging to the wall of the abyss.
===============
It just takes hundreds of years to warm and cool oceans by a notable amount and this agrees very well with the tiny increases in ocean heat content we measure today.
Rosenthal makes perfect sense, if you agree that the MWP and previous warm periods and the Little Ice Age lasted for hundreds of years. Then it is just a logical consequence.
Beyond that, the even higher values in the early/mid holocene make sense as well, because the flow of heat to the Arctic was still blocked at that time leading to this accumulation of heat.
In my view, Rosenthal should be the poster chart of climate science, filling the gap after the Hockey Stick has been abandoned.
http://climateaudit.files.wordpress.com/2013/11/rosenthal-2013-figure-2c-annotated.png?w=760&h=520
“Both methods provide superior results than excluding the unsampled regions, with the hybrid method showing particular skill around the regions where no observations are available.”
How precisely does one know what method better estimates temperatures where we have no measurements?
Cross-validation?
Cross-validation with what? Testing against actual measurements in a different area with completely different climate does not seem to me to provide any basis for confidence.
GaryM, of course one does not precisely know for sure, but this is the fate of people who actually count white swans, rather than simply asserting that they are all white. David Hume talked about all this a long time ago. As mwgrant says, they use a holdout sample to tune the predictive model, and then they go for what’s unknown. No, it’s not perfect, but that’s life in the real inductive world.
Also, the people who are hailing this paper as “disappearing the pause” don’t understand the paper. In the paper, there is NO trend estimate over the last 16-17 years that exceeds twice the standard error of the estimate.
GaryM, to elaborate a bit… What they do is this. They estimate a model to predict observed temperatures in “the Border cells”…cells where there are direct measurements that are around the edges of the areas (poles and parts of Africa) where there are no direct observations. They statistically tune the model to achieve a pretty low average prediction error (small bias) in those border cells, without using those border cells in the estimation…in fact, without using those border cells or any cells within 1700 km of those border cells. This seems pretty reasonable.
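A schematic of that procedure – hold out the border cells, exclude everything within 1700 km, predict, and score the error. Inverse-distance weighting on a flat toy grid stands in here for the paper’s actual hybrid method:

```python
import numpy as np

def holdout_error(coords, values, i, exclusion_km=1700.0):
    """Predict cell i using only cells farther than exclusion_km away."""
    d = np.linalg.norm(coords - coords[i], axis=1)  # distances, km
    far = d > exclusion_km
    if not far.any():
        return np.nan
    w = 1.0 / d[far] ** 2                           # inverse-distance weights
    return np.sum(w * values[far]) / w.sum() - values[i]

# Toy network: 300 cells on a flat 8000 km square with a smooth field + noise.
rng = np.random.default_rng(5)
coords = rng.uniform(0, 8000, (300, 2))
values = np.sin(coords[:, 0] / 2000) + rng.normal(0, 0.1, 300)

errors = np.array([holdout_error(coords, values, i) for i in range(300)])
print(f"mean bias = {np.nanmean(errors):+.3f}, "
      f"RMSE = {np.sqrt(np.nanmean(errors ** 2)):.3f}")
```

A small mean bias over the held-out cells is the tuning criterion NW describes; how far that licenses extrapolation into genuinely unobserved regions is what the thread goes on to dispute.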
this is the fate of people who actually count white swans, rather than simply asserting that they are all white.
If they’re holding a watch they may just be estimating when the next one will leave.
Agreed, this has been my main question.
Between my house, 2 local reporting stations (one a couple miles east and the other a couple miles west of my house), and the larger airport 30 miles west, none generally report the same temp.
And just the line that separates Arctic and Tropic air masses moving north or south is going to change the average temperature; if you’re only estimating where that line is, what’s the error margin on something like that?
@GaryM
“Cross-validation with what?”
Best shot, as it’s been a while… addressing cross-validation, with more general comments on handling error in geostatistics.
0.) Itself :O)
As NW noted, the approach taken in geostatistics and the cross-validation approach indicated by Cowtan and Way are in the spirit of the ‘holdout’ or jack-knife.
1.) The authors say enough above to prime the pump on cross-validation. As Mosher pointed out to me earlier (I had missed it) some arctic buoy data were employed–maybe they are limited in quantity at this time but they were used and the use is apparently documented [Way comment]. One author (Way) in a comment response has indicated: ‘Hey this cross-validation is important to our work. Go crawl over it, through it and under it’ (My wording). It is a pretty clear message/request/invitation.
The approach taken seems to be a logical line of development from the perspective of applying geostatistical techniques. This is important because error analysis weighs heavily in those techniques and offers prospects not found in other commonly employed interpolation/estimation schemes [below]. Need more data? Of course. But after one gets that data one still has to process it and processing is really what the paper is about. I view side-bar implications from interim processing such as reported as potentially useful and natural part of an evolving process in which shelf-life of ideas/implications is always lurking in the background. Take note and move on. Not everything is answered at once.
2.) Kriging is more than an interpolation scheme. Local and global error estimation is at its core. That is one big reason why it has proven useful for spatial analysis in the earth sciences. One has to think beyond just estimating the value of a variable at a point, and estimation with error analysis integrated part and parcel is plain useful.
In many applications of kriging, e.g. hydrology, mining, a typical practice has been to use the statistical (variogram/kriging) model to estimate the value at each datum location using the data less that point’s value. It is important to note that unlike most interpolation techniques used for spatial estimation kriging also estimates the error at a/each location where applied.* Thus in the cross validation phase one can build a picture (maps, diagnostics) of the estimated performance of the model over and in the context of the area under study.
Of course, since kriging produces predictions of local estimation errors, one can produce error maps for the kriged entity. This can be quite useful in designing future ‘sampling’ programs (use of virtual data in unsampled areas is also a neat costing trick), and for confidence bands and/or surfaces for mapped temperature/anomaly contour lines.
4.) There is a lot of speculation without the paper in hand, but that is the breaks. In commenting I recognize the limitation, but prefer to view the glass as half full. The authors’ comments here are certainly helpful.
5.) BTW General caveat–an extra precaution when looking at any contoured results: all spatial estimation techniques have tendencies for characteristic anomalies that show up in the final product. Practiced eyes are needed.
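For the curious, ordinary kriging plus the leave-one-out check mwgrant describes fits in a few dozen lines. An exponential covariance with made-up parameters stands in for a variogram model that real applications would fit to the data:

```python
import numpy as np

def krige(x_obs, z_obs, x0, sill=1.0, range_km=1500.0):
    """Ordinary kriging estimate and error variance at x0 (covariance form)."""
    cov = lambda h: sill * np.exp(-h / range_km)   # stand-in covariance model
    n = len(x_obs)
    h = np.linalg.norm(x_obs[:, None, :] - x_obs[None, :, :], axis=2)
    # Kriging system: covariances plus a Lagrange row enforcing unbiasedness.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(h) + 1e-8 * np.eye(n)          # tiny nugget for stability
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(x_obs - x0, axis=1))
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    return w @ z_obs, sill - w @ b[:n] - mu        # estimate, kriging variance

rng = np.random.default_rng(11)
pts = rng.uniform(0, 5000, (60, 2))                    # toy coordinates, km
z = np.sin(pts[:, 0] / 1200) + rng.normal(0, 0.1, 60)  # toy anomalies

# Local error estimate at one point, then classic leave-one-out validation.
est, kvar = krige(pts[1:], z[1:], pts[0])
print(f"point 0: {est:+.3f} +/- {np.sqrt(max(kvar, 0.0)):.3f} (actual {z[0]:+.3f})")

errs = [krige(np.delete(pts, i, 0), np.delete(z, i), pts[i])[0] - z[i]
        for i in range(len(z))]
print(f"LOO RMSE = {np.sqrt(np.mean(np.square(errs))):.3f}")
```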
NW, mwgrant,
I think I understand what they did, at least as well as a (statistical) layman can. My question was more rhetorical, actually. I was pointing out the fact that they are verifying one set of assumptions by testing them against another. Their assumption is that they can cross-validate their model using data from an area with a totally different climate.
I think it’s nonsense.
You can make educated guesses all day long. And you may have more confidence in one kind of guess than another. But to claim such precision based on a method depending on so many unverifiable assumptions is ludicrous, in the context of a push for the public policy of decarbonization.
The billions being spent on “climate science” should be being spent on getting actual data. Instead, we are funding an army of “scientists” who use statistics in various forms to “analyze” other people’s data.
If you actually think the entire population is at risk from global warming, that is a stupid way to do business. But hey, at least you get your funding and don’t have to even leave your office while telling the rest of the world what the average temperature is in remote regions of the antarctic (and deep sea) – TO WITHIN TENTHS OF A DEGREE.
I ask again. Does anybody really believe this stuff? (“You can keep your massaged temperature reports if you like them. Period!”)
GaryM, Why don’t you ask Cowtan and Way how much dedicated climate science funding they got? And they don’t even live in your country so whose tax money are you complaining about?
WHUT,
My last comment was not just about them specifically. It was about your whole industry – a black hole of taxpayers’ money throughout the developed world.
“Why don’t you ask Cowtan and Way how much dedicated climate science funding they got?”
As to the SS contributors funding, this is what they say about their work on the SS site:
“Earlier in the year, Skeptical Science ran an appeal to fund the publication of the Cook et al Consensus Project paper. The required funds were raised in less than a day, a powerful example of citizen-science in action. Our new paper ‘Coverage bias in the HadCRUT4 temperature record’ is somewhat different from the consensus paper: it is not a Skeptical Science project, and the primary audience are the users and providers of global surface temperature data.
…
As a spare time project neither Robert nor myself have academic funds which can be legitimately contributed to making this paper open access. In the light of your generosity last time, we would like to ask you to help crowd-fund making our paper open-access and freely available to the general public.”
Unless I completely misread the synopsis of their paper above, their work is a reanalysis of government-funded data, which might well have been done on their own time. I do wonder who paid for the computers they used to do this yeoman’s work in favor of government control of the world energy economy.
Not to mention, finding out they are members of the SS troop of Klimate Keystone Kops, makes me wonder how anyone is surprised that they “found” that warming was greater (where no one has measured it) than was thought before.
Now that I read it, their comment that “neither Robert nor myself have academic funds which can be legitimately contributed to making this paper open access” raises the question of what “legitimate academic funds” (again, dedicated or otherwise) were used in doing this bit of Skeptical Science propaganda.
The more I think about the funding issue WHUT raised, the curiouser I become.
“Our new paper ‘Coverage bias in the HadCRUT4 temperature record’ is somewhat different from the consensus paper: it is not a Skeptical Science project, and the primary audience are the users and providers of global surface temperature data.”
It’s not a Skeptical Science funded project, and its intended audience are the “users and providers” of the temperature reports they have reanalyzed.
So now I am curious how this was funded. Was this back door funded in some way by NASA and UEA because they are unhappy with their own results?
Who paid for the computer time and other expenses of this piece of “citizen science”?
GaryM, “So now I am curious how this was funded. Was this back door funded in some way by NASA and UEA because they are unhappy with their own results?”
Can’t people just do stuff for the challenge?
captdallas,
Absolutely. And if they funded their work, whatever it might have cost, more power to them.
In fact, even if they were funded by NASA and UEA directly, for the express purpose of finding what they “found”, that would not make their analysis wrong (I think there are plenty of other reasons to make that argument).
But given the politicization of the debate, I think it is fair on all sides to at least know about funding.
In litigation, the experts of both sides are paid. That does not disqualify them. But the jury is entitled to know who paid, and how much, to take that into account as a factor in evaluating the testimony.
GaryM,
Talking about funding, there is another recent and equally interesting peer-reviewed climate science paper, written by Caldeira and Myhrvold [1]. I blogged about it the other day:
http://contextearth.com/2013/11/13/simple-models-of-forced-warming/
Ask Myhrvold where he gets his funding — THE Nathan Myhrvold, founder of Microsoft Research and likely a billionaire through his patent portfolio company Intellectual Ventures.
I don’t think he needs any outside funding. Pretty cool, eh?
BTW, I think their paper is spot on.
[1] K. Caldeira and N. Myhrvold, “Projections of the pace of warming following an abrupt increase in atmospheric carbon dioxide concentration,” Environmental Research Letters, vol. 8, no. 3, p. 034039, 2013.
Some of this geoengineering funding is coming from Bill Gates.
Is that OK with you, that billionaires set the direction of research?
Imagine if banking used data in this manner.
You would have never ending QE.
Never ending QE? Let’s hope so.
Think of the Grandchirren.
==========
Oh that’s right, with that and derivatives, they do use data in that manner!
http://www.aljazeera.com/indepth/features/2013/03/201332610025946947.html
A comment or ten on observational data. I have a machine that does BP. And pulses. I am very happy with the pulse measurement. It is always accurate; I trust it implicitly.
BP on the other hand is inherently unreliable. It is dependent on the placing of the cuff over the artery in the same place every time. It is dependent on the size of the arm and the size of the cuff being used. It changes with the time of day, minute by minute, and with the stress levels of the patient and of the measurer. Mechanical measurements are technically more precise than human measurements, but when they are wrong they are extremely wrong (high or low).
Fudge factors creep in all the time, with winding down to zeros and 5s, and with whether the patient needs to have good levels for the BP trial drug or not.
This paper has all the attributes of a good BP trial paper.
The data all fits the assumptions made by previous models.
It is a confirmational paper par excellence with not one iota of doubt allowed to creep in.
I feel very sorry for Steve, it is very tough when people you have worked with use methods that you would like to agree with produce results like this.
Gee , they even produced one result that agreed with his!
Random errors?, Systemic errors?
Unknown unknowns?
But when all the changes go one way remember the Ponzi schemes Steve.
By the way, for all the people with BP problems out there: high BP could kill you in 30 years, but you can afford to wait and check it out with minimal risk before taking BP medication, which is wonderful stuff but can have a lot of nasty side effects, even worse if not needed.
angech: “BP medication, which is wonderful stuff but can have a lot of nasty side effects, even worse if not needed.”
That should be posted over every clinic door. Borderline high normal is definitely not worth the risk.
Borderline high normal is definitely not worth the risk.
My feeling exactly (140/90) before I had a heart attack in 1992. After my quintuple bypass I decided to follow my cardiologist’s advice to take the medication. The only side effect I’ve noticed after 20 years is a big reduction in BP (typically 115/70). My heart muscle remains damaged from the attack however, which might not have happened if I’d followed the earlier advice to take the medication. How was I to know? I didn’t want to be on medication the rest of my life, but that can be a self-fulfilling prophecy if you don’t survive the attack. YMMV.
Mine was a little lower, 135/86, but they wanted me to start anyway even with a good EKG. Turns out that my slight elevation was due to chronic dehydration, living in the tropics and all, and the HCTZ water pill aggravated that into a remarkably painful DVT. At the same time a friend took a required physical and they started him on BP meds with HCTZ so he could meet the training program BP guidelines. He lost all his teeth due to chronic dehydration and tended to pass out a lot.
So I guess it is about a toss up.
All the talk of how much the oceans have warmed and how accurate the past data is, is rather secondary: they are in trouble, today, and if they are in trouble, we are in trouble:
http://phys.org/news/2013-11-scientists-hot-sour-breathless-oceans.html
These inconvenient facts do not sit well with those who think humans can have no significant effect on planet Earth. If the facts related to the slow human-caused death of the ocean don’t mesh with your meta-memeplex, perhaps you need to do some house cleaning.
It doesn’t seem consistent to me. More upwelling causing more mixing is preventing the Pacific from warming; more stratification with less mixing is causing loss of oxygen and lower pH.
Steven,
You’ve got your physics upside down on this one. Nothing is causing the Pacific to not warm, because it is warming, along with the rest of the global ocean. Where did you get this “more upwelling” idea?
R. Gates, probably just some silly rumor.
R Gates,
I was a little worried about hot sour breathless oceans.
I was initially concerned that 540 “international scientists” had apparently contributed to an unidentified UN report – 26 pages long! According to a journalist, that is.
No references, vague references to a theory that squid might migrate along with changes to ocean conditions, and a smattering of “it’s really, really, bad” future facts.
It is obvious that if the atmosphere, aquasphere, lithosphere, and so on behave chaotically, then at any given time extremely small inputs from human activity may result in large excursions of any given parameter.
This is called “change”. Many people don’t like “change”.
If you are really concerned, you might like to form a society for the abolition of “change”, and agitate for laws against “change”.
Natural “change” has managed to wipe out more than 99% of all species that have ever existed on Earth, without our help. Other than exterminating the human race, I can’t really see a way of not causing “change”. Laws don’t seem to work. People seem to like water, food, shelter, electricity, etc.
But good luck anyway. You’ll need it.
Live well and prosper,
Mike Flynn.
R. Gates
You write (rather “unskeptically”):
Citing a scaremongering blurb by Seth Borenstein about “hot, sour, breathless oceans”.
Such rubbish, Gates – I’m shocked that you, as a self-proclaimed “skeptical warmist” would cite such BS.
The global ocean is supposed to have warmed by 0.05C over the past decade (since ARGO was installed), after correcting raw ARGO data that showed net cooling.
And the pH is still very much in the basic (not “sour”) range, due to its enormous buffering capacity.
And “breathless”?
Gimme a break, Gates.
If you want to call yourself a “skeptical warmist”, don’t cite every scaremongering rubbish blurb out there.
It makes you look silly (even if you aren’t)
Max
Gates – Max noticed that you are dodging my question. He thinks it is because you don’t know the answer. I guess I will have to wait for a real expert to address the issue I raised. I am going to wait until The Chief returns.
“Surface warming has slowed somewhat, in large part due to more overall global warming being transferred to the oceans over the past decade. However, these sorts of temporary surface warming slowdowns (and speed-ups) occur on a regular basis due to short-term natural influences.”
I have seen similar comments many times but I find myself perplexed.
What exactly are the physical processes that would heat the atmosphere for 30-odd years and then suddenly change to stop heating the atmosphere and heat the oceans instead?
I find this extremely puzzling. This is, I think a very serious question but I have not seen the mechanism explained.
So what is the cause of the change?
Hugh,
The net flow of energy is always from ocean to atmosphere. You understand that part, right? As the atmosphere warms, the net flow of heat from ocean to space slows down slightly, allowing the oceans to warm; this in turn may cause the atmosphere to not warm as fast (though it will still warm), but that really depends on how fast GH gas concentrations continue to rise.
OK, that makes sense. But wouldn’t this manifest itself as a slow decrease in the rate of heating rather than the abrupt change we have seen?
Addendum … if this is true, wouldn’t it also mean that the oceans, which have a heat capacity many orders of magnitude greater than the atmosphere’s, act to dampen the rate of temperature increase? My understanding (and it’s been 35 years since I took physics as an undergraduate) is that temperature is a state variable and energy has a magnitude (I know the terminology is not right here, but as I said it’s a long time since I studied this) … Couldn’t most of the heat go into the oceans and mitigate the effects of global warming?
R Gates,
You wrote : –
” . . . the net flow of heat from ocean to space slows down slightly, allowing the oceans to warm . . .”.
Absolute nonsense. A body experiencing a net loss of energy does not warm. If the rate of loss slows, the rate at which the temperature falls also slows.
You cannot warm anything by reducing the rate at which it cools. If you don’t believe me, try it.
In typical Warmist fashion, you will no doubt claim that what you wrote really means something different. It doesn’t work anymore.
Try something else!
Live well and prosper,
Mike Flynn.
Thanks Mike, let me have a try at mucking it up.
Gates says the ocean sends heat to the atmosphere, which heats up; obviously the ocean was hot and now the air is too. Heat from ocean into air leaves the ocean cooler.
The hot atmosphere radiates more heat into space and cools down overnight, but the hot ocean pumps more heat into the air, thus cooling down more.
Oh damn, he said the ocean was heating up from its heat loss. I must have it wrong.
You are incredibly wrong about your physics, Mike Flynn. Neither the oceans nor the atmosphere is seeing a net loss of energy. Both are warming, but the oceans are simply warming much faster and by greater amounts, being the primary heat reservoir of the planet.
R. Gates
Try thinking “skeptically” before you answer this.
If the ocean is gaining heat (warming up) since ARGO started in 2003, why is the sea surface temperature losing heat (cooling) over this same period?
Max
R. Gates
Let’s ASS-U-ME that you are right.
The oceans are absorbing the AGW heat that is not being seen in the atmosphere.
So, instead of a significant fraction of a degree global (atmospheric) warming (which could become unpleasant if it continued indefinitely) we have a few thousandths of a degree of ocean warming, disappearing forever into the deep blue sea, where it affects no one.
This is good news, indeed!
Let’s do a quick sanity check on that.
If CO2 levels rise to 650 ppmv by 2100, we would see around 2.2ºC warming from today, using IPCC’s latest mean ECS estimate of 3ºC, assuming the heat all goes into the atmosphere and assuming equilibrium is reached. This equals a forcing of around 7.5 W/m².
But if this warming goes into the ocean instead, it would hardly be noticed.
The mass of the atmosphere is about 5,140,000 Gt and its specific heat is about 1,000 J/kgºC.
The mass of the upper ocean is about 637,000,000 Gt and its specific heat is about 4,000 J/kgºC.
So the same amount of energy would warm the top 2000 meters of ocean by 0.004ºC.
Doesn’t sound like a catastrophe for anyone, Gates, even all those little fishies down there.
Isn’t that good news, Gates?
Try being a bit more skeptical, before you get all alarmed.
Max
R. Gates
That’s the upper 2000m of the ocean, of course.
Max
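Max’s bookkeeping is easy to check in a few lines. A minimal sketch in Python, using the masses and specific heats stated above; the ~395 ppmv present-day CO2 baseline is my assumption for reproducing the 2.2ºC figure, not a number from the comment:

    import math

    # Inputs as stated above (1 Gt = 1e12 kg)
    m_atm = 5.14e6 * 1e12     # kg, mass of the atmosphere
    c_atm = 1000.0            # J/(kg*K), specific heat of air
    m_oc = 6.37e8 * 1e12      # kg, mass of the upper 2000 m of ocean
    c_oc = 4000.0             # J/(kg*K), specific heat of seawater

    # Warming from today at 650 ppmv with ECS = 3 C per doubling,
    # assuming a ~395 ppmv present-day baseline (my assumption)
    dt_atm = 3.0 * math.log(650.0 / 395.0, 2)
    print(round(dt_atm, 1))   # ~2.2 C, as in the comment

    # The same energy routed into the upper ocean instead
    q = m_atm * c_atm * dt_atm            # joules
    print(round(q / (m_oc * c_oc), 4))    # ~0.004 C, as in the comment

As WHT notes just below, heat entering the ocean does not spread uniformly, so this is a check on the averages only, not a physical prediction.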
Mike Flynn: Absolute nonsense. A body experiencing a net loss of energy does not warm. If the rate of loss slows, the rate at which the temperature falls also slows.
You cannot warm anything by reducing the rate at which it cools. If you don’t believe me, try it.
Why is it so hard to understand that the sun warms the earth every day and the earth cools at night; and if you decrease the rate of cooling the net effect is to have a slightly higher temperature result from the daily warming?
What you wrote would be true only if there were not heat input to the earth, an obviously counterfactual conditional..
Max, being a chemical engineer and all, does not understand how thermal diffusion works. The heat content will never uniformly spread throughout the depths; the diffusive source will have a higher temperature than anywhere else. What this means is that the SST and subsurface layers will always maintain a higher temperature than the depths. The 0.004C is a canard by the deceptive ChemE, and he likely in fact knows this but prefers to spread FUD.
-Mike Flynn: Absolute nonsense. A body experiencing a net loss of energy does not warm. If the rate of loss slows, the rate at which the temperature falls also slows.
You cannot warm anything by reducing the rate at which it cools. If you don’t believe me, try it.-
“Why is it so hard to understand that the sun warms the earth every day and the earth cools at night; and if you decrease the rate of cooling the net effect is to have a slightly higher temperature result from the daily warming?”
If Monday the day is 90 and the night is 70, and Tuesday it’s 95, then the night will be warmer than 70. But if Tuesday is 85, then the night could be cooler than 70.
With a body of water there is less variation in temperature. If the average temperature of the upper surface is cooler, then the lower part doesn’t warm as fast.
But if the upper water is being cooled by mixing with lower water, then the lower water warms while the top cools.
So generally, warmer lower water and cooler upper water should tend to indicate an increase in the mixing of the water.
Webby
You are confused again (as usual).
If we take only the top 700m of the ocean, the GH warming by 2100 would be 0.01ºC. (Yawn!)
And, hey, isn’t all that missing heat supposed to be disappearing into the deep blue sea?
How does it get way down there if it doesn’t first warm the upper ocean?
It’s the old shell and pea con game, Webby – now you see it, now you don’t.
Max
Matthew R Marler,
Place an object on the surface of the Earth. As the earth rotates, the object will start absorbing energy from the Sun. As the object is rotated “away” from the Sun, its temperature will commence to drop. This will continue for around 18 hours, depending on latitude, season, local weather conditions and so on.
The object will reach a maximum temperature in unconcentrated sunlight of less than 100 C, regardless of whether the object is in Death Valley or the Libyan desert.
I don’t make the rules. Nature does. The Pantheon in Rome has been absorbing sunlight for around 2000 years. The gelato shop across the road has been absorbing sunlight for around 50 years. The temperature of both is indistinguishable at dawn.
Live well and prosper,
Mike Flynn.
Typical denialist bilge. Just because a pipe is flowing faster doesn’t mean there’s more of what it carries in the middle lengths.
Folks want an example of a skeptic calling out a “skeptic” on BS? Here! Here!
AK,
Pipes, water – more stupid, irrelevant analogies.
You might care to provide a succinct physical explanation telling us all how photons avoid interacting with matter for several hundred meters.
Then after the photons have interacted with water several hundred meters down, explain why the now warmer water remains at depth, rather than becoming less dense and rising towards the surface.
You are talking nonsense, I think. I await your explanation. I am always willing to learn. Please, no more silly analogies or links to Warmist tracts.
Live well and prosper,
Mike Flynn
“A body experiencing a net loss of energy does not warm.”
One of my gripes with the warmists is their claim that they know there is currently a net gain of energy in our climate. They don’t know.
So how precisely do you know that we are currently experiencing a “net loss of energy”? Where are the measurements of energy in and out? Where are the measurements of actual total global climate heat content?
All the talk of measurements, and experimentation, and verifiability used against CAGWers’ claims of warming is equally relevant to anyone claiming they “know” the Earth’s climate as a whole is cooling.
GaryM:
The measurements are here:
http://www.clivar.org/sites/default/files/GSOP/resops/DISCUSSION_II_LOEB.pdf
On page 13.
Placing your money on CERES TOA, you’d have net losses at TOA over the last 13 years, but it’s slight. If atmospheric temps were flat, the oceans are not warming. I don’t know what the error bars are, so maybe they are warming a bit. The source at the link should be acceptable.
GaryM wrote:
Where are the measurements of energy in and out?
Is this a joke?
You aren’t keeping up with the scientific literature. Try Loeb et al., Nature Geoscience 2012; Trenberth et al., BAMS 2009; or “An update on Earth’s energy balance in light of the latest global observations,” Graeme L. Stephens et al., Nature Geoscience 2012: http://www.nature.com/ngeo/journal/v5/n10/full/ngeo1580.html
GaryM,
“Real” scientists such as geophysicists measure heat loss, and then try to figure out where the heat is coming from.
There are differences about both the amount, and the sources. Real scientists admit they have no way of knowing the temperature of the core, the amounts of heat generated as a result of radioactive decay, and all the rest.
Amongst “real” scientists, there doesn’t seem to be any dispute that the Earth is cooling.
I could be wrong.
Live well and prosper,
Mike Flynn.
David Appell,
From the article you cited (which I believe was a subject of a post here at Climate, Etc. previously):
“The combined uncertainty on the net TOA flux determined from CERES is ±4 W/m² (95% confidence), due largely to instrument calibration errors. Thus the sum of current satellite-derived fluxes cannot determine the net TOA radiation imbalance with the accuracy needed to track such small imbalances associated with forced climate change.”
Yes, there are measurements of energy in and out. But not measurements that tell you what the energy balance is.
Like virtually everything else in climate “science”, the energy imbalance is the product of models. It is just (falsely) represented to be the product of precise measurements by satellites.
It’s hard to explain, but changes in surface temperature plus the rate of change of ocean heat content are an independent way of getting at the energy imbalance. The long-term rise in ocean heat content is due to an energy imbalance, which means the surface warming isn’t keeping up with the changing forcing.
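As a rough illustration of that bookkeeping (my round numbers, purely for scale, not measured values): an ocean heat uptake of about 0.5×10²² J per year spread over the Earth’s surface corresponds to a few tenths of a W/m² of imbalance.

    # Back-of-envelope: ocean heat uptake -> implied energy imbalance.
    # The uptake figure is an assumed round number for illustration only.
    dohc_per_year = 0.5e22          # J/yr, assumed ocean heat uptake
    seconds_per_year = 3.156e7
    earth_area = 5.1e14             # m^2, surface area of the Earth
    imbalance = dohc_per_year / seconds_per_year / earth_area
    print(round(imbalance, 2))      # ~0.31 W/m^2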
Jim D,
It’s not hard to explain. You are just describing one of the ways in which the supposed energy balance is inferred. My comment above was in reference to the suggestion that the TOA energy imbalance has been measured by satellite. Everything I have read indicates the instruments involved are not sensitive enough to measure the AGW effect on the difference between incoming and outgoing radiation.
Virtually everything involved in CAGW – observations, models, and paleo – depends on models, statistics and assumptions. Nothing wrong with that, but it seems to be a genetic trait of the species Warmus Advocatus to fudge over the nature of the “science” that underlies their assertions of certainty.
The satellite-derived energy imbalance has bigger error bars than surface/ocean-derived estimates, but they are consistent with each other.
Mike Flynn: Place an object on the surface of the Earth. As the earth rotates, the object will start absorbing energy from the Sun. As the object is rotated “away” from the Sun, its temperature will commence to drop. This will continue for around 18 hours, depending on latitude, season, local weather conditions and so on.
The object will reach a maximum temperature in unconcentrated sunlight of less than 100 C, regardless of whether the object is in Death Valley or the Libyan desert.
I don’t make the rules. Nature does. The Pantheon in Rome has been absorbing sunlight for around 2000 years. The gelato shop across the road has been absorbing sunlight for around 50 years. The temperature of both is indistinguishable at dawn.
All well and good, but that has nothing to do with what you wrote and what I wrote in response.
Mike Flynn, I wish you would drop that – of course you can warm a body (or a system) by reducing the rate at which it cools. This is not debatable!
Mike Flynn can’t help himself. The memeplexes battling in his head are in conflict and so the debris from this battle end up spewing across CE blog posts.
Edim
Ya gotta “reduce the rate at which it cools” to less than a net zero for your statement to be true.
Believe that’s what Mike is saying.
Max
“This is not debatable!” If you think (like normal people) that “to warm” means to increase the temperature of, rather than merely moderate the rate of cooling of, then I agree it needs no debate, because Edim, you’re clearly just plain wrong. Sorry.
Edim is correct in the basic thermodynamics. If the oceans are transferring X+1 w/m^2 of energy to the atmosphere per second, but receiving X w/m^2 per second from solar SW, then the gain in ocean heat content is 1 w/m2, even though they are transferring energy to the atmosphere, and some might call this “cooling” but from a thermodynamics perspective, the transfer of energy, being greater in than out, is not actually cooling. Seems like a great many people can’t quite fathom this basic concept.
Actually, the oceans are receiving X + 1 w/m^2, but transferring out X w/m^2, and thus gaining 1 w/m^2…duh, seems basic math escapes me.
Edim,
I hope you are attempting sarcasm by writing: –
“Mike Flynn, I wish you would drop that – of course you can warm a body (or a system) by reducing by reducing the rate at which it cools. This is not debatable!”
If you are not, may I respectfully suggest you attempt to stop anything at all cooling, by reducing the rate at which it is losing energy.
Unless you can demonstrate a circumstance where losing energy results in an increase in temperature, I win. “Increasing temperature by reducing the rate of cooling” is Warmist nonsense. Make a cup of coffee. Now, stop it cooling by reducing its rate of heat loss. Even if you surround it with a perfect insulator, (physically impossible), its temperature cannot rise.
You may debate it, but neither you nor anybody else can do it. Neither can Nature, it appears.
Live well and prosper,
Mike Flynn.
R. Gates
It appears that there is a typo in your last comment.
You are saying that the net heat into the ocean from the sun = X
And the net heat out of the ocean to the atmosphere = X+1
And yet you claim the ocean is warming by 1?
You cannot be serious.
Max
R. Gates
Thanks for correcting the typo.
(I didn’t think you really meant what you wrote.)
Mike Flynn is correct. You cannot warm an object by reducing the rate at which it is cooling – UNLESS you reduce the cooling rate to less than 0 (in other words, warm the object).
That’s what your corrected comment also shows.
Max
I would think you certainly can warm a system by reducing one of the means by which it cools, over a period of time. Put two pots of water on a stove with the same heat. Put a lid on one pot. The pot with a lid will warm faster than the pot without (all other things being equal).
For the period in which the heat of the lidded pot is higher than that of the un-lidded one, that increased heat over that period of time is caused by the lid, the reduction in cooling.
It does not work without the exterior heat source. But when you are talking about a system, you can cause a rise in net heat, i.e. net warming, by reducing the escape of heat, i.e. cooling.
Put it another way. You are standing in a vertical tube. One pipe is letting water in, another is letting water out. You can raise the level of the water by monkeying with either tube. If you slow the flow out of the outlet pipe too much, and drown, it will be little consolation that only the other pipe was adding water.
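GaryM’s lidded pot is easy to demonstrate numerically. A minimal sketch in Python with made-up units: a body with a constant heat source and a loss rate proportional to its temperature settles at a higher steady temperature when the loss coefficient is reduced, which is all that “warming by reducing the rate of cooling” means here.

    # Steady temperature of a body with constant input and Newtonian cooling.
    # Units are arbitrary; this only illustrates the direction of the effect.
    def steady_temp(q_in=100.0, k_loss=2.0, t_env=0.0, dt=0.01, steps=20000):
        temp = t_env
        for _ in range(steps):
            temp += dt * (q_in - k_loss * (temp - t_env))  # Euler step
        return temp

    print(steady_temp(k_loss=2.0))  # ~50.0
    print(steady_temp(k_loss=1.0))  # ~100.0: halving the loss rate warms it

With q_in set to zero you get Mike Flynn’s coffee cup, which can only cool toward t_env; the argument is really over whether the system has a source term, and for the climate it does: the sun.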
Gary M
You wrote (bold type by me):
manacker,
Thanks for the support.
Obviously, too much time spent reading the Book of Warm rots the brain of the Warmist concerned.
I’m starting to understand the language, I think. Cooling means warming.
Live well and prosper,
Mike Flynn.
Max,
Mike Flynn is nowhere close to being correct on this issue. The ocean is gaining energy faster than it is losing it to the atmosphere. This is simple thermodynamics, can be illustrated by numerous examples, and either Mike is being intentionally obtuse, or he really doesn’t understand basic thermodynamics.
max,
I missed the assumption that the object was experiencing net cooling. I assumed the subject was our climate, which, in my opinion, no one knows whether it is currently experiencing net heating or cooling.
Hugh, you are right – the oceans are buffering the atmosphere from temperature increases as large as we might otherwise see with rising GH gas concentrations. But also remember the natural variability in ocean-to-atmosphere energy transfer caused by things like the ENSO cycle. During El Niños a bit more energy is transferred to the atmosphere, and during La Niñas a little less, though across all cycles and all timeframes it is always very positive, as over 50% of the net energy in the atmosphere at any given time came directly from the ocean, and of course the ocean receives its net energy directly from the sun.
So the Arctic is warming faster than we thought just last week.
It was already warming far faster than models had predicted. Now it makes the models even more wrong about Arctic warming. The models are broken. This puts another nail in the GCM coffin. When do we get around to burying them? How wrong do they have to be before the team admits there is a problem?
” It was already warming far faster than models had predicted. Now it makes the models even more wrong about Arctic warming. ”
Wrong. The models overwarm the Arctic. FOOL.
http://berkeleyearth.org/graphics/model-performance-against-berkeley-earth-data-set#warming-at-the-poles-since-1950
WTF are you talking about? CMIP5 ensemble overestimates global average warming and underestimates Arctic warming.
http://www.nature.com/srep/2013/130327/srep01556/full/srep01556.html
“Most CMIP5 climate models underestimate Arctic winter warming over the past two decades”
This is encyclopedic information, dopey – wickedpedia in particular, which means it’s so undisputed even William Cannoli lets it through.
http://en.wikipedia.org/wiki/Climate_change_in_the_Arctic
“current climate models frequently underestimate the rate of sea ice retreat”
Dipschit.
This is probably the right time to remind you all that a global temperature anomaly is not a very meaningful metric: A temperature increase of 0.1 degrees Celsius in the Arctic desert (especially in winter!) represents far less excess energy than a temperature increase of 0.1 degrees Celsius in a humid tropical forest.
Have there been any attempts at all to measure something like a “global enthalpy anomaly”?
Yes, it is very easy to do that as a first-order check. Add the amount of heat that is entering the ocean (dH) and compare that to the Planck response of SST (dW), and the numbers jibe.
dH + dW(ocean) + dLatent(ocean) ~ dW(land) + dLatent(land)
There is some correction for the latent heat of evaporation, which impacts the lapse rate differently for ocean versus land, as I show in the equation. Latent heat is tricky because whatever cooling it provides at the surface is a warming at higher altitudes when it condenses out.
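For readers wondering what the Planck response term dW amounts to: it is the linearized change in blackbody emission for a small temperature change, roughly 4σT³·dT. An illustrative calculation in Python (my numbers, not WHT’s):

    # Planck response: dW = 4 * sigma * T^3 * dT (linearized Stefan-Boltzmann)
    sigma = 5.67e-8          # W/(m^2*K^4), Stefan-Boltzmann constant
    T = 289.0                # K, rough global-mean surface temperature
    dT = 0.1                 # K, a small warming
    dW = 4.0 * sigma * T**3 * dT
    print(round(dW, 2))      # ~0.55 W/m^2, i.e. ~5.5 W/m^2 per kelvin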
Judith Curry, you assert, based on the graphic at the bottom of your article:
Apparently, you allege a disagreement between measured temperatures and model simulations because we can see in the graphic that the measured temperatures are currently at the boundary of the 95% confidence interval of the model simulation sample. Please correct me if I misunderstand you here.
Does the graphic account for differences between the time variability of the forcings from the climate drivers in the real world and the time variability of the forcings from climate drivers, ideally prescribed after 2005 in the CMIP5 scenarios? This is an important question, because even a perfect model would show a disagreement between the statistical properties of the simulated climate and measurements in such a case.
Anyway, regardless of whether such differences between real-world forcings and CMIP5 forcings have been accounted for in the graphic or not, a 95% confidence interval means that, on average, 1 out of 20 data points of the probability distribution lies outside the interval. Thus, if the measured temperatures and the model simulations have exactly the same probability distribution, i.e., when they agree perfectly in a statistical sense, one out of 20 measured data points must lie at or outside the 95% confidence interval of the model simulation sample on average. This would happen in clusters due to autocorrelation, though. Therefore, I do not see that your assertion is supported by the graphic you show, because the graphic shows only that there are some instances when this happens, as one should expect. Your assertion of a disagreement between measurements and model simulations, based only on the fact that you find an instance, like the current one, where the measured temperature lies at the 95% boundary, rests on logically and statistically fallacious reasoning.
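Jan’s point about the 95% interval can be checked with a toy simulation. A minimal Python/NumPy sketch with synthetic numbers: even when the “observations” are drawn from exactly the same distribution as the “model runs”, about 5% of them fall outside the ensemble’s 95% band (and with autocorrelation those exceedances would cluster, as noted above).

    import numpy as np

    rng = np.random.default_rng(0)
    runs = rng.normal(0.0, 1.0, size=(100, 1000))   # 100 "model runs", 1000 steps
    lo, hi = np.percentile(runs, [2.5, 97.5], axis=0)
    obs = rng.normal(0.0, 1.0, size=1000)           # "observations", same distribution
    print(np.mean((obs < lo) | (obs > hi)))         # ~0.05 despite perfect agreement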
But would ya wanna predict with one?
=========
kim
Not if I had to bet my own money on it.
Max
Or marry it ter yer weather?
Show me one model of the CMIP5 ensemble that is right. Just one will suffice.
John Bills
They’re ALL wrong.
But, taken together, they can predict the next 100 years accurately.
Max
I do not understand your request. What is “a model is right” supposed to mean, exactly?
Are you asking me to present you a climate model that perfectly reproduces Nature?
No, merely one that predicts anything.
=========
Judith Curry writes:
This alleged pause that has never been defined based on precise scientific (statistical) criteria, and that, perhaps, has never been?
The alleged pause is defined with the same data, terms and criteria as the alleged rise. goose.gander.good
Well, if so, then the global surface temperature rise since 1970, or over any longer time period since the 19th century, is real, whereas the alleged pause isn’t. The former is highly statistically significant. In contrast, the alleged pause lacks statistical significance.
Jan Perlwitz
Here are three papers from the Met office on the pause
http://www.metoffice.gov.uk/research/news/recent-pause-in-warming
I was there talking to several of their scientists two weeks ago and they certainly acknowledge the pause, but as yet have not determined its exact cause other than to suggest heat is now entering the ocean rather than staying in the atmosphere. What the mechanism is for that change will no doubt be the subject of more papers
tonyb
I know these publications, but I didn’t find any definition of the alleged pause based on scientific (statistical) criteria in there. Perhaps I missed it and you could point me to it?
Jan
I posted three Met office reports that go into great detail on the pause. THEY admit there is one. Their SCIENTISTS admit there is one. The data presented shows there is one.
Perhaps you can clarify where you disagree with their analysis as I thought that was the job of sceptics?
Just for good measure here is an example of a temperature record-CET- that is actually showing a substantial decline and prompted a full scale meeting at the met office last July to talk about this and the pause
http://www.metoffice.gov.uk/hadobs/hadcet/
As you know, CET is a reasonable but not perfect proxy for Northern Hemisphere temperatures, and does not have the noise introduced by tens of thousands of climate records all doing different things – going up, going down or remaining static.
How significant the pause turns out to be we shall have to wait and see. In the case of CET it reverses some 320 years of steadily rising temperatures and seen in historic context may turn out to be merely another blip in this very long record.
tonyb
tony b and Jan P. Perlwitz
The “pause” in the “globally and annually averaged land and sea surface temperature anomaly” (HadCRUT4) is observed in the HadCRUT4 record.
It is a “physical observation” (warts and all). As such it represents “empirical evidence” that the global surface temperature is not warming at present.
Its “statistical significance” is a subjective premise.
Its “definition based on scientific (statistical) criteria” is an even more nebulous concept.
Max
Ask Dr. Cowtan about the pause. He has been putting out stuff on SkS as KevinC and has provided some interesting models of warming via his Trend Calculator and the simple response function model.
http://diyclimate.x10.mx
This is DIY stuff and it inspired me to work out the CSALT model.
http://ContextEarth.com/context_salt_model/navigate
These models reproduce the pause very readily.
Web
Your link to csalt didn’t work. Please repost.
tonyb
@climatereason:
So, you are asking me to just uncritically accept what other scientists “admit”, at least when it seems to confirm your views, even though it’s just a fait-accompli assumption.
Actually, according to the MET Office, the data show:
“- A wide range of climate quantities continue to show changes. For instance, we have observed a continued decline in Arctic sea ice and a rise in global sea level. These changes are consistent with our understanding of how the climate system responds to increasing atmospheric greenhouse gases.
– Global mean surface temperatures remain high, with the last decade being the warmest on record.
– Although the rate of surface warming appears to have slowed considerably over the most recent decade, such slowing for a decade or so has been seen in the past in observations and is simulated in climate models, where they are temporary events.”
Thus, on the one hand the starting point of the MET Office publications is to take the claimed “pause” as a fait-accompli assumption, but then they come to the conclusion that nothing unusual can be seen in the data, and that climate models simulate similar behavior. Which is obviously not the same as what, e.g., Judith Curry says when she is making her assertions about the “pause”.
If just every temporary wobble in a data series of a climate variable which runs opposite to the longer-term trend and is not even statistically significant is called a “pause”, what’s the point? Making headlines in the Daily Mail?
I do not have a problem with efforts to also understand short-term variability in the system. On the contrary, it’s necessary. I don’t even have a problem with calling some short-term behavior a “pause” as a working term. But many of the claims about the “pause” which are out there, outside the realm of science, are not really about that, are they?
Jan
Good to see you being sceptical. I am often Sceptical of stuff from the Met Office and Nasa as well.
So if there is no ‘pause’ what do YOU think is happening?
tonyb
@manacker:
No explanation for anything comes from pure “physical observation”. There is always an explanatory framework in everyone’s head, which puts observation in a context, through which the observation is interpreted. I state this in this general form, since such a framework isn’t necessarily a scientific one. For instance, it also could be a framework of superstitious beliefs or one of non-scientific prejudices.
A statement like,
“The “pause” in the “globally and annually averaged land and sea surface temperature anomaly” (HadCRUT4) is observed in the HadCRUT4 record.”
cannot be made by you without any presumption in your head about how such a “pause” is recognized in the data, based on which you state your interpretation of the data.
My link to the CSALT model does indeed work.
The pause is very easily explained by an SOI downward trend over the last 10 years.
However, since any SOI bias away from a mean index of zero cannot be sustained, this pause is expected to disappear within a few years.
The other mitigating circumstance is the Cowtan and Way correction. This suggests that the pause is not as flat as we were led to believe.
I will add the C&W correction to the CSALT model as soon as I can get a hold of their data.
Jan P Perlwitz
The “pause” is an observed slowdown or slight reversal of the observed late 20thC warming trend, which lasted almost 30 years.
It is “recognized in the data” by all those thermometer readings (even the ones next to AC exhausts in the summer or heated buildings in the winter), and as such has been (reluctantly?) “recognized” by the Met Office, by James E. Hansen and by others.
Neither the “pause” nor the late 20thC warming cycle can be “explained” with any high certainty, simply because there are still too many unknowns in what makes our climate behave as it does.
This is even more so for the early 20thC warming cycle and the mid-century cycle of slight cooling, both of which lasted around 30 years.
If the “pause” lasts as long as these other cycles, then it becomes as significant.
Max
One could define “pause” in this way, simply as a negative deviation from the average longer-term trend. However, then a “pause” could be observed 50% of the time, even if the longer-term trend stays the same, assuming a symmetric probability distribution of the trends for a given period length, based on which the trends are estimated. It would be foolish not to recognize that something like this is just a feature of the probability distribution.
WebHubTelescope,
Adjusting your CSALT model so soon?
It’s a good thing you didn’t offer an incorrect model to the IPCC, then. They seem to have enough of their own.
If your model can’t even accurately forecast the past, what is it useful for?
Live well and prosper,
Mike Flynn.
If your model can’t even accurately forecast the past, what is it useful for?
You mean hindcast the past like this model does?
http://davidappell.blogspot.com/2013/09/a-useful-paper-on-one-models-results.html
Jan P Perlwitz
Good. So we agree.
The “pause” is real as it is based on physically observed data, we both “recognize” it as such, and we agree that its “statistical significance” is less than that of the late 20thC warming cycle (which lasted over twice as long).
IF it lasts 30 years, however, its “statistical significance” will be the same (even though the amount of observed cooling may be less than the observed warming during the late 20thC warming cycle).
Max
No, not really.
We don’t agree on the pause. I said one could define it as you proposed, and I said that any trend lower than the average trend of the trend distribution would then be a “pause” – for a symmetric distribution, in 50% of all cases. However, I do not accept such a definition, even if it is possible from a purely technical point of view. It doesn’t make any scientific sense to me to claim a “pause” if such a “pause” isn’t statistically distinguishable from the longer-term, statistically significant warming trend, and to claim the presence of a “pause” even if nothing has changed in the system with respect to continuing global warming and the deviation from the average trend was just due to some random short-term fluctuation.
and we agree that its “statistical significance” is less than that of the late 20thC warming cycle (which lasted over twice as long).
You are talking about the warming “cycle” as if it were something from the past. I don’t see any substantial empirical evidence at present for such a claim. The trend is also statistically significant for the most recent 30 years. And why “cycle”? I don’t see just cycles. I see a secular increase. I see some justification for the hypothesis that the secular increase is overlaid with some multi-decadal quasi-cyclic behavior over the course of the 20th century. How much of this is really internally generated, e.g., due to lower-frequency chaotic behavior, or whether the observed pattern is just due to how external forcings (mostly solar, GHG, and aerosols) coincidentally combined, remains to be seen.
If the temperature record of coming years became distinguishable from the currently detectable statistically significant warming trend, based on robust statistical analysis, it would change things.
David Appell,
I’m not sure what use a model that forecasts the past is. I use the term advisedly.
If you believe it has some use, have you considered selling (or giving) it to someone like the IPCC?
Their models seem to be of no use for anything, except to demonstrate they are worthless. Maybe yours is better.
Live well and prosper,
Mike Flynn.
Jan P Perlwitz
Your lengthy waffle surprised me.
I thought you had recognized that the currently observed reversal of the late 20thC warming cycle was real, as have James E. Hansen and the Met Office.
But it appears you are still in denial.
“Tant pis” (as the French say).
Max
I am not aware of any scientific publication or other statement where Jim Hansen or the MET Office are supposed to have “recognized” that we are currently observing a “reversal” of the warming observed during the late 20th century, i.e., where they supposedly have claimed that something has substantially changed in the physical system compared to the last decades of the 20th century. This is especially so since we are talking about the physical process of global warming as one of continuing heat accumulation in the oceans, land, cryosphere, and troposphere due to the radiative disequilibrium coming from increasing greenhouse gases in the atmosphere – a process which involves much more than just the surface temperature record, and for which the surface temperature is not the most important component or indicator.
I think you just have made this up about Jim Hansen and the MET Office.
You mean I do not accept assertions as true just because you make them, without any scientific evidence available to back up such an assertion. If you like to call this being “in denial”, suit yourself.
Jan, “This alleged pause that has never been defined based on precise scientific (statistical) criteria, and that, perhaps, has never been?”
A rose by any name would smell as sweet.
The precise definition of the term “pause”, as I’ve understood it by trying to read between the lines of the climate blogosphere, is any period of duration equal to that from January 1998 to the most recent December inclusive (presently 15 years = 180 months) such that the HadCRUT4 temperature trend of that period is less than +0.5 °C/century.
Since 1967 there has been only one pause by this definition, namely the 15 years starting with 1998, which trended up +0.417 °C/century.
The 14 years starting 1998 trended up +0.524 °C/century. Hence for there to have been a pause prior to this year one would have to raise the threshold slightly to allow this one.
The nearest thing to a pause substantially before that was the 15 years starting with 1980 which trended up +0.974 °C/century. The 14 years starting with 1980 trended up +0.892 °C/century.
This definition of “pause” is precise enough to start an annual pool on whether “the pause” will reach 0 (horizontal trend) or below, to be adjudicated at WoodForTrees after each December’s HadCRUT4 anomaly is announced.
For this December (16 years) it looks like the trend will be around +0.4 °C/century. Last December (15 years) it was +0.417 as noted above, while the preceding December (14 years) it was +0.524. In contrast the 30 years starting 1974 trended up +2.01 °C/century, considerably stronger than “the pause”.
At this rate I doubt many people will want to bet on “the pause” reaching 0 for December 2014. 2015 may hold more promise for some though.
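For anyone wanting to adjudicate such a pool without WoodForTrees, the trend is just an ordinary least-squares slope over the monthly anomalies. A minimal Python/NumPy sketch, with a synthetic series standing in for the actual HadCRUT4 window:

    import numpy as np

    def trend_c_per_century(monthly_anoms):
        """OLS slope of monthly anomalies (deg C), in deg C per century."""
        t_years = np.arange(len(monthly_anoms)) / 12.0
        slope_per_year = np.polyfit(t_years, monthly_anoms, 1)[0]
        return slope_per_year * 100.0

    # Synthetic 180-month window rising at exactly 0.4 C/century;
    # real use would feed in the actual HadCRUT4 monthly anomalies.
    window = 0.004 * (np.arange(180) / 12.0)
    print(trend_c_per_century(window))      # 0.4, under the +0.5 threshold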
Too many significant figures. Trying to be too precise.
> A rose by any name would smell as sweet.
I guess that depends what you mean by “sweet”, Cap’n:
http://perrysperennials.info/articles/rosefrag.html
@Appell: Too many significant figures. Trying to be too precise.
The extra digits were in case anyone took my proposal of a pool seriously. They give a sanity check that everyone in the pool is on the same page, i.e. that we’re all using the same data and same formulas for trends. Everyone else only needs the first significant digit, as you rightly point out.
Dr. Perlwitz
The only pause that matters is the one in peak annual insolation; it has been constant for the last 350 years.
http://www.vukcevic.talktalk.net/CET-Jun.htm
The rest is simply natural variability due to the ocean’s perpetual circulation change.
http://www.nasa.gov/topics/earth/features/perpetual-ocean.html
Puzzled Scientists Say Strange Things Are Happening On the Sun
“‘ If so, the decline in magnetic activity could ease global warming, the scientists say. But such a subtle change in the sun—lowering its luminosity by about 0.1%—wouldn’t be enough to outweigh the build-up of greenhouse gases and soot that most researchers consider the main cause of rising world temperatures over the past century or so. ‘Given our current understanding of how the sun varies and how climate responds, were the sun to enter a new Maunder Minimum, it would not mean a new Little Ice Age,’ says Judith Lean. ‘It would simply slow down the current warming by a modest amount.'”
http://science.slashdot.org/story/13/11/13/0150213/puzzled-scientists-say-strange-things-are-happening-on-the-sun
When doubling CO2 is less than 1 watt per square meter, why is 0.1% of 1360 watts per square meter small?
Particularly when we have not doubled CO2, nor are likely to in less than 50 years.
What if, instead of 0.1%, it was actually 0.15%? So rather than 1.3 watts, it was 2 watts per square meter less at the top of the atmosphere?
Anyhow, I agree we aren’t going to enter another Little Ice Age; if nothing else, it would probably require a century of such cooling before it resembled the LIA. And it seems we would also need some volcanic eruption exceeding 50 cubic km of ejecta, like we had during the LIA.
I just thought of something. The AGW religion holds that “global warming” causes more hurricanes. And there is quite a bit of historical evidence which shows this is not true. So if we assume that cooler conditions cause more hurricanes, and we have more and larger hurricanes when it’s cooler, is this another small cooling effect?
gbaikie,
WHT knows all. He has made a model. It doesn’t work, so he is going to “adjust” it to fit a little bit of the past.
WHT confuses estimates of albedo with fact.
He assumes the GHE exists. It doesn’t. He must spend a lot of time reading the “Book of Warm”. He sounds like he actually believes what he writes. Pity him, don’t condemn him.
Live well and prosper,
Mike Flynn.
“The AGW religion holds that “global warming” causes more hurricanes.”
Wrong.
Ironic that you call science religion without yourself knowing basic facts.
1360 is not the comparison yardstick; it is closer to 250 w/m^2 after geometry and albedo are figured in.
And then you screw up on the 1, as it should be more like 3.7, or 10 w/m^2 when the water vapor positive feedbacks are factored in. That gives 3C ECS.
No wonder no one ever responds to your arithmetical drivel, gbaikie.
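The ~250 w/m^2 yardstick comes from the standard geometry-and-albedo bookkeeping: average the solar constant over the sphere (divide by 4) and remove the reflected fraction. A quick check in Python, using textbook values:

    # Mean absorbed solar flux: S/4 * (1 - albedo)
    S = 1360.0                      # W/m^2, solar "constant"
    albedo = 0.30                   # reflected fraction (textbook value)
    absorbed = S / 4.0 * (1.0 - albedo)
    print(round(absorbed))          # ~238 W/m^2

    # A 0.1% dip in S, on the same basis:
    print(round(0.001 * S / 4.0 * (1.0 - albedo), 2))   # ~0.24 W/m^2

On that basis a 0.1% solar change is roughly 0.24 w/m^2, small next to the ~3.7 w/m^2 usually quoted for a CO2 doubling.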
WHT
Rather than getting too concerned about “arithmetic” regarding the impact of the sun on our climate, it would seem wiser to look at the recent past (past 1300 years or so).
We cannot explain the MWP and LIA with human GH gases, so it leaves other human influences plus natural forcings.
Other human influences were arguably not very significant prior to industrialization (~1750), so that leaves natural forcing.
The warmest century of the MWP was (at least) as warm as the past century, and the coldest century of the LIA was arguably around 1C cooler.
So, in addition to multi-decadal bumps and grinds caused by natural variability, we have 1C either way that was caused by natural factors.
We know the sun was unusually inactive during the depth of the LIA.
We also know that it was unusually active during the 20thC.
And we know that is has become very inactive lately.
So we have the sun essentially driving our climate for over 1,000 years, including the first half of the past century (before there was much GHG influence), with swings of at least 1C.
So it seems silly to write the sun off as insignificant by only considering changes in the direct solar irradiance.
Max
-1360 is not the comparison yardstick, it is closer to 250 w/m^2 after geometry and albedo is figured in.
And then you screw up on 1, as it should be more like 3.7 or 10 w/m^2 when the water vapor positive feedbacks are factored in. That gives 3C ECS.
No wonder no one ever responds to your arithmetical drivel gbaikie.-
O3 is supposed to be one of the greenhouse gases, and O3 is near the top of the atmosphere.
Sunlight obviously is the primary factor which “forces” all greenhouse effects, so less sunlight subtracts from, or is the reverse of, your “water vapor positive feedbacks” and all of your stipulated feedbacks.
“By their percentage contribution to the greenhouse effect on Earth the four major gases are:
water vapor, 36–70%
carbon dioxide, 9–26%
methane, 4–9%
ozone, 3–7% ”
http://en.wikipedia.org/wiki/Greenhouse_effect
But in general, averaging sunlight where it doesn’t shine seems a bit silly to me.
Just posted a reference at RC to Cowtan and Way’s comments here, nothing more than the web address, and it was Bore-holed.
This may sound like a stupid question, but in your opinion is that due to not wanting to send anyone to the competition, or are they just boycotting a skeptic site? I noticed they don’t list Climate Etc or WUWT on their blogroll that is headlined “Other Opinions”; but Dr Curry only lists her buddies as well. I know she allows links to Real Climate, as I’ve done it. My only conclusion would be that they are afraid people might get wise to their propaganda. They don’t want free exchange, just groupthink.
The Borehole at RC makes for an interesting read. To their credit, they keep it online even though it makes them look more intolerant than they probably are. My experience there leads me to believe that bore-holing depends on who the moderator is and his mood at the time. They never borehole remarks from their regular supporters, no matter how fatuous (e.g. “there is no debate”). As to Way & Cowtan, it will be entertaining to watch attack dog Ray Ladbury gradually come around to supporting the paper. He and others there are in a bit of a bind, though, having finally acknowledged the pause and explained where the missing heat has been hiding. Now it turns out not to have been missing after all.
That sounds funny. I’ll have to check a few of those threads out.
One of the hallmarks of the warmist movement, and RC may still be the standard-bearer in this regard even in its reduced state, is that they have not been, cannot be, and will not be wrong. Thus Hansen’s predictions have proven out, the hockey stick is unbroken, Trenberth was misunderstood, etc. Data will be tortured until it gives in. Now that the new paper is out, how long will it be (I assume it’s already happened) before someone at RC or elsewhere trumpets the news that, “Aha, the warming that we told you would ‘come back with a vengeance’ has arrived! Not only did we find it hiding in the deepest levels of the oceans and elsewhere during the alleged pause, we now know the pause didn’t exist to begin with. Add it all up, people, the catastrophe is here!” And there you have it, a veritable heat bonanza.
anonymous,
I am so new to this stuff that I would never know the difference. I am glad you explained things to me point blank like that. If the reality is as you describe, they would be in denial, so I guess that’s why they hurled the invective first. I also understand that the media portrayal and the public perception would be more in RC’s direction. I still haven’t looked at McIntyre’s papers (I keep promising myself), but looking at the balance of all the charts I’ve seen, it looks more like a boomerang than a hockey stick. And yet here is Mann, front and center, trying to slay the denial dragon with truth, justice and the American way. I just now read about Trenberth on wiki; I had no idea that happened. After reading that, all I can say is that it sounds Clintonesque. I have been under the impression that Hansen’s predictions were pretty good? So I take it you think the pause is having an effect on team catastrophe?
Uncle Ray is making progress. ‘Of two minds’ at 8:19 AM today. Prediction: will be of one warm mind by week’s end.
…except for Aunt Judy
To my prediction that someone at RC would proclaim even more heat playing hide-and-seek than we imagined, not only in the deeper ocean but in the kriged-up Arctic, a Mr. Roger Lambert is giving Stefan a nudge. Here it is, quick, before it gets bore-holed:)
Roger Lambert says:
16 Nov 2013 at 12:24 PM
Stefan, could you comment on the stoichiometry of the heat? Enough ‘hidden’ heat was accounted for in deeper ocean strata to explain the “pause”. Now this study interpolates enough missing heat somewhere else. Do we have too much heat floating around or not?
That’s hilarious! heat popping up everywhere!!
post was still there
ooh, stoichiometry; they use longer words over at RC than here.
It gets better and better over at RC. Last night there was another reference [Steve] to the authors’ contributions to the discussion here. This morning — boreholed. So childish, and so tone-deaf!
Too much emotion, not enough data.
Ordvic, please understand this is my venting more than anything else, based on long experience, to be sure, and I do have strong, informed opinions, but I’m no climate scientist, and you are right to determine things through your own reading.
“Just posted a reference at RC to Cowtan and Way’s comments here, nothing more than the web address, and it was Bore-holed.”
They tend to clip posts that have no content from the poster, unless in direct reply to a request or somesuch. Try adding what the links lead to next time and see if they permit it.
Judith,
Do you have a link to Ed Hawkin’s graphic shown in the post (or better, a link to the data for that graphic)?
http://curryja.files.wordpress.com/2013/02/aafig.jpg
https://www.google.com/search?q=ed+hawkins+graphs&client=firefox-a&hs=DVM&rls=com.floodgap:en-US:unofficial&source=lnms&tbm=isch&sa=X&ei=OAGFUvumF4S-igKr2YCICg&ved=0CAkQ_AUoAQ&biw=1217&bih=605#imgdii=_
http://www.climate-lab-book.ac.uk/author/ed/
It looks like everything is available except the complete paper; even the actual data and computer code used are available for downloading.
There is a circularity here. In the regions that are not missing data, if you randomly exclude some and use their method to impute the missing data, then their method does a good job of imputation. They infer from that that the method is equally valid for imputing the missing temperatures from the areas that are already undersampled. However, if there is a bias in the lack of coverage, this method will not reveal any bias: it only confirms that if you delete data completely at random from the sampled regions the imputation works, not that their method accurately imputes where it cannot be assumed the data are missing at random.
The data are missing from rather large regions, and there is no reason to assume that those regions have any particular relationship to the measured regions. If the true relationship between missing and measured data is different from the relationship that is used in the imputation, then the imputed values will be wrong, and there is no way (now, at least) to disclose the errors.
This is based on the abstract and press release.
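Matthew’s circularity worry can be made concrete with a toy example. A minimal Python/NumPy sketch, entirely synthetic and much cruder than kriging: validating an interpolator on randomly deleted points inside the sampled region looks fine, yet says nothing about a contiguous unsampled region that behaves differently.

    import numpy as np

    rng = np.random.default_rng(2)
    x = np.linspace(0.0, 10.0, 201)
    truth = np.sin(x) + 0.05 * x
    truth[x > 8.0] += 1.0            # the unsampled region really is different

    sampled = x <= 8.0               # analogue of the well-observed latitudes

    # Test 1: delete 20% completely at random within the sampled region
    holdout = sampled & (rng.random(x.size) < 0.2)
    obs = sampled & ~holdout
    est = np.interp(x[holdout], x[obs], truth[obs])
    print(np.abs(est - truth[holdout]).mean())   # tiny: validation "passes"

    # Test 2: impute the truly unsampled region the same way
    est2 = np.interp(x[~sampled], x[obs], truth[obs])
    print(np.abs(est2 - truth[~sampled]).mean()) # large: the +1 shift is invisible

As NW notes below, the paper’s edge-based holdout is more demanding than the random-deletion case, but it still cannot rule out covariance behavior unique to the unobserved region.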
Kind of frustrating that everything but the paper is available. They do have out-of-sample data – buoys in the Arctic, some floating and some on ice – which helps a lot.
1.) Often there is, and should be, multiple rounds of sampling. The geostatistical techniques of course bring some known characteristic “design” tools, e.g., error maps for the region of interest, use of virtual data for “optimizing” sampling, etc., to that process.
2.) Use of co-regionalized variables brings more locations into play, though not without limitations and qualifications. But in the light of a tiered protocol, “more” is better than “none”. Uncertainties associated with the quantitative relation between the two variables can in principle be incorporated into the data-extended (kriging) model.
3.) These two facets of a geostatistical approach used in tandem make it attractive and potentially viable.
0.) I too am outside the pay-wall so none of this may apply…but I hope it does :O)
Matthew, I was able to look at the paper itself. The model selection is not done on the basis of prediction of randomly excluded data. Rather, it is done on the basis of prediction of excluded data near the edges of the regions where data are missing. I think this is a reasonable way to address the issue you are raising, as well as it can be addressed. Of course, the problem remains that patterns of spatial covariance of observations with missing observations cannot be observed; they have to be based on an informed assumption. Dr. Curry suggests that the covariance pattern is likely to be different in the Arctic for various reasons. So I think that’s where we are.
There is some information that the correlation length in the arctic is shorter ( during some seasons) than the correlation length at other parts of the globe.
Reference
http://iabp.apl.washington.edu/data_satemp.html
‘The statistics of surface air temperature observations obtained from buoys, manned drifting stations, and meteorological land stations in the Arctic during 1979–1997 are analyzed. Although the basic statistics agree with what has been published in various climatologies, the seasonal correlation length scales between the observations are shorter than the annual correlation length scales, especially during summer when the inhomogeneity between the ice-covered ocean and the land is most apparent. During autumn, winter, and spring, the monthly mean correlation length scales are approximately constant at about 1000 km; during summer, the length scales are much shorter, i.e. as low as 300 km. These revised scales are particularly important in the optimal interpolation of data on surface air temperature (SAT) and are used in the analysis of an improved SAT dataset called IABP/POLES. Compared to observations from land stations and the Russian North Pole drift stations, the IABP/POLES dataset has higher correlations and lower rms errors than previous SAT fields and provides better temperature estimates, especially during summer in the marginal ice zones. In addition, the revised correlation length scales allow data taken at interior land stations to be included in the optimal interpolation analysis without introducing land biases to grid points over the ocean. The new analysis provides 12-hour fields of air temperatures on a 100-km rectangular grid for all land and ocean areas of the Arctic region for the years 1979–1997.
The IABP/POLES SAT data set is then used to study spatial and temporal variations in SAT. This data set shows that on average, melt begins in the marginal seas by the first week of June and advances rapidly over the Arctic Ocean, reaching the pole by 19 June, 2 weeks later. Freeze begins at the pole on 16 August, and the freeze isotherm advances more slowly than the melt isotherm. Freeze returns to the marginal seas a month later than at the pole, on 21 September. Near the North Pole, the melt season length is about 58 days, while near the margin, the melt season is about 100 days. A trend of +1°C/decade is found during winter in the eastern Arctic Ocean, but a trend of –1°C/decade is found in the western Arctic Ocean. During spring, almost the entire Arctic shows significant warming trends. In the eastern Arctic Ocean this warming is as much as 2°C/decade. The spring warming is associated with a trend toward a lengthening of the melt season in the eastern Arctic. The western Arctic, however, shows a slight shortening of the melt season. These changes in surface air temperature over the Arctic Ocean are related to the Arctic Oscillation, which accounts for more than half of the surface air temperature trends over Alaska, Eurasia, and the eastern Arctic Ocean but less than half in the western Arctic Ocean.’
Yes, you can’t even get the annual cycle phased correctly if you are using arctic land temps to infer arctic ocean and sea ice temps.
Variability in correlation length in space and in time would not be a surprise given the underlying organized dynamic system. And it is not just the correlation length, either: existence of a finite correlation length necessitates other parameters characterizing the correlation, e.g., sill, nugget, etc. Such are the cards dealt.
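[Editor's note: for readers new to the geostatistical vocabulary, here is a minimal sketch of an exponential covariance model with a nugget, sill and correlation length, plugging in the 1000 km winter and 300 km summer scales from the IABP/POLES abstract; the numbers are purely illustrative.]

import numpy as np

def exp_covariance(d_km, sill=1.0, nugget=0.1, length_km=1000.0):
    """Exponential model: 'sill' is the total variance, 'nugget' the
    unresolved small-scale/noise part (acting only at zero lag), and
    'length_km' the e-folding correlation length."""
    cov = (sill - nugget) * np.exp(-d_km / length_km)
    return np.where(d_km == 0, sill, cov)

d = np.array([0.0, 300.0, 600.0, 1000.0])
print("winter-like (L = 1000 km):", exp_covariance(d, length_km=1000.0).round(3))
print("summer-like (L = 300 km): ", exp_covariance(d, length_km=300.0).round(3))

With the summer scale, a station 600 km away carries a covariance of only about 0.12 (out of a sill of 1.0) versus about 0.49 at the winter scale, which is why using a single annual correlation length would over-weight distant stations in summer.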
NW: Rather, it is done on the basis of prediction of excluded data near the edges of the regions where data is missing.
That is a good detail.
Your post is good.
Steven Mosher | November 14, 2013 at 12:47 pm |
thanks for an informative post.
From Judith’s excerpt from Kennedy we have:
“Unknown unknowns will only come to light with continued, diligent and sometimes imaginative investigation of the data and metadata.”
I suspect that many of the laypersons, and even the more technical persons, discussing the uncertainties in the observed temperature series and using those series for important science efforts have not mulled through the assumptions required to make these estimates, or the areas where a better understanding of those assumptions could significantly change the currently accepted uncertainties. Here I am referring to the known unknowns, or at least the unknown or poorly understood conditions, that could affect the instrumental temperature record over the historical period in which we use it.
We appear to have temperature series put forth with uncertainty limits that are then exceeded by a later version of the same series or by newer series. I believe this has been the case more recently with GHCN versions 2 and 3, and with the BEST effort versus HadCRUT4, GISS and NCDC, if you are allowed to select part of the period for comparison. Cowtan and Way, though I have not yet read the paper, would appear to be following this same line.
Without expressing a personal preference for any of these methods, I think one can see that even a measurement of utmost importance to the AGW issue, i.e. the instrumental temperature series, and more importantly the trends derived from those series, remains a work in progress.
I have looked in some detail at the algorithms used in most temperature series to adjust those series using breakpoints of station difference series. While I think that approach is the better one, being objective rather than depending on more subjective metadata, it still has potential weaknesses in areas such as finding slow non-climatic changes affecting temperature measurements. We have some confidence in validating these adjustments after having the satellite record available for comparison, and after recognizing that lower-troposphere trends can differ from surface temperature trends, with differences varying by global region. I judge that much of the uncertainty resides in the period before we had satellite data, even conceding that satellite data have uncertainties unique to those measurements. Benchmarking these various approaches against some simulated climate works well for me, as long as we include in the benchmarking some conditions that might arise from the so-called known unknowns.
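[Editor's note: to illustrate the breakpoint idea described above (a toy version with made-up numbers, not the actual GHCN pairwise-homogenization algorithm), one can scan a candidate-minus-neighbours difference series for the split point that maximizes a two-sample t statistic.]

import numpy as np

rng = np.random.default_rng(1)

# Toy monthly difference series: candidate station minus a neighbour composite.
# A non-climatic step of +0.5 C (e.g., a station move) is inserted at month 120.
n = 240
diff = rng.normal(0.0, 0.2, n)
diff[120:] += 0.5

def best_breakpoint(x, margin=12):
    """Return the split index maximizing the two-sample t statistic."""
    best_k, best_t = None, 0.0
    for k in range(margin, len(x) - margin):
        a, b = x[:k], x[k:]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        t = abs(a.mean() - b.mean()) / se
        if t > best_t:
            best_k, best_t = k, t
    return best_k, best_t

k, t = best_breakpoint(diff)
print(f"breakpoint found at month {k} (true: 120), |t| = {t:.1f}")

Note how this matches the caveat above: an abrupt step is easy to find, but a slow drift such as gradual urbanization produces no sharp feature in the difference series and can slip through.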
Stitching together multiple data sets collected by different methods is fraught with potential problems, as we’ve often seen in climate science.
To me, when the satellite data sets clearly show “the pause”, and you then turn around and say stitching the satellite data onto the surface station data makes “the pause” largely go away. . . .well, that cries out to me that you’re actually seeing an artifact of the stitching process of two data sets collected by different means rather than real data.
Re: “Kriging across land/ocean/sea ice boundaries makes no physical sense.” But does infilling make any more sense? Doesn’t Cowtan & Way’s testing show kriging makes more sense than infilling?
Infilling by Kriging within Africa makes sense. Kriging or any other kind of infilling that uses data from land to infer something about ocean temps makes no physical sense.
But does kriging make less sense than infilling, across physical boundaries?
I think there is one exception: ice-free coastal land temperatures seem to be useful as long as they are very close to sea level. At least when checking the buckets-to-intakes issue it seemed so, but that's a stats guy's call.
Judith
There are three choices:
1. Leave the Arctic blank. This is CRU. Understand that this approach amounts to INFERRING that trends north of 70 are less than trends at 70. That is, by leaving the Arctic missing you impute the global average trend to the area north of 70.
2. Extrapolate from 70 to 90. This is GISS. It assumes that the trend is the same at 90 as it is at 70; in short, that the polar amplification magically ceases at 70 north.
3. A method like Robert's and Kevin's.
The argument that a method isn't physical holds against ALL THREE approaches. The question is which is best.
Since 3 has passed cross-validation and has been compared to buoy data, I'm going to bet on 3.
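[Editor's note: the arithmetic behind option 1 is easy to exhibit. A minimal sketch with hypothetical numbers shows that averaging only the covered cells is identical to filling the blank cells with the covered-area average.]

import numpy as np

# Hypothetical trends (C/decade): five covered zones plus an unobserved Arctic.
covered = np.array([0.10, 0.12, 0.08, 0.15, 0.20])  # observed, up to 70N
arctic_truth = 0.45                                  # unobserved, north of 70N
w = np.array([0.96, 0.04])                           # area weights: covered, Arctic

option1 = covered.mean()                             # "leave the Arctic blank"
option1_explicit = w @ [covered.mean(), covered.mean()]
option2 = w @ [covered.mean(), covered[-1]]          # extrapolate the value at 70N
truth = w @ [covered.mean(), arctic_truth]

print(option1, option1_explicit)  # identical: blank cells are imputed with the mean
print(option2, truth)             # both estimates fall below truth if amplification is real

So "leaving the Arctic blank" is not a neutral choice; it is itself an imputation.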
There is a 4th option. Do a more comprehensive job of figuring out what the AO temperatures are, and then do a comprehensive uncertainty analysis. Otherwise, I prefer option #1.
So then Judith, do you agree that kriging is at least no worse than current algorithms for areas that have no coverage?
Steven,
There’s a third choice: Accepting that coverage is incomplete. If the resulting time series is more reliable, and if it in addition has less “random” variability, this time series may be more useful for almost all practical purposes. It’s not truly global, but that’s fine as long as that’s recognized and known by users of the time series.
There is a fifth option. Admit you don’t know enough to compute a “global average temperature” with anywhere near sufficient precision to detect changes of tenths of a degree per day/year/decade.
But then what would happen to the gravy train of other people’s money?
There is a fifth option. Admit you don’t know enough to compute a “global average temperature” with anywhere near sufficient precision to detect changes of tenths of a degree per day/year/decade.
All scientific measurements have uncertainties. So do GISTEMP and HadCRUT4 — they’ve both been very open about that.
If you disagree with their error bars, you’re welcome to engage them on their science.
Regarding a global average temperature, I don't think this is prima facie important. I think the key issue is comparing model simulations with observations, which implies making the comparison in regions where you have observations. This is what Ed Hawkins did, and I think it is the most illuminating thing that can be done with the ‘global’ surface temperature data.
“There is a 4th option. Do a more comprehensive job of figuring out what the AO temperatures are, and then do a comprehensive uncertainty analysis. Otherwise, I prefer option #1.”
Of course we all like option 4.
But I’m not seeing the motivation behind option 1. That is no different than asserting that the trend above 70N is the same as the global average.
Now, if we had no physical argument either way, then assuming the trend above 70 is the same as the global average might be defensible. But we do have a reason for thinking the trend above 70N will be different: polar amplification. I know of nothing that would suggest this will suddenly disappear north of 70.
I can think of “purity” reasons why folks might want to avoid infilling.
But, if you wanted to make a bet, would you bet that the trends above 70 are
A) lower than they are at 70,
B) higher, or
C) about the same?
Which bet would you take?
“GaryM | November 14, 2013 at 5:50 pm |
There is a fifth option. Admit you don’t know enough to compute a “global average temperature” with anywhere near sufficient precision to detect changes of tenths of a degree per day/year/decade.”
Nobody claims that.
Judith, can you please answer this directly: do you agree that kriging is at least no worse than current algorithms for areas that have no coverage?
Judith Curry wrote:
Regarding a global average temperature, i don’t think this is prima facie important. I think the key issue is comparing model simulations with observations,
But that would require comparing regional changes to regional predictions. Models haven’t yet reached that level — and may never reach it.
There are many things, though, that physics can predict in toto, but not in smaller regions or domains: basically all of thermodynamics.
“Global warming” is global. How can it be assessed with something less than global measurements of global variables?
@David Appell,
But warming isn’t the same globally.
Dr. Curry,
“Regarding a global average temperature, i don’t think this is prima facie important.”
Well, if the consensus were correct that GAT is rising as fast as they claim it is, it would at least be evidence in support of their claim (not proof, evidence). And I am the wrong person to tell that it is not prima facie important. The IPCC and every other CAGW advocate has been crying GT wolf for decades.
Steven Mosher,
” nobody claims that”
Read harder.
Steven Mosher,
You are right. I should apologize. You warmists are claiming to know changes in temperature in hundredths of a degree per decade.
“The updated 100-year trend (1906–2005) of 0.74°C ± 0.18°C is larger than the 100-year warming trend at the time of the TAR (1901–2000) of 0.6°C ± 0.2°C due to additional warm years. The total temperature increase from 1850-1899 to 2001-2005 is 0.76°C ± 0.19°C. The rate of warming averaged over the last 50 years (0.13°C ± 0.03°C per decade) is nearly twice that for the last 100 years.”
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/tssts-3-1-1.html
(Please note the header – “TS.3.1.1 Global Average Temperatures”)
See also:
“the average surface temperature across the contiguous 48 states has risen at an average rate of 0.14°F per decade (1.4°F per century)”
http://www.epa.gov/climatechange/science/indicators/weather-climate/temperature.html
My bad. I heartily apologize for understating the absurd claims of precision by you warmists.
Mi Cro wrote:
But warming isn’t the same globally.
A global average is a global average. And climate science says that the global average should depend, in part, on the global distribution of all GHGs.
@David Appell
“A global average is a global average. And climate science says that the global average should depend, in part, on the global distribution of all GHGs.”
This isn’t climatology, it’s math.
But yes, that is what it says, and yes, of course local temps will respond to local well-mixed GHGs. But what if there is no trend in regional temps as GHGs went up, and it's only once you average all of the regions that anything like a warming trend shows up (actually it's more like two or three burps of warming in different regions)?
‘ “Global warming” is global. How can it be assessed with something less than global measurements of global variables?’ I like the question, but your answer makes no sense. Are you claiming that we must look at the entire system that contains all the heat, or thermodynamics doesn’t work? It’s not true; in fact, we are not looking at the entire system, but at a tiny fraction of it. The ocean contains almost all the heat, and climate scientists now suspect that there is significant heat transfer into (and presumably out of) the deep ocean.
Are you seriously claiming that the climate system is chaotic, but ceases to be once you look at the global surface temperature average? I’d like some evidence for this very important claim.
miker wrote:
Are you claiming that we must look at the entire system that contains all the heat, or thermodynamics doesn’t work?
Yes. Obviously, yes.
4. A method unlike Bobby and Kev’s.
5. Admit that you do not have sufficient data, and resist the urge to make $#!^ up and pretend that you do.
No. The question is: Which, if any, is sufficient?
&
When a scientist makes methodological decisions based upon “how I would bet it is”, he is in fact deciding how it's going to be. This, especially when combined with your two false-choice restrictions of the possible alternatives, is how science succumbs to bias.
Pick 4, IFF the sufficiency question can be answered with a defensible positive. Else, pick 5.
@Appell
“miker wrote: Are you claiming that we must look at the entire system that contains all the heat, or thermodynamics doesn’t work?” “Yes. Obviously, yes.”
———-
Well, then answer my question: Why should we be talking about global surface temperature when it is part of a larger system not in equilibrium, that is, one including the deep ocean? Is there any reason to believe that global surface temperature alone is not chaotic?
David Appell
We are talking about warming cycles that can be measured in hundredths of a degree per year over ~30 years.
So the “fifth option” simply recognizes that you cannot get a globally and annually averaged figure that is accurate to a hundredth of a degree (expressed in thousandths of a degree) from individual readings that are significantly less accurate UNLESS you include the individual inaccuracy in your error bars.
Add to that the number of measuring stations that have been shut down over time, the gaps that need to be filled because there are no measurements, the human errors involved (particularly in the past record), local distortions to the land record from urbanization, the known inaccuracies of the SST record resulting from changing measurement methods, etc., and you have a can of worms.
Another unexplained fact is that the surface record shows more rapid warming than the satellite record, despite the fact that GH warming should occur more rapidly in the troposphere than at the surface.
But the record is the best we have – so we have to live with it for what it’s worth, until something better comes along.
Max
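[Editor's note: the statistical point at issue here is standard and worth making explicit. A minimal sketch with made-up numbers: independent random reading errors shrink in the mean like 1/sqrt(N), which is how a monthly anomaly average can be far tighter than any single thermometer, but a shared systematic error (buckets versus engine intakes, say) does not shrink at all and has to be carried separately in the error bars; that is precisely the distinction the Kennedy review is about.]

import numpy as np

rng = np.random.default_rng(2)

true_anomaly = 0.500   # hypothetical "true" monthly anomaly, in C
n_obs = 10_000         # independent readings going into the monthly mean

# Random read errors of 0.5 C shrink like 1/sqrt(N) when averaged:
random_errs = rng.normal(0.0, 0.5, n_obs)
est = true_anomaly + random_errs.mean()
print(f"random errors only: {est:.4f}  (theory: +/- {0.5 / np.sqrt(n_obs):.4f})")

# A shared systematic error (an uncorrected 0.1 C bias) does not average away:
est_biased = est + 0.1
print(f"with a 0.1 C shared bias: {est_biased:.4f}  (still ~0.1 C off, for any N)")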
Max wrote:
you cannot get a globally and annually averaged figure that is accurate to a hundredth of a degree(expressed in thousandths of a degree) from individual readings that are significantly less accurate
I am sure the scientists who calculate monthly global temperature anomalies know this.
So what evidence is there to the contrary? (Honestly, I’m interested to know.) Hard, scientific evidence.
Free math help for people who've been hit in the head with a baseball.
JCH: I don’t see any science in your reply, so I am going to ignore it.
David Appell
“Science” to “prove” that 1+1=2?
Sorry, David.
You need to go back to first grade.
Max
Judith: Do we really care precisely how much warming has been occurring in obscure corners of the planet? Has anyone ever constructed a temperature or climate index weighted towards where most people live and grow/harvest products?
One could even go further and recognize that extreme weather causes the most problems. We really want to know how fast warm extremes are increasing and cold extremes are decreasing – and this approach would remind everyone that there are costs and BENEFITS to climate change.
Polar warming is important to sea level rise, but temperature alone does not tell us the significance of warming. Warmer, moister air brings more precipitation. We need to know about the balance between accumulation and melting. Greenland (and the polar bears) survived several millennia of the Holocene Climate Optimum.
We do care about the global average temperature, because it is determined by the greenhouse gases, which are well-mixed in the atmosphere.
It's good to have a representative time series, and it's better to have many time series, as the rate of warming varies. A time series with 84% complete coverage serves as a representative time series just as well as one with 100% coverage.
David Appell
I’d consider that a statement of faith, rather than an observation backed by empirical scientific evidence, David.
The globally and annually averaged land and sea surface (or tropospheric) temperature may be partially determined by GH gases, but (despite the IPCC's statement of 95% confidence) there is still way too much uncertainty to state that “it is determined by GH gases”, as I'm sure you will agree.
Max
It is a statement of physics, not of faith.
Here are just some of the measurements that have detected an enhanced greenhouse effect:
“Increases in greenhouse forcing inferred from the outgoing longwave radiation spectra of the Earth in 1970 and 1997,” J.E. Harries et al, Nature 410, 355-357 (15 March 2001).
http://www.nature.com/nature/journal/v410/n6826/abs/410355a0.html
“Comparison of spectrally resolved outgoing longwave data between 1970 and present,” J.A. Griggs et al, Proc SPIE 164, 5543 (2004). http://spiedigitallibrary.org/proceedings/resource/2/psisdg/5543/1/164_1
“Spectral signatures of climate change in the Earth’s infrared spectrum between 1970 and 2006,” Chen et al, (2007) http://www.eumetsat.int/Home/Main/Publications/Conference_and_Workshop_Proceedings/groups/cps/documents/document/pdf_conf_p50_s9_01_harries_v.pdf
“Radiative forcing – measured at Earth’s surface – corroborate the increasing greenhouse effect,” R. Phillipona et al, Geo Res Letters, v31 L03202 (2004)
http://onlinelibrary.wiley.com/doi/10.1029/2003GL018765/abstract
“Measurements of the Radiative Surface Forcing of Climate,” W.F.J. Evans, Jan 2006
https://ams.confex.com/ams/Annual2006/techprogram/paper_100737.htm
David Appell,
Serious question.
When the Earth's global average surface temperature was 300 K (and it must have passed through this as it cooled from the molten state), what concentration of greenhouse gases was necessary to cause this temperature?
Why has the Earth’s surface cooled from 300 K to whatever it is at present?
I think you are reading from the Book of Warm, but I am always open to new facts. I await your answer.
Live well and prosper,
Mike Flynn.
David Appell
This ended up in the wrong place so am re-posting:
“Physics” (as you put it) tells us that CO2 (among other GH gases, principally H2O) absorbs and re-radiates LW radiation. This has been corroborated by empirical evidence based on physical observations.
“Physics” does NOT tell us that this represents the principal determining factor of global average temperature, as you stated. This has NOT been corroborated by empirical evidence based on physical observations or reproducible experimentation.
Max.
Mike Flynn wrote:
When the Earth’s global average surface temperature was 300 K, (and it must have passed through this as it cooled from the molten state), what concentration of greenhouse gases was necessary to cause this temperature?
I don’t know, without doing some research.
Why does it matter? There are many factors that influence climate. Some predominate over others, depending. Right now anthropogenic GHGs seem to predominate — is there some other apparent cause for modern warming? If so, what is the evidence?
David Appell,
OK, you should at least know this.
When the surface was 0.001 C warmer than it is now, why did it cool?
Surely there was more CO2 in the atmosphere to make it warmer then?
Live well and prosper,
Mike Flynn.
“Physics” does NOT tell us that this represents the principal determining factor of global average temperature, as you stated.
Of course it does — climate change is the sum of all forcings. Q.E.D.
Mike Flynn wrote:
When the surface was 0.001 C warmer than it is now, why did it cool?
Where did you ever learn that CO2 was the only climate forcing?
“Of course it does — climate change is the sum of all forcings. Q.E.D.”
That’s how linear systems work. This isn’t linear.
Harold wrote:
That’s how linear systems work. This isn’t linear.
What is your evidence that recent warming hasn’t been a linear function of climate forcings?
David Appell,
Good Warmist response, but no longer sufficient.
Let me rephrase. Why did the Earth’s average surface temperature fall from 0.001 C warmer than it is now, to its present temperature?
You refer to, in usual Warmist fashion, “climate” and “forcing”.
Now, climate is the average of weather – no more no less. “Forcing” in this context, is a meaningless Warmist concept. Stick to physics, and you might convince someone. Warmist wafflespeak is only effective with Warmists.
Live well and prosper,
Mike Flynn.
Mike Flynn: Unless you stop the name calling, you won’t get any further replies.
Got it?
David Appell,
You haven’t provided any answers so far, so you are obviously confusing me with someone who cares.
Live well and prosper,
Mike Flynn.
Don’t you just love the logic? “When did you stop beating your wife?”
Don’t you just love the logic? “When did you stop beating your wife?”
Do you think CO2 doesn't absorb infrared radiation, or do you think the Earth doesn't emit it?
Mike Flynn, the last time the temperature was 300 K would have been over 40 million years ago in the Eocene, when GHG concentrations were at least 1000 ppm. So you ask why did it cool. It was because over long geologic periods without much volcanic activity such as since the Eocene peak, CO2 tends to be sequestered in the soil and rocks. Less CO2 equals cooling.
Jim D,
It’s a start. Now consider when the Earth was only 0.001 C warmer. More CO2? Less CO2?
How did it continue to cool to the present temperature?
Live well and prosper,
Mike Flynn.
Mike Flynn, unforced decadal noise is about 0.1 C, so anything less than that over any period is not attributable to climate factors and would not be called climate change. The pause people are confounded by this, because they keep thinking the pause anomaly, which is 0.1 C, is something to do with climate. Hope that helps.
Jim D,
You have obviously been absorbing the lessons from the Book of Warm. Well done! So far, you have managed the following:
1. Pretend to misunderstand the question.
2. Assume an air of superior knowledge.
3. Use Warmist terms that can have different meanings ascribed to them if anybody is silly enough to quote them back at you.
4. At all costs, avoid giving a direct answer. You may paint yourself into a corner.
Unfortunately, Jim D, people have realised that facts trump fantasy. As you can’t provide any cogent facts, I will leave you to enjoy your fantasy.
Live well and prosper,
Mike Flynn.
Mike Flynn, I gave you the reason your question made no sense in a climate context. Maybe, if you read the answer again, you will see how your 0.001 C can be viewed in the big picture.
I don't care about how much warming there is in the corners.
But that's not the question.
The question is:
Given all the data you have, construct the very best estimate: the estimate that minimizes error.
This is not policy-changing science. That is what makes it so cool.
The key issue is to accurately characterize the error, to decide whether the ‘very best estimate’ is useful for anything.
Steven Mosher,
“This is not policy changing science.”
No, but that was clearly their intent. Which is why the fact that they are Skeptical Science drones is so instructive.
The “pause” in reported “global” temps is the bête noire of the CAGW cognoscenti right now. It not only undermines the claims of imminent thermageddon, it also makes the GCMs look as useless for setting global energy policy as they are.
The “pause” undermines two of the legs of the CAGW stool. Observations and models. And worst of all, the “pause” comes from the work of some of their own prelates, NASA and UEA.
I wonder if you would find this so much “fun” if their conclusion was that the guessed (sorry, estimated) temps of the Antarctic had been overstated.
GaryM wrote:
The “pause” in reported “global” temps is the bête noire of the CAGW cognoscenti right now.
You want to pretend that the data is sacrosanct and cannot be questioned, but this is false.
The data itself depends on models. All climate data does:
http://davidappell.blogspot.com/2013/11/without-models-there-are-no-data.html
There is no perfect data; it all depends on models. The C&W paper is an excellent lesson in that rule.
David Appell,
Of course. Observations, GCMs and paleo all depend on models. It's unfortunate the public is not informed of this when y'all get your scare headlines in the media.
I am not the one who pretends that (artificial) data is sacrosanct. That would be you and your comrades in arms. Your inflated claims of certainty rise even as your models prove increasingly unreliable.
But keep preaching brother. Your congregation is with you.
GaryM: Shove it. I am not responsible for how the media reports on global warming, only what I myself write. And I think that informs the public very well.
Of course. Observations, GCMs and paleo all depend on models. It's unfortunate the public is not informed of this when y'all get your scare headlines in the media.
You wrote as if you don’t understand that the data, too, depends on models.
Nor have you offered any reason to doubt the data uncertainties that climate scientists assign to their data. It’s clear you don’t have any.
“You wrote as if you don’t understand that the data, too, depends on models.”
Nope. You just assumed I didn’t. Which is how, after all, climate “science” is done.
“Nor have you offered any reason to doubt the data uncertainties that climate scientists assign to their data.”
If you need someone to offer you evidence that increasing failure of models to match observations is inconsistent with an increase in the certainty of what those models predict, you are beyond hope.
D Appell: “Shove it.”
Oh yeah? Well up your hole with a Mello Roll.
GaryM: “they are Skeptical Science drones.”
When I go there, the only post I can find by Cowtan and Way is a crowd-funding request for open access fees for their new paper. If that’s all there is, I think this is unfair. Is there something you found on SkS, written by them, that is clear BS?
NW,
I posted elsewhere in this thread (lord knows where) their comment on Skeptical Science requesting funding from SS denizens to make their paper and data available free of charge. And Robert Way is listed as a member of the SS “Team”.
http://www.skepticalscience.com/team.php
curryja | November 14, 2013 at 6:35 pm |
The key issue is to accurately characterize the error, to decide whether the ‘very best estimate’ is useful for anything.
======
+1000
Combine the error with the meaningless time frame over which we have data, and it's not hard to see why many, including myself, aren't ready to buy AGW.
David Appell
“Physics” (as you put it) tells us that CO2 (among other GH gases, principally H2O) absorbs and re-radiates LW radiation. This has been corroborated by empirical evidence based on physical observations.
“Physics” does NOT tell us that this represents the principal forcing of our climate, as you stated. This has NOT been corroborated by empirical evidence based on physical observations or reproducible experimentation.
Max.
No one is saying CO2 is the “principal forcing of our climate.”
They are saying it is a major forcing for the *perturbation* of our climate. And, yes, this has been corroborated by both empirical evidence and by climate models.
David Appell
You wrote that greenhouse gases determine global average temperature.
Let me quote it for you: “We do care about the global average temperature, because it is determined by the greenhouse gases, which are well-mixed in the atmosphere.”
I simply pointed out to you that this statement was a statement of faith, as GH gases are not necessarily the principal determining factor for global temperature.
So stop waffling and simply concede that you should have stated that global average temperature is *partially* determined by GH gases.
That would have been a “physics-based” statement, rather than one based on “faith”.
Max
“No one is saying CO2 is the ‘principal forcing of our climate.’”
Uh oh, so CO2 is NOT the control knob for the heat content of the climate? You better not tell the IPCC that.
David Appell
You claim that CO2 “is a major forcing for the *perturbation* of our climate.”
Please show empirical evidence corroborating the premise that CO2 is a major forcing for the *perturbation* of our climate.
Forget the climate models, David. They are not empirical evidence.
And forget subjective interpretations of dicey paleo-climate proxy data of carefully selected periods of our geological past, using the argument from ignorance (“we can only explain this if we assume…”).
Show me empirical evidence based on real-time physical observations or reproducible experimentation, which corroborates your claim.
Max
I simply pointed out to you that this statement was a statement of faith, as GH gases are not necessarily the principal determining factor for global temperature
False. False, false, and false.
I have already given you the many studies that show a change in Earth's outgoing longwave radiation, due to man's greenhouse gas emissions.
Uh oh, so CO2 is NOT the control knob for the heat content of the climate? You better not tell the IPCC that.
No, CO2 isn’t the largest influence on our climate.
But its perturbations ARE currently the biggest factor on the PERTURBATION of our climate.
Learn the difference; it's getting tiring to explain this time and time again.