by Judith Curry
And how might these controversies be resolved?
A journalist has asked me the following questions (in bold below; paraphrased by moi). It is very good to see a journalist asking such questions, when the prevailing view is reflected by this recent Guardian article: Kofi Annan: We must challenge climate-change skeptics who deny the facts.
They are very good questions, and if the IPCC were doing its job in the way that I think it should be done, reporters wouldn’t need to ask these questions. Actually the IPCC First Assessment Report (FAR; 1990) did a pretty good job in this regard.
Here is my first quick cut at responding to these questions; for reference, I also include the relevant FAR statements:
What are the most controversial points in climate science related to AGW?
JC’s list:
The two overarching issues:
- Whether the warming since 1950 has been dominated by human causes
- How much the planet will warm in the 21st century
More specific, technical issues that need to be resolved in support of addressing these overarching issues:
- Causes of the 1900-1940 warming, the cooling from 1940-1976, and the recent hiatus in warming since 1998. How are these explained in the context of AGW being the dominant influence since 1950?
- Solar impacts on climate (including indirect effects). What are the magnitudes and nature of the range of physical mechanisms?
- Nature and mechanisms of multi-decadal and century scale natural internal variability. How do these modes of internal variability interact with external forcing, and to what extent are these modes separable from externally forced climate change?
- Deep ocean heat content variations and mechanisms of vertical heat transfer between the surface and deep ocean.
- Sensitivity of the climate system to external forcing, including fast thermodynamic feedbacks (water vapor, clouds, lapse rate).
- Climate dynamics of clouds: Could changes in cloud distribution or optical properties contribute to the global surface temperature hiatus? How do cloud patterns (and TOA and surface radiative fluxes) change with shifts in atmospheric circulation and teleconnection regimes (e.g. AO, NAO, PDO)? How do feedbacks between clouds, surface temperature, and atmospheric thermodynamics/circulations interact with global warming and the atmospheric circulation and teleconnection regimes?
The key areas of scientific uncertainty from the FAR are:
- Clouds: primarily cloud formation, dissipation, and radiative properties, which influence the response of the atmosphere to greenhouse forcing.
- Oceans: the exchange of energy between the ocean and the atmosphere, between the upper layers of the ocean and the deep ocean, and transport within the ocean, all of which control the rate of global climate change and the patterns of regional change.
- Greenhouse gases: quantification of the uptake and release of the greenhouse gases, their chemical reactions in the atmosphere, and how these may be influenced by climate change.
- Polar ice sheets: which affect predictions of sea level.
JC comment about FAR list: Clouds and oceans remain as outstanding issues. Progress has definitely been made regarding greenhouse gases and polar ice sheets, although substantial outstanding issues remain particularly re polar ice sheets.
What is the data that provides the greatest challenge to the dominant view of AGW?
- Global data sets of surface temperature and atmospheric temperature (satellite) that show a hiatus in warming for 16+ years
- Antarctic sea ice data since 1979 (satellite), which shows substantial increase in sea ice extent in recent years
- Global trends in sea level rise, which show values in the 1930s and 1940s were comparable to the recent values
- Recent assessments of lower aerosol forcing lead inescapably to reductions in the estimated upper bound of climate sensitivity.
- (The late 20th century divergence between observed surface temperatures and temperatures derived from tree rings)
JC comment: I could use additional input here, preferably global or hemispheric data sets.
What would provide significant progress in our understanding of the climate system?
The primary need is better data, both in the present/future and in the past:
- Historical data archaeology: historical records from written logs or newspaper articles of arctic sea ice extent, sea surface temperatures, extreme weather events, sea level.
- Better paleoclimate proxies: Information is needed on surface temperature, ocean circulation patterns, extreme weather events, rainfall. Most current paleo proxies in use are inadequately calibrated. More research is needed to calibrate existing proxies and develop new proxies. This research should have a rigorous biogeochemical basis.
- Ocean data: It is critical to maintain and enhance the current ocean observing system, both from satellites and in situ measurements. The deep ocean is a key frontier for understanding natural climate variability.
Major theoretical efforts are needed in a number of areas, including an improved framework for climate sensitivity, the network of atmospheric and oceanic teleconnection patterns, and solar indirect effects.
Regarding climate models, I have argued that the current path of climate model development (higher resolution, more chemistry) is not going to improve the present situation, whereby the climate models are useless for regional and decadal climate variability and are too sensitive to CO2 forcing.
JC note: This is a technical thread; please keep your comments relevant and constructive. I am looking forward to your take on these questions, and I expect to send my responses to the reporter in a few days.
P.S. I just realized that maybe I should have put links to relevant posts for each of these points, but I ran out of time. If you are looking for further details on one of these points, mention it in the comments and I or someone will provide a relevant link.
Judith,
I would have said that there were three overarching issues, the third being the extent to which further warming would be harmful or beneficial, and to where and to whom.
+10
+10, too.
donaitkin | May 4, 2015 at 12:54 am
Excellent point. If significant warming doesn’t produce catastrophe anyhow, then the rest of the points may still be unresolved, but they would cease to be controversial.
Exactly what was in my mind when I first read Don’s comment. If there is no cause for alarm then the whole climate change issue becomes merely academic (as it is rapidly becoming right now) and unworthy of further research funding.
Or rather worthy of academic-scale funding, instead of “crisis save-the-world” hyper-funding that it now receives.
Yup. Major point. Harm vs. benefits of warming. Warming is demonstrably happening most during the winter in higher latitudes. That is beneficial warming. Just ask people who live there if they prefer milder or harsher winters in the future. The earth is greening due to CO2 fertilization and warming. In a sane world that would be considered a good thing.
On the other hand, the modelled future drying in the U.S. Southwest may lead to the collapse of the high altitude ecosystems here. High temperatures and drying not only cause trees such as the piñon and ponderosa to starve, but make the weakened trees vulnerable to insects such as the bark beetle.
So warmer being good or bad depends on where you are and we need to hear from the biologists and economists. The economics of any change in climate will be somewhat specific to location and that overall cost-benefit would have to be considered. I agree that sans economic good vs. harm this becomes a somewhat irrelevant subject, but the whole point of the doom and gloomsayers is the harm aspect, such as rising sea levels impacting the trillions of dollars of development along coastlines, not to mention vulnerability of atoll societies.
In a sane world, we would be asking all the hard questions.
I think it is the tale of two Fargos: 10 degrees F warmer average morning lows in Fargo, ND, in January and February may not bother many people who live there. 10 degrees F warmer average afternoon highs in Fargo, GA, in July and August would make for an unlivable place.
Khal:
Regional downscaling of GCMs exhibits no skill.
Please explain your Southwest US precipitation forecast…
Tom, it doesn’t appear there’s any significant warming happening in the tropics. Instead, existing climate zones are expanding towards the poles, with the exception of existing polar climate zones, which are shrinking.
This IMO is highly likely due to the observed (ARGO) ocean SST ceiling temperature of 30C. At about 28C, strong convective cells that reach the stratosphere become common, forming a heat pump from surface to emission altitude that doesn’t allow the ocean to get any warmer. Land temperatures generally don’t acquire mean annual temperatures higher than the ocean’s. The record mean annual land temperature was set in the 1960s in Ethiopia’s tropical salt desert, at 35C.
There are no skillful regional climate models. Global models don’t have much skill either, running much hotter than observations. In general they project too much warming in low latitudes and too much in high northern latitudes. The American southwest has experienced much more severe droughts in prehistoric times than the current one, so what’s happening now is not outside the range of natural variability in any way.
Correction: GCMs project too little warming in high northern latitudes, too much in the tropics.
I agree that the issue of harms vs. benefits is difficult to quantify based on available information. It would seem necessary to have much more reliable information regarding the sea level rise expected in the next 50 years.
donaitkin: the third being the extent to which further warming would be harmful or beneficial, and to where and to whom.
I agree.
And a part of the assessment would be a thorough review of the evidence concerning: has the combination of increased CO2, increased global mean temperature, and increased global rainfall produced any demonstrable net harm?
Food for thought. Many influences to be confirmed, and many magnitudes to be quantified.
Svalgaard, Dr. Leif. “Reconstruction-Solar-EUV-Flux.pdf.” Scientific, September 23, 2014.
http://www.leif.org/research/Reconstruction-Solar-EUV-Flux.pdf
“Conclusions (pg 43)
• We can reconstruct with confidence the solar EUV flux [and its proxy F10.7] back to 1840
• The reconstructed EUV flux confirms the discontinuities in the Sunspot Records reported by Clette et al., 2014
• There is more geomagnetic data earlier than 1840, and it now seems important to acquire and process the earlier data.
• The EUV flux is concordant with the revised Sunspot Number and the Solar Wind Magnetic Flux
• There is no Modern Grand Solar Maximum
• Some of this may still be controversial. Aggressive and serious opposition is welcome”
Note that in Dr. Lean’s graph of TSI, it is GCR and EUV that are highly variable.
Lean, Dr. Judith. “Solar Spectrum, Variability, and Atmospheric Absorption.” Scientific. NASA – Science@NASA, April 6, 2011. http://science.nasa.gov/headlines/images/sunbathing/sunspectrum.htm
“Note definition as integral over entire spectrum.
Note concession that extreme UV and x-ray variation > 1%.
If these are absorbed by atmosphere, what happens to their energy?
This image, courtesy of Dr. Judith Lean at the US Naval Research Laboratory, shows the spectrum of solar radiation from 10 to 100,000 nm (dark blue), its variability between Solar Maximum and Solar Minimum (green) and the relative transparency of Earth’s atmosphere at sea level (light blue). At wavelengths shorter than about 300 nm, there is a relatively large variation in the Sun’s extreme UV and x-ray output (greater than 1%), but the Earth’s atmosphere is nearly opaque at those wavelengths. For Earth-dwelling beach-goers there is no significant difference between Solar Max and solar minimum.”
Note step-wise spectral irradiance below 10^2 nm. Sparse data?
Ms. Pooh, I was wondering what is the impact of the UV variation on the atmosphere? Eyeballing that graph I would get about 0.1 watts/m2 for the visible spectrum, plus say another 0.1 watts per m2 for the UV spectrum? That’s a lot of variability when the heat uptake is 0.5 to 0.6 watts/m2.
Yup, I’ve been harping on this for years. EUV and more energetic solar shortwave above visible vary by as much as 50% in relative power through a sunspot cycle. These energetic bands are all absorbed in the stratosphere and greatly affect its chemistry. Over an 11-year cycle the effects don’t persist long enough to make marked climate changes, but what of solar cycles that persist for many decades or centuries behind the phenomena known as solar grand maxima and minima? Do the power changes in high energy bands then have time to accumulate marked effects through changes in stratospheric chemistry?
The effect of EUV is interesting. I suspect that it is not the direct effect of GCR and EUV that drives change, but rather the knock-on energy effects on atmospheric circulation. I haven’t had the time to trace it through, nor do I have the scientific background to justify conclusions. Something like this for “Cooler Temperatures”?
Solar Cycle Minimum -> Corona -> less EUV -> Energy -> Arctic Pole -> Ozone/Energy -> Polar Vortex -> Circumpolar Winds -> Slippage of Polar Vortex -> Meridional Jet Stream -> Atmospheric Winds -> Descent of Arctic Air
Perhaps Solar Max could restore the vortex to the arctic pole, resulting in a longitudinal jet stream, blocking intrusion of Arctic Air.
Quite frankly, I don’t know. Pooh is a bear of very little brain.
Here is something that may be helpful and could be added (or did I miss it? – If so, please provide link.):
Normalize ALL climate forcing factors to a single scale and see how they compare.
That might tell us what to focus on.
George Devries Klein, PhD, PG, FGSA
The FAR list of the IPCC in 1990 does provide many good discussion points that have as yet not been adequately explored, due to the field’s infiltration and politicisation by ideologists.
The most controversial aspects of climate science related to AGW should IMO cover the issue of using global parameters for policy purposes when climate is essentially a regional phenomenon with specific risks pertaining to each of those regions.
The greatest challenge to the dominant view of AGW is the lack of sufficient reliable data, both spatial and temporal, that would be needed to determine the domain population PDFs required to provide better estimates of the error bounds of current climate projections used for policy purposes.
Significant progress in our understanding of the climate system would seem to hinge on researchers taking a more meteorological approach to the study of regional climate trends. Improvement of shorter-term weather forecasting would also be most helpful for vulnerable communities adapting to climate change affecting their regions.
Finally, research efforts in climate science to date have fixated on CO2 levels as the main cause of AGW, to the exclusion of many promising areas of research, such as the cause and effect of changes in clouds, wind and ocean currents on our climate.
Judy: I take exception to your definition of “climate science” as the “natural science of the climate system”.
Other, equally controversial points include:
– how serious are the impacts of climate change?
– how important is climate change relative to other changes?
– how serious are the impacts of climate policy?
– how to design a climate policy that is effective, cost-effective and equitable?
– how to trade-off the impacts of climate change against the impacts of climate policy?
What is so wrong / offensive about “natural science of the climate system”?
I get what you’re saying here, Richard, but I think that Dr. Curry has it more or less correct. We should focus here on the appropriately scientific issues, not the policy issues. Despite many CAGW protestations to the contrary, I don’t believe climate science has nearly as much to contribute to policy discussions as it thinks it does.
That said, the only one of your questions that is really addressable in a scientific way is the first: “how serious are the impacts of climate change?”
Even that question is poorly-posed, because determining the seriousness of the total impact of any climate change would require input from quite literally hundreds of disciplines. Indeed, the simplistic thinking that the results of climate change can be evaluated on a simple good-bad scale is part of what has led to the current situation in climate science. The notion that the current state of the climate is optimal, so that any change will be for the worse, is anti-scientific on its face. Yet it is consistently articulated by the most vocal climate scientists.
In other words, these big-picture, policy-based questions are not very helpful in putting climate science back on the track to being a respectable discipline.
The distinction between climate science (and expertise in it) and good public policy to respond to climatic conditions (and expertise in that) is well worth thinking about.
You don’t see seismologists saying they have the expertise to set building codes. They acknowledge all the other skills that need to be involved. So why would people who are good at running and tweaking climate models (for example) think they have the expertise required to determine emissions policy?
In terms of the request from the reporter, I think the focus was on WG1 issues.
I’d go with the focus on WG1 but mention in an early sentence the many impact and mitigation-vs-adaptation questions.
I can’t see in Judith’s post where climate science had been defined as described but agree that the impacts of climate change and climate policy proposals have been given scant attention.
Richard, I wouldn’t put other fields UNDER a Climate Science umbrella. Climate scientists are simply collaborators in the overarching Dynamic Systems Analysis. Such an analysis includes a bunch of fields.
Last night I wrote a comment for Tamsin Edwards to digest, which suggested she start hitting the books so she could understand a little bit more about the dynamic system components. I realize you understand the subject, and I think it’s important to make it clear this is a very very very complex problem.
Most climate scientists don’t grasp this idea, they start pushing for moves a, b, c, etc without having any idea whatsoever of the consequences. The ignopedists at SKS and climate gurus like Dr. Mann will have to be largely ignored as long as they don’t step out of their “climate science” milieu.
I agree with Doctor Tol. There would be no controversy without supposed net negative effects of global warming. Climate science is purely academic w/o hyperbolic claims of imminent doom.
Justifying extraordinary remedies requires extraordinary hype.
Hi Richard, I think this particular reporter was focused on the actual WG1 questions. But I agree, we wouldn’t care about those if climate change isn’t ‘dangerous’
Dr. Curry is correct if you are using “climate science” in the sense of the normal definition of the term. It is the consensus that had blurred the definition of science (and climate for that matter) to include anything they want it to.
Other than the first bullet point, none of those other issues are science at all. They are cost benefit/policy questions. And if the first point refers to economic impacts, that is not science either.
Now if you want to define the term “climate debate”, that would necessarily include all the listed issues. But that does not make them science.
I raised five research questions. A number of people responded that “this is policy, not science”. That is a mistaken response, for two reasons.
First, the first four questions have large, positive components. What happens to wheat yields in Africa if the world were warmer? What happens to wheat yields in Africa if fertilizers were applied? Which effect is larger? What happens to the cost of transport if we apply a carbon tax on fuels? And what if we apply a biofuel mandate? Which effect is larger, for emissions and for costs?
Second, while you may be uncomfortable with the normative elements of these questions, dismissing them as “policy” suggests that there is no reasoned and informed debate possible.
By the way, climate science (narrowly defined) would not be controversial if the impacts of climate change were small OR IF effective climate policy were cheap.
I agree. The whole AGW, CAGW and climate change debate is about risk – i.e. consequence of an event or condition x the probability of that consequence or condition occurring. If there is no serious consequence, then we need to redirect the climate research funds and researchers to more important issues (as Bjorn Lomborg has been pointing out for 17 years or so).
If GHG emissions are reducing the risk of a seriously damaging abrupt cooling event, we need to know this.
So, the most important thing we need to know is the consequences of human GHG emissions, in terms of damages and benefits (by region).
Richard, Peter, agree. This is a major issue not because of the science per se but because of the massive, costly impact of proposed and implemented policy responses. My reply to Steve Pruett below touches on this. The critical science questions are those for which answers are needed to guide good policy, with human welfare being the highest consideration.
For what it’s worth, I agree with Richard Tol’s comment about WG2 issues. I haven’t written about this topic, but survey data that I’ve examined shows that the divide between “skeptics” and warmists is much, much sharper on WG2 issues as to whether emissions have had and will have “serious” “negative” impacts on climate, whereas differences on climate sensitivity between lukewarmers and low-end IPCC supporters are not nearly as sharp.
For example, most lukewarmers take the position that impacts to date have not been “serious” and “negative”, whereas virtually 100% of warmists believe that there have already been “serious negative” impacts on climate.
One commenter above argues that this issue should be set aside because “We should focus here on the appropriately scientific issues, not the policy issues.”
Analysis of whether impacts to date have been “serious negative” undoubtedly involves value judgements, but it is not “policy” either. It seems to me that it ought to be possible to objectively analyze whether impacts to date have been “serious” “negative” and tease out more precisely where the disagreements lie and that this ought to be a front-and-center topic. I’ve been looking in particular recently at analyses purporting to show that increased temperatures have already had a negative impact on crop yields. It appears to me that the statistical analysis purporting to demonstrate this is fatally flawed and sheds no light on the matter – a topic that I hope to address in the next couple of months.
For what it’s worth, I’ve spent a fair amount of time looking at the literature for WGII issues, and my impression is that a not insignificant portion of it is seriously flawed, or even completely wrong. I’ve only posted about the WGII’s handling of the issue of economic damages, but there are tons of other problems. Even when the IPCC did a fair job summarizing the existing research, there is no assurance of quality, as much of the existing research has serious problems.
Can you claim that there is an understanding of AGW absent a reliable and validated model that can make useful predictions 1, 5, and 10 years into the future?
We have a 55+ year track record of increasing environmental absorption (with respect to either time or atmospheric concentration) that:
1. Is accelerating.
2. Is over 55% of emissions
3. Is increasing over twice as fast as emissions (last 30 years)
4. Converges with maximum potential emissions below 500 PPM.
That makes a 500 PPM maximum limit a pretty safe prediction.
Is “environment absorption” supposed to mean something?
The flora and I would prefer a level of 800-1600 ppm, but it’s unfortunately unlikely.
Is “environment absorption” supposed to mean something?
Well, yeah…
1. Carbon is CO2 divided by 3.67
2. 5.1-5.6 GT of carbon (over 55% of emissions) are being absorbed by the ocean or land every year.
3. The net environmental land/ocean annual absorption of carbon (NELOAAOC) has increased by 3.5% per year for the last 30 years.
4. Prior to 1985 NELOAAOC was increasing at 3.0% per year (the trend is accelerating).
5. The anthropogenic CO2 emissions have increased 2% per year for the last 30 years.
6. In 20-40 years depending on scenario NELOAAOC will exceed emissions.
7. There can’t be an increase in the atmospheric CO2 concentration in PPM if the net change in atmospheric CO2 is negative.
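The extrapolation in points 3-7 above can be sketched in a few lines. The starting values below are illustrative assumptions taken from the list (sinks absorbing ~55% of emissions, growing ~3.5%/yr versus ~2%/yr for emissions), not measured data:

```python
import math

# Rough extrapolation of the numbers quoted above (illustrative, not a forecast):
# sinks currently absorb ~55% of emissions; absorption grows ~3.5%/yr,
# while emissions grow ~2%/yr. Find when absorption would equal emissions.
absorbed_fraction = 0.55   # assumed current sink fraction of emissions
g_abs = 1.035              # assumed annual growth factor of absorption
g_emit = 1.02              # assumed annual growth factor of emissions

# Solve absorbed_fraction * (g_abs / g_emit)**t = 1 for t:
t_cross = math.log(1 / absorbed_fraction) / math.log(g_abs / g_emit)
print(f"absorption overtakes emissions in ~{t_cross:.0f} years")  # ~41 years
```

At these assumed rates the crossover lands near the top of the 20-40 year range quoted above; faster emissions growth or slower sink growth pushes it later.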
“absent a reliable and validated model that can make useful predictions 1, 5, and 10 years into the future?”
Exactly!
Or at least present that magical mathematical proof that everyone seems to be hiding, which shows that a model with 100% error at 3 months still produces magically accurate averages 100 years into the future.
I swear, if I hear one more environmentalist who has never written a solver in their life extol the virtues of climate models I will…. well, be just as cynical as I already am…
Global sunshine variation due to aerosols: solar dimming is now changing to solar brightening, due to clean air acts in some parts of the globe. This is causing additional warming since 1980 in the NH. It is a fact often not pointed out.
How is it that CO2 changes are presumed to have been confirmed as dominating over changes in surface incident solar radiation (via variations in clouds and aerosols) as *the* cause of changes in ocean heat content when (a) the heat trapped by greenhouse gases cannot penetrate past the paper-thin ocean “skin” layer, (b) solar heat energy *can* penetrate past the skin layer and into the surface waters by several meters, (c) a CO2-induced ocean heat content change has never been subjected to empirical tests or experiments with real observational evidence, and (d) the one known experiment (using variations in cloud cover to simulate greenhouse gas forcing) revealed that the heat trapped in the skin layer may only change by ~0.002 K?
—–
http://www.skepticalscience.com/How-Increasing-Carbon-Dioxide-Heats-The-Ocean.html
“Sunlight penetrating the surface of the oceans is responsible for warming of the surface layers. … Greenhouse gases, such as carbon dioxide, trap heat in the atmosphere and direct part of this back toward the surface. This heat cannot penetrate into the ocean itself, but it does warm the cool skin layer [“0.1 to 1 mm thick on average”], and the level of this warming ultimately controls the temperature gradient in the [skin] layer. …. It should be pointed out here, that the amount of change in downward heat radiation from changes in cloud cover in the experiment, are far greater than the gradual change in warming provided by human greenhouse gas emissions.”
—–
http://www.realclimate.org/index.php/archives/2006/09/why-greenhouse-gases-heat-the-ocean/
“Of course the range of net infrared forcing caused by changing cloud conditions (~100W/m2) is much greater than that caused by increasing levels of greenhouse gases (e.g. doubling pre-industrial CO2 levels will increase the net forcing by ~4W/m2)”
—–
“There is an associated reduction in the difference between the 5 cm and the skin temperatures [in the experiment]. The slope of the relationship is 0.002ºK (W/m2)-1.
Kenneth: the atmosphere doesn’t “know” whether the source of its energy content is caused by x, y, or z effect. This means the ocean can absorb heat from an air column sitting right above it. It also means the ocean can have a slight difficulty transferring heat to the atmosphere. Water absorbs the visible spectrum and ultraviolet, which warms it, but if the air sitting above it is slightly warmer, then the heat transfer mechanism is changed.
http://oceandatacenter.ucsc.edu/home/photos/data_sm.png
Photosynthesis drives down the CO2 level during the day so any effect is mostly a nighttime effect.
Dr. Curry: reference this comment: “Deep ocean heat content variations and mechanisms of vertical heat transfer”. I was wondering, how do the models account for geothermal energy? I understand the overall heat flux is about 0.08 to 0.1 watts/m2?
The models should account for this flux, and its geographic distribution. It’s much more intense over oceanic ridges.
I visualize the phenomenon to be a constant flow of energy from the sea floor towards the deepest ocean layers. This in turn sets up a “steady state” condition which transfers the energy upwards in the water column. I haven’t worked out all the ramifications, but it seems to me this ought to be studied in detail.
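As a back-of-envelope check (my own numbers, not from the comment), a steady geothermal flux of ~0.1 W/m2 into a 1000 m bottom layer, with none of the heat mixing out, gives only a small trend:

```python
# Sketch: warming rate from a steady geothermal flux into a deep-ocean layer,
# assuming (unrealistically) that none of the heat escapes the layer.
SECONDS_PER_YEAR = 3.156e7
flux = 0.1        # W/m^2, assumed geothermal heat flux
depth = 1000.0    # m, assumed thickness of the receiving layer
rho = 1025.0      # kg/m^3, typical seawater density
cp = 3990.0       # J/(kg K), typical seawater specific heat

# Energy in per year divided by the layer's heat capacity per unit area:
dT_per_year = flux * SECONDS_PER_YEAR / (rho * cp * depth)
print(f"~{dT_per_year * 100:.2f} K per century")
```

Small compared with surface forcings, but since the flux is concentrated over ridges it could still matter locally for deep circulation, which seems to be the commenter’s point.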
Eventually they do need to take deep water temperatures over sample areas with high, medium, and low intensity flux to get real data, as well as conveyor belt cross sections, to make sure their ocean temperature reanalysis is tied to reality.
As far as I know, geothermal energy isn’t considered at all. Nor are the multitude of potential effects from underwater volcanoes
Dr Curry: I consider volcanoes a part of the geothermal flux. I’m used to working with gridded geophysical products, and I think they could take the Deep Sea Drilling Project and oil company borehole temperature records, together with sea floor drop core data, to make a heat flux model. This could be put in as a sea floor bottom layer which contributes heat from the bottom. It seems really straightforward to set up a fine scale grid near the sea floor and use that in GCMs, as well as in reanalysis products. I bet Berkeley Earth would love this type of project.
Fernando,
In case you’d not seen this recent event. Links through to more info at bottom. The sea floor dropped quite a bit. Presumably it released some warmth, maybe CO2? Connection to “the blob”? Previous eruptions in 1998 and 2011. Fascinating. http://www.nbcnews.com/science/science-news/there-she-blows-underwater-volcano-may-be-erupting-oregon-n352226
Which would include various fertilization effects through upwards transport of deep nutrients by warmed water(s). Which, in turn, could have knock-on effects through DMS emissions, changes to light absorption depth, changes to evaporation via changes to the nature and extent of natural surfactants, etc.
The selective omission of data for models is an approach well known to thrifty Australian bakeries: it’s called leaving the meat out of the meat pie.
There are differences. The pie maker omits knowingly and with calculation, the modeller simply omits what is too hard to obtain or too awkward to fit.
The result is something which looks convincingly like a meat pie or a climate…but just isn’t.
Fernando,
It seems to escape people’s notice that a body losing energy is cooling, by definition.
The Earth is losing heat at a measurable rate. Therefore it has a calculable black body temperature, which just happens to be close to the 33C used by the consensus to prove the GHE.
As Dr Curry says, it appears that climate scientists assume an Earth created cold, and warmed since. John Tyndall and Joseph Fourier would no doubt be vastly amused.
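For context, the 33C referenced here is the standard greenhouse-effect bookkeeping: the effective blackbody emission temperature implied by absorbed sunlight, compared with the observed surface mean. A minimal sketch using textbook values (not figures from this thread):

```python
# Standard effective-temperature calculation (textbook values, assumed).
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
S0 = 1361.0              # solar constant, W/m^2
ALBEDO = 0.30            # Earth's Bond albedo (assumed)

absorbed = S0 * (1 - ALBEDO) / 4      # sphere-averaged absorbed flux, W/m^2
T_eff = (absorbed / SIGMA) ** 0.25    # effective emission temperature, K
T_surface = 288.0                     # observed global mean surface temperature, K
print(f"T_eff = {T_eff:.0f} K, greenhouse difference = {T_surface - T_eff:.0f} K")
```

The ~255 K effective temperature versus a ~288 K surface mean is where the oft-quoted 33 K (33C) difference comes from; whether that bookkeeping is the right framing is exactly what this sub-thread disputes.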
Hi, Judith, I have had an interest in 2-phase fluid flow for decades. 2-phase fluid flow (even in 5 mm airlift tubing, with air moving plugs of water upwards) is not fully understood. But I have not seen anyone trying to view or treat a cloud as a massive area of 2-phase flow.

First, simply, imagine the cloud as water droplets falling down through rising air going up. This is a heat exchanger! The water in the droplets going down is transferring its energy to the saturated air going up. When the air exits the cloud at top, what happens? This air is saturated with H2O and warm, and no longer has to negotiate the heavy water droplets that were slowing its upward motion, so it is at very low pressure, and it accelerates! Accelerating low-pressure saturated air going up means condensation ABOVE the cloud. Does this sound like Makarieva and Gorshkov’s cloud-as-implosion theory, or is it somewhat different?

I think seeing the cloud as an imperfect heat exchanger and condenser, enclosed in its own “cloud droplet thermal blanket”, will aid everybody in understanding what is truly going on with cumulus clouds as they grow. I think the cloud is growing at top and at bottom. The cloud acting as a barrier, a heat exchanger, and a condenser might help move the conversation closer to reality. At bottom, the cooler, larger droplets falling down are increasing the pressure there, and this is probably pulling the bottom of the cloud down, both by the temperature and physically by the interactions of the droplets with the air particles.
Well said, Brian.
Brian, thanks for raising this issue; I have been trying to push this idea also. See this previous post:
http://judithcurry.com/2014/06/25/model-structural-uncertainty-are-gcms-the-best-tools/
They seek it here,
they seek it there,
in ocean deep,
in hot spot sweet,
that demned elusive
‘missing’ heat. *
*Modellers in towers
can’t quite seem ter
manage clouds
through the iris
into space
be it gone
without a trace
:)
Pingback: Judith Curry: What Are The Most Controversial Points In Climate Science? | The Global Warming Policy Forum (GWPF)
1. The new version 6 of UAH brings it much closer to the RSS results, so both satellite data sets agree on a lower rate of warming (1.1-1.2 degrees C per century) than the surface temperature data sets which depend on poorly sited temperature stations on land and sparse sampling in the ocean.
2. The missing tropospheric "hot spot"
3. The fact that models, if absolute temps are considered, differ from each other by about 3 C, an absolutely huge amount considering the tiny increase since 1880 of 0.8 C.
4. Inability of models to predict the pause
The most controversial, but hardly recognised as such, point in climate science is the Bern Model, which models a strong saturation in the CO2 uptake from the atmosphere in the coming decades. This alleged saturation is the cornerstone of the hype and the key element of the RCP scenario set.
What is the data that provides the greatest challenge to the dominant view of AGW?
1. The temperature and CO2 graphs in Al Gore’s “An Inconvenient Truth” are still a major problem. They show that in each cycle temperature decreases rapidly while CO2 is at maximum, and that temperatures rise while CO2 is at minimum. And for good measure, at intermediate concentrations of CO2 temperatures sometimes rise and sometimes fall. IOW that CO2 was absolutely not the dominant driver of climate back then, so it is highly questionable that it can be the dominant driver now.
2. The tropical troposphere had to warm more than the tropical surface during the ~1970-2000 period of global warming, if that warming was mainly caused by CO2. It didn’t. So something else was the main cause of the warming.
I would argue that you need to define controversial a bit more carefully. I don't think that either of these is particularly scientifically controversial.
As I understand it, a vast majority of active climate scientists regard anthropogenic influences as having dominated since 1950 and, by and large, the IPCC warming projections are regarded as robust. Yes, there are some dissenting voices and, yes, there are some who are presenting results that suggest warming by 2100 may be less than the IPCC projections suggest, but the IPCC ranges are quite broad anyway, and so the differences aren't all that great. This doesn't make these points controversial, though.
To me, these are controversial in the media and on blogs, but not all that controversial within the scientific community.
I guess the question to answer is “if the vast majority of active climate scientists regard anthropogenic influences as having dominated since 1950, what is the experimental evidence they would produce to justify that view”?
Looking at the question from a statistical POV (another part of the great tapestry that is science) the attribution is very difficult to sustain on any of the datasets currently available as far as I am aware. But I’m open to being shown otherwise.
HAS,
I’m not quite sure what you’re getting at, but you seem to be suggesting that we need some kind of observation (data) that convincingly shows that it’s anthropogenic, and that we don’t have that. Well, I don’t think that’s possible. Doing these attribution studies requires models. There is no other way. In fact, the more-than-50%-anthropogenic attribution study actually works by trying to see if it is possible for more than 50% of the observed warming to be natural/non-anthropogenic. The result is that there is a less than 5% chance of this being the case, hence they reject the hypothesis that more than 50% of the warming since 1950 was non-anthropogenic.
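For intuition only, the shape of that rejection test can be sketched with a toy Monte Carlo in which internal variability is a simple AR(1) process. Every number below (observed trend, persistence, noise level) is an assumption made for the sketch, not a value from any published attribution study:

```python
import numpy as np

# Toy illustration of the rejection-test logic: how often does simulated
# internal variability alone produce a 60-year trend exceeding half the
# observed trend? All parameter values are illustrative assumptions.
rng = np.random.default_rng(0)

observed_trend = 0.11   # deg C per decade, rough 1950-2010 surface trend
ar1_coeff = 0.6         # year-to-year persistence of internal variability
noise_sd = 0.1          # deg C, annual internal-variability noise
n_years = 60
n_sims = 10_000

years = np.arange(n_years)
count = 0
for _ in range(n_sims):
    # Simulate internal variability as an AR(1) process
    x = np.zeros(n_years)
    for t in range(1, n_years):
        x[t] = ar1_coeff * x[t - 1] + rng.normal(0, noise_sd)
    trend = np.polyfit(years, x, 1)[0] * 10  # deg C per decade
    if trend > 0.5 * observed_trend:
        count += 1

# With these settings the fraction comes out well under 5%
print(f"P(natural trend > half the observed trend) ~ {count / n_sims:.4f}")
```

If the assumed persistence or noise level were made much larger, the simulated fraction would grow and the rejection would weaken, which is essentially where the disagreement in this thread lies.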
I’m not sure what you’re implying by this. Datasets tell us nothing of the physical processes that underlie the system being observed. If you’re rejecting the idea that we can understand attribution using models, then you’re essentially arguing for a world in which we choose not to understand anything. Without a model you can make no statement about whether it is natural or anthropogenic.
You start with data and then you build models, and you test those models independently against the data. I’m unaware of any attribution study that shows, under those criteria, that there is a less than 5% chance that more than 50% of the observed warming is natural/non-anthropogenic.
But as I said I’m open to being shown otherwise.
In a sense this is what is done. I get the impression that you’re suggesting that the models are somehow being unduly influenced by the data and that, hence, the study isn’t independent. I don’t think this is quite correct. It may be true that some models have been rejected on the basis of being large outliers, but the fundamentals of the models are physics, not a best fit to known data.
Nothing that complicated. I’m just saying that the attribution studies don’t support the conclusions when tested against the datasets they are meant to be modelling. Have a look at Table 10.1, line 2 (AR5 WG1), which makes it clear this assertion is based on multiple studies using the CMIP5 models. These models fail to model natural variation in-sample on the timescales on which attribution is being claimed, and are similarly proving to be inadequate out of sample on similar timescales.
So to return to your original point these are very controversial as representing suitable methods for attribution.
It may be true that some models have been rejected on the basis of being large outliers, but the fundamentals of the models are physics, not a best fit to known data.
Here’s a lineup from the AR5:
http://www.ipcc.ch/report/graphics/images/Assessment%20Reports/AR5%20-%20WG1/Chapter%2012/Fig12-09.jpg
So, with deference to your moniker, all the models had better have the same physics. Why so much variance? Evidently, the non-linearity of the physical equations matters a lot.
Which model is most accurate? and on what basis would you make that assumption?
HAS:
You start with data and then you build models, and you test those models independently against the data. I’m unaware of any attribution study that shows, under those criteria, that there is a less than 5% chance that more than 50% of the observed warming is natural/non-anthropogenic.
Then we appear to have a ‘consensus’, at least between you, me and ATTP, that this is the right approach. :-)
I would add:
1) It makes no sense to talk about the evidence for/against a single model on its own (which is what a p-value attempts to do). You must always compare two or more models.
2) The maths can only tell you about models you have thought of, developed and plugged in to the equations. If you did not model the correct explanation, statistics can’t tell you anything about it.
The bare minimum models we need to consider to explain the temperature record are (a) anthropogenic warming and (b) natural variation. (Realistically we should also consider (c) a combination of the two, and ask what is the most probable magnitude of each).
Now, ‘consensus’ climate scientists often refer to natural variation as ‘noise’, which is revealing. Clearly a simple noise process with independent yearly variations about a constant ‘true’ temperature is unlikely to produce the observed recent temperature record: much less likely than a model that specifies a rising trend, plus noise.
So it is easy, if you characterise (b) as ‘noise’ to show that (b) is less likely than (a). But the problem here is that a sixteen year hiatus due to ‘noise’ is itself very improbable under (a). (This is the only use for a p-value: if it is very small, you might want to start looking for a better model).
An alternative model (say b’) has decadal and multi-decadal natural variations of a similar magnitude to the postulated anthropogenic warming. Then we may be seeing them reinforcing during the 80’s and 90’s, and cancelling during the 00’s and 10’s.
It seems obvious (b’) is going to do better than (b). It also seems pretty obvious to me that the maximum likelihood version of (c) would have roughly equal magnitudes of (a) and (b’). And that makes it seem highly implausible, to me, that you can put a high probability on “more than 50%” of the warming being anthropogenic. But that depends on the details, and plugging in the numbers, and I’m not a climate scientist.
However, I have not seen any references to a study done in this way. The IPCC, in making its attribution statements, falls back on ‘expert opinion’. Until I see this done properly, my ‘non-expert’ opinion is that about 50% of the warming in the late 20th century was anthropogenic, and that climate models overestimate climate sensitivity by a factor of two.
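The point that you must always compare two or more models can be made concrete with a toy likelihood (AIC) comparison between a trend-plus-noise model, like (a), and a noise-about-a-constant model, like (b). The synthetic series and both candidate models below are invented for the illustration; nothing here is fitted to real temperature data:

```python
import numpy as np

# Toy model comparison on a synthetic series (purely illustrative numbers)
rng = np.random.default_rng(1)

n = 65
t = np.arange(n)
# trend + slow oscillation + noise, standing in for a 'temperature' record
y = 0.01 * t + 0.15 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 0.08, n)

def loglik_and_aic(resid, k):
    """Gaussian log-likelihood of residuals; k = number of free parameters."""
    s2 = resid.var()  # maximum-likelihood variance estimate
    ll = -0.5 * n * (np.log(2 * np.pi * s2) + 1)
    return ll, 2 * k - 2 * ll

# Model (a): linear trend plus white noise (slope, intercept, variance)
resid_a = y - np.polyval(np.polyfit(t, y, 1), t)
ll_a, aic_a = loglik_and_aic(resid_a, 3)

# Model (b): white noise about a constant (mean, variance)
resid_b = y - y.mean()
ll_b, aic_b = loglik_and_aic(resid_b, 2)

print(f"AIC, trend model:    {aic_a:.1f}")
print(f"AIC, constant model: {aic_b:.1f}")  # lower AIC = preferred
```

Adding a third candidate with persistent decadal variation, like (b') above, is the natural next step, and is exactly where the comparison becomes sensitive to the assumed magnitude of natural variability.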
When I see this done properly, we will see that 100% of the warming was caused by the same natural variability that caused the Roman and Medieval warmings. There is no real data that proves this to be wrong.
What do you think worked before that stopped working, such that now we need manmade CO2 to cause warming that is just like all the warming periods in the past ten thousand years?
This warming is the warming phase of a natural cycle that has worked the same for ten thousand years.
Except that this is still consistent with it being controversial on blogs and in the media, and not within the scientific community. It is probably true that the level of natural/internal variability differs across the models. However, on sufficiently long timescales (decades) I think they all agree that the influence of natural variability is small. So, it is very difficult (if not impossible) to develop a physically plausible model that would show that more than 50% of the warming since 1950 was non-anthropogenic.
It is very easy to produce a physically plausible model (based on fit to the observations) that would show more than 50% of the warming since 1950 was non-anthropogenic.
On the other hand, it is very hard to find a CMIP5 model (based on fit to the observations) that shows less than 50% of the warming since 1950 is anthropogenic. However, in this case attribution is typically just the difference between model runs with and without certain inputs. What the CMIP5 attribution studies show is just that this class of model requires non-anthropogenic inputs to project 50% of the warming since 1950.
Ironically, it is the fact that they can’t project lower anthropogenic warming than 50% that is their downfall.
The problem is that the CMIP5 models don’t fit the data (in or out of sample), and therefore just aren’t plausible (whether in physics or otherwise). There may be a small (diminishing) group of scientists still committed to these models, but that doesn’t mean they are not controversial in the wider body of science.
If it’s based on a fit to the observations, then I don’t think it is necessarily physically plausible. However, if you could produce a physically plausible model that shows that more than 50% of the warming since 1950 was non-anthropogenic, I’d like to see it.
Net negative feedback from clouds. Just because Ken Rice is ignorant of physical explanations for GCM incompetence doesn’t mean none exist. Ken Rice is the new denier… 18 years without significant lower troposphere warming despite 10% increase in atmospheric CO2 is, to any objective observer, compelling evidence that atmospheric CO2 isn’t the governing factor (boogeyman) it was made out to be by the manufactured climate consensus.
However, if you could produce a physically plausible model that shows that more than 50% of the warming since 1950 was non-anthropogenic, I’d like to see it.
It’s very simple – albedo uncertainty is much greater than modeled greenhouse forcing. So of course it’s plausible that 100% of the warming is non-anthropogenic. I don’t think that’s likely, but it’s certainly plausible.
However, on sufficiently long timescales (decades) I think they all agree that the influence of natural variability is small.
Look at the GISP proxy:
http://www.ig.utexas.edu/people/staff/charles/images/gisp2_d18o_small.jpg
See all the spikes (+/- 2 C) throughout the Holocene?
Now, that’s proxy data and it’s not particularly clear how wide a region that pertains to.
But if by small you mean +/- 2 C, then yes, natural variability is small.
It can’t simply be albedo uncertainty, it would have to be albedo changing and what would cause that?
It can’t simply be albedo uncertainty, it would have to be albedo changing and what would cause that?
Remember the Monty Python bit about the doctor using explosives for medical treatment?
“Now, many of the medical profession are sceptical about my work. They point to my record of treatment of athlete’s foot sufferers – eighty-four dead, sixty-five severely wounded and twelve missing believed cured.”
That’s what albedo is like: trends and values missing, presumed constant.
Of course it could be just albedo, because we don’t know precisely what planetary albedo is or how it has varied. Just because one doesn’t know or understand a phenomenon doesn’t mean it doesn’t exist. Clouds are responsible for most of albedo change and clouds are quite transitory. Apparently, ENSO fluctuation changes the areas of clear and cloudy which can influence temperature. Further, the CCN ( cloud condensation nuclei ) postulate of the cosmic ray school is also a possibility. If this line of reasoning is correct, we should be seeing a slowdown in temperature increases concomitant with the recent solar cycle slowdown ( perhaps that’s what the ‘pause’ is ). Cycle 25 is modeled to be the really low one, so this will be testable over the next two decades ( resumption of warming would obviate cosmic CCN effect, cooling or continued pause might corroborate it ).
Now, that’s not what my fill in the blank opinion is. I tend to believe that most of the RF modeled to occur with increased GHGs is realized in the form of heat gain, though perhaps not all of the positive feedbacks are.
aTTP
“If it’s based on a fit to the observations, then I don’t think it is necessarily physically plausible.”
If it’s based on a fit to the observations that performs out of sample, you’d probably start to think otherwise.
However, back to your claim that 50% attribution is non-controversial. You missed my ironic comment. The CMIP5 models themselves are presumably deemed to be physically plausible. On the one hand they do not reliably reproduce the near-term climate out of sample, but at the same time they are deemed good enough to diagnose the level of non-anthropogenic forcing required to model that climate.
HAS,
Unless you can explain what the underlying physical process is, then I seriously doubt it.
I still think you’re missing my point that the controversy exists mainly in the media and on blogs, and not within the scientific community. That might tell you something.
ATTP writes- “my point that the controversy exists mainly in the media and on blogs, and not within the scientific community.”
ATTP/Ken – your bias is showing. There is no consensus that the majority of warming since 1950 is due to humans. If a majority of those self-identified as scientists studying the issue have stated that as their position, I’d ask the basis for that conclusion. What reliable information has led them to that conclusion?
Please state how you reached that conclusion.
Rob,
Your lack of reading comprehension is showing. If you want me to defend something I’ve said, it would be best to ensure that what you’re asking me to defend is what I actually said, not what you think I said.
ATTP: try weakening aerosol influence, this yields a negative feedback elsewhere in the system. Or maybe you just need a neural grid to learn using data and have it produce 50 alternate model realizations. Do you guys do AI? We use it to guide the model structure development.
aTTP said
“Unless you can explain what the underlying physical process is, then I seriously doubt it.”, in response to my suggestion that he would start to concede as plausible models that fit the observations and perform out of sample.
I very much doubt that anyone faced with a model that fit the observations and was performing out of sample would describe the model as “physically implausible” with or without attribution to a specific physical process. (In fact under these circumstances science has been known to invent a physical process to complete the description).
If instead you are saying the only models you will accept are bottom-up models derived from core scientific principles, then you are shifting the goal posts, and incidentally ruling out current attribution studies that depend on the CMIP5 models (some aspects are well understood physically, and some are not and are physically implausible).
“I still think you’re missing my point that the controversy exists mainly in the media and on blogs, and not within the scientific community. That might tell you something.”
I think you are ignoring my point that there is a large body of scientific, statistical, engineering and economics literature that would regard as very controversial the use of models like GCMs in the way that they are in the IPCC literature to make claims about attribution.
…and Then There’s Physics: “I still think you’re missing my point that the controversy exists mainly in the media and on blogs, and not within the scientific community.”
In your dreams.
I think you need to get out more!
aTTP – You say you don’t think it’s possible to obtain observation (data) that shows that the warming is anthropogenic. There’s a reason for you as to why there’s controversy. You say the studies require models. That’s true, but until the models have been verified by observation (data) they are unreliable. An unreliable model is obviously going to be controversial.
In order to lay the controversy to rest, someone has to find ways of verifying (or disproving) the models. In a complex area such as climate, if direct measures are impossible, there is probably a need for multiple tests, each of which can help to build confidence in the models, but on the basis that if any test fails then the models fail.
I can suggest one such test: theory says that the warming by man-made CO2 occurs in the tropical mid troposphere and spreads from there. The theory (it’s the theory implemented in the models) requires the tropical mid-troposphere temperature to rise more than the tropical surface temperature. We have measures for both covering the entire period of (claimed) man-made warming, namely ~1970-2000. We have two major satellite analyses, UAH and RSS, covering the tropical mid troposphere. We also have direct measures of tropical mid-troposphere temperature by radiosondes. We have several analyses covering tropical surface temperature. So we have plenty of data for this particular test.
Here is a presentation of one such test: http://tinyurl.com/k2enje4 – as you can see, the models failed the test; not just a little bit out, but a comprehensive failure. Given the scale of the failure, it is actually amazing (and disturbing) that there is still any controversy. The failed test should have put the whole thing to rest ages ago, but somehow the supporters of these failed models have managed to keep the controversy going.
That’s not what I said. I was suggesting that you can’t determine if more than 50% of the warming since 1950 was anthropogenic or not from observations alone.
…and Then There’s Physics: “I was suggesting that you can’t determine if more than 50% of the warming since 1950 was anthropogenic or not from observations alone.”
Nor can you determine it from the output of the current set of computer games (sorry, models) either.
And before you start with your patronising dismissals, I say that from the perspective of an ex-chemical engineer (which gives me a very fair grasp of, inter alia, physics, thermodynamics and chemical reaction kinetics) who has been paid to write computer models too – albeit in the field of finance – and mine, although by no means perfect, have been proven to be a great deal more predictive than anything in the field of climate modelling.
…and Then There’s Physics | May 4, 2015 at 8:12 am |
You say you don’t think it’s possible to obtain observation (data) that shows that the warming is anthropogenic.
That’s not what I said. I was suggesting that you can’t determine if more than 50% of the warming since 1950 was anthropogenic or not from observations alone.
Huh???
Umm, Gee, if you measure the change in IR vs CO2 concentration in PPM you should be able to estimate the TCR.
The IPCC says 110% of the warming was due to CO2. If the TCR is 1/3 of the IPCC-claimed level (which appears to be true based on a recent study), then 110/3 ≈ 37%, and 37% is less than 50%.
The CO2 forcing is less than 50% of the “since 1950” warming.
Question asked and answered.
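Taking the comment's premises at face value (the 110% figure and the one-third TCR scaling are the commenter's assumptions, not established results), the arithmetic is simply:

```python
# The commenter's premises, taken at face value for the arithmetic only
ipcc_attribution = 1.10  # "110% of the warming" attributed to anthropogenic forcing
tcr_scaling = 1 / 3      # hypothetical: actual TCR is one third of the assumed value

scaled = ipcc_attribution * tcr_scaling
print(f"{scaled:.0%}")   # about 37%, below the 50% threshold
```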
The question in my mind is what “having dominated since 1950” means. I assume this is tied to the transient climate response. What I see (and most of your climate community seems to miss) is the delicate balance between temperature change and fossil fuel resources.
I realize I’m in a minority, but I also happen to know a lot about the subject, and it seems to me we simply lack the fossil fuels needed to reach the more extreme predictions. This makes the TCR a key to response planning. If TCR is 1.5, then the lack of fossil fuels and/or cheap energy is more important. Either way, we need to worry about efficiency. But if I focus on efficiency then the response project is quite different: high-thermal-efficiency coal plants are a better option than solar panels.
I need to decide what to do, but right now I get too much political garbage. Given what I get to see about GCM performance, it seems to me there’s need for a lot more agendaless data acquisition. It really bothers me to see a request for $100 billion USD a year in charity for poor countries peddled by a largely untrained and uneducated UN bureaucracy. It bothers me a lot more to see the lack of solid support provided to back the request. Those GCMs don’t justify the cash, so the game has to be raised way above what you have right now.
I think I wrote too much. Just try to understand: the balancing act between fossil fuel reserves and climate response seems to be lost in all the clutter. Think about it, and keep an open mind. And watch your sources; when it comes to fossil fuel resources there are a lot of hidden agendas.
> The question in my mind is what “having dominated since 1950 means”.
The answer in the world is in the AR5, and is related to “more than half.”
There are 64 occurrences of that expression in Chapter 10 alone.
More than half is too wide a range.
> More than half is too wide a range.
“More than half” ought to be sufficient for the claim that “It is extremely likely that human activities caused more than half of the observed increase in GMST from 1951 to 2010.” The only relevant occurrence of “dominated” relates to the fact that the anthropogenic forcings are dominated by GHGs.
***
Earlier, I said that there were 64 occurrences of “more than half” in AR5. This is false. There are only 8 occurrences:
https://ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_Chapter10_FINAL.pdf
I found 8 (if I recall correctly) references to Held in your earlier offering, but it was unclear as to which you were referring. Just wanting you to know the effort is being made, but crypticism in Climateball isn’t all that helpful. For another day.
> I found 8 (if I recall correctly) references to Held in your earlier offering, but it was unclear as to which you were referring.
I was not referring to any occurrence in particular, Danny. Since there are only 8, and considering your willingness to do some work, it should not be that difficult to read them all.
I’m neither your monkey nor your guru. Help yourself. RTFR.
***
> but crypticism in Climateball isn’t all that helpful.
Sometimes, like in the other case you’re injecting here, prudence and politeness required it. Which means that to get something clearer, you will need more manip tricks than those you usually pull. I will do as usual and ignore your suave ad homs.
Again, beware your wishes.
Willard,
“I’m neither your monkey nor your guru.” Nope, you’re not, but you are a …………..yes, dear I’ll answer it………….sorry, got to go.
Which means that to get something clearer, you will need more manip tricks than those you usually pull, that is.
Oh, and there are 11 occurrences of “Held.”
“To me, these are controversial in the media and on blogs, but not all that controversial within the scientific community.”
Interesting (but ultimately ineffective) belittlement tactic….
“You typically need many people/groups”
This is a rather silly appeal to size – the fallacy of assuming that numbers make something correct.
Andrew
ATTP
“Whether the warming since 1950 has been dominated by human causes”
I think one thing that would help people is to understand the two fundamentally different ways of addressing this issue.
Here I will simply talk about the structure of the arguments, because I think it matters and it illuminates how folks talk at cross purposes.
From a high level the first thing to recognize is that we have a time series.
A single time series. That time series has developed as the result of certain causes. In a lab setting we could run an experiment, control and vary these causes, and develop a working theory of how things work. This situation leads to our first two skeptical objections. They are objections to the whole enterprise of understanding, broadly construed.
A) Climate science is not a lab science where you run controlled experiments. Lab science – running controlled experiments and confirming or rejecting the hypotheses your theory entails – is the king science. Therefore, since lab science is the king science, and since climate science is not a lab science, it can never attain the kind of knowledge we normally refer to as ‘science’. Consequently it can be doubted, and anything that can be doubted should not be used to inform policy. This objection takes on many, many forms, but they all come back to the mantra of what ‘real science’ is. Anytime you see a skeptic make the ‘scientific method’ point, they are using this form of attack. Further, at some point they all rely on the assumption that policy requires some form of certainty from science to make decisions.
B) The time series is the result of a chaotic system which we cannot understand. So while there are causes for the time series, its trajectory is chaotic and not predictable.
I will call these two kinds of skeptics “obstructionist”. Their aim is to derail the whole enterprise of understanding: in the first place (case A) by narrowly construing the concept of actionable intelligence, and in the second (case B) by presupposing that all metrics of a chaotic system will be unpredictable. They don’t even want to try.
For folks who don’t fall into the obstructionist class, we have a live problem: how do I take a single time series and understand its causes? In other words, how do I handle problems in observational science? For folks who don’t get the difference, an example might help. When we try to understand how the sun works, we obviously can’t run experiments and, say, “zero out” the sunspots. Or another example from geology: we don’t run experiments on plate tectonics in order to understand how the continents came to be where they are.
In some cases, however, nature will assist and serve up “natural” or “found” experiments. For example, a period of no sunspots. Still, here we don’t have a perfect setup, since we don’t control all the other variables. Another example of a natural experiment would be the stopping of all flights after 9/11, which gave visibility into the effects of contrails. These natural experiments are helpful but always subject to attack from obstructionists. They will always be able to say “what about x?”.
In natural experiments we are working from the data to the theory. Let’s take a simple example. We look at the time before, say, 1850 and call this condition zero, where there is no human effect, and we look at the time after 1850 as condition 1, where there is an effect. And this natural experiment, over the last 2000 years, shows us that adding CO2 has warmed the planet in an unprecedented way.
As with any natural experiment there are three lines of attack.
1. Attack the data
2. Point out that the theory is underdetermined by the data
3. Point out the theory is incomplete
WRT #1: The attacks on the data of the HS are well known. You will see the same kind of attacks on the surface data. In fact, ANY historical data is subject to attack. The methods are pretty clear. Because the experiment wasn’t planned or controlled, there will always be data issues. All one has to do is apply the rules of lab science to observational science: you attack the sampling, you attack the calibration, you beat on uncertainty. If you beat on the data, the person who wants to analyze the “natural” experiment will never get to first base.
WRT #2: Pointing out that the theory is underdetermined by the data is equally simple.
Suppose you have a set of causes that explain the time series. I’ll take an example from Berkeley Earth: we explain the temperature from 1750 to today as a function of two variables. There are two forks here:
2.1: Demand that the explanation work out of sample, say for previous times or times before 1750. This attack can take the form of people pointing to deep time.
2.2: Arm-wave about alternate theories: something else might explain the time series.
The thing to recognize, I think, is that these lines of attack never go away. Structurally they are always there. So when “web”, for example, explains the temperature as a function of 3 or 4 parameters, one can always attack the data, or demand more and more out-of-sample performance, or complain about curve fitting, or argue that something else could have caused it.
Here too you will see competing explanations – take Scafetta as an example (OK, a bad one, but you get the idea). The point is, because climate science is an observational science, we will always be subject to these types of attacks. And of course they often look reasonable.
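The out-of-sample demand in 2.1, and the related curve-fitting worry, can be illustrated with a toy exercise on synthetic data (the series and both fitted models are invented for the illustration): a sparse model and an over-flexible one can look similar in sample yet diverge badly out of sample.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for a 'temperature' record: weak trend plus noise
n = 100
t = np.arange(n)
y = 0.008 * t + rng.normal(0, 0.1, n)
x = (t - 50) / 50.0            # rescaled time, keeps polyfit well conditioned

x_tr, x_te = x[:50], x[50:]    # fit on the first half ...
y_tr, y_te = y[:50], y[50:]    # ... test on the second

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

lin = np.polyfit(x_tr, y_tr, 1)   # sparse model: 2 parameters
poly = np.polyfit(x_tr, y_tr, 9)  # flexible model: 10 parameters, chases noise

for name, coef in [("linear", lin), ("degree-9", poly)]:
    print(name,
          "in-sample:", round(rmse(np.polyval(coef, x_tr), y_tr), 3),
          "out-of-sample:", round(rmse(np.polyval(coef, x_te), y_te), 3))
```

The flexible fit wins in sample by construction; whether it survives out of sample is the only interesting question, which is the structural point of attack 2.1.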
WRT #3: Suppose you explain the temperature. You look at the data and construct an explanation of the temperature. Then the incompleteness attack is opened.
You just explained one aspect of the climate; what about the rest? Your theory is incomplete.
So, natural experiments are open to all sorts of attacks that don’t go away. You see this in your post: “how many times do we have to do this?” Well, with regards to the temperature series, the list of objections that people can make just on the data side is extensive. What about x? Sometimes the objection can only be handled by data that was never collected. In essence these objections are just obstructionist demands that observational science has to follow lab criteria OR it tells us nothing. Where does that leave us?
Synthetic experiments or simulation.
Since I cannot control the variables in the real world, the second option is to create a synthetic world where I can: modelling, or simulation. In simulation we move from the theory to the observation. In simulation we can flip CO2 on and off and hold volcanoes constant. We can hold everything constant and only change the sun. And with simulation we can make predictions or test our hypotheses. From a high view, simulation makes an observational science more like a lab science. We run controlled “experiments”. And the responses or objections are canned; they are already built in.
1. These really are not experiments.
2. You can't model nature perfectly.
3. Your answer isn't perfect.
4. Even if your answer is perfect, your explanation is still underdetermined; something else could have caused it.
It doesn't matter what physical system you are simulating; folks will have the same objections.
To those outside observational science these objections will always look reasonable. They can appear to be made in good faith, until you point out that the objectors rely on observational science all the time. To those who have chosen observational science as a discipline, these objections look unreasonable.
Steven,
It’s rather long, but I think I broadly agree and have made similar arguments myself.
Steven,
Two interesting assumptions:
A) “Whether the warming since 1950 has been dominated by human causes”
How is 1950 significant vs. alternative beginning points of say 1850 or even earlier?
B) Why is 1850 “condition zero”? http://www.nature.com/news/2003/031210/full/news031208-7.html
This is all very helpful. But how can you tell when it was done well, and when it’s just curve-fitting? Leave out all objections made in bad faith. In good faith, how do we know if we can trust it?
There are at least three other ways:
Attack the policy implications by underlining that we should do no harm.
Attack the policy implications by appealing to future technological advances.
Attack the INTEGRITY ™ of the establishment.
In other words:
https://contrarianmatrix.wordpress.com
> how do we know if we can trust it?
You can’t. All you have is circumstantial evidence.
If you know, you don’t need to trust.
Steven – everything you said sounds appropriate and fair, but I share miker613's concern. The psychological phenomenon of people finding "patterns" in random data and events is known as apophenia. As a forecaster I've seen it. People are natural curve fitters. When you have flexibility around a diverse set of potential confounding driver variables, it is not that hard to correct for departures from expected patterns and re-explain while keeping favored relationships intact. First aerosols impacted key years, then Chinese coal particulates, and now heat is going into the ocean. As an observer it just seems so ex post facto. I go back to the question I ask a lot: what would/could happen to call the posited relationships into question?
In a sense that's the point of the scientific method. It takes more than one person, or one group, to convince others of the strength of their scientific ideas. You typically need many people/groups getting broadly consistent results before it becomes accepted by the broader community. There are multiple groups producing millennial temperature reconstructions, instrumental global temperature records, doing climate modelling, etc.
attp, “You typically need many people/groups getting broadly consistent results before it becomes accepted by the broader community.”
It should only take one.
Pardon? It might only take one to do it first, but it would normally take more than one to convince everyone else that it is right.
attp, “It might only take one to do it first”
That would be the one. It seems to take science forever to figure out who that one was, but it should only take one. Finding out who takes getting past the gatekeepers. That is why it is said that science advances one death at a time.
There is only one reason why more than a couple dozen people in the world are arguing about climate science. If not for the alleged impending man-made climate catastrophe, nobody would care. We could get by nicely on 2 or 3 cheap climate models. No more funding for research on the dreaded effects of climate change on the sex lives of various species of gnats, mountain goats, woodpeckers etc. The superfluous climate scientists and ex-cartoonist MOOC charlatans go back to driving cabs and waiting tables.
Let’s stipulate that one can make a list of several thousand unreasonable objections to the way the climate science team has conjured up its alleged near unanimous consensus. And let’s pretend that the Climatariat Inquisitors will soon reveal the list of a million or so well-funded Merchants of Doubt, who are obstructing in bad faith. There are still many reasonable objections from many reasonable people that have gone unanswered.
The consensus crowd have been trying to sell it by hook or by crook for decades, but the science of climate catastrophe is still not convincing. Maybe instead of continuing with the failed strategy of demonizing and belittling, they should try smoking the bad guys out by challenging the doubters to debate. Take on all comers in public forums from sea to shining sea. Why won’t they do that?
Mosher, I think you dismiss chaos far too quickly. Chaos is not what is left over when nothing else works; in fact it can be looked for and found. People just do not want to find it because it makes the observed changes unexplainable and there is no money in that, right?
This whole discussion seems to assume a Newtonian sort of answer, where the cause is of the same scale as the change. But if the nonlinear dynamics are right big changes can be due to infinitesimal causes. Where is the federally funded research program looking for this answer? I do not see it, just more billions being poured into carbon cycle research.
Willard.
Yes but I wanted to keep to the “purely” scientific aspects.. hehe.
"Attack the policy implications by underlining that we should do no harm.
Attack the policy implications by appealing to future technological advances.
Attack the INTEGRITY ™ of the establishment."
“This is all very helpful. But how can you tell when it was done well, and when it’s just curve-fitting? Leave out all objections made in bad faith. In good faith, how do we know if we can trust it?”
1. There is nothing wrong with curve fitting. I use curve fitting every day because I have to get stuff done. There are various ways to do it, but in the end, if you start with data you will be curve fitting.
2. Trust? I dunno. Today I do 4 forecasts: two of them are pure curve fits,
two of them are from a model that knows a physical thing or two.
As time goes by I will learn which ones I can take action on and which ones I can ignore. Meh… So when folks ask which one to trust, I just explain what I did and all the assumptions and black magic; deciding which one to use is above my pay grade.
"Mosher, I think you dismiss chaos far too quickly."
It was already getting too long. Plus, you miss the point:
my point is not to assess these strategies. My point is to outline that there is NOTHING special or unique about them. They are standard, canned, pre-programmed responses. That's my whole point.
Planning
"Steven – everything you said sounds appropriate and fair but I share miker613's concern. […] I go back to the question I ask a lot, what would/could happen to call the relationships posited into question?"
Appealing to humans as pattern matchers is a WEAK version of the argument. I'm pointing out the stronger version.
“Pardon? It might only take one to do it first, but it would normally take more than one to convince everyone else that it is right.”
All it actually takes is people giving up on trying to prove it wrong.
Steven Mosher, so what?
Building, testing and using models of natural complex and chaotic systems is in no way exempt from meeting the normal standards of science. Where the tests you cite can be applied (even if only to the building blocks) they should be applied. And these studies should also conform to the basic rules of modelling just as model building in experimental science needs to.
You are claiming a duality here that doesn’t really exist. Model building is the thing of science. It is purposeful and that purpose needs to inform the construction of the model and the tests for its validation. Unfortunately the models being used here often lack this second part or it is only hazily expressed. For this reason the grounds used in the critiques range widely, and this is not unreasonable given the failure of the modellers to be explicit on this score.
The solution is for the modellers of the climate to be more formal in their approach to defining their purpose and in designing in their validation techniques.
One outcome I suspect would be much simpler models being used for policy purposes particularly forecasting (with more consensus around their acceptability) while the more complex models return to their original purpose as an aid to understanding phenomena largely within sample as far as time is concerned (with much less controversy being attached to that purpose).
How is 1950 significant vs. alternative beginning points of say 1850 or even earlier?
B) Why is 1850 “condition zero”?
See? Even Danny knows the tricks!
Suppose we are talking about murder rates and I say murder rates before 1970 are X and after 1970 are Y.
I'm trying to extract an unplanned natural experiment connected to 1970.
Now, in the lab we would have a well-defined experiment. Heck, I started one 8 weeks ago; on that date I made a change (actually 70,000 changes).
Nobody quibbles about my start date because I DEFINED IT. That's how a lab works. But in the real world we have to "find" experiments. So say I find one that starts at 1850.
DOING THAT, I KNOW WHAT THE STUPID PET TRICK IS:
attack the start date. Pretty effin' easy. It's just applying the rules of lab science to observational science.
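The before/after comparison Mosher describes can be sketched in a few lines. This is a purely hypothetical illustration (synthetic data, invented breakpoint year; nothing here comes from an actual study): it shows both how a "found" breakpoint reveals a level shift and why the estimate moves when someone attacks the start date.

```python
import random

random.seed(7)

# Synthetic yearly series: level 10 before 1970, level 12 after,
# plus unit-variance noise (a stand-in for the murder-rate example).
years = list(range(1900, 2015))
series = [random.gauss(10.0 if yr < 1970 else 12.0, 1.0) for yr in years]

def before_after_difference(breakpoint):
    """Mean(after) minus mean(before) for a chosen breakpoint year."""
    before = [v for yr, v in zip(years, series) if yr < breakpoint]
    after = [v for yr, v in zip(years, series) if yr >= breakpoint]
    return sum(after) / len(after) - sum(before) / len(before)

# With the breakpoint at the true change, the shift (about +2) stands out.
effect_1970 = before_after_difference(1970)

# Move the start date and the estimated effect changes -- exactly the
# opening that "attack the start date" exploits in a found experiment.
effect_1940 = before_after_difference(1940)
print(effect_1970, effect_1940)
```

In a lab the breakpoint is defined by the experimenter; here it has to be inferred, and every alternative choice yields a different number to argue about.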
Steven,
Why is it a trick? From here: http://judithcurry.com/2015/05/04/what-are-the-most-controversial-points-in-climate-science/#comment-699771
Your words: “We look at the time before say 1850 and call this condition zero where there is no human effect and we look at time after 1850 as condition 1 where there is an effect.” (Note the “no human effect”)
Then I provided: http://judithcurry.com/2015/05/04/what-are-the-most-controversial-points-in-climate-science/#comment-699788
indicating human effects may go back 8000 years.
If AGW theory is accurate, choosing a start date of 1850 could have already had effects in the pipeline so my question is valid and not meant to be a trick. You’ve taught me better than that (I think) and I’m paying attention. What’s wrong with that line of thought?
Steven,
Oh, and murder rates are not an equivalent to climate because there is no chance for a murder event to be in a pipeline. I understood your point, but don’t think you looked in depth at mine.
Mosher, I'm used to working with lab results scaled up to fine grid scales, which we try to confirm with intensive data collection. We scale up the fine grid to a final grid and run hindcasts and so on. I keep thinking there are data holes.
For example, do you have a data set showing humidity in the lower troposphere over the last 20 years? Is the data gridded? Have you run 3D movies to display the difference between your model and the data?
HAS
Good comment, particularly
The solution is for the modellers of the climate to be more formal in their approach to defining their purpose and in designing in their validation techniques. [mwg bold]
Upfront process, documented. ‘Nuf said.
Steven Mosher, “DOING THAT I KNOW WHAT THE STUPID PET TRICK IS.
attack the start date. pretty effin easy. its just applying the rules of lab science to observation science.”
Science of Doom had a less pet tricky approach I guess with the initial versus boundary value problem issue. Since there are quite a few model parameters that depend on absolute temperature values, it could be a valid question that some are over analyzing motives a touch too much and real issues a bit too little.
Mosher:
“…and in the second case by presupposing that all metrics of a chaotic system will be unpredictable. They don’t even want to try.”
Are you saying chaos is being used as an arguing technique? I'd agree with that. At the same time it's being used for understanding. As far as "all metrics" being unpredictable, I have some examples. I expect the IPWP to collapse every time to the East. I expect the PDO to change sign perhaps 15 to 20 times over the next 500 years. We may be seeing the field of chaos used simply for its obstructionist attributes, the same as the activist approach mentioned with climate science.
“For example, do you have a data set showing humidity in the lower troposphere over the last 20 years? Is the data gridded? Have you run 3D movies to display the difference between your model and the data?”
Did you read what I wrote, or just perform a stupid pet trick on your own?
My point is stronger than your point. My position predicts that you would do more pet tricks, because there are always pet tricks.
Even if I gave you your data there would be more pet tricks. Never-ending pet tricks.
Danny Thomas | May 4, 2015 at 4:59 pm |
“Steven, Oh, and murder rates are not an equivalent to climate because there is no chance for a murder event to be in a pipeline.”
Statistically, Steven is right [read "The Drunkard's Walk"]. People and climate are in statistical pipelines. You cannot predict who, but you can predict some.
Angech2014,
Please help me with this. How can a build-up of GHGs in the atmosphere that has not yet led to warming (due to their being long-lived) be equated to a group of singular events with a beginning year? Murder is an instantaneous occurrence; warming due to GHGs is not. I keep seeing that "if we stopped emissions today," those impacts would take years to reach a point where warming ends.
Steven Mosher | May 4, 2015 at 12:59 pm
re objectionist argument
“the time series is the result of a chaotic system which we cannot understand. So while there are causes for the time series, its trajectory is chaotic and not predictable. ”
If one can obtain data, then a system ceases to be truly chaotic.
Once one has data, local trends can be predicted from the data one has.
Because one is in a chaotic system, the longer the time series goes the less likely it is to stay stable and predictable, but it becomes more predictable in the short term as one has more data on the local events.
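As a minimal illustration of the short-term/long-term distinction being argued here (the logistic map is a textbook toy system, not a climate model): in a chaotic system, two nearly identical states stay close for a few steps, so short-range prediction works, but the tiny initial difference is amplified until the trajectories decorrelate completely.

```python
def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x), chaotic at r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two starting states differing by one part in a billion:
a = logistic_trajectory(0.300000000, 60)
b = logistic_trajectory(0.300000001, 60)
diffs = [abs(u - v) for u, v in zip(a, b)]

# Short term: effectively identical (the gap can grow by at most
# a factor of 4 per step, so it is still tiny after 5 steps).
print(diffs[5] < 1e-5)
# Long term: the initial-condition error is amplified to order one.
print(max(diffs) > 0.01)
```

Both printed checks are True: having more data pins down the short-range behavior, but no amount of data about the initial state keeps the long-range trajectory predictable.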
Mosher’s argument is flawed in that he would label all INTERNAL critiques among researchers in observational science as “stupid pet tricks.” Every single thing that one working researcher would say to another about how their work needed to be improved would be a “stupid pet trick.” That’s because the adequacy and credibility with which these things have been dealt with is the currency of quality in observational science.
For example, Mosher's cavalier justification of curve-fitting is contradicted by his assent, in the paleoclimate area, to Steve McIntyre's critique about the dangers of cherry-picking proxies by fitting them to the instrumental data. In that context, he understands perfectly well why curve-fitting approaches that lack out-of-sample validation are unreliable. But here he deploys an unconvincing immunizing stratagem by calling his own objection a "stupid pet trick."
Never trust these sorts of debating tactics that prove too much.
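The out-of-sample point can be shown with a deliberately toy example (synthetic noise, not proxy data): a model flexible enough to fit its calibration data essentially perfectly can still have no skill whatsoever on data it was not fitted to.

```python
import numpy as np

rng = np.random.default_rng(1)

# Five observations of pure noise -- there is no real signal to recover.
x = np.linspace(0.0, 1.0, 5)
y = rng.normal(size=5)

# A degree-4 polynomial has five coefficients, so it passes (essentially)
# exactly through all five points: a "perfect" in-sample fit of noise.
coeffs = np.polyfit(x, y, deg=4)
in_sample_err = np.max(np.abs(np.polyval(coeffs, x) - y))

# Out of sample, the fitted curve has no skill on a fresh draw
# from the same noise process.
y_new = rng.normal()
out_of_sample_err = abs(np.polyval(coeffs, 1.5) - y_new)

print(in_sample_err < 1e-8)               # near-perfect calibration fit
print(out_of_sample_err > in_sample_err)  # no out-of-sample skill
```

The calibration-period fit alone tells you nothing; only validation against data withheld from the fit distinguishes a real relationship from apophenia.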
I don't doubt that much of the warming since 1950 is attributed to CO2. It is a new phenomenon. Paleo history shows much greater change over much longer periods of time. If this could be considered two climate periods, there are two periods of flat temperatures and one period of rising temperatures during a time of steadily rising CO2. I have seen no empirical evidence as to how much of this warming is due to humans, when it appears to be part of the normal warming pattern that shows up since coming out of the LIA.
http://3.bp.blogspot.com/-tXWZ-sLc4GE/Uxv4wzJn9Zl/AAAAAAAAQAs/4wETMsUaw4o/s1600/amoss.GIF
As do those who chose to study English in their post-secondary academic career?
Mosher:
“A) Climate science is not a lab science where you run controlled experiments. Lab science, running controlled experiments and confirming or rejecting hypothesis your theory entails, is the King science. Therefore, since lab science is the king science, and since climate science is not a lab science, it can never attain the kind of knowledge we normally refer to as ‘science’. … Anytime you see a skeptic make the ‘scientific method’ point they are using this form of attack. Further at some point they all rely on the assumption that policy requires some form of certainty from science to make decision.”
My scientific background is in astronomy, which is also not a lab science. I don’t think that fundamentally changes the nature of the scientific method, which is to compare predictions with “observations”. Whether the observations come from lab experiments, thermometers, or telescopes does not fundamentally change the way you assess whether the data supports one hypothesis or another. (Read E.T. Jaynes, or I.J. Good)
The advantage of lab experiments is that you get to design things so that the results will clearly distinguish the hypotheses you are interested in. They may distinguish them so clearly you don’t need statistics, the result is obvious, which is why many scientists can get by with simple rule-of-thumb statistics. With observational science nature may have mixed multiple effects together, and you are going to need the full apparatus of probability theory to attempt to disentangle them. And you may have to accept you will not get better than a statement of probability as your result. This is the situation in climate science.
But that does not give you a license to run computer models and call them experiments. Astronomers don’t confuse a model of a pulsar with observations of a pulsar.
I agree with you and ATTP that attribution is a question of modelling. No measurement as such can tell you if most of the warming is due to A or B. But to show that most of the warming is not due to natural variation, you would need to show that the observed warming is unlikely under any plausible scenario of natural variation (compared to under anthropogenic warming). That requires you to model both anthropogenic warming AND natural processes. But all the modeling effort seems to have gone into AGW, with natural variation represented by straw-man ‘noise’ processes.
If you want to show that Edward de Vere, not Shakespeare, wrote Hamlet, you need to compare those two hypotheses. Showing that it is much more likely to have been written by Edward de Vere than by a billion monkeys does not help you.
Steven Mosher: “and since climate science is not a lab science, it can never attain the kind of knowledge we normally refer to as ‘science’.”
Yes, it is known as “Post-Normal” science.
Here’s Mike Hulme** on the subject:
The danger of a “normal” reading of science is that it assumes science can first find truth, then speak truth to power, and that truth-based policy will then follow…exchanges often reduce to ones about scientific truth rather than about values, perspectives and political preferences.
…‘self-evidently’ dangerous climate change will not emerge from a normal scientific process of truth-seeking…scientists – and politicians – must trade truth for influence. What matters about climate change is not whether we can predict the future with some desired level of certainty and accuracy.
Climate change is telling the story of an idea and how that idea is changing the way in which our societies think, feel, interpret and act. And therefore climate change is extending itself well beyond simply the description of change in physical properties in our world…
The largest academic conference that has yet been devoted to the subject of climate change finished yesterday [March 12, 2009] in Copenhagen…I attended the Conference, chaired a session…[The] statement drafted by the conference’s Scientific Writing Team…contained…a set of messages drafted largely before the conference started by the organizing committee…interpreting it for a political audience…And the conference chair herself, Professor Katherine Richardson, has described the messages as politically-motivated. All well and good.
The function of climate change I suggest, is not as a lower-case environmental phenomenon to be solved…It really is not about stopping climate chaos. Instead, we need to see how we can use the idea of climate change – the matrix of ecological functions, power relationships, cultural discourses and materials flows that climate change reveals – to rethink how we take forward our political, social, economic and personal projects over the decades to come.
There is something about this idea that makes it very powerful for lots of different interest groups to latch on to, whether for political reasons, for commercial interests, social interests in the case of NGOs, and a whole lot of new social movements looking for counter culture trends.
Climate change has moved from being a predominantly physical phenomenon to being a social one…It is circulating anxiously in the worlds of domestic politics and international diplomacy, and with mobilising force in business, law, academia, development, welfare, religion, ethics, art and celebrity.
Climate change also teaches us to rethink what we really want for ourselves…mythical ways of thinking about climate change reflect back to us truths about the human condition…
The idea of climate change should be seen as an intellectual resource around which our collective and personal identifies and projects can form and take shape. We need to ask not what we can do for climate change, but to ask what climate change can do for us…Because the idea of climate change is so plastic, it can be deployed across many of our human projects and can serve many of our psychological, ethical, and spiritual needs.
…climate change has become an idea that now travels well beyond its origins in the natural sciences…climate change takes on new meanings and serves new purposes…climate change has become “the mother of all issues”, the key narrative within which all environmental politics – from global to local – is now framed…Rather than asking “how do we solve climate change?” we need to turn the question around and ask: “how does the idea of climate change alter the way we arrive at and achieve our personal aspirations…?”
We need to reveal the creative psychological, spiritual and ethical work that climate change can do and is doing for us…we open up a way of resituating culture and the human spirit…As a resource of the imagination, the idea of climate change can be deployed around our geographical, social and virtual worlds in creative ways…it can inspire new artistic creations in visual, written and dramatised media. The idea of climate change can provoke new ethical and theological thinking about our relationship with the future….We will continue to create and tell new stories about climate change and mobilise these stories in support of our projects. Whereas a modernist reading of climate may once have regarded it as merely a physical condition for human action, we must now come to terms with climate change operating simultaneously as an overlying, but more fluid, imaginative condition of human existence.
https://buythetruth.wordpress.com/2009/10/31/climate-change-and-the-death-of-science/
** http://www.mikehulme.org/category/bio-and-cv/
Here is the original article from which the above excerpts are taken, in the Guardian, unsurprisingly.
http://www.theguardian.com/society/2007/mar/14/scienceofclimatechange.climatechange
I repeat: We will continue to create and tell new stories about climate change and mobilise these stories in support of our projects.
I rest my case!
Both von Storch/Bray in 2008 and Verheggen et al. in 2012 found that 66% of published climate scientists believe that over half of recent warming is anthropogenic.
Not 97%. But still a healthy majority. (But everyone should read about the problems these scientists describe.)
Is there a control group that has not been influenced by group think and herd mentality?
curryja asks, “What is the data that provides the greatest challenge to the dominant view of AGW?”
I just finished the chapter on TSI reconstructions for my latest book.
You may wish to consider the recent shift towards a reduced trend in TSI during the early part of the 20th Century.
The recommendation was that the CMIP5 models use the Lean reconstruction, which continues to show a substantial increase in trend through the 1940s, while the Dora, Krivova, Svalgaard and Wang reconstructions show much lower trends:
https://bobtisdale.files.wordpress.com/2015/05/figure-4-149.png
The Lean reconstruction is obviously now the outlier. Without that trend in the TSI data, the climate models would have even more trouble simulating the warming from the mid-1910s to the mid-1940s, which is comparable to the warming rate during the later warming period. And the models do a pretty bad job, as it is, simulating the early warming using the Lean reconstruction.
The data are from Leif Svalgaard's research page:
http://www.leif.org/research/
Specifically the spreadsheet here:
http://www.leif.org/research/TSI%20(Reconstructions).xls
One last note: The Svalgaard reconstruction in the above graph is outdated. Leif's latest reconstruction (the blue curve identified as "Based on Corrected Sunspot Number") shows an even smaller trend because he has increased the solar maxima before the 1940s:
http://www.leif.org/research/TSI-Reconstruction-2014.png
Cheers
http://i.imgur.com/X25SbGa.png
Assuming a century-plus time period for the oceans to fully respond to an increase (level shift) in solar forcing, it is not hard to explain the 1940-2015 warming.
It is hard to see a convincing argument that there is much post-1940s GHG forcing. About 0.24°C is the maximum that seems possible.
PA – Your analysis misses an essential point : we don’t know that solar and GHG forcings are the only significant influences on ocean temperature. So we can’t make much use of the 0.24°C – it could be higher or lower than the GHG forcing, depending on how the other unknown factors varied.
PA: You obviously missed something. In the last graph I presented, the new research by Dr. Svalgaard has increased the strength of the solar cycles before the 1940s, eliminating the "level shift", as you called it.
And since you prefer sunspots, see his recent research on those as well:
http://www.leif.org/research/IAUS286-Mendoza-Svalgaard.pdf
Cheers
Here is a WoodForTrees plot of Leif Svalgaard's TSI "best guess" based on his (last month's?) WUWT comment. Note: LS also predicts the SSN/TSI relationship is going to fail. He did not say when.
Thx Bob, this is helpful.
Leif's work is really solid. Of course some folks will object to changing the past!
It will be interesting to see what happens in terms of recommended TSI series for future work. Sun nuts are already objecting to adjusting the series.
Bob Tisdale | May 4, 2015 at 10:14 am |
PA: You obviously missed something. In the last graph I presented, the new research by Dr. Svalgaard has increased the strength of the solar cycles before the 1940s, eliminating the "level shift", as you called it.
And since you prefer sunspots, see his recent research on those as well:
http://www.leif.org/research/IAUS286-Mendoza-Svalgaard.pdf
Cheers
1. I like being corrected.
2. Sunspots were the only measure I could add quickly at WFT.
3. It takes 260 Watt-Years per meter squared to raise the temperature of the top 2000 meters of ocean 1 K. 130 W-Y/m2 if you assume a ramp profile (zero change at the bottom). This is going to produce a centuries long upward ramp response to a step increase in input at the surface.
4. There only appears to be about a 0.5 W/m2 variation in the long term average TSI. What is the mechanism to leverage a significant change in temperature from a small change in solar forcing?
Do like your website by the way.
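PA's point 3 is easy to check back-of-envelope. The sketch below uses approximate textbook seawater properties (the specific-heat and density values are my assumptions, not figures from the comment):

```python
# Rough check of "~260 watt-years per m^2 to warm the top 2000 m by 1 K".
CP = 3990.0       # specific heat of seawater, J/(kg*K) (approximate)
RHO = 1027.0      # density of seawater, kg/m^3 (approximate)
DEPTH = 2000.0    # depth of the ocean layer considered, m
SECONDS_PER_YEAR = 365.25 * 24 * 3600.0   # ~3.156e7 s

# Energy needed to warm a 1 m^2 column of that layer by 1 K:
joules_per_kelvin = CP * RHO * DEPTH      # ~8.2e9 J/(m^2*K)
watt_years = joules_per_kelvin / SECONDS_PER_YEAR

print(round(watt_years))       # ~260, matching the figure in point 3
print(round(watt_years / 2))   # ~130 for the linear "ramp" profile
```

The uniform-warming figure comes out near 260 W-yr/m², and the linear ramp (1 K at the surface, 0 K at 2000 m) needs half that, consistent with the 130 W-yr/m² quoted above.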
Leif is not the authority when it comes to the sun. Very biased.
This is very good, very to the point. We need to keep returning to the basic issues in climate science and reiterate the requirement for climate science to provide us with hard physical, scientific evidence for the claims that man has significantly influenced the climate and probably will do so in the near- and long-term future.
The nexus of the entire debate is the IPCC AR5 attribution statement that it is ‘extremely likely’ humans have been responsible for ‘more than half’ of the observed warming over the period 1950-2010, with the best estimate being ‘similar to the observed warming’ over this period – meaning that the IPCC believes man has been the dominant cause of modern warming and that all other influences have tended to cancel one another out.
The ‘science’ on which this attribution statement is based is extremely shaky but it is taken as ‘fact’ by consensus climate science and a stepping off point for a whole host of ‘scientific’ claims by scientists, politicians and advocating bodies, whereas it is but an article of faith.
Tamsin Edwards has an article in the Guardian about lukewarmers:
http://www.theguardian.com/science/2015/may/03/climate-change-scepticism-denial-lukewarmers
Dr Edwards is perceived as an "honest broker" in the climate debate, a conciliatory and reasonable voice who steers away from name-calling and is generally highly regarded by sceptics and warmists alike. This is true, but alas, her categorisation of three types of people in the climate debate – sceptics, lukewarmers and mainstream climate scientists (and their trusting followers) – is rooted firmly in the egregious IPCC AR5 attribution statement. Hence she says of lukewarmers – whom she identifies as making up the majority of people opposing consensus climate science in the UK:
“They agree carbon dioxide is a greenhouse gas, that the world is warming, and that a significant fraction of this is down to humans.”
Sceptics – the non-lukewarmers – she scathingly identifies as being "in denial". Hence this "honest broker" is using the AR5 attribution statement to attempt to reframe the climate debate – in the UK at least – as essentially a stand-off between lukewarmers, who agree man has warmed the planet "significantly" but don't believe that climate change poses a major future threat to humanity, and mainstream climate science and its advocates, which does perceive a real threat from anthropogenic GHG warming. The nub of the disagreement, according to Dr Edwards, basically comes down to uncertainty over climate sensitivity and the current crop of generally lower estimates. Sceptics "in denial" about the basic "facts" are then relegated to the sidelines.
In reality, there is no hard evidence to support the contention that more than half of post 1950 warming is due to CO2 emissions. With the combination of decreasing estimates for climate sensitivity and growing awareness of the role of natural climate forcings, the observed warming since 1950 directly attributable to AGW may be less than 50%, insignificant or even negligible.
+1
> In reality, there is no hard evidence to support the contention that more than half of post 1950 warming is due to CO2 emissions.
Evidence is what presents itself to the (mind’s) eye.
In Chapter 10 of the AR5, many empirical lines of evidence are presented.
Cf. the Executive Summary.
There is an abundance of empirical evidence to suggest that the planet has warmed; there is very little empirical evidence to suggest that the majority of that warming is due to humans – attribution studies are not empirical evidence.
But surprise me: pluck something out of IPCC AR5 The Physical Science Basis which is a real game changer in the debate over whether humans have caused recent global warming, other than the usual ‘internal ocean cycles have tended to cancel each other out’ and ‘solar variability is too insignificant to account for the observed rise in temperatures’ – because both of those assertions are still open to debate and are not ‘settled science’. Indeed, if oceanic cycles were settled science, Mann would not have had recourse to them very recently to explain the current hiatus in global warming, based upon an esoteric combination of the AMO and PMO to produce the NMO!
> attribution studies are not empirical evidence.
Unless one uses studies as empirical evidence (say like C13 or V14 did), it makes little sense to expect any study to be some kind of empirical evidence. Attribution, which is an inferential matter, still rests on empirical evidence. To require an inferential process to be evidential through and through would go against our main evolutionary trump, which is to use our brains to think before we type.
***
> But surprise me: pluck something out of IPCC AR5.
Open Chapter 10, and search for “held.” Prepare to be surprised. That Judy won’t dare to challenge Yoda should give you enough empirical evidence not to go a bridge too far in your rant, Jaime.
Willard.
I see you were also dining out on the Held references above. Someone had to do your homework for you. Four papers referenced in Chpt 10 are authored by Held (title and brief extract follow to give a sense of method):
Held & Soden “Robust responses of the hydrological cycle to global warming”. Using the climate change experiments generated for the Fourth Assessment of the Intergovernmental Panel on Climate Change …
Held, Winton et al “Probing the fast and slow components of global warming by returning abruptly to preindustrial forcing”. The fast and slow components of global warming in a comprehensive climate model are isolated by examining the response to an instantaneous return to preindustrial forcing.
Schneider & Held “Discriminants of twentieth-century changes in earth surface temperatures”. The probability with which features of the discriminants … can be attributed to different natural and anthropogenic climate processes remains to be assessed …
von Deimling, Held et al “Climate sensitivity estimated from ensemble simulations of glacial climate”.
So: three simulation studies, none of which do attribution, and one exploratory study in 2000 that leaves the attribution “to be assessed”.
Willard,
As usual, you are long on cryptic references but short on content. Attribution studies of course rest on empirical evidence, but they can use that empirical evidence to serve their own ends, i.e. to ‘prove’ that such and such is the case if one presumes that this or that is so. They rest upon assumptions as much as empirical evidence.
As HAS has pointed out, which of these Held papers should I prepare to ‘be surprised’ about? If there lies within any of them any science conclusive enough to rudely interrupt my rant – and, by definition, the ‘rants’ of all those other sceptics who question the empirical basis of the IPCC’s conclusions – then please do let us know urgently, so we do not waste our time only to end up with egg on our faces. Many thanks.
> So three simulation studies none of which do attribution, and one exploratory study in 2000 that leaves the attribution “to be assessed”.
Here’s the abstract of that 2000 study:
http://www.gfdl.noaa.gov/bibliography/related_files/tns0102.pdf
The first sentence of an abstract meets Jaime’s “hard evidence” desideratum. The penultimate one meets Jaime’s challenge.
***
Econometricians might appreciate this sentence in 10.2.2:
This adage indicates that Jaime’s and Judy’s (?) concerns may very well lead to a Procrustean bed.
https://ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_Chapter10_FINAL.pdf
Denizens might also appreciate Box 10.1. It’s in blue.
***
Criticizing a document without having read it first is suboptimal.
Willard,
“The first sentence of an abstract meets Jaime’s “hard evidence” desideratum. The penultimate one meets Jaime’s challenge.”
Wow, you can look at an abstract and draw those sweeping conclusions just from a few words. Here are a few more words from the study itself:
“Many features of the temperature changes indicated by the first discriminants are SUGGESTIVE of human influences on climate. The relatively uniform and steady warming of the ocean surface points to a gradual global increase in the radiative forcing of the earth’s surface, consistent with EXPECTED effects of the increase in greenhouse gas concentrations, or possibly, an INCREASE IN SOLAR IRRADIANCE (Schimel et al. 1996; Tett et al. 1999)
The structure of the warming over continents is consistent with expected effects of an increase in radiative forcing.
The localized cooling over continents is SUGGESTIVE of radiative effects of anthropogenic sulfate aerosols.
Both spatial and temporal features of the dominant discriminants are consistent with EXPECTED effects of anthropogenic greenhouse gases and sulfate aerosols.
The probability with which features of the discriminants in Fig. 1 can be attributed to different NATURAL and anthropogenic climate processes REMAINS TO BE ASSESSED in confirmatory analyses, that is, in statistical comparisons of features of the observational discriminants with simulations.” [My emphasis]
A lot of ‘expected’ and ‘suggestive’ caveats there, a non-anthropogenic (solar) cause revealed as an alternative possible cause of ocean warming, and an admission that their exploratory analysis is contingent upon confirmation by model simulations.
That’s not all. The oceans have not warmed temporally and spatially (horizontally or vertically) as uniformly as predicted. The North Atlantic has cooled significantly and the authors suggest this is due to anthropogenic forcing of the thermohaline circulation, as was also notably postulated in a recent paper by Mann. But the RAPID study on the most recent slowdown in the THC clearly states that the observed slowdown in mass transport is an order of magnitude greater than that predicted due to global warming. On land, the authors’ explanation for the observed localised cooling over continents being due to anthropogenic sulphate aerosols is also brought into doubt by recent studies. So again, ‘evidence’ seriously brought into question.
http://www.natureworldnews.com/articles/1931/20130515/sulfate-aerosols-cooling-ability-overestimated-researchers.htm
So, all in all, not quite the devastating empirical evidence of anthropogenic climate change which you make it out to be.
Willard, Schneider & Held muses about attribution but doesn’t attempt it.
In many respects, as I noted earlier on this thread, IPCC is quite explicit about the source of its attribution of >50% anthropogenic. On p869 they reference the claim to {9.4.1, 9.5.3, 10.3.1, Figure 10.5, Table 10.1}. The first two are references to the chapter on evaluating climate models, 10.3.1 deals exclusively with climate model based estimates related to temps, Fig 10.5 shows the result of model based estimates, and Table 10.1 line 2 explicitly states the claim is based on multiple CMIP5 model experiments.
> Schneider & Held muses about attribution but doesn’t attempt it.
The best empirical evidence is in the detection studies. Schneider & Held show how it’s possible to do that without relying on simulations.
OTOH, whoever attempts attribution needs to work with a statistical model, which usually implies simulations, considering the nature of the beast. To pretend, like Jaime just did, that “expected” is a caveat misrepresents the very idea of doing attribution studies in the first place.
Until we see a stadium wave of studies that work otherwise, Jaime’s desiderata remain a fool’s errand.
***
To return to Chapter 10, which Denizens don’t seem to have read while still ranting against it:
https://ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_Chapter10_FINAL.pdf
Skepticism regarding attribution studies, to be consistent, requires that we also ditch all discussions about sensitive matters. While I don’t care about sensitive matters, I fear Denizens might feel lukewarm about that splendid suggestion.
Willard
Before you roam off into another topic, can we agree that Held had nothing in Chpt 10 that empirically showed attribution of >50% anthropogenic forcing to the recent temperature rise? Some mild speculation a decade and a half back, but not even a contribution to the IPCC’s finding based on GCMs.
Held ter account, yer might say.
> Before you roam off into another topic […]
The topic was the claim that there was no “hard evidence to support the contention that more than half” etc. I pointed out that there are many empirical lines of evidence in the relevant chapter of the AR5. When challenged, I hinted at “held,” because it suffices to find this paragraph:
https://ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_Chapter10_FINAL.pdf
Notice: “detection and attribution.” There’s a reason why there’s a chapter for the two concepts. They go hand in hand.
***
I also chose “held” because Judy would rather talk about the SPM and press releases instead of discussing with Yoda about science:
http://www.aps.org/policy/statements/upload/climate-seminar-transcript.pdf
***
Then the topic switched to “attribution studies are not empirical evidence.” I pointed out the absurdity of that argument, the simplest KO being that attribution is an inferential matter, not an evidential one. Another KO was to observe that unless one can show how to do attribution without any model, Jaime’s argument begged an impossible demand. A third argument, which is not a KO but perhaps the most decisive, was that skepticism regarding attribution made lukewarm concerns regarding sensitivity look a bit silly.
It might not be a good idea to rant against attribution, more so for the sake of bashing attribution models because they rely on simulations. This is like bashing number theory because it contains inductive proofs.
There’s no need to rely on Granger causality analysis (when will HAS publish his, BTW?) to see that Jaime’s rant is beyond ridiculous.
The logical absurdity of the ‘fingerprint detection’ used by the IPCC for attribution is the subject of a long-planned (but alas incomplete) post.
Also, I’m not impressed by AR5 Chapter 10. Yes, it mentions detection and attribution in the same sentence, but actually says and does nothing about actual detection. Note that AR4 did deal with detection; AR5 does not. ‘Detection’ essentially disappears in the context of the circular reasoning of their attribution arguments.
I didn’t cotton onto this issue until after the APS workshop, or I would have taken this up more vehemently with Held at the time
> but actually says and does nothing about actual detection.
HAS’ main trump is that Schneider & Held 2001, cited in AR5, is only about attribution.
Something has to give.
***
> I didn’t cotton onto this issue until after the APS workshop, or I would have taken this up more vehemently with Held at the time
Justification acknowledged. Let me remind you of a previous one:
http://judithcurry.com/2015/04/20/aps-discussion-thread/#comment-696031
Vehemence might be more suitable for blog exchanges than workshops.
***
> ‘Detection’ essentially disappears in context of the circular reasoning of their attribution arguments.
I’d like to know a bit more about that “circular reasoning.”
Like I said, I have a post planned on this topic; I expect to get to it within the month. Note that Science of Doom has a series of posts on this topic.
Obviously Willard has deemed model output as physical empirical evidence. Poor Willard. He has fallen so far.
Willard
Your first contribution to this thread was:
“> In reality, there is no hard evidence to support the contention that more than half of post 1950 warming is due to CO2 emissions.
“Evidence is what presents itself to the (mind’s) eye.
“In Chapter 10 of the AR5, many empirical lines of evidence are presented.
Cf. the Executive Summary.”
You were wrong about Chpt 10 presenting any (let alone many) lines of empirical evidence of attribution. Nor did your references to Held add anything in this regard.
Can we agree about that, then you can wander off talking about detection.
Willard, as HAS points out, you asserted that Ch. 10 presented ‘many empirical lines of evidence’ and vaguely referenced Held in that respect. When pushed, you explicitly referred to a few lines in an abstract of a 2000 paper by Held and asserted that this met my demands for empirical evidence. Most obviously it did not.
So now you switch the emphasis to declaring that attribution studies cannot be looked upon as empirical evidence (even though you have stated that they are based upon empirical evidence), that the suggestion is absurd, but that, mysteriously, they are still scientifically valid as regards ‘fingerprinting’ the anthropogenic contribution to recent global warming. You furthermore state that my requirement for empirical evidence of anthropogenic fingerprinting ‘begs an impossible demand’. So in effect you are agreeing with me that attribution studies are not empirical evidence, and you have tacitly acknowledged (by your silence) that there is a complete lack of empirical evidence in Ch. 10. Despite this, however, you label my ‘rant’ against attribution studies on the basis that they are not empirical evidence as ‘absurd’ and ‘beyond ridiculous’, whereas your acceptance of them as ‘evidence’ (whilst still acknowledging that they are not physical/empirical evidence) is somehow more firmly rooted in the scientific method!
I am in awe.
> You were wrong about Chpt 10 presenting any (let alone many) lines of empirical evidence of attribution.
HAS puts “empirical evidence of attribution” in my mouth. I said there were many lines of evidence in chapter 10. Considering I’ve told many times already that requiring empirical evidence of attribution amounts to an impossible demand because the very idea of an evidence of attribution is absurd, HAS has no reason to put these words in my mouth.
The onus is on HAS and Jaime to show us what evidence of attribution looks like. It is also incumbent on Judy to do so if she recycles that contrarian claptrap. Since she even reuses the “circular reasoning” claptrap, can we expect she’ll recycle that one too?
INTEGRITY ™ – Crowdsourcing Claptraps
“In reality, there is no hard evidence to support the contention that more than half of post 1950 warming is due to CO2 emissions.
Evidence is what presents itself to the (mind’s) eye.
In Chapter 10 of the AR5, many empirical lines of evidence are presented.
. . . . . Considering I’ve told many times already that requiring empirical evidence of attribution amounts to an impossible demand because the very idea of an evidence of attribution is absurd, HAS has no reason to put these words in my mouth.
The onus is on HAS and Jaime to show us what evidence of attribution looks like.”
I think it was you who put the words into your own mouth, Willard, by implying strongly that the hard evidence to support the IPCC’s AR5 attribution statement was to be found in Ch. 10, and then indeed going on to state categorically that a few words in the abstract of a Held paper satisfied the demand for that hard evidence. But no matter now; you appear to want to keep going round in circles as regards this issue.
You now say that the onus is upon us to demonstrate what empirical evidence of attribution looks like. I’ll have a stab at this and suggest (as climate scientists have also suggested) that a world warmed by CO2 should:
1. Warm progressively with increasing CO2 emissions
2. Warm faster at the poles
3. Warm more strongly in the mid-troposphere above the tropics.
4. The oceans should warm relatively uniformly.
What has actually happened:
1. Global warming has stalled for a period in excess of 15 years (not predicted as a realistically probable outcome in any of the climate model projections).
2. The Arctic has warmed fairly strongly since 1979 but has recently started to cool. The predicted catastrophic disappearance of sea-ice has not happened and indeed, since 2005/6, the rapid reduction in Arctic sea-ice appears to have stalled.
Temperatures have not risen significantly in the Antarctic as a whole and SH sea-ice area has reached record highs. Global sea-ice area is not significantly reduced compared to 36 years ago.
3. Just hasn’t happened. Period.
4. The North Atlantic region in particular has cooled significantly.
This is what the empirical ‘fingerprint’ of anthropogenic GHG-mediated global warming should look like, and thus it would appear that our culprit has up until now been wearing rubber gloves at the scene of the crime! The empirical evidence of attribution would consist of the set of observations proving that these predictions have actually occurred. But no matter, the IPCC can still come up with model-based attribution studies to prove that CO2 dunnit with greater than 95% certainty.
Willard
“HAS puts “empirical evidence of attribution” in my mouth. I said there were many lines of evidence in chapter 10. Considering I’ve told many times already that requiring empirical evidence of attribution amounts to an impossible demand because the very idea of an evidence of attribution is absurd, HAS has no reason to put these words in my mouth.”
And black is now the new white.
We’ve moved from an explicit statement that “In Chapter 10 of the AR5, many empirical lines of evidence are presented” at the beginning of this thread, to the point where “the very idea of an evidence of attribution is absurd”.
Both points of view are equally untenable.
Stop digging and move on.
> We’ve moved from an explicit statement that “In Chapter 10 of the AR5, many empirical lines of evidence are presented” at the beginning of this thread, to the point where “the very idea of an evidence of attribution is absurd”.
We’ve moved nowhere. Attribution is an inferential concept. It contrasts with detection, which is clearly less inferential. “Empirical evidence” is an observational concept. While there’s no dichotomy between inference and observation, it should go without saying that “empirical evidence” is not what attribution provides.
***
From at least Descartes onward, evidence is what presents itself to the mind’s eye:
http://plato.stanford.edu/entries/evidence/
Hacking’s magnum opus might interest econometricians, for it tries to explain why the notion of probability emerged so late in the history of mathematics. Spoiler: we needed the notion of evidence.
***
> Both points of view are equally untenable.
Were you playing at home right now, HAS, you might have a chance to save yourself with this simple proof by assertion. Alas, you’re not.
Show me.
I guessed Willard wouldn’t stop digging. Capital invested in this thread which must see a return. Tedious. I’m out.
> I’m out.
Before you go, Jaime, please note that your list has more to do with detection than attribution, contrary to what you imply. You dispute that the world is warmed by CO2, which is the main part of the detection problem:
https://ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_Chapter10_FINAL.pdf
At least you concede that “there is an abundance of empirical evidence to suggest that the planet has warmed.”
Next time, read harder.
Willard, you can play around with terminology all you like. It appears to me that the ‘detection’ of climate change has involved only the trivial fact that the world has warmed overall since 1950 (concurrent with a rise in CO2 ppm, the highly dubious dismissal of natural internal and external forcings as having virtually zero net effect on temperature, and the even more dubious assertion that the temperature rise and CO2 levels are ‘unprecedented’), rather than the far less trivial observation that climate change has proceeded in accordance with what one would expect from the ‘fingerprint’ of GHG warming. If you can identify the ‘fingerprint’, you are a long way towards attribution; technically, ‘attribution’ would then involve merely demonstrating that the source of the GHGs which caused the warming was anthropogenic. In the absence of the ‘fingerprint’, the empirical part of ‘attribution’ (what you say is more to do with ‘detection’) is largely missing. Hence IPCC attribution relies exclusively upon assigning anthropogenic GHGs (via model runs) as the cause of the trivial rise in temperatures, based upon the twin assumptions that climate sensitivity is greater than the more recent observationally derived estimates and that internal and external natural forcings are negligible or have tended to cancel one another out. That’s not science, it’s hokum.
“Nature and mechanisms of multi-decadal and century scale natural internal variability. How do these modes of internal variability interact with external forcing, and to what extent are these modes separable from externally forced climate change?”
I think that the AMO is driven by solar wind variability, and functions as an amplified negative feedback: strong solar wind, particularly in the 1970s, caused it to cool, and weaker solar wind since the mid-1990s has caused it to warm. Weaker solar wind conditions increase negative NAO/AO states, which then affect the wind-driven AMOC rates and increase AMO and Arctic warming.
http://snag.gy/HxdKY.jpg
What are the most controversial points in climate science related to AGW?
—————————————————————————
In my most humble opinion, it is:
1. How catastrophic will it get?
2. Can we control CO2 to control earth’s temperature?
3. How can anyone stand Al Gore?
+1
The hockeystick and its loopy implications are still the big controversy.
Not only has there been failure to indicate any past climatic period, even a short one, which was stable and clement, there has been no serious attempt to establish the possibility, though there are constant attempts to imply it. So the single most critical matter is simply evaded. (Fortunately, not by tonyb.)
You cannot depart from a level which does not exist. You cannot stabilise what was never meant to be stable. You cannot “tackle” what has to progress. This upward curve in the holocene is another curve in a continuum which has been nothing but curves. The holocene is not a line, it’s a dropped spaghetti.
Some cooling in the 70s and some subsequent warming weren’t supposed to happen? Marked warmings like the Optimum and marked coolings like the LIA didn’t occur? The seas weren’t both higher and lower within the last few thousand years? Apart from up and down, what are the other ways for temps and sea levels to travel, since they can’t be static? Don’t be surprised if one of two favourites wins in a two-horse race.
We are so used to hearing references to “stabilising” “action” and “tackling” that we are numb to the sheer pottiness of dialling a preferred climate. We even refer to the only and actual climate as “internal variability”!
+1 mosomoso. I have yet to see any worthwhile and statistically valid estimate of climate trends suitable for policy makers.
Surely the most important question of all is: does it matter?
What’s the consequence?
What’s the damage function?
Is higher GHG concentration a net benefit or net cost?
+1
Well… there is an incorrect assumption there.
The question is “Whether the warming since 1950 has been dominated by GHG (dominated = more than 50%).”
If the post 1950s warming hasn’t been over 50% GHG – who cares? If GHG isn’t the dominant player it is a fact, perhaps a concern, but not a problem. If over 1/2 the warming is out of our control we are in the passenger seat. It makes more sense to figure out where the driver is going.
“What are the most controversial points in climate science?”
Number one for me is that periodic cooling always was, and always will be, a serious problem, while I have yet to see any tangible evidence that the small theoretical warming from increased CO2 will cause any harm – especially as increased forcing of the climate increases continental interior precipitation.
An important point of controversy, certainly the most immediate if not the most important, is whether severe weather events are increasing in severity or frequency due to AGW.
“What is the data that provides the greatest challenge to the dominant view of AGW?”
Hard to say. Top contenders:
1) No significant increase in lower troposphere temperature measured by satellite MSUs in past 18 years despite 10% increase in atmospheric CO2 during that time.
2) No statistical increase in severe weather events, or severity of individual events, in the past 60 years.
3) No acceleration in sea level rise over pre-1950 levels.
4) Greening of the earth in recent decades.
Prof. Curry:
Your second point
ties in with your observation about
and together encompass essentially the entire controversy relevant to the public (that is, beyond academic scuffles).
If warming is modest and relatively slow, then the effect will be the same as if we instituted an emergency global response to CO2 emissions. That is, we would be moving from a (presumed) rapid, multi-degree warming to a multi-generational, adaptable pace of warming.
This would shift the debate from crisis to careful planning. Or at least it should, if the debate were based on the actual science.
And how anomalous is the late 20th century warming in relation to the long term climate proxy record?
Seems it ain’t.
http://www.atmos.washington.edu/2006Q2/211/articles_optional/Soon2003_paleorecord.pdf
Another overarching issue:
Collusion among leading climate scientists to suppress contrary data & models from appearing in the literature. In a word “Climategate”. In a few words “Mike’s Nature Trick” and “Hide the decline”.
These lock-step, consensus models are all interlinked:
1. BBC: Big Bang Cosmology
2. SSM: Standard Solar Model
3. AGW: Man-Made Global Warming
4. SNM: Standard Nuclear Model
and all inconsistent with reality.
Why haven’t scientists spoken up against the nuttiness of the hockey-stick? How could they tolerate its becoming the main theme of the IPCC TAR? Are they so illiterate?
What does this fact tell about the trustworthiness of scientists in general, and scientific institutions? What does this fact tell about bias in science?
I might accept the opinions of Dr. Tamsin Edwards (for example) as an ‘honest broker’ once I’ve read her opinion about the scientific quality of the hockey-stick. As long as she prefers to keep quiet on this, I won’t trust her. As long as scientists refrain from denouncing this nonsense, I will refrain from trusting their opinion on any matter.
Dear Judith,
as you might know, there is a blog in Sweden called “Klimatupplysningen” (Climate Information or, more poetically, “Climate Enlightenment”) where a group of 8 people are trying to give information that does not support the IPCC agenda. We are all more or less sceptic (and so are many of our followers). This article about the most controversial points in climate science is so clear that I would like to translate it into Swedish and present it to a Swedish audience. I will of course note that it comes from your blog.
Best regards
Sten Kaijser
(I trust that you can read my email and will tell me if you have objections)
Hi Sten, by all means go ahead and use anything from my blog.
Sten, thanks for joining the discussion. Are skeptics the target of disrespect in your country? If not, perhaps your voices will be heard there.
Here in the USA many leading consensus scientists ridicule and demonize skeptics. Many who behave this way are government scientists paid with my tax money. A very troubling condition, indeed.
Another overarching issue:
Suddenly putting MSU temperature data at arms length due to it being contrary to the CO2 control-knob hypothesis with concurrent sudden embrace of much less reliable estimates of OHC as the main metric to measure changes in planetary heat budget.
How variable is the energy that Earth receives from the Sun?
1. Over a typical solar cycle, what variations routinely occur in the
a.) Intensity and wavelength of light of different frequencies – far infrared to visible to ultraviolet to x-rays to gamma rays to cosmic rays?
b.) Intensity and energy of particles of different mass and magnetic fields coming from the Sun?
2. During a sudden solar eruption, flare or coronal mass ejection?
3. Is the Sun correctly described by the SSM (Standard Solar Model) or might it in fact be the pulsar remains of the supernova that birthed the solar system?
See: “Solar energy,” Advances in Astronomy (submitted 1 Sept 2014) https://dl.dropboxusercontent.com/u/10640850/Solar_Energy.pdf or
“Solar Energy for school teachers,”
https://dl.dropboxusercontent.com/u/10640850/Supplement.pdf
Good post. I would only add that it is the narrowing of empirical estimates of climate sensitivity which is most policy relevant, and that means mostly narrowing the uncertainty in aerosol influence, both direct and indirect. It is the extremely high sensitivity “long tail” which justifies extreme and costly mitigation efforts. Narrowing the aerosol influence has the potential of moving the discussion from “alarmed and speculative” to “calm and reasoned”.
As others have noted, policy should be based ultimately on long term harms and benefits, but until the uncertainty in aerosol influence is much reduced, estimates of future harm (and benefits) also fall in the “alarmed and speculative” range.
Aerosols are the key issue.
The incredibly poor quality of the scientific work. With no quality control and no mechanism to check (other than Climate Audit-type amateurs), the ‘science’ is just ridiculously poor.
Much of what some folks think we know just ain’t so.
++
What are the most controversial points in climate science?
The most controversial point is the obvious one.
The null hypothesis is that life is wonderful, warmth is great, and CO2 is a beneficial minor constituent of the atmosphere.
All of the global warming talking points should have been fought, point by bitter point, and proven to reasonable certainty (at least 3 sigma, preferably 5) using real statistics (not “global warming statistics”) computed from actual data (not models and theory).
“What are the most controversial points in climate science related to AGW?”
The “A” part of the acronym is, and should be, the most controversial.
The entire IPCC, and camp-follower, science is predicated on a given–that mankind has an overwhelming effect on the Earth’s climate.
This “given”, or received belief, should be the most controversial, debated, and researched point in AGW.
And yet, it is not.
Thus, in the future, there needs to be actual scientific evidence that shows/proves/supports the received belief that mankind, and our activities, are the main drivers of climate.
Agreed. It is impossible to quantify
a.) anthropogenic variations in climate if you don’t know
b.) natural variations in climate.
> If you are looking for further details on one of these points, mention it in the comments and I or someone will provide relevant link.
I’m looking for more details on anything related to “Whether the warming since 1950 has been dominated by human causes” besides the Monster bestiary, pretty please with some sugar on it.
In the post curryja writes:
What is the data that provides the greatest challenge to the dominant view of AGW?
– Global data sets of surface temperature and atmospheric temperature (satellite) that show a hiatus in warming for 16+ years
– …
At this time this is not a challenge–it has the potential. When looked at from the perspective of the 99.9% coverage tolerance band* of 1970 to late-90s linear regression models for annual average GISS anomalies** versus time, the recent ‘observations’, i.e., those of the ‘hiatus’ years, are still comfortably contained.
Note that there are potential issues with any rigorous inference relating to the underlying regressions–normality of the residuals, some possible autocorrelation, the need to extrapolate the model, selection of the appropriate regression line, etc. Still, the tolerance band does suggest that the challenge of the ‘hiatus’ has not yet arrived. ‘Anomaly’ versus time is simple in concept but really entertainingly rich.
——-
* two-sided, alpha=0.05.
** I have not looked at the beta v. 6 UAH data, but do not expect anything different–tolerance band are generous.
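For readers who want to see what a check like this involves, here is a minimal sketch in Python. It uses a prediction interval as a simpler stand-in for the tolerance band, and synthetic data in place of the real GISS anomalies (the actual calculation was done in R with the ‘tolerance’ package), so it illustrates the mechanics only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for annual anomalies: 0.017 C/yr trend plus noise
years = np.arange(1970, 2015)
anom = 0.017 * (years - 1970) + rng.normal(0.0, 0.1, years.size)

# OLS fit over the 1970-1997 window, as in the comment
fit_mask = years <= 1997
x, y = years[fit_mask].astype(float), anom[fit_mask]
n = x.size
slope, intercept, r, p, se = stats.linregress(x, y)
resid = y - (intercept + slope * x)
s = np.sqrt(np.sum(resid ** 2) / (n - 2))   # residual standard error
sxx = np.sum((x - x.mean()) ** 2)

def prediction_interval(x0, alpha=0.05):
    """Two-sided prediction interval for one new observation at x0."""
    tcrit = stats.t.ppf(1 - alpha / 2, n - 2)
    half = tcrit * s * np.sqrt(1 + 1 / n + (x0 - x.mean()) ** 2 / sxx)
    centre = intercept + slope * x0
    return centre - half, centre + half

# Count how many post-1997 "observations" the band contains
inside = 0
for yr, ob in zip(years[~fit_mask], anom[~fit_mask]):
    lo, hi = prediction_interval(float(yr))
    inside += lo <= ob <= hi
print(f"{inside}/{(~fit_mask).sum()} later years inside the band")
```

A true tolerance band (controlling the coverage of all future observations, not just one) is wider still, which is part of why the ‘hiatus’ years sit comfortably inside it.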
As suspected, varying the start year of the hiatus does not change the containment. Pursuing this way of looking at the hiatus, another five years or so of the flat/low trend are needed before life would start to get interesting.
http://i1285.photobucket.com/albums/a593/mwgrant1/reg70-97-000-TolBand%201997-2000_zpsqql0fyth.png
In a background trend of nearly 0.2 C per decade, when you have an anomalous year of 0.3 C above the trend line (1998), a 15-20 year hiatus would be expected while the trend line catches up. This is why the hiatus is always constructed to include 1998 near its beginning. It is a construct built on a warm anomaly. Without 1998 you have nothing you can hang that hiatus on.
http://www.woodfortrees.org/plot/gistemp/mean:12/from:1950/to:1997.5/plot/gistemp/from:1998.5/mean:12
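The arithmetic behind that expectation is simple enough to write down (a back-of-envelope sketch using Jim D’s own numbers):

```python
# If 1998 sits ~0.3 C above a background trend of ~0.2 C/decade,
# the trend line needs roughly this long to "catch up" to the spike:
anomaly_above_trend = 0.3    # C, the 1998 El Nino excursion
trend_per_year = 0.2 / 10.0  # C per year

catch_up_years = anomaly_above_trend / trend_per_year  # about 15 years
```

That 15-year figure is where the “15-20 year hiatus would be expected” claim comes from.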
Hi Jim D,
Both camps pin too much on the regression game while the game is still in play. Most people don’t seem to know the rules of the game. That gets to the thrust of my comment–that while there may be questions about the validity of and use of a particular application of a tolerance band/interval, that in general is the statistical methodology one would expect to be used for making predictions of numerous future observations when using regression. And it is hardly surprising that one can drive a truck through the tolerance bands/intervals.
This also touches indirectly on the loose terminology used by many people who show these graphs with the data, a regression line, and some band called ‘uncertainty’ or maybe 2 sigma. It is not clear exactly what statistical entities are being plotted, and the reader is left to their own devices.
What does your plot communicate? To me you have just put up a plot and left the reader to fill in any inference, formal or informal. The hiatus or pause or leveling off is simple: it is what has been ‘observed’. [If one stretches the definition of ‘observe’.] However, this back-and-forth on defining ranges is a very good segue. While we surely can make plots of [average] anomaly versus date/time, we seem to have lost sight of the fact that the time-anomaly relationship is not causal. The implicit assumption is that at some level date/time serves as a proxy for the actual aggregated causative factors. This is really an extraordinary situation that merits discussion. [Yes, I impose the requirement that for a relationship to be a physically useful predictive relationship it must be causal.] Now this opens the prospect of uncertainty in the ‘independent’ variable, here the date/time, and suggests application of other forms of regression. These may or may not make a difference. My point is that any realistic approach has to explore and document such matters–there are more.
BTW, below I just posted the image for 1970-1997. As the original comment suggests, I looked at other late-nineties endpoints; it does not change the outcome. Whether I use 1997, 1998, or 1999 as the last year for the regression range, the outcome is the same: at this point the ‘hiatus’ is not a realized challenge. I’ve also started playing a little with going the other way–regressing on the hiatus/pause/whatever and seeing how well the preceding years are bound by the tolerance band/interval–but I have not gotten back to it. It requires effort, as such games elevate the need for serious diagnostics on, and detailed discussion of, all of the regressions. Also, this is well-trodden soil, and any serious effort needs a little due-diligence research and attribution. I like my seat in the peanut gallery. Maybe I’ll put it on my nascent blog, where I am not driven by response time.
Cheers
I think any trend less than 30 years is not worth much. They are just too sensitive to end points. If you take 30-year trends ending in 2000 and ending in 2015 or years between, you get a robust result of 0.16-0.18 C per decade. 15-17 year trends are an especially bad choice, being 1.5 solar cycles, so you have short-term solar variations aliased in there.
http://www.woodfortrees.org/plot/gistemp/mean:12/from:1950/plot/gistemp/from:1970/to:2000/trend/plot/gistemp/from:1985/to/trend
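Jim D’s endpoint-sensitivity claim is easy to illustrate with a toy Monte Carlo: fit trends over every 15-year and every 30-year window of a synthetic “trend plus white noise” series and compare the scatter. The numbers are illustrative only, not a model of real climate noise:

```python
import numpy as np

rng = np.random.default_rng(1)

years = np.arange(1950, 2016)
spread = {}
for window in (15, 30):
    slopes = []
    for _ in range(200):  # 200 synthetic realizations
        # 0.017 C/yr trend plus white noise as a crude stand-in
        series = 0.017 * (years - years[0]) + rng.normal(0, 0.1, years.size)
        for start in range(years.size - window + 1):
            x = years[start:start + window].astype(float)
            slopes.append(np.polyfit(x, series[start:start + window], 1)[0])
    spread[window] = np.std(slopes)

# Short windows scatter far more around the true 0.17 C/decade
print(f"15-yr trend sd: {spread[15] * 10:.3f} C/decade")
print(f"30-yr trend sd: {spread[30] * 10:.3f} C/decade")
```

The 15-year trend spread comes out roughly three times that of the 30-year spread, which is the sense in which short trends are “not worth much.”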
Jim D
Just to be sure, you realize that I am saying–as others have said elsewhere–that the observations since around 2000 are perfectly fine, in that they are contained inside the tolerance bands.* No more, no less. With one exception I have no interest in the trend over the hiatus period. Now, I am interested in the tolerance bands** associated with such calculations and the extent to which they contain the ‘observations’ of the preceding 30 or so years. There are some resampling-based approaches–thinking permutation tests and runs tests–that also might be interesting.
——
*These particular models are based on 27, 28, and 29 year ranges.
** Yes, the number of observations is well under 30. However, that constraint is reflected in the widths and shapes of the tolerance bands. All of that is probably a lot of work to say ‘time will tell’.
I sure am butchering my tags today… Monday!
Jim D | May 4, 2015 at 11:41 am | Reply
In a background trend of nearly 0.2 C per decade, when you have an anomalous year of 0.3 C above the trend line (1998), a 15-20 year hiatus would be expected while the trend line catches up. This is why the hiatus is always constructed to include 1998 near its beginning. It is a construct built on a warm anomaly. Without 1998 you have nothing you can hang that hiatus on.
http://www.woodfortrees.org/plot/gistemp/mean:12/from:1950/to:1997.5/plot/gistemp/from:1998.5/mean:12
To every graph, there is an equal and opposite graph:
http://www.woodfortrees.org/plot/wti/mean:12/from:1950/to:1997.5/plot/gistemp/from:1998.5/mean:12
The trend line in the 36-year satellite record is well under 0.15 C/decade. The UAH reanalysis about to be released has it under 0.12 C/decade. That’s also the instrument record trend since 1950.
You need to face reality, Jimmy. The warmunists got it wrong. They latched onto a couple of higher than average decades 1980-2000 and pretended it was only going to get worse from there. It didn’t. You’re the denier now. LOL
http://i367.photobucket.com/albums/oo120/chandlerray/Wheel/charlie_brown.jpg
The 30-year trend did not change during the pause and remained over 0.16 C per decade. The pause is part of a self-canceling blip with the little-mentioned 1998 step just before it, so it just represents cherry-picking at its worst.
Jim D, I haven’t been following this thread, but I caught your comment that “The pause is part of a self-canceling blip with the little-mentioned 1998 step just before it …” Two responses: first, the 1998 rise is not “little-mentioned”; many have noted, here and elsewhere, that much of the global warming was related to the 1998 El Nino, which accounted for much of the post-1975 temperature rise. Second, we’ve had no statistically significant warming since 1998, yet you tell us that “the pause is part of a self-cancelling blip.” Well, I don’t know your definition of “cancelled,” but we’ve seen a blip and a plateau; at first sight, you seem to imply that the plateau is transient. I can’t see the future, and I don’t know what the “self-cancelling” mechanism might be. The term implies that there is a natural correcting mechanism at work when there’s an unusually sharp change. Plausible; mosomoso, Pope and others will tell you that that’s been going on for centuries to millennia. What mechanism do you see for it?
As I mentioned elsewhere, the pause only exists because of the 1998 anomalous year. You can demonstrate that for yourself. If people were honest about the pause they would state that it is not continuous with previous years, but after a large step up that cancels it in any longer average.
Wrong, Jimbo. The pause exists even if the trend line begins in 2000 or 2005. Jimmy D is the new denier. What a laugh riot. The denier shoe is on the other foot!
Last 10 years flat as a pancake:
http://woodfortrees.org/plot/rss/last:120/plot/rss/last:120/trend
Last 15 years flat as a pancake:
http://woodfortrees.org/plot/rss/last:180/plot/rss/last:180/trend
Last 36 years total rise 0.44C
http://woodfortrees.org/plot/rss/every/plot/rss/every/trend/detrend:0.44
Last 36 years decadal trend (0.44 / 3.7) 0.12C/decade
Warmunists lost, Jimmy. 0.12C/decade is not cause for alarm. The pause killed the cause. We are now in a “wait & see” attitude.
And before you go cherry picking UAH instead of RSS satellite data, there is about to be a UAH reanalysis released that makes UAH essentially the same as RSS:
http://www.drroyspencer.com/2015/04/version-6-0-of-the-uah-temperature-dataset-released-new-lt-trend-0-11-cdecade/
Jim D,
At this point in time a person is going to see what a person wants to see. Quantitatively discerning a pause or lack of a pause is just beyond our means at this time. Frankly, as I noted earlier, I have reservations about any anomaly-versus-time calculation or plot as a practical predictor (quoting my earlier comment, with corrections):
…The hiatus or pause or leveling off is simple: it is what has been ‘observed’. [If one stretches the definition of ‘observe’.] However, this back-and-forth on defining ranges is a very good segue. While we surely can make plots of [average] anomaly versus date/time, we seem to have lost sight of the fact that the time-anomaly relationship is not causal. The implicit assumption is that at some level date/time serves as a proxy for the actual aggregated causative factors. This is really an extraordinary situation that merits discussion. [Yes, I impose the requirement that for a relationship to be a physically useful predictive relationship it must be causal.] Now this opens the prospect of uncertainty in the ‘independent’ variable, here the date/time, and suggests application of other forms of regression. These may or may not make a difference. My point is that any realistic approach has to explore and document such matters–there are more.
At this time both sides of the hiatus melee are straining at gnats.
(I apologize to anyone who has struggled with my multiple typos on this thread. It has been a tough day.)
Jim D,
This is why the hiatus is always constructed to include 1998 near its beginning.
No – you are misunderstanding how the hiatus is constructed. 1998 happens to be included if you simply start from the present and look backwards in time until you get a statistically significant trend. That’s the fairest way to look at it without cherry picking. When you do that, the “hiatus” is about 18 years (depending on the data set).
The 1998 El Niño was followed by quite a deep La Niña, so you can’t start there either.
It can’t be statistically significant because if you take one or two years longer, the result changes significantly. This is not a sign of a robust choice. On the other hand 30-year trends ending any time during the “pause” are robustly near 0.17 C per decade, not being sensitive to start dates.
Jim D,
So the whole trend from 1970 to 1998, on which the consensus hangs its hat, is not worth much by your definition.
Actually that trend was faster than the models predicted, mainly because of 1998.
Again, Jim, you are misunderstanding the application. You are dead right that if you add years on, it changes the significance–that is the idea behind it. In fact what you do is add on months: you update on a monthly basis.
What happens is that the length of the period of no significant warming varies depending on the monthly result. So if you have a particularly warm month, it actually shortens the length of the period of no warming. A cooler month will extend it.
The point is, rather than take a starting point and continue to the present, which can leave you open to charges of cherry-picking the start date, you simply start from the present and see how far back you can look before you get a significant positive trend–as you certainly will, because of the warming since the mid-1800s. It turns out that, depending on the data set, you have to look back further than about 18 years in order to get a statistically significant positive trend.
Thus – the hiatus.
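The look-back procedure described here is short enough to sketch directly (Python, synthetic data; a real analysis would also have to handle autocorrelation, which this ignores):

```python
import numpy as np
from scipy import stats

def pause_length(years, anomalies, alpha=0.05):
    """Scan window lengths from longest to shortest, each window ending
    at the present, and return the first (longest) window whose OLS
    trend is NOT significantly positive at the given one-sided level.
    This is the 'look back until you hit significance' procedure."""
    n = len(years)
    for w in range(n, 2, -1):
        res = stats.linregress(np.asarray(years[-w:], float),
                               np.asarray(anomalies[-w:], float))
        # convert the two-sided p-value to a one-sided test for warming
        p_one = res.pvalue / 2 if res.slope > 0 else 1 - res.pvalue / 2
        if p_one >= alpha:
            return w
    return 0

# Toy series: 30 years of steady warming, then 17 flat years, plus noise
rng = np.random.default_rng(2)
yrs = np.arange(1968, 2015)                       # 47 years
ramp = 0.017 * np.arange(30)
series = np.concatenate([ramp, np.full(17, ramp[-1])])
series = series + rng.normal(0, 0.08, yrs.size)

print(pause_length(yrs, series), "years with no significant warming")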
Jim D
You’ll never convince them the way you are going. The simple thing to do is tell them to delete the rogue year completely and then redo the analysis.
Jim D | May 5, 2015 at 10:00 pm |
It can’t be statistically significant because if you take one or two years longer, the result changes significantly. This is not a sign of a robust choice. On the other hand 30-year trends ending any time during the “pause” are robustly near 0.17 C per decade, not being sensitive to start dates.
http://www.giss.nasa.gov/research/briefs/hansen_07/fig1x.gif
The “rise” that preceded the “hiatus” lasted 20 years, 1978-1998. When the hiatus hits 2018, either the hiatus is significant or the rise isn’t.
Oh, and the 30-year trend from 1930 to 1960 is less than zero. From 1940 to 1970 the trend is much less than zero.
At this time hiatus or non-hiatus is open, but…
http://www.youtube.com/watch?v=3nxbUNFErkA
Sigh…
… a sigh is still a sigh,
the fundamental things apply,
as time goes by. )
Sam/Casablanca.
:O) A sighin’ of the times.
For the times they are a’changin’!
While I did not intend to get into the assumptions needed for different applications of ordinary linear regression, the direction of some comments suggests that providing some information in that regard might be useful. Helsel and Hirsch wrote an excellent book as a part of the USGS series on statistics in water resources research. It is one of my most treasured and well-worn books on statistics.
So, on the assumptions of linear regression…
from Helsel and Hirsch:
http://pubs.usgs.gov/twri/twri4a3/pdf/twri4a3-new.pdf
9.1.1 Assumptions of Linear Regression
There are five assumptions associated with linear regression. These are listed in table 9.1. The necessity of satisfying them is determined by the purpose to be made of the regression equation. Table 9.1 indicates for which purposes each is needed.
The assumption of a normal distribution is involved only when testing hypotheses, requiring the residuals from the regression equation to be normally distributed. In this sense OLS is a parametric procedure. No assumptions are made concerning the distributions of either the explanatory or response variables. The most important hypothesis test in regression is whether the slope coefficient is significantly different from zero. Normality of residuals is required for this test, and should be checked by a boxplot or probability plot. The regression line, as a conditional mean, is sensitive to the presence of outliers in much the same way as a sample mean is sensitive to outliers.
Here is the table. The column relevant to formal inference from an ordinary linear regression is the last one.
http://i1285.photobucket.com/albums/a593/mwgrant1/Assumption%20of%20OLR%20from%20Helsel%20and%20Hirsch_zpsnjbwatoy.png
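A minimal version of the residual checks in that table might look like this (synthetic data; a Shapiro-Wilk test and a probability-plot correlation standing in for Helsel and Hirsch’s boxplot/probability-plot advice):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Fit OLS to synthetic "anomaly" data, then examine the residuals,
# since the slope hypothesis test assumes normally distributed residuals
x = np.arange(1970, 1998, dtype=float)
y = 0.017 * (x - 1970) + rng.normal(0, 0.1, x.size)

fit = stats.linregress(x, y)
residuals = y - (fit.intercept + fit.slope * x)

# Shapiro-Wilk: a small p-value would argue against normality
w_stat, p_value = stats.shapiro(residuals)

# Probability-plot correlation: near 1.0 suggests roughly normal residuals
_, (pp_slope, pp_intercept, pp_r) = stats.probplot(residuals)

print(f"Shapiro-Wilk p = {p_value:.3f}, probability-plot r = {pp_r:.3f}")
```

With genuinely normal residuals, as here, both diagnostics pass; bimodal residuals of the kind discussed below in the thread would fail them.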
This material concisely gets to some of the points that I caveated in the last paragraph and which Rud also raised. I hope that it may give pause to those on both ‘sides’ arguing too strenuously over the pause. :O) This does not prevent one from trying different methodologies, and that can provide insights, but perhaps with less ardor and certainty.
Still I am grateful for the back and forth because it pushes me to delve further into the methodologies.
BTW, one of the strong points of the book by Helsel and Hirsch is the presentation of non-parametric methods, useful when not all of the assumptions for the different parametric tests hold. As one might expect, the statistical power falls off, but such is life.
Check out the new blog Count Bayesie http://www.countbayesie.com, note the Count will be doing a guest post at CE sometime in June
Yes, I went to the site one day after I noticed your new line-up. Highlighting Bayesian perspectives is a good addition in view of the nature of climate change research. I look forward to the post. Thanks, Judith.
[My apologies for the excessive typos–my mind and eyes are seriously stretched and the rapid response of comments is not in my comfort zone. Oh yeah, there is a matter of impatience.]
Here is an example showing the two-sided 95% confidence band containing the regression line, the two-sided 95% prediction band for a single observation, and the two-sided 95% tolerance band with 99.9% coverage of all future observations.
http://i1285.photobucket.com/albums/a593/mwgrant1/reg70-97-000_zpsyxwiowlw.png
Goes here: http://judithcurry.com/2015/05/04/what-are-the-most-controversial-points-in-climate-science/#comment-699715
mw – +10
Can you be more specific about your statistical methodology? As you surely know, the OLS BLUE theorem breaks down in the presence of autocorrelation, since the residual error is not random. Your tolerance bands look suspicious. An old PhD-level econometrician, who learned the hard way decades ago that hammers and screws do not fasten wood.
Hi ristvan,
I will address all of your good points. The original post comment at
http://judithcurry.com/2015/05/04/what-are-the-most-controversial-points-in-climate-science/#comment-699715
did caveat the plots and the approach (linear least squares regression tolerance intervals* with specified alpha and coverage).
“Note that there [are] potential issues with any rigorous inference relating to the underlying regressions–normality of the residuals, some possible autocorrelation, need to extrapolate the model, selection of the appropriate regression line, etc. ….”
Monthly GISS anomalies were obtained from
http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
and annual arithmetic means calculated from the monthly values. The interval calculations were done in R using the base package and the ‘tolerance’ package.
Yes, I was and am aware of the potential complication from autocorrelation. That is one reason for the caveat. Also, if one examines the residuals from the regression–qqnorm plots and histograms–they will see that they appear to be bimodal–another problem. That may well be consistent with some autocorrelation. In addition, I expect that if anything the presence of any autocorrelation will likely increase any intervals calculated. I also refer to the results as suggesting that the challenge of the ‘hiatus’ has not yet arrived. Enough on that.
Perhaps the width of the tolerance band looks a little/suspiciously wide. Also, to me too few points seem to lie in the vicinity of the regression line; that was quite noticeable in the beginning. Anyone seriously looking at trends definitely needs to go beyond simple linear regression–and if calculated intervals are needed, then the correct one for comparisons with future observations should be used. I appreciate your question and would be interested in your response to this ‘answer’. As I also stated above, both camps pin too much on the regression game while the game is still in play.
HTH
Best regards,
mwg
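For concreteness, the autocorrelation caveat discussed above can be checked with a couple of standard diagnostics. A sketch, using synthetic AR(1) residuals for illustration only:

```python
import numpy as np

def lag1_autocorr(resid):
    """Lag-1 sample autocorrelation of the residuals."""
    r = np.asarray(resid, float) - np.mean(resid)
    return float(np.dot(r[:-1], r[1:]) / np.dot(r, r))

def durbin_watson(resid):
    """Durbin-Watson statistic: ~2 means little lag-1 autocorrelation;
    well below 2 means positive autocorrelation, which makes naive
    OLS intervals too narrow."""
    r = np.asarray(resid, float)
    return float(np.sum(np.diff(r) ** 2) / np.sum(r ** 2))

# Synthetic AR(1) residuals with rho = 0.5
rng = np.random.default_rng(4)
resid = np.zeros(200)
for i in range(1, 200):
    resid[i] = 0.5 * resid[i - 1] + rng.normal(0.0, 0.1)

print(f"lag-1 r = {lag1_autocorr(resid):.2f}, DW = {durbin_watson(resid):.2f}")
```

Since DW is roughly 2(1 − r), a series with lag-1 autocorrelation near 0.5 gives a DW near 1, a clear flag that the simple OLS intervals need widening.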
What is the data that provides the greatest challenge to the dominant view of AGW?
Christian Schlüchter learned recently that Hannibal didn’t cross the icy Alps to invade Rome; Hannibal’s army crossed a forest. We now know that glaciers come and go on a much faster Earthly timetable than ever realized. Sure, the Alpine glaciers were gone 10,000 years ago. Moreover, the glaciers also were gone 2,000 and 4,000 years ago, and the reasons for their retreat obviously were not caused by modernity.
For the last 100,000 years Earth has mostly been locked in an ice age punctuated only briefly by periods of warming such as the interglacial that gave birth to our species. Earth has been locked in ice age conditions for more than 80% of the time over the last one million years. Those are all of the facts. The hiatus won’t last forever –e.g. “when Nigel Calder and I updated our book, The Chilling Stars,” says Henrik Svensmark, “we wrote a little provocatively that, we are advising our friends to enjoy global warming while it lasts.”
To paraphrase David Roberts, you go to work in the climate you have, not the climate you might want. And, this country has to go back to work soon.
The ice does advance after hundreds of years of more snowfall in warm times and then it gets cold. The ice does retreat after hundreds of years of less snowfall in cold times and then it gets warm.
popesclimatetheory: “The ice does advance after hundreds of years of more snowfall in warm times and then it gets cold.”
Like this, perhaps?
Glacier-like hazards found on Ben Nevis
Hazards common in arctic and alpine areas but described as “extremely unusual” in the UK during the summer have been found on Ben Nevis.
A team of climbers and scientists investigating the mountain’s North Face said snowfields remained in many gullies and upper scree slopes.
On these fields, they have come across compacted, dense, ice-hard snow called neve.
Neve is the first stage in the formation of glaciers, the team said.
http://www.bbc.co.uk/news/uk-scotland-highlands-islands-28885119
Whether the warming since 1950 has been dominated by human causes
No, of course not. The Roman and Medieval warm periods happened without manmade CO2.
How much the planet will warm in the 21st century
The same amount as the warming in the Roman and Medieval Warm periods.
It is a natural cycle, warm periods always have followed cold periods.
The OP/ED, article, blog post that I would like to see for us laymen: When one Scientist says “X” and another says “Y” on a specific topic — Who should you believe?
Why is there so much nuanced (for us laymen) scientific confrontation? Arctic sea ice, for example–where one group says we should focus on “extent” and others say, nope, we should focus on “volume”?
Why should anyone who is environmentally inclined believe anything written in the OP/ED section of the Wall St. Journal or by Heritage, Cato? On the flip side, why should Grist, Huffington Post, or Guardian ever be believed?
Who are the good umpires (for us laymen)? What are the characteristics of a “good umpire”?
Maybe we should be listening to scientists trying to be “peace-makers” more — like Dr. Ramanathan with his “Fast Mitigation”.
Full circle back to the 1970’s? http://www.breitbart.com/big-government/2015/05/03/global-warming-low-sun-spot-cycle-could-mean-little-ice-age/
Judith, I would say that this issue is also of some importance over the long term:
“How strong are carbon-cycle feedbacks.”
The mainstream claim that peak warming is related closely to cumulative carbon emissions, and that if emissions cease GMST will remain nearly constant on a centennial timescale, relies on carbon-climate and carbon-carbon feedbacks being strong, as well as on ECS being high.
I would tend to rephrase your first issue more quantitatively:
“Whether and to what extent the warming since 1950 has been dominated by human causes”
I regard ” How much the planet will warm in the 21st century” as certainly being a key issue, and strongly dispute ATTP’s claim that this issue is not particularly scientifically controversial.
thx nic, i was trying to think of a good carbon cycle question
Disappearing Arctic ice is extremely beneficial for biodiversity. Most people love a warm climate: pensioners move to Arizona, Florida, Southern France and Spain. Even Arrhenius and Callendar were convinced of the net benefits of global warming for humans.
When was it decided that warming is bad?
“When was it decided that warming is bad?”
When someone discovered they could get paid for it.
In their paper “Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise”
( http://www.nature.com/srep/2015/150421/srep09957/full/srep09957.html )
and Comment (http://www.sciencedaily.com/releases/2015/04/150421105629.htm )
Patrick T. Brown et al state that “Under the IPCC’s middle-of-the-road scenario, there was a 70 percent likelihood that at least one hiatus lasting 11 years or longer would occur between 1993 and 2050”, but that “There is no guarantee,……, that this rate of warming will remain steady in coming years, ……Our analysis clearly shows that we shouldn’t expect the observed rates of warming to be constant. They can and do change.”
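The flavor of the Brown et al. calculation can be reproduced with a toy Monte Carlo: a steady forced trend plus AR(1) “unforced noise” routinely throws up decade-long flat spells. The parameters below are illustrative guesses, not the paper’s:

```python
import numpy as np

rng = np.random.default_rng(5)

n_years, n_runs = 58, 400            # a 1993-2050 span, 400 realizations
trend, rho, sigma = 0.018, 0.6, 0.1  # C/yr trend; AR(1) rho; innovation sd

def has_flat_spell(series, window=11, max_slope=0.0):
    """Does any `window`-year stretch have an OLS trend <= max_slope?"""
    x = np.arange(window, dtype=float)
    for s in range(series.size - window + 1):
        if np.polyfit(x, series[s:s + window], 1)[0] <= max_slope:
            return True
    return False

count = 0
for _ in range(n_runs):
    noise = np.zeros(n_years)
    for i in range(1, n_years):      # AR(1) "unforced" noise
        noise[i] = rho * noise[i - 1] + rng.normal(0.0, sigma)
    count += has_flat_spell(trend * np.arange(n_years) + noise)

print(f"{count / n_runs:.0%} of runs contain an 11-year flat spell")
```

With plausible noise parameters, a large fraction of realizations contain at least one 11-year window with zero or negative trend, even though every realization warms in the long run–which is exactly the paper’s point that observed warming rates “can and do change.”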
Well, while without doubt “they can and do change”, the critical scientific question for us fence-sitters to have answered (Judith? salutations!) is why the present temperature stasis, dominated by unforced noise, will not at some point end, and why (a political/economic question) catastrophic AGW will not return as an eventual worry for future generations. Living in somewhat wintry, cold Ontario, I of course hope the ‘skeptics’ are right! On the other hand, with our new warmer summers I have had to get used to those darn Japanese beetles eating my beans and roses! A negative feedback! On the good side, the vastly reduced cost of heating my house following an extensive program of insulation–promoted by a government scared of global warming–was definitely a positive feedback; I think, therefore, that on the basis of that economic argument I will continue to carry out a cost-benefit analysis of installing solar panels, at least to feed the batteries of my electric bicycles! When you get old, everything helps!
Maybe you could explain how the data you listed provides a “great challenge” to the dominant view. I don’t think the sea level is any real “challenge” to the dominant view. We expect it to accelerate based on physics, and it has. We see melting as predicted in the Arctic (land and sea ice) and the Antarctic (land ice). The hiatus in a short-term surface record is also not a challenge if there are explanations for the “hiatus” within the context of the theory (and there are). Even with the recent aerosol data, the mainstream view is still for ECS over 2 degrees. I think your examples seem to be pretty thin gruel.
“We expect it to accelerate…”
I don’t think ‘YOU’ necessarily constitute ‘WE’, Joseph…
In fact, I’m sure you don’t.
As of this writing, the word ‘convection’ is absent in over 100 replies. Evidently uncontroversial, it is the one issue which, in my mind, undermines all consensus calculation. Only Chris Essex has commented on the failure to allow convective compensation for decreases in the radiative conductance of the troposphere. To be sure, there is a ‘convective adjustment’ which constrains thermal gradient increases beyond some arbitrary value to avoid implausible temperature increases. Constraints are made in many calculations – but there is also an obligation to discuss how they affect results. The CliSci attitude appears to be to label the outcome ‘radiative-convective equilibrium’, for equilibrium, once achieved, requires no further work to persist. Can anyone cite a value for convective energy flux at ‘equilibrium’? A thermodynamic steady-state, however, is characterized by the constant work or dissipation required to avoid relaxation towards thermodynamic equilibrium. In nontrivial physical models, terms dependent on both potentials and potential gradients appear. The latter are silent in climate modeling.
Understanding good scientific hypotheses and theories as those which can provide understanding and are capable of falsification, it seems to me there should be a controversy over how the various hypotheses might be rejected. What conditions could occur across the globe, over various time frames, that would call global warming theory into doubt? Put more broadly, which projected events would support the hypothesis and which would challenge it? Relatedly, what is good evidence looking back, post hoc? That one pole has more ice and the other less? Storms? …
Very good point. Oftentimes the null hypothesis gets inverted. And there is much goalpost-moving, as in comments above arguing the pause is insufficient to falsify climate models, when in 2011 the consensus (Santer) said it now would be.
How do you get this from what he said?
Rud,
Pause?……….What pause?
Rud – In case you didn’t see it, I suggested something that you might want to write on in this week’s Energy Review. It’s about how people expecting renewables and batteries to change the market tend to rely on today’s market drivers remaining unperturbed. I understand if you don’t want to (I can’t write on everything people request of me), but if you are interested I think you’d do a great job, and it’s a crucial point that hasn’t received much attention. (I.e., if batteries are employed at wide scale to do arbitrage, the benefits of arbitrage will decrease.)
JCH, from the core idea of hypothesis falsification. Feynman, Popper, and all that. Chapter One of The Arts of Truth. I thought I had given sufficient minimalist examples for you to comprehend.
Aplanningengineer, I did not catch that. Will go have a look, since I have been front and center on global energy storage technology for a long while. If I can contribute something new to CE, I will draft something up for you and Judith. She can get you my offline coordinates. Many thanks for your several excellent electricity grid systems posts.
JCH, as I recall Santer said that a pause would need to last 15 years to be a significant refutation of the CAGW case. That threshold has long been passed.
LMAO:
http://sealevel.colorado.edu/files/2015_rel2/sl_ns_global.eps
‘JCH, as I recall Santer said that a pause would need to last 15 years to be a significant refutation of the CAGW case. That threshold has long been passed.”
That is NOT what he argued.
Santer’s reference to a pause in global warming.
http://onlinelibrary.wiley.com/doi/10.1029/2011JD016263/abstract
Mosher, beth: “Because of the pronounced effect of interannual noise on decadal trends, a multi-model ensemble of anthropogenically-forced simulations displays many 10-year periods with little warming. A single decade of observational TLT data is therefore inadequate for identifying a slowly evolving anthropogenic warming signal. Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature.”
OK, I’ll leave Rud to answer further if he so chooses. I’m not sure that his 5.01 sufficiently clarifies his 2.39.
I am studying to become an obfuscating politician. What do you think?
Try Dr John Cook. I believe he gives on-line courses that are available through all the major universities around the world.
Judith,
I think making predictions about future progress in climate modeling is also a Wicked Problem. So the topic is best avoided.
Will
All predictions are inherently flawed; the future is never what we predicted, it always surprises us.
A key element in any projection of warming this century and beyond is, obviously, the level of CO2 emissions. However, at least by my analysis, emissions in the mid to latter half of this century will be substantially constrained by finite accessible fossil fuel reserves and resources. To the degree this is correct, the RCP 8.5 emission scenario, for example, is almost certain to be highly unrealistic beyond mid-century. Any policy action not taking this limitation into account promises to be equally unrealistic.
Well…
It is worse than that. Look at fhhaynie’s site.
The 30 year trend is 3.5% annual growth in environmental absorption vs 2% average annual growth in emissions. This crude figure of merit is much closer to fhhaynie’s projections than the IPCC projection.
Maximum CO2 less than 500 PPM is basically the death of CAGW as a working theory of reasonable people for planning purposes. Much like reasonable people don’t plan on a Chicxulub sized asteroid strike next year.
Click on my name or go to http://www.retiredresearcher.wordpress.com to get a detailed estimate of the relative contributions (natural/anthropogenic) to the accumulation of CO2 in the atmosphere (with confidence limits).
This is very interesting, thanks for posting the link, I look forward to your future posts
Thanks for taking a look. I hope some young scientist or economist will take some of these ideas, replicate or improve on the results and publish,
Fhhaynie, why not just guest post here? Granted, peer review is brutal. (Judith posted an erratum to one of my ‘stupid’ mistakes. I was devastated, but what an object lesson in how science has progressed beyond printed peer review. Won’t make that mistake again, even if Willard on a previous thread disagreed.) Try it out.
Yes, I am definitely open to a carbon cycle post from Fhhaynie
What is the process? E-mail me or if you are able, copy my blog into yours.
What’s the process? How do you cross post a rather lengthy blog? Is Dr. Curry the person to do it? I quit publishing in journals over 20 years ago after I retired. I welcome any critique on blog or any other that wishes to cross post it. Click on my name.
fhhaynie,
Get over the fact that you stopped publishing 20 years ago! You have some great insights. I, for one, want to hear more.
OK? Will posting on Dr. Curry’s blog do? Publishing in journals is a lengthy process.
fhhaynie, I’m not qualified to comment on your blog post or the exchanges there, but am interested in the last two comments;
Rick A: “Thank you for updating this analysis. 460 ppm by 2060. Could you estimate when we will hit 560 ppm? I am interested in that number because that is a doubling of CO2 from 280 ppm.”
Fhhaynie: “My analysis suggests that even with continued anthropogenic emissions at increasing rates, 560 ppm will not be reached in this century. Natural emissions are expected to decrease at a faster rate after around 2060.”
If this is correct, it has significant policy implications: whatever may result from any further warming, whether harmful or beneficial (and there will be a mixture), would on your estimate occur over a very long time period, which argues against the case for urgent emissions reductions (a case which I do not in any event accept as a sensible approach).
Faustino
I agree. We need a good economist to do some “what if” cost-benefit analyses.
I am interested too. Mr Haynie’s analysis is IMO a breakthrough in distinguishing CO2 from “natural” or “organic” sources and that from “anthropogenic” or “inorganic” sources and their relative contributions to the accumulation rates.
While I remain sceptical that CO2 should be researched to the exclusion of other known factors affecting climate, such as clouds, wind and ocean currents, Mr Haynie’s work appears to have falsified CAGW from human emissions of CO2.
There is only 760 GT of carbon in fossil fuel reserves.
At the current 2% annual increase in emissions we run out of fossil fuel in 47 years.
At year 44 the atmospheric absorption exceeds the emissions and the atmospheric CO2 level tops out. The CO2 level tops out around 473 PPM.
At year 48 the atmospheric CO2 level starts dropping from 473 PPM, at about 11 PPM per year.
560 PPM? Really? When will we hit 560 PPM of atmospheric CO2? Not now, not ever, never.
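The back-of-envelope above can be reproduced with a toy compounding model. All the inputs below are the round numbers quoted in this thread (760 GtC of reserves, emissions growing ~2 %/yr, net environmental absorption starting at ~55 % of emissions and growing ~3.5 %/yr, ~2.12 GtC per ppm); the starting values `e0`, `a0`, and `ppm0` are my own illustrative choices, so this is a sketch of the argument's structure, not a validated projection:

```python
def project(years=60, e0=10.0, a0=5.5, ge=0.02, ga=0.035,
            reserves=760.0, ppm0=400.0, gtc_per_ppm=2.12):
    """Toy projection: returns (year absorption overtakes emissions, peak ppm)."""
    e, a, ppm, burned = e0, a0, ppm0, 0.0
    peak_ppm, crossover = ppm0, None
    for yr in range(1, years + 1):
        if burned >= reserves:
            e = 0.0                       # reserves exhausted: emissions stop
        burned += e
        ppm += (e - a) / gtc_per_ppm      # net annual change of the airborne stock
        peak_ppm = max(peak_ppm, ppm)
        if crossover is None and e > 0 and a >= e:
            crossover = yr                # absorption overtakes emissions
        e *= 1 + ge                       # emissions growth (commenter's 2 %/yr)
        a *= 1 + ga                       # absorption growth (commenter's 3.5 %/yr)
    return crossover, peak_ppm

crossover, peak = project()
print(crossover, round(peak))
```

With these assumed inputs, absorption overtakes emissions in the early-to-mid 40s of years and CO2 tops out in the 460-470 ppm range, broadly consistent with the 44-year / 473 ppm figures in the comment and well short of 560 ppm.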
PA | May 4, 2015 at 9:04 pm |
I believe I understand your analysis based on reserves.
My question is what about all the hydrocarbons not yet discovered and not yet in the reserves?
Do we have a handle on what the potential maximum amount of all potential hydrocarbons is?
Peak oil has been predicted many times and we keep finding new oil to add to the reserves – so I am just curious what your take is on that issue.
Also – what if 10 billion people burn every shred of vegetation (trees, crop waste) etc. – and we put all that CO2 into the air – could that push us over 560?
If CO2 got above 560 ppm in history before (which it did), I am skeptical that we cannot get back above 560 ppm in the future – the question I have is when would that happen?
So my question is assuming we keep producing CO2 at the projected rate according to fhhaynie’s analysis (or your own)(i.e. reserves get bigger and bigger via new discovery or we burn a bunch of trees) – when could we expect to hit 560 ppm.
Thanks for your reply.
Richard Arrett | May 5, 2015 at 12:05 pm |
PA | May 4, 2015 at 9:04 pm |
I believe I understand your analysis based on reserves.
My question is what about all the hydrocarbons not yet discovered and not yet in the reserves?
Well… the point (which surprises me) is that it really doesn’t matter. In 44 years absorption will equal emissions, and when absorption equals emissions the rate of atmospheric CO2 increase is zero.
Fossil fuels are going to get more expensive. The 2% annual increase in emissions that I’ve assumed isn’t reasonable. At 10-15 GT of emissions we can basically burn fossil fuel until the cows come home and not get over 500 PPM. The global warmers are going to interfere with fossil fuel consumption so that we don’t get over 15 GT. In about 31 years, because of increased absorption, the CO2 level stops rising. The level of CO2 will level off at something under 460 instead of 473.
The ocean is virtually an infinite sink since it already has 38,000 GT of carbon, and converts much of the “excess” CO2 to wildlife. Guess what effect 2.7 GT (the annual amount absorbed by the ocean) has on a 38,000 GT reservoir?
Anyway I’ve gotten curious enough to actually plot the trends and rate of change trends out to 2100, it ought to be interesting. I noticed that warmers plot the exponential emissions rise but don’t plot the exponential absorption rise which is over twice as fast. The exponential emissions rise isn’t going to go on much longer, in fact 2013 and 2014 emissions are claimed to be the same.
Some say more: “Whether and to what extent the warming since 1950 has been dominated by human causes”
http://www.truth-out.org/news/item/30402-ipcc-more-than-all-of-observed-warming-has-been-caused-by-mankind-s-greenhouse-gas-emissions#a3
I somewhat agree with “more than all”. See my comment below. It’s the imbalance that is the clue.
Jim D,
But, but, but then that means the “global cooling” of the 1970s actually happened. Yet we’ve been through all that, and although it was not considered a “consensus” and was portrayed as not being mainstream, it was certainly downplayed. You, Joseph, JCH, and W all were “deniers”. (Hmmm, there’s something about a class discussing that psychological condition).
Using both sides of the same sword?
Wouldn’t that question hinge somewhat on the recent Stevens paper?
La Ninas and PDO cool phases occur too. Does it have to warm all the time under positive forcing? When the forcing rise almost stops as it may have in the 60’s and 70’s, you can see cooling. Lately the forcing rise has been more robust, and so is the imbalance.
Jim D,
“Does it have to warm all the time under positive forcing?” Wouldn’t it under the theory? What would trigger a transition from atmospheric warming to oceans as example? As was stated “Lately the forcing rise has been more robust, and so is the imbalance.” Have particulates/aerosols increased at a higher rate since 1970 (post Clean Air)? Stevens brings in the questions of aerosol’s impacts: “These findings suggest that aerosol radiative forcing is less negative and more certain than is commonly believed.” http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-14-00656.1
Do the puzzle pieces fit the scenario?
It has to warm on the long term under positive forcing because that is energy conservation. You can delay surface warming with ocean variations, but you can’t stop it. The sign of the imbalance tells you whether energy is going into the system. That has to go somewhere. The aerosol forcing seems to have been going up and down, some from volcanoes. Maybe China is contributing to an increase lately. The GHG part is steadily rising at 0.3-0.4 W/m2 per decade, and by far dominates what the aerosols can offset.
Jim D,
“The aerosol forcing seems to have been going up and down, some from volcanoes. Maybe China is contributing to an increase lately.” There are a lot of “maybes” in that. What are the measurements? Current (state-of-the-art) science from Stevens indicates the aerosol impacts are “more certain” at the same time as being “radiatively less negative”. Still using both sides of the same sword if they are the reason for “the pause” (and the earlier hiatuses of the 1940s and 1970s) while being an offset (or redistributor?) for greater “energy imbalance” today, and hiding “more than the observable temp. increase” in a time of increasing GHGs. I’m simply not grasping how that process can have a two-way impact. Without measurements, I think Capt. D would classify this as SWAG.
I never understood why skeptics seem to like how Stevens reverse-engineered aerosol effects from model errors. Perhaps it is not the method so much as the bottom line that they like? Anyway, if you take the Stevens results as given, aerosol effects are a diminishing percentage of GHG effects, possibly now hovering around 10% and getting smaller, so it is the forcing from GHGs to focus on going forwards.
Jim D,
How does that line up with the “more than the observable warming SINCE 1950”? It can’t be both as Stevens stated and as that which caused the “pauses”. The author of the truthout article indicates 30%+ so must be wrong if your 10% is accurate. All while GHG’s continued to increase to even higher levels.
GHG’s increased, aerosols maintained/reduced while temps paused caused by those aerosols???? Or are you indicating there is some measure of aerosols being greater and only recently reduced? If so, based on what? Your line of presentation has me confused.
(I don’t take them as a given as I’m underqualified to do so, but others have. Aerosols have been indicated as the cause of earlier pauses, post-WWII and 1970s.)
Yes, 10% is what you get from Lewis’s interpretation of Stevens, 30% is more like the AR5 value. Take your pick. My estimates are based on pre-industrial to current changes. For 1950, it is harder because you have to know what part of the warming was just coming from what was in the pipeline before that. Your article wasn’t talking about pipeline warming at all, but just that GHGs alone are more than the warming we have seen, because aerosols offset some. It is a different perspective that also gives a “more than all” attribution.
JimD,
While I grasp how you got the number, the IPCC indicates “more than 1/2”, the author indicates 30%+ more than all the observed, and yet those same aerosols are supposed to be responsible for the 1930s/1940s and 1970s coolings. It cannot be all of those. It’s not a question of my “picking”; it’s a question of actualities and applications. I’m not looking to a choice of mine to make an argument of mine valid, but instead looking at how it fits.
We’ve gone down this rabbit hole numerous times, a few previous posts
http://judithcurry.com/2015/01/19/most-versus-more-than-half-versus-50/
http://www.realclimate.org/index.php/archives/2014/08/ipcc-attribution-statements-redux-a-response-to-judith-curry/
Dr. Curry,
Will go back and re-read. I recall your 50 +/-30 for attribution vs. Gavin’s. I fall in lot’s o’ rabbit (rabbet?) holes.
Still don’t get how Jim can use aerosols as (I understand) he is. Two sides of the same sword. This author has it up 30% after the Stevens paper came out. Makes me wonder how bad it would have been if Stevens hadn’t come out with his, but I’m guessing it’d be worse than we thought!
Danny, my argument is different from your article. The forcing from GHGs and aerosols dominates the forcing change since pre-industrial times. This is a positive number probably between 2 and 3 W/m2 including the aerosols. All the warming we have had so far has not canceled this forcing, as an imbalance of 0.5-1 W/m2 is still left over. The imbalance means even if the forcing stopped changing now, there is still some warming. That is why I say “all and more” for the anthropogenic fraction, and that includes aerosols. The presence of pipeline warming means “all and more” by definition as long as you also recognize that the 2-3 W/m2 is almost completely anthropogenic.
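Jim D’s “all and more” bookkeeping can be put in a few lines. The ranges are the ones he quotes in this thread (net anthropogenic forcing roughly 2-3 W/m2, remaining imbalance 0.5-1 W/m2); the mid-range inputs in the example are my picks, and the whole thing is a sketch of his accounting argument, not a measurement:

```python
def attributable_fraction(ghg, aerosol, imbalance):
    """Fraction of realized warming attributable to anthropogenic forcing,
    under the thread's assumption that essentially all forcing is anthropogenic."""
    forcing = ghg - aerosol           # net anthropogenic forcing, W/m2
    realized = forcing - imbalance    # portion already expressed as warming
    return forcing / realized

# mid-range numbers from the thread
frac = attributable_fraction(ghg=2.5, aerosol=0.5, imbalance=0.75)
print(round(frac, 2))
```

Whenever the imbalance is positive, `realized` is smaller than `forcing` and the fraction exceeds 1, which is exactly the “more than all” claim; the dispute upthread is over whether the imbalance is in fact known to be positive.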
Jim D,
I understand. If you are accurate, and the author of the linked article is accurate from a different perspective, and Gavin is accurate with 110%, then what reason would the IPCC have to use “most of the current warming” and greater than 50% attributable to man? If these analyses can be conducted on blogs, how does it follow that 97% of the greatest climate minds cannot state something along the lines of: well, it’s actually cooling, but man has changed that? The SPM was designed to communicate something to policy makers, but I’m not aware that even it says anything along those lines.
“It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together. The best estimate of the human induced contribution to warming is similar to the observed warming over this period.”
“Similar” (resembling without being identical) should mean something like 100% +/- 1%. 110% or 135% just doesn’t seem “similar”.
As a communication device, IPCC reports leave a lot to be desired and a lot to the imagination.
If you look at the pictures, such as the bar chart presented in your article, you don’t need the words “very likely most” because you can phrase it any way you want just from the picture. It is very useful to look at the data presented and not just try to fathom the meaning of the words blindly. The pictures are there to help you understand what they are saying. The article helps you to interpret bar charts with whiskers too where they say the most likely values are near the center of the ranges shown.
Jim D,
I looked at the error bars and notice they go both directions, not just one. In fact, I’ve followed it to this: http://clivebest.com/blog/?p=6183 for yet another interpretation. That leads to: http://clivebest.com/blog/?p=6143. Damnified rabbit holes.
Yes, they go in both directions, meaning it can exceed 100%. Now you are starting to see. In fact a large fraction of the range is above 100%, so this is where the certainty comes from, and 95% of the range is above 50%. This is what the article means by likely “more than all”.
Wow. I must come across as dumber than I am. “Now I’m starting to see”? I understood all along how it was derived, but it’s built on assumptions of what could be, and unsubstantiated as to why it is, as the author indicates, 30+% more. If Stevens, as you stated, indicates aerosol effects are 10%, there can be no more than 110% warming to reach 100% of observable. Right? To reach 130+%, there would have to be …….gasp……….“natural” cooling. During the so-called pause, it would have to be greater cooling (GHGs increasing), greater cooling during previous hiatuses (GHGs were increasing), and even greater for measured cooling (GHGs were increasing). And even you’ve indicated in the past some 0.1-0.2 of natural warming. Ice age it would be if not for emissions.
So the author was saying that the GHG bar on that chart is 30% larger than the Observation bar. Maybe you did understand this, but I don’t know now. He was not talking about Stevens. Stevens would only reduce the aerosol bar size, which doesn’t affect the GHG bar, so that is still 30% larger.
JimD,
I understand he’s saying there is some 30% more warming than observable which indicates 30% is masked by something. What I’m saying is Stevens (10%, your number) indicates aerosols are not the cause of all of it. So it must be cooling naturally then. How am I not being clear?
This is the problem with looking at temperatures not forcings. You would rescale the temperature to agree by reducing the sensitivity. With less aerosol effect an even lower sensitivity accounts for the temperature rise. I prefer my method of attribution in terms of the forcing and imbalance because it does not need the sensitivity, just the signs of terms.
JimD,
I understand.
Unfortunately, neither this author nor the IPCC chose to go about it in that fashion. The IPCC chose a much less than clear explanation, and this author cannot be accounting for the current understanding of aerosols, or for the fact that, on his approach, there must be underlying cooling. No one that I know of is actually indicating there is underlying cooling, but I miss so much it might be out there.
That author did mention long-term cooling due to Milankovitch, which is what was expected to happen in this millennium because Arctic sea ice is more favored now than it was in 1000 AD, for example. The LIA was part of that expected trend which should have continued. I agree my method is much better, but the public don’t understand forcing, so that’s that.
> We’ve gone down this rabbit hole numerous times,
See also:
http://initforthegold.blogspot.com/2015/01/more-than-all.html
Prof Curry says “Global trends in sea level rise, which show values in the 1930s and 1940’s were comparable to the recent values”
Why is it a challenge to the dominant view (paradigm?) of climate scientists that the rate of sea level rise has not substantially accelerated since the 1930s-1940s? Do cryosphere models predict that the modest increase in surface temperatures since then should have produced a larger increase in melting of ice, and hence faster sea level rise?
IMO yes. It is in AR4, AR5, many technical papers (Rignot last year being particularly alarming about the Amundsen Embayment), and many warmunist commentaries for near three decades (most notably Hansen, most recently Hansen’s 2007 reticence paper in Environ. Res. Lett. 2: 024002 claiming 5 meters by 2100).
Ristvan,
I am unfamiliar with the sea level literature. But I see in Hansen’s 2007 ERL paper no claims or near term forecasts that are challenged by the slow acceleration of sea level rise reported by the IPCC since 1930s — as implied by Prof Curry’s text.
Sea level seems unlike the surface temperature pause (where current data does not match the naive forecasts of models, and therefore creates a challenge requiring explanations).
A failure of sea level rise to accelerate in the future would create a challenge. A serious one, on top of the pause.
What am I missing?
Hansen’s 2007 ERL paper:
http://iopscience.iop.org/1748-9326/2/2/024002/pdf/1748-9326_2_2_024002.pdf
Editor, I will endeavor to send you separately a gratis copy of essay PseudoPrecision, which covers some, hardly all, of this. Part of what you are missing is that satellite altimetry began in 1979, with its own precision problems. Previously, everything was tide gauges. And except for very few gauges (now findable using differential GPS), most of those move up or down with the land they sit on, independent of sea level. The issues are isostatic rebound and plate tectonics. The assumption that land doesn’t move is FALSE on mm/yr scales.
ristvan,
You are absolutely correct, as far as I know. The continents move in 3 dimensions.
It appears, for example, that the Himalayas are rising around 10mm per year. A corresponding volume of the Earth must be depressed somewhere. How much and where will be unknown, if it takes place on an ocean bed.
What is the effect on sea levels? A moment’s consideration shows that a perfect sphere would have no dry portions whatever. However, the Earth is not a perfect sphere, and its solid surface is constantly changing. Hence, sea levels have always changed, and I guess will continue to do so, as long as there is liquid water on the face of the Earth.
Will controlling the climate stop the Himalaya rising? Will reducing CO2 restore Antarctica to its previous fertile, ice free state? Will it green the Sahel?
Does any climate scientist really care, as long as the grant money keeps flowing?
The fact that the imbalance is positive (0.5-1 W/m2) means that whatever warming we have had has not yet caught up to the forcing change. The forcing change is almost entirely anthropogenic, with GHGs positive (2-3 W/m2) and offset by some anthropogenic aerosols (0-1 W/m2). The fact that all the warming seen so far is still short of the anthropogenic forcing would indicate that not only can all the warming be attributed to the forcing change, but more is due even without additional forcing.
I strongly question your “facts” and conclusions. Click on my name for details.
The facts presented are not in contrast with those of Lewis and Curry, for example.
I am questioning the direct link between anthropogenic emissions and the specified amount of global warming that you have stated. Click on my name to see why.
Are you making the Salby argument? Explain how ocean acidification fits into your argument. Where is the carbon all coming from?
I am showing statistically what the data is telling us. It is not an argument. Show me where I have made mistakes in my conclusions.
Did you assume that C13 stays in the atmosphere and cannot be exchanged with C14 from the ocean? This would underestimate the anthropogenic fraction. Some is in the ocean now (acidification).
I’ve made no assumptions and I have analyzed 13/12 C ratio data. I did not include 14C in my analysis. It is not required in estimating net flux changes. Read what I have done before making assumptions.
OK, how is the ratio affected by exchanges of carbon with the ocean? Would it be made to look more “natural” for example? How do you account for this factor?
Your question indicates that you either have not read my blog or you don’t understand what I have done. What level of math knowledge do you have?
Have you taken any statistics, probability, or numerical analysis courses?
I don’t understand what you have done. There is no brief summary, so it is hard to see the point you are trying to make. With no emissions, would the C13/C12 ratio change with time? How quickly?
The rate of change in the 13/12 index is physically related to the rate of accumulation through both biological and inorganic fractionation processes that are temperature dependent. The index will become more negative as long as the accumulation becomes more positive. When the net rate of accumulation turns negative the rate of change in the index will become positive. Both natural and anthropogenic emissions are contributing to the net rate of accumulation and the rate of change in the 13/12 C index.
Emissions add up to something near 200 ppm. You have somehow obtained only 50 ppm from anthropogenic sources and 50 ppm from natural sources. What happened to the other 150 ppm of emissions, and why is that natural rise acceleration so correlated with the anthropogenic rise and emission totals?
Please don’t confuse total accumulation with emission rates. What I have estimated are both short term and long term “net” emission rates (emission rates less sink rates).
Well, fhhaynie seems to be on to something.
The 30 year rate of annual increase in net environmental carbon absorption (emissions – atmospheric increase) is 3.5% per year. The previous 25 years the rate was 3% per year on average.
The 30 year average annual increase in carbon emissions is 2%.
Net absorption is currently over 55% of emissions.
The absorption and emission curves intersect below 500 PPM.
This isn’t a fancy analysis, just a plot of 55 years of ever increasing net environmental absorption vs a much lower rate of anthropogenic emissions increase.
The fact that the imbalance is positive (0.5-1 W/m2) …
Well, you can call it a fact, but that doesn’t make it so.
The broadband satellite measures of outgoing longwave radiance look like this:
http://www.climate4you.com/images/OLR%20Global%20NOAA.gif
And we don’t know incoming radiance well at all, because reflection and scattering are highly directionally dependent, so a single satellite sensor or even an array can’t capture what Earth is absorbing.
Here’s what Hansen said:
“The notion that a single satellite at this point could measure Earth’s energy imbalance to 0.1 W m −2 is prima facie preposterous. Earth emits and scatters radiation in all directions, i.e., into 4π steradians. How can measurement of radiation in a single direction provide a proxy for radiation in all directions?
It is implausible that changes in the angular distribution of radiation could be modeled to the needed accuracy, and the objective is to measure the imbalance, not guess at it. There is also the difficulty of maintaining sensor calibrations to accuracy 0.1 W m−2, i.e., 0.04 percent. That accuracy is beyond the state-of-the art, even for short periods, and that accuracy would need to be maintained for decades”
The earth may well be in imbalance, but that is not known or a fact.
BTW, according to my understanding, Earth is always in imbalance:
losing more than incoming during northern summer, and
losing less than incoming during northern winter.
There are two independent methods of measuring the imbalance, the satellite top-of-atmosphere budget, and the rise rate of the ocean heat content. The decadal rise of the ocean heat content is more evidence of an imbalance. These constrain it to a positive value. Even Lewis and Curry acknowledge a positive term when they say that ECS is larger than TCR. You will have a hard time finding anyone who says the imbalance is not positive (except Monckton who just assumed it was zero).
And how much of OHC change is adiabatic?
You will have a hard time finding anyone who says the imbalance is not positive (except Monckton who just assumed it was zero).
Doesn’t matter what people say, there is much greater uncertainty in the measurements than any trend to be found, so it is not demonstrable.
JimD, “There are two independent methods of measuring the imbalance, the satellite top-of-atmosphere budget, and the rise rate of the ocean heat content. The decadal rise of the ocean heat content is more evidence of an imbalance. These constrain it to a positive value.”
JimD, “There are two independent methods of measuring the imbalance, the satellite top-of-atmosphere budget, and the rise rate of the ocean heat content. The decadal rise of the ocean heat content is more evidence of an imbalance. These constrain it to a positive value.” Current best estimates I have seen are in the 0.6 +/- 0.4 Wm-2 range, so that would be 0.2 to 1.0 Wm-2. Ocean uptake estimates cover only about half the oceans and start in roughly 1955. Recent paleo estimates indicate a positive ocean imbalance since roughly 1700 AD, preceded by a negative imbalance that started around roughly 3000 BC. Those indicate the relevant planetary climate time scales.
Skeptics have been accepting a positive imbalance as taken, but now that I have laid out the logical implications of that, they might want to start rethinking that if they don’t want global warming to be anthropogenically forced.
Skeptics have been accepting a positive imbalance as taken, but now that I have laid out the logical implications of that, they might want to start rethinking that if they don’t want global warming to be anthropogenically forced.
I suspect the warming is anthropogenic, but there is not sufficiently accurate or precise scientific evidence to demonstrate this as fact.
From a forcing perspective, the anthropogenic contribution is more than all of the warming so far.
“From a forcing perspective, the anthropogenic contribution is more than all of the warming so far.”
And how do you figure that Jim?
Because even after all the warming, the imbalance is still positive.
JimD, Because even after all the warming the imbalance is still positive.
https://lh3.googleusercontent.com/-OvQHzJy8gFU/VLF3XHImAqI/AAAAAAAAMGM/X1vv5Tx0kiI/w689-h411-no/oppo%2Bover%2Bmann.png
Planetary time scales can be a bit longer than 60 years. Some portion of that “warming” could be recovery. One of the biggest issues is “what is normal”. Based on the original estimates that impact all of the “benchmarks” “normal” was supposed to be 15C. How well do the models do with absolute temperature?
Jim D | May 4, 2015 at 2:08 pm |
From a forcing perspective, the anthropogenic contribution is more than all of the warming so far.
The warming is 1 °C more or less.
The GHG effect has been measured. The total GHG warming is around 0.24°C.
CGAGW (virtual warming) is about 0.23°C. It can be scientifically proven that CGAGW has zero (0) effect on plants, zero (0) effect on animals, zero (0) effect on land, and zero (0) effect on the ocean. The only known effect of CGAGW is on other climate scientists with computers.
The remaining 0.53°C is a mix of solar, UHI, land use change, natural cycles, etc.
The takeaway point is that GHG is only responsible for 24% of the post-1900 warming. Since GHG is responsible for less than 1/4 of the warming, claiming it is responsible for more than half is simply wrong.
PA, even according to the lowest estimates by Lewis and Curry, the GHG minus aerosol warming is far in excess of the warming seen so far. This is why ECS is larger than TCR. If you stopped changing the forcing now, it would rise from the TCR value to the ECS value for the current forcing.
Jim D | May 4, 2015 at 4:37 pm |
PA, even according to the lowest estimates by Lewis and Curry, the GHG minus aerosol warming is far in excess of the warming seen so far. This is why ECS is larger than TCR. If you stopped changing the forcing now, it would rise from the TCR value to the ECS value for the current forcing.
https://www.ipcc.ch/publications_and_data/ar4/wg1/en/tssts-6-4-2.html
Class, now follow along in your book:
1. TSR from IPCC is 2 X CO2 Forcing.
2. 2 * 5.35 ln (395/373) = 0.6131885943 W/m2
3. Measured change in forcing was 0.2 W/m2 for 22 PPM.
4. The IPCC TSR is too high by more than 3 times.
5. The logarithmic TSR for GHG is F = 3.49 * ln(395/373) = 0.2000026349, or 0.2 W/m2 (close enough for our purposes).
Everybody guesses the ECS by multiplying the TSR by 1.5. Who am I to mess with success? So the ECS is about 1°C. The ECS is only valid after about 100 years.
However, as noted by myself elsewhere the CO2 forcing is only 1/4 of the total adjusted temperature change since 1900. Therefore there is no guarantee (and indeed some doubt) that raw temperatures will even be warmer in 2100.
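The arithmetic in steps 2 and 5 above checks out as stated. A quick recomputation (the coefficient 5.35 is from the commenter’s use of the simplified CO2 forcing formula F = 5.35 × ln(C/C0); the factor 2 and the fitted 3.49 are his numbers, not established values):

```python
import math

# 373 -> 395 ppm rise quoted in the comment
ln_ratio = math.log(395 / 373)

step2 = 2 * 5.35 * ln_ratio   # the "2 x CO2 forcing" figure in step 2
step5 = 3.49 * ln_ratio       # coefficient tuned to the claimed 0.2 W/m2 in step 5

print(round(step2, 4), round(step5, 4))
```

Whether 3.49 rather than 5.35 is the right coefficient is exactly what the comment is arguing, and that claim rests on the disputed “measured 0.2 W/m2” figure in step 3.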
Concerning data set challenges, consider upper troposphere humidity, the main water vapor feedback. AR4 and AR5 assert roughly constant UTrH with delta T. Models produce it. And the consensus is that it roughly doubles sensitivity by itself; so about 2/3 of total modeled feedback. Radiosonde measurements suggest rising specific humidity but falling relative humidity. The instruments have dry biases that could be corrected rather than rejected. (The Garth Paltridge kerfuffle; see also essay Humidity Is Still Wet.) Satellite measurements go both ways; the present preponderance directionally supports corrected radiosondes. The newest GPS technique also supports radiosondes; rejected by AR5 because it did not have a long enough track record.
Water vapor feedback is the biggest CO2 amplifier. It is also (at least indirectly) central to the negative adaptive iris hypothesis and the recent Mauritsen and Stevens paper. Better data would inform model parameterization, tropical convective process dynamics, and sensitivity approximation along f/(1-f) or Monckton-equation lines, as discussed in my last guest post. I think the observational absence of the modeled tropical tropospheric hot spot is directly related to this data uncertainty.
Re: climate controversy, 5/4/2015:
What are the most controversial points in climate science related to AGW?
The proposed responses are within the context of the presumption of AGW. Here are two points of controversy without that presumption.
AGW is based on the Equilibrium Climate Sensitivity (ECS), the change in Global Average Surface Temperature for a doubling of CO2. So
(1) why, and specifically (2) how, did AGW-based models put the probability that ECS would measure below 1C between about 1% and 3%, extrapolated?
After all, weren’t all the models’ significant elements published in approved, peer-reviewed, professional journals, and allegedly supported by a consensus?
(Judging those models by the failure to predict the end of GAST increases since about 1998 is a mere instantaneous embarrassment before a relatively naïve public. That judgment is erroneous because the observation interval is only about half a climate cycle, where 5 to 10 cycles approximates a popular statistical rule of thumb.)
At an even higher level of inquiry, the failure of that model based on AGW casts doubt over the criteria accepted and applied for the scientific method.
You missed an obvious one (among many):
Have temperatures ever been measured properly, to ensure they don’t need post facto adjustments now or in the future?
Andrew
Will we have sufficient global warming to avoid the next glaciation?
A mile thick glacier grinding again through Chicago will cause far greater damage than a few degrees of warming. Tzedakis et al. (2012) etc. indicate we are close to the start of the next glaciation if not already on the way down. Recent estimates of low climate sensitivity suggest we may see very little global warming in comparison to the major cooling into a glaciation.
Will we be able to achieve enough global warming to counteract natural cooling to the next glaciation?
Tzedakis, P.C., J.E.T. Channell, D.A. Hoddell, H.F. Kleiven and L.C. Skinner. 2012. Determining the natural length of the present interglacial. Nature Geoscience 5: 138-141
Problem of the length of the current interglacial V. A. Dergachev, O. M. Raspopov, Geomagnetism and Aeronomy Dec. 2013, Vol. 53, # 7, pp 876-881
David,
Better figure it out soon. A couple more cycles and we’ll know: http://www.breitbart.com/big-government/2015/05/03/global-warming-low-sun-spot-cycle-could-mean-little-ice-age/
Great thread!
What has to be the single greatest point of contention is the predictive value of GCM climate models.
Forty years ago, when I was a meteorology student, we knew about the greenhouse effect, had recently discovered about abrupt climate change from ocean- and lake-bottom sediments, and feared the earth was cooling – perhaps because of human-caused pollution.
Although our models were simpler then, I was warned that while they could illustrate processes, models had no predictive value.
How would you describe the most complete model of our planet’s climate, the earth itself? As a non-linear system governed by feedback, perhaps. A professor of thermodynamics would go further: because of life, he’d also say it’s dynamic, steady-state, and far from equilibrium.
How would such a system react to a signal (forcing)? Models assume that every change is small and takes place during a short time period without accounting for the multitude of feedback processes and their delays. An engineer would tell you that such a system will respond with ringing and oscillations – exactly what we observe with our charts and measurements but which the models can’t resolve.
What we call natural variability may largely be a result of the instabilities due to delays in feedback, positive or negative, which can exaggerate or diminish a system’s response unexpectedly. Anyone who’s spent any time with electric circuits and oscilloscopes has seen this, but it’s characteristic of all non-linear systems.
How do models account for this?
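The commenter's point about delayed feedback producing ringing can be illustrated with a toy simulation. This is a generic linear system with a delayed negative feedback term, not any climate model; all the constants (forcing F = 1, feedback k = 0.4, a delay of 3 time units) are arbitrary illustration values chosen by the editor:

```python
# A minimal sketch, not a climate model: a linear system with delayed negative
# feedback, dx/dt = F - k * x(t - tau), stepped with explicit Euler.
def respond(steps, delay_steps, k=0.4, forcing=1.0, dt=0.1):
    """Integrate the delayed-feedback equation; delay_steps = tau / dt."""
    x = [0.0] * (delay_steps + 1)  # history buffer, initially at rest
    for _ in range(steps):
        x.append(x[-1] + dt * (forcing - k * x[-1 - delay_steps]))
    return x

undelayed = respond(400, delay_steps=0)   # relaxes smoothly toward F/k = 2.5
delayed = respond(400, delay_steps=30)    # overshoots F/k, then rings

print(round(max(undelayed), 2))  # 2.5: monotone approach, no overshoot
print(max(delayed) > 3.0)        # True: the delay alone produces a large overshoot
```

The same forcing and the same feedback strength give a smooth response without the delay and an overshooting, oscillating one with it, which is the qualitative behavior the comment describes.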
http://www.clim-past-discuss.net/11/1519/2015/cpd-11-1519-2015.pdf
This new paper in EGU’s Climate of the Past journal is an interesting look at 400 years of hurricane records in the French Antilles (Caribbean). As the paper concludes:
“During the four centuries in the study of French Antilles, there is no clear trend to prove a real contemporary change in the severity of cyclones.”
Especially interesting as it appears shorter reconstructions have tended to come to the opposite conclusion.
Interesting link. Thank you!
Just to continue: the conclusions carry on to some interesting points.
My take-home message is that intense hurricane periods have occurred in the past and will likely occur in the future (irrespective of AGW impacts); therefore adaptation and greater resilience to these climate threats are a better strategy than putting great resources into trying to stop AGW.
Judith –
You might scan over the APS framing document, http://www.aps.org/policy/statements/upload/climate-review-framing.pdf for items that you might have missed. This seems to be a pretty good statement of some of the uncertainties.
ATTP made a comment at his post: “If the hiatus is partly due to internal variability, then we would expect accelerated warming quite soon and we would expect periods of faster than average warming to largely compensate for periods of slower than average warming.”
I replied
GHG effects are more or less instantaneous in the atmosphere.*
I base this on two indisputable facts.
1. It is recognized that any atmosphere containing GHGs will exhibit warming and cooling at a rate commensurate with the amount of GHG present, as it is warmed and cooled by the daily passage of the sun.*
2. The temperature of the surface atmosphere between day and night will often vary by at least 20 degrees.* Hence a large change in the amount of heat in the atmosphere happens over a very short time frame, dependent proportionally on the amount of GHG.
Hence, ATTP, my argument is that the heat from CO2 levels, which are reputedly fairly consistent, must be present at the right level for that amount of GHG every day.
Your argument as well I expect.
Natural variability must therefore exist, and the problem is that “natural variability” is a term for our not knowing what the other causes are that are stopping the temperature from going up as predicted.
Obviously I would and have argued that Climate Sensitivity is less than predicted for the usual suspects of negative feedbacks from clouds, aerosols and other causes of increased albedo.
I suspect that BBD and yourself acknowledge that this may have a small effect.
But this is at odds with your overall concerns, and other, more debatable ideas that have traction are put up instead.
There is a lot of emphasis put on high climate sensitivity and on runaway graphs which do not square with what the historic record of the earth’s atmosphere shows. At the moment this idea is not working.
“I base this on two indisputable facts.”
Delusions of grandeur, oh well.
The most critical issue? AGW replaced nuclear war as a focus of the great demonic evil of humanity.
Notwithstanding science, AGW is mainly about emotion and the marketing of a meme that has singular overarching prominence. It is mostly sizzle. It displaces other great dangers and threats.
It is demonized and misrepresented. It takes away from other real dangers such as nuclear warfare, economic crisis ….
Here are two questions for those who view the Central England Temperature Record (CET) as an important body of information useful for understanding past changes in the earth’s climate regime over the past 250 years.
The graphic shown below places the IPCC AR5 ensemble model envelope onto the same illustration as the Hadley Centre CET 1772-2013 temperature record and the Hadley Centre GMT 1850-2008 temperature record. The AR5 ensemble model envelope has a lower bound of a near-flat +0.03 C/decade rise, and an upper bound which exceeds a steep +0.4 C/decade rise.
http://i1301.photobucket.com/albums/ag108/Beta-Blocker/CET/AR5-Figure-1-4–and–CET-1772-2013_zps02652542.png
As a local temperature record, CET contains periods of warming which match the upper extremes of the AR5 ensemble model envelope.
If we take the public commentaries of mainstream climate scientists as an indication of how they view the IPCC AR5 model envelope, any near-term warming which occurs anywhere within the AR5 ensemble model boundary validates the entire modeling envelope, including its extreme upper range.
That is to say, as mainstream climate scientists view current trends in global mean temperature data, any warming at all above a rate of roughly +0.03 C/decade is taken as an indication that extremely high rates of warming are just as likely to occur in the future as lower rates which remain just above the AR5 ensemble lower boundary.
It has been said by a number of mainstream climate scientists that impacts from global warming are dangerous not only because of the amount of warming that is now happening, but also because of the rate at which it is happening, especially as the rate of increase affects the earth’s biosphere — possibly exceeding the ability of the earth’s biosphere, and also the earth’s human and animal populations, to adapt to the ongoing temperature increases.
Because the Central England Temperature record contains periods of warming which match the upper extremes of the AR5 ensemble envelope, it seems logical to ask the following two questions concerning past changes in the local biosphere of the Central England region:
Question #1: Is there any evidence, physical and/or anecdotal, of any regionally significant adverse impacts having occurred on Central England’s local biosphere, impacts which might have occurred in the hundred years which passed between 1810 and 1910 — a historical time frame which saw three distinct multi-decade episodes of temperature rise approaching +0.2 C/decade, +0.3 C/decade, and +0.4 C/decade, respectively?
Question #2: If there is in fact such evidence of significant past impacts on Central England’s local biosphere, impacts which had occurred between 1810 and 1910, how do these prior impacts compare with those thought to be occurring in Central England today, where the rate of warming has, at times within the past thirty years, exceeded a rate of +0.4 C/decade?
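For readers who want to check claims like “+0.4 C/decade” against a temperature series themselves, the trend figures in question are ordinary least-squares slopes over a chosen window of annual means. A minimal sketch; the series below is synthetic illustration data invented by the editor, not the actual CET record:

```python
# OLS slope of annual-mean temperatures, expressed in degrees C per decade.
def decadal_trend(years, temps):
    n = len(years)
    my, mt = sum(years) / n, sum(temps) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    den = sum((y - my) ** 2 for y in years)
    return 10.0 * num / den  # per-year slope -> per-decade

# Synthetic data: an exact 0.03 C/year (0.3 C/decade) ramp for illustration.
years = list(range(1980, 2011))
temps = [9.5 + 0.03 * (y - 1980) for y in years]

print(round(decadal_trend(years, temps), 3))  # 0.3
```

Sliding this window along a long record like CET is what produces the multi-decade episode trends (+0.2, +0.3, +0.4 C/decade) cited in the questions above.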
What are the most controversial points in climate science related to AGW?
Deep ocean heat content variations and mechanisms of vertical heat transfer between the surface and deep ocean.
The inability, and the need, to explain a physical mechanism for heat suddenly deciding to go into the ocean at a faster rate than before, at just the time a pause in surface temperatures occurred, is an absolute joke.
Sensitivity of the climate system, given global data sets of surface temperature and atmospheric temperature (satellite) that show a hiatus in warming for 16+ years.
30 plus explanations, none substantiated, ensure that this is the most controversial point.
3. Adjusting past records downwards daily, weekly, monthly, yearly and decadally, and using the adjusted data to determine the current world surface temperature. Over 2/3 of the 30,000 sites listed are probably artificial composites to give a world grid. Worse, the gridded responses are then possibly used to raise the temperature at some real sites; see the Australian changes.
JC comment: I could use additional input here, preferably global or hemispheric data sets.
Make sure they use the words Zeke says are not needed on all estimated model sets: “These are data-set model guesses, not real data.”
This is very encouraging. Not holding my breath for the Hockey Team to comply … but …
Climate Science take note: New gold standard established for open and reproducible research
http://wattsupwiththat.com/2015/05/04/climate-science-take-note-new-gold-standard-established-for-open-and-reproducible-research/
Let’s not forget the emergencies from the recent past.
From the article:
…
UN scientists warn time is running out to tackle global warming
· Scientists say eight years left to avoid worst effects
· Panel urges governments to act immediately
· 2014: Global warming is already here and could be irreversible, UN panel says
David Adam, environment correspondent
…
Governments are running out of time to address climate change and to avoid the worst effects of rising temperatures, an influential UN panel warned yesterday.
Greater energy efficiency, renewable electricity sources and new technology to dump carbon dioxide underground can all help to reduce greenhouse gas emissions, the experts said. But there could be as little as eight years left to avoid a dangerous global average rise of 2C or more.
The warning came in a report from the Intergovernmental Panel on Climate Change (IPCC) published yesterday in Bangkok. It says most of the technology needed to stop climate change in its tracks already exists, but that governments must act quickly to force through changes across all sectors of society. Delays will make the problem more difficult, and more expensive.
Rajendra Pachauri, who chairs the IPCC, said the report would underpin negotiations to develop a new international treaty to regulate emissions to replace the Kyoto protocol when it expires in 2012.
…
http://www.theguardian.com/environment/2007/may/05/climatechange.climatechangeenvironment
I think a serious issue is perception of risk. On the first Earth Day, 45 years ago, speeches were given about how half the world’s population would soon starve, oceans boiling, 20 ft sea level rise (oh, right, Gore’s movie…), half of species extinct, plagues and pestilence….none of it happened. Much of the urgency of climate change alarmism is of this same sort: vague sky is falling stuff. But we’ve seen it before and it is from the mind of those scared of death and scared of the future, not from science. If you take the forecasts of medium emissions and look at the projected climate change, it isn’t even scary. It is the media stories and press releases and interviews which make it scary, but the things they say aren’t even in the AR5 report. It is the result of deep risk aversion and fear, the same kind that says kids can’t play tag at recess anymore or point their finger and say “bang”.
Scaredy cats ‘n cool cats, the latter of the not easily
herded kind.
Here’s some more sky is falling stuff. I think climate science has made its own bed and now is tossing and turning. Especially, since the temp record is diverging from the models. I understand that it may not YET be statistically significant, but that day may come.
https://stevengoddard.wordpress.com/2015/05/03/40-years-since-climatologists-blamed-california-drought-on-global-cooling/
I don’t know if this is the most controversial, but it does seem to me that something may be wrong with tropical convection and the tropopause doctrine on which all models seem to be based. I seem to remember Lindzen saying that this theory was based on 1D model problems. Yet we know convection is very 3D in nature and chaotic. I don’t see how it is possible to square tropospheric observations with model behavior in the tropics. Is it possible to define a 1D model that has parameters that can be used to match data?
A data challenge is to develop good enough paleo data for past ice ages and interglacials to test the climate models. Right now it is not clear if the failure of the models for these periods is due to the models or the data.
This is a good one too. There seems to have been some progress for the last 1000 years or so, with more and better proxies. But 24,000 years ago? That still seems to strain credulity to me.
I still believe that proxies like ice cores and trees might be natural integrators, knocking off the peaks and valleys like a smoothing function. And then for some of them there’s the question (in my mind at least, not being an expert) of whether they reflect temperature or some combination of temperature and other factors.
It is very clear that the failure is the theory and models.
Data is data. Model output that does not match data is junk.
If the data is wrong, that does not make the models right.
That means they have no way to build a proper model.
You must first have data.
Controversial? Meh. Fresh? http://www.cbc.ca/news/technology/fiords-store-lots-of-carbon-fighting-climate-change-1.3060177
http://www.nature.com/articles/ngeo2421.epdf?referrer_access_token=HJbEYF5FtWW2a9moxRSvdNRgN0jAjWel9jnR3ZoTv0PxIy-AjS9wTMOGhWIJkPkbWHJXEs9XIvVr2sSwWrgVn3YLLRllrzoVcW6vskB_9w0kHjtJbMnn8-7DS7O57M8B-_OOcoPffznTTxKY7EmuPOmOPAlb9LA4If6FkP7McI9HaZngBrpy00seLTApj0SiDZ7lhe-H59dp8elqQaeBUdNc4v75ABB2Qthxugpe6QA%3D&tracking_referrer=www.cbc.ca
Until climate science provides plausible explanations with plausible evidence on what caused previous transitions from glacial to interglacial periods and back, how can climate scientists be confident that natural variation is not the major driver of current changes? Such explanations may be possible if all the questions in Dr. Curry’s list were answered, but it is also possible that the explanation will remain ambiguous and speculative.
In computational biology, machine learning algorithms are showing great promise in predicting the outcome of complex biological processes. Many of these do not evaluate linear, or even non-linear, relationships between variables; rather, they use various strategies to recognize complex patterns among multiple variables that the human mind simply cannot discern. These methods also have the advantage of not requiring parameterization, only large existing data sets from which to “learn” the patterns of explanatory variables that can accurately predict the outcome of interest. Thus, there is no need for “expert” assumptions to be built into the models; they are entirely objective. I am not aware that they have been used in climate science. If anyone knows of any examples in which they have, please share citations with us.
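As a toy illustration of the kind of method the paragraph above describes (and emphatically not a climate application), here is a k-nearest-neighbour regressor learning a nonlinear relationship purely from examples, with no parameterization of the underlying process. The target function and all the numbers are the editor's invented illustration:

```python
import math
import random

# k-nearest-neighbour regression: predict by averaging the k training
# examples whose inputs are closest to the query point. No model of the
# underlying process is ever specified; the data alone carry the pattern.
def knn_predict(train, x, k=5):
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

random.seed(0)
target = lambda x: math.sin(3 * x)  # the hidden "process" to be learned
train = [(x, target(x)) for x in (random.uniform(0, 2) for _ in range(500))]

# Worst-case error at a few query points, without ever telling the
# predictor that the relationship is sinusoidal.
err = max(abs(knn_predict(train, x) - target(x)) for x in (0.3, 0.9, 1.5))
print(round(err, 3))
```

The learned predictor tracks the hidden nonlinear function closely from examples alone, which is the "pattern recognition without expert assumptions" property the comment attributes to these methods.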
Finally, Craig Loehle (just above) is exactly right about the perception of risk being even more important than the risk itself. Toxicologists have been dealing with this for many years. Risks of highly dreaded outcomes (e.g., cancer) caused by something that is beyond the individual’s control and not fully known or disclosed (e.g., exposure to potentially carcinogenic compounds in the food supply or air) are viewed as much more dangerous than risks that are not as dreaded, are known and understood, and are under the person’s control (e.g., driving a car). This is very often the case even though the objectively calculated risk of the former example is generally orders of magnitude lower than the risk associated with the latter example. Real toxicology (not the precautionary principle), risk assessment, and risk management require assessment of risks and benefits and determining as nearly as possible which policy decisions will maximize the benefit to risk ratio. So, the best course of action, the course of action that will ultimately save the most lives or most improve quality of life is seldom the one that eliminates or maximally reduces risk. It is the one that maximizes benefit, while minimizing risk to the largest extent that is practical. Every time IPCC advocates drastic reductions in CO2 production, there should be a detailed assessment of the risks as well as the benefits of achieving this goal.
Years ago, and perhaps still, a US agency made regular estimates of the cost-per-life saved from various regulations or interventions. I recall that many simple and easily-implemented small-scale road improvements had figures around $200,000 (but as a rule were not actioned), while some public-outcry cancers scares with a cpls of around $3.5 billion (essentially, which weren’t going to cause any deaths) led to new and severe regulations. One leading expert in the field of cancer research and treatment complained that almost all cancer deaths related to a few cancers, but the focus of research and regulation was on exotic cancers with potentially trivial death rates and an extremely high cpls, if any.
Irrationality rules, OK. The current problem is that none of the various over-hyped scares have had anything like the huge acceptance and politically driven costs of alleged CAGW. If Lewandowsky and Cook were rationally interested in human welfare rather than self-aggrandisement, they would be investigating why so many people of influence, highlighted by the current US President, have swallowed this nonsense as a transcendent issue which dominates policy-making.
Faustino
Indeed, Great Cunn. We pay for scares at the expense of real need.
One would think that billions of poor incinerating whatever they can get hold of, and then inhaling, might be contributing to a problem or two. How much deforestation and how much domestic flame and smoke are prevented by having a nice coal power station burning away in the distance? Don’t ask Lew or Cook to quantify; their computational skills will suddenly desert them. Untaxable carbon is barely carbon at all for those guys.
Today I found myself e-shopping for a little power bank for a mobile phone. You see, I just don’t like it when the power goes off here, even for a matter of hours. I’m very first world in that way. Maybe it’s because I live in the scrub, but I tend to treat electricity as a recurring miracle, and I want to secure it. Living in timber country amid bloodwoods and other prime firewood timbers I get to enjoy the romance of my slow combustion every winter. But I can choose to burn, and choose to flick a switch when I choose not to burn. That’s still not the deal for most of the world, but it should be the deal.
We need to observe what happens immediately AFTER Earth Hour to see what all humans really want.
Until climate science provides plausible explanations with plausible evidence on what caused previous transitions from glacial to interglacial periods and back, how can climate scientists be confident that natural variation is not the major driver of current changes? Such explanations may be possible if all the questions in Dr. Curry’s list were answered, but it is also possible that the explanation will remain ambiguous and speculative
It is really simple. It snows more when it is warm, and then it gets cold. It snows less when it is cold, and then it gets warm. It is a natural cycle.
Look at the actual data. Natural variation is the driver of climate changes.
CO2 is a trace gas, an important trace gas: it makes green things grow better with less water. It did not cause the climate cycles of the past ten thousand years, and it did not cause this modern cycle, which is following the same profile as all of the past warming cycles in the past ten thousand years. It is a natural cycle. We are warm now because we are supposed to be warm now.
Steve Pruett, see what Jennifer Marohasy is doing.
http://jennifermarohasy.com/2014/07/the-need-for-a-new-paradigm-including-for-rainfall-forecasting/
Pingback: Vilka är de mest kontroversiella frågorna inom klimatvetenskapen? - Stockholmsinitiativet - Klimatupplysningen
http://wattsupwiththat.com/2015/04/30/how-plasma-connects-the-sun-to-the-climate/
Some very good info. on possible solar/climate connections.
http://www.climate4you.com/ClimateAndClouds.htm
Good data on clouds and climate.
◾Whether the warming since 1950 has been dominated by human causes
◾How much the planet will warm in the 21st century
I would say 10% of the warming is due to human causes, via waste heat and the urban heat island effect.
The planet between now and 2050 will likely be cooler than it is now, due to prolonged minimum solar conditions and the associated primary and secondary effects.
The PDO ,AMO and ENSO should also favor cooling up to 2050.
In other words, the factors that drive the climate system have changed from being mostly in a warm phase from 1950-2005 to a cold phase post-2005. This predominantly cold phase I expect, on balance, will last to 2050. Beyond 2050 is too far off.
◾Causes of the 1900-1940 warming; the cooling from 1940-1976; and the recent hiatus in warming since 1998. How are these explained in context of AGW being the dominant influence since 1950?
My answer below.
http://hockeyschtick.blogspot.com/2014/09/new-paper-finds-natural-ocean.html
thx for this link
Glad to help.
◾Solar impacts on climate (including indirect effects). What are the magnitudes and nature of the range of physical mechanisms?
◾Nature and mechanisms of multi-decadal and century scale natural internal variability. How do these modes of internal variability interact with external forcing, and to what extent are these modes separable from externally forced climate change?
My response: they are all interconnected, but unless solar activity is in an extreme phase (active or inactive), much of the solar connection and internal variability will be obscured by noise in the climate system and by conflicting solar signals acting in opposition to one another. This is what occurs while the sun is in an 11-year rhythmic sunspot cycle. It will take a Grand Maximum or Grand Minimum to expose the solar/climate connections in a more straightforward manner.
Below I list my low average solar parameters criteria which I think will result in secondary effects being exerted upon the climatic system.
My biggest hurdle, I think, is not whether these low average solar parameters would exert an influence upon the climate but rather whether they will be reached and, if reached, for how long a period of time.
I think each of the items I list, both primary and secondary effects due to solar variability, is, if reached, more than enough to bring global temperatures down by at least .5 C in the coming years.
Even a .15% decrease in solar irradiance alone is going to bring the average global temperature down by .2 C or so, all other things being equal. That is 40% of the .5 C drop I think can be attained, never mind the contribution from everything else mentioned.
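The “.15% decrease → about .2 C” figure can be roughly reproduced with back-of-envelope forcing arithmetic. The inputs below are the editor's assumptions, not the commenter's: TSI of about 1361 W/m2, planetary albedo 0.3, and a climate sensitivity parameter of 0.5 K per W/m2 (a commonly quoted mid-range value; the result scales linearly with whatever sensitivity one assumes):

```python
# Back-of-envelope check of "0.15% TSI decrease -> ~0.2 C". All constants
# are editor-supplied assumptions for illustration.
TSI = 1361.0     # W/m^2, total solar irradiance
ALBEDO = 0.30    # planetary albedo
LAMBDA = 0.5     # K per W/m^2, assumed climate sensitivity parameter

# A 0.15% dimming, spread over the sphere (divide by 4) and reduced by albedo.
delta_forcing = 0.0015 * (TSI / 4.0) * (1.0 - ALBEDO)
delta_T = LAMBDA * delta_forcing

print(round(delta_forcing, 3))  # 0.357 W/m^2
print(round(delta_T, 2))        # 0.18 C, roughly the 0.2 C in the comment
```

With a smaller no-feedback sensitivity (~0.3 K per W/m2) the same dimming gives only ~0.1 C, so the comment's figure implicitly assumes at least mid-range sensitivity.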
What I am going to do is look into research on sun-like stars to try to get some sort of gauge as to how much variation might be inherent in the sun’s total solar irradiance. That said, we know EUV light varies by much greater amounts, and within the total solar irradiance spectrum some bands are in anti-phase, which masks the total variability within the spectrum. It makes the total irradiance variation seem smaller than it is.
I also think the .1% variation that is so widely accepted for TSI is on flimsy ground: measurements of it are not consistent, and the history of measuring it with instrumentation is just too short to support such conclusions, not to mention that some sun-like stars (which I am going to look into further) show variability much greater than .1%.
I think Milankovitch cycles, the initial or mean state of the climate, and the state of Earth’s magnetic field set the background for long-run climate change and determine how effective a given solar variation will be in combination with those items. Nevertheless, I think solar variability in itself will always be able to exert some kind of influence on the climate, provided (and that is my hurdle) the solar variability is great enough in magnitude and duration, sometimes acting in concert with the factors setting the long-term climatic trend and at other times acting in opposition.
THE CRITERIA
Solar Flux avg. sub 90
Solar Wind avg. sub 350 km/sec
AP index avg. sub 5.0
Cosmic ray counts north of 6500 counts per minute
Total Solar Irradiance off .15% or more
EUV light average 0-105 nm sub 100 units (or off 100% or more) and longer
UV light emissions around 300 nm off by several percent.
IMF around 4.0 nt or lower.
The above solar parameter averages follow several years of generally subdued solar activity, which commenced in year 2005.
If these average solar parameters are the rule going forward for the remainder of this decade, expect global average temperatures to fall by .5 C, with the largest declines occurring over the high-latitude land areas of the N.H.
The decline in temperatures should begin to take place within six months after the ending of the maximum of solar cycle 24.
Secondary Effects With Prolonged Minimum Solar Activity. A Brief Overview.
A Greater Meridional Atmospheric Circulation- due to less UV Light Lower Ozone in Lower Stratosphere.
Increase In Low Clouds- due to an increase in Galactic Cosmic Rays.
Greater Snow-Ice Cover- associated with a Meridional Atmospheric circulation/an Increase In Clouds.
Greater Snow-Ice Cover probably resulting over time to a more Zonal Atmospheric Circulation. This Circulation increasing the Aridity over the Ice Sheets eventually. Dust probably increasing into the atmosphere over time.
Increase in Volcanic Activity – Since 1600 AD, data shows 85 % approximately of all major Volcanic eruptions have been associated with Prolonged Solar Minimum Conditions. Data from the Space and Science Center headed by Dr. Casey.
Volcanic Activity -acting as a cooling agent for the climate,(SO2) and enhancing Aerosols possibly aiding in greater Cloud formation.
Decrease In Ocean Heat Content/Sea Surface Temperature -due to a decline in Visible Light and Near UV light.
This in turn should diminish the Greenhouse Gas Effect over time, while promoting a slow drying out of the atmosphere over time. This may be part of the reason why Aridity is very common with glacial periods.
In addition sea surface temperature distribution changes should come about ,which probably results in different oceanic current patterns.
◾Deep ocean heat content variations and mechanisms of vertical heat transfer between the surface and deep ocean.
The excerpt below shows not only the immense energy needed to change ocean heat content but also how little impact an apparently large increase in ocean heat content has on global temperature.
That, together with the slowness of the process, does not square with how rapidly the climate can change character. I think sea surface temperature anomalies are a bigger player when it comes to the climate because they can change quickly enough to show cause and effect over a relatively short period of time.
In addition, I think it is surface changes in ocean temperature that translate to the deep ocean, more so than deep-ocean temperature changes translating to the ocean surface.
Reverting the ocean heat content change back to temperature change is highly revealing. It shows how little change has really been measured.
The increase in ocean heat content over the 94 ARGO months September 2005 to June 2013 was 10 × 10^22 J = 100 ZJ = 100,000 XJ. Sounds big and alarming.
There are 0.65 × 10^18 m^3 in the upper 2,000 m of the oceans. Each cubic meter of ocean water weighs 1.033 tonnes. To raise 1 tonne by 1 kelvin requires 4 MJ of heat energy. Thus, to raise 0.65 × 10^18 m^3 × 1.033 tonnes per cubic meter = 0.67145 × 10^18 tonnes of upper-ocean water by 4 MJ per tonne requires 2,685,800 XJ per kelvin. The 100,000 XJ of ocean heat content increase in the past 94 months therefore represents a total ocean warming of 0.037233 K, equivalent to less than 0.0475 K per decade.
Accordingly, even on the quite extreme NODC ocean heat content record (Figure 5), the change in mean ocean temperature in the upper 2,000 m in recent decades has been less than 0.05 K per decade – precisely the change in air temperature Nature will concede has occurred in the past decade and a half. Therefore, there is no need to look any deeper than the upper or “mixed” 2,000 m of the ocean. The abyssal layer – which has scarcely been measured – is in any event mostly very cold – often as little as 4° Celsius.
The ARGO bathythermographs show much less warming than NOAA would have us believe.
About 65% of the rise in temperatures out of the little ice age was before the CO2 forcing kicked in. The initial period coming out of the LIA was more robust and just as rapid as the late 20th century rise.
http://3.bp.blogspot.com/-tXWZ-sLc4GE/Uxv4wzJn9ZI/AAAAAAAAQAs/4wETMsUaw4o/s1600/amoss.GIF
This might give rise to another point – whether there is ever “global” warming or whether global heat redistribution is dominant. Text from BBC:
Scientists have found that abrupt and large temperature changes first occurred in Greenland, with the effect delayed about 200 years in the Antarctic. In the 1990s, scientists took ice cores from Greenland that revealed very abrupt and large swings in temperature approximately 20,000 to 60,000 years ago. But it wasn’t clear how this influenced global climate change. The 3,405 metre-long ice core, taken from the centre of West Antarctica, is the longest high resolution ice core. Researchers documented 18 abrupt climate events.
“This record has annual resolution, meaning we can see information about every year going back 30,000 years, and close to that resolution all the way back to 68,000 years ago,” explains Eric Steig, professor of Earth and Space Sciences at the University of Washington, who co-wrote the paper. “Our new results show unambiguously that the Antarctic changes happen after the rapid temperature changes in Greenland. It is a major advance to know that the Earth behaves in this particular way.”
The new core also supports the “bipolar seesaw” effect between poles, meaning that when it’s warm in Greenland, Antarctica is cooling, and vice versa. “The fact that temperature changes are opposite at the two poles suggests that there is a redistribution of heat going on between the hemispheres,” said Christo Buizert, lead author on the study and a post-doctoral researcher at Oregon State University. “We still don’t know what caused these past shifts, but understanding their timing gives us important clues about the underlying mechanisms.”
During large changes in climate in the northern hemisphere, the atmosphere and ocean transfer the heat around the globe. The 200-year difference in the timing directly points to the ocean, explained Prof Steig. The atmospheric circulation of heat would have shown up in the Antarctic record in a matter of years or decades.
http://www.bbc.com/news/science-environment-32599228
Faustino
Sorry, second link should be:
http://www.nature.com/articles/nature14401.epdf?referrer_access_token=meIAXfVMVyonKQlFxn065dRgN0jAjWel9jnR3ZoTv0M2TtaRJ_7ImIaEOXy02c7lQE5sixd-Mm8LIpRvyW9g6dKxPMws0wQ1RR-mk1r0NAXn5WjgzzGADqBeR_VWPBqEYCyKWMh818mEUMNMNW7UX7BguGzPAKWrYUQyjNkvnPnHYwaD82dzQWHw1C-4tsoI&tracking_referrer=www.bbc.com
Excellent article and commentary. However, I would like to criticize commentators’ frequent use of the term “climate change” with the same meaning the rest of the world gives it: a trend in our environmental characteristics that is then linked to CO2 emissions.
The term has no scientific value except when it is applied to the fact that our climate and our weather are inherently variable.
This is a scientific platform, which should dispense with the use of such an unscientific term.
One minor problem with the GHE.
After trapping, storing, and accumulating heat from the Sun for about four billion years, with CO2 concentrations of up to 95%, the Earth seems to have actually cooled.
Therefore, TOA energy balance negative.
Average surface energy balance negative.
TCS negative to chaotically irrelevant.
Antarctica had abundant flora and fauna in the past. So did the Sahara region. How do we ensure that Man influences the climate in such a way as to simultaneously restore the Antarctic and the Sahara to their previous biodiversity?
Does anybody really believe that weather (and its average, climate) can be tailored by altering the amount of CO2 in the atmosphere?
It seems to have had no discernible effect on the average cooling over the last four billion years. I am unaware of cosmic changes to the properties of matter over the last couple of hundred years.
A long while back, a local newspaper reporter published an article reporting that sea level was rising 2 mm/year. Horrors! We would be like Venice in the future if we failed to address CO2 emissions.
Fortunately, a local scientist commented that local land subsidence was 1 mm/year. Plate tectonics.
Take away: For sea level, account for subsidence as well as thermal expansion.
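The take-away amounts to a simple correction: a tide gauge measures sea level relative to the land it sits on, so local land motion must be subtracted before attributing the rise to the ocean. A minimal sketch, using the anecdote's round figures (assumed, not measured values):

```python
# Tide gauges record *relative* sea-level change, i.e. ocean rise plus
# local land sinking. Numbers below are the anecdote's round figures.

relative_rise_mm_per_yr = 2.0   # what the gauge (and the reporter) saw
subsidence_mm_per_yr = 1.0      # local land sinking from tectonics

# True (eustatic) rise is what remains after removing land motion.
eustatic_rise = relative_rise_mm_per_yr - subsidence_mm_per_yr
print(f"true sea-level rise: {eustatic_rise:.1f} mm/year")  # 1.0 mm/year
```

In this example, half of the reported "sea-level rise" was actually the land going down.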
In addition to the recent data sets that you identify, the geologic record provides many data sets that cast extreme doubt on the validity of AGW. Among the most significant are the ice cores from Antarctica and Greenland, because they provide synchronized records of temperature and atmospheric CO2.
Among the most significant items from the ice core data: over 80% of the last 800 ka were glacial periods, substantially colder than anything encountered during human civilization, interspersed with interglacial periods, most of which reached warmer levels than at present; yet on every time scale, CO2 changes lag temperature changes by centuries to millennia. Also, during the current interglacial (which is unusually long and has unusually stable temperatures, leading to an illusion of long-term climate stability), the peaks of the embedded warm intervals have been declining monotonically over the last 5 ka, suggesting the long-term variation (mechanism unknown) is trending toward the next glacial period.
Until the mechanism causing all of this climate change without the involvement of CO2 radiative forcing can be identified, there is no reason to consider the warming during the last 50 years to have been anthropogenic.
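The claimed lead-lag relationship (CO2 trailing temperature by centuries) is typically assessed with a lagged cross-correlation: shift one series against the other and find the offset that maximizes the correlation. A toy sketch with synthetic series and an assumed 800-year lag, not actual ice-core data:

```python
import numpy as np

# Synthetic "ice core": a ~41,000-year temperature cycle sampled every
# 100 years, plus a CO2 series that follows temperature by 8 steps
# (800 years). All parameters here are illustrative assumptions.
rng = np.random.default_rng(1)
years = np.arange(0, 100_000, 100)
temp = np.sin(2 * np.pi * years / 41_000) + rng.normal(0, 0.1, years.size)

true_lag_steps = 8                                # 8 x 100 = 800 years
co2 = np.roll(temp, true_lag_steps) + rng.normal(0, 0.1, years.size)

# Scan candidate lags; the best lag maximizes the correlation.
lags = np.arange(-20, 21)
corrs = [np.corrcoef(co2, np.roll(temp, k))[0, 1] for k in lags]
best_lag_years = int(lags[int(np.argmax(corrs))]) * 100

print(f"estimated lag: CO2 trails temperature by {best_lag_years} years")
```

The scan recovers the built-in 800-year offset. On real cores, the same idea is complicated by uneven sampling and by the gas-age/ice-age dating uncertainty, which is why published lag estimates carry wide error bars.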
MFGEO says: “Among the most significant items from the ice core data are that over 80% of the last 800 ka were glacial periods, substantially colder than anything encountered during human civilization, interspersed with interglacial periods, most of which reached warmer levels than at present.”
I can well understand that temperatures during glacial periods were colder than humans have ever experienced, and that in terms of the current cycle things should get colder rather than warmer. However, yesterday’s report (http://www.bbc.com/news/science-environment-32625429) on present CO2 levels of 400 ppmv, coupled with the graph http://en.wikipedia.org/wiki/File:Carbon_Dioxide_400kyr.png of CO2 levels back to 400 kyr, showing maximum levels of about 300 ppmv during each of the last three glacial cycles, is not consistent with your claim that the interspersed interglacial periods reached warmer levels than at present. The benthic foram data (http://en.wikipedia.org/wiki/File:MilankovitchCyclesOrbitandCores.png) go back 800 kyr, but the maximum temperatures seem even lower than those at 400 kyr. Could you point me towards a graph justifying your claim?
Also would you have a comment on the graph http://www.ferdinand-engelbeen.be/klimaat/eocene.html based on data from the Vostok ice core.
My regrets for any misunderstanding – it is not comfortable sitting on the fence!
Pingback: The most controversial points in climate science | meteoLCD Weblog
“Global data sets of surface temperature and atmospheric temperature (satellite) that show a hiatus in warming for 16+ years”
With the April numbers, RSS has no warming for 18 years and 5 months. UAH version 6.0 has no warming for 18 years and 4 months.
http://wattsupwiththat.com/2015/05/04/el-nio-has-not-yet-paused-the-pause/
http://wattsupwiththat.com/2015/04/29/new-uah-lower-troposphere-temperature-data-show-no-global-warming-for-more-than-18-years/
However, GISS and HadCRUT4.3 are in record-breaking territory and show no pause.
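Both the "no warming for 18 years" claims and the "record-breaking" counterclaims rest on the same operation: an ordinary least-squares trend fitted to a monthly anomaly series. A minimal sketch with a synthetic flat series, standing in for the RSS/UAH/GISS/HadCRUT files a real analysis would download:

```python
import numpy as np

# Fit an OLS trend to a monthly temperature-anomaly series. The data here
# are synthetic (a flat series plus noise), purely to show the mechanics.
rng = np.random.default_rng(0)
months = np.arange(18 * 12 + 5)                     # 18 years and 5 months
anomalies = 0.2 + rng.normal(0, 0.1, months.size)   # flat series + noise

slope, intercept = np.polyfit(months, anomalies, 1) # K per month, intercept
trend_per_decade = slope * 120                      # scale to K per decade

print(f"trend: {trend_per_decade:+.3f} K/decade")
```

For a trendless series like this one, the fitted slope is statistically indistinguishable from zero, which is the operational meaning of "no warming for 18 years and 5 months" in the comment above. Whether a trend is "zero" also depends on start date and dataset, which is exactly why RSS/UAH and GISS/HadCRUT can disagree.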
Thanks for an excellent, informative posting, Ms. Curry.
CERES_EBAF-TOA_Ed2.8 is a thoroughly under-utilized resource. Currently we have 179 months of data between 3/1/2000 and 1/31/2015, almost 15 years.
The dataset contains quite a few variables, in fact everything needed for top-of-atmosphere calculations, and it is available at a 1° × 1° resolution, more than enough to find regional effects.
On the other hand, error bars are sorely lacking and there is no closure between incoming shortwave and outgoing longwave fluxes; that is, its precision may be high, but its accuracy is still poor.
However, one can calculate trends for the incoming and outgoing fluxes separately, and find that the imbalance is not increasing, in spite of stable surface temperatures and increasing atmospheric carbon dioxide content, in direct contradiction to computational climate models.
Also, computational models fail to replicate a simple symmetry on the largest possible regional scale, that between hemispheres. The observed annual average inter-hemispheric difference of absorbed shortwave radiation is practically zero (well within measurement error), the difference being less than one part in two thousand, and that for two hemispheres with vastly different clear-sky albedos.
It is truly an enigma and a much more important one than any of the so called “practical” questions concerning the climate system.
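The inter-hemispheric comparison described above reduces to an area-weighted mean of absorbed shortwave flux over each hemisphere of the 1° × 1° grid, with weights proportional to cos(latitude). A sketch with a synthetic uniform field; real work would read the CERES EBAF NetCDF files instead:

```python
import numpy as np

# Area-weighted hemispheric means on a 1-degree lat/lon grid. The field
# here is a toy uniform 240 W/m^2; real input would be CERES absorbed
# shortwave (incoming minus reflected) from the EBAF product.
lats = np.arange(-89.5, 90.0, 1.0)               # 180 latitude cell centers
lons = np.arange(0.5, 360.0, 1.0)                # 360 longitude cell centers
field = np.full((lats.size, lons.size), 240.0)   # W/m^2, toy uniform field

weights = np.cos(np.deg2rad(lats))               # grid-cell area ~ cos(lat)

def hemisphere_mean(field, lats, weights, north=True):
    """Area-weighted mean of a lat x lon field over one hemisphere."""
    mask = lats > 0 if north else lats < 0
    zonal = field[mask].mean(axis=1)             # average over longitude
    return np.sum(zonal * weights[mask]) / np.sum(weights[mask])

nh = hemisphere_mean(field, lats, weights, north=True)
sh = hemisphere_mean(field, lats, weights, north=False)
print(f"NH-SH difference: {nh - sh:.3f} W/m^2")  # 0.000 for a uniform field
```

With real CERES data, the commenter's point is that this NH minus SH difference comes out near zero despite the hemispheres' very different clear-sky albedos; the cos(lat) weighting matters, since an unweighted mean over a lat/lon grid badly over-counts the poles.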
Pingback: The orthodoxy offers a defence — of sorts « DON AITKIN
Pingback: Weekly Climate and Energy News Roundup #179 | Watts Up With That?
Pingback: Touring the frontiers of climate science, the exciting parts of science | The Fabius Maximus website